11/29 From the CSUA minutes:
- Next Gen Console
-- If we have $1800 in our accounts, should we buy a console:
4 votes passes.
-- Console voting: 2 votes each, neither passes
* 360 = 600, more games
* PS3 = 650, not as many games
Does this mean the CSUA already has a Wii?
Since when is "more expensive, fewer games" an argument for something?
I guess if they're gonna install Linux and try some Cell development,
THAT would be cool, but I don't think that's what they want it for.
\_ Netrek is free... but you need to have skills
\_ I think the decision should be based on which you can hack and/or
boot alternate OS's on. I think there is a clear answer here...
\- YMWTS: KYELICK et al paper "The potential of the cell processor
for scientific computing" on the POWER of the CELL. Interesting
and quick read. Note: KYELICK now the Director of NERSC.
\_ Yeah, but Roadrunner (A combo Opteron/Cell cluster proposed
at Los Alamos) is still a dumb idea.
\_ Why do you say that? I'd be more concerned about using
/panfs as the storage system. Panasas might be ok by the
time it is deployed. A lot of impressive people there,
but mixed experiences in practice.
\_ The Cell already has a perfectly good general processor
attached to it (a dual-core Power 5). What's the
Opteron doing there? The last thing the Cell
development tool kit needs is another totally different
processor to work with. Yeah! Another compiler! For
heaven's sake, they don't even have the same endianness!
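As a minimal illustration of the byte-order point (generic C, nothing Cell- or
Opteron-specific is assumed): the Opteron is little-endian and the Cell's
PowerPC cores are big-endian, so any multi-byte value crossing that boundary
in a shared binary format has to be swapped consistently on one side or the
other.

    #include <stdio.h>
    #include <stdint.h>

    /* Report this host's byte order by inspecting how a 32-bit
       value is laid out in memory. */
    static const char *byte_order(void)
    {
        uint32_t probe = 0x01020304;
        const unsigned char *p = (const unsigned char *)&probe;
        return (p[0] == 0x01) ? "big-endian (e.g. the Cell's PowerPC side)"
                              : "little-endian (e.g. Opteron)";
    }

    /* Reverse the bytes of a 32-bit word -- the kind of conversion
       (htonl/ntohl-style) every value shared between the two halves
       of such a machine would need. */
    static uint32_t bswap32(uint32_t v)
    {
        return (v >> 24) | ((v >> 8) & 0x0000ff00U)
             | ((v << 8) & 0x00ff0000U) | (v << 24);
    }

    int main(void)
    {
        printf("this host is %s\n", byte_order());
        printf("0x%08x byte-swapped is 0x%08x\n",
               0x01020304U, bswap32(0x01020304U));
        return 0;
    }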
\- ibm and amd are working together on a few things
like socket compat between POWER-tng and Opteron,
and Torrenza(sp?)/HTX rather than PCIe. the HPC space
is very different from the rest of the world ...
on a $100m computer you have a legion of programmers
to work on tweaking code and compilers, because you
are no longer dominated by "expensive programmer
time costs".
\_ While everything you say is true, I can't see how
that excuses creating a totally wacky, needlessly
difficult architecture. Even Los Alamos
doesn't have infinite resources; programmer time
still costs money, money Los Alamos doesn't have.
Not to mention, they're buying the whole machine,
whole hog. No small test prototype. On a totally
untested architecture.
\- no offense intended here, but are you just
reading articles on the net or do you have
some experience with how large HPC procurements
are done? i don't have any specific knowledge
of Los Alamos/Roadrunner but two things don't
ring true: 1. los alamos being on the hook for
all the dev and tuning work 2. ibm just being
responsible for dropping the machine off at
the loading dock and being done ... the "whole
machine whole hog" part. usually there are
lots of partial milestones involved. although
the somewhat dirty, not-that-secret part of
this is that missing those milestones never
carries major consequences. [well maybe once,
but not with one of the main *hpc* vendors.
i can't mention which well-known vendor it was].
\_ Sorry, I didn't mean to imply what you've
read into the 'whole hog' statement. I
guess that was really poor word choice. I
just meant that Los Alamos didn't buy a
small prototype cluster to see how well this
thing will work in production, as is
normally done. I'm aware IBM has milestones
and will support the cluster.