Berkeley CSUA MOTD:Entry 17018

1999/12/7-9 [Computer/HW/CPU] UID:17018 Activity:very high
12/6    Is it worth getting a portable MP3 player?
        \_ I'm looking to sell my 32mb Rio PMP300.  I don't know what
           it's worth, so make an offer. -jkwan
           \_ I'll Buy That For A Dollar!
        \_ I got the RCA Lyra as a gift. It's pretty simple to use. Comes
           with a 32MB card (64MB cards are available). A cable connects to
           both your parallel & keyboard port (not OR, it's AND), which I
           found kinda odd. It uses RealJukebox to transfer the music, which
           is quite fast.
           \_ It connects to the keyboard port only for the higher-capacity
              power line available there.  The communications all go over
              the parallel port.
        \_ If you like the music that is published only in mp3 form, then
           it seems like it's worth it.  It would be a bitch to have to
           rip and encode from CDs to get your collection.
                \_ I guess it's not worth it if most of your music is in
                        CD format.  CD-Rs are probably the better choice
                        if most of your collection is already on CD.
                \_ Huh?  CD ripping is trivial.  It's read time plus setup
                   time, which is no different from CD-R setup time.  Where
                   do you people get this stuff from?
                \_ So what are people using nowadays (say on the Windows
                   platform) to rip and convert quickly?  Is there any
                   share/freeware program that does this to 128kbps
                   nicely/simply?
                        \_ AudioCatalyst is good for Windows
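                A minimal sketch of the encode step for the question above
                (Python shelling out to the LAME command-line encoder at
                128kbps; assumes "lame" is installed and on the PATH, and
                that the CD has already been ripped to .wav files with
                whatever ripper you prefer; filenames are hypothetical):

# Batch-encode every .wav in the current directory to a 128kbps MP3 by
# calling the LAME encoder.
import glob
import subprocess

for wav in glob.glob("*.wav"):
    mp3 = wav[:-len(".wav")] + ".mp3"
    subprocess.run(["lame", "-b", "128", wav, mp3], check=True)
    print("encoded", wav, "->", mp3)
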
            \_ It is perfectly reasonable and legal to rip your OWN CDs for
               your own use in things like a portable MP3 player.  It's
               called 'fair use'.

12/7(?) Can anyone point me to a good CISC vs. RISC article / paper? Thanx.
           \_ No, most designers avoid CISC if at all possible.  Intel chips
              allow their legacy CISC ISA to run on their modern chips, but
              that's just a hack (CISC instructions are decoded into
              micro-RISC ops; a toy decode sketch follows this sub-thread).
              Basically, many things will not work unless what you execute
              are RISC-like instructions (pipelining, Tomasulo dynamic
              execution, etc...).  Intel chips suffer in that all
              instructions must pass through one extra level of decoding,
              which can affect branch prediction recovery, and because
              variable-length instructions can cross cache block boundaries,
              miss penalties and miss rates go up.
              \_ Which is why the x86 line was predicted to die 10+ years ago
                 and oh wait, no, it's still here and still in 90+% of
                 computers from workstations on down and moving into the low
                 end server market.
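           A rough sketch of the "CISC instructions are decoded into
           micro-RISC ops" point above, in Python.  The mnemonics and the
           three-field micro-op format are invented for illustration; this
           is not Intel's actual decode scheme, just the general shape:

# Toy decoder: split a CISC-style instruction that has a memory operand
# into RISC-like micro-ops (load / ALU op / store).  Register-to-register
# instructions map to a single micro-op.
def decode_to_uops(instr):
    op, dst, src = instr.replace(",", "").split()
    if dst.startswith("["):                  # memory destination: read-modify-write
        addr = dst.strip("[]")
        return [("LOAD", "tmp", addr),       # tmp       <- mem[addr]
                (op.upper(), "tmp", src),    # tmp       <- tmp OP src
                ("STORE", addr, "tmp")]      # mem[addr] <- tmp
    return [(op.upper(), dst, src)]          # plain ALU micro-op

print(decode_to_uops("add [count], eax"))    # expands to 3 micro-ops
print(decode_to_uops("add ebx, eax"))        # stays 1 micro-op
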
        \_ Wasn't the consensus that the argument is meaningless?
           Most chips today have the best qualities of both.
                \_ This is true.  There are no more true CISC or RISC chips
                   being produced today.
                    \_ Most chips nowadays have RISC properties.  People
                       today seem to be confused about what the terms
                       RISC and CISC really mean, so depending on what your
                       definition is, most chips are RISC.
                        \_ But with everyone adding extensions like UltraSparc
                           VIS & Intel MMX to the instruction sets, most chips
                           are CISC too.
                        \_ I'd say, "RISC-like".  True RISC was only an ideal
                           and a theory anyway.
        \_ Patterson & Hennessy
           \_ Something like "A Case for RISC" from Computer Architecture
              News by David Patterson (1980).  There's also a very famous
              dissertation from a Berkeley student.  If you give me some
              time I'll look it up.
              \_ - D. Patterson & D. Ditzel, "The Case for the Reduced
                   Instruction Set Computer," Computer Architecture
                   News 8,6 (Oct 15, 1980)
                 - Manolis Katevenis, "Reduced Instruction Set
                   Computer Architecture for VLSI," PhD Dissertation,
                   EECS, UC Berkeley, 1982.
                   --jeff
        \_ "Why my x86 r00lz y0r powermac!!!11", Dudester69!11
        \_ Here's your paper:
           "Real World Consumers: by Consumer Man#232
           A pc is $700.  A mac is $1500.  My 10 yr old kid can't tell the
           difference, and the 15 year old tells me linux won't run on a
           mac, oh and he wants a laptop, like in that movie he saw.
           Gee, I wonder what I'm buying for christmas."
           \_ PC R))LEZ!! MAC DR))LEZ!!!1
                   \_ Is what they said still more or less valid, now that
                      it's been two decades and the industry has been moving
                      fast?
                      \_ kubi seemed to have recommended the reading to us.
                         he didn't recommend anything more recent but probably
                         because more recent work in architecture isn't
                         focused on ISA as much as it was in the 80's
                         --jeff
                      \_ Even x86 implementations have some sort of RISC
                         core that is given a CISC interface.  RISC introduced
                         a number of principles that are still considered
                         valid such as uniform instruction length, pipelining,
                         on and on.
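                         A toy illustration (Python; the instruction lengths
                         below are made up, not real x86 encodings) of why
                         uniform instruction length helps the front end:
                         with fixed 4-byte aligned instructions the next PC
                         is always known and nothing straddles a 32-byte
                         cache line, while variable-length instructions need
                         a length decode before the next PC is known and can
                         cross line boundaries:

# Count how many instructions in a stream straddle a cache-line boundary.
import random

LINE = 32  # bytes per cache line

def straddles(pc, length, line=LINE):
    # True if an instruction starting at byte address pc crosses a line boundary.
    return (pc // line) != ((pc + length - 1) // line)

def count_straddles(lengths):
    pc, crossings = 0, 0
    for length in lengths:
        crossings += straddles(pc, length)
        pc += length    # with variable lengths, known only after (length) decode
    return crossings

random.seed(0)
fixed    = [4] * 1000                                             # RISC-style
variable = [random.choice([1, 2, 3, 5, 7]) for _ in range(1000)]  # CISC-style

print("fixed-length straddles:   ", count_straddles(fixed))      # always 0
print("variable-length straddles:", count_straddles(variable))   # > 0
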
        \_ There's an article in a past issue of "Reason" magazine about
            history's influence on the choice between technologies.  It didn't talk
           about RISC vs. CISC, but it talked about PC/DOS/Windows vs. Mac,
           QWERTY vs. Dvorak, Beta vs. VHS, etc.  I don't have the issue with
           me right now though.  -- yuen
           \_ Do you mean: http://www.reason.com/9606/Fe.QWERTY.html (and
               follow-up letters at http://www.reason.com/9611/ltr.sl.html )
                                        [url fixed]
                \_ _excellent_ article.
              \_ Wow!  Yeah, that's the article I read.  I didn't know the
                        \- helpful.    /- not helpful.
                 magazine has a web site.  -- yuen
                 \_ Your tenses do not match.  ^has^had
                     \_ Actually that is just fine.  He read the article in
                        the past.  He didn't know the web site existed in the
                        present.  Those are two different statements.
                        If you are gonna be a grammar cop, get it right.
                    \_ Your tenseness is showing.  Loosen up.
                    \_ Your value-add is null.
        \_ Found this article on the subject:
           http://www.ars-technica.com/cpu/4q99/risc-cisc/rvc-1.html