Berkeley CSUA MOTD:Entry 15031

1998/11/25-27 [Computer/SW/OS/FreeBSD] UID:15031 Activity:very high
11/25   I'm setting up some cheap PC boxes (PII's and K6-2's) to do some
        number crunching for my PhD project. Since my calculations are just
        pure number crunching and no graphics, I'm trying to decide which
        OS to use. I can't afford (nor do I want) WinNT. Linux? FreeBSD?
        OpenBSD? Others? -- spegg
        \_ What's wrong with using an abacus?
           \_ Abacus? Those things can't do Jack.  Use a slide rule.
        \_ Better yet is if you have a multiproc system which Linux has
           supported for a while and which FreeBSD 3.0 (which just recently
           came out officially) now supports.  Aside from that I've heard
           very few complaints about either OS.  You'll find more Linux
           binaries around but since you're probably writing your own
           programs that doesn't really matter.  FreeBSD is a bit slimmer
           than Linux (~1MB statically compiled kernel vs ~4MB statically
           linked + any extra dynamically loaded modules).

           \_ you know, I wonder what accounts for this huge size difference.
           \_ get real. Solaris x86 is best for multi-cpu usage.
              \_ oh yeah, i forgot.  but when it comes to single processor
                 performance, i don't think running any one of those OSs
                 will give you a huge performance increase.  They all
                 pretty much execute x86 ELF binaries which are optimized
                 in the same way.  The only difference you might see is
                 how well each OS handles dynamic object files and how well
                 it manages memory for calculations that require a ton of mem.
                 \_ In which case, if you're dealing with virtual memory
                    that far exceeds physical memory, I hear solaris is also
                    the winner.
                         \_ if you're dealing with virtual memory which
                           far exceeds physical memory, you've already lost.
                           \_ not really.  It happens all the time and is
                              possible because of spatial/temporal locality.
                              Say you're running Gimp with 4 5MB tiff files
                               open at the same time on a 16MB PC.  Chances
                               are that you're not going to be dealing with
                              all 20MB at the same time even though they
                              are all open. Hence +20MB virtual vs 16MB
                              physical.
                                \_ First of all, the guy is talking about
                                   number crunching, not image processing.
                                   It is likely that he's going to be
                                   addressing all of the memory he's
                                   crunching with.  And second, no one said
                                   it was impossible--it just is painfully
                                   slow.  disk is like 6 orders of magnitude
                                   slower than RAM.
                                   \_ uhh. "number crunching" applications
                                      usually exhibit greater locality
                                      than almost any other app, if
                                      optimized properly. -nick
                                        \_ which will help not at all if
                                           it's using more than physical RAM.
                                           \_ What part of "locality" don't
                                              you understand, twink?  Nick knows
                                              what he's talking about.
                    \_ Wow, someone actually used the term virtual memory
                       properly.
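
        A sketch in C of the locality point made above: loop blocking
        (tiling) is the sort of optimization nick is alluding to.  The
        matrix size N and tile size B below are made-up illustrative
        values, not figures from this thread.  The idea is that with B
        chosen so a few BxB tiles fit in cache (or in physical RAM), the
        working set at any instant is about 3*B*B doubles even though the
        full arrays total 3*N*N doubles, so having more data than RAM does
        not automatically mean thrashing.

            /* Blocked (tiled) matrix multiply: a classic locality
             * optimization.  N and B are hypothetical values, for
             * illustration only; here the three arrays total 24MB. */
            #include <stdio.h>

            #define N 1024   /* full problem size */
            #define B 64     /* tile size chosen to fit in cache */

            static double A[N][N], Bm[N][N], C[N][N];

            int main(void)
            {
                int ii, jj, kk, i, j, k;

                for (ii = 0; ii < N; ii += B)
                for (jj = 0; jj < N; jj += B)
                for (kk = 0; kk < N; kk += B)
                    /* work one tile at a time: small, reusable working set */
                    for (i = ii; i < ii + B; i++)
                    for (j = jj; j < jj + B; j++) {
                        double sum = C[i][j];
                        for (k = kk; k < kk + B; k++)
                            sum += A[i][k] * Bm[k][j];
                        C[i][j] = sum;
                    }

                printf("C[0][0] = %g\n", C[0][0]);
                return 0;
            }

        Relative to the naive triple loop, traffic to data outside the
        current tiles drops by roughly a factor of B, which is why a
        properly blocked cruncher tolerates working sets near or above
        physical RAM far better than an unblocked one.
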
        \_ Since you're doing number crunching, you'll probably be best off
           with Intel.  I encourage you to benchmark a K6-2, but P-II's are
           superior in FP ability, though perhaps not the most cost-effective.
           Your next big worry is the compiler to
           use.  I guess you don't want to pay for one?  Then you're stuck
           with either gcc/g77/g++ (either the GNU flavor or the egcs flavor:
           egcs is likely to be faster).  gcc/egcs can be faster or slower
           depending on how it is compiled!  You may want to spend a day or
           so making sure you have a good compiler + flags.  The OS to use is
           your SMALLEST worry if you're doing number crunching, no graphics.
           Linux/FreeBSD/OpenBSD all use gcc/egcs anyway.  I personally
           use Linux, it works fine for crunching.  I've nothing bad to say
           about FreeBSD either:  both should install easily and be easy
           to manage.  RedHat Linux is particularly easy to install.  --PeterM
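
        PeterM's "spend a day on the compiler + flags" advice is easy to
        act on with a throwaway micro-benchmark: build the same kernel
        under each compiler and flag combination and compare the reported
        times.  The flags in the comment are only examples of the kind of
        options gcc/egcs offered; check your own compiler's documentation
        for what it actually accepts.

            /* Tiny FP benchmark for comparing compiler/flag combos, e.g.
             *   gcc  -O2           -o bench bench.c
             *   egcs -O3 -mpentium -o bench bench.c   (examples only)
             * Run each build a few times and keep the best time. */
            #include <stdio.h>
            #include <time.h>

            #define ITERS 50000000L

            int main(void)
            {
                double x = 0.0;
                long i;
                clock_t t0 = clock();

                for (i = 0; i < ITERS; i++)   /* simple FP accumulation */
                    x += 1e-7 * (double)(i & 0xff);

                /* print the result so the compiler can't drop the loop */
                printf("result=%g  cpu=%.2fs\n",
                       x, (double)(clock() - t0) / CLOCKS_PER_SEC);
                return 0;
            }

        The same harness works for pitting g77-compiled Fortran against C,
        or for confirming that a flag you expected to help did not hurt.
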
        \_ By the way, Scott, your .forwarding address doesn't work.  --PeterM
           \_ just out of curiosity, are you by any chance the same Peter M
              that appears on the gimp splash screen?
              \_  No, that's Peter Mattis.  My login is peterm.  --PeterM
        \_ Actually, since you're just doing number crunching and nothing
           else, DOS may be the best "OS" to use if you're stuck with a single
           processor PC.  Sure it's the lamest OS ever (if you can even
           call it one), but then you'll have more CPU cycles available
           than if there's a real OS running.  Plus I think (not sure)
           running in real mode is faster than running in virtual mode.
           You can still tell the compiler to generate 32-bit or Pentium
           instructions.  -- yuen
           \_ *I* would certainly not want to have to move from machine to
              machine to manage jobs!  An ethernet card is MUCH cheaper than
              a monitor for a compute-farm, and DOS has nil networking
              capability.  Linux/FreeBSD are worth it for convenience.
              Second, "real" OS's really incur very little overhead, and you
              can even reduce it to a very small amount by increasing the
              time slice each process gets.  SMP machines are a good
              suggestion though:  they're very space/cost effective, and
              linux, at least, does a good job in SMP mode keeping the
              CPU's busy when you run long-running compute intensive jobs.
              Just be sure not to run more jobs than you have CPUs. --PeterM
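
        A minimal sketch of PeterM's one-job-per-CPU suggestion, assuming
        plain fork()/wait(): the parent launches one worker per CPU and the
        kernel's SMP scheduler keeps all the processors busy.  NCPUS and
        crunch() are stand-ins, not anything from this thread.

            /* Launch one compute job per CPU and wait for them all. */
            #include <stdio.h>
            #include <stdlib.h>
            #include <sys/types.h>
            #include <sys/wait.h>
            #include <unistd.h>

            #define NCPUS 2   /* e.g. a dual-CPU SMP box (hypothetical) */

            static void crunch(int which)   /* placeholder for real work */
            {
                double x = 0.0;
                long i;
                for (i = 0; i < 100000000L; i++)
                    x += 1e-9 * (double)((i + which) & 0xff);
                printf("worker %d done: %g\n", which, x);
            }

            int main(void)
            {
                int n;
                for (n = 0; n < NCPUS; n++) {
                    pid_t pid = fork();
                    if (pid == 0) {          /* child: do one share, exit */
                        crunch(n);
                        _exit(0);
                    } else if (pid < 0) {
                        perror("fork");
                        exit(1);
                    }
                }
                while (wait(NULL) > 0)       /* parent: reap all workers */
                    ;
                return 0;
            }
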
           \_ If you have other programs running on your system that are
              idle, almost no time will be dedicated to those processes.
              I'm running httpd on my computer but it takes up about 0%
              of my processor resources so idle processes shouldn't matter.
              Pentium optimized instructions may even be faster because
              they pipeline better and the memory management on unix beats
              the hell out of dos so if you're doing space inefficient
              computations dos will stink.
              \_ even if they wind up doing nothing, you're still wasting
                 cycles with the kernel's occasional interrupts to check its
                 scheduler and find that it has nothing else to do.
                 \_ Yeah, use QNX or VxWorks or code raw assembly