Berkeley CSUA MOTD:2000:March:27 Monday <Sunday, Tuesday>
2000/3/27 [Uncategorized] UID:17864 Activity:nil
3/27    Oh Tyler, I want your abortion!
2000/3/27-28 [Recreation/Food/Alcohol, Recreation/Food] UID:17865 Activity:high
3/27    First donuts.  Then pizza.  What other college food groups
        (aside from Beer) are we missing?
        \_ YerMom.
        \_ Chocolate.  Scharffen Berger opened a factory in Berkeley.
                \_ never thought of chocolate as a college food group
                   (Stereotype warning): I guess because I'm a male
                   who doesn't crave chocolate.
        \_ burritos?  top dog hot dogs?  Greasy northside food?
2000/3/27 [Academia/Berkeley/CSUA/Troll] UID:17866 Activity:high
3/26    Mail spool troll purged as per prior warning.
        \_ is it really a troll?
           \_ Yeah, he got me.  It was a total troll.
                \_ I can smell a mail spool troll miles away.  -purger
                   \_ Maybe, but that wasn't a subtle troll, poor guy.
2000/3/27-29 [Computer/Theory] UID:17867 Activity:insanely high
3/27    What's the running time (big-O) of fibonacci series? I can't find it
        on CLR. It only talks about the golden ratio shit.
                 \_ F_n=(p^n-(1-p)^n)/sqrt(5), p=the golden ratio. --Galen
        \_ CLR was the most worthless piece of shit book I have spent my
            money on.  It's worth more as firewood or a shooting target than
            an academic textbook.
           \_ Go read some Sesame Street literature, then. There's a good
              REASON that CLR is the single most popular intro algorithms
              text in the world.
        \_ O(1).
           \_ dumbass
        \_ A numeric sequence can't have a running time, only an algorithm
           can.  Please restate your question.  Also, Fibonacci numbers are a
           sequence, not a series.
           \_ How about "What's the running time of evaluating the n+1-th
              number in the Fibonacci sequence F(n) by using the recursive
               algorithm F(n)=F(n-1)+F(n-2), F(1)=F(0)=1?"
              \_ The naive implementation is exponential, the slightly less
                 naive memoizing implementation is linear.
              \_ The cool implementation is constant time.  You can derive
                 a formula like F(n) = a * b^n + c * d^n.  I don't remember
                 the coefficients, but you can look them up or derive them.
                 If you need the formula and you can't find it, email me
                 and I'll take the time to derive the constants. -emin
                  \_ F_n=(p^n-(1-p)^n)/sqrt(5), p=the golden ratio. --Galen
                 \_ uh, if you have to use "to the power of n", for arbitrary
                    n, i don't think it's constant time...nice try though.
                    \_ "To the power of" is O(1).  Nice try though.
                       \_ I'm impressed...with your stupidity.  I mean,
                          you can't really be THAT clueless can you?  You'd
                          have to be TRYING to be stupid to come off that well.
                          I really hope you were expressing something in an
                          odd and painfully obtuse way rather than blithering
                          utter and complete nonsense.
                       \_ Go read a book, moron. Let's see you do O(1)
                          ANYTHING. O(1) means you don't even have time
                          to read the entire input. Exponentiation in
                          arbitrary precision is O(log) with respect to the
                          exponent. Actually something like O(n^3) w.r.t.
                          actual input size, since the base has to be
                          multiplied by itself O(log(exponent)) times.
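
        A minimal sketch, in C, of the implementations argued about above: the
        naive recursion is exponential, the memoized version is linear, and
        repeated squaring of the 2x2 matrix [[1,1],[1,0]] gets F(n) in O(log n)
        matrix multiplies, which is the honest version of the "constant time
        closed form" claim once arithmetic operations are counted.  Function
        names and the 64-bit types are illustrative assumptions, and the
        standard indexing F(0)=0, F(1)=1 is used rather than the thread's
        F(0)=F(1)=1.

            #include <stdint.h>
            #include <string.h>

            /* Naive recursion: T(n) = T(n-1) + T(n-2) + O(1), ~phi^n calls. */
            uint64_t fib_naive(unsigned n) {
                return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
            }

            /* Memoized: each F(k) is computed once, so O(n) additions. */
            uint64_t fib_memo(unsigned n) {
                static uint64_t memo[94];  /* F(93) is the last to fit */
                if (n < 2) return n;
                if (memo[n] == 0) memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
                return memo[n];
            }

            /* 2x2 multiply, a = a*b; safe even when a and b alias. */
            static void mat_mul(uint64_t a[2][2], uint64_t b[2][2]) {
                uint64_t r[2][2];
                r[0][0] = a[0][0]*b[0][0] + a[0][1]*b[1][0];
                r[0][1] = a[0][0]*b[0][1] + a[0][1]*b[1][1];
                r[1][0] = a[1][0]*b[0][0] + a[1][1]*b[1][0];
                r[1][1] = a[1][0]*b[0][1] + a[1][1]*b[1][1];
                memcpy(a, r, sizeof r);
            }

            /* [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]], computed by
             * square-and-multiply: O(log n) 2x2 multiplies. */
            uint64_t fib_matrix(unsigned n) {
                uint64_t result[2][2] = { {1, 0}, {0, 1} };  /* identity */
                uint64_t base[2][2]   = { {1, 1}, {1, 0} };
                for (; n; n >>= 1) {
                    if (n & 1) mat_mul(result, base);
                    mat_mul(base, base);
                }
                return result[0][1];
            }

        (fib_naive(40) already makes a few hundred million calls; the other two
        are instantaneous, which is the point of the memoization reply above.)
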
        \_ BTW, what's the Fibonacci series famous for?  It might be useful
           for teaching computer science, but for math itself what's it
           useful for?
            \_ It comes up in the weirdest places.  People have used the
              fibonacci sequence to analyze the stock market (successfully).
               \_ yep. there is a journal called the "Fibonacci Quarterly"
                  devoted entirely to such weird cases.  it should be in
                 the math library if you're interested.
              \_ If the stock market was that easily analysed, we'd all be
                 billionaires.
                 \_ you mean you're *not* a billionaire?
                        \_ I meant the rest of you peasants.
           \_ Modeling the reproduction rate of rabbits.
        \_ yeah, i hear you on that one.  the most i've ever seen it used
            for is the continuous torture of students throughout junior high,
           high school, and college.  if anybody can actually point out
           a url which proves that this thing actually has a practical
           application in reality, i'd be really pleased to see it...
           die, fibonacci, die!!!
           \_ Torture?  Fib series is trivial.  Where'd you go to school?
              Dummy High?
           \_ C is for Coo^WYour Grade and it's good enough for you!
           \_ If that's the most you've ever seen it used for, you must
              be an inexperienced college kid with little view of the
              wider world beyond the school walls.  That puts you in the
              right place.
            \_ for modeling the rate of spawning of new rabbits in the wild -ali
                \_ This is important because....?
                  \_ because it also models the rate at which i and my progeny
                     have been doing your mom for the past two years. -ali
        \_ Try Fibonacci heaps.
            \_ Whatever.  The overhead of the heap operations ends up costing
               a lot more in practice than a simpler alternative implementation
               would, cancelling out any improvement gained from the lower
               asymptotic running times.  This is as useless as Straussman's
              (sorry if I spelled his name wrong Ilyas). Big deal. We can
              matrix multiply in O(n^2.7) time instead of O(n^3). It turns
              out to run slower on most machines anyways so why bother.
              \_ what the fuck are you talking about?  name another data
                  structure that implements priority queues with log n insert
                  and O(1) retrieve.  The "simpler" alternative you have in mind,
                 is it an insertion sort or something stupid?
                  \_ Maybe he is talking about standard heaps which have log n
                    insert.
              \_ Strassen's is getting to be O(n^2.3) now.  And even the
                 O(n^2.7) implementation becomes faster than normal
                 multiplication for sufficiently large arrays on the machines
                 where 170 homework is done, at least. -- ilyas
                  \_ The best known, O(n^2.376), isn't based on the original
                     Strassen-Winograd construction... not that it matters,
                     of course...
              \_ Do you have any idea how big the matrices for interesting
                 real-world problems tend to be?  Do you have any idea
                 how much less (n^2.7) is than (n^3)?    -blojo
                 \_ Why, do YOU? Because it isn't all that large. Even a lame
                    implementation passes the threshold at about 150x150
                    matrices. A _lot_ of problems involve much larger matrices
                    which are sufficiently dense for sparse methods not to
                    work.
                    \_ Are you arguing for or against Strassen's?
                    \_ That's not how it is in my industry (games).
                       Nobody doing a complex system will write brute-force
                       matrix math and expect it to run okay.  It's all
                       about multigrid methods, frequency-space operations,
                       you name it.     -blojo
                       \_ while (player_still_alive()) {draw_dot(x,y);}
               \_ Ideas similar to Strassen's algorithm are used for
                  fast FIR filtering which is widely used in all sorts
                  of DSP algorithms.  I don't know if people thought of
                  fast FIR filtering first or Strassen's algorithm first.
                 However, the point is that the ideas behind Strassen's
                 algorithm turn out to be very useful.  IMHO the reason
                 to study algorithms (even weird stuff like Strassen's
                 algorithm and Fibonacci heaps) is so that when you
                 have a particular practical problem you can come up
                 with clever ideas. -emin
                 \_ Strassen was there first.
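
        For reference, a sketch of the kind of "simpler alternative" the
        priority-queue exchange above is arguing over: a plain array-backed
        binary heap, which already gives O(log n) insert and O(1) find-min with
        far smaller constants than a Fibonacci heap (Fibonacci heaps mainly pay
        off when decrease-key dominates, e.g. inside Dijkstra).  The names and
        the fixed capacity are illustrative assumptions; bounds checks omitted.

            #include <stddef.h>

            #define HEAP_MAX 1024         /* illustrative fixed capacity */

            typedef struct {
                int a[HEAP_MAX + 1];      /* classic 1-based layout: a[1..n] */
                size_t n;
            } min_heap;

            /* O(log n): append at the bottom, sift the new key up. */
            void heap_insert(min_heap *h, int key) {
                size_t i = ++h->n;
                while (i > 1 && h->a[i / 2] > key) {
                    h->a[i] = h->a[i / 2];
                    i /= 2;
                }
                h->a[i] = key;
            }

            /* O(1): the minimum always sits at the root. */
            int heap_min(const min_heap *h) {
                return h->a[1];
            }

            /* O(log n): move the last leaf to the root and sift it down. */
            int heap_extract_min(min_heap *h) {
                int min = h->a[1], last = h->a[h->n--];
                size_t i = 1;
                for (;;) {
                    size_t c = 2 * i;
                    if (c > h->n) break;
                    if (c < h->n && h->a[c + 1] < h->a[c]) c++;
                    if (last <= h->a[c]) break;
                    h->a[i] = h->a[c];
                    i = c;
                }
                h->a[i] = last;
                return min;
            }
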
        \_ I'm sorry, Strassen's should *never* be faster on modern
           architectures, where multiplies are as fast as adds. Strassen's
           trades off more adds for fewer multiplies. As long as you've
           blocked your implementation for cache and registers, and copied
           when you have low-associativity caches or TLB, then you're OK
           without Strassen's.   -nick
            \_ Uh, go read the book again. Strassen optimizes away MATRIX
               multiplies, NOT scalar multiplies. If we knew how to multiply
               matrices as fast as we can add them (O(n^2)), all of Strassen's
               stuff would be obsolete, yes.... But we _don't_.
                         \_ Isn't this one of the advantages of a Quantum
                            Computer over standard computers?
                           \_ Oh, shut the fuck up, and consult the literature.
                \_ Find a new book.  At the instruction level, a multiply is
                   a multiply.  Thank you for playing.
                   \_ Wow.  Are you just a total idiot or what?  For a square
                      matrix of size n x n, addition or scalar multiplication
                      requires O(n^2) operations, whereas naive matrix
                       multiplication requires O(n^3) operations.  Yes, add and
                      mul are single instructions, but Strassen attempts to
                      reduce the number of those instructions.
                        \_ This was exactly my point.  So there's no reason,
                           as you said, why we should use add instead of mult.
                           \_ because while add and mult are equally fast,
                              add is faster than mult.  Duh! --pld
                   \_ look, strassen's replaces a multiply by 14 adds so that
                      instead of 8 matrix muls, you have 7 matrix muls. -ali
                         \_ Thank you for reaffirming my point that a mult is
                           the same difficulty computationally as an add, so
                           there's no need for all these Strassen methods and
                           their variants.
                           \_ it replaces 1 matrix multiply by 14 matrix adds.
                               A matrix multiply (brute force) is O(n^3).  A
                              matrix add is O(n^2).  are you really this dense?
                              \_ But they're the same if n is 1, so fuck you!
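
        To make the "7 muls instead of 8" concrete, here is one level of
        Strassen's recursion written out with plain scalars; names are
        illustrative.  In the real algorithm each operand below is an
        (n/2)x(n/2) block, '*' is a recursive call, and '+'/'-' are cheap
        O(n^2) block additions, so the recurrence drops from 8T(n/2) + O(n^2)
        = O(n^3) to 7T(n/2) + O(n^2) = O(n^lg 7), roughly O(n^2.81).  That is
        also why "a multiply is a multiply" misses the point: the operation
        being saved is an entire block multiply, not a scalar one.

            /* One level of Strassen for C = A*B on 2x2 matrices. */
            void strassen_2x2(double A[2][2], double B[2][2], double C[2][2]) {
                /* seven products instead of the naive eight... */
                double m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
                double m2 = (A[1][0] + A[1][1]) *  B[0][0];
                double m3 =  A[0][0]            * (B[0][1] - B[1][1]);
                double m4 =  A[1][1]            * (B[1][0] - B[0][0]);
                double m5 = (A[0][0] + A[0][1]) *  B[1][1];
                double m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
                double m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);

                /* ...at the price of extra additions, the cheap O(n^2) part */
                C[0][0] = m1 + m4 - m5 + m7;
                C[0][1] = m3 + m5;
                C[1][0] = m2 + m4;
                C[1][1] = m1 - m2 + m3 + m6;
            }
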
        \_ I think you can use a generating function z() to compute the nth
            fib. number in constant time, hence the O(1).
           \_ Formatting fixed.
           \_ The above function IS the generating function and it's NOT
              O(1) with respect to input size. Read above posts.
                \_ Think outside the box.  The answer is there if only you
                   would reach for it.
           \_ i think bigdickbastardpedant's complaint is that you can't
              generally compute e^x in constant time, because you need to
              compute at least a taylor series or something, and the number
              of terms used does depend on x, as larger x's are more poorly
              approximated with few terms. i don't know of a way to compute
              e^x in time independent of x either. -ali
              \_ "Hello, Pedant." *clickety click*
              \_ Well The Pedant needs to go re-read exactly what O() notation
                 is all about and how to properly compute it.  The size of the
                 input is unrelated to the O(), therefore The Pedant is not
                 only a pedantic idiot, but is simply flat out wrong as well.
                  \_ you must have this strange version of O() notation that
                    depends on the phase of the moon or something.
                 \_ If sorting is O(n log n), what does n measure? lumen? -pld
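
        Since the thread keeps circling the "O(1)" closed form, a minimal
        sketch of Binet's formula in double precision.  With fixed-width
        floats, pow() does take constant time, but only because the precision
        caps the usable n: rounding error makes the result wrong somewhere
        past n of roughly 70, and with arbitrary-precision arithmetic the
        exponentiation costs O(log n) big-number multiplies, as pointed out
        above.  The function name is an illustrative assumption.

            #include <math.h>

            /* Binet: F(n) = (phi^n - (1-phi)^n) / sqrt(5), F(0)=0, F(1)=1. */
            double fib_binet(unsigned n) {
                const double sqrt5 = sqrt(5.0);
                const double phi   = (1.0 + sqrt5) / 2.0;  /* golden ratio */
                const double psi   = (1.0 - sqrt5) / 2.0;  /* = 1 - phi */
                return round((pow(phi, n) - pow(psi, n)) / sqrt5);
            }
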
       \_ fibonacci sucks big dick
          \_ I wish. but mine wont fit
             \_ You wish a guy suck your dick?
                \_ You wish your writing good grammar?
                \_ Why not?  Who cares who does it?  Lips & tounge are
                        lips & tounge.
                   \_ Sigh.
                      \_ Ahhh.
                   \_ Is formatting so hard?  I think I'll start purging all
                      content-free comments that are poorly formatted.
                                --formatting nazi
                      \_ Do it.  Blow 'em away!
                   \_ what the fuck is a tounge?
                      \_ An extremely clever anagram.
2000/3/27 [Uncategorized] UID:17868 Activity:nil 60%like:17863
3/27    /dev/da0e              2540480  2312510    24732    99%    /var/mail
         purge me again.  I dare Ya'!  Look how full it's getting tho!
2000/3/27-28 [Politics/Domestic] UID:17869 Activity:kinda low
3/27    I hear so much about HMO's in the news today. But what the hell does
        the acronym stand for?
        \_ Health Maintenance Organization
        \_ The only one here (at present) is IMHO.  It stands for "In My
           Humble Opinion"
            \_ Thanks for the answer, unlike the crap below.  All I asked for
               was a stupid acronym and I got a 1-page troll from a bunch
               of political nut bags.  geez.
        \_ Basically, health insurance providers.  You are hearing about
            them in the news b/c they are evil...cutting corners whenever
            possible to the detriment of patients under the health care
            plan.  From what I can ascertain, these HMO's are run by
            businesspeople, not doctors, so they are not really looking
            out for the patient's best interest most of the time.
           \_ HMO's are an evil socialist monstrosity devised by liberals in
              the name of "affordable healthcare."  A real businessman
              ALWAYS has the patient's best interests at heart, because
               otherwise he'll lose a customer.  The only thing that will
              fix the evil of HMOs is fully privatized healthcare.
                \_ Nice theory.  Completely wrong though.  Most HMO customers
                   have no choice - they go with whichever HMO gave their
                   employer the best deal on the group plan.
                    \_ You just don't get it, do you?  People don't have the
                      choice PRECISELY because HMOs are forced down our throats
                      by lefties, and because there are laws REQUIRING the
                      employers to provide HMO coverage for their employees.
                      In these sorts of nasty socialist conditions, what do you
                      expect will happen?  Exactly what is happening now.
                       A fully privatized healthcare system will give people choice,
                      and stay affordable the way any service stays affordable:
                      competition.
                         \_ hey, do you know what a mixed economy is? govt
                            regulated capitalism. welcome to america.
                        \_ I'm with flacid
2000/3/27-28 [Finance/Investment] UID:17870 Activity:high
3/27    Is it easy to manage half shares and fractional shares in E*Trade?
        (stocks)
        \_ Go buy a full block like everyone else you penniless wimp.
                \_ too many assumptions: What if the company gets bought
                   out and gets exchanged for a fractional amount of shares
                   in another company?  (which happens to be the case).
                   Or a DRIP system?
2000/3/27-30 [Computer/SW/OS/Windows] UID:17871 Activity:nil
3/27    How do i make win2k browse samba shares?
        \_ Same as before.
        \_ This was answered.  There is no difference for win2k and non-win2k
           browsing of samba shares.  -using win2k and samba
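
        For anyone hitting this with NT-family clients, a minimal smb.conf
        sketch of the sort "same as before" implies; the share name, path and
        workgroup below are made-up examples.  The usual catch with NT4 SP3+
        and win2k is that they refuse plaintext SMB passwords by default, so
        either turn on encrypted passwords (and add users with smbpasswd) or
        tweak the client registry.

            [global]
               ; must match the workgroup/domain the win2k box is in
               workgroup = WORKGROUP
               ; win2k will not send plaintext passwords by default
               encrypt passwords = yes
               smb passwd file = /etc/smbpasswd

            ; example share; the name and path are made up
            [pub]
               path = /home/pub
               browseable = yes
               read only = no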