Berkeley CSUA MOTD: Entry 17867

2000/3/27-29 [Computer/Theory] UID:17867 Activity:insanely high
3/27    What's the running time (big-O) of fibonacci series? I can't find it
        in CLR. It only talks about the golden ratio shit.
                 \_ F_n=(p^n-(1-p)^n)/sqrt(5), p=the golden ratio. --Galen
        \_ CLR was the most worthless piece of shit book I have ever spent my
           money on.  It's worth more as firewood or a shooting target than
           as an academic textbook.
           \_ Go read some Sesame Street literature, then. There's a good
              REASON that CLR is the single most popular intro algorithms
              text in the world.
        \_ O(1).
           \_ dumbass
        \_ A numeric sequence can't have a running time, only an algorithm
           can.  Please restate your question.  Also, Fibonacci numbers are a
           sequence, not a series.
           \_ How about "What's the running time of evaluating the n+1-th
              number in the Fibonacci sequence F(n) by using the recursive
              algorith F(n)=F(n-1)+F(n-2), F(1)=F(0)=1?"
              \_ The naive implementation is exponential, the slightly less
                 naive memoizing implementation is linear.
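                  A sketch of both in C, using the F(1)=F(0)=1 convention
                  from above (names are mine):
                    /* naive: T(n) = T(n-1) + T(n-2) + O(1), so the call
                       tree has on the order of phi^n nodes: exponential */
                    long fib_naive(int n) {
                        if (n < 2) return 1;
                        return fib_naive(n-1) + fib_naive(n-2);
                    }
                    /* memoized: each F(k) is computed once: linear */
                    long memo[92];          /* 0 means not yet computed */
                    long fib_memo(int n) {
                        if (n < 2) return 1;
                        if (!memo[n]) memo[n] = fib_memo(n-1) + fib_memo(n-2);
                        return memo[n];
                    }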
              \_ The cool implementation is constant time.  You can derive
                 a formula like F(n) = a * b^n + c * d^n.  I don't remember
                 the coefficients, but you can look them up or derive them.
                 If you need the formula and you can't find it, email me
                 and I'll take the time to derive the constants. -emin
                  \_ F_n=(p^n-(1-p)^n)/sqrt(5), p=the golden ratio. --Galen
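                     \_ Derivation, since emin offered: the recurrence
                        F_n = F_{n-1} + F_{n-2} has characteristic equation
                        x^2 = x + 1, with roots p = (1+sqrt(5))/2 and
                        1-p = (1-sqrt(5))/2, so F_n = a*p^n + b*(1-p)^n for
                        some constants a, b.  Using the F_0=0, F_1=1 indexing
                        that matches Galen's formula: F_0=0 forces b = -a,
                        and F_1=1 gives a*(p-(1-p)) = a*sqrt(5) = 1, so
                        a = 1/sqrt(5) and F_n = (p^n-(1-p)^n)/sqrt(5).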
                 \_ uh, if you have to use "to the power of n", for arbitrary
                    n, i don't think it's constant time...nice try though.
                    \_ "To the power of" is O(1).  Nice try though.
                       \_ I'm impressed...with your stupidity.  I mean,
                          you can't really be THAT clueless can you?  You'd
                          have to be TRYING to be stupid to come off that well.
                          I really hope you were expressing something in an
                          odd and painfully obtuse way rather than blithering
                          utter and complete nonsense.
                       \_ Go read a book, moron. Let's see you do O(1)
                          ANYTHING. O(1) means you don't even have time
                          to read the entire input. Exponentiation in
                          arbitrary precision is O(log) with respect to the
                          exponent. Actually something like O(n^3) w.r.t.
                          actual input size, since the base has to be
                          multiplied by itself O(log(exponent)) times.
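                           If anyone wants the O(log n)-multiplication route
                           for F(n) itself, it's repeated squaring of the
                           matrix [[1,1],[1,0]] rather than evaluating p^n
                           in floating point.  A sketch in C (machine longs,
                           so it overflows for large n; arbitrary precision
                           would add the factors discussed above):
                             void matmul(long a[2][2], long b[2][2]) {
                                 /* a = a*b for 2x2 matrices */
                                 long t[2][2];
                                 int i, j;
                                 for (i = 0; i < 2; i++)
                                     for (j = 0; j < 2; j++)
                                         t[i][j] = a[i][0]*b[0][j]
                                                 + a[i][1]*b[1][j];
                                 for (i = 0; i < 2; i++)
                                     for (j = 0; j < 2; j++)
                                         a[i][j] = t[i][j];
                             }
                             /* with F(1)=F(0)=1 as above, F(n) is the
                                [0][1] entry of [[1,1],[1,0]]^(n+1) */
                             long fib(int n) {
                                 long r[2][2] = {{1,0},{0,1}};
                                 long m[2][2] = {{1,1},{1,0}};
                                 int e;
                                 for (e = n + 1; e > 0; e >>= 1) {
                                     if (e & 1) matmul(r, m);  /* multiply */
                                     matmul(m, m);             /* square */
                                 }
                                 return r[0][1];
                             }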
        \_ BTW, what's the Fibonacci series famous for?  It might be useful
           for teaching computer science, but for math itself what's it
           useful for?
            \_ It comes up in the weirdest places.  People have used the
               fibonacci sequence to analyze the stock market (successfully).
               \_ yep. there is a journal called the "Fibonacci Quarterly"
                  devoted entirely to such weird cases.  it should be in
                  the math library if you're interested.
              \_ If the stock market was that easily analysed, we'd all be
                 billionaires.
                 \_ you mean you're *not* a billionaire?
                        \_ I meant the rest of you peasants.
           \_ Modeling the reproduction rate of rabbits.
        \_ yeah, i hear you on that one.  the most i've ever seen it
           used for is the continuous torture of students throughout junior
           high, high school, and college.  if anybody can actually point
           out a url which proves that this thing actually has a practical
           application in reality, i'd be really pleased to see it...
           die, fibonacci, die!!!
           \_ Torture?  Fib series is trivial.  Where'd you go to school?
              Dummy High?
           \_ C is for Coo^WYour Grade and it's good enough for you!
           \_ If that's the most you've ever seen it used for, you must
              be an inexperienced college kid with little view of the
              wider world beyond the school walls.  That puts you in the
              right place.
            \_ for modeling the rate of spawning of new rabbits in the wild -ali
                \_ This is important because....?
                  \_ because it also models the rate at which i and my progeny
                     have been doing your mom for the past two years. -ali
        \_ Try Fibonacci heaps.
            \_ Whatever.  The complexity of the heap operations incurs so
               much overhead that they eventually end up costing a lot
               more than a simpler alternative implementation, thus
               cancelling out any improvements gained from the lower
               running times.  This is as useless as Straussman's
               (sorry if I spelled his name wrong Ilyas). Big deal. We can
               matrix multiply in O(n^2.7) time instead of O(n^3). It turns
               out to run slower on most machines anyways, so why bother.
               \_ what the fuck are you talking about?  name another data
                  structure that implements priority queues with logn insert
                  and O(1) retrieve. the "simpler" alternative you have in
                  mind, is it an insertion sort or something stupid?
                 \_ Maybe he is talking about standard heaps which have logn
                    insert.
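                     For reference, that standard array-backed heap (logn
                     insert, constant-time retrieve-min) fits in a few
                     lines of C; a sketch, names mine:
                       /* min-heap stored in h[1..*len]: h[i/2] <= h[i] */
                       void heap_insert(long h[], int *len, long x) {
                           int i = ++*len;            /* sift up: O(log n) */
                           for (; i > 1 && h[i/2] > x; i /= 2)
                               h[i] = h[i/2];
                           h[i] = x;
                       }
                       long heap_min(long h[]) {      /* retrieve: O(1) */
                           return h[1];
                       }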
              \_ Strassen's is getting to be O(n^2.3) now.  And even the
                 O(n^2.7) implementation becomes faster than normal
                 multiplication for sufficiently large arrays on the machines
                 where 170 homework is done, at least. -- ilyas
                  \_ The best known, n^2.376, isn't based on the original
                     Strassen-Winograd scheme... not that it matters, of course...
              \_ Do you have any idea how big the matrices for interesting
                 real-world problems tend to be?  Do you have any idea
                 how much less (n^2.7) is than (n^3)?    -blojo
                 \_ Why, do YOU? Because it isn't all that large. Even a lame
                    implementation passes the threshold at about 150x150
                    matrices. A _lot_ of problems involve much larger matrices
                    which are sufficiently dense for sparse methods not to
                    work.
                    \_ Are you arguing for or against Strassen's?
                    \_ That's not how it is in my industry (games).
                       Nobody doing a complex system will write brute-force
                       matrix math and expect it to run okay.  It's all
                       about multigrid methods, frequency-space operations,
                       you name it.     -blojo
                       \_ while (player_still_alive()) {draw_dot(x,y);}
               \_ Ideas similar to Strassen's algorithm are used for
                  fast FIR filtering, which is widely used in all sorts
                  of DSP algorithms.  I don't know if people thought of
                  fast FIR filtering first or Strassen's algorithm first.
                  However, the point is that the ideas behind Strassen's
                  algorithm turn out to be very useful.  IMHO the reason
                  to study algorithms (even weird stuff like Strassen's
                  algorithm and Fibonacci heaps) is so that when you
                  have a particular practical problem you can come up
                  with clever ideas. -emin
                 \_ Strassen was there first.
        \_ I'm sorry, Strassen's should *never* be faster on modern
           architectures, where multiplies are as fast as adds. Strassen's
           trades off more adds for fewer multiplies. As long as you've
           blocked your implementation for cache and registers, and copied
           when you have low-associativity caches or TLB, then you're OK
           without Strassen's.   -nick
            \_ Uh, go read the book again. Strassen optimizes away MATRIX
               multiplies, NOT scalar multiplies. If we knew how to multiply
               matrices as fast as we can add them (O(n^2)), all of Strassen's
               stuff would be obsolete, yes.... But we _don't_.
                         \_ Isn't this one of the advantages of a Quantum
                            Computer over standard computers?
                           \_ Oh, shut the fuck up, and consult the literature.
                \_ Find a new book.  At the instruction level, a multiply is
                   a multiply.  Thank you for playing.
                    \_ Wow.  Are you just a total idiot or what?  For a square
                       matrix of size n x n, addition or scalar multiplication
                       requires O(n^2) operations, whereas naive matrix
                       multiplication requires O(n^3) operations.  Yes, add and
                       mul are single instructions, but Strassen attempts to
                       reduce the number of those instructions.
                        \_ This was exactly my point.  So there's no reason,
                           as you said, why we should use add instead of mult.
                           \_ because while add and mult are equally fast,
                              add is faster than mult.  Duh! --pld
                   \_ look, strassen's replaces a multiply by 14 adds so that
                      instead of 8 matrix muls, you have 7 matrix muls. -ali
                         \_ Thank you for reaffirming my point that a mult is
                            the same difficulty computationally as an add, so
                            there's no need for all these Strassen methods and
                            their variants.
                            \_ it replaces 1 matrix multiply by 14 matrix adds.
                               A matrix multiply (brute force) is O(n^3).  A
                               matrix add is O(n^2).  are you really this dense?
                              \_ But they're the same if n is 1, so fuck you!
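                               \_ Spelled out for the 2x2 case, a sketch
                                  in C (scalar entries; in the real thing
                                  each entry is an n/2 x n/2 block and
                                  each * is a recursive call, giving
                                  T(n) = 7T(n/2) + O(n^2) = O(n^lg 7),
                                  about O(n^2.807); the exact add count
                                  varies by formulation):
                                    void strassen2(long A[2][2],
                                                   long B[2][2],
                                                   long C[2][2]) {
                                        /* 7 products instead of 8 */
                                        long M1 = (A[0][0]+A[1][1])
                                                * (B[0][0]+B[1][1]);
                                        long M2 = (A[1][0]+A[1][1])
                                                * B[0][0];
                                        long M3 = A[0][0]
                                                * (B[0][1]-B[1][1]);
                                        long M4 = A[1][1]
                                                * (B[1][0]-B[0][0]);
                                        long M5 = (A[0][0]+A[0][1])
                                                * B[1][1];
                                        long M6 = (A[1][0]-A[0][0])
                                                * (B[0][0]+B[0][1]);
                                        long M7 = (A[0][1]-A[1][1])
                                                * (B[1][0]+B[1][1]);
                                        C[0][0] = M1 + M4 - M5 + M7;
                                        C[0][1] = M3 + M5;
                                        C[1][0] = M2 + M4;
                                        C[1][1] = M1 - M2 + M3 + M6;
                                    }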
        \_ I think you can use a generating function z() to compute the nth
           fib. number in constant time, hence the O(1).
           \_ Formatting fixed.
           \_ The above function IS the generating function and it's NOT
              O(1) with respect to input size. Read above posts.
                \_ Think outside the box.  The answer is there if only you
                   would reach for it.
           \_ i think bigdickbastardpedant's complaint is that you can't
              generally compute e^x in constant time, because you need to
              compute at least a taylor series or something, and the number
              of terms used does depend on x, as larger x's are more poorly
              approximated with few terms. i don't know of a way to compute
              e^x in time independent of x either. -ali
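               \_ To make that concrete, a sketch (hypothetical code;
                  assumes x >= 0 and an arbitrary tolerance):
                    double taylor_exp(double x) {
                        double term = 1.0, sum = 1.0;
                        int k;
                        /* term = x^k/k!; terms only start shrinking
                           once k > x, so the loop count grows with x */
                        for (k = 1; term > 1e-16 * sum; k++) {
                            term *= x / k;
                            sum += term;
                        }
                        return sum;
                    }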
              \_ "Hello, Pedant." *clickety click*
              \_ Well The Pedant needs to go re-read exactly what O() notation
                 is all about and how to properly compute it.  The size of the
                 input is unrelated to the O(), therefore The Pedant is not
                 only a pedantic idiot, but is simply flat out wrong as well.
                  \_ you must have this strange version of O() notation that
                     depends on the phase of the moon or something.
                 \_ If sorting is O(n log n), what does n measure? lumen? -pld
       \_ fibonacci sucks big dick
           \_ I wish. but mine won't fit
             \_ You wish a guy suck your dick?
                \_ You wish your writing good grammar?
                 \_ Why not?  Who cares who does it?  Lips & tounge are
                    lips & tounge.
                   \_ Sigh.
                      \_ Ahhh.
                   \_ Is formatting so hard?  I think I'll start purging all
                      content-free comments that are poorly formatted.
                                --formatting nazi
                      \_ Do it.  Blow 'em away!
                   \_ what the fuck is a tounge?
                      \_ An extremely clever anagram.