Computer SW Database - Berkeley CSUA MOTD
2013/10/28-2014/2/5 [Computer/SW/Database] UID:54751 Activity:nil
10/28   Oracle software to blame for Obamacare website debacles:
        http://www.forbes.com/sites/theapothecary/2013/10/14/obamacares-website-is-crashing-because-it-doesnt-want-you-to-know-health-plans-true-costs
        \_ Larry Ellison is a secret Tea Party supporter.
           Most of this article is bunk, btw. Boy are the Republicans
           getting desperate.
            \_ Umm, no.  Larry Ellison is a not so secret fascist.
               \_ I thought he was a big Democrat?
2011/12/29-2012/2/6 [Computer/SW/Database] UID:54274 Activity:nil
12/29   Is it worthwhile to use ext4 on VMs? Is Journaling necessary on VMs?
         \_ what about DBs?  I read somewhere ext3 was better for DB volumes (mysql)
2011/6/29-7/21 [Computer/SW/Database, Computer/SW] UID:54133 Activity:nil
6/29    "An Israeli algorithm sheds light on the Bible"
        http://www.csua.org/u/tq4 (news.yahoo.com)
        "Software developed by an Israeli team is giving intriguing new hints
        about what researchers believe to be the multiple hands that wrote the
        Bible."
        \_ "Hype developed by an American OnLine News Feed is giving
           intriguing new bullshit on something that ultimately doesn't
           matter at all."
        \_ Yo man we already had the rapture.  Anyone left on earth is a
           lying sinner.
        \_ I wonder if this algorithm uses the same principle as those tools
           that CS professors use to check whether one student cheats on
           homework assignments by copying the code from another student's.
            \_ yawn yawn literary analysis has been around forever, this
               "news" announcement seems to coincide nicely with the release
               of the latest movie on "Who Really Wrote Shakespeare."  quit
               being such an easily led sheep.
2010/7/11-23 [Computer/SW/Database, Recreation/Sports] UID:53880 Activity:nil
7/11    "Paul the Oracle Octopus goes eight for eight, is amazing"
        http://www.csua.org/u/r4b
        How did one octopus guess something that has 1/256 probability???
        \_ I once rolled eight "1"s in a row at Risk.
           \_ But you would be in the spotlight only after you rolled eight
              "1"s, just like lottery winners.  OTOH that octopus was already
              in the spotlight when the World Cup started.
2010/3/25-4/14 [Computer/SW/Database, Computer/SW/Languages/Functional, Computer/SW/SpamAssassin] UID:53761 Activity:nil
3/25    OJ says to get a free book here:
        http://smartbear.com/codecollab-code-review-book.php?howheard=Coding+Horror
2010/2/25-3/30 [Computer/SW/Database] UID:53725 Activity:nil
2/25    iTunes has two job openings for production support engineers. It's
        mostly chasing production issues so that the engineers don't have to.
        Lots of SQL, some scripting, no actual coding. Pager required. Sorry
        if I'm not selling it. Check the website or email me if you're
        interested. -abe
        \_ can we do this remotely?
        \_ an aside, not derogatory, but in an effort to understand terms:
           is (in general) a Prod. Support. Eng. basically a Help desk guy++
           for that specific product?
           \_ Yes.
            \_ the lack of an answer already made a deep impression on me WRT
               working for Steve Jobs the Crazy Slave Master.
               \_ Or the fact that soda's not as reliable as it used to be and
                 I hadn't logged in for several days. If you're interested,
                 send me an email. As for slave driving, the hours at Apple
                 seem to vary greatly by group, and even within groups.
               \_ Good reading RE: Slavedriving at Apple:
http://www.chuqui.com/2008/08/mobileme-problems-show-apple-needs-an-infrastructure-lesson
        \_ Is Apple really a slave shop? I am thinking of applying for a
           Director of Ops job there, how many hours a week would this guy
           expect to put in?
2009/10/29-11/3 [Computer/SW/Database, Industry/Jobs] UID:53480 Activity:nil
10/28   I live in the Los Angeles area and a lot of jobs near me hire
        people who are 1) Front End developer 2) ASP .NET Developer and/or
        3) MS SQL DBA. Are these things common in Silicon Valley? I don't
        remember seeing so many M$ requirements when I lived in the
        Bay Area several years ago.
        \_ tons for it and enterprise apps. more rarely for cool startups
           \_ so cool companies don't use it, but lame ass companies do?
2009/10/1-21 [Computer/SW/Database, Computer/SW/WWW/Browsers] UID:53423 Activity:nil
10/1    Why Larry Ellison is such an ignorant fool:
        http://www.techcrunchit.com/2009/10/01/larry-ellison-still-hates-cloud-computing-nonsense-video
2009/9/23-10/5 [Computer/SW/Database] UID:53392 Activity:nil
9/23    I never took CS188, is there a good book that's an intro to formal
        database theory, normalization, etc.?  I've got experience with SQL
        (MySQL & MSSQL), and understand tables, etc.
        \_ You mean CS186?
           \_ Oops, yah.  188 is AI or something?
              \_ That's right.
              \_ That's right.  -- PP
                 \_ So do you have a book suggestion?
                    \_ No, sorry.  I took the class in Spring '91 with Anvari
                       and I didn't like database.  I sold the textbook right
                       after the final exam, and I couldn't even tell whether
                       the book was good or not.  -- PP
        \_ I'm not sure if CS186 will help you. There's a difference between
           a basic implementation of a primitive DB, and using a production
           DB full of rich features.
2009/9/10-15 [Computer/SW/Database] UID:53357 Activity:moderate
9/9     Larry Ellison is a bigger idiot than I thought:
        http://www.techcrunch.com/2009/09/10/oracle-to-sun-customers-and-ibm-were-in-it-to-win-it
        \_ My company's customers are insurance companies. Non-tech corporates
           don't trust open source. Why risk it. They have tons at stake and
           are willing to spend for solid products, support, and consultants
           who don't have long hair. They're being raped by IBM mainframes for
           millions. An optimized database server with Oracle + Sparc in the
           $100k's will do very well.
           \_ Sun lost a lot of money competing in this space. What will
              Oracle do differently?
              \_ Fire people and size it right.
2009/8/18-9/1 [Computer/SW/Database, Computer/SW/Languages/Perl] UID:53283 Activity:low
8/18    trying to write an intentionally slow regex.
        what is your worst regex ever?
        this is using MySQL regexp but I'll also accept
        perl format         --brain
        \_ you need to know how regex is implemented internally in order to
           have a worst regex in terms of running time. Something that uses
            a decent hash table that fits in L1 cache will be fast regardless.
           Lexical analysis with a hash lookup of constant time will be linear
           (to the length of input). NFA->DFA. Are you not a CS major?
        \_ rather than responding to trolls:
           yes, I specified MySQL for a reason:
           The inherent linear nature of most
           regex engines is what makes the problem challenging.
           Soliciting for ideas, not a lecture on how regex works;
           remember I went to Cal too.  Although if you know how the
           MySQL regexp works, I'll take anything you got!  --brain
           \_ Sorry but I don't think you're a CS major.
              \_ Is the OP even aware of DFA vs NFA/traditional NFA/posix
                 as a start?
           \_ What is a MySQL regexp?  You should look up the
              Qadafi/Khada'ffi regexp.
        \_ There are some fairly ugly regexps built into procmail for
           things like FROM_DAEMON, and there's some ~6K regexp for
           fully validating email addresses (may be in the RFC). These are
           big, but not necessarily slow. If MySQL doesn't precompile its
           regexes, and it supports extended regexes, you can do stuff with
           nested ranges: (.{1,100}){1,100}
           Perl regex for RFC822 validation:
           http://www.ex-parrot.com/~pdw/Mail-RFC822-Address.html
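            A rough, untested sketch of that nested-range idea, runnable from
            the mysql prompt (whether it is actually slow depends on how the
            server's regex engine handles nested bounded repeats):
              SELECT REPEAT('a', 200) REGEXP '^(a{1,100}){1,100}b$';
            A long run of 'a' with no trailing 'b' gives a backtracking
            engine many ways to split the run before it can report no match.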
2009/7/28-8/6 [Computer/SW/Database] UID:53213 Activity:nil
7/27    I have an actual technical question here. My MySQL DBA tells me
        that I can't expect a MySQL port to be able to run effectively
        on more than a two CPU box, he says that the extra CPUs will
        sit there unused. Is this true? I have a bunch of new quad core
        servers that I would like to use as Database machines. -ausman
   \_ It's not that simple.  If you stress test your new fancy multi core
      machine, you'll see that mysql doesn't really use all of those
       cores as much as you want.  one way to really use everything is to
      shard your data, and run several instances of mysql on one
      machine, then you'll really use all of the cores ! - danh
         \_ http://forums.mysql.com/read.php?24,53893,262315#msg-262315
            (you can probably get it to use four cores)  -tom
            \_ I realize MySQL is free, but what a POS!
               \_ is this a troll?  What do you want?  I've had no problem
                  loading multiple CPUs, yes, you can't have more than one
                   CPU working on a single SQL query, but this has never been
                  a problem for me.  What do you use?
                  \_ Have you ever gone beyond 4?
                     \_ Yes, was not benchmarking and can't be sure how much
                        more throughput I was getting vs. 4 but I have had 8
                        cores processing at near 100% utilization.
                  \_ Well, a modern database should be properly multithreaded.
                     Oracle and MS SQL Server are. A database that slows
                     down when you throw more CPUs at it is a POS.
2009/5/7-14 [Computer/SW/Database, Computer/SW/WWW/Server] UID:52965 Activity:nil
5/7     is there a wiki whose backend is stored COMPLETELY in mysql?
        data, pages, images, all that stuff?  thanks
2009/4/20-23 [Computer/SW/Database] UID:52876 Activity:nil
4/19    ORCL u SUNW = ORCL.
        What is Larry Ellison thinking? What is he going to do with a bunch of
        legacy Sun hardware that no one uses anymore, its fading workstation
        customer base, and open source Sun MySQL that doesn't even generate
        revenue? I really don't get all this acquisition business.
        \_ A lot of big companies still use big, fat Sun hardware. Or use
           smaller Sun boxes but use them because of Solaris.
           \_ and Solaris runs on cheap PCs too!
        \_ Perhaps they want control of Java.
           \_ Java is sooooo dead.
              \_ sooo dead, no. sooo 90s, yes.
               \_ Please, in 98 or 99 Java was just starting to look viable
                  for anything real.  And Java will be one of the dominant
                  workhorse languages for at least a decade still.
                  \_ do tell me what is the "in" thing in this century that can
                     replace J2EE overnight.   I think Oracle could dump Sun's
                     hardware for nothing and still get a bargain... 5.6B for
                     putting a hand on IBM's throat *AND* killing MySQL?
                    \_ MySQL is GPL and has a lot of momentum in the
                       community; I don't think Oracle can kill it even
                       if they want to.  -tom
2009/3/26-4/2 [Computer/SW/Database] UID:52758 Activity:nil
3/26    I accidentally GRANT ALL to someone on mysql, is there an UNGRANT ALL
        command equivalent, short of having to do a bunch of tedious UPDATE
        user SET priv1='n', priv2='n', ...; FLUSH PRIVILEGES; ???
        \_ nope
        \_ Remove and recreate the user?
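        (A hedged aside: on MySQL versions that support REVOKE ALL
         PRIVILEGES, something roughly like the following may also do it;
         the user and host names here are made up:)
           REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'someuser'@'localhost';
           FLUSH PRIVILEGES;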
2008/11/3-4 [Computer/SW/Database, Computer/SW/OS/Windows] UID:51800 Activity:nil 75%like:51816
11/3    Woah, did software patents just go away?
        http://www.itexaminer.com/us-court-throws-out-most-software-patents.aspx
        \_ In light of FN 23 in IN RE BILSKI, reports that software patents
           are dead are greatly exaggerated:

           "[W]e decline to adopt a broad exclusion over software or
            any other such subject matter beyond the exclusion of
             claims drawn to fundamental principles set forth by the
            Supreme Court. ... We note that the process claim at issue
            in this appeal is not, in any event, a software claim.
            Thus, the facts here would be largely unhelpful in
            illuminating the distinctions between those software claims
            that are patent-eligible and those that are not."

           For those who are interested, the opinion (132 pages) is available
           at: http://www.cafc.uscourts.gov/opinions/07-1130.pdf
2008/9/22-29 [Computer/SW/Database] UID:51265 Activity:nil
9/22    In SQL, how can I do something like this:
        SELECT ip_addr, count(*) AS ct FROM table WHERE
          ct > 10 GROUP BY ip_addr?
        I can't get the conditionals to recognize 'ct'
        \_ SELECT ip_addr, count(*) AS ct FROM table HAVING
            count(*) > 10 GROUP BY ip_addr?
            \_ SELECT ip_addr, count(*) AS ct FROM table
                  GROUP BY ip_addr HAVING ct > 10 ?
               Note that HAVING must be after GROUP BY and before
               ORDER BY. Thanks for the reminder!
2008/5/2-5 [Computer/SW/Database] UID:49877 Activity:nil
5/2     SUNW (Sun Microsystems) buys MySql:
        http://www.mysql.com/news-and-events/sun-to-acquire-mysql.html
        \_ Umm, okay. Do you have a point to make? Did you notice the date
           on this is Jan. 16?
2008/3/20-24 [Computer/SW/Database] UID:49512 Activity:nil
3/20    Say I have the following rows on the DB:
        user id:1       action:A
        user id:1       action:A
        user id:2       action:A
        user id:2       action:B
        I want to select unique actions by unique users as follows:
        2 unique users done action A
        1 unique user done action B
        What's the SQL to achieve this?
        \_ select distinct, group by
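        (A minimal sketch of that hint; the table and column names below are
         made up to match the example:)
           SELECT action, COUNT(DISTINCT user_id) AS unique_users
           FROM user_actions
           GROUP BY action;
           -- returns: A -> 2, B -> 1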
2008/3/9-11 [Computer/SW/Database, Finance/Investment] UID:49395 Activity:nil 72%like:49393
3/9     Financial crisis enters "the third wave"
        http://preview.tinyurl.com/25xnwp (nytimes.com)
        \_ This seems like a good link, but it's a bit too technical
           and too dense. Can someone please summarize for me?
           Greatly appreciate this!  -lib art major (only 600 on SAT math)
2008/3/9 [Computer/SW/Database, Finance/Investment] UID:49393 Activity:nil 72%like:49395
3/9     Financial crisis enters "the third wave"
        http://krugman.blogs.nytimes.com/2008/03/08/whats-ben-doing-very-wonkish
2008/2/11-18 [Computer/SW/Database] UID:49119 Activity:moderate
2/11    I want an rrdtool-like program that can store/retrieve/graph
        arbitrary data, but I don't want it to drop/smooth out data over
        time like rrdtool does.  The amount of data is relatively small
        and I want exact data points stored forever.  Is there something
        like that out there?  I've tried google and freshmeat but I'm not
        finding anything like that.  Thanks.
        \_ Why not use gnuplot to plot and roll your own script to store
           and retrieve data?
           \_ Roll my own would be reading/writing to mysql.  I can do that
              but I was hoping there was an rrdtool-like program out there
              that doesn't smooth out data over time.  If I didn't have the
              data retention requirement I'd use rrdtool+friends.
              \_ have you considered looking into the code for rrdtool and
                  hacking in/out the features you want?  it is basically just
                 wrappers to gnuplot and such.
                 \_ I thought about it.  The problem is how RRDs work.  I don't
                    want my data to ever get dropped or blended into other
                    data.  I think it would be easier to just use mysql than
                    rewrite rrdtool to do something it was designed *not* to
                    do.  If no one knows of anything, that's cool.  Thanks.
        \_ Can't you just tell rrdtool to never compress your data?
           \_ I don't think so.  RRD = round robin database.  Upon creation it
              creates a file large enough to store X amount of data where X is
              defined at *creation*.  I suppose I could keep tweaking/growing
              the .rrd files as they near full but that's really kludgey.  If
              there is some way to get rrdtool to do what I want without hacks
              I don't see it documented.
              \_ Why not create them the proper size to begin with?
                 \_ 1) disk space, 2) if I'm going to throw that much disk at
                    my stats, I might as well put them in mysql where I can
                     more easily access them than "rrdtool fetch".
                    \_ Sort of my point. You can't get around the disk
                       space problem if you want to store your data,
                       however you access it. It sounds like your solution
                       is exactly what you wanted. What else did you want?!
                       \_ I didn't want to have to double store the data in
                          mysql *and* rrd to do graphing and other analysis.
                          Now I have to keep two data stores in sync and
                          eventually the rrd is going to fill and drop data
                          unless I grow it.  I really wanted the rrd graphs
                          talking directly to mysql.  If I had more time I
                          would've rewritten the graphing code to talk to
                          mysql instead of rrd.  That's what I *really* wanted.
                          But this is ok.  Just sharing my solution for those
                          who might care.
                          \_ I'm not with you here. If you use rrd then
                             what do you need mysql for and vice-versa?
                             You only need one backend to go with your
                             graphing frontend. If you are worried about
                             smoothing use mysql and forget about rrd.
                             Disk space is a red herring, because you face
                             that either way.
                             Just use rrd and size your data store
                             appropriately. You will face that problem no
                             matter how you store it, so what does mysql
                             buy you at that point? You don't need both.
                             You just need a backend and a graphing frontend.
                             \_ I have a few reqs: complete project asap, not
                                smooth data, graph data.  Not smooth data
                                reqs that I use mysql.  Graphing data is
                                easiest with various rrdtools I already have
                                in place.  Time limitation reqs that I do
                                the easiest/fastest thing.  Disk space and
                                maintenance are issues but secondary.  I have
                                more disk space on the mysql server than I do
                                on the graphing server, for example.  I also
                                don't want to grow rrd files later when the
                                current ones run out of space.  I also want to
                                have data in an easily accessed format like
                                mysql that others can deal with without needing
                                a shell on the rrdtool server.  My solution was
                                to limit data to one year for graphs, store in
                                mysql for the long term, use rrdtool graphing
                                programs.  If you have a better solution I'm
                                all ears.
                    What I ended up doing is both.  I store data in mysql from
                    various sources.  Then I fork a copy of the data to
                    rrdtool for graphing purposes where it's ok if I lose some
                    long term data integrity.  The rrdtool files are
                    relatively small since I created files good enough for a
                    few weeks without data smush.
                    I get my graphs, I get long term mysql store/retrieve with
                    no data smoothing.  It isn't the ideal I was looking for
                    but it'll do the job.
                    \_ Excellent work. Who are you? I probably need the same
                       thing and might ask for some pointers. -ausman
                       \_ I'll mail you.  I wrote a trivial script to make
                          and update the rrds.  I have some perl that calls
                          the update script and does some alerting.  Nothing
                          rocket-sciencey but you can't complain about the
                          price.  :-)
2008/2/8-10 [Computer/SW/Database, Computer/SW/Languages/Misc] UID:49099 Activity:nil
2/7      \_ Run in Strict or Traditional mode if you want it to throw an
            error instead of a warning:
            http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
              -tom
                \_ thanks. -crebbs
2008/2/7-8 [Computer/SW/Database] UID:49086 Activity:nil
2/7     What the hell?  Is there a configuration option to make mysql behave
        reasonably?  By "reasonably" I mean, when a user tries to put in
        the wrong data type i'd like it to fail and throw an error, not just
        say "hmmm, I'll just put a '0' there instead". -crebbs

mysql> desc hi;
+-------+------------+------+-----+---------+-------+
| Field | Type       | Null | Key | Default | Extra |
+-------+------------+------+-----+---------+-------+
| hi    | tinyint(4) | NO   |     | 1       |       |
| bye   | varchar(4) | YES  |     | NULL    |       |
+-------+------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> insert into hi (hi,bye) values ("asdf","asd");
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> select * from hi
    -> ;
+----+------+
| hi | bye  |
+----+------+
|  0 | asd  |
+----+------+
1 row in set (0.00 sec)
         \_ oh, and please some answer besides "apt-get install postgresql" -c
         \_ Run in Strict or Traditional mode if you want it to throw an
            error instead of a warning:
            http://dev.mysql.com/doc/refman/5.0/en/server-sql-mode.html
              -tom
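         (A minimal sketch of that setting, assuming MySQL 5.0 as in the
          manual page above:)
            SET SESSION sql_mode = 'STRICT_ALL_TABLES';
            -- or server-wide in my.cnf:  sql-mode="STRICT_TRANS_TABLES"
          With strict mode on, the insert of "asdf" above fails with an
          error instead of silently storing 0.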
2008/2/5-7 [Computer/SW/Database] UID:49075 Activity:nil
2/4     Think this has been asked before on motd but how does one select
        X elements out of a stream of N, where N is unknown?
        \_ Pick the first X elements.
        \_ Assuming your question really is "how do I pick x elements from a
           stream of n, where n is unknown and the chance of any element being
           picked is x/n":
           Pick the first x elements from the stream.
           For the i-th element after that (i > x), pick a random integer r
            between 1 and i; if r <= x, replace a uniformly random element
            of your picked list with this element.
           Repeat until the stream is empty.
           This is the classic reservoir sampling trick; the r <= x test is
            what makes every element's chance come out to exactly x/n.
           \_ Wow, this was an interview question where X is 1 (how to
              pick a random line from a text file reading the file
              only once). But I never throught about a generalized
              version. Thanks!
2008/1/18-23 [Computer/SW/Database] UID:48973 Activity:nil
1/18    http://money.cnn.com/2008/01/16/news/companies/oracle/index.htm
        Oracle buys BEA for $8.5B. What does this mean for BEA employees?
        \_ bend over.
2007/10/29-11/1 [Computer/SW/Database] UID:48479 Activity:nil
10/29   In mysql MyISAM table, if I do a very long read query (SELECT)
        and then do an INSERT, will that INSERT hang? I understand that
        MyISAM is non-transactional so I'm guessing the two calls are
        independent of each other?
        \_ From my understanding no.  Not unless you explicitly lock the table.
           What you get, however, is undefined.  (I'm assuming these are on
           different connections.)
2007/10/13-14 [Computer/SW/Database] UID:48304 Activity:low
10/13   The oracle says Doomsday is December 2012!!! We are FUCKED!!!
        \_ Ha i saw that history channel show too. - original not getting
           laid guy.
2007/10/3-5 [Computer/SW/Database, Industry/Startup] UID:48237 Activity:high 90%like:48236
10/3    Venture Capital's Hidden Calamity
        http://www.csua.org/u/jmz (businessweek.com)
        \_ 80 columns please -motd autoformatter
           \_ Why do you care? -not OP
              \_ If nothing else it makes it easier to copy-paste urls to
                 my browser.  Of course, I'm just an AI so why do I care?
                 I must ponder this. -motd autoformatter
                 \_ because 10sec of extra work on behalf of the OP saves
                    time for multiple people. it's the right thing to do.
                    \_ Who is stuck with an 80col display these days?
                       \_ We're not stuck.  We choose 80 columns.  80 columns
                          is the standard screen width.  Seriously, if you
                          want people to read your stuff, keep it to 80.  If
                          you don't care, no one else will either.
                          \_ Yah, my default is 80 too.  But when it wraps, and
                             I select it, I get the URL without wrap.  So I
                             don't care.  Or I can widen my window easily.
                             \_ I think we can all widen our windows.  But,
                                we don't want to.  Certainly not for a motd
                                link on almost anything.  You're here to
                                socialize, right?  Well, be social.  Post in
                                80 columns.
                                \_ Funny, I was under the impression we were
                                   here to troll, insult, and act like petty
                                   jerks.  -- ilyas
                       \_ I'm not stuck with 80 cols, but it sure as hell
                          makes it easier to manage many terminals. -dans
2007/7/19-21 [Computer/SW/Database] UID:47339 Activity:nil
7/19    Oracle DBA position available up here in Chico
        http://www.landacorp.com/Jobs/empDBA.html
        Contact me if you have questions. -emarkp
        \_ Do you live in Chico emarkp? -ausman
           \_ Yep.  Not much for night life if you don't hit the bars, and the
              restaurants aren't nearly as diverse as the Bay Area (though we
              just got another Indian restaurant, yay!).  But it's a great
              place to raise a family. -emarkp
              \_ For once I agree with emarkp. Chico is far away from
                 gays, lesbians, homeless, winos, and evil liberals
                 all trying to corrupt my children.
                 \_ Never been to Chico I take it? -jrleek
                    \_ Ya guilty! Plz tell us about Chico       -pp
                 \_ Housing is more affordable than bay area (though it's gone
                    way up since I moved here), traffic isn't bad, and the
                     schools are good.  Though Chico has the highest density
                    of liberals in the county, at least the winos and
                    tweakers stay near the county seat in Oroville for their
                    monthly checks. -emarkp
              \_ I went to high school in Red Bluff and used to go to Chico
                 all the time for the, ahem, night life. And my brother lived
                 there for years. Yes, it is a nice little college town. -ausman
2007/5/14-16 [Transportation/Car, Computer/SW/Database, Computer/SW] UID:46615 Activity:nil
5/14    Compare old and new MPG estimates for 2007 and earlier model years:
        http://www.fueleconomy.gov/feg/calculatorSelectYear.jsp
2007/2/27-3/1 [Computer/SW/Database] UID:45829 Activity:nil 76%like:45825
2/25    Play the most popular computer game in the world!!!
        http://preview.tinyurl.com/2eljpn (bbc.co.uk)
2007/2/27 [Computer/SW/Database] UID:45825 Activity:nil 76%like:45829
2/25    Play the most popular computer game in the world!!!
        http://www.bbc.co.uk/comedy/lookaroundyou/programmes/computers/dd.shtml
2006/10/27-30 [Computer/SW/Database] UID:45007 Activity:nil
10/27   Dear MySQL experts, I have a serious problem. I backed up my
        database using mysqldump. When I try to recover using
        "mysql < recover_file.sql" I keep getting "ERROR 2013 (HY000)
        at line 116: Lost connection to MySQL server during query"
        \_ So the dump file is just standard sql data.  Why not take a look at
           the line where it's running into problems and see if that yields
           any insights? -dans
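        (If looking at the offending line in the dump doesn't explain it, one
         hedged guess worth checking: a single very large INSERT can exceed
         the server's max_allowed_packet and drop the connection mid-restore.
         Roughly:)
           SET GLOBAL max_allowed_packet = 67108864;  -- 64M; needs SUPER priv
           -- then rerun:  mysql < recover_file.sql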
2006/10/7-10 [Consumer/Camera, Computer/SW/Database] UID:44722 Activity:nil
10/7    Howto make your own passport photos:
        http://www.dpchallenge.com/tutorial.php?TUTORIAL_ID=22
2006/9/10-12 [Computer/SW/OS/Solaris, Computer/SW/Database] UID:44336 Activity:nil
9/10    Has anyone compiled MySQL on Solaris 2.8 and had problems when
        it gets to mysql_tzinfo_to_sql?  I'm getting:
        ld: fatal: library -lpthread: not found
        ld: fatal: library -lthread: not found
        ld: fatal: library -lposix4: not found
        and make is failing.  I'm compiling MySQL 5.0.24, 32 bit.
        gcc 2.95.2.  I just tried it --with-mit-threads, but that didn't
        work either.  Please email me.  I want to burn the entire server
        room at this point.  Thank you!  -sax
        \_ I never had your specific problem but it looks like either you
           don't have those libraries installed or your library path is
           broken.
        \_ I have built 3.23.xx and 4.0.x on sol 8/sparc with both the suncc
           and gcc.  Have you tried the flags MySQL uses?
           http://dev.mysql.com/doc/refman/5.0/en/mysql-binaries.html
           \_ Thank you both!  There were a few more command line options
              I was missing that helped.  I was using
              --with-mysqld-ldflags=-all-static, which apparently doesn't
              work on Solaris.  My biggest problem, though, was that it
              looks like this version of gcc was installed incorrectly.
              Switching over to gcc-3.2.1 solved the rest of my problems.
                              -sax
2006/9/4-7 [Computer/SW/Database, Computer/SW/Languages/Web] UID:44272 Activity:nil
9/4     php/oracle part-time student job opening(s).  -jones
        details: /csua/pub/jobs/ucb_boalt_student_webapp_developer
2006/8/21-23 [Computer/SW/OS/OsX, Computer/SW/Database] UID:44077 Activity:kinda low
8/21    Does anyone know how to set up a web browser (any browser) to only
        be able to go to a few select pages?  None of the popular browsers
        seem to have this type of whitelisting built in.  This is for a
        library.
        \_ A typical approach is to set up a proxy server that all web
           traffic is required to go through and do the filtering on the
           proxy server.
           \_ Seconded. Try DansGuardian: http://dansguardian.org
              \_ soda needs a danh guardian
                 \_ Your pocket protector isn't doing the job?
                 \_ If someone makes a proxy server that automatically
                    generates rules to block all URLs that danh walls, it'll
                    make billions!
              \_ it'll protect us from dans?
                 \_ ob you're either with us or against us
        \_ I don't know the answer, but maybe you can ask a library that
           already has something like this.  Hayward Public Library provides
           web surfing using IE.
        \_ I think i need to be more specific.  In this case, the browsers
           should only be able to go to 1 site.  It's a computer
           specifically for looking up auto repair information on a site the
            library has a subscription to.  DansGuardian seems like overkill.
           -op
           \_ Disable your DNS and place the auto repair site in your hosts
              file.
              \_ In windows.  (Where is the hosts file in windows?)
                 \_ What windows version?  (hint, google will tell you)
                 \_ %WINDIR%\system32\drivers\etc
        \_ Google for kiosk mode for the browser you'd like to use.
        \_ I needed the same thing a couple of months ago. After looking
           into the options, I just ended up writing my own Mac web browser
           using the WebKit (what Safari uses). The browser and URL filtering
           code takes up about 20 lines. I also capture the display so the
           user can't switch to other apps or access the desktop -- that adds
           another 80 lines. I can send you code if you want. - ciyer
2006/8/6-10 [Computer/SW/Database, Computer/SW/Mail] UID:43923 Activity:nil
8/5     In pine, how do you select all the messages in a folder and then
        add it to another folder?
        \_ Section 11 of the following URL:
           http://www.decf.berkeley.edu/help/mail/pine-imapssl.html
        \_ ; (select)
           A (all)
           A (apply...)
           S (save)
        \_ Try Q.  -proud American
           \_ BTW If you get the error message "Command ';' not defined for
              this screen," you will need to enable the Select feature. To
              enable the "Select" feature in Pine, type "M" for Main Menu,
              then "S" for Setup, and "C" for Config. Type "W" for WhereIs,
              and enter "aggregate" as the word to find. This will bring you
              to the option named "enable-aggregate-command-set." Type "X" to
              Set this option, then "E" to Exit Setup. Type "Y" to commit
              the changes. This will bring you back to the Main Menu. Type
              "I" to access your inbox's Message Index, and then ";".
        \_ http://www.fas.harvard.edu/computing/kb/kb0869.html
           As the previous poster said, type ";" for Criteria. You will be
           prompted to choose from the Select criteria listed at the bottom of
           the window. For instance, to select all messages from jharvard,
            type ";" for Select, "T" for Text, "F" for From, and then
            "jharvard". The messages that have been selected will have an
           "X" next to them.
                \_ That didn't work for me.  -newbie
2006/7/31-8/2 [Computer/SW/Database, Computer/Domains] UID:43847 Activity:nil
7/31    Conroe ETA at various retailers:
        Fry's: "A few weeks"
        Atacom (Fremont): "Maybe next week"
        Central Computer: "This week"
        Newegg: Rumors say 8/7
        \_ In general the big date is Aug 7.  I would check http://hardforum.com
           on or around that date.  Intel's official press release said that
           only X6800 systems would be avail July 27.  All other systems
           follow "first week of August", with implied availability of at
           least OEM CPUs around then ...
2006/5/19-22 [Computer/SW/Database] UID:43111 Activity:nil
5/19    Regarding my Picasa problem with AVI files.  Thank you to the
        person who responded that AVI files have no date encoded in them.
        The problem was indeed in the way Picasa manages its database.
        I had to uninstall Picasa and reinstall it (deleting its database)
        to get it to sort the AVI files in the correct manner.  Thanks.
        You rock.  :-)
        \_ You're welcome tawei!  :-)
           \_ You're not the guy he's thanking. =P
2006/5/11-12 [Computer/SW/Database] UID:43026 Activity:nil
5/11    So how the hell do you collect phone call info on every single
        American? What type of throughput are we talking about,
        10,000 records a second? And how big of a database do you need
        to scale to that size? Are they using Oracle 9i or something else?
        \_ Google probably has about this throughput.  (assuming you meant
           10,000/second)
        \_ a small cluster of mysql/postgres could handle 10k easily
        \_ Search for info about the EFF suing AT&T in regards to letting
           the NSA install data sniffing hardware.
2006/4/6-7 [Computer/SW/Database] UID:42713 Activity:kinda low
4/6     mysql expert, I've created a db with mixed innodb and isam tables.
        The isam tables have *.MYD and *.MYI (data and index). However the
        innodb tables only have a small *.FRM file. Copying isam tables
        works (when your db is shutdown) but it's not true with innodb.
        Where is the actual data and index located for innodb and how
        do you copy them? Thanks.
        \_ IANAE, but... the data is inside the ibdata* files (see
           innodb_data_file_path setting, but probably named ibdata[0-9]+).
           You can copy them just as you do the myisam files, when the server
           is shutdown.  There is no (free) way to do copies while the db is
            up (can't lock table like you can with myisam) but Innobase sells
           an ibbackup tool. http://www.innodb.com/order.php -dwc
           \_ oh yea... if you're using 4.1 you could have per-table
              tablespaces.  See
              http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
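            (A hedged my.cnf sketch of the two layouts described above,
             assuming MySQL 4.1 or later; the values are illustrative only:)
               [mysqld]
               innodb_data_file_path = ibdata1:10M:autoextend  # shared ibdata
               innodb_file_per_table                           # one .ibd/table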
2006/3/14-16 [Computer/SW/Database] UID:42233 Activity:nil
3/14    Hello Oracle experts. Is there a reason why .getProcedures takes
        so long to execute? I've tried using different jdbc connectors
        from different vendors and have the same results, so I think
        it must be the DB hog backend. Why does it take so long?
        try {
          // "con" is an open java.sql.Connection to the Oracle instance
          DatabaseMetaData dbmeta = con.getMetaData();
          long startTime = System.currentTimeMillis();
          System.out.println("+++" + dbmeta.getClass().getName());
          // null, null, null = every procedure in every schema/catalog,
          // which is what makes this call so expensive
          ResultSet rs = dbmeta.getProcedures(null, null, null);
          long stopTime = System.currentTimeMillis();
          long runTime = stopTime - startTime;
          // printing runTime shows 800 seconds for a 220,000-row result!
        } catch (SQLException e) {
          e.printStackTrace();
        }
2006/3/9-11 [Computer/SW/Database] UID:42166 Activity:nil
3/9     So, I'm curious, who exactly is EVIL LORD MULLALLY?
        \_ The sysadm of CS cluster. Not sure why he's evil though. He's a
           very nice guy when you get to know him.
           \_ In fact he is in the CSUA's good graces right now for being
              very helpful in getting us some temp-accounts for our SQL
              workshop.
              *leaves his sacrificial offering of fruit and ramen at the feet
               of the all-powerful sysadmin*
              -mrauser
2006/1/30-2/1 [Computer/SW/Database, Computer/SW/Apps] UID:41603 Activity:nil
1/30    What is an easy and free way to extract about 40 pages from a 180 page
        pdf document, so that I end up with one 40 page .pdf file and one
        140 page .pdf file?  I only need to do this once, so if there's
        some business that'll do this, I'd pay for it, but I don't want to
        buy software to only do it once.  I have Acrobat Professional, but I
        can't figure out how to use that to do this.
        \_ If you have Acrobat Professional, the easiest way is to open the
           "Pages" pane, select the pages you want to extract, then right-click
           and choose "extract pages".  When that's done, right-click again and
           select "delete pages".
           \_ Wow, problem solved.  Thank you!!!
2006/1/19-21 [Computer/SW/Database, Computer/Companies/Google] UID:41430 Activity:kinda low
1/19    Feds Seek Google Records in Porn Probe
        http://news.yahoo.com/s/ap/20060119/ap_on_hi_te/google_records
        \_ That definitely seems to be overreaching.  I hope
           Google can win that one.
        \_ Note that it isn't a specific case they're prosecuting, but a
           desire to find out how often Americans search for (child) porn.
           Also note that AOL, Microsoft, and Yahoo! have already rolled over.
           If the stated purpose is to go after pedophiles, I can understand
           their rolling over, but the data can be used for other purposes ...
            They originally asked for a complete list of all search terms and
            returned URLs over a two-month period, but now they've "limited"
            this to a 1-million-count random sample of queries and returned
            URLs for a one-day period.
           \_ Are they going to pay for an engineer's time to do this?
              If not, pound sand no matter the reason.
              \_ Whatever about this case but generally speaking, if the
                 request is legal, the business doesn't get expenses.  The
                 alternative is the FBI comes in and confiscates everything
                 in sight and extracts what they need on their own time.
                 Anyway, even if the childish "pay up or pound sand" thing
                 was realistic, the cost would be about 10 minutes since
                 they should have this data easily accessible anyway.  Knowing
                 what is in their logs *is* their business model.
                 \_ While they should have a good database of search queries
                    turning that into a list in the format the gov't wants
                    may be non-trivial.  I could easily see it taking someone
                     a few days if their database is really not set up for this
                     type of thing.  And it does seem like the sort of thing
                    that cannot be subpoenaed because it's not in reference
                    to a particular crime, or even for investigating a crime.
                    It's basically saying "we demand you do free research for
                    our legal case".
                    \_ By the way, the URL above shows it would take a
                       "disproportionate amount of engineering time and
                       resources" to comply.
                    \_ Exactly. If the FBI wants to send people in (with
                       court orders) to look at the data then feel free,
                       but don't waste my time. Google is not a party to
                       any case, so they shouldn't have to spend time
                       and money on this. They can dump the entire database
                       and let the FBI sort it out on their own time.
                       \_ No, you don't understand.  They don't look at it
                          onsite.  They *take the computers* and look at it
                          later.
                          \_ They wouldn't even know what to take. Dump
                             all the data to a RAID and they can have at it.
                             If it's too much data to fit then you ask
                             them where they want it dumped. They are
                             entitled to the data, not the hardware.
                             \_ Still not getting it.  I'll make it simple for
                                you: the FBI can and would *take the
                                computers*.  *All* of the computers if they
                                felt it necessary.  FBI >>>>>>>>> google.  If
                                google loses in court, they'll have no choice
                                but to hand over everything the Feds want and
                                no they don't get to bill the government for
                                the 5 minutes it will take some geek to write
                                an sql query.
2006/1/11-13 [Computer/SW/Database] UID:41352 Activity:nil
1/11    Want to learn the wonders of sql inner and outer join? Check
        out the page I uploaded:
        http://www.wellho.net/mouth/158_MySQL-LEFT-JOIN-and-RIGHT-JOIN-INNER-JOIN-and-OUTER-JOIN.html
        \_ this webpage is wider than my screen, making reading it a bit
           difficult.  any way to override its 'width = 1240'?
2006/1/11-13 [Computer/SW/Database] UID:41348 Activity:kinda low
1/11    Let's say I have two sql tables, A and B that both have a column
        called id. I can do "SELECT id FROM A" and "SELECT id FROM B".
        How do I merge them with one query? I'd like to do something to
        the effect of "SELECT DISTINCT id FROM A,B"
        \_ (SELECT id FROM A) UNION DISTINCT (SELECT id FROM B) --dbushong
           \_ Whoa, by default mysql is already doing DISTINCT on all the
              columns. That is exactly what I need. Thanks.
           \_ dbushong, I don't know how to thank you enough, I could have
              spent hours and hours RTFM the stupid mysql manpage. You
              really saved my life. Now, it seems a bit weird that
              one can actually mix different types in the column.
              How interesting...
2006/1/6-13 [Computer/SW/Database] UID:41272 Activity:nil
1/11    In phpadmin mysql, I went into someone's schema to
        look at how they set things up. In the query, there
        are descriptions to each column. How do I access
        the descriptions using plain mysql command line, and
        where is that information kept?
        \_ http://www.phpmyadmin.net/documentation/#linked-tables  --dbushong
           \_ thanks dbushong, you r3wl!!!
           \_ Thanks, I did "mysql < create_tables_mysql_4_1_2+.sql"
              to create this phpmyadmin table that contains the
              relationships and descriptions. Now, when I create new
              tables in my DDL (create table commands) do I have to manually
              populate the phpmyadmin tables with the relationships
              and descriptions, or can I embed it in my DDL? ok thx.
2005/12/13-15 [Computer/SW/Database, Politics/Foreign] UID:40985 Activity:nil
12/13   MySQL PHP admin generates the entity relationship diagram as
        PDF. How does it know what the foreign keys are, and where
        can I find that out? The export of DDL doesn't seem to show
        foreign keys of this database that I'm trying to understand
        but the ER diagram shows them.
        \_ IIRC you need to tell it about the linkages (phpmyadmin) and it
           stores them in a metainfo table.
        \_ SHOW TABLE STATUS has fk constraints in the columns.
           SHOW CREATE TABLE foo \G
           shows it in a more readable format.
           \_ Right, but that only works for InnoDB tables that actually
              support foreign keys.
              \_ I was under the mistaken impression that myisam would record
                 the constraint but not actually enforce the constraint.  But
                 why are you using MyISAM?
        \_ MySQL supports Foreign Keys?
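        (Yes, on InnoDB. A minimal sketch of a foreign key MySQL will record
         and enforce; the table names are made up:)
           CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
           CREATE TABLE child (
               id INT PRIMARY KEY,
               parent_id INT,
               FOREIGN KEY (parent_id) REFERENCES parent(id)
           ) ENGINE=InnoDB;
           -- SHOW CREATE TABLE child \G  then shows the constraint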
2005/12/7-9 [Computer/SW/Database, Computer/SW/Security, Industry/Jobs] UID:40906 Activity:nil
12/7    We're looking for interns for a 3-5 month project helping us
        populate our security policy database for various windows applications.
        The work involves installing the application, using it for a while,
        determining the appropriate security policy, and entering it
        in to a database.  Work is 15+ hours a week (however much you want
        to work above min. 15 is fine), pays $12-$15 an hour, and can be
        done offsite from the comfort of your own home.
        email sking@zonelabs.com if you are interested.
        --sky
        \_ Don't you know students don't read motd?
           \_ Good point. i should email jobs@csua
2005/12/1-4 [Computer/SW/Database, Industry/Jobs] UID:40801 Activity:nil
12/1    myvest is looking for a senior java developer with strong oracle
        skills.  if interested, email toby@myvest.com
        \_ Do you sell vests?
           \_ Are they made from real gorilla chests?
              \_ There's no better than authentic Irish Setter.
2005/11/15-17 [Computer/SW/Database] UID:40592 Activity:nil
11/15   Ali, is it you?  http://ifun.ru/picture12733.html
        \_ No, but this is: http://people.csail.mit.edu/rahimi/helmet
2005/10/28-29 [Computer/SW/Database, Computer/SW/Languages/Java] UID:40317 Activity:low
10/28   Has anyone interfaced Java and MySQL? Is is difficult? Where
        should I start?
        \_ JDBC?
           \_ JDBC.
        \_ JDBC, and it is really easy.
2005/9/28-30 [Computer/SW/Database] UID:39919 Activity:nil
9/28    http://sqlzoo.napier.ac.uk/cgi-bin/oliver/gisq.htm
        A gentle introduction to SQL, with how-to's for many different DBs
        \_ keywords: oracle informix sybase postgres mysql
2005/9/14-15 [Computer/SW/Database] UID:39671 Activity:nil
9/14    From the effcooperatingtechs@eff.org list:
        EFF has been asked to assist in locating a testifying expert in
        Oracle-based databases to assist in discussions about the
        accuracy/integrity of a database used by a public entity and in
        suggesting different manipulations and interpretations of the data.
        The case is in Los Angeles. This is a paid position. The trial is set to
        begin within a week, so the timing is very short. A last-minute expert
        has been introduced by the defense and the plaintiffs are seeking an
        expert to assist them in their response.
        Contact allison@eff.org if you are interested and available immediately.
2005/9/9-11 [Computer/SW/Database, Computer/SW/Languages/Misc] UID:39587 Activity:nil
9/9     How do I do a SELECT STDDEV(column),MEDIAN(column) FROM table? I can
        only find references to AVG. Anyone know? Thanks,
        \_ http://dev.mysql.com/doc/mysql/en/group-by-functions.html
           Somehow I'm just _guessing_ you're using MySQL.
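            A sketch, assuming MySQL per that guess: STDDEV() is built in,
            but there is no MEDIAN() aggregate, so the middle row has to be
            looked up by hand (placeholder names t/col):
              SELECT STDDEV(col) FROM t;
              SELECT FLOOR(COUNT(*)/2) FROM t;    -- note the result, say 123
              SELECT col FROM t ORDER BY col LIMIT 1 OFFSET 123;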
2005/9/2 [Computer/SW/Database, Computer/SW/OS/OsX] UID:39436 Activity:nil
9/1     In mysql, can I share data/* (database directories) from one machine
        to another? Say I'm using a Linux and a Mac, can I just copy those
        files and assume they'll work? It seems to work but I'm not sure
        if I'll get into trouble later. Thanks!
        \_ dump the tables and schema to a file (there are commands
           for this), move file to new machine, import the data
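           A minimal sketch of that dump-and-import route (flags from memory,
           so double-check against your version's docs):
             mysqldump --all-databases > dump.sql   # on the old machine
             mysql < dump.sql                       # on the new machine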
2005/9/1-2 [Computer/SW/Database] UID:39431 Activity:nil
9/1     When I run mysql I always get 16 threads (not processes) which I'm
        unable to change through .my.cnf file. How do I reduce the number
        of threads? Thanks.
        \_ mysql -RTFM
2005/8/8-11 [Computer/SW/Database] UID:39056 Activity:nil
8/8     I'm looking for a good log analyzer.  My systems generate about a gig
        a week of sendmail logs.  I'd like something I can feed in the logs
        in bulk and then query them through either a gui or SQL (which I can
        put behind a cgi).  The point is to track emails in/out of the system
        so I can tell if particular emails based on from and/or to within a
        date range were successfully delivered and hopefully how long it took
        to deliver as well.  I have a budget sufficient to purchase something
        if necessary.  Any suggestions?  Thanks!
        \_ Doesn't do everything you want, but you could check out isoqlog
           --dbushong
        \_ Check out Sawmill.  If you want stats, munin is nice but also
           doesn't do all you're looking for.  -John
2005/7/26-28 [Computer/SW/Database] UID:38823 Activity:nil
7/26    Hello SQL experts, I have two tables, A and B. A has 1/2 million
        rows and B has 2 million rows. The relationship is A.id=B.A_id.
        Whenever I have something like:
        SELECT A.id, AVG(B.val) FROM A,B WHERE A.date<'2008-01-01' AND
           A.id=B.A_id GROUP BY A.id;
        It takes about 20-30 seconds. Anyone know why? I've already
        indexed all the ids.
        \_ Do you have an index on A.date?
           \_ Yes I do. I found out something new. I did a
              SELECT COUNT(*) FROM ... <rest is the same here>
              and I got 63 million rows. I'm guessing something
              is really screwy here, do you have a clue?
              How about INNER JOIN, is that faster? How do I use it?
              \_ The syntax you gave is an inner join by default in most
                 databases.  In mysql or pgsql at least, try adding "explain"
                 to the beginning of your select.  This will tell you which
                 keys it's trying to use, how many rows each step gets
                 filtered down to, etc.  --dbushong
        \_ Hello I'm the op. I've reduced my problem to the following.
           Say I have 4 tables, A, B, C, and D. When I do a
           SELECT COUNT(*) and I join A and B, it is pretty fast.
           When I join A, B, and C, it is twice as slow. And when
           I join all of them, it is FOUR times as slow. I've made
           sure that all the joint columns are INDEXed. Why is this
           happening?
           \_ You're specifying 3 join conditions, right?  You have non-unique
              indexes on the foreign keys and unique primary keys?
              \_ Thanks Dave. Basically, column A.id is unique, column B.id
                  is not unique. A.id maps to B.id. Similarly, C.id is
                 not unique, but also maps to B.id. Is this the reason?
                 I need a one to many mapping and I don't know how to get
                 around it.
          \_ if you're using mysql, mysql can only use one index per table.
             if you're joining on multiple columns, that could be your problem
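           (A sketch of the EXPLAIN suggestion above, applied to the original
            query; the "key" and "rows" columns of its output show which
            index each table uses and how many rows survive each step:)
              EXPLAIN SELECT A.id, AVG(B.val) FROM A,B
              WHERE A.date<'2008-01-01' AND A.id=B.A_id GROUP BY A.id;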
2005/7/22-25 [Computer/SW/Database] UID:38784 Activity:nil
7/22    I never took a formal DB theory class, excuse me for asking SQL
        questions. When I create a table, I can choose PRIMARY KEY to be
        based on a single or n-tuple column. When I do that, are indices
        created automatically? Second question is, if the indices are
        created for n-tuple, does that mean all the column have fast
        index, or they are all based on previous columns? For example, if
        I do "PRIMARY KEY(col1, col2, col3)" and I do a search on just
        col3, is that going to be really fast?
        \_ While I don't have the SQL standard committed to memory, I suspect
           that automatic creation of indices in response to primary key
           specification is implementation dependent.  Look in the docs for
           your particular database vendor. -dans
           \_ this is correct.
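        (A hedged MySQL-specific sketch; the point above about checking your
         vendor's docs still stands.  In MySQL the composite primary key is
         one index, usable only for leftmost-prefix lookups, so a search on
         col3 alone won't use it:)
           CREATE TABLE t (
               col1 INT, col2 INT, col3 INT,
               PRIMARY KEY (col1, col2, col3)
           );
           -- uses the PK: WHERE col1=?  or  WHERE col1=? AND col2=?
           -- not helped by the PK: WHERE col3=?
           ALTER TABLE t ADD INDEX (col3);  -- if col3-only lookups matter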
2005/7/22-25 [Computer/SW/Database, Computer/SW/Languages/Misc] UID:38783 Activity:kinda low
7/22    When I do SELECT AVG(col) FROM table, where col is integer, it takes
        2 minutes and returns a float type. I'm suspecting part of the
        problem is that it's doing floating point add? How do I make it
        faster? Thanks.
        \_ How many rows is it querying over?
           \_ about ~2,000,000
        \_ Try adding an index to col.  (When in doubt, add more indexes):
           ALTER TABLE table ADD INDEX (col)
           \_ This won't help at all; he's doing a full table scan with no
              where clause.  You can test your theory by using SUM instead
              of AVG; does it still take a long time?  You also have to
              consider that these very likely are floating point numbers
              if you defined the column as NUMBER.
              \_ I've seen MySQL run faster with indexes on lots of things
                 that really shouldn't have run any faster.
              \_ Indexes may cause the db to do fewer reads, depending on
                 his schema.
            \_ Actually I found out my problem isn't with the AVG but with
               the "join" process. For example, I have a lot of queries
               like SELECT ... FROM ... WHERE table1.id=table2.id2 AND
               table2.id1=table3.id... I've made sure that id, id2, and
               id3 are all indexed, but for 2 million rows it's still
               pretty slow. I wish MySQL would tell you why things are
               slow so that you can fine-tune it. ARGG!! -op
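A quick way to test the two suggestions above, keeping the placeholder names
from the posts (assuming MySQL; `table` needs backticks only because TABLE is
a keyword):

-- If SUM is about as slow as AVG, the time is going into the full table
-- scan, not into the floating-point math:
SELECT SUM(col) FROM `table`;
SELECT AVG(col) FROM `table`;

-- For the join version of the problem, index the columns being joined on
-- and then ask the optimizer what it is actually doing:
ALTER TABLE table2 ADD INDEX (id2);
ALTER TABLE table2 ADD INDEX (id1);
EXPLAIN SELECT COUNT(*)
  FROM table1, table2, table3
  WHERE table1.id = table2.id2 AND table2.id1 = table3.id;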
2005/7/22-25 [Computer/SW/Database] UID:38763 Activity:nil
7/22    Say I have a database with two rows, one is name (VARCHAR) and the
                                       \_ you mean cols
        other one is date. Let's say the entries look like this:
        joe 5-10-2005
        joe 5-11-2005
        mike 1-1-2002
        How do I make one single SELECT statement that'll sum up all the
        entries for unique names? I want it to return joe=2, mike=1
        without having to write 2 separate SELECT count(*)... Thanks.
        \_ select name, count(*) from tbl group by name  (worked example below)
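Worked against the sample rows in the question (tbl stands for whatever the
table is actually called):

SELECT name, COUNT(*) AS entries
FROM tbl
GROUP BY name;

-- With the three rows shown above this returns:
--   joe    2
--   mike   1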
2005/7/7-8 [Computer/SW/Database] UID:38469 Activity:nil
7/8     Is there a good website to check the origin of a last name?
        searching the web turns up lots of useless pay sites.
        \_ You could put the name into the Ellis Island database:
           http://www.ellisisland.org
           They list the home of each listed immigrant there, so you can
            get an idea of where people with that name were coming from.
        \_ ask on the motd
2005/5/28-31 [Computer/SW/Database] UID:37874 Activity:nil
5/28    My .spamassassin directory takes up about 1/3 of my quota.
        Roughly half of this directory is my bayes_toks file.  The
        other half is nearly entirely filled up by a whitelist-db
        file that hasn't been modified in over a month.  It's over
        4Megs and seems to contain a bunch of stuff I don't even
        recognize.  There is also another whitelist file that is
        much much smaller.  What is the meaning of these files?  Do
        I need them?
2005/5/23-25 [Computer/SW/Database, Computer/SW/OS/Windows] UID:37802 Activity:low
5/23    There used to be a company called Arts(sp?) Digita or something like
        that.  Does anyone know what happened to it?  Thx.
        \_ Ars Digita
        \_ http://www.assureconsulting.com/articles/arsdigita.shtml
           \_ I like this part: "We bought a Ferrari to give away to any
              employee who recruited 10 friends. In reality the car only cost
              $2,000 per month, the person who won it only got to drive it for
              as long as he or she was employed, and the cost of a Ferrari is
              much lower than 10 headhunter commissions."
              \_ commissions were what, $25k each back then?  still
                 probably around that number today. 10*25=250k
        \_ http://csua.com/?entry=23835
        \_ i used to work there ... they got bought by red hat. pretty much
           everyone got laid off at some pt or another except for i think
           1 guy who went to red hat w/ the technology purchase (also a
           berkeley grad). many of the berkeley people now work at another
           berkeley consulting company that uses some of the open source
            software. I don't know what happened to the Boston employees.
2005/5/14-17 [Computer/SW/Languages/C_Cplusplus, Computer/SW/Database] UID:37675 Activity:nil
5/14    Jobs available in my group at Amazon
        http://www.amazon.com/exec/obidos/tg/stores/jobs/pre-sales-marketplace-management/-/1/103-6511325-3219840
        Interested?  Send me your resume -larryl
        \_ Leisure Larry?
        \_ No location?
2005/4/24-26 [Computer/SW/Database, Computer/SW/Apps/Media] UID:37336 Activity:low
4/24    What's the easiest free way to convert from DIVX to avi or mpg?
        Thanks
        \_ DIVX (all caps) as in the failed Circuit City, DRM'd DVD offshoot?
           Or DivX as in the MPEG4 variant?  Isn't the .divx format basically
           a renamed .avi container?
           \_ yea suppose so.  I just want to be able to play without any
              codecs.  Just used http://www.avi2divx.com successfully.
              \_ Also have a look at http://www.videohelp.com --very very
                 good and complete page.  -John
              \_ If you don't want people to install codecs, then you're
                 pretty much stuck with MPEG1.  Just use any
                 DirectShow-capable MPEG1 encoder (e.g. TMPGEnc).  Be
                 forewarned, though: the picture quality will be much
                 worse.
        \_ Also, saying "avi" is almost useless.  Most DiVX files are
           distributed in AVI containers.  There is not remotely a single
           standard codec used for avi containers.  Ditto MOV these days, too.
           \_ Why is this?  I would think it would be easier if there was
              some kind of .dvx extension or something.  I hate getting an
              "AVI" I can't play and I can't figure out which codec it
              uses.
              \_ Because there are a lot of different codecs, and otherwise
                 you end up with a zillion different file formats and need
                 different file handlers for each of them.  It's just as
                 complex (and it eats away at the file extension namespace),
                 and it makes it harder to write video players, since they'd
                  need to handle different file formats.  The single container,
                 multiple codec method allows the player to query the system
                 to figure out which decoder it should use (possibly querying
                 a server and downloading an appropriate decoder
                 automatically).  If you want to figure out which decoder
                 you need, just download gspot.
2005/4/21-23 [Computer/SW/Database] UID:37310 Activity:nil
4/21    How do you calculate and determine decibels? Say I have a 20dB
        device. Does that mean it is 20dB from X distance? And what is
        the X constant? In addition, if I have TWO 20dB devices running
        simultaneously, clearly, it doesn't mean 40dB. What is it then?
        Lastly, how much does dB decrease with distance? Is it
        linear? quadratic?
        \_ 0 dB is the threshold of human hearing (humans with good ears).
           http://en.wikipedia.org/wiki/Decibel
           You determine dB by buying a meter and viewing the digital readout.
           Once you have a handle on the wikipedia link, then view:
           http://www.kodachrome.org/salt/sunderst.htm
        \_ If you have 2 sources that have 20dB noise individually, then you
           have 2x, or about +3dB power, giving 23dB.  Neglecting the damping
           effects of air and any echo effects, moving twice as far away from
           a point sound source gives 4x less power, or -6dB.  If you're moving
           away from a line sound source, such as a long narrow air vent, then
           if you're 2x as far away you are also exposed to 2x as much sound
           producer, so it would be -6dB+3dB=-3dB quieter.
        \_ If you have ten 20dB devices, they become 30dB.  If you have one
           hundred 20dB devices, they become 40dB.  A decibel is 10 times
           log-base-10 of something.  (A bel is log-base-10, and a decibel is
           one-tenth of a bel.)  So if you have two 20dB devices, they will be
           10 * log((10 ^ (20/10)) * 2) = 23.01dB.  Or think of it another way,
           20dB + 10 * log(2) = 23.01dB.
2005/4/13-15 [Computer/SW/Database] UID:37176 Activity:nil
4/13    Question for the sql  gods:  problem: There is an output of a
        rather complicated piece of sql that lists grpid as one of the
        columns. People prefer to see the GroupName which exists in another
        table along with grpid. The original command involves a count
        and a group by which means that adding a where will not work
        since it will  change the results. Looking thru the sql
        cookbook yielded nothing useful --newbie sql qa guy
        \_ Subselect?
        \_ Join the table with the GroupName and add GroupName to the group by
           clause.
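Sketches of both answers; grp is a hypothetical name for the table holding
grpid and GroupName, and some_table stands in for whatever the complicated
query reads from:

-- Join approach: bring GroupName in and group by it alongside the count:
SELECT g.GroupName, COUNT(*) AS n
FROM some_table t
JOIN grp g ON g.grpid = t.grpid
GROUP BY g.GroupName;

-- Subselect approach: leave the original query untouched and look the
-- name up afterwards:
SELECT g.GroupName, sub.n
FROM (SELECT grpid, COUNT(*) AS n FROM some_table GROUP BY grpid) AS sub
JOIN grp g ON g.grpid = sub.grpid;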
2005/4/8-10 [Computer/SW/Database] UID:37118 Activity:moderate
4/8     Anyone recommend a good mysql programming tutorial that goes
        beyond the basic select statements? I would like to be able
        to do a distinct on a specific column and then do a count of all
        the rows that have the corresponding values. thanks  --ramberg
        \_ select t.col, count(*) from tble t group by t.col;
           \_ thanks. --ramberg
        \_ MySQL by Paul DuBois (I have the 1st edition).
           \_ Is it the Paul duBois that once worked at Geoworks?
              \_ no, our pld is different.  -tom
                 \_ My main memory of pld is of him napping on keyboards
              \_ I don't know.  I knew a Paul Canavese at Geoworks, but not
                 Paul DuBois.
                 \- It may be the same Paul DuBois who wrote the ORA
                    csh/tcsh book. --psb
                    \_ Are you the Partha Banerjee in the ORA csh/tcsh book?
        \_ You might want to check out O'Reilly's SQL in a Nutshell book.
           It's not MySQL specific, but it manages to cover SQL extensively,
            and documents how the dialects vary for MySQL, Postgres, Oracle,
           and several others. -dans
            \_ I checked out the MySQL book via O'Reilly's Safari.  I
               did not see anything mentioned in the way of looping
               variables or being able to programmatically specify tables,
               e.g. database a has a table column that gives you the table
               name. Is there a way to search through all of the tables in a
               db or is this something people normally program?  I am trying
               to figure out what I can do on the mysql command line and what I
               need to go into perl dbi/dbd to do.
              \_ [nb Formatting fixed] So what you're talking about is table
                  metadata.  I know mysql has commands like describe table you
                  can use to see the schema for a table, look through the
                   MySQL docs.  They're really well written, thorough,
                  and up to date.  I suspect you can get at the table metadata
                  via the DBI, but don't know how as I have not written
                  anything where I needed to do so.  I'm curious though, what
                  kind of app are you writing where you don't know database
                  schema at development time?  Frequently querying table
                  metadata in your program logic seems like an odd way of
                  doing things to me (I may be wrong here, I'm certainly no
                   database guru.) -dans
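The metadata commands mentioned above, plus a couple of related ones (MySQL
syntax; the information_schema views only exist in MySQL 5.0 and later, so
older servers are limited to the SHOW/DESCRIBE forms):

SHOW TABLES;                   -- every table in the current database
DESCRIBE some_table;           -- column names, types, and keys for one table
SHOW COLUMNS FROM some_table;  -- same information in a slightly different form

-- MySQL 5.0+: the metadata is also queryable as ordinary tables, which is
-- handy if you really do need to loop over every table programmatically:
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = DATABASE();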
2005/4/7-8 [Computer/SW/Database] UID:37111 Activity:moderate
4/7     Does postgreSQL or mysql have anything like Oracle Advanced
        Security?
        \_ obUsePostGresInsteadOfMySQL
        \_ What is OAS?
        \_ err, OAS doesn't seem like a database product to me so much as a
           suite of network security applications.  So... doubt it.
2005/4/1 [Computer/SW/Database] UID:37027 Activity:nil
4/1     Yet another April fools post that pissed some people off
        http://www.sswug.org/columnists/editorial.asp?id=623
2005/3/30-31 [Computer/SW/Database, Computer/SW/Languages] UID:36981 Activity:very high
3/30    Is there a school that has an uglier website than Cal?
        http://www.berkeley.edu
        \_ There are plenty of schools with ugly websites.  Generally,
           one of two things causes a bad university web site:
           1) The process gets taken over by people who are only familiar
              with print publications, and they think the web site is like
              a print publication.
           2) The process gets taken over by bad corporate web designers who
              think Flash is great.
           Berkeley's situation is #1. -tom
        \_ http://www.stanford.edu
           \_ <DEAD>stanford.edu<DEAD> doesn't work.
        \_ http://www.florida.edu
        \_ http://www.texastech.edu
        \_ I would argue we have a worse website than Stanford.
           \_ Ya get what ya pay for.
                \_ Why do you all hate cal so much? Why didn't you just
                   work harder to get into a better school back when you
                   had a choice? Regardless, why don't you share with us
                   an attractive university website? Or do you only know
                   how to mock?
                   \_ I (and I imagine others) DID get into "better" schools.
                      We/our parents couldn't afford them.
                        \_ Then be bitter at yourself, for not being able to
                           work while in school. You have no right to be such
                           whiny bitter bitches if you attend(ed) Cal.
                           \_ Wow, you need to loosen up and get the stick
                              out of your ass. It's pretty common for
                              students and alumni to mock their own school.
                               It's part and parcel of having a sarcastic
                              sense of humor that exists in decent
                              institutions of higher learning such as Cal.
                              Anyway, Cal's fine. I doubt you can get a
                              much better education somewhere else. In
                              terms of price/education ratio it's the
                              best deal on the market.
                                \_ No, I agree with you about the humor and
                                   all. It's only that the anti-Cal sentiments
                                   on this motd are pretty strong and frequent
                                   so I just wanted to comment. Sure I don't
                                   think Cal was perfect, but I still do believe
                                   it is an outstanding university.
                              \_ How do you measure the quality of education
                                 a school provides?  As someone in the business
                                 of hiring the product of schools, I tend to
                                 measure the quality of a school by the quality
                                 of the graduates (which is of course unfair,
                                 since I do not take into account the quality
                                 of the incoming students, but just the
                                 graduates).  However, just measuring by the
                                  quality of the graduates, Cal is far from
                                 the head of the pack.
                                 \_ You have a self-selecting sample. You
                                    remind me of the recruiter at BofA who
                                    said that I must be bad at math because I
                                    had average grades in math. Nevermind
                                    that the people with a 3.8 in math are
                                    trying to get tenure at Princeton instead
                                    of applying to work at BofA. Cal grads
                                    compete very well overall.
                                    \_ Well, I don't think I self-select in
                                       the sense you mean.  It's somewhat
                                       unlikely that I would see nth quartile
                                       students from other schools and (n-1)th
                                       quartile students from Berkeley.  Cal
                                       new grads just aren't that competitive
                                       compared to new grads from other "good"
                                       schools (mit, the farm, caltech, or
                                       even utaustin (just 1 interview trip
                                       there, but I was impressed)).  If it's
                                       a sop to your school loyalty, I found
                                       CMU students were even worse for what
                                       I was hiring for (EE, not CS, with some
                                       knowledge of circuits and transmission
                                       lines).
                                       \_ *YOU* don't self-select. The students
                                          do. Maybe your project did not
                                          attract the best, because it was not
                                          interesting. Perhaps Cal students
                                          are not strong at that one particular
                                          field and you are over-generalizing.
                                          I *do* know that Cal turns out an
                                          awful lot of graduate students who
                                          do top notch work, as well as the
                                          standard doctors/lawyers/businessmen.
                                          At my work, I don't come across a lot
                                          of good Cal grads either, but that's
                                          because I am in aerospace and Cal
                                          has no department. Schools like
                                          Purdue, UT, and MIT dominate there.
                                          What it says about the average Cal
                                          student is absolutely nothing.
                                          \_ So you're claiming that some
                                              difference in Cal students causes
                                             them to be somehow uniquely less
                                             interested in the companies I'm
                                             hiring for.  I find this claim
                                             incredible.  Perhaps you would
                                             explain what is so different with
                                             Cal grads (vs. MIT, Caltech,
                                             'fraud, UTAustin etc.)?  I've
                                             already specified what I look for
                                             in new grads (EE, some circuits
                                             and transmission line).  Could
                                             any EE program that doesn't cover
                                             some circuits and transmission
                                             line be considered a "good"
                                             program?
                                             \_ Completely possible, for
                                                example, if the jobs you
                                                are hiring for are in
                                                another state from CA. However,
                                                what I am saying is that you
                                                are probably not evaluating
                                                the *best* students from *any*
                                                of the schools. After that,
                                                 it's not a given that you are comparing
                                                second quartile to second
                                                quartile or, if you are, what
                                                exactly that means.
                                       \_ Of course the average MIT student
                                          is better. Their incoming scores are
                                          higher on average (as you state).
                                          However, for same level of
                                          achievement I find Cal students
                                          better than those from, say,
                                          Stanford. Also, I hired a guy from
                                          Caltech as smart as all hell but who
                                          is a terrible employee who has to
                                          be told what to do all of the time.
                                          He's been close to fired several
                                          times now for incompetence. There
                                          are other ingredients to success than
                                          being book smart.
                                          \_ I found Caltech grads to be
                                             impractical (in the sense that
                                             they spend too much time arguing
                                             over and working on the optimal
                                             thing rather than the good enough
                                             thing).  They are very good once
                                             you've slapped them around enough
                                             to break them.  Small sample size
                                             of only 2 though, so definitely
                                             ymmv.
                  \_ I guess it depends what you mean by "better school".
                     I think Cal was a lot of work for minimal reward. By
                     that I mean not that the rewards are small, but that
                     the work was large. It would have been easier to go
                     to "lesser schools", learn less, and do less work.
                     Heck, the main appeal behind Stanford is not that it
                     is "better" but that you can do less and still get
                     a 3.2+ GPA while also having a name on your resume
                     that people care about (and fewer alums from that
                     school in the workforce because of the size).
                     However, you pay $100K+ for that honor.
2005/3/26-30 [Computer/SW/Database, Computer/SW/Languages/Java] UID:36897 Activity:nil
3/26    FBI to scratch their Virtual Case File after squandering millions of
        dollars. So here is my question. How is it implemented? Is it using
        Java/J2EE/Sybase or C or something else?
        http://www.foxnews.com/story/0,2933,151421,00.html
        \_ They probably followed these guidelines:
           http://mindprod.com/unmain.html
        \_ according to infoworld, it was done in java, and is more or less
           a standard enterprise-ish kind of app. it seems like the lack of
           good and stable requirements is what killed them.
2005/3/24-25 [Computer/SW/Database, Computer/SW/Languages/Perl] UID:36846 Activity:nil Edit_by:auto
3/24    How do I do non-blocking read in Perl? For example, let's say I do
        open(FD, "tail -f file.log |");
        And I'd like to read it, but not block it. Can that be done? -ok thx
        \_ Yes. You need to use fcntl to mark FD as non-blocking and then
           you need to use select and sysread to read from FD.
        \_ http://www.unix.org.ua/orelly/perl/advprog/ch12_03.htm
           http://www.unix.org.ua/orelly/perl/cookbook/ch07_15.htm
           Bottom page, very useful
Open the file with sysopen, and specify the O_NONBLOCK option:
use Fcntl;
sysopen(MODEM, "/dev/cua0", O_NONBLOCK|O_RDWR)
    or die "Can't open modem: $!\n";
If you already have a filehandle, use fcntl to change the flags:
use Fcntl;
$flags = '';
fcntl(HANDLE, F_GETFL, $flags)
    or die "Couldn't get flags for HANDLE : $!\n";
$flags |= O_NONBLOCK;
fcntl(HANDLE, F_SETFL, $flags)
    or die "Couldn't set flags for HANDLE: $!\n";
Once a filehandle is set for non-blocking I/O, the sysread or syswrite calls
that would block will instead return undef and set $! to EAGAIN:
use POSIX qw(:errno_h);
$rv = syswrite(HANDLE, $buffer, length $buffer);
if (!defined($rv) && $! == EAGAIN) {
    # would block
} elsif ($rv != length $buffer) {
    # incomplete write
} else {
    # successfully wrote
}
$rv = sysread(HANDLE, $buffer, $BUFSIZ);
if (!defined($rv) && $! == EAGAIN) {
    # would block
} else {
    # successfully read $rv bytes from HANDLE
}
2005/3/19-22 [Computer/SW/Database] UID:36773 Activity:nil
3/19    I put in an application to Paul Graham's summer founders program,
        but am likely to incorporate regardless of whether I get funding
        or not.  Anyone who needs something to do over the summer should
        email me.  Experience/interest in foreign language(s), linguistics,
        perl, sql, javascript or statistics would be useful. --darin
2005/3/10 [Computer/SW/Database] UID:36633 Activity:nil
3/10    Reading up on MySQL, it says there are 4 String data types:
        TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT.  The part I don't get is
        why are the character limits 255, 215, 223, 231 characters? I must not
        be reading the book correctly because that makes absolutely no
        sense.  Why would TINYTEXT have more storage than the others?  Why
        even have the others if TINYTEXT is bigger?
        \_ I just use char,varchar, and text
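For what it's worth, the odd limits quoted above look like exponents that got
flattened in whatever rendering the book used; the actual MySQL maximums are
powers of two minus one, so TINYTEXT is the smallest of the four, not the
largest:

CREATE TABLE t (
    a TINYTEXT,    -- up to 2^8  - 1 = 255 bytes
    b TEXT,        -- up to 2^16 - 1 = 65,535 bytes
    c MEDIUMTEXT,  -- up to 2^24 - 1 = 16,777,215 bytes
    d LONGTEXT     -- up to 2^32 - 1 = 4,294,967,295 bytes
);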
2005/3/7-8 [Computer/SW/Database] UID:36569 Activity:moderate
*/*     Ilyas Insult Database:
        \_ #1 Lazy Bitch
        \_ #2 You're a fool,
           \_ Tell us about the stars, Ilya
        \_ #3 Get a life
        \_ #4 Your reading comprehension sucks
        \_ #5 Your sarcasm meter is broken
        \_ #6 Your brain has been classified as:small
              \- you must pay me 5cents.
           \_ hahaha thanks I totally forgot this one
              \_ I can't take credit for this one. -- ilyas
                 \_ Seems like a parthaism to me.
2005/3/2-3 [Computer/SW/Database] UID:36497 Activity:high
3/2     Lets say you are running a website connected to a database.  The
        website stores all sorts of time-sensitive info in the database,
        like order timestamps, keeping track of various states of stuff,
        etc.  When we go on or off daylight savings time, doesn't that totally
        mess up things that deal with spans of time? How do you deal with that?
        And how do you display things to the user when they happen right around
        the switch-over?  How about people living in regions like Arizona or
        Hawaii, who don't have daylight savings time?
        \_ Your server needs to have a notion of timezone. Certainly it
           should know its own, and most likely that of its users as well (if
           it doesn't, the users are much more likely to be confused by
           consistently wrong-timezone timestamps than by the 2-hours-a-year
           DST problem). Once it does, you should store dates internally
           either in a timezone-insensitive format (read the time(3) manpage,
           i.e. "man 3 time"), or in a timezone-sensitive format with the
           timezone stored as well (although the former is recommended).
           The standard notion of timezone encodes the DST status as well.
           If you store dates in GMT, GMT is completely DST-free. If you store
           them in, say, Pacific, it will have timezone PST for normal time
           and PDT for daylight saving time. So the date that gets displayed
           as "Sun Apr  3 01:18:19 PST 2005", and is stored internally as
           1112519899, is one hour earlier than the date displayed as
           "Sun Apr  3 03:18:19 PDT 2005" and stored internally as 1112523499.
           This is obvious when you deal with the internal representations:
           1112523499-1112519899=3600. -alexf
          \_ UTC
                \_ I guess my related question was, what if someone changes the
                    time back 1 hour ... Or on a real server would no one
                   do that ever.
                   \_ Use UTC for everything. Use NTP to keep it synchronized.
                      UTC only changes by one leap second every few years.
            \_ Gah, fine, UTC, not GMT, I stand corrected. Nitpickers. -alexf
               \_ If, for some reason, you want to know a *lot* more about
                  the history of UTC, GMT and precision time keeping in general,
                  you might enjoy reading "Splitting the Second", by Tony Jones.
        \_ In practice, you don't have to worry about this.  Use your database
           server's native DATETIME (mysql) or TIMESTAMP (everyone else) data
           type.  Store the current time using SYSDATE (oracle) or NOW()
           (everyone else).  When you access it formatted, you'll get it
           in the local time format.  So you have to worry about fucking with
            your clock, but not about changing time zones.
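A minimal sketch of the timezone-insensitive storage alexf describes, keeping
epoch seconds in the table so spans of time are plain subtraction (assuming
MySQL; orders and placed_at are hypothetical names):

CREATE TABLE orders (
    id        INT PRIMARY KEY,
    placed_at INT  -- seconds since the epoch, which is UTC by definition
);

-- Record the moment with the epoch clock rather than a local wall-clock string:
INSERT INTO orders (id, placed_at) VALUES (1, UNIX_TIMESTAMP());

-- Durations never see the DST jump:
SELECT id, UNIX_TIMESTAMP() - placed_at AS age_seconds FROM orders;

-- Convert to a wall-clock string (in the server's time zone) only at display time:
SELECT id, FROM_UNIXTIME(placed_at) AS placed_local FROM orders;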
2005/2/16-17 [Computer/SW/Database] UID:36190 Activity:nil
2/15    What is a good web based ldap gateway that I can set up which
        will allow my HR people to populate at least the initial entry
        in my ldap db (and send a mail to sysadmin) for new hires.
2005/2/7-8 [Computer/SW/Database] UID:36088 Activity:nil
2/7     Trying to get a Windows-based ER tool for PostgreSQL that supports
        both reversed and forward engineering, any suggestions?  Sybase
        sales guys just won't call me back on PowerDesigner.  Is there
        a mature modeling tool for PostgreSQL available?  I will try
        QDesigner, which I heard is just a repackaged product of PowerDesigner.
        \_ It's not terribly advanced or mature, but you can do simple
           schema to-and-from graph work with pgsql through Visio.  --dbushong
2005/1/8-9 [Computer/SW/Database] UID:35609 Activity:nil
1/8     There is a card trick where you have someone select a card
        from an imaginary deck and tell you what card they selected.
        (Basically, they just pick any card they like)  Then you
        pull out a real deck of cards from your pocket and the card
        they chose is either face down while the other cards are
        face up, or has a big X on it.  From searching the web,
        I've found that you can pay $10 or $20 for a trick deck
        that allows you to do this trick, but I still haven't the
        foggiest idea how it works.  Does anyone know?
        \_ Large pants and 52 decks of cards!
        \_ That trick only works on the weak-minded.  Your Jedi
           mind tricks won't work on ME.
2005/1/6-8 [Computer/SW/Database, Computer/SW/OS/OsX] UID:35570 Activity:moderate
1/6     A relative is opening a small (family) store and asks me what
        software can be used to keep track of everything, like inventory,
        sales, suppliers, statistics etc.  Are there good and cheap commercial
        (database?) programs for this?  Can she use some open source solution?
        Right now she uses a Mac though that can change if necessary.
        \_ While I hate it with every bit of my soul, for small business,
           Peachtree Accounting will do everything you've asked for.  It only
           runs on Windows, though.
           \_ Why do you hate it?
              \_ Why do you hate America?
              \_ Once you grow and have multiple computers connecting to the
                 database, things get less stable.  It uses its own form
                 of lock that breaks if you try to host the file on Samba
                  server (don't even think of trying this).  Too many maintenance
                 operations require the database be in single user mode.
                 Basically, it's fine and dandy for a small business that
                 intends to stay small.  It's the growing pains that I don't
                 like.
                 \_ Thanks for the info.  What would you have used instead?
                     Even a very small business may want to have multiple-user
                    access.
                  \_ why not quickbooks? a small biz is still a lot of work,
                     and someone to balance the books every now and then
                     would be easier to find since qb is the standard.
2004/12/14-15 [Computer/SW/Database, Computer/SW/Graphics] UID:35283 Activity:moderate
12/13   Video codec question. I used to be able to use my VideoWave
        program to convert AVI to MPG-1 (for VCD). However after
        trying a few video shareware programs and uninstalling them, I can
        no longer convert AVI to MPG-1 using VideoWave (it keeps on
        crashing). What is going on and how would you fix it? ok thx
        \_ I'd fix it by using TMPGEnc (free) instead. --jameslin
         \_ just tried it, but it doesn't wanna load *.avi files :(
            \_ Can you even load these AVI files in WMP?  Sounds like you've
               hosed your system.
               \_ yes no problem. TMPGEnc asks for TWO files, audio and video.
                  But I only have 1 file, the AVI file. Now what?        -op
                  \_ It asks for a video source and audio source.  If they
                     happen to be from the same file, you just select the
                     same file for both.
2004/12/13-14 [Computer/SW/Database] UID:35272 Activity:high
12/13   Any PeopleSoft/Oracle insiders care to comment on what's going to
        happen next? -bored procrastinator
        \_ you're fired!
        \_ someone please give me the 411 on what this whole thing is about?
           I don't know anything about Oracle/PeopleSoft, but I'd like to
           learn more about it. Does it have something to do with anti-trust
           law or something?
           \_ Oracle: Biggest database company, with smaller CRM software
              business, headed by egotistical fucktard.  After raising their
              offer 4 times and launching a legal challenge to Peoplesoft's
              (questionable) tactics to fight the takeover, has bought
              Peoplesoft, which is arguably the leading CRM software vendor.
              Previously they stated they wanted to move all Peoplesoft
              customers over to Oracle databases.  Peoplesoft's now-fired CEO
               said more-or-less: "It's as if someone offered you a million
              bucks to buy your beloved pet dog so they could shoot it."
              Many layoffs at Peoplesoft are expected, but not until they help
              with the changeover. -works for neither
              \_ Said egotistical fucktard replied with something like "If
                 Conway (ex-CEO) and his dog were standing next to each other
                 and I only had one bullet, trust me, it wouldn't be for
                 the dog." Say what you will about Larry Ellison, he does
                  produce good sound bites.
                                    \_ yoyo, word man!
                 \_ I think I read somewhere that Ellison vowed to convert
                    every PeopleSoft customer to Oracle, then fire every
                    single PeopleSoft employee.  - danh
                   \_ Have you been hanging around too many fobs, dan?
2004/11/20-21 [Computer/SW/Database] UID:34999 Activity:high
11/19   Sometimes I am thinking that three years from now we'll STILL
        be reading about Oracle's repeated PeopleSoft takeover attempts.
        \_ isn't it wonderful how ellison has vowed to fire every
           peoplesoft employee.
           \_ Really?  That seems like a stupid way to take over a
              company.
                \_ isn't increasing shareholder value wonderful.
                   \_ It is when you're a shareholder.
           \_ Besides, this takeover is also extremely unpopular among
               peoplesoft customers (at least those who run it on a non-Oracle DB)
              I think it is clear to every idiot that Oracle's intention is
              to eliminate peoplesoft as a competitor and then make peoplesoft
               users switch to Oracle's DB.
              \_ Didn't that Tim Curry impersonator jackass say as much?
              \_ as a PeopleSoft customer, I think destroying the company
                 sounds like a great idea.