Berkeley CSUA MOTD:Entry 54173
Berkeley CSUA MOTD
2022/05/26 [General] UID:1000 Activity:popular

2011/9/14-10/25 [Computer/HW/Drives] UID:54173 Activity:nil
9/13    Thanks to Jordan, our disk server is no longer virtualized. Our long
        nightmare of poor IO performance should hopefully be over. Prepare for
        another long nightmare of poor hardware reliability!
        Just kidding! (I hope)
        In any case, this means that cooler was taken out back and shot, and
        replaced with Keg, a real machine with real disks. Right now it's not
        running at 100%, but already you should notice that soda's not only
        fast, it's a fucking miracle compared to the past few years. I
        personally blame unforeseen edge cases in a poisonous combination of
        ZFS+NFS+OpenSolaris+1000s of users with a system too big to fail.
        Indeed, syncing the data away from cooler took two continuous weeks.
        It's no wonder it's taken until now for a very capable VP to be up to
        the task of partially unbreaking the setup. Note - we no longer have
        any VMs running off of virtualized disks stored on a NFS mounted disk
        which, itself, was virtualized. Hmmmmmmmm. Though those were mostly
        useless VMs you never saw. :P

        So anyways, as mentioned earlier, Keg isn't at 100%, but it's up. It
        looks good enough to keep for a bit, but it originally had a bunch of
        Raptors or some such. The disks are still there, but the RAID cards are
        most likely broken. We'll leave it to jordan to evaluate the server
        needs and fix accordingly. As it is, RAIDing fifteenish 10000RPM disks
        so you can edit motd SUPER-EXTRA FAST!!! is probably not a great use of
        time. We'll see where our less-shaky infrastructure takes us in the
        future. --toulouse
        \_ cooler is dead. Long live KEG!
        \_ Good work guys, thanks! #1 lesson here: don't virtualize disk i/o
           intensive applications. -ausman
           \_ That is a good lesson but definitely not the #1 lesson.
              * Exporting thousands of filesystems: bad idea, no matter how
                easy it makes backups and ZFS snapshotting.
              * Using an OS with superior filesystem support is a bad
                long-term solution if nobody but the original installer
                knows anything about it.
              * Choosing ZFS...the jury's still out.
              * Maintaining FreeBSD 7 and 8 and OpenSolaris and Debian...kinda
              * All of this, on top of virtualized disk i/o - bad news.
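              To make the "thousands of filesystems" point concrete, here is
              a hedged sketch of the pattern being criticized. The pool name
              "tank" and user names are illustrative, not from this MOTD;
              this is a sketch of the trade-off, not the actual soda config.

              ```shell
              # Per-user ZFS filesystems: snapshots and quotas are trivial,
              # but every filesystem becomes its own NFS export.
              zfs create tank/home/alice
              zfs set sharenfs=on tank/home/alice
              zfs snapshot tank/home/alice@nightly

              # Multiply by thousands of users and the NFS server has to
              # manage thousands of exports -- the failure mode above.

              # Single-filesystem alternative: one export, coarser snapshots.
              zfs create tank/home
              zfs set sharenfs=on tank/home
              zfs snapshot tank/home@nightly
              ```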
              \_ Even after I collapsed NFS down to one filesystem, when our
                 FreeBSD boxes came back online and started automounting
                 thousands of filesystems apiece, the NFS server again ground
                 to a screeching halt (taking soda and friends with it).
                 Switching to one /home mount per server restored NFS's
                 snappiness; I suspect that even a virtualized NFS server could
                 perform well without the filesystem woes.  --jordan
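                 For illustration, a hedged sketch of the two client-side
                 mount strategies described above. The server name "keg" and
                 export paths are assumptions for the example, not taken from
                 the actual setup.

                 ```shell
                 # Before: automounting one NFS filesystem per user, e.g. via
                 # an autofs map such as /etc/auto.home:
                 #   alice  keg:/export/home/alice
                 #   bob    keg:/export/home/bob
                 # Thousands of users => thousands of mounts per client.

                 # After: one static mount of the whole home filesystem,
                 # e.g. a single /etc/fstab entry per server:
                 #   keg:/export/home  /home  nfs  rw,hard,intr  0  0
                 mount -t nfs keg:/export/home /home  # one mount, not thousands
                 ```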