1/15 I need a small NAS-RAID (on the order of 500 gigs or so) for a bunch of
developers. I was thinking of constructing one from parts using
SATA and Linux. Anyone got a better idea? Purpose is mainly for
NFS mounting, so it has to have decent (although not blazingly fast)
performance. I don't care that much about hot-swappability. Needs to
be RAID 1 or above (redundancy at 2x is enough). I was thinking of
spending maybe 4K on it.
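(For concreteness, the Linux software-RAID side of this is just a
couple of mdadm commands. A sketch assuming four 250G SATA disks;
the device names are hypothetical, adjust for your hardware:)

```shell
# Sketch: two RAID 1 mirrors of 250G disks = ~500G usable at 2x
# redundancy. Device names /dev/sd[a-d]1 are assumptions.
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# Sanity check on usable space:
echo $(( 2 * 250 ))   # usable GB from four 250G disks in RAID 1 pairs
```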
\_ If you want to provide NFS access don't use linux. Linux
has terrible NFS performance/reliability. FreeBSD is much
better at NFS (if you want to stick with a free/open os).
\_ Apple Xserve RAID 1TB (500GB usable RAID 1) @ $6K, free kool-aid.
\_ I see, you're very phunny. Do you make yourself happy with your
little joke?
\_ This is easily doable for the budget you have in mind. SATA
   seems nice, but I haven't seen any real-world numbers on its
   performance or failure rates. Considering that you can buy 250G
   disks for $300 or less, you can easily do 500 gigs. Do RAID 5
   and you'll get more space.
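   (The RAID 5 space math, for anyone counting: usable capacity is
   (n - 1) * disk size, since one disk's worth goes to parity. With
   the four 250G disks mentioned above:)

```shell
# RAID 5 usable capacity is (n - 1) * disk_size: one disk's worth
# of space is consumed by parity, spread across all members.
n=4; size=250
echo $(( (n - 1) * size ))   # GB usable, vs 500 for RAID 1 pairs
```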
\_ I haven't gotten a SATA board/drive yet. AFAYK, how many
   SATA drives can I reasonably dump into a box? If I can
   dump 8 of them in there I'll get more drives. Also, are there
   any more outstanding issues with Linux NFS hooking up to Solaris?
   (clients here at the office are all solaris boxen).
\_ There are no major issues with linux/solaris using nfs. You
may have to tweak some mount options to get super performance
but frankly your developers are unlikely to know the diff.
There are cases available that have room for as many as 20
drives. 16 is more common. 8 fits in a 2u box. You have
many options. -MSG
\_ Our Solaris 8/9 clients have lots of problems with
   Linux servers. Only v2 seems to work well on
   Linux servers.
\_ Depends on what you call "problems". Will it work at
full speeds with default settings? Usually not. Will
you have to read a man page and change one or two nfs
client mount options? Yes. Oh! Horrors! -MSG
\_ I'm not talking about full speed. I'm talking
   about problems such as multiple simultaneous
   reads hanging nfsd, or cases where a client
   writes a file but the file ends up truncated on
   the server. Even if the clients mount using v2,
   such problems occur on a weekly basis for ordinary
   files (esp. for files larger than 1GB).
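(The mount-option tweaking mentioned above usually looks something
like this on a Solaris client. Sketch only: the hostname, export
path, and rsize/wsize values are placeholders, not tested numbers;
tune them against your own network and kernel versions.)

```shell
# Sketch: Solaris client mounting a Linux NFS export over TCP with
# explicit version and buffer sizes. All names/values are examples.
mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
    linuxbox:/export/home /mnt/home
```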
\_ There's no reason at all to use SATA. Just go get a case, stick
   in a motherboard, any cpu, 256 megs of RAM, and an 8-port 3ware
   card. Attach 8 cheap-ass 80 gig IDE drives in a raid5 stripe and
   forget about it. Total price is way under 4k. If you want it
   mirrored, then get 8x160 drives and mirror 4x160 stripes instead
   of raid5.
Mr. Motd Storage Guru signing off.
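   (Comparing the two 8x160G layouts suggested above, assuming the
   marketing-gigabyte sizes quoted: raid5 across 8 disks loses one
   disk's worth to parity, while mirrored 4x160 stripes lose half.)

```shell
# Usable capacity of the two suggested 8x160G layouts.
size=160
raid5=$(( 7 * size ))    # 8 disks, one disk's worth of parity
mirror=$(( 4 * size ))   # two 4-disk stripes, mirrored
echo "$raid5 $mirror"    # raid5 gives space, mirroring gives 2x copies
```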
\_ That SATA/SATAN joke wasn't even close to funny. There are tons
   of terms you can make risqué by changing a letter.
\- if you are a solaris shop, why dont you just take a sun box you
   presumably already have and spend all of your budget on
   a small HW raid. assuming you feel your time is valuable,
   this may be the most convenient thing to do. --psb
\_ This is a pretty good idea, too, but be careful what you get
   for an external array. Some are really crappy, and I've lost
   entire raid sets when the controller decided to go JBOD on me
   after a hwraid unit crash. I've had good luck with Arena raid
   arrays attached to different systems. --MSG