| 5/16 |
| 2005/8/5-19 [Computer/SW/Unix] UID:39016 Activity:nil |
8/5 NIS Removed due to some very loud complaints. If you changed your
password or shell between Wednesday night and now, it has been
reverted. Sorry. Working on other possibilities. - jvarga
\_ Oh, and just for the record, the rest of root was notified a year
ago about the impending change to NIS. And they were periodically
notified in between. And over all that time, no one said a thing.
I don't claim to be any good at administering soda, but I don't get
much help. - jvarga |
| 2005/8/4-6 [Computer/SW/Unix] UID:38995 Activity:nil |
8/2 NIS? Do we like challenging people to crack our passwds?
\_ soda:~>ypmatch phillip passwd
phillip:iTJGnBNcyKBxQ:30311:100:Phillip "Edward" Nunez:/home/sequent/phillip:/bin/csh
\_ I'm not sure how, but some 200 people managed to get md5 hashes.
I don't know how I got one. I haven't changed my password.
\_ Someone set the default passwd_format to des in /etc/login.conf.
but that's not the issue. The issue is passwd hashes being
available at all.
\_ Dumb q: how do you find a user name given a user id? I used to
look in /etc/passwd.
\_ You can run "ypcat passwd" to see the new password file.
If you just want to look up a username, though, try "id 30311"
(or whatever) -- it's easier, and it works regardless of where
the password file is stored. --mconst
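           A quick sketch of both lookups (the UID is just the one from
           the thread; ypcat only works on hosts actually running NIS):

```shell
uid=30311    # example UID from the thread; substitute any UID
# id(1) asks the system's name service (local passwd, NIS, ...), so it
# works no matter where the password database actually lives
id "$uid" 2>/dev/null || echo "no user with uid $uid on this machine"
# NIS-specific alternative: scan the passwd map by hand
# ypcat passwd | awk -F: -v u="$uid" '$3 == u { print $1 }'
```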
\_ Thanks! That seems really basic but somehow I never heard
of it before. |
| 2005/8/4-8 [Computer/SW/Unix] UID:38990 Activity:nil |
8/4 Now that homedirs are on an NFS partition, be very wary of delivering
your mail to your home directory. --scotsman
\_ HA! I won that bet! Only took 12 hours for someone to complain
about it! - jvarga
\_ It's not a complaint. It's a warning. Personally, if it's
avoidable, I would never use NFS for anything requiring locking.
              Also, your quota update issue is likely caused by NFS.
--scotsman
\_ I have been delivering my OCF mail to NFS mounted home
directory, and reading it off NFS too. I never had a problem
with it.
\_ Then you're lucky.
\_ Why?
\_ obNFSBlows
\_ use maildir.
\_ That's all fine and good, but what about the people who don't
know how to implement something like that? Or for that matter,
the people that don't even realize the challenges involved in not
using it (e.g. the person above). Okay, now you can consider
it a complaint. --scotsman
\_ because, despite my occasional vitriol, I do like to be helpful
when I can, how about some suggestions as to how people can avoid
the problems with delivering mail to homedirs over NFS, instead of
            just saying "Oh Oh Oh, BEWARE, DANGER WILL ROBINSON"?
And to that end, as someone said, maildir (or even MH folders)
via procmail. procmail and mutt will support those. procmail
will create the maildir folder on its own (mutt seems to do so
too, but if you want to research this and add to this, please
do --Jon)
          a _very_ simple procmailrc example:
MAILDIR=$HOME/Mail/
:0
* ^TO_listname
listname/ # "/" at the end signifies a maildir and
# it will create it properly on its own
# "/." signifies an MH folder, and, again,
# procmail will do the right thing.
Comments, feedback welcome, this is just an off the cuff post.
--Jon
\_ some potentially useful info re mutt/maildir --Jon
from http://wiki.mutt.org/index.cgi?MaildirFormat
To create a maildir format mailbox, either:
* set mbox_type="maildir" and create a new folder in Mutt
* use procmail and append '/' to the folder name
* mkdir -p testbox/{cur,new,tmp}&& chmod -R 0700 testbox
              * use the [maildirmake] program as included in maildrop and
                Courier or the simple script included in dovecot |
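The mkdir option above, spelled out as a runnable sketch (the mailbox
name is an arbitrary example):

```shell
# a maildir is nothing but three subdirectories; any mail reader that
# sees cur/new/tmp under a folder treats it as maildir format
box="${HOME:-/tmp}/Mail/testbox"
mkdir -p "$box/cur" "$box/new" "$box/tmp"
chmod -R 0700 "$box"
ls "$box"     # shows: cur new tmp
```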
| 2005/8/3-4 [Computer/SW/Unix] UID:38958 Activity:low |
8/2 I work at a company where we do all our engineering work on unix.
But most of our documentation is done on MS Word. Each of us writes
our own docs and someone combines everything in the end resulting
in 600 pages of unreadable crap. I'm looking for a unix-based
documentation tool that stores individual parts of a document
in some sort of text format (like XML) and that lets multiple
people collaborate on a single large document using revision
control like CVS. Any suggestions?
\_ DocBook is what you need
\_ Thanks. This was the type of stuff I was looking for.
\_ What's wrong with HTML? -tom
\_ latex! i still haven't found a better alternative for this kind
of collaborative work. and try to reduce the amount of
formatting people are allowed to do w/ some style conventions.
i also put xfig figures in CVS and use makefiles to automate
generation of EPS, PDF, etc. from these diff-friendly text
formats.
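       a sketch of the kind of Makefile rules described above (filenames
       and targets are hypothetical; fig2dev ships with transfig/xfig):

```make
# rebuild EPS/PDF whenever a diff-friendly .fig source changes
FIGS = $(wildcard *.fig)
EPS  = $(FIGS:.fig=.eps)
PDF  = $(FIGS:.fig=.pdf)

%.eps: %.fig
	fig2dev -L eps $< $@

%.pdf: %.fig
	fig2dev -L pdf $< $@

figures: $(EPS) $(PDF)
```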
\_ Ok, I'm one of those people who thinks that "Anyone can
learn C++" or "anyone can learn unix". I mean, you'd be
a total moron if you can't, right? But you can't
reasonably expect most people to. "Grandma, I got you
a new computer. All you have to do is check up on security
updates, download a new kernel source tree, recompile,
and install. It's not that hard. Anyone can do it."
I'm not going to ask an entire group to learn latex.
\_ I've found that M$ Word's "ease of use" is largely
illusory. It's not that much harder to look up
and use \tiny for a text size than screw with a bunch
of menus and buttons. And if you do it once and work
from a template, it's actually easier than using Word...
\_ This is a common mistake--Word's _overall_ "ease of use"
is a fairy tale. However, getting it to quickly do basic
shit without too much fidgeting is a lot easier for Joe
Schmo than LaTeX or other, more powerful tools. The same
                goes for StarOffice and many other "wysiwyg" toys, the only
difference is that Joe Schmo will eventually get a doc
written in word which "looks different" in StarOffice. As
for your \tiny example, yes, for us it's not much harder.
However, for most non-technical people, and yes, they do
exist, something visual like a button is way more intuitive
than a line of text. The danger of this, though, is that
most people/companies will inevitably want to do more
complex shit, at which they have already worked themselves
into a hole with buttons and similar "easy" crap. -John
\_ While I agree that LaTex is not that hard, especially
if he was to set up a nice standard set of macros
and scripts for everyone to use, I can see why a
Wysiwyg editor would be preferable.
        \_ How are you currently having your Unix programmers do
           documentation in Word? Also, while LaTeX is not that hard,
           especially if you were to set up a nice standard set of macros
           and scripts for everyone to use, I can see why a Wysiwyg editor
           would be preferable. Unfortunately, I don't know of a good one
           for this task in ANY OS.
\_ I believe this is the sort of thing Adobe FrameMaker was
made for. You may also want to look into TeXmacs. I think
the HTML suggestion is a good one, though.
\_ Pick any other word processor that runs on your flavor of unix. Or
use OpenOffice 2 (you must be using a different machine for Word,
right?). Since no one knows how to use Word anyway, there won't be
a retraining cost.
\_ Why not use a wiki?
\_ Probably depends on the kinds of control and approval issues they
have. My current client is really really word-dependent because
they have such incredible regulatory and audit needs that they
had to have some format that'd let them exercise a lot of
control over changes (note that I am in no way advocating ms-word
as a good way to do this.) -John |
| 2005/8/2-4 [Computer/SW/Security, Computer/SW/Unix] UID:38939 Activity:nil |
8/2 How do you create an NDMP user/pass on a netapp? The docs seem to
tell me how to check a given user for a password but not set up a
new user. thanks.
\_ Just use the admin/root user. |
| 2005/7/29-31 [Computer/SW/Unix, Computer/SW] UID:38872 Activity:kinda low |
7/29 Has anybody deployed a "checksumming/file integrity infrastructure"
across say ~100 *nix machines? Any recommendations for particular
tools? Tripwire is garbage, and for various reasons I am thinking
      about moving away from veracity, which I have been using for a while.
Considering looking at osiris and samhain. Would prefer something
lean and old-school unixish (like one binary and one config file)
      rather than one of these "enterprise software system" type things
with a large footprint and a lot of chrome. Tnx.
\_ Not on 100 machines, but we ran fcheck for a while. It was really
resource intensive. I moved to some one-or-two binaries C one
..i think the name started w/ an "a" It worked pretty well.
--dbushong
    \- re: resource intensiveness ... if the resources are 1. human time
       2. cpu 3. disk io, i think you can decrease #2 by using fletcher
       checksum instead of an expensive one like md5. not much you can
       do about disk io ... so a lot of it comes down to #1 ... it's
       key to have a config system flexible enough to not go crazy if
       somebody say nfs mounts a 300gig partition without factoring
       that into the configuration. as with intrusion detection
       systems in general, resource use and ability to minimize false
       alarms is what dictates success or failure in a practical
       sense. for me, checking the OS on a sun takes about 6-10 min.
\_ The a____ program I switched to used less compute resources
because it:
a) used a weaker checksum
b) had internal optimized checksumming code (rather than
forking "md5sum" each time)
Both fcheck and it specified certain directories to scan and
didn't traverse mount points.
--dbushong
\_ There was a discussion of this on one of my security lists a
while ago--I have forwarded your question, and will forward
what comes up if you tell me who you are. So far someone has
suggested http://aide.sourceforge.net -John
\_ That was the one. --dbushong
\_ How do you mean? Does it work for you? I'd be interested
in your experience with it as I've had clients with just this
kind of requirement. -John |
| 2005/7/26-8/19 [Computer/SW/Unix, Computer/HW/Drives] UID:38832 Activity:nil |
7/26 Root will be moving office home directories to the new file server
this Friday, and soda home directories this Saturday. Expect
downtime, I've gotta rsync off of TDA: the slowest disk on the face
of the earth. - jvarga
\_ Home dir move postponed pending some serious issues with keg and
new soda. - jvarga
\_ Home dir move complete. |
| 2005/7/25-26 [Computer/SW/Editors/Emacs, Computer/SW/Unix] UID:38806 Activity:nil |
7/26 I work on Mac's mostly. I use a lot of emacs and screen. My left
hand is in pain due to the pinky finger constantly stretching downwards
to reach the Ctrl key. How do you people deal with it?
\_ It is impossible to get pains by typing. williamc says so, and
will prove it. He'll also prove that you're an idiot.
http://csua.com/?entry=37970
\_ Tiger lets you remap the caps lock to the control key. |
| 2005/7/19 [Computer/Theory, Computer/SW/Unix] UID:38706 Activity:nil |
7/19 For the betting types below: You could always use some sort of
formulation of the bet that would result in a yes/no:
eg: Rove is indicted by 12/31/2005, or
Grand Jury evidence shows 1 witness reporting Rove "handled"
the memo.
\_ You mean something like this? http://csua.org/u/crr
\_ error page
\_ Fixed |
| 2005/7/18-19 [Computer/SW/Unix] UID:38677 Activity:nil |
7/18 rookie unix permissions question:
I have the following file -
/sourcedir/filename
I have group write permission to the file.
I want to move it to another directory:
/targetdir/filename
On one system, the target file is owned by me after the
move, but on another system, it's still under the
original owner. Why is that? What determines the
ownership. tia.
        \_ UID (not the username) and/or how you moved the file. In some
           cases, ownership will change; in others, the original UID is
           preserved. Note that the user name associated with a UID may
           be different on different systems.
\_ I want the UID to be changed to my UID (rather than the
original UID) after the move. Is there a way to force that?
\_ cp from to && rm from
\_ cp and tar change it to your UID by default. Check your
aliases to see if you have the preserve options set. Also,
extracting with tar as root will preserve the UID.
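              A minimal sketch of the copy-then-remove workaround (the
              paths are throwaway examples); the key point is that cp
              creates a brand-new file owned by whoever runs it:

```shell
src=/tmp/uidmove_src
dst=/tmp/uidmove_dst
rm -f "$src" "$dst"
echo "some data" > "$src"
# mv within one filesystem relinks the same inode, keeping the original
# owner; cp creates a new file, so the copy is owned by the invoking user
cp "$src" "$dst" && rm "$src"
ls -l "$dst"
```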
\_ thanks for the workarounds. does it help if I have
root access (i.e. is it something the root would be
able to configure)? The file move is part of a perl script
that I would rather not change unless necessary.
\_ How are you doing the actual file copy?
\_ Is the sticky bit for the directory turned on in the second
case? That can cause the problem you are having. -ausman |
| 2005/7/15-18 [Computer/SW/Unix] UID:38640 Activity:nil |
7/15 If I am forking a bunch of background jobs in a csh script, is there
a good way to suppress the [1] 10894 [2] 10897 and "Done" type output?
\_ Your first problem is using csh for this and caring about
presentation.
\_ will this work?:
script.csh >& /dev/null
\- that will throw away the output you may want to see:
foreach i (a b c)
sleep 1 &
end
echo done
\_ write the output you want to a file.
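           For comparison, a Bourne-shell version of the same pattern;
           job-control notices like "[1] 10894" and "Done" are an
           interactive-shell feature, so a non-interactive sh script
           prints none of them:

```shell
#!/bin/sh
# fork several background jobs, then wait for all of them; no job
# notices appear because the script is non-interactive
for i in a b c; do
    sleep 1 &
done
wait          # blocks until every background child exits
echo done
```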
\_ csh is not very good for shell scripting; it's more of an
interactive shell. I don't know of a way to turn these notices
off. If you'll consider moving to bourne/bash, I bet you'll be
much happier in the long run. I'm sorry I took so long to switch. |
| 2005/7/14-15 [Computer/SW/Unix, Computer/SW/Security] UID:38611 Activity:moderate |
7/13 Soda is back up, and the rest of the servers are slowly being brought
back. We're fixing lots of errors on all machines. We'll keep you
all posted. - jvarga
\_ DikuMUD doesn't work anymore. Can you please restore it, or if
you can't find it at least install a new version? I'd like to
start as level 29, one level before immortal. Thanks jvarga!
\_ Office accounts are going to be dead until I can figure out why the
*(#)&^*)#$ debian doesn't like netgroups. Anyone with insight on
this, please email me/root. Thanks. - jvarga
\_ Looks like I've fixed office accounts on everything but martini.
Problems to root. Moving on to the next stupid issues that came
out of this move... - jvarga (needs a life, and a raise)
\_ Great work. Thanks.
\_ Thanks for the time and effort you've put into this.
\_ Many thanks for seeing this through.
\_ Come now, all this nicey nice is unbecoming. Where's the
obligatory alumni bitchfest?
\_ Err, I still remember what it was like being a ugrad in cs.
I appreciate the work being put in for little reward. -mice
\_ Perhaps most of us are used to the trials and tribs of this
sort of thing.
\_ Awesome, thanks. But when do we get new soda?
\_ This is the first step to getting new soda online. But in the
interim, new soda needs to stop doing things like playing the
"OS not found" game on boot, and tell me why sshd is dead.
- jvarga |
| 2005/7/14-8/4 [Computer/SW/Unix, Computer/SW/Security] UID:38609 Activity:nil |
7/13 Scotch will be coming down tonight. Expect disruption in CSUA service
between 7pm and wheneverweactuallyfinish. We don't intend on bringing
soda down for more than a few minutes to rotate it in the rack (so
that it cooks evenly). Probability of list disruption will be high.
Office accounts will be unavailable. Njh will be piss drunk. - jvarga
\_ 7/14 Soda is back up, scotch is back up, lists are down, office
accounts are down. We're working on things, but I have to be
up at 6am for work. - jvarga
\_ 7/15 Office accounts are working again after much mudwrestling with
all systems involved. Debian mirror and other services on
screwdriver are back up. Send booze to root. - jvarga
\_ 7/24 Just realized that soda's FTP was being mounted off of scotch
(wtf?) and that's the cause of people's complaining. Am
looking at possible solutions. Please be patient. - jvarga
\_ 7/24 Lounge machines should be working again for the most part.
Still screwing with xterm logins. Send booze. Now. - jvarga
/
/
July 12, 2005
Root is planning to swap out scotch.CSUA for a newer machine in the next few
days as part of planned server upgrades. Scotch serves DNS, NIS for the
office, mailing lists, and is soda's backup mail server. During the
downtime, some or all of these services will be unavailable. The length of
the outage depends on our luck, but we hope to have everything back
available within a few hours with as little disruption as possible. Note
that the soda motd will continue to be as troll-filled as usual.
Additionally, the scotch replacement will bring in phase 1 of the new soda
upgrade. We will be unifying soda logins and office logins (but not home
directories), which means that I will be pulling the password database off
of soda to serve as the master list for office logins. This means that if
you have an office account, your office password will be the same as your
soda password. If you did not have an office account before, this change
will not grant you an office account.
The exact date and time of this switchover will be announced soon. Please
direct all questions/comments/concerns to root.
jvarga |
| 2005/7/12-14 [Computer/SW/Unix, Computer/SW/Languages/Perl] UID:38585 Activity:nil Edit_by:auto |
7/12 Your favorite O'Reilly books online, for free. This includes
big titles on Java, Perl, networking, UNIX, Oracle, Linux, and Samba.
http://www.unix.org.ua/orelly
and here
<DEAD>www.hackemate.com.ar/textos/O'reilly%20-%20Complete%20Bookshelf<DEAD>
\_ With all the political trolling that's been going on around
here, at first I thought this was about Bill O'Reilley.
\_ keywords: book learning perl mysql postgres Oreley
keywords: OReilly Reilly OReiley Reiley OReilley Reilly Orelly Orelley |
| 2005/7/12-13 [Computer/SW/Unix, Computer/SW/Security] UID:38553 Activity:low |
7/13 Scotch will be coming down tonight. Expect disruption in CSUA service
between 7pm and wheneverweactuallyfinish. We don't intend on bringing
soda down for more than a few minutes to rotate it in the rack (so
that it cooks evenly). Probability of list disruption will be high.
Office accounts will be unavailable. Njh will be piss drunk. - jvarga
/
/
July 12, 2005
Root is planning to swap out scotch.CSUA for a newer machine in the next few
days as part of planned server upgrades. Scotch serves DNS, NIS for the
office, mailing lists, and is soda's backup mail server. During the
downtime, some or all of these services will be unavailable. The length of
the outage depends on our luck, but we hope to have everything back
available within a few hours with as little disruption as possible. Note
that the soda motd will continue to be as troll-filled as usual.
Additionally, the scotch replacement will bring in phase 1 of the new soda
upgrade. We will be unifying soda logins and office logins (but not home
directories), which means that I will be pulling the password database off
of soda to serve as the master list for office logins. This means that if
you have an office account, your office password will be the same as your
soda password. If you did not have an office account before, this change
will not grant you an office account.
The exact date and time of this switchover will be announced soon. Please
direct all questions/comments/concerns to root.
jvarga |
| 2005/7/6-8 [Computer/SW/Unix, Politics/Foreign/Europe] UID:38432 Activity:nil |
7/6 Woohoo! Otherwise clueless Euros nuke software patents!
http://news.bbc.co.uk/1/hi/technology/4655955.stm -John |
| 2005/7/1-4 [Computer/SW/OS/Windows, Computer/SW/Unix] UID:38392 Activity:nil |
07/01 Hi, is there free software for windows that will create a network
      drive that uses sftp to connect to the network? Windows only supports
      ftp as a network drive. And I can't use "web folders" because my
      unix account online doesn't support them. thanks.
\_ Does it have to be a network drive, or will you settle for an
application that talks sftp? If it's the latter, take a look at
FileZilla. -dans
\_ You're an idiot.
\_ You're an idiot. |
| 2005/7/1 [Computer/SW/Unix, Computer/SW/Security] UID:38391 Activity:moderate |
7/1 Is there some way for a non-root person to figure out when
someone's account was created?
\_ How would a root person figure this out?
\_ The adduser script used to keep a log file. -tom
\_ You're an idiot. |
| 2005/6/23-25 [Computer/SW/Security, Computer/SW/Unix] UID:38277 Activity:low |
6/23 It was not too smart of me to believe what I read on SBC Yahoo!'s web
     site (that after merging my Yahoo! ID with a SBC sub-account ID,
     I can reverse the merge by simply deleting the sub-account), but I
     went ahead with the merge. The merge did NOTHING as claimed--
     I did not get any extra storage nor any extra service. So I
     wanted to reverse the process, only to find out that I can only
     'suspend' a sub-account, not delete it. I called customer
     service and was told it is impossible to delete a sub-account
     and hence impossible to undo the merge. I have spoken to 5
     people including one manager and one level 2 support person.
     None was able to offer any help. I tried suspending the
     sub-account, only to find out that I could no longer access my regular
     Yahoo! account. Has anyone had to deal with this issue? How
     was it resolved? Are the 5 people I talked to not too bright,
     or is their web site just lying?
\_ I have evidence that Yahoo is controlled by Scientologists.
\_ When this was first offered (2+ years ago), I distinctly remember
reading that it was not reversible. It's possible that the 5 people
you spoke with are still operating under that assumption. Print
out the page with the relevant promise and direct support
personnel to the url.
        \_ I did. I pointed the support people to the URL that states
           the process is reversible. All I got was a defensive
           comment: "I am telling you the truth! It cannot be done!"
| 2005/6/22-23 [Computer/SW/Unix] UID:38247 Activity:low |
6/22 I have weird problems with soda email today - cannot receive or send
any emails. I know I'm slightly over quota but my grace period isn't
here yet. I use pine. Anyone have similar problems today? Thanks :)
\_ Soda's load gets high and then sendmail sits back and just queues
for a while until the load goes back below 10 or so. That's
probably the problem. - jvarga
\_ The grace period means nothing to the filesystem. It's how long
you have to take care of it yourself. Get yourself below quota,
and your problems will cease.
\_ Mail root, or post your name. Motd can't help you.
\_ my username is jdynin. I will also email root but I'm not sure
if my email will reach them :)
\_ You don't have any other email address you could use?
Come on. A little critical thinking here.
\_ You also need to work on your critical thinking skills.
\_ Log into yahoo. send mail to root@csua.berkeley.edu.
Yo, my login is XXXX. What's up with this quota thing?
not hard.
\_ How does jdynin know that root can receive email,
if she is unsure her problem is quota-based? Think
harder.
\_ I also have not received any email on csua for the last 7
hours. I typically get on the order of nearly 100 spams per
day, so it is almost unheard of that I would just not happen
to get any email today. -!op
\_ I concur on not getting e-mail.
I don't have a quota problem.
         telnet to ports 25 and 465 and you'll notice a problem.
Maybe root is working on it right this very moment. Maybe not.
\_ Is it likely that incoming mail is lost for good, or just
delayed?
\_ If it's less than 5 days (give or take), it's just delayed
\_ OK, it seems I was able to get mail that was sent after 2:30 PM
today. I can send mail OK now too. Anyone knows if the mail that
was sent to me or that I sent this morning is lost forever? -op
\_ Senders to you should have gotten a message delayed message
from their server. You should get it fine later. Outbound
mail should have bounced back to you, if you could send
it at all. Did you get bounces?
\_ No, I did not get any bounces from the emails I sent out. I
              did get one of those "your message couldn't be delivered for
4 hours" emails for the email I sent out this morning. It
seems my emails are slowly reaching their recipients, and my
morning inbound email is slowly reaching me. Thanks for your
help! -op |
| 2005/6/21-25 [Computer/SW/Unix] UID:38233 Activity:nil |
6/21 What's the currently accepted "best" fastest way to write a lot of
little data to a file in Unix? Is it still mmap/memcpy, or is there
something more advanced nowadays? Maybe send me e-mail? -- Marco
\_ I'm curious as well, so please post responses here. --darin
\_ I think you will find that it depends entirely on the hardware
platform and in most cases the programming style only affects
a few percent on the fastest RAID drive arrays. this is because
RAM and CPU are so much faster than disks. trying to do high
bandwidth network I/O, on the other hand can be tricky and
interesting if you are into that sort of thing... zero-copy
asynchronous bulk I/O.
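        A crude shell-level illustration of the syscall-overhead point
        (sizes are arbitrary): the same 64KiB written one byte per
        write(2) versus in a single large write. Wrap each dd in
        time(1) to see the difference:

```shell
rm -f /tmp/small_writes /tmp/one_write
# 65536 write() calls of one byte each -- dominated by syscall overhead
dd if=/dev/zero of=/tmp/small_writes bs=1 count=65536 2>/dev/null
# the same 64KiB in a single write() call
dd if=/dev/zero of=/tmp/one_write bs=65536 count=1 2>/dev/null
cmp /tmp/small_writes /tmp/one_write && echo "identical contents"
```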
\- i agree, a lot of little details matter. is it totally
concurrency domainated, what kind of device is being
written to, is it ok to write to cache or do you need
to flush to metal, can you use some hack like immediate data
[veritas], is locking an issue? does your application only
use traditional sematics [supercomputing uses have special
ways of doing large i/o], can you choose your file system,
is byte range locking an issue? is the rate at which you
are given inodes an issue? etc. it would be interesting to
see how IBM/GPFS would do at this. |
| 2005/6/14-16 [Computer/SW/Unix] UID:38127 Activity:low |
6/14 Does anyone know the POSIX specified way to get the fully
qualified domain name of the local machine in a C program?
gethostname just returns the name of the box without the domain.
\_ There probably isn't a POSIX-specified way, as there is no
single way to map machine->FQDN. (What if a machine has
two, or 1000 FQDNs?) -tom
\_ So far all I've gotten is: gethostname, get IP address, do
reverse DNS lookup. Apparently this is a common problem,
and everyone hates this.
\_ It may be difficult to handle programatically, but that's
because it sounds like you're trying to do something which
simply can't be done reliably. Maybe if you explain what
you're trying to accomplish there might be some help. -tom
\_ in otherwords, you want the result of `hostname`
\_ Well, hostname -f, but yeah. Anyone have to hostname source
code? :P
\_ On Soda: /usr/src/bin/hostname/hostname.c
If you want a much better example, get a copy of Stevens
Unix Network Programming Vol 1 and look at Chapter 11.
http://www.amazon.com/exec/obidos/tg/detail/-/013490012X
http://www.unpbook.com
The http://unpbook.com site has src code for the examples which
might help.
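              Roughly what hostname -f and the gethostbyname() approach
              boil down to, as a shell sketch (getent is glibc-specific,
              and the output may be empty if the name doesn't resolve):

```shell
name=$(hostname)
# ask the resolver (NSS) for the canonical form of the bare hostname;
# field 2 of getent's output is the canonical name
getent hosts "$name" | awk '{ print $2; exit }'
```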
\_ Thanks, the source isn't very helpful (it turns out)
it's non-standard. But I have the Stevens book, and
if that's the way to do it, ok I guess.
        \_ Figured it out. gethostbyname returns the canonical name as
           part of the hostent structure. Not optimal, but better than
           the next alternative. -op
| 2005/6/13 [Computer/SW/Unix] UID:38094 Activity:nil |
6/12 NFS Question. Does rpc.lockd/nlockmgr and rpc.statd/status only
need to run on a linux machine that is exporting file systems
or on clients too? |
| 2005/6/7-9 [Computer/SW/SpamAssassin, Computer/SW/Unix] UID:38011 Activity:nil |
6/7 Anyone else use forwarding from procmail & unix mail? I'm
been having a problem with something that used to work.
I forward email from another account to csua via procmail.
Mail headers come with a format that looks like:
From sender@sender.com Tue Jun 7 13:30:20 2005
Status: RO
>From mds Tue Jun 7 13:30:20 2005
...
Normally I use POP for email, but sometimes when I'm in a
rush I use command-line "mail". When I read/delete some
messages and enter 'q', it saves the resulting messages with
a space between the "Status" and ">From" lines. Haven't been
able to find anything via google, but I know that it didn't
previously do this (e.g. a month+ ago). Has anyone else had
the same problem? Thanks!
\_ read your email with mutt like a real man.
\_ Consider setting up a procmail rule to pipe everything through
formail before further processing. -dans |
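An untested sketch of the kind of recipe meant above: filter every
incoming message through formail(1) so it is coerced into proper mbox
format (quoted >From lines and all) before any later recipes run:

```
# near the top of ~/.procmailrc -- f = filter, w = wait for exit status
:0 fw
| formail
```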
| 2005/6/6-7 [Computer/SW/Security, Computer/SW/Unix] UID:37988 Activity:nil |
6/6 s/key confusion and confirmation: I must have a reading deficiency. I
read the s/key howto over and over but I couldn't grasp the idea. So
maybe someone can confirm my understanding of it. The s/key stuff
only dictates which machine I can access the csua server from. That
is, if I have entered the one time password from my home desktop, then
I can log in from my home desktop with my unix login/pass. I can not
    log in to csua from my work machine if I haven't entered the one-time
pass on that machine.
Basically, since ssh2 is in effect now, I downloaded PuTTY. After I
enter the login as value, it shows "s/key 92 hi97345", then "password".
However, I used the s/key calculator, and put in 92 hi97345, and got
a one-time pass, with that pass I can not log in. But I tried with my
    unix password, and I'm now logged in. So I am confused: why does it
    show the "s/key" stuff but not accept an s/key one-time pass phrase? I
    basically just use my unix login/pass just like before ssh was enabled.
\_ Same here--that is, I've been seeing the s/key stuff when logging in
       since the ssh change, but I'm logging in via putty, and just use my
normal login.
\_ Thanks for overwriting my changes fucktard.
\_ vi should have locked the file if you opened it for write. others
can only open it read-only. So you must not have the lock on
the file when you tried to edit it.
\_ 1, you're wrong. 2, you overwrote someone else when adding
this post.
\_ 3, I thought we went over this, using VI will ensure a
lock on the file you are editing. Or should we run a
command before editing a file? |
| 2005/5/28-31 [Computer/SW/Unix] UID:37873 Activity:nil |
5/28 What's the equivalent of >>! in bash?
\_ I'm not a bash user, but the manpage (hey, it has one!) seems to
imply >>|
\_ It seems that >> in bash works like >>! in csh?
        \_ In either bash or (t)csh, you must set "noclobber" to prevent
           overwriting. Perhaps you have it set in (t)csh but neglected
           to set it in bash? But more importantly, what does >>! mean?
           >! makes sense, but don't >> and >>! mean the same thing?
\_ "cmd >> foo" will not create foo if does not exist.
"cmd >>! foo" will. at least in *csh
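              A quick sketch of the interaction (the file path is a
              throwaway): noclobber affects >, while >| overrides it,
              the rough analogue of csh's >!; plain >> appends and,
              unlike csh, creates a missing file:

```shell
#!/bin/sh
set -C                           # noclobber (bash: set -o noclobber)
rm -f /tmp/clobber_demo
echo one >  /tmp/clobber_demo    # ok: file does not exist yet
echo two >| /tmp/clobber_demo    # >| bypasses noclobber, like csh's >!
echo three >> /tmp/clobber_demo  # append; creates the file if missing
cat /tmp/clobber_demo            # prints: two, then three
```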
\_ Oh, ok. In that case, yeah, what the previous poster said:
just use >> in bash. |
| 2005/5/27-31 [Computer/SW/Security, Computer/SW/Unix] UID:37869 Activity:nil |
5/27 I'm the guy who was asking for software for organizing web links.
I tried the sdidesk software somebody recommended but it's too
complicated (I don't have time to learn wiki). So my focus has now
shifted to generic note-taking software. Anybody use one?
There are tons of those programs on the web. If you use one, please
let us know what you use. Thanks.
\_ Check out SafeSex from Nullsoft if you want something somewhat
protected and small. It can get a bit annoying what with having
to give it a password all the time. -John |
| 2005/5/25-7/12 [Computer/SW/Unix] UID:37839 Activity:nil |
5/25 Various insecure services have been disabled, including FTP, per
departmental policy and/or request. You should be using sftp. =)
\_ This should be in officia
\_ When sftp shows this:
s/key 98 so37989
Password:
What am I supposed to type?
\_ You get an s/key program, enter it, enter your soda password,
and paste the result into the window, as per
http://www.csua.berkeley.edu/skey-howto.html -John
\_ no offense, this is about the most confusing document ever
\_ You should also be getting a life, but the department does not yet
enforce that one... - jvarga |
| 2005/5/18 [Computer/SW/Unix, Academia/Berkeley/CSUA/Motd] UID:37752 Activity:low |
5/18 Hey kchang! You posted a request for people to stop adding and
deleting stuff from the motd. But your request didn't show up in your
archive. Why not?
\_ Because the request got deleted in between archive intervals.
        You can read the technical FAQ, the archiver is unfortunately
not comprehensive. Also, I don't normally do this but I thought
it's best that I took out the sicko ascii art in the archiver.
It is listed as "Entry has been invalidated." For more info,
http://csua.com/?entry=faq1
http://csua.com/?entry=faq2 |
| 2005/5/16-17 [Academia/Berkeley/CSUA/Motd, Computer/SW/Unix] UID:37711 Activity:high |
5/16 kchang, I realize this is beyond the bounds of a diff tool, but I
think it would be pretty neat if when I use your diff I could get
the things that have been added and then deleted, between my
current and previous diffs.
\_ I did have that feature in an earlier version but the output was
very ugly due to the fact that there's a tendency for certain
individuals to change their own stuff 5-10 times within 2 minutes.
Unless there are compelling reasons and you can convince my
committee members to agree (they're on the bottom of the
24HourDiff page), I'll just leave it as it is. If you want you can
just go to the 24HourDiff page and seek linearly. -kchang
\_ Your fucking committee members? When did your delusions become
this grand?
\_ Committee Members for 24HourDiff: brain, dbushong, ilyas,
jvarga, chiry, tom. And me.
Committee Members for Kais Motd: they're all listed here:
http://csua.com/?login=1
\_ Nowhere on here do I see who the Heroic Committee
for the Glorious People's Revolution members are.
\_ I think all it takes to be a member of the 'committee' is to
suggest something and explain why it's a good idea. -- ilyas
\_ ilyas is smart. You can thank him for a lot of ideas
that turned into actual features here (like user
tracking). I just implemented, that's all. -kchang
\_ In Communist Russia, user tracks YOU.
\_ your user tracking is beyond suck.
\_ if you know how to make it better, maybe you can
share your knowledge, or just shut the fuck up.
And if you think you can get away with everything,
you're wrong. scp, cron, sendmail, etc are all
logged under /var/log/*.log, accessible by
root/wheel.
\_ I think user tracking is against the spirit of the
motd.
\_ I've got your spirit right here pal. Who died
and declared you great arbiter of the motd and
all matters CSUAish? -dans
\_ I did. -God
\_ shell> /csua/bin/finger god
finger: god: no such user
\_ Wow, so are you really abusing root to track
who edits a world-writeable file? -meyers
\_ Come now, soda has a long history of root
abuse. Why break from tradition now? -dans |
| 2005/5/9-10 [Computer/SW/Unix] UID:37596 Activity:high |
5/9 Testing anonymity.
\_ Test failed. Nice.
\_ erikred DV 36864 0:00.01 sed s/[Cc]unt/twat/g /etc/motd.public
Why are you replacing one cuss word with another?
\_ variety?
\_ keeping twat and turning cunt to twat is minimizing variety
\_ test
\_ A silly experiment. All gone now. |
| 2005/5/6-8 [Computer/SW/Security, Computer/SW/Unix] UID:37555 Activity:nil |
5/6 A lot of web sites now have a login snippet on their main page,
for which Firefox does not display an SSL icon
(http://www.bankofamerica.com Are those logins safe? You can
usually find a specific login page within the website that
has the SSL icon. I assume bank sites are usually safe in
their design, but what about sites like
http://www.officedepot.com Some sites' login page
(http://www.bookpool.com/ac does not have a SSL icon, but
their login button specifically says "secure login", how does
it work? As an end user, how can one be sure the login/pw
information is encrypted while in transit?
\_ It's usually good practice to put the login page under SSL to
preempt concerns like yours. Many places don't have a login box
on their front page, and make you click through to an https link
to get a login box. Others put the login box on their front
page to save you that step, but the load of putting their front
page under SSL is prohibitive. If they say it's a secure login,
the HTTP Post that sends your information will be under ssl. If
you want to test this, put in a bogus login/password and watch it
jump to SSL when you click "login".
\_ For verification:
http://www.bankofamerica.com/signin/security_details_popup.cfm
\_ So you have to 'observe' the flashing by of the SSL icon
to distinguish these sites from sites that indeed use
no security. I guess a better question is, how do you
tell if the HTTP post used to send your login
information is under SSL?
\_ Best course of action: don't worry about it. if someone's
really intent on stealing your info, there are easier ways
to do it. There are non-technical ways to protect yourself
better. keep an eye on your account activity. get your
annual credit check (or more frequently if you're worried).
SSL is no guarantee no matter how Verisign wants to package
it.
\_ I find security policy varies significantly
between sites. Your password can be as strong as
you like, but often the "I lost my password"
feature is implemented with very little
security in mind. Better sites will allow you to
reset your password after you verified who you are
(via secret questions, etc), never revealing what
your actual password was. But some not-so-security-conscious
sites will simply email your password in
plain text, and sometimes all you have to do is to
provide your email address. Some sites will also
reset your password with only the email address.
You can only guess how careful those sites will
treat your data (such as credit card info).. I am
trying to sort out the sites that have my login
information so that the less secure sites do not
share the same password as the more
secure/important sites...
\_ The only way to be sure is to look at the source and see
how it's posting the login. But even then, you won't know
for sure that the authentication server is using weak
encryption.
\_ What's pretty funny is that gmail defaults back to http when you've
logged in, and they seem to have removed the setting the security
guy I mentioned, which let you set ssl for all mail access. -John
\_ My gmail still stays https and always has. I know yahoo
switches back to http after login.
\_ The guy I spoke to said it used to be configurable but was
taken out. If I turn any of my URLs into https, it stays
https, including turning all the links into ssl, but I know
of several people where it redirects to http. No clue why
it varies. -John
\_ You're right. I just never noticed it, because my
bookmark specified https. Thanks for the tip. |
| 2005/5/5 [Computer/SW/Unix] UID:37533 Activity:nil 66%like:34926 |
5/5 Today is 05/05/05 whether it's MM/DD/YY, DD/MM/YY, or YY/MM/DD.
\_ Thanks for sharing. |
| 2005/5/5 [Computer/SW/Unix] UID:37525 Activity:high |
5/5/ Can anybody think of a good way to save a directory structure without
the files? Like say you wanted to create the /usr/share/man
directory skeleton preserving directory permissions and ownership
but not keep any files as tar would normally do. I dont want to do
an tar/untar of everything and then go back and delete the files,
because we are looking at a few hundred megs in files. I have
various klugy ways to do this, but wondering if there was something
slick with existing utilities. Note: I want to *store* this
information (in a form which can be used to rebuild the tree
structure if necessary). I dont want to merely clone an existing
tree to another part of the disk, minus the files ... although i
suppose you could clone the tree and tar that skeleton and then
delete the tree.
\_ find /usr/share/man -type d | xargs tar rnf man.tar --mconst
\_ tar -n will do it! Thanks.
\_ tar -n will do it, but that isnt an option on all tars.
i suppose you can do a find -type f > /tmp/exclude and then
do tar cf man.tar -X /tmp/exclude ... but ugh. any other
thoughts?
\_ If your tar doesn't support -n, you could use zip:
find /usr/share/man -type d | xargs zip -g man.zip
\_ I suppose zip is a fact of life now in the unix
world and i shouldnt feel impure to use it. |
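mconst's one-liner above uses BSD tar's -n (no recursion); a sketch of the same idea with GNU tar, which spells it --no-recursion (the /tmp paths here are throwaway names, and this assumes GNU tar is what's installed):

```shell
# Archive only the directory skeleton (names, nesting, permissions),
# none of the files, then restore it elsewhere.
mkdir -p /tmp/skel-demo/a/b /tmp/skel-demo/c
touch /tmp/skel-demo/a/file1 /tmp/skel-demo/c/file2
chmod 750 /tmp/skel-demo/a/b

# -T - reads the member list from stdin; --no-recursion keeps tar
# from descending into each directory and sweeping up the files.
find /tmp/skel-demo -type d | tar cf /tmp/skel.tar --no-recursion -T -

mkdir -p /tmp/skel-out
tar xf /tmp/skel.tar -C /tmp/skel-out
find /tmp/skel-out -type f      # prints nothing: no files came along
ls -ld /tmp/skel-out/tmp/skel-demo/a/b
```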
| 2005/5/3 [Computer/SW/Unix] UID:37484 Activity:nil |
5/2 the boondocks comics
http://images.ucomics.com/comics/bo/2005/bo050501.gif 5/2 on
http://<DEAD>www.ucomics.com/tomthedancingbug/index.phtml<DEAD>
\_ it's well past the era when people were stupid enough to read
tomthedancingbug
\_ it means the media are full of garbage and any half-witted
individual can publish anything and still have
\_ I tried, I don't get the comic at all. It's still dan
Fillmore. - danh |
| 2005/5/3 [Uncategorized/Spanish, Computer/SW/Unix] UID:37480 Activity:nil |
5/2 well, since it seems to be post-a-comic day.
http://<DEAD>www.andysinger.com/sample4.html<DEAD>
\_ yes, but the other post was of a *funny* comic. 0/13
there. Didn't someone on the motd already point out
how UNfunny this guy is?
\_ Andy Singer's are political comics, like G.W. Bush
executing. Hey, now that could be unfunny
cartoon number 14 for Andy. |
| 2005/5/2-4 [Computer/SW/Languages/Functional, Computer/SW/Unix] UID:37451 Activity:nil |
5/2 I'm trying to figure out what delimiters inside the "scheme"
(protocol) portion of a uri may mean. (In http://foo.com http is
the scheme) I have seen ssl+http sometimes. Are there any other
characters that have special meaning, like plus? The standard is
not very clear on this.
\_ The scheme of a uri is anything before the first :. I think
it needs to be word characters, but I haven't checked on that
in a while. The most common ones are probably http, https, ftp,
mailto, and things like rtsp and stuff. gopher, if you're old
school.
\_ Right, but I'm programming an RMI service, and I think I
need to support things like ssl+myprotocol://blah...
\_ From the rfc, the scheme must start with an alpha character,
and can contain alpha, digit, +, -, or . |
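The RFC grammar quoted above (scheme = ALPHA followed by any mix of alpha, digit, "+", "-", ".") is easy to check with grep; a small sketch with a made-up helper name:

```shell
# Validate URI scheme names against the RFC 3986 grammar:
#   scheme = ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )
scheme_ok() {
  printf '%s\n' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9+.-]*$'
}

for s in http https ssl+http svn+ssh 1http 'bad scheme'; do
  if scheme_ok "$s"; then echo "ok:  $s"; else echo "bad: $s"; fi
done
# ssl+http and svn+ssh pass (plus is allowed after the first char);
# 1http fails (must start with a letter); spaces are never allowed.
```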
| 2005/5/2-3 [Computer/Companies/Google, Computer/SW/Unix] UID:37447 Activity:kinda low |
5/2 What's the best way to transfer every single one of your UNIX email
(on Cory, etc) to http://gmail.com? Thanks.
\_ mutt.
T . ; b hoser@gmail.com
-tom
\_ I tried to bounce several thousand pieces of my personal email
via mutt to gmail from my machine, gmail decided I was a spammer
and stopped delivering email from my machine to gmail for a few
weeks - danh
\_ Does gmail support IMAP? If so, the (otherwise abominable) UW IMAP
package contains an excellent utility called mailutil, which can,
among other things, transfer messages from one IMAP server (or a
local unix mailbox format file) to another IMAP server, and it is
very good about preserving flags, date/time stamps etc. -dans
\_ No, I am not even sure why they support POP, unless it's some
kind of bait and switch.
\_ You could just read:
http://gmail.google.com/support/bin/topic.py?topic=194
\_ Did you reply to the right post? Their FAQ doesn't offer
any real explanation why it makes sense for them to
offer POP support for free.
\_ Perhaps only a small % of their users use POP, but those
who do are far more influential in recommending
computer things to their friends. |
| 2005/4/28-30 [Computer/SW/Unix] UID:37418 Activity:nil |
4/28 emacs question. In tcsh, I can type a few characters, and press ESC-P
to complete it, based on history. Is it possible to do the same in
emacs, short of having to type "history" and then !<number>? I mean,
!! is cool, but having this tcsh feature in emacs would be cooler.
\_ Oh you can just type !<few characters> God, I need more sleep
\_ M-/ (dabbrev-expand). |
| 2005/4/26-27 [Computer/SW/Unix, Finance/Investment] UID:37366 Activity:low |
4/26 So I have a problem with vmware. It is unable to shrink my / drive
and says it is an Unsupported partition. When I do the following,
I see:
tommy ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
81892056 4140772 73591388 6% /
I don't remember mounting it as /dev/mapper. What does this mean
and how do I change it as a supported partition? Thanks.
\_ /dev/mapper is RHEL 4. That's an LVM volume. I've heard you can't
resize / with LVM, and I wasn't able to do it in RHEL 3.
Sorry, don't know vmware... -ax
http://www.redhat.com/archives/fedora-test-list/2004-March/msg02016.html
claims it's possible under fedora, might work under RHEL. -ax |
| 2005/4/22-23 [Computer/SW/Unix] UID:37313 Activity:nil |
4/22 The woman who claimed she found a finger in her bowl of Wendy's
chili last month was arrested. Hahaha I suspected it was her when
this incident first occurred.
\_ link?
\_ http://tinyurl.com/due52 GREAT NEWS! GREEDY WHORE!
\_ Good police work.
\_ Why didn't she go with a dead cockroach? Then the police would've
paid much less attention.
\_ Probably can't get as much money out of it as a finger would... |
| 2005/4/19-20 [Computer/SW/Unix] UID:37263 Activity:nil |
4/19 Stupid sed question: I know you can use UNIX environment variables
in a sed script by double quoting at the command line, i.e.:
sed "s/_momma_type_/$MOMMA/g" input.file
will be expanded by the shell before execution so that _momma_type_
will be substituted with whatever the value of $MOMMA happens to
be. Is there anyway do this in a sed script specified by sed -f?
or do I have to use a shell script to get shell substitution?
I'm guessing the latter, but I thought I'd ask just to be sure.
\_ Environment variable substitution is handled by the shell.
\_ You can do it in Perl. I don't know sed. Sounds like sed sucks.
\- i use sed and awk for a lot of stuff but if you are writing
enough sed to need to put it in a file, yeah use perl. --psb |
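As the thread says, sed itself never expands $VARS; the shell does. Short of switching to perl, the usual workaround is to let the shell generate the sed script before running `sed -f`, e.g. with an unquoted here-document (the file names here are made up):

```shell
# The shell expands $MOMMA while writing the script file, because the
# here-document delimiter (EOF) is unquoted; sed then sees a literal
# substitution command.
MOMMA=yo
sed_script=$(mktemp)
cat > "$sed_script" <<EOF
s/_momma_type_/$MOMMA/g
EOF

printf 'what _momma_type_ said\n' > input.file
sed -f "$sed_script" input.file      # -> what yo said
rm -f "$sed_script" input.file
```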
| 2005/4/15 [Computer/SW/Unix] UID:37200 Activity:very high |
4/15 Unix Wizards, how would I sort a list like this in numerical order?
I can't figure out the sort syntax. I am not a programmer!
+0.04gb
+0.11gb
+1.98gb
+10.94gb
+17.88gb
+2.72gb
+21.02gb
+3.31gb
\_ The regular "sort -n" will work, except you need to remove the
plus signs first: tr -d + | sort -n --mconst
\_ I am trying to figure out how to sort in that format!
Otherwise I will have to sort and then re-add the plus
sign and that seems lame.
\_ newsflash: you seem lame.
\_ whine whine whine.
\_ OP already said he's not a programmer.
\_ and using unix utilities is not programming.
\-gee maybe we should give an awk test before
giving people sloda accounts. obviously "not
programmer" = casual unix user. i've met people
who are technical people who never thought about
the fact you could "grep a web page" by doing
something like lynx -dump | grep, so things
obvious to some arent necessarily immediately
obvious to others.
\_ How about: sort -t "+" +1 -n <filename>
\_ Yeah, as long as the sign's always +, that's simpler.
(You could also do "sort -n +.1".) --mconst
\_ cat file.txt | perl -ne 's/^\+/ /g;print;' | sort -n |
perl -ne 's/^ /+/g;print;'
\_ sed 's/+/ /' file.txt | sort -n | sed 's/^ /+/'
\- those are retarded. learn to use sort, if it is in a shell
script and not already in a perl data structure or some such.
\_ Oh fuck you. Those handle negative numbers too.
\_ so does sort -t + +1 -n <file> | sort -t - +1 -n -s -r
\_ Wow, and I always know what the -s and -t options for
sort do. Not to mention - and +
\_ That's what man pages are for, man. sometimes it
does pay off to reinvestigate old tools.... |
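The strip-sort-restore pipeline from the thread (mconst's tr/sed approach) is easy to sanity-check on the sample data:

```shell
# Turn the leading "+" into a space so "sort -n" parses the number,
# then put the "+" back.  Runs on the list from the question.
printf '%s\n' +0.04gb +10.94gb +1.98gb +21.02gb +3.31gb |
  sed 's/^+/ /' | sort -n | sed 's/^ /+/'
# ->
# +0.04gb
# +1.98gb
# +3.31gb
# +10.94gb
# +21.02gb
```

Plain `sort -n` on the raw list would misorder it, since "+0.04gb" doesn't parse as a number for every sort implementation; stripping the sign first sidesteps that.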
| 2005/4/14-15 [Computer/SW/Unix, Computer/SW/Security] UID:37186 Activity:high |
4/13 Hey, if you're going to update nethack, update angband, too. You
could also install a variant, like NPPAngband:
http://home.comcast.net/~nppangband
\_ Interesting. Thanks for the pointer.
\_ there's even a competition:
http://mysite.wanadoo-members.co.uk/angband_comp/compo.html
\_ Installed angband (there was a ports version) - amckee
\_ NPPAngband is trivial to install. Why not install that too?
\_ Because I was up until 2:30 upgrading Perl and did this
between compiles? MAYBE I'll install it, though. =) amckee
\_ If by 'trivial' you mean 'completely manual', yes it was trivial.
I've installed it as NPPAngband, I did not overwrite angband
\_Oh no! There goes my weekend/life! -scottyg
\_ NetHack, Copyright 1985-2003
By Stichting Mathematisch Centrum and M. Stephenson.
See license for details.
No write permission to lock perm!
Hit space to continue:
\_ Unable to replicate with my two user-land accounts,
do you have any stale files around? Anyone else seeing this?
Send email to amckee/root, iff you see this and want it
looked at.
\_ i don't think you quite understand what userland means.
\_ You do realize that, in addition to OS, process, and
object level privileges, root accounts can run in
increased kernel priority levels? Granted, in this case
the problem is most likely to do with file permissions,
it is not an atypical usage of the word 'userland' to
refer to non-root/non-privileged users. Thanks for the
snideness, though.
\_ i still don't think you quite understand what
userland means. try looking it up in, say, the jargon
file. root accounts are not any different from
normal ones in terms of where they run (i.e., they do
not run in the kernel). the kernel will allow you to
do privileges things by being root, yes, but they are
still done by the kernel, not because you as root are
in the kernel mucking around. |
| 2005/4/6-8 [Computer/SW/Security, Computer/SW/Unix] UID:37085 Activity:nil |
4/6 In Linux, when I type "limit" I get to see the max # of file
descriptors I can have. How do I check the number of descriptors
I'm holding and how do I change it? "limit descriptors 8096"
doesn't work (think I might need root or something)
\_ limit/ulimit work at the shell level. You can see the number of
descriptors held in /proc/self/fd. To change the max fd's, you
may need to change the hardcoded limits in /etc/security/limits.conf.
Your syntax is right, but you are probably trying to go past the
hard limit (limit -h to view). Yes, you will need root access to
change the hard limit. |
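csh's "limit descriptors" is spelled "ulimit -n" in sh/bash; a quick sketch of inspecting the limits and current usage (the /proc path is Linux-specific):

```shell
# Soft vs. hard limits on open file descriptors:
ulimit -Sn          # soft limit: what your processes currently get
ulimit -Hn          # hard limit: the ceiling; raising it requires root
ls /proc/self/fd    # the descriptors this shell holds right now

# Any user may raise the soft limit up to the hard limit, e.g.:
#   ulimit -n 8096
# which fails with "Operation not permitted" if 8096 exceeds the
# hard limit -- the same wall the original poster hit.
```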
| 2005/3/30-31 [Computer/SW/Unix, Computer/SW/Security] UID:36971 Activity:kinda low |
3/30 ssh port forwarding/X11 issue: Any ideas on how to solve this
problem: I ssh over to a remote host that shares my same home
directory. My forward X11 works okay until I sudo to root.
I get a message about wrong authentication. Any ideas ?
Being root on the base machine works just fine for X11.
\_ xhost
\_ NFS mount root squash making your $HOME/.Xauthority not readable
perhaps.
\_ Another possibility is sudo not retaining $HOME. But anyway,
look into the xauth command. |
| 2005/3/30-31 [Computer/SW/Security, Computer/SW/OS/Windows, Computer/SW/Unix] UID:36959 Activity:nil |
3/30 In Windows XP, when I share [export] a folder with read/write/execute
permissions for ALL, it still asks for username/password. How do I
configure it so that it never asks for user/password?
\_ You need to enable the Guest account. |
| 2005/3/28-30 [Computer/SW/Unix] UID:36930 Activity:moderate |
3/28 Is it possible to nfs export the same filesystem twice under two names
one read only one read-write. Perhaps with a symlink? Perhaps
mounting a partition in two different places with two different names
and exporting them separately (one rw one ro)
(or am I begging for trouble doing something like that).
\- You can mount it in different places, but you can't export
two diff names.
\_ You might be able to do this if you use something like vnconfig.
1. Make a file of the size of the fs you want exported:
# dd if=/dev/zero of=myfile bs=1024 count=X
2. Configure the device:
# vnconfig -c -v /dev/svnd0c myfile
3. Make a fs on the file
# newfs /dev/svnd0c
4. Configure the second copy:
# vnconfig -c -v /dev/svnd1c myfile
5. Make two directories, one for ro, one for rw mount:
# mkdir /mnt/ro /mnt/rw
6. Mount the same file, once rw, once ro:
# mount -o ro /dev/svnd0c /mnt/ro
# mount -o rw /dev/svnd1c /mnt/rw
7. NFS export the ro and rw fs's separately
(Note I haven't tested the commands).
\_ If you're going to go this route on FreeBSD, you can use null
mounts. See man mount_nullfs. I suggest you make the null-mounted
FS the read-only export; although the manpage warnings are a bit
strong, nullfs used to have some consistency problems, so it's
best to keep it simple. (Null mounts are also useful for
populating jails.) -gm
\_ linux has --bind mounts as well. And solaris has loopback mounts
as well
\_ You can mount it in two different locations, one r/w and one r/o
but I don't think you can export it twice. |
| 2005/3/25-31 [Computer/SW/Security, Computer/SW/Unix] UID:36883 Activity:moderate |
3/25 My team (Yahoo! login/registration/access) has several
software engineer positions open at all experience levels. -atom
\_ I need a part time job, please give me a flexible part time
job because school sucks. -kchang
\_ How about fucking change the default login to be secure login??
Every other fucking website in the world uses secure login. Why
does Yahoo insist on using non-secure login as default????!!!
\_ Because it is secure, dufus. Assuming you have javascript
enabled anyway. They issue a random challenge string that
you answer by hashing together your password and the challenge.
\_ Oh wow, we don't really need SSL I guess.
\_ Wow, no, it's needed for some things.
\_ Why doesn't yahoo use SSL login by default?
\_ Well, the obvious reason is they don't want to buy
hardware that can handle craploads of SSL connections,
which is a lot more expensive than the hashing scheme.
\_ Aren't you in LA? |
| 2005/3/25 [Recreation/Food, Computer/SW/Unix] UID:36870 Activity:high |
3/24 Anyone else in the mood for some Wendy's chili?
\_ Yes, but I can't put my finger on why...
\_ The quality of Wendy's finger foods is well known.
\_ You've got to hand it to them: they make a decent meal. -gm
\_ I hear the tip is included in the price of the meal -eric
\_ Do you have a point?
\_ It's finger lickin good.
\_ This is a real nail-biter.
\_ I bet you think you're cute(icle).
\_ Oh, stop carpaling.
\_ Do you expect us just to knuckle under?
\_ Oh, give it a wrist already. -gm |
| 2005/3/24-28 [Computer/HW/Memory, Computer/SW/SpamAssassin, Computer/SW/Unix] UID:36849 Activity:nil |
3/24 I have a procmail process on Linux that I would like it to talk
HTTP to a servlet (or any URL for that matter). What is the most
efficient (the smaller the memory footprint, the better) and the
most scalable (we do have heavy email volume) and the most performing
way you can think of? TIA. |
| 2005/3/23-24 [Computer/SW/Unix] UID:36828 Activity:kinda low |
3/23 What's the best way to configure a day-to-day WinXP user
account to be unprivileged? Is there something like sudo,
or does the user need to switch to admin before making
system changes/software install?
\_ For some program icons in the Start menu, you can right-click on it
and get a "Run as..." option. If not, go to Properties -> Shortcut
-> Advanced -> Run with different credentials.
\_ Ah cool. Then when you "Run As" does windows prompt you
for the root password?
\_ Yes, but there are some administrative tasks that need root
which I don't know how to 'sudo'. Example would be control
panel stuff and things done through Explorer itself.
\_ And for those tasks do you first switch users
and then run explorer and/or control panel?
\_ Yes, but kind of a PITA.
\_ Is there a non PITA way to do it?
\_ I'd love to hear it. -pp
\_ my trick for this is to start
iexplore as root with "run as"... you
can probably bookmark desktop,
control panel, etc in root's iexplore but i
just navigate there myself.
\_ Thanks! That is great. kinda
like "sudo bash" -op
\_ You could do the opposite; run certain programs (Outlook, IE,
etc.) with fewer privileges. Search http://microsoft.com for "DropMyRights". |
| 2005/3/18-4/4 [Computer/SW/Unix, Computer/SW/Security] UID:36744 Activity:nil |
3/18 Office account holders - please clean up your directories, or
we'll have to unleash the wrath of root (and karen) on you! =) |
| 2005/3/9 [Computer/SW/Unix, Computer/SW/OS/Solaris] UID:36604 Activity:low |
3/9 After I start an xterm in SunOS 5 and Linux, is there any command to
change the title of the xterm window? Thanks.
\_ Check ~geordan/bin/retitle
\_ That 2 should probably be a 0 -- that way the new title stays
even when the window is minimized (or whatever the standard
X term for that is). -alexf
\_ using 1 will change the icon title. using 0 changes both.
\_ I always just type:
% cat
^[]0;New Title^G
^D
(^[ = escape, ^G = ctrl-G, ^D = ctrl-D) --dbushong |
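The bytes dbushong types via cat are xterm OSC sequences: ESC ] followed by a code (0, 1, or 2, matching the discussion above), the title text, and BEL. Wrapped as shell functions (the function names are made up):

```shell
# OSC title-setting sequences for xterm and compatibles:
#   0 = set icon name and window title
#   1 = set icon name only
#   2 = set window title only
set_both()  { printf '\033]0;%s\007' "$1"; }
set_icon()  { printf '\033]1;%s\007' "$1"; }
set_title() { printf '\033]2;%s\007' "$1"; }

set_both "New Title"   # same bytes as typing ^[]0;New Title^G into cat
```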
| 2005/3/9 [Computer/SW/Languages, Computer/SW/Unix] UID:36593 Activity:high |
3/8 Favorite shell colors?
White on black: ...
Black on white: ..
\_ Does it still count as black on white if you have transparency
turned on in Terminal.App, but its primarily black text on a
"white" background?
\_ And here I read this question about what color shell I should
wear under a suit. I thought the choices were egregious.
Multicolor on anything: .
Amber on black: .
\_ Amber sounds nice, how do you set that?
\_ I use RGB 255,160,0 ...like old monitors. --op
\_ Me too -- RGB 189, 174, 81 with about 30% transparency
on the black background. Cursor and highlight text dark and
light blue, respectively.
\_ Amber reminds me of the old 80x25 monochrome days.
Green on black: ..
Yellow on black: .
Yellow on black: .. (Yellow on black has a very high vis. contrast.)
\_ light gray on black: .
Wheat on black: .
\_ tan on dark green: ..
\_ heh, i think that's the default i started out with that
eventually led to me using wheat on black -pp
SGI colors - white on midnight blue: .
Back in Black: .
Asian on White: .
Blacks on Blondes: . |
| 2005/3/8-9 [Computer/SW/OS/Windows, Computer/SW/Unix] UID:36574 Activity:moderate |
3/8 I am thinking of switching to cygwin, but I'm afraid of certain
things that bit me in the past. For example, will I run into
problems with DOS's stupid \r\n vs \n compatibility? How about
file names with caps? Say I tar up file names from DOS and bring
it to Linux, will it all get messed up because DOS is case
insensitive? Any other comment before I dive into cygwin? ok thx.
\_ Switching to cygwin from what, exactly? cygwin is designed to help
Unix people cope with Windows. It's not a complete port of the Unix
userland to Windows, and it's not a complete replacement for the
Windows shell (although it's getting better, on both counts). You
will notice assorted oddness that points out just how not-Unix it
is. Still, it's better than being stuck with cmd.exe if you find
yourself flailing with the normal Windows UI. (I know I do.)
To answer your particular questions:
you can tell Cygwin what to use as the end-of-line sequence; cygwin
commands and filenames are case-sensitive; tarballs will be extracted
with the same capitalization as is used on the Windows machine;
beware of absolute paths (cygwin represents C: as /cygdrive/c,
which confuses many Windows apps) and the direction of slashes
(most Windows apps handle / as a directory separator, but Cygwin
doesn't like unquoted, unescaped \ for same). All in all, I'm
quite happy with it. -gm
\_ I also need basic scripting like Perl and Bash. Are those
included in the cygwin distribution?
\_ They're available. Bash is in the default install. For perl,
expand the "Interpreters" tab in the installer and select it.
\_ Yeah, I use Cygwin with Windoze too. It's great. I'm usually
in a cmd.exe window and hardly ever use tcsh or bash shells.
Most frequently used commands are zip, unzip, tar, and grep.
Your biggest challenge will be figuring out which packages to
download.
\_ I'll second it being great. I use tcsh almost exclusively instead
of cmd.exe now. The rootless Xserver (X-windows mix with your
regular Windows-windows and don't take over the whole screen)
isn't stellar but it works fine. Command line scp, globbing,
xterm.. lots of nice stuff. --dbushong |
| 2005/3/7-8 [Computer/SW/Unix] UID:36563 Activity:nil |
3/7 Can somebody recommend a web hosting plan that offers unix shell
access? I'm currently paying $24.95 with verio and I want something
cheaper. I'll settle for IMAP/SSL access if I can't get unix
shell access. I'm an old fart that can't let go of pine. Thanks.
\_ What's wrong with soda? -- old fart that can't let go of RMAIL and
/usr/bin/Mail
\_ http://alterhosting.net offers cheap domain reg. plus IMAP/SSL mail
handling. I got mine for about $30/year, with about 100 MB
mail spool size and a web interface to create mail accounts
in the domain and set quotas. each gets its own IMAP box
and IMAP/SMTP+SSL authentication settings |
| 2005/3/7-8 [Computer/SW/Security, Computer/SW/Unix] UID:36560 Activity:nil |
3/7 Are there any ISPs that still offer generic dial-up PPP accounts that
works with the Windoze generic dialer and don't require custom dial-up
clients? I have an AT&T Global Dialer account, but it needs the Global
Dialer client. I remember the old days where all I needed was to enter
the phone number, login and password into the Windoze dialer, and it'd
work. Thanks.
\_ SBC Global works for me when I'm on the road. - jvarga
\_ http://ispwest.com works well for that. even works with linux.
\_ http://sonic.net
\_ They've always had the at&t dialer, but you've been able to
authenticate with PAP and with login in the past with the
8764287346@worldnet.att.net and the gibberish password. Look for
an account.txt file -dwc |
| 2005/3/2-3 [Computer/SW/Unix] UID:36496 Activity:high |
3/2 Is there a way to do "grep p1 file | grep p2" with one grep without
piping? ie, "grep p1&&p2 file"? Thx.
\_ use egrep? man egrep. Look for egrep -e
\_ Can I write an alias where I can say "mygrep p1 p2 p3 ... file",
where I can specify variable number of patterns, with the last
one being the filename?
\_ egrep -e '(p1|p2|p3)' filename
not sure about the exact syntax
\_ are you sure about that? the original poster wants the
conjunction rather than the disjunction of several patterns
(i.e. all of them, not "any one of"). i don't see how
any grep options including those you listed would allow
that in a single grep. -alexf
\_ you are right, alexf. I think you can use some
                 regular expression to do an AND.
\_ Can you use sed rather than grep?
$ sed -n -e '/[pat1]/h' -e '/[pat2]/h' ... -e '/[patN]/p' [file]
I'm not sure this will work w/ all versions of sed.
\_ In gnu grep at least you can do:
egrep -x '.*(p1.*p2|p2.*p1).*' file
..though this is tedious and doesn't work for overlapping patterns.
I usually just pipe grep to grep. --dbushong |
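    Putting the thread together: awk gives the conjunction in a single
    process (every /pattern/ must match before a line prints), while the
    single-grep trick has to enumerate orderings. A small sketch with
    made-up sample data (file name and patterns are hypothetical):

    ```shell
    # Only the first line contains both patterns.
    printf 'foo bar\nbar only\nfoo only\n' > /tmp/andgrep.txt

    # AND of patterns, one process, any order:
    awk '/foo/ && /bar/' /tmp/andgrep.txt

    # The single-grep workaround from the thread, enumerating both orders:
    grep -E 'foo.*bar|bar.*foo' /tmp/andgrep.txt
    ```

    Both commands print only "foo bar"; the awk form scales to three or
    more patterns without the combinatorial blowup.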
| 2005/2/28-3/2 [Computer/SW/Unix] UID:36454 Activity:kinda low |
2/28 I have a a few hundred GB of data that exists in two locations
connected by a slow network. When a change is made in one place,
I want it to update the second location (one way only with a 'master'
copy and a 'mirror'). I use rsync for this now, but it is just too
slow. Products like SnapMirror exist if you want to spend money,
but they are also hardware dependent (Netapp). Is there some better
way to solve this problem? I am not opposed to coding my own solution,
but I'm not sure where the innovation needs to be made. --dim
\_ we are currently considering a product by availl that does this.
we are a huge netapp shop and use snapmirror but needed it to be
           r/w on both ends and platform independent.
\_ how much data is changing? Which part of rsync is the bottleneck?
rsync only transfers the parts that have changed, afaik... are you
using compression (if text-ish data)?
\_ the problem with rsync is that it .. at run time ... scans the entire
set of files, on the local and remote, and looks for differences
between them. So it is reading every file and checksumming every
file, regardless of whether they changed or not, and communicating
this information across the link between source and destination.
What you need is something that hooks into the file/operating
system and only notifies of changes when they happen, and
propagates them over. This is what SnapMirror does, and does quite
well. Maybe you need to look for coding a similar solution. As a
quick hack, you could just have rsync go over the files/directories
        that have had a timestamp change since the last run, instead of
        scanning the whole directory tree. -ERicM
\_ actually, i think if the files are the same size and same
modification date, it'll skip them, even if the files are
actually different.
\_ There are options in rsync to do this, but in general usage
you're wrong.
\_ Actually, I'm pretty sure (s)he's right; look at the docs
for the --size-only option and what it says about the
Normal behavior. --dbushong
\_ even if you tell rsync to only check file sizes/timestamps,
              it *still*, at runtime, has to communicate with the
remote rsyncd to determine if local and remote size+timestamp
are different, for each file. This will eat time and
bandwidth galore, unless you're trying to only sync a few
large files. -EricM
\_ If you are willing to do a bit of hacking this isn't so hard. You
want a custom NFS server that journals modifications and ships them
to the slave. The slave then applies the journal. If the master is
a linux box you should look into FUSE (File system in User SpacE or
something like that), as this would be pretty easy with FUSE + clue
--twohey
\_ Veritas has a tool, called Veritas Volume Replicator that does
this. I have never used it, but considering my previous experience
with Veritas products, I would expect it to work fine. It is
also cross platform. -ausman
\_ We used VVR (as well as Cluster FS) at Walmart. It works very
well and I'd definitely use it again. I love Veritas' file
system and volume management products. (Not cheap!) -- Marco
\_ How about giving rsync a list of files that have been modified
since the last rsync? First find recently modified stuff on the
master, then rsync only that stuff to the mirror. You might
          be able to hack rsync to do this for you, like:
rsync --more-recent-than 24 hours
--PeterM
\_ This is a good idea.
\_ How about OpenAFS or Coda? They both have mirrored modes, IIRC. |
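    PeterM's idea can be approximated without patching rsync: build the
    list of recently changed files with find, then hand it to rsync's
    --files-from so only those paths are considered. A sketch (the paths
    and the 24-hour window are hypothetical; the rsync invocation is shown
    but not run here):

    ```shell
    # Hypothetical master copy with one freshly touched file.
    SRC=/tmp/master_demo
    mkdir -p "$SRC"
    touch "$SRC/changed.dat"

    # Files modified within the last 24 hours, relative to $SRC.
    (cd "$SRC" && find . -type f -mtime -1) > /tmp/recent.list

    # Push only those files to the mirror (not executed in this sketch):
    #   rsync -a --files-from=/tmp/recent.list "$SRC/" mirrorhost:/data/mirror/
    cat /tmp/recent.list
    ```

    This skips the full-tree scan on both ends, at the cost of missing
    changes that happen between runs with skewed clocks.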
| 2005/2/23-24 [Computer/SW/Unix, Computer/SW/Unix/WindowManager] UID:36387 Activity:high |
2/23    Has anyone managed to change the copy-paste behavior of Windoze to
        be more Unix-like? (highlight yields copy, middle click yields paste)
thx
\_ Maybe tweakui can do this.
\_ No.
\_ No offense, but the Unix way is stupid. Highlight something and
the entire copy/paste buffer is gone. I am glad windows is not
broken in the same way.
           \_ Why are you highlighting something you don't want to paste?
\_ For example, in a Windows editor you can highlight something
and then hit paste to replace it with what's on the clipboard.
This is annoying to do in Unix.
\_ Maybe you just click on the window and you accidentally
highlight something?
              \_ I usually highlight as I read; it gives me a visual cue
as to how far I've read, in case I need to switch to a
different document and then switch back.
                 Also, highlight-as-copy makes it really hard to overwrite
with a paste.
\_ I bet you just love click-to-focus too?
\_ Yes. It makes much more sense than mouse focus.
\_ How so? What's the purpose of pointing your mouse cursor
at some window you don't intend to interact with? It's
a waste. --likes-xmouse-but-hates-unix-copy/paste
\_ To get it out of the way, because you bumped it by
accident, etc. xmouse sucks for the same reason as unix
copy/paste. Although unix copy-paste can be convenient,
it's flaky and you can easily accidentally highlight
something when placing the cursor for the paste. Unix
human interface all suck shit except fancy cmd shells.
\_ see, there's your problem. unix paste works right
w/ focus-follows-mouse. you don't place cursor
as some extra step, you just middle-mouse the new
text right where it belongs.
the old unix mouse model is for people who think
and then act; not for clicking and
dragging all over the screen looking for visual
response.
\_ But really sucks when you're trying to paste
the URL into your webbrowser that's already open
to a page. KDE has a decent workaround with its
black "X" button, which deletes the current URL
and places the cursor in the location bar.
\_ with most browsers you can just paste a URL
into the main frame. I think you're just
not curious enough.
\_ The Unix way IS stupid. It makes no sense whatsoever. The people
who first implemented the method in xterm were morons.
\_ What more can you expect from MIT?
\_ does anyone know how to make gnome not auto-raise when I click
somewhere in the window pane? I'd like to operate on deeper
layers and only raise when I click the title bar, etc.
\_ what I like about unix is the double left click then right click
highlighting a region. windows you have to hold down shift key. |
| 2005/2/23-24 [Computer/SW/OS/Solaris, Computer/SW/Unix] UID:36382 Activity:nil |
2/23 Trying to load tcsh on a SunOS 5.7 machine, getting C compiling
errors. Resources? Suggestions? STFW returns nothing useful.
\_ what errors?
\_ Trying to load it or compile it? If you are just trying to load
it, get it from sunfreeware and pkgadd it. |
| 2005/2/20-21 [Computer/SW/OS/OsX, Computer/SW/OS/FreeBSD, Computer/SW/Unix] UID:36339 Activity:moderate |
2/20 I have several gigs of files that I need to transfer from a bsd
machine to an os x machine. What's an efficient way of doing this?
(It's way too many files to gmail to myself.)
\_ Umm, have you heard of ftp, http, scp, rsync, etc.? Email is one
of the least efficient means imaginable for this kind of thing.
\_ rsync, followed by tar | ssh, followed by create a tar/gz file and
use any of the other methods.
        \_ Thanks, but I ended up just using ftp (don't know why I
           didn't think of it myself). -op
| 2005/2/20-21 [Politics/Domestic/911, Computer/SW/Unix] UID:36338 Activity:kinda low |
2/20 Hey jwang, I have a suggestion/feature request. Rather than deleting
politics which I know you hate, how about moving them to another file,
like /etc/motd.politics? I appreciate the hard work you put into
political cleansing in the past few years as it makes motd more
compact, but it would be nice if they're moved instead of eradicated.
You got root, and/or you got ties to root to make it
happen. How about it jwang?
\_ fuck you kchang.
\_ fuck you ilyas.
\_ fuck you meyers.
\_ The above illustrates why conservatives are running the show
(cuz the other side can't get along with one another)
\_ I am on the liberal side now? Someone forgot to let me
know. I don't know (and don't care) what kchang's
politics are. Meyers' politics are irrelevant as he is
an idiot. -- ilyas
\_ Wow, it's like you've known me my whole life! Please
explain to me why foodstamps at gunpoint is bad, but
funding your research at gunpoint is good. -meyers
\_ I personally would go for that, though I think it is sort
of a solution looking for a problem. If I am not interested
in something, I just don't bother with it. I don't go
around trying to decide what is appropriate for others
to read. Actually, how about we create a motd.moderated
and a motd.unmoderated and you can be responsible for
maintaining motd.moderated. -ausman
\_ This is fucking hilarious. How many people does anyone really
think would read motd.moderated?
\_ after 911 the motd went into a lockdown and everyone
switched over to the underground motd. Search for
"underground motd peterm" in the archiver and you'll see.
It's really no big deal, people adapt quickly.
\_ I have a better idea. How about instead of doing that, jwang
gets a fucking clue? -- ilyas
\_ go libertarian go!
        \_ Good idea, with one slight change: just make motd.unmoderated
a symlink to motd.public.
\_ Can we also symlink motd.moderated to /dev/null? |
| 2005/2/19-21 [Computer/SW/Unix, Recreation/Computer/Games] UID:36258 Activity:low |
2/19 EQII's new /pizza command is really quite brilliant, but they're missing
out on dozens of other marketing opportunities. What sort of ideas
can you think of?
\_ /hooker
\_ Yay!
\_ /term paper
\_ /shower
\_ http://www.cbsnews.com/stories/2002/10/17/48hours/main525965.shtml
http://www.geek.com/news/geeknews/2000dec/gam20010105003667.htm
I recommend "/psychologist"
\_ You mean M-x doctor
\_ /quit |
| 2005/2/18-20 [Computer/HW/Memory, Computer/SW/Unix] UID:36235 Activity:low |
2/18 I have a 64MB USB thumbdrive and I'd like to put a small
version of knoppix on it to use as a rescue medium. I don't
want X. I do want all the cool hardware detection that knoppix
does so well. Any ideas for something already made for this?
\_ http://www.knoppix.net/forum/viewtopic.php?t=12964 YMMV. Let me
know if it works. I want to do the same for Auditor on 1GB
           (http://new.remote-exploit.org/index.php/Auditor_main) -John
\_ Hi John, I found this, but it can't find the USB for
mounting the root FS after bootup. Has an initrd though.
http://www.tux.org/pub/people/kent-robotti/looplinux/rip
It is well-documented, but not as cool as knoppix. -brett
\_ Nifty--I intend to muck around with this sometime next
month (don't have any working unix boxes right now) so if
you drop me a mail I'll let you know if I figure out
how to do it. It's also tremendously reliant on whether
your bios can boot from usb, in what order you load the
drivers, etc. You may also want to look at M0n0BSD
               (http://www.m0n0.ch) -- I'll see the author in a few days
and can ask him for help. -John |
| 2005/2/16-17 [Computer/SW/Editors/Vi, Computer/SW/Unix] UID:36199 Activity:low |
2/16 Is there a way in ksh to set tab to auto-complete filenames as it
does in bash?
\_ isn't double-ESC good enough?
\_ No, double-ESC is like shit. TAB is way better. -!op
\_ set -o vi (can't have emacs style line editing and TAB,
in traditional ksh)
\_ I don't think there is in traditional ksh. In pdksh,
set -o vi-tabcomplete |
| 2005/2/11-12 [Computer/Companies/Apple, Computer/SW/Unix] UID:36147 Activity:nil |
2/11 More links for Scrolling Trackpad iBook Dude:
http://forums.macnn.com/showthread.php?s=&threadid=245015
http://www.ragingmenace.com/software/sidetrack/index.html
\_ what is 2-finger scrolling?
\_ Ask yermom, she likes it.
\_ clever. |
| 2005/2/10-11 [Computer/SW/Unix] UID:36134 Activity:nil |
2/10 For the guy who wanted 2 finger scrolling on an iBook:
http://www-users.kawo2.rwth-aachen.de/~razzfazz
        (I don't know why you want this, but here you are)
\_ Heya, Thanks! I thought the 2-finger feature was in the
hardware, so now I have more questions... but thanks. |
| 2005/2/9-10 [Computer/SW/Unix] UID:36115 Activity:low |
2/9 What are some good oss search engines that can parse an HTML page
and spit out the top X relevant keywords? TIA.
\_ lynx -dump $URL | sed '/^References$/,$d' | perl -ne\
'while(s/([a-z]+)//i){print "$1\n";}' | sort | uniq -c | sort -rn
\_ Pretty cool. But I think the op was thinking something
kind of like google.
           \_ Yeah, as much as I figured, but google or anything remotely
of the sort relies on _multiple_ documents linking to each
other to establish relevance/importance/etc. If all you have
to work with is a single document with no context, there's
rather little you can do unless you want to get neck-deep
in natural-language issues (well, knee-deep if you hack up
something to figure out which words are "unusually" common
in this document compared to the language at large, but
any serious solution would require some amount of parsing
and language understanding). Hence the above silly hack,
which I meant largely as a joke. -alexf
\_ What if you can assume that the page authors aren't
trying to game the system with off-topic keywords, etc? |
| 2005/2/5-7 [Computer/SW/Unix, Computer/SW/Languages/Misc] UID:36071 Activity:nil |
2/4 I want to add a "glossary" page to my website (for stuff related
to a current project). I know I could just write the html myself,
but is there some sort of utility for creating glossary pages,
which, for example, would make it easy to add or delete entries?
Thanks.
\_ Use a Wiki.
\_ Thanks. Is there any other solution? |
| 2005/1/26 [Computer/SW/Unix] UID:35922 Activity:high |
1/26    We know there are approximately 365.25 days in a year. Our ancestors
        didn't know better and had to keep adjusting the # of days in a month
        just to keep in sync, and now we have this unintuitive way of
        counting the number of days in a month, the leap year, etc., and it's
        somewhat of a hassle to teach children and to program into
        computers. Suppose you were to design a completely new calendar, one
        with fewer rules and exceptions to remember, one that can be scaled
to the next few million years to account for earth's precession, and
one that perhaps could be extended to other planets. How would you
design and partition your calendar? It's an open question, there are
no right/wrong answers.
\_ If you're interested in time and clocks and calendars, I highly
recommend the books "longitude" and "splitting the second".
\_ I'm sure the perfect system would involve centons, centars,
sectons, sectars, and yahren.
\_ Everything the same except the last, "partial day" is irregular and
ignored once per year. Well, I guess we could try to fix the months.
There needs to be 13 months of 28 days. The 13th month will be
inserted as sexember, before september, and July/August renamed
Georgy/Bush.
\_ our society is heavily based on weeks (Sun/Sat) and quarters
(esp in the financial field). I'm not so sure that changing
the week system is a good idea. So as a compromise, I'd
probably get rid of the months and just keep the weeks, so that
we'd say that we're week 1-week 52. I'd keep the last day of
week 52 as a "flexible day" to account for earth's precession,
kind of like what the Romans did. This idea is simple and
most importantly, it is backward compatible.
\_ Wut? I kept 7 day weeks. Sure the 13 months don't divide
evenly by 4 but so what. Seasons don't fit neatly on the
ends of months as it is. In my system the flexible day
falls outside any week (worldwide party day! replaces
Jan 1st hangover.)
              \_ sure it works, but how do you get superstitious people to
adapt to it?
\_ We have the 13th days of months, why not 13th month?
13/13/2013 would be a fun day! Man I really want this
system now. I never thought about this before.
   \_ Make sure you define which year. 365.24218967d = 1 mean tropical year;
365.2425 days = the average year length in Gregorian calendar
\_ Make everything multiples of 10 (or 8). 100 seconds in a
      minute. 100 minutes in an hour and 10 hours in a day.
Obviously, this implies redefining minutes and seconds.
\_ Here's another idea, 364 = 2*2*13*7, so...
How about having 26 days in a month with 14 months (364 days).
Each week will have 13 days. |
| 2005/1/24-26 [Computer/SW/P2P, Computer/SW/Unix] UID:35877 Activity:kinda low |
1/24 Of all the p2p software, which is the fastest at transferring large
(>1GB) files from one person to another? Some are premised on
multiple concurrent uploads to speed up the download (e.g.,
BitTorrent), which is great for popular files, but I have a large
set of data that would only be interesting to one other person. Is
FTP still the best way to go?
\_ Split up the file and use multiple FTP connections. The improvement
over single FTP connection is large if the distance is great (e.g.
between California and Japan), where the bottleneck is the roundtrip
time instead of bandwidth.
\_ p2p isn't about moving files from one user to the next. In your
      example above, just putting the files on a website would be as fast.
\_ One caveat with just putting files on a website, most Apache
builds can't send files larger than 2 gigs. -dans
\_ Also, how much data are you talking about? Tens of gigs, or
terabytes? Once you get into the terabyte range you're probably
better off just yanking the hard disks, and fedexing them.
Sneakernet is still the bandwidth king. -dans
\_ "Never underestimate the bandwidth of a station wagon full of
tapes." -some guy in the fortune file
\_ I have files around 2-5GB in size, so FTP sounds like the way to go.
        Any recommendations for free, secure FTP servers for Windows XP?
Going back to the p2p model, at what point does it become efficient?
That is, how many people does it take to share a file such that it
gets distributed fairly quickly? --op. (ps - these are VMWare
images)
\_ If you can initiate the transfer you might consider scp or rsync
with the appropriate flag to compress on the fly. As to your
p2p question, I think your understanding is a little flawed.
What p2p lets you do is aggregate bandwidth from more than one
source. Simplifying a lot and ignoring overhead, it's going to
take the same amount of time to transfer a file from a single
server connected via a T1 (1.5 megabits per second) as it would
to transfer a file via a p2p network where six sites with
256 kilobit per second connections are hosting the file. Add
more users and you add more bandwidth.
-dans |
| 2005/1/24-26 [Computer/SW/Unix] UID:35874 Activity:low |
1/24 How do I find out the maximum allowable process size on lesbians?
\_ malloc
\_ uh, how about finding out without getting squished?
\_ Run "limit".
[ deleting bitch ]
\_ limit is a csh thing.
\_ isn't "ulimit" the bash equivalent? |
| 2005/1/21-22 [Computer/SW/Unix] UID:35833 Activity:nil |
1/20 Can you guys recommend some good photo gallery software? I have
lots of pictures that I'd like to put up on a personal server, but
don't have any experience in this field. I hoping for something
simple and elegant and hopefully with thumbnails on an index page
of some sort. Any thoughts?
\_ Gallery. -John |
| 2005/1/16-17 [Computer/SW/Unix] UID:35735 Activity:low |
1/15 When I download a large file via ftp, my router/NAT keep closing the
control connection. Is there a way to keepalive from the client side?
\_ Passive FTP or SCP, if you can. Otherwise consider tunneling
the FTP over an SSH session if that is an option. Of course if
your router just generally times out sessions after x minutes with
no regard to source/destination, you're SOL. -John |
| 2005/1/14-17 [Computer/SW/Unix, Computer/SW/OS/OsX] UID:35723 Activity:kinda low |
1/14 I want the 2 Macs in my home to share files. Is NFS the fastest and
most reliable way? (I heard bad things about AFP.)
\_ OS X NFS doesn't really work. What "bad things" did you hear about
AFP? -tom
\_ ssh is broken over AFP, at least if done through the Finder
\_ just curious, why and how do you do ssh over AFP? - op
        \_ I heard that AFP is slow and broadcasts endlessly "i am here, are
u?" On the plus side, probably only AFP can preserve all Apple-
centric file attributes. Please correct me if I am wrong. tnx.
\_ AppleTalk broadcasts "i am here", AFP does not. You do
not need AppleTalk in order to use AFP on OS X. OS X
uses a TCP/IP based AFP client/server.
\_ I use AFP (w/o ssh) btwn my iBook and my G5 on a regular basis.
It works well even when my iBook connects via Airport. Finder
might hang a bit when you are copying files to the AFP volume
and you try to open a folder you don't already have open.
I wouldn't recommend NFS. Other than no forked file support
           and screwing up finder icons, sometimes NFS on OS X may
corrupt files.
Another possibility is use smb (you can turn on windows file
sharing in the sharing control panel/preference pane).
If you just want to mirror files btwn two Macs rsync[_hfs] and
ssh work pretty well as does cvs.
\_ Is rsync_hfs related to rsyncX?
\_ Yes. rsync_hfs is the patched version of rsync that is
the command line core of rsyncX. Although rsyncX is
said to be open source, I was only able to locate the source
for rsync_hfs. Perhaps it is deliberately so. BTW rsync_hfs
supposedly causes data loss if used only on one side. |
| 2005/1/10-17 [Academia/Berkeley/CSUA/Motd, Computer/SW/Unix] UID:35645 Activity:nil |
1/10 Unix Sysadmin and Storage Admin positions in SF in the financial
industry. /csua/jobs/BarclaysGlobal -ERic
(what happened to my original post? Are jobs listings now not
appropriate for the motd?)
\_ The motd will never be the same after four years of Dubya
and four more to go. |
| 2005/1/9 [Computer/SW/Unix] UID:35613 Activity:moderate |
1/8 The load averages aren't all that high, why is soda responding so
badly?
\_ not that high? 2:50PM up 39 days, 8:54, 120 users,
load averages: 18.39, 23.74, 21.98
\_ Not too high for around here anyway.
\_ What do the load time averages mean?
\_ Mostly nothing because they depend on too many factors like
proc speed, etc. Only comparable to themselves through time.
\_ And even then, different states on the same machine at the
same "load" can have totally different discernable slowdown.
\_ No duh. I think what he meant was, "How are the numbers
calculated? What do they represent? Why are there
three?" Which I don't know either.
\_ ever read a manpage? they are 1, 5, and 15 minute
averages. that much is pretty standard across unix
                  systems. I am not sure that there is any consistency
in how they are sampled, however. what it represents
is the number of runnable tasks when the sample was
taken, e.g. the number of tasks being considered by
the scheduler. |
| 2005/1/9 [Computer/SW/Unix] UID:35612 Activity:moderate |
1/8 Does anybody know if there is a very simple way to see
what files in a file list DONT have <string> in it.
I mean sort of like a combination of grep -v and grep -l.
There are various simple loops foreach $files grep string > /dev/null
|| echo $f or make a copy and grep -l string * | xargs rm and then
see what is left. But I was looking for a single command or something
more elegant.
\_ grep -L ? -L is right above -l in the man page...
\_ or just grep -vl.
\- grep -vl is not right. hmm, i guess -L is a new GNU option
not in "grep classic". "new" of course is relative. tnx.
           \_ grep -vl works on both solaris and linux.
             \- it works, but it doesn't do the right thing ...
for gnu grep running on solaris:
moon# ls | wc
338 338 2703
moon# grep -l XXXX * | wc
119 119 785
moon# grep -L XXXX * | wc
219 219 1918
moon# grep -vl XXXX * | wc
338 338 2703 |
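    psb's counts above can be reproduced with three throwaway files: -vl
    lists any file containing at least one non-matching line, while -L
    lists only files with no matching line at all. A minimal demo (the
    directory and file names are made up):

    ```shell
    rm -rf /tmp/grepdemo && mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
    printf 'XXXX\nother\n' > mixed.txt    # has the pattern, plus another line
    printf 'XXXX\n'        > allhit.txt   # every line matches
    printf 'nothing\n'     > miss.txt     # no line matches

    grep -L  XXXX *   # files with NO matching line: miss.txt only
    grep -vl XXXX *   # files with ANY non-matching line: miss.txt, mixed.txt
    ```

    mixed.txt shows the difference: it matches the pattern, so -L skips
    it, but its second line doesn't match, so -vl lists it anyway.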
| 2005/1/5-6 [Computer/SW/Unix] UID:35564 Activity:nil |
1/5     Dumb unix question, I'm receiving an scp from my friend, is there
any way for me to get an idea of the file transfer speed?
\_ Dumb unix answer: run ls -l on the file repeatedly and guess.
Or were you looking for something cooler?
\_ Heh, yeah I did that. I was hoping for something cooler,
although that did work. :P
\_ du -k is cooler. or writing a little perl script to do this
and print the speed every n seconds. |
| 2005/1/4-5 [Computer/SW/Unix] UID:35545 Activity:moderate Edit_by:auto |
1/4 Hello UNIX command line gurus, let's say I have a file called
filenames.log that contains file names, like /usr/bin/hello,
/media/dvd/wayne's world, /media/music/my music!.mpg. I'd like
to do something like:
% cat filenames.log | xargs tar rvf /backup/today.tar
However, I can't do that because I need to escape characters
like ', \ , \!, and many others. What's an elegant way of
doing this? I thought about using sed, but I'd have to come
up with a comprehensive list of characters that I have to
escape, which is lame. Ideally I'd like something like:
        % cat filenames.log | escape | xargs tar rvf /backup/today.tar
Got ideas? Thanks!
\_ Here are a few ways to do it. Hopefully you find one elegant.
sed 's/[^A-Za-z0-9]/\\&/g' filenames.log | xargs tar rvf ...
sed 's/./\\&/g' filenames.log | xargs tar rvf ...
tr '\n' '\0' <filenames.log | xargs -0 tar rvf ...
\_ thanks. So the above sed, with "&", is equivalent to
perl's $1 or \1? It's seems like it's the same as perl's
s/([^A-Za-z0-9])/\\$1/g;
So here is another question. How do you specify $1, $2,
etc in sed? thanks.
\_ Sed's & is perl's $& (the entire search string). Sed uses \1,
\2, etc. to retrieve stuff from parens. Also note that you
have to use \( and \) in sed, not just ( and ) like in perl.
              tar rvfI today.tar filenames.log
\_ % cat filenames.log | perl -ne 'print quotemeta;' |
           xargs tar rvf /backup/today.tar
\-i'd use the tr command above to NULL pad + xargs -0 OR
modify a perl script called "findtar" OR use GNU tar's
-T|--files-from option possibly with --null. ok tnx --psb
\_ if performance is an issue would perl be slower because the
executable is bigger? Or would it be faster because it's
got optimizations built in?
\-no offense, but this is not a question worth asking.
or at least not worth answering.
\_ xargs is wrong in this case. Use normal tar with -T. -vadim
\- tar -rv -f mybackup.tar -T file_list.txt
\_ keywords: perl escape character space |
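    The tr + xargs -0 variant listed above sidesteps the escaping problem
    entirely, since NUL-delimited names need no quoting at all. A runnable
    sketch (the paths below are throwaway demos, not the OP's files):

    ```shell
    # A filename full of characters that break naive xargs.
    mkdir -p "/tmp/tardemo/wayne's world"
    touch "/tmp/tardemo/wayne's world/my music!.mpg"
    printf '%s\n' "/tmp/tardemo/wayne's world/my music!.mpg" > /tmp/names.log

    # Start an empty archive, then append NUL-delimited names safely.
    rm -f /tmp/today.tar
    tar cf /tmp/today.tar -T /dev/null
    tr '\n' '\0' < /tmp/names.log | xargs -0 tar rvf /tmp/today.tar
    tar tf /tmp/today.tar
    ```

    The only character this can't handle is a newline embedded in a
    filename, since newline is the record separator in names.log itself.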
| 2005/1/4-5 [Computer/SW/Security, Computer/SW/Unix] UID:35542 Activity:low |
1/4 I added a user to my Windows 2000 machine, and now I can't login as
Administrator or any of the other user accounts. I think I changed
the automatically login user without password box. I think I need
to reset the administrator password. Any ideas?
        \_ obgoogle. try Sysinternals' website. they got tools
\_ http://home.eunet.no/~pnordahl/ntpasswd
\_ Perfect! That worked very well, I'm keeping that CD in my kit.
\_ Get tweakui for win2k. It will allow you to turn the proper login
back on. |
| 2004/12/29-30 [Computer/SW/Unix, Computer/SW/Languages/Perl] UID:35476 Activity:kinda low |
12/29 Is there a command like 'tail' that will read a file backwards?
Tail does not do what I want because of a limited buffer size. I
want to read the *ENTIRE* file backwards. Less doesn't work because
it relies on line numbers and the file is corrupted. (Essentially,
I can read 1-n and n+EOF lines but n itself is corrupt.) More/less
lets me read 1-n, but tail won't let me go back far enough.
(tail -100000 and tail -3000 are equivalent because of the buffer)
\_ you could pipe it through a perl script that reads the file
backwards, i can write a multi line one, maybe one of the perl
geeks around here will post a 1-liner.
\_ Thanks! I found a PM called File::ReadBackwards that worked.
\- hello, you can use the "tac" command or sed '1\!G;h;$\!d' --psb
\_ Tac looks cool, but seems to be a Linuxism. It's not on
soda, for instance. (Yes, it could be ported.)
\-tac is a random hack that probably predates linux.
portability is why i added the sed cmd. --psb
\_ if you use 'less +G' it doesn't actually try to figure out line
numbers, it just goes to the end of the file. You can then scroll
           backward normally. -ERic
\_ I tried this and it did not work. I went to the end of the
file, but when I tried to scroll back it attempted to
calculate line numbers even when told not to. |
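    Where tac is missing, a perl one-liner does the same job; it slurps
    the whole file into memory, which is fine unless the file is enormous.
    A sketch (the sample file is hypothetical; note psb's sed command was
    written with csh-escaped bangs, unescaped here for script use):

    ```shell
    printf 'one\ntwo\nthree\n' > /tmp/fwd.txt

    # Print the file last line first (a portable "tac"):
    perl -e 'print reverse <>' /tmp/fwd.txt

    # psb's sed equivalent, for systems without perl:
    sed '1!G;h;$!d' /tmp/fwd.txt
    ```

    Both print three, two, one. Either output can then be paged with
    less/more to read past the corrupt line from the end inward.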
| 2004/12/25-27 [Computer/SW/Unix, Computer/HW/Drives] UID:35434 Activity:moderate |
12/25 Is email down?
\_ I am getting an insufficient disk space error when trying to
send mail. -!op
\_ Fixed. - jvarga
\_ Well, wait for the load to get back to normal to actually
start getting mail again. - jvarga
\_ sweet!
df -k /var/spool/mqueue
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/da0s1f 1016303 974635 -39636 104% /var |
| 2004/12/23-25 [Computer/SW/WWW/Browsers, Computer/SW/Unix] UID:35424 Activity:nil |
12/23   When I used wget to fetch files from a particular site, it immediately
        gave me a 403 forbidden (before it even got to loading robots.txt).
I can view the web page using a browser, and I have set the user
agent to be 'Internet Explorer.' So what's wrong?
\_ You probably neglected to set the referer. |
| 2004/12/22 [Computer/SW/Unix, Computer/SW] UID:35399 Activity:nil |
12/22 Is there a way to make cp retain the file time but not the other shit
when I do a copy? -p preserves too much information. I just want the
file to retain its timestamp.
\_ There is if you use GNU cp. |
| 2004/12/21 [Computer/SW/Unix] UID:35370 Activity:kinda low |
12/21 uniq can get rid of 2 identical lines if they occur right after
each other. But how do you get rid of 2 identical lines, even
if they don't occur right after each other. using sort works,
but then all the lines are out of order, which is a problem.
\_ perl. counters for each unique pattern so far. Hell, you
can do it using a temp file with just /bin/sh scripts.
\_ perl -ne '$m{$_}++||print' <file>
this does the uniq thing, not kill all duplicates. -vadim
\_ do it scalably w/ bash, e.g. let the sort/uniq tools do
the heavy lifting:
n=0
while read line ; do echo "$n $line" ; n=$(($n + 1)); done \
| sort -k 2 | uniq -f 1 | sort -n \
| while read num rest ; do echo "$rest" ; done
\_ cat -n <file> | sort -uk 1.8 | sort | cut -c8- -vadim
\_ to do what you really asked, you can replace sort -uk 1.8 with
sort -k 1.8 | uniq -uf1. -vadim
\_ another one (zsh):
typeset -A m; while read l; do [ $m[$l] ] || echo $l && \
m[$l]=1; done -vadim
\_ /tmp/unique.c is something I wrote on SunOS5 a few years ago.
--- yuen
\_ waaaay unsafe. The least you could do is store md5s in the
hash. -vadim
\_ It's just some quick utility I came up with to discard
duplicated path names. It wasn't meant to be secure. --- yuen |
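    The same trick as vadim's perl one-liner works in awk: a hash of lines
    seen so far, printing only first occurrences and preserving input
    order, no sort or temp file needed. Sketch with inline sample data:

    ```shell
    # Keep the first occurrence of each line, drop later duplicates,
    # preserve original order:
    printf 'red\nblue\nred\ngreen\nblue\n' | awk '!seen[$0]++'
    ```

    This prints red, blue, green (one per line). Like the perl version it
    keeps every unique line in memory, so for huge inputs the cat -n |
    sort | cut pipeline above trades memory for extra passes.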
| 2004/12/21 [Computer/SW/Unix] UID:35369 Activity:nil |
12/20 Dear Mr. tcsh, here is a puzzle. I have a tcsh script that runs fine,
        but when I use "tcsh -n" to check its syntax it always fails with:
[xxx-xx]if ( ${?prompt} ) then
[xxx-xx]tcsh: ${?prompt}: No match.
(I have turned on -v switch.) What's wrong?
\-while i no longer follow tcsh code closely, there have been
some globbing bugs [the way stuff should be parsed is a little
bit vague in csh] ... that's what i suspect you are hitting.
this isn't worth tracing at the code level.
\_ I used to use tcsh before I switched to zsh.
zsh is great: all the interactive features of tcsh,
but bash compatibility. And you can make really cool prompts. |
| 2004/12/16-17 [Computer/SW/Unix] UID:35321 Activity:kinda low |
12/16 When downloading a directory with wget, it generates multiple
      identical copies of index.html with names like index.html?C=N;O=D,
      one for each real file in the directory. Which option should I
      give wget to stop this stupidity? I read the man page already. tnx.
\_ if there's a certain type of file at that site you want,
use "-A"
wget -r -l 1 --no-parent -Amp3 http://www - danh
        \_ But that still downloads the duplicate files and only removes
           them later, right? Actually it fetches a duplicate of these
           Apache-generated index.html files for every real file in the
           directory, so it can be a real waste of bandwidth/time. How
           do I tell wget that they are duplicates?
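A hedged sketch of the usual fix: those names come from Apache's column-sort links (?C=N;O=D and friends), and wget's `-R`/`--reject` list takes shell-style globs, so the query-string variants can be refused by pattern. Exact behavior varies by wget version (rejected HTML may still be fetched then deleted when wget needs it for link traversal); the URL below is illustrative:

```shell
# reject the sort-link variants of Apache's auto-generated index pages
wget -r -l 1 --no-parent -R 'index.html?*' http://example.com/files/
```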
| 2004/12/14 [Computer/SW/Unix] UID:35279 Activity:nil |
12/13 The manpage for fflush says that fflush(NULL) will flush all open
buffers-- is this POSIX (or another standard), or just a BSDism? |
| 2004/12/13-14 [Computer/SW/Unix] UID:35274 Activity:nil |
12/13 Say you have a background process currently running and spewing output
to stdout under tcsh. Is it possible to redirect the output to another
(file) location without killing and restarting the process? thnx.
\_ This worked on freebsd for me. Stop the process (^Z) then:
# tty
/dev/ttyN2
(in another session):
# cat /dev/ttyN2 > output
(in the first session):
# fg
Then when you want to stop this "logging", send an EOF to the first
session. YMMV --scotsman |
| 2004/12/9-12 [Computer/SW/Unix] UID:35237 Activity:high |
12/9 Has anyone had experience with using a file under NFS as basically
a region of memory shared between separate machines? I.e., the
machines lock the file and read/write it to communicate. Speed is
not a huge concern, but can this be done reliably? Are there
caching issues? Will fsync work properly over NFS? This doesn't
seem ideal, but thanks for any advice.
    \- you may be interested in RDMA, which is something i am interested in.
       however, here the issue is speed and offloading from the processor.
if you are interested, you can send me a note, however i am a little
busy these days so i may not have a lot of time to discuss.
jeff mogul's paper on tcp offloading has some decent references
for background. --psb
\_ for extra bonus points, use mmap() to get a truly shared memory
region over the network.
\_ I've done something like this before. If you are not on the
same switch (best to make a vlan w/ just two machines) you
may have problems. If nfsd is setup to do caching you might
run into issues. You will most likely have problems between
Linux (b/c linux nfs sux) and any other os. Try and stick to
Solaris and FreeBSD (or MacOS X) if you really want this to
work. BTW, why are you even thinking about this?
        \- oh it finally occurs to me what you are trying to do ...
you are trying to use a file as a way of talking between
processes ... in this case the processes are on different
machines and the file happens to be nfs mounted. oh jesus.
\_ Sucks as of when? I know Linux NFS used to be pretty bad, but
I've heard it has improved a lot in recent years. Does anyone
know of other approaches used successfully in distributed
processing sort of applications (yes, I will STFW, but
corroboration is nice too)? It needs to be portable
across modern Unixes, and be able to handle moderate lock
contention (say, up to a couple hundred or so processes
vying for the lock -- essentially, the shared data is just
a job number though, so it's pretty small). Thanks. -op
\_ Have you looked at MPI or PVM? -tom
\- MPI (MPICH, LAM/MPI) is the standard.
Get with the program. You could use
JavaSpaces. I think you get a free
"I am an idiot" tshirt with every
download. --psb
\_ Linux NFS has sucked and continues to suck, even with
2.6 kernel (esp. if you stick to a kernel that comes
with RH or SuSE, b/c you want support &c.). I have
to deal with Linux NFS on almost a daily basis b/c
we have a unified Jumpstart/KickStart/AutoYAST server
as part of our product and whenever anyone tries to
boot systems (more than 1) using NFS, Linux starts
having problems with NFS (both userland and kernel
nfs have problems).
NFS v3 has problems talking to most other NFS v3
implementations. You will start seeing weird file
corruption, hanging mounts, blocking reads and stuff
after a while. You could try NFS v2, but Linux has
problems doing TCP and NFS v2, and for this sort of
thing you really want TCP.
If you are trying to do some sort of parallel clustering
thing, take a look at something like GridWare.
[ Sorry, if it sounds like I'm babbling, but I just
got done with a 3+ hr final and I'm a bit tired ]
\- nfsv4 opensrc implementation is being mainly done
on linux by umich/citi which has some pretty clever
folks so the v4 implementation may be bounded non-ass.
the connectathon results look decent. by "bounded"
i mean "as good as something can be on AssOS". --psb
\_ I agree that NFSv4 will probably be pretty
good, but AFAIK only Linux and Solaris 10
have a working version right now and he
said modern Unices by which I assumed he
                                         meant stuff like HPUX 10, 11, AIX [4?],
FreeBSD 4.x-5.3, Linux 2.[2-6], Solaris
2.6-10 and MacOS X (NetBSD and OpenBSD
don't make my list b/c hardly anyone
uses them for general purpose stuff).
\- i think there are some connectathon
summaries from about a month ago on
one of the citi WEEB sites ... nothing
dramatic there but if you are really
interested in the details about the
state of affairs.
\_ Why on the same switch/VLAN? Any idea why that makes
a difference?
              \_ [ It has been a little while, so take this with
a grain of salt ]
The switch has to forward all broadcast traffic
                 to every active port on a VLAN. By default
every system plugged into the switch is on the
same default VLAN (1 for Cisco, iirc). If you
have lots of other machines on the same switch
                 as the two systems that are using NFS, then the
extra broadcast traffic can affect your network
performance. By making a separate VLAN you are
removing this potential problem.
\_ What does NFS have to do with broadcast traffic? |
| 2004/12/7 [Computer/SW/Languages, Computer/SW/Unix] UID:35192 Activity:high |
12/7 I'd like to run a program and save the output to a log file
while still seeing the program output on stdout. I tried using
the tee command as in "foo.exe | tee mylog.txt" but tee only
seems to print to stdout every once in a while instead of when
foo.exe generates a line of output. How do I save output to a file
while having every new line of output sent to stdout? Thanks. -emin
\_ The problem is not in tee, but in foo. By default, the stdio
library produces output a line at a time if it's outputting
directly to a terminal, but buffers its output in large chunks
otherwise (see "man setvbuf"). When you pipe foo's output to
another program, it's no longer outputting to a terminal, so it
turns on its buffering. The easiest cure is to create a fake
terminal for it to run on: ssh -t localhost foo.exe | tee mylog.txt
I know, it sucks. The default buffering really ought to be
smarter, or at least configurable. --mconst
\_ foo and tee BOTH buffer, don't they?
\_ Tee actually never buffers its output. Even if it used the
default stdio buffering, though, it wouldn't be a problem
here since it's outputting directly to a terminal. --mconst
\_ what about foo | cat | tee mylog.txt?
\_ That won't help anything. foo is still writing to a
pipe.
\_ The mconst has spoken. Woe to those who will not
listen.
\_ You have to redirect stderr to stdout. In bourne-like shells,
foo.exe 2>&1 | tee log
In csh derivatives, I think it's something like
foo.exe |& tee log
\_ Another possibility you might explore is using 'screen' to run your
process, with screen logging to a log file. SCREEN RULES!!
\_ "Sounds like a virus. Reformat and start over."
\_ Advice like this will destabilize your computer for years to come |
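A hedged footnote from later GNU coreutils: stdbuf(1) forces line-buffered stdout on a program without the ssh pseudo-terminal trick. It only helps programs that use stdio's default buffering (the case mconst describes); `sh -c ...` below stands in for foo.exe:

```shell
# run the program with line-buffered stdout, so tee sees each line
# as it is produced instead of in multi-kilobyte chunks
stdbuf -oL sh -c 'echo line 1; sleep 1; echo line 2' 2>&1 | tee mylog.txt
```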
| 2004/12/1-2 [Computer/SW/Unix] UID:35138 Activity:nil |
11/30 I've heard that finger is a security hazard, but it would seem to me
that any well-written finger would be at most a DOS liability. Why
do many institutions of all sorts either run no finger daemon or
block it at the firewall?
\_ Well, one obvious reason is it can give away information that can
be used for social engineering attacks.
\_ Not to mention simple email/login harvesting for brute force
attacks and spammers. -John
\_ So can personal webpages.
\_ False analogy. You control 100% of all info on your pubhtml
dir. You do not have this control over your finger info
on all systems.
\- There is a discussion of "amplification" DoS attacks
using finger at: http://csua.org/u/a5q --danh
\_ But the fact that there is a page at <DEAD>csua.berkeley<DEAD>
.edu/~foo implies that foo@csua.berkeley.edu is a valid
e-mail address.
\_ I think the parent post was referring to the fact that
it's your choice to create a public_html directory
at all.
\_ I see. |
| 2004/11/29-30 [Computer/SW/Security, Computer/SW/Unix] UID:35115 Activity:low |
11/29 I archived a big directory (3GB) using tar with bzip2 compression (-j)
and I notice that to extract any file, tar seems to read through
the whole archive decompressing it byte by byte and takes a VERY long
time, no matter how small that file is. Is there a better archive
method? (I am archiving on to a file, so dump does not work.)
\_ Use zip. The compression isn't as good, but you can access any
file instantly.
\_ I need good compression but I won't add files to the archive,
           so a tool that puts all the directory information in one place,
           compresses the files individually, and allows random access is what
           I am looking for. (And it has to be available for Macs too.)
\_ Why don't you just run bzip2 on foreach i ( * )? -John
\_ Perhaps RAR, http://www.rarlab.com Not free, though.
\_ 3GB is not that much. Burn it on a DVD. |
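One way to get both good compression and cheap random access, sketched under made-up /tmp paths: compress each file individually, then tar the already-compressed files. Pulling out one member then only costs a scan of the uncompressed tar index, not a bzip2 decompress of the whole 3GB:

```shell
mkdir -p /tmp/arcdemo/d
echo hello > /tmp/arcdemo/d/a.txt
cd /tmp/arcdemo
find d -type f -exec bzip2 -9f {} \;   # compress each file individually
tar cf d.tar d                         # tar the .bz2 files, no -j
tar tf d.tar                           # listing the archive is cheap
tar xf d.tar d/a.txt.bz2               # extract one member...
bunzip2 -f d/a.txt.bz2                 # ...and decompress just that one
```

The trade-off is slightly worse compression than one big bzip2 stream, since each file is compressed in isolation.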
| 2004/11/27 [Computer/SW/P2P, Computer/SW/Unix] UID:35083 Activity:moderate |
11/26 I am new to file sharing, so here is a dumb question. Does the use
      of a p2p program protect the client more than direct http/ftp download?
\_ Depends on what you mean by "protect"--your security against
exploits/trojans? Anonymity? -John |
| 2004/11/25-27 [Computer/SW/Security, Computer/SW/Unix] UID:35077 Activity:kinda low |
11/26 Is there any reason to give directory world Readable permission but
not eXecute permission? I encountered this on a public ftp site.
Is this just a mistake or are they trying to block access?
[Thanks for deleting a lot of crap]
\_ no, you can't get into that directory on the porn site.
\_ well, with just read, you can list the names of the files in the
directory, but that's about it. i don't know if that's considered
useful.
\_ No you can't, unless you mean read in the sense of od/cat/etc.
\_ Yes, you can. Try it. You can use ls to list the filenames,
but you won't be able to stat the file for more details.
\_ You try it.
% ls -ld bar
drw------- 2 xxx csua 512 Nov 26 22:38 bar/
% ls bar
% chmod 700 bar
% ls bar
baz
\_ Your ls program is too smart -- it's trying to get extra
information about the files, which fails. Try /bin/ls. |
| 2004/11/16 [Computer/SW/Languages/Perl, Computer/SW/Unix] UID:34922 Activity:high |
11/16 "sed '/^[0-9].*$/\!d' inputfile" will print out only the lines
of a file that start with numbers. Supposed I want to print out
the 1st, 6th, 10th, 16th, etc lines that begin with a number.
How can I do that elegantly with sed or perl or whatever?
\_ perl -ne 'print if /^\d/ && $count++ % 5 == 0' --mconst
\_ You can replace all the newlines with a new record separator (eg
"FOO") and then awk '{print $1, $6, $10, $16}'.
\- to do this either you need to understand a little bit about
how sed works and then write a little sed program OR if
you want a cryptic one liner, it heavily depends on the
              version of sed ... i cant think of a simple way to do
              this in "generic sed" ... i assume your list doesn't end
              at 16 ... that is trivial. i think gsed supports the +5d
operator. --psb
\_ perl -ne 'print if /^\d/ && <compare $. as line #>' file
the compare could be e.g.: $. =~ /^(1|6|10|16)$/
--dbushong
              \- if all you want to do is print out those 4 lines it is
                 trivial ... sed -n -e '1p;6p;10p;16p' --psb
\_ This has to work for a file that is 100000 lines long. That's
what I meant by etc. I would have thought there would be an
              elegant way to basically tell it to print the first line,
skip the next n lines, print the next line, skip the next n
lines, etc. -op
\_ Is n 5, 4, or 6? |
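Same idea as mconst's perl, as a hedged awk sketch with made-up input. The counter only advances on lines that begin with a digit, so it prints the 1st, 6th, 11th, ... matching lines no matter how long the file is (for "skip n matches," replace 5 with n+1):

```shell
printf '1a\nx\n2b\n3c\n4d\n5e\n6f\n7g\n' | awk '/^[0-9]/ && c++ % 5 == 0'
# prints: 1a and 6f (the 1st and 6th lines starting with a digit)
```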
| 2004/11/15 [Computer/SW/Unix] UID:34898 Activity:low |
11/14 The EECS network is experiencing packet loss for large packets.
I've mailed networks@eecs; I've also turned down soda's MTU, which
has helped a lot for interactive stuff and for outgoing email.
Incoming email and NFS are still affected. --mconst
\_ EECS has fixed the problem, and I've set the MTU back to normal.
Please let me know if anything still seems broken. --mconst |
| 2004/11/15-16 [Computer/SW/Security, Computer/SW/Unix] UID:34896 Activity:nil |
11/15 I can't access webpages on Soda.
\_ Looking at the logs, it appears things stopped working a little
after 7:00PM Sunday because of nfs problems at the time. Can
someone give apache a kick ("apachectl restart")?
\_ Fixed. Is anything going to work today? - root
\_ Thanks. U = awesome. |
| 2004/11/15-16 [Computer/SW/Unix] UID:34895 Activity:nil |
11/14 Is there a reliable way to use a SIGCHLD handler to figure out when
all the children a parent has forked have terminated? I find that
sometimes some SIGCHLDs seem to be dropped, which results in the
parent waiting for a child that has already terminated. AFAIK,
standard signals that arise while blocked can be queued, but only
one such signal will be queued, whereas any others will be dropped.
I know there are POSIX "realtime" signals that guarantee queueing,
but as SIGCHLD isn't a realtime signal, I don't see how that would
apply to my problem. I'm sure what I'm trying to do is not all that
uncommon; what's the canonical way to do it reliably? Thanks.
\_ read up on the wait() variations. |
| 2004/11/15-16 [Computer/SW/Unix] UID:34894 Activity:kinda low |
11/14 what is wrong with soda?
\_ can you be more specific? (a lot is/could be wrong with soda.)
if you are talking about slow interactive response and high
packet loss it seems to be an eecs network problem. it will
be reported shortly. it only seems to be causing problems for
larger packets so we've adjusted the mtu on soda. you can
thank mconst for troubleshooting this. - erikk
\_ What's "MTU" as mentioned in motd.official? Thx.
\_ Maximum Transmission Unit; basically, packet size.
1500 bytes was a typical figure at some point.
\_ Though you typically want something a little bit smaller
like 1470, because some device 'in the middle' could
encapsulate your packet and thus go over 1500.
\_ Path MTU discovery should lower your MTU in this
case, shouldn't it? Isn't the explicit MTU just a
starting point/cap? -gm
\_ In theory this is correct. In practice, many
                              systems/networks do stupid things that break Path
MTU discovery. One of the more common ones is
dumbly blocking all ICMP traffic. See:
http://www.netheaven.com/pmtu.html
(first link I clicked on after obGoogling) for
some discussion of this. Also, that 1500 byte
figure wasn't pulled out of thin air, it's the MTU
for ethernet. Notably, PPPoE links and network
tunnels typically have an MTU of 1492 bytes, and
invariably are used between systems and networks
                              with broken Path MTU discovery implementations.
-dans
\_ So when you have one of these systems in between
you and your destination, what's the best way to
                               do MTU discovery? TCP ping? Something like
'bing'?
\_ Could you rephrase that? Are you asking what
to do if you have a tunnel or PPPoE link
between you and your destination? Or are you
asking how to do path MTU discovery in spite
of a network or host that breaks path MTU
discovery? -dans
\_ I can't seem to access webpages on soda. |
| 2004/11/8-9 [Computer/SW/Unix] UID:34745 Activity:low |
11/8 Looking for a bin->iso converter on the unix AND pc, what are your
recommendations? I tried WinISO but it keeps outputting a bad
      file. UltraISO works but it costs money.
\_ What's a "bin" file?
\_ A type of CD image. Try mounting it with Daemontools and
playing with it under Alcohol120% STFW for 'bin iso file'
yields a bungload of tools, some of them free. -John |
| 2004/11/8-9 [Computer/SW/Unix] UID:34743 Activity:nil |
11/7 What are some good unrar programs on unix to use?
\_ /usr/ports/rar , /usr/ports/unrar -John
\_ http://www.rarlab.com/rar_add.htm |
| 2004/11/8-9 [Computer/SW/Languages/Perl, Computer/SW/Unix] UID:34742 Activity:low |
11/7 Is there a shell command that will unsort (randomize) a file,
like the way sort does on a line-by-line basis? I don't need any
mathematical randomizing, just want to mix up my input lines
occasionally. tia.
\_ ~mconst/bin/shuffle
\_ i have some short code to do this. if the file is "large" [+32k ll]
it's somewhat tricky to do ... need a good random generator.
           like perl's default doesn't have enough seed values. why do
people ask stuff like this anonymously? --psb
\- this looks really slow to me:
/bin/time ./rand-mconst.pl < /tmp/infile > /dev/null
real 46.9
/bin/time ./rand-psb.pl < /tmp/infile > /dev/null
real 4.3
\_ What do you expect? One's an algorithm, one's a one line
hack.
\_ my stupid shell script that works fine for small files:
#!/bin/sh
awk 'BEGIN { srand() }{ print rand(),$0 }' $1 \
|sort|sed 's/^[^ ]* //'
\-I dont think this is portable to "classic awk" ... but
gawk is probably good enough. --psb
\- btw, i just stumbled, er shuffled, on to:
perldoc -q shuffle --psb |
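A hedged footnote from later GNU coreutils: shuf(1) is a proper Fisher-Yates shuffle and replaces the sort-on-a-random-key trick above where it's available:

```shell
printf 'one\ntwo\nthree\nfour\n' | shuf
# same lines, random order; shuf -n 1 picks a single random line
```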
| 2004/11/5-7 [Computer/SW/Unix, Computer/SW/Languages] UID:34712 Activity:high |
11/5 Anyone with biology fu happen to know whether there are any theories
out and about on 'junk' DNA being a form of ad hoc error correcting
code for mutation robustness? -- ilyas
\_ google for exons introns error-correct, e.g.,
http://post.queensu.ca/~forsdyke/introns.htm
      \_ human DNA is persistent because of the redundancies in it. They
estimate that over 80% of the DNA doesn't actually do anything.
\_ isn't this thinking actually being overturned now? I seem to
recall reading an article in Sci Am or Discover that supposed
            "junk DNA" may not be so junk after all.
\_ Ah, okay. Found it. Scientific American, Nov. 2003. Article
titled "The Unseen Genome: Gems Among The Junk"
\_ I would be interested in reading this. Would you please
help make that easy by putting it online? --PeterM
\_ http://www.sciam.com
\_ I, perhaps immorally, was hoping to see the article
without paying. Perhaps I'll simply go
visit the library. --PeterM
\_ Communist bastard. -- ilyas
\_ On a related note, LBL scientists delete a bunch of junk DNA
from mouse genome. Mouse is fine.
http://www.llnl.gov/llnl/06news/Employee/articles/2004/10-22-04-newsline.pdf
link:tinyurl.com/6pvp3
\_ Ah, but if you do that to a whole population of mice, what would
        be the effects on their descendants in a few generations?
\_ They will create a web site called "freerepublic4mice.com"
and make laws outlawing gay marriage among mice
\_ You know...I'm not sure how I should feel about this
subthread. -mice
\_ Are you a gay mouse or do you have genetic mutations?
\_ Well, I'm not gay. -mice
         \- I strongly recommend the book GENOME by Matt Ridley.
            It's a little out of date [as observed above there has
            been some recent work on junk dna, including at places like
            LBL] but anything in this field will be going out of date.
--psb
\_ partha what do you do at LBL? |
| 2004/11/4-6 [Computer/SW/Unix] UID:34686 Activity:nil |
11/4 Did any of the timeout/idle settings change on Soda? My Tera
     Term+SSH is automatically logging me out after ~1 hour. I tried
setting "set autologout=120" (I'm using tcsh), but that does not
seem to help. Any idea?
\_ Going through a firewall of any kind? ISP maybe timing out
idle sessions or something? I have this happening, and it's our
firewalls. Try port forwarding X and running an xclock over the
ssh session for a while to see if it's from inactivity. -John
\_ Forgot to say, no, I don't think it's the firewall. A co-
worker is using the exact same setup (TTSSH) going to his
own server and he does not observe this behavior.
\_ OK so try the xclock, maybe something else is timing it
                out. There is also something called 'spinner' which you
can run to see if it's this.
http://www.laffeycomputer.com/spinner.html -John
\_ Thanks. Trying 'spinner' right now.
\_ Heh heh heh. Spinner. |
| 2004/11/1 [Computer/SW/Unix] UID:34511 Activity:very high Edit_by:auto |
11/1 Login: pollux Name: Paolo Soto
Directory: /home/sequent/pollux Shell: /usr/local/bin/ntcsh
Office: The Convenience-Free Zone (tm)
On since Mon Nov 1 16:02 (PST) on ttyDu from http://i.get.stabby.net
Mail last read Mon Nov 1 16:36 2004 (PST)
No Plan.
\_ But there is no Castor.
\_ how is this name based?
\_ who cares....
\_ paolo@csua.berkeley.edu, then pst@csua.berkeley.edu, then
pollux@csua.berkeley.edu. The point is, no matter where you
go, spam will follow. :) |
| 2004/10/31-11/1 [Computer/SW/Unix] UID:34489 Activity:nil |
10/31 /var: no space left? Can't email root.
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/da0s1f 1016303 942172 -7173 101% /var
\_ Working on it -root
\_ Fixed it -faster root
\_ Thanks faster root.
\_ I fixed a different problem -slower root
\_ fixed. thanks.
/var/mail is 98%. Is this a problem? |
| 2004/10/31-11/1 [Computer/SW/Languages, Computer/SW/Unix] UID:34472 Activity:low |
10/30 How do I prevent variable substitution within double quote in tcsh?
The manual says I can quote it with backslash but the following
does not work: echo "\$ "
\_ There is no way to prevent variable substitution within double
quotes in tcsh. Usually it's easiest to use single quotes;
failing that, the best you can do is echo "foo"\$"bar". --mconst
\_ echo "blah"'$'"blah" |
| 2004/10/28-29 [Computer/SW/Unix] UID:34409 Activity:high |
10/28 I just built a terabyte raid server using linux. The distro is
an older version of RH 8 (need it to keep compatibility with certain
apps). I formatted the raid to be XFS. The server is an NFS server,
and I appear to be hitting the 2 gig file limit. According to the
docs I'm supposed to be able to get beyond the 2 gig limit with
RH8. I have compiled a new 2.4.27 kernel to replace the stock
RH8 kernel to be sure. Is this perhaps a limitation of the NFS
server? How can I tell and do I just need to recompile the
NFS utils and server? Isn't the server somehow integrated with
the kernel now? Or is this a limitation of the filesystem? -williamc
\_ I have no problem using ext3 and NFS with the 2.4.20-30.8
kernel. My guess is your filesystem. Does it have a largefiles
option? (I'm not familiar with xfs except on SGI.)
\_ Well, it should. XFS is supposed to handle large files
rather well. Other than dumping a 2+ gig file on the drive
is there a way to tell if the underlying filesystem supports
2+ gigs? I also heard that there may be libc issues. What
libc version are you using?
\_ The question is "Does it by default?". What's wrong with
dumping a 2+ gig file to the drive? That will provide your
answer. I am using glibc-2.3.2-4.80.6, if it matters.
\_ The fact that it's two gigs? Why apply hammer if you
can just figure it out by some setting? -williamc
\_ Are you having trouble with a particular application? does dd
create a file that large? I've had some problems with some
applications that don't deal with the largeness correctly (apache,
squid)
\_ I'm having trouble with it in general. I would like it to work
with NFS. The applications run on Sun machines, so no
2 gig barrier on the apps. -williamc
\_ I would stay away from XFS if you care about your data. The XFS
fsck code will corrupt your data if you lose power during a
journal replay. This can happen if you crash due to a power failure,
the power comes back on, you start the recovery, and the power
fails again. We just did a paper on this:
http://keeda.stanford.edu/~junfeng/papers/osdi04.pdf
--twohey.
           \_ And ext3 or reiserfs is better? BTW, I read your paper,
              you guys didn't test XFS... -williamc
        \-i have not thoroughly read this over (yet) but any plans
to look at non-linux (AssOS) filesystems? I've been so
frustrated with linux i couldnt bring myself to participate
in this thread. ObAndrewHumeonAssOS |
| 2004/10/18-19 [Computer/SW/Unix] UID:34201 Activity:low |
10/18 How do I get GNU find to search all local filesystems but
      not NFS filesystems?
\_ man find see -fstype type |
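Spelled out, since -prune trips people up: match NFS subtrees and prune (skip) them; everything that falls through the `-o` gets printed. A hedged sketch with GNU find; /usr/local is just an illustrative starting point:

```shell
# skip any subtree whose filesystem type is nfs; print everything else
find /usr/local -fstype nfs -prune -o -print
```

If you only want to stay on the starting filesystem (rather than skip NFS specifically), `find / -xdev` is simpler.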
| 2004/10/18-19 [Computer/SW/Mail, Computer/SW/Unix] UID:34196 Activity:nil |
10/18 $ uptime
12:15PM up 50 days, 8:21, 175 users, load averages: 60.94, 54.81, 44.90
help
\_ Wizard needs sendmail fu BADLY. |
| 2004/10/10-12 [Computer/SW/Unix] UID:34013 Activity:high |
10/10 Please read the latest announcement on motd.official if you don't
usually. - jvarga
\_ The reason for this is so that you can move /var/mail to another
machine and nfs mount it from soda. You also want to move off
soda accounts and merge them with office accounts and mount those
on soda via nfs. All because you don't know how to admin soda's
daemons or upgrade the kernel. Somehow I think this is the wrong
way to go about it.
\_ You are truly a dumbass. What do you expect to find?
\_ Signal: 0. Noise: 100. Troll value: nil.
\_ Absent the insult, the question still remains. What is
jvarga trying to learn from this exercise?
\_ maybe he wants "real" users to test out new HW before
putting it into production and to also help compare it
to the older shit that was there before. Yeah, let's
put new hw into use without testing.
\_ No, that doesn't make any sense. New hardware always
works perfectly every time. Nothing has even been
DOA or died in the first week of production use. The
idea that someone might want to test or burn in a new
build is just wild and only a dumbass would do testing.
I learned everything I needed to know about technology
from the motd.
\_ If you spoke less rashly and listened more carefully,
you might actually learn something.
\_ So if I just shutup and listened to your greater
wisdom which seems to imply that user testing is
useless and jvarga is a dumbass I would come away
with useful technical knowledge I could apply in
the future in production environment? Like this?
"Boss, I'm sorry, but the motd said there's no
point in testing that production raid array
before we go live with it, so you're just a
dumbass for even thinking about it". I'll try
that next time.
\_ If I were your boss, I'd certainly want to
know what you are trying to achieve with
your proposed testing and whether your
proposal is appropriate for your goal. A
later post claims that they want "people to
pound the hell out of the NFS mounts and tell
us how stable they are". Is that the goal?
Will random users doing whatever really tell
us anything about how stressed the disks
were? Would that be the most effective way
to stress the disks with representative
traffic?
\_ Is user testing the right way to check if a drive
is prone to infant mortality? Is checking one drive
under linux nfs and another under freebsd reasonable
if one is trying to catch hardware errors? There are
probably better ways to test for hardware robustness.
So the question remains. What is jvarga trying to
test for?
\_ People continually bitch and moan about the horror of
NFS and how much it sucked when they used it, but
none of those people seem to have any legitimate
complaints from any time in the last few years.
We're testing the stability of NFS on FreeBSD and
Debian Linux to see if the issues people are crying
about still exist. We want people to pound the hell
out of the NFS mounts and tell us how stable they
are. The results of this test are the last influence
on our choice of hardware setup for Soda Mark VII.
\_ So do you think a bunch of random users doing
whatever is a good way to stress test NFS for
stability and bugginess? Here I assume that
NFS is not trivially broken.
\_ If you have a better idea, please feel free
to try it out. Or, if you don't think this
stuff needs to be tested before being put into
production, mail root and I'm sure they can
move your home directory over.
\_ Jesus Christ, if you have a suggestion for the
guy, make it; otherwise shut the fuck up with
this wannabe Socratic Method nonsense. |
| 2004/10/10-11/4 [Computer/SW/OS/FreeBSD, Computer/SW/Unix] UID:34012 Activity:nil |
10/10 Please help us test out NFS. Take a look at the README file in
/linux-nfs and /freebsd-nfs. Use one, use both, have fun, break
stuff. |
| 2004/10/7 [Computer/SW/Unix] UID:33964 Activity:moderate |
10/7 Anyone else having email problems? My outgoing messages are not
getting delivered. This has been happening sporadically over the
last few days, but even worse today... Or rather, they get
delivered but it has taken up to 8 hours.
\_ same here, and it's been happening off and on for like the last
4 or 5 weeks.
\_ If you see this happen again, please check the load on soda, and
email root again. (-root)
\_ it just did happen again, and I just did email root again. -rory
\_ You're making it worse by increasing the volume of mail!
\_ The command to use to get the load is:
w | head -n 1
\_ aka 'uptime' |
| 2004/10/4 [Computer/SW/Unix, Computer/SW/Security] UID:33892 Activity:moderate |
10/4 Hey, jvarga. What the heck is bonnie and why is it sucking up
all of soda's resources. And why are you running sshd?
7803 jvarga 56 0 5544K 1816K RUN 1:38 4.49% 4.49% sshd
58395 jvarga -6 0 884K 448K nfsaio 3:27 3.56% 3.56% bonnie
58396 jvarga -6 0 884K 448K nfsaio 3:27 3.52% 3.52% bonnie
58393 jvarga -6 0 884K 448K nfsaio 3:27 3.37% 3.37% bonnie
58391 jvarga -6 0 884K 448K nfsaio 3:26 3.32% 3.32% bonnie
58397 jvarga -6 0 884K 448K nfsaio 3:28 3.27% 3.27% bonnie
58394 jvarga -6 0 884K 448K nfsaio 3:27 3.27% 3.27% bonnie
58398 jvarga -6 0 884K 448K nfsaio 3:27 3.12% 3.12% bonnie
58399 jvarga -6 0 884K 448K nfsaio 3:27 3.12% 3.12% bonnie
58392 jvarga -6 0 884K 448K nfsaio 3:25 3.03% 3.03% bonnie
\_ An sshd process is started as the user whenever you log in with ssh.
\_ Stress testing nfs for soda upgrades. I'll nice my processes a bit
more to keep the load from interfering.
\_ What are you testing? Dont be absurd. Re: nicing ... you
are certainly giving signs of not knowing what you are doing.
\_ And those signs would be??? Nicing processes will cause them
to be much lower in the priority queue than other processes,
like sendmail, and make life for you better. Nicing has
absolutely nothing to do with testing NFS.
\_ What a lamer. I wouldn't be surprised if jvarga isn't
a l33t u|\|1X H4X@r. But he's doing a pretty good job,
and a whole lot more than you are. If you have something
constructive to say, go ahead, otherwise, shut your pie
hole.
\_ You dont know who I am. By anybody's measure I've
done far more for the CSUA than jvarga. root@soda/
politburo has been quite unresponsive to requests and
has made a number of boneheaded decisions like the
"kchang finger denial of service" thing.
\_ he was evil when I met him in 97 and deserves a
permanent squishage. The decision was anything but
boneheaded. -former polit
\_ So, by "by anybody's measure", you mean "anybody who
hasn't been around to actually see how much work he's
done."
\_ How about a list of things?
\_ Said the anonymous loser.
\_ Anonymous Loser, just like you? If I signed, then
I'd be dismissed as a bitter alumnus.
\_ Like I said, lamer. We've got this thing in English,
indeed most languages. It's called present tense.
Used for such words as "doing", and "sitting." Maybe
you should google for it.
\_ bonnie is a file system stress-testing benchmark. It *should* be
heavily I/O bound. Bearing that in mind, what's renicing it supposed
to accomplish?
\_ It should be I/O bound, and it is. Renicing the processes will
ensure that they don't consume CPU when others want it. It has
nothing to do with the I/O bound nature.
\_ Not to mention running a benchmark on a system with a lot
of baseline use. "Stress testing for soda upgrade" ... yeah
right.
\_ Actually, yes, stress testing for a soda upgrade. Those bonnie
processes are hammering on an NFS mounted partition. |
| 2004/10/1-2 [Computer/SW/Unix] UID:33873 Activity:moderate |
10/1 If i want to awk '{print $2+}' is there a way to do that without
looping, or am i being TOO lazy?
\_ perl
\_ probably. what are you trying to do in not-awk-speak?
\- you can use this loop. there might be a range operator
in some versions of awk, but not in generic awk. --psb
awk '{for(i=2; i<=NF; i++) { printf " "$i } ; printf "\n" }' |
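psb's loop applied to a sample line, so the behavior is visible (note it emits a leading space before each field it prints):

```shell
# Print fields 2..NF of each input line.
echo "one two three four" |
  awk '{for(i=2; i<=NF; i++) { printf " "$i } ; printf "\n" }'
```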
| 2004/9/29-30 [Computer/SW/Unix] UID:33840 Activity:high |
9/29 Is there a way to configure the cmd line ftp program to default
to the fucking binary transfer mode? is this a server setting?
this is linux and windows ftp client... thanks.
\_ which Windows FTP client? Most that I worked on can be preset to
bin. (ws_ftp, cuteftp, etc.)?
\_ Why are you still using FTP? try http or ssh/scp
\_ This is probably not OP's reason, but a lot of embedded
devices (like the analysis machines the people I'm dealing
with now) only permit FTP (for example). Use fetch for
good cmdline ftp'ing though. -John
\-wget will take a ftp url. i dont know anything about windows.
--psb |
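One way to make binary the default in classic cmdline ftp clients is a `macdef init` macro in ~/.netrc, which many BSD-derived clients run automatically after login (a sketch; the hostname and credentials are placeholders, and note that a macdef definition must end with a blank line):

```
# ~/.netrc
machine ftp.example.com login myuser password mypass
macdef init
binary

```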
| 2004/9/24 [Computer/SW/Languages/Perl, Computer/SW/Unix] UID:33738 Activity:insanely high |
9/24 I have a directory with a bunch of image files names DSCNxxxx.jpg.
What's the quickest way to rename them all to Dscnxxxx.jpg? (just
changing the capitalization of the first 4 letters).
\_ foreach i (*.jpg)
mv $i `echo $i | sed -e s/DSCN/Dscn/`
end
I'm a hardware engineer and even I can come up with something
\_ You're assuming the OP has csh access to the directory.
\_ Okay, then why don't you just "dir" the files to a
text file, send it to your soda account, write a script
to change the names (DOS batch file), and voila.
\_ ObCygwin
\_ I'm looking for a one-liner that actually works... this
gives me "i: Undefined variable.". This is on linux and I do
have csh access. The perl suggestion below is a good idea but
it's overkill for what I'm doing right now. -op
\_ In Perl:
#!/usr/local/bin/perl
#
# Usage: rename perlexpr [files]
($regexp = shift @ARGV) || die "Usage: rename perlexpr [filenames]\n";
if (!@ARGV) {
@ARGV = <STDIN>;
chomp(@ARGV);
}
foreach $_ (@ARGV) {
$old_name = $_;
eval $regexp;
die $@ if $@;
rename($old_name, $_) unless $old_name eq $_;
}
exit(0);
Use 's/DSCN/Dscn/' for the regex at the commandline or just
modify the $regexp variable.
\_ Your OS?
\_ What you want is a nice perl script that renames it to
YYYYMMDD_HHMMSS_xxxx.jpg. This is the way to archive images.
Besides the image, the time is the next most important thing,
but with Windows and daylight saving time and time zones, and
that sometimes you forget to set the camera's clock correctly
when you travel, relying on file timestamp and exif time is
really not a good idea. Embedding the picture time in the filename
is a permanent way to record the time of a photo.
\_ If you are using 4NT, just do "ren DSCN* Dscn*"
\_ This works in NT Command Prompt. You don't need 4NT.
\-ls | awk '{print "mv " $1 " " $1}' | sed 's/DSCN/Dscn/2' | sh
--psb
\_ you should just use gsub in your awk.
\- as i said last time this came up on the motd, anybody
asking a question like this isnt going to be familiar
with complicated awk or sed, backrefs etc. So it's best to
make something easy to modify. i suppose i should have
used the nth match for sed. i would personally just do
this in emacs. --psb |
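A plain-sh version of the same rename, sidestepping the csh variable and quoting trouble above (a self-contained sketch; the scratch directory and sample filenames are illustrative):

```shell
# Work in a scratch directory so the example is self-contained.
cd "$(mktemp -d)"
touch DSCN0001.jpg DSCN0002.jpg

# Rename DSCNxxxx.jpg -> Dscnxxxx.jpg with POSIX sh parameter expansion.
for f in DSCN*.jpg; do
  [ -e "$f" ] || continue        # unmatched glob stays literal: skip it
  mv "$f" "Dscn${f#DSCN}"
done
ls
```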
| 2004/9/23 [Computer/SW/Unix, Computer/HW/Drives] UID:33715 Activity:very high |
9/23 I need to make an image of a disk on a different machine
in Linux. That is, Make an image of A's disk, on B, and then
later be able to just copy the image back on to A.
ADDENDUM: It would be nice to be able to compress the image on the
way out. Tar and gzip or bzip2 I suppose.
\_ Is this a question or an observation? I need a big-breasted
blonde to cook and clean for me.
\_ I'd rather have a big-breasted blonde to have sex with me.
\_ have u tried something like 'dd if=/dev/diska | ssh machineb
"dd of=/dev/diskb"'
\_ I think you're looking for something like Norton Ghost. Check
out g4u (Ghost for Unix), probably hosted on sourceforge. |
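The dd-over-ssh suggestion above, with the requested compression in transit (hostnames and device names are placeholders, and the round trip is demonstrated on ordinary files so the sketch can run anywhere):

```shell
# Against real disks it would look like:
#   dd if=/dev/hda bs=64k | gzip -c | ssh user@hostB 'cat > hda.img.gz'
#   ssh user@hostB 'cat hda.img.gz' | gunzip -c | dd of=/dev/hda bs=64k
# Demonstrated here on plain files:
src=$(mktemp); img=$(mktemp); restored=$(mktemp)
printf 'pretend this is a disk' > "$src"
dd if="$src" bs=64k 2>/dev/null | gzip -c > "$img"
gunzip -c "$img" | dd of="$restored" bs=64k 2>/dev/null
cmp "$src" "$restored" && echo "round trip ok"
```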
| 2004/9/22-23 [Computer/SW/WWW/Server, Computer/SW/Unix] UID:33708 Activity:kinda low |
9/22 The DNS/web hosters for <DEAD>a.b.com<DEAD> are doing a HTTP 301
redirect to my site <DEAD>c.d.com<DEAD>. How do I change the Apache
httpd.conf on <DEAD>c.d.com<DEAD> so that it appears to the web browser
that it is browsing <DEAD>a.b.com<DEAD>?
\_ You don't.
\_ Do you own <DEAD>a.b.com<DEAD>?
\_ you would have to redirect just a frame or something similar to
that. the url at the top of the browser will still reflect the
primary frame or div
\_ JavaScript can rewrite the URL line. |
| 2004/9/22 [Computer/SW/OS/Windows, Computer/SW/Unix] UID:33697 Activity:high |
9/22 Is there a way to repeat the previous command in the Windows2000
MS-DOS prompt running inside Windows? (like the up arrow in bash)
\_ ilyas, I am stomping your response to teach you a lesson about
stomping other people's posts.
\_ Wow, ilyas stomped the two "F3" answers with his update. What
an asshole.
\_ F3.
\_ up arrow works for me
\_ Neither "up arrow" nor "F3" do it in command.com in my version
of win2kpro. Do I have to run command.com with some special
option?
\_ You need to run the (included) shell cmd.exe. I said that
already but ilyas stomped it.
\_ sweet! thanks. Does windows have a command like "date"
that prints out the current time/date? does is also
have a way to run multiple commands in one line like:
soda % date ; ls ; date
\_ Do this: "date /t & time /t"
\_ YMWTS: (-pp)
\_ what's that mean?
\_ You Might Want To See:
\_ "You might want to see"?
\_ what does "(-pp)" mean?
\_ Thanks. Any way to get it to display seconds?
\_ time
http://www.microsoft.com/windowsxp/home/using/productdoc/en/default.asp?url=/windowsxp/home/using/productdoc/en/ntcmds.asp |
| 2004/9/21 [Computer/SW/Security, Academia/Berkeley/CSUA/Motd, Computer/SW/Unix] UID:33656 Activity:high |
9/21 Say, why don't the proponents of a logged motd actually hack it and
put it in /etc/motd.logged, and let people vote with their feet?
-- ilyas
\_ why don't you create /etc/motd.stupid and post your crap there? -tom
\_ Every account should have its own /etc/motd.<accountname>.
Only you will be allowed to post to your own motd. No one
else will be allowed to touch it, and /etc/motd.public
will be turned off. This way, everyone who wants to can rant
to their heart's content, and no one will have to worry about
their rants being baleated. Everyone else can just ignore
you if they want to. We can have special zones set up for
those that love to argue, as well - for instance,
/etc/motd.tomvsilyas, /etc/motd.freepernutzo,
/etc/motd.aaronallcapsrant, and /etc/motd.mormons. The AMC
can have his own empty file for his motd, but it will be
owned by root so that he can remain "anonymous." It will
be world readable but not writeable by anyone.
\_ and we could call these files ".plan" files, and have a
special command to read these motd files called "finger."
\_ Well, I was trying not to belabor the point too much, but
then again...
\_ you have just used the slippery slope tactic.
\_ And tom used a red herring AND an ad hominem in 1 line!
\_ uh, ilyas is the one with the red herring. -tom
\_ ilyas just volunteered! |
| 2004/9/16-17 [Computer/SW/Languages/Web, Computer/SW/Unix] UID:33570 Activity:kinda low |
9/16 Soda question: I have a file index.php in one of my public_html
subdirectories, but when I http to ~me/foo/ the PHP isn't getting
processed, though ~me/public_html/foo/index.php is getting served up.
What am I doing wrong?
\_ Do this "ln -s index.php index.html". I had to do the same thing
for my index.htm.
\_ Doesn't help. It loads the file but does not process it.
\_ why not mv index.htm index.html
\_ It was automatically generated, and I didn't want to rename it
every time it's generated.
\_ who was automatically generating it?
\_ Photoshop.
\_ Ah, I think what I'm looking for is for the line in httpd.conf
#AddType application/x-httpd-php .php
to be uncommented, but barring that, I can deal. -op
Did a workaround with a dummy index.html and the meta tag
http-equiv="Refresh"
\_ Never use meta refresh. It breaks the back button.
http://www.w3.org/QA/Tips/reback
\_ I just added index.php to DirectoryIndex in httpd.conf so
it should work correctly now. -brett
http://httpd.apache.org/docs-2.0/mod/mod_alias.html.en#redirect
\_ meta refresh works client side when you can't change the
server config... I don't think we have mod rewrite access
on (based on quick scan of the httpd.conf)
\_ on soda you can put this in your .htaccess:
Redirect /~user/foo/old.html <DEAD>new.url/whatever<DEAD>
Meta refresh sucks. Don't use Meta refresh.
\_ Great question. index.php has just been added to DirectoryIndex. |
| 2004/9/16 [Computer/SW/Unix] UID:33556 Activity:nil |
9/16 Bash Q: in csh 'which rm' will tell me if i have rm aliased
to something. What's the bash equivalent? Can't seem to come up
with the magic incantation for Google. Thanks.
\_ try 'type' instead of which. |
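A quick illustration of the difference (a sketch; output wording varies slightly between shells):

```shell
# 'which' only searches $PATH; the shell builtin 'type' also reports
# aliases, shell functions, and builtins.
type cd            # reports that cd is a shell builtin
command -v ls      # POSIX-portable PATH lookup
```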
| 2004/9/14 [Computer/SW/Unix] UID:33516 Activity:moderate 50%like:35981 |
9/13 How can I get grep to search for the characters '--' or '->'
(without the single quotes)? grep '--' * doesn't work. Thanks.
\- egrep -e '<expression>' --psb
\_ grep -- '--' *
(the first -- indicates the end of the list of options) |
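For example (the temp file and contents are illustrative):

```shell
# '--' ends option parsing, so a pattern may begin with a dash.
f=$(mktemp)
printf 'a->b\nx--y\nplain\n' > "$f"
grep -- '--' "$f"    # matches x--y
grep -- '->' "$f"    # matches a->b
```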
| 2004/9/8 [Computer/SW/Security, Recreation/Shopping, Computer/SW/Unix] UID:33417 Activity:very high |
9/8 What are some wedding registry web sites to use?
\_ http://www.uscav.com
\_ http://www.weddingchannel.com handles the registries for most
of the major stores, including Macy's, Williams-Sonoma, Pottery
Barn, Crate & Barrel... etc. Even REI!
\_ The most popular. Does what most people want to do. But of
course, if you do it with Wal-Mart (and I think Target too),
you get to walk around the store scanning whatever the hell you
feel like...
\_ You can also walk around with a scanner at a Williams-Sonoma
or Pottery Barn store.
\_ You can't scan catfood, cigarettes, and t.p. at WS or PB.
\_ It's really more a question of what store(s) you're registering at
isn't it?
\_ http://bushong.net/wishlist
\_ http://www.williams-sonoma.com |
| 2004/9/7 [Computer/SW/Unix] UID:33387 Activity:high |
9/7 Is there a way to FTP over an entire directory (in FreeBSD) to
another machine?
\_ scp -r <directory> user@anothermachine:
\_ I can't scp; I only have FTP access to the remote
machine.
\_ mget
\_ ftp, then "prompt off", then "mget -R" - danh
\_ any reason you cant just tar it first?
\_ Maybe he can't log in?
\_ wget -r |
| 2004/9/6-7 [Computer/SW/Unix] UID:33375 Activity:nil |
9/6 SAMBA Question: How do I use Samba to mount a home directory
on LINUX machine A from LINUX machine B? I knew how to do this
with Samba 2.x but I can't make this work in Samba 3.x. I
always get NT_STATUS_LOGON_FAILURE. Google didn't help. Thanks.
\_ check smbd log? turn debug option up? |
| 2004/9/2 [Computer/SW/Unix, Computer/SW/OS/Windows] UID:33307 Activity:high |
9/2 Windows is finding executables not in my path, does anyone know
how the lookup is done? it runs abc.exe even though abc.exe is
nowhere in my path. Thanks.
\_ Are you referring to Start button -> Run, or from a console window?
For example, if I run msconfig.exe from a DOS prompt, I get no
command found. If I run it from the Start -> Run, it gets run.
\_ start/run
\_ Windows stores the paths to certain executables in the
registry in
HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths |
| 2004/9/2 [Computer/SW/Unix] UID:33299 Activity:very high |
9/2 http://csua.berkeley.edu/~fonger WHO IS THIS?
\_ a transformer, robot in disguise.
\_ Who, fonger or the chick?
\_ That's not fonger?
\_ Hot!
\_ Wow! Is fonger the hottest chick on soda?
\_ I like lisha personally.
\_ karen has 'em both beat easy.
\_ Any pics of lisha and karen? There's none in their home pages.
\_ Go away chronic masturbator.
\_ that is difficult to imagine. Fonger is damn near perfect.
\_ Login: fonger Name: Fong Lin
Directory: /home/sequent/fonger Shell: /usr/local/bin/tcsh
Never logged in. No Mail. No Plan.
\_ If fonger has never logged in, how does he/she have a webpage?
\_ wtmp rotates
\_ It doesn't matter. Politburo will have the resolve to squish
these h0z3rs.
\_ What? I don't think this sentence makes sense.
\_ Not squishing fonger can only encourage our enemies and
confuse our friends. |
| 2004/8/29-30 [Reference/Religion, Computer/SW/Unix] UID:33204 Activity:moderate |
8/29 Jesus goes GNU:
http://www.newsforge.com/article.pl?sid=04/08/25/2220201
\_ Jesus is still dead. Sorry to disappoint.
\_ Your god plutonium will not save you. |
| 2004/8/27 [Computer/SW/Security, Computer/SW/Unix] UID:33177 Activity:moderate |
8/27 Is anyone else having a problem w/ spamassassin not working since
sometime late last night?
\_ Yes. I am using spamc.
\_ Fixed. Emailing root is the fastest way to get this resolved
when spamd hozes itself -njh (root)
\_ root messed up, root must be squished! |
| 2004/8/23-24 [Computer/Networking, Computer/SW/Languages/Web, Computer/SW/Unix] UID:33086 Activity:very high |
8/23 Is soda running a web proxy?
\_ Not to my knowledge, but if you need one (assuming you're talking
about a cgi proxy) I recommend setting up nph-proxy.cgi. It's
free and easy and works a charm. -John
\_ If you want a real http(s)/ftp proxy I recommend squid:
http://www.squid-cache.org
It isn't too hard to get running, and for low traffic
volume the default config provides reasonable performance.
\_ Seconded. But "real" http proxies don't work from behind
corporate firewalls, usually. CGI proxies do. -John
\_ note that running an unauthenticated web proxy is a violation of
campus policy. (And is likely to get you in trouble). -tom
\_ is that worse than fingering soda a few times per second?
\_ Only ONE MAN would DARE give me the raspberry! |
| 2004/8/20-21 [Computer/SW/Unix] UID:33042 Activity:moderate |
8/20 How do I make tcsh return an empty list for things like
foreach foo (bar/*) if there is no glob match, rather than
raising an error and stopping the script?
\_ the "nonomatch" variable doesn't quite do what you want,
but it's pretty close.
\-sigh, this is one of those lame things about csh.
you can do foreach i (`echo bar/*`) which will generate
an error, but the foreach will work. you can of course
do foreach i (`echo bar/* >&/dev/null`) to not see the
error. but in both of these cases, the loop will not run
even once. if you want to run the loop once with empty
args, you would need to foreach i (`echo bar/* >&/dev/null` " ")
[or something like that]. of course you can replace echo with
find bar -type f etc. you may be better off building the
filelist with find, then doing an if -z and then calling
the loop. otherwise you get weird behavior ...
set foo = "" then try ls $foo, ls "$foo", touch $foo, touch "$foo"
and then try with set foo = " ". you either have to be really
conservative in your quoting and such or you have to check
inputs ... it's actually pretty much impossible to be fully
conservative in csh to deal with arbitrary valid filenames
[!, space, tab, $, %, ? , * are all legit filechars].
this is a big pain in the ass on osX.
--mr. tcsh
\_ Yes, this is what perl is for. --other tcsh guy
\- well i think tcsh is fine for a lot of one shot things
and maybe if you "know" it will be well behaved. --psb |
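The POSIX sh analogue, for comparison: an unmatched glob stays literal rather than erroring, so testing for existence and skipping the literal pattern gives the empty-list behavior the OP wants (a self-contained sketch; the scratch directory is illustrative):

```shell
cd "$(mktemp -d)"; mkdir bar          # empty dir: the glob won't match
for f in bar/*; do
  [ -e "$f" ] || continue             # here $f is the literal 'bar/*'
  printf '%s\n' "$f"
done
echo "loop survived an empty glob"
```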
| 2004/8/20 [Computer/SW/Security, Computer/SW/Unix] UID:33038 Activity:high |
8/20 Would someone (root type person) make mail to motd world readable,
or is it so somehow already?
\_ Why?
\_ Password registration.
\_ mailinator.
\_ I want the password and updates to be soda accessible.
\_ rcpt to: motd@csua.berkeley.edu
553 5.3.0 motd@csua.berkeley.edu... motd does not accept mail. |
| 2004/8/19-20 [Computer/SW/Unix, Academia/Berkeley/CSUA/Motd] UID:33030 Activity:high |
8/19 So, can someone explain Posix to me?
\- POSIX IS THE STANDARD --psb
\_ I usually hear about it with respect to POSIX threads. Writing
threaded programs can be a mess if the API for each architecture
is different. So, every architecture supports POSIX threads.
Write once, run everywhere (with lots of #ifdefs). Win32 does
not support POSIX threads, though there are DLL shims.
\_ I'm sure the quality of information you get here will be WAY
better than a google search on "posix standard". You lazy bitch.
\_ actually KAIS MOTD is better than Google. Search for
posix there, you'll be surprised -kchang #2 fan
\_ Not to detract from KAIS MOTD or anything, but part of
the problem here is that google is TERRIBLE for searching
for specific technical questions. I am not entirely
sure why. -- ilyas
\_ I don't know what crack you're smoking. I'd rather
have no kais motd than no google.
\_ You are an idiot. Read again what the guy you are
responding to actually said. My god your idiocy
makes me sick.
\_ Actually, I did read what he said. And I did a
posix search on both, and the information on
kai's motd was both out of date, and largely
irrelevant. -dwc
\_ Yeah, because if I asked about C++ pointing me to the C++
standard would be really helpful. I'm trying to get a summary
and a pointer maybe to an introduction. I'm capable of using
google.
\_ Apparently you aren't. "standard" was an example, I'm sure
there are many better ways to refine your query to get the
sort of thing you want. |
| 2004/8/19 [Computer/HW/Memory, Computer/SW/Unix] UID:33018 Activity:nil 77%like:33005 |
8/18 Anyone have any experience getting a bootable .iso onto a USB
memory key (yes, it is bootable and has enough space). -John
\_ I just did this recently. I haven't found a way to get a .iso
directly on, but here's what I did:
1) Format the USB storage
2) use mkbt to get the boot sector from the .iso and then put it on
the USB storage. (Get mkbt at: http://www.nu2.nu/mkbt
3) copy files from .iso to memory key
I used daemon-tools to mount the .iso to rip the boot sector.
Oh, and if you want to use Ghost's boot disk creator, you can use a
virtual floppy drive so you don't have to use a physical floppy:
http://chitchat.at.infoseek.co.jp/vmware/vfd.html
\_ There's not even a readme for this. What exactly does it do?
\_ Readme for which? vfd is a virtual floppy. Install it and
you've got a virtual floppy drive. mkbt extracts boot sectors
and writes them.
\_ Oh, and this is where I got most of my help on this:
http://www.weethet.nl/english/hardware_bootfromusbstick.php
\_ Many thanks, swami. *bows* -John |
| 2004/8/18 [Computer/SW/Unix] UID:32985 Activity:very high |
8/17 How can one match wildcards '*' with files that start with a '.' in
bash and tcsh?
\_ .[^.]* works for most cases, but will not match "..foo"
\_ ls -A1 | grep '^\.'
\_ ls * .*
\_ i personally prefer .??* .
\_ ls `find . -type f -name .\*`, although it's not using the wildcard
facility in the shells.
\_ this will traverse down the directory tree. this behavior
would be different from what the others do and may or may not
be desirable.
\_ Use this then: echo `find . -type f -name .\* -maxdepth 1`
\_ At this point, what do you gain by using find? It would
be better if you were tall enough to use xargs or even
-exec. There would still be no point in using find though.
\_ Example without using "find" please?
\_ from above, .* or .??*
\_ ".*" will always exclude the parent directory
because ......? And ".??*" will still work if
there's a file named ".f" because ......?
\_ % echo .*
. [deleted] .. [deleted]
% echo .?*
[deleted] .. [deleted]
Try it yourself. Whether you want . depends
on what you'd want to do with the results,
I'd guess.
\_ BTW, ls `find [...]` should choke on file
names with a space. Probably other things too.
\_ Yes it does, but as in my -maxdepth example
above I already switched to echo `find ...`.
\_ echo only works because it doesn't use
the result of the find to touch the
file system. your example `find [...]`
will fail any time someone tries to do
that, so it's not a complete solution,
as well as being one wasteful of system
resources. again, i urge you to learn
either xargs or -exec. preferably xargs,
but like partha likes to say, you have
to be this tall to use xargs.
\_ Your example ".*" will fail even when
no one tries to "touch the file
system." (By "touch the file system"
I suppose you meant accessing the
files afterwards using the match
result, since expanding filename
wildcards already requires touching
the file system.)
\_ % mkdir ".try this"
% ls -d .*
./ ../ .try this/
you have access to a csh? show me
where it fails.
\_ Again, . and .. are not files.
\_ again,
% od .
% od ..
\_ % cd /
% which od
/usr/local/bin/od
% od .
od: .: Is a directory
0000000
% od ..
od: ..: Is a directory
0000000
%
And your point?
\_ /usr/bin/od
obtw,
% ls /usr/local/bin/od
ls: /usr/local/bin/od:
No such file or directory
in any case, i have to
leave for dinner.
\_ Indeed.
% touch "try this"
% ls
try this
% ls `find .`
ls: ./try: No such file or directory
ls: this: No such file or directory
.:
try this
Like I said, I wish you were tall enough
to use xargs or -exec.
\_ You being tall enough to use xargs or
-exec didn't help the fact that your
example doesn't even work in any
directory. So what's the point.
\_ which example? i merely copied from
solutions above. .* does work for
all files, including . and .. .
.?* will exclude ., which is perhaps
nice.
\_ Excuse me? . and .. are not files,
are they?
\_ sure they're not files.
\_ obtw,
% od .
% od ..
\_ also xargs, but you have to be this tall to use it.
\_ this task is trivial if you use 'set noglob'. |
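Putting the find/xargs advice from the thread together: dotfiles in the current directory only, excluding '.', and safe even when names contain spaces (a sketch; GNU find/xargs assumed for -maxdepth and -print0, and the scratch files are illustrative):

```shell
cd "$(mktemp -d)"
touch .hidden '.try this' visible
find . -maxdepth 1 -name '.*' ! -name '.' -print0 | xargs -0 ls -d
```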
| 2004/8/16-17 [Computer/SW/Unix, Computer/SW/Mail] UID:32946 Activity:moderate |
8/16 Anybody know of an existing program similar to pop-before-smtp
but for IMAP? It would use a current IMAP connection to allow
SMTP relaying from the same IP address as that IMAP connection.
\_ it's the same program, just change the regexp used to search
for login lines and change the logfile watched
\_ Nope, That does not work. IMAP connections are persistent and
the users don't login repeatedly if their mail client is
open. I need a program that uses the IMAP connection status
something that would use this sort of data to create a db
of allowed IP addresses: netstat --numeric-hosts | grep imap
If I can't find such software, perhaps I'll have to create it.
\_ Actually, I've rethought my previous post. I think
pop-before-smtp IS the right program, it just needs a
feature to not expire an IP address if that user is
still connected on the IMAP port. Simply allowing
relay for any IP connected to the IMAP port would
be a huge spam opening. -op
\_ the "one true way" is to set up SMTP AUTH. The better way to
do pop/imap-before-smtp is to get the pop/imap daemon to update
your relay tables themselves instead of getting another daemon
to watch them and clean up afterward.
\_ Run this over SSL if you want to be especially tidy. If you
need help under postfix, I can give you a hand. I found it to
be tricky when trying to set up under FreeBSD though (prob. due
to master.passwd.) -John |
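The "one true way" above amounts to a couple of lines of MTA config. A minimal postfix sketch (these are real postfix parameter names, but the SASL backend setup is left out and will vary by system):

```
# /etc/postfix/main.cf
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```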
| 2004/8/16 [Computer/SW/Unix, Computer/SW/Security] UID:32938 Activity:very high |
8/16 Some douche changed the password for the csuamotd nytimes account
because he said he didn't like political threads. They're not going
away and you just inconvenienced a lot of people. Where do you live?
I'd like to piss in your swimming pool.
\_ if you figure out who it is, post their name.
\_ I second that.
\_ Is there a "I forgot my password, please email it" option?
\_ Yes, and it will probably go to motd@csua.berkeley.edu
| 2004/8/13 [Computer/SW/Languages/Perl, Computer/SW/Unix] UID:32883 Activity:nil |
8/12 I am not a techie but I am trying to learn some perl to do
some text manipulation for my research. I am having trouble
with hashes. Can someone give me an example of how to read
in something like the password file (say, user, directory, shell)
into a hash?
\_ The question betrays a slight misunderstanding of hashes. A hash
is a simple set of key value pairs, so reading a password file
in would require a bit of thought to the structure. You could
make the key be uid and the value be a pointer to an array of
the passwd line, i.e. (username,pw,uid,gid,gcos,hdir,shell).
You could alternately key on username. You could make an array
of hash references where each array member is like
username => 'root',
pw => '*',
uid => 0,
gid => 0,
etc. But using a single 1-dimensional hash on something like
an entire passwd file would yield you the data from the last
line in the file.
\_ What would be the right structure to use to read in the
whole password table and hold it in a structure so I could
refer to some arbitrary piece of data like "the shell of
user user1"? Like if I wanted to merge the encrypted password
from the shadow file with the login and shell fields from the
password file? Thanks for your explanation.
\_ Now that you could do with a hash, but easier would be to use
(getpwnam('user1'))[8] to get the shell. But to do this for
the whole passwd file (as an example) would be
open PW,'/etc/passwd';
my %shells;
foreach my $pwentry (<PW>) {
chomp $pwentry;
($name,$shell) = (split /:/,$pwentry)[0,6];
$shells{$name} = $shell;
}
close PW;
Oh wait. I didn't catch the part about shadow password, but
this should give you an idea of where to start. |
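If all you need is one field rather than a full hash, a shell/awk one-liner over the flat file does it too (a sketch, not the Perl approach above; assumes a standard colon-separated /etc/passwd and uses "root" as the example user):

```shell
# "the shell of user root": field 7 of the passwd line
awk -F: '$1 == "root" { print $7 }' /etc/passwd
```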
| 2004/8/12-13 [Computer/SW/Unix] UID:32857 Activity:high |
8/12 Is there a way to combine two files without writing? I mean
unlinking the 2nd file but instead of freeing its blocks, adding them
to the end of the 1st file. Suppose gap is not an issue.
\_ Instead of asking for an answer to some obscure no-details technical
question, how about you ask us what you're actually really trying to
do and maybe we can then help you solve the real problem?
\_ Well, I want to make some kind of revision control except I
don't want to do diffs (suppose it's binary) and just want to
keep copies of old files up to a certain number (sort of like
what VMS did). As such, I don't want to read and write the
files but just chain old copies together with some control info
recorded separately. If you have a better solution I would like
to hear it, but my original question is quite well specified.
\_ How about moving the files into a special directory? I know
it's boring, but it would be easy, portable, and work.
\_ You're describing tar. Rename the old file and move it into
a tar file you've created for this purpose. Tar will append,
allow searching, has control info, allow extracting by file
name, etc, etc. Don't re-invent the wheel.
\_ Yeah it is like tar, but tar actually has to read the file
and then copy it bit by bit to another file. I want to
just append the list of data blocks of the 2nd file to
that of the 1st file before unlinking it.
\_ Why?
\_ More efficient????? You don't have to read and write
1000 50 MB files if you are not changing them. |
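There is no portable Unix call that splices one file's blocks onto another, but the cheap trick closest to "keep old copies without copying data" is a hard link: it shares the blocks, and stays valid as long as updates replace the file by rename rather than rewriting it in place (a sketch; filenames are illustrative):

```shell
cd "$(mktemp -d)"
printf 'v1' > data
ln data data.1                               # snapshot: no blocks copied
printf 'v2' > data.new && mv data.new data   # rename-over update
cat data.1                                   # still the old version: v1
```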
| 2004/8/12 [Computer/SW/Unix] UID:32855 Activity:nil |
8/12 Do csh or tcsh have a way to redirect stdout and stderr to separate
files?
\_ (myCommand myArgs > myStdout.file) >& myStderr.file --- yuen |
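For comparison, Bourne-family shells split the two streams directly, without the subshell trick csh needs ('mycmd' here is a stand-in function for illustration):

```shell
cd "$(mktemp -d)"
mycmd() { echo out; echo err >&2; }
mycmd > stdout.log 2> stderr.log
cat stdout.log    # out
cat stderr.log    # err
```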
| 2004/8/11-12 [Computer/SW/Unix] UID:32824 Activity:high |
8/11 If I have two libraries libfoo.so.1 and libfoo.so.2, and I do
something like 'cc foo.c -lfoo', is there a way to specify which
version of libfoo the linker should link in, without having a
libfoo.so symlink to the one I want? Thanks.
\_ How about "cc foo.c libfoo.so.2"?
\_ That works if I always have the full path handy, but
how about a way where the linker will still use its search
path? Thanks.
\_ This is usually done with symlinks. Create one and be done.
\_ This is a crappy hack. I'm not saying your answer is wrong, but
this is a crappy hack. Matt Dillon has mentioned variable symlinks
(with a fair amount of hand waving) as a possible solution; are
there any better ones (say for dynamic linking?) -!op
\_ A crappy hack that every UNIX OS uses.
\_ what is a variable symlink?
\_ a symlink that can do shell variable expansions?
\_ I thought symlink expansion happens at the OS level,
not in the shell? |
| 2004/8/6 [Computer/SW/Unix] UID:32733 Activity:low |
8/5 Dear network administrators, I'm just curious what percentage of the
traffic is http, ftp, ssh, finger, ping, etc? And is it really
possible to do a successful DoS via finger?
\_ Depends entirely on what your network does. Big corporation?
Internally? To the Internet? University? Carrier? It varies
tremendously. And you can DoS using pretty much anything, if you
do it enough. -John
\-well obviously this is sort of a trivial question for a
LAN ... if you are a large oil company and are crunching a
lot of data, maybe it is NFS, maybe it is AFS ... maybe you
are using computation GRIDS ... but you should probably be
surprised if it is Quake traffic. As for the internet at large
which is what i assume you are asking, I havent been following
the internet measurement area for a while but a few things:
tcp is +85%. udp is a distant second. the size of flows and
packets have some interesting distribution properties [e.g.
obviously a lot of syn/ack/fin/rst "small packets"] as
well as some directionality properties [hence asymmetric
bandwidth provisioning makes sense], as well as some time
of day, day of week effects [which are what you expect ...
weekends are quieter] and there are some hour of day properties
but i dont remember how geography was factored into those
measurements. and now for protocols ... yes http traffic is
something like 75% of all traffic. there is a couple of
percent DNS background [the percentage has come down a bit
over the last 10yrs]. ftp as a fraction has come down and
is now in the single digits. mail is also in the same range
but i dont remember how this has changed over time since
spam took off. unsurprisingly ftp transactions are larger
than email, so the same number of bytes represents much
fewer transactions. i believe the news background has
shrunk in percentage terms but dont know what the absolute
flow volumes are. ssh, telnet, rlogin are all noise.
i dont know much about what i'll call web helper applications
like streaming audio/video. also i dont know what p2p
has done to these numbers. i also dont know to what extent
the public internet is used for online WAN gaming. i doubt
netrek is king though :-). you can look maybe around the
CAIDA website ... they might have something up to date,
look for maybe kc claffy. disclaimer: my numbers are biased
toward byte volumes, not flows or packet counts. most
importantly this is pre a lot of p2p take off. there were
some early trend numbers but i dont know what the picture
looks like after the napster rollercoaster, the rise of
gnutella, bittorrent etc. more involved statistical analysis
of flows is beyond the scope of the motd. if you are interested
in a narrow question you can send me a note. --psb
\_ What Partha said, and ditto about specific questions--I mainly
know about banking/insurance networks (Internet and LAN.)
Also you may want to differentiate between # sessions and
# packets/session (as Partha indicated.) Use something like
EtherApe on a core L3 switch SPAN port to give you a cute
graphical overview of what/how much is out there. In a
corporate LAN, your highest overhead's bound to be Windows
fileshare, web & email traffic. Also depends on what part
of a network you're looking at (e.g. some nets are dedicated
server segments, where you might see mainly SQL-type stuff
going back and forth, etc.) -John |
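The byte-volume shares psb quotes can be computed from any per-flow byte log with a short awk pipeline. The flow file and its numbers below are invented for illustration (chosen to echo the rough shares cited above: http around 75%, ftp in single digits, ssh as noise):

```shell
# Made-up flow log: protocol name and byte count per flow.
cat > /tmp/flows.txt <<'EOF'
http 600000
http 150000
smtp 150000
ftp 80000
dns 15000
ssh 5000
EOF

# Sum bytes per protocol and print each protocol's share of total bytes,
# largest first. With these numbers, http comes out at 75.0%.
awk '{ bytes[$1] += $2; total += $2 }
     END { for (p in bytes)
             printf "%-5s %5.1f%%\n", p, 100 * bytes[p] / total }' /tmp/flows.txt |
  sort -k2 -rn
```

On a real capture, something like `tshark -q -z io,phs -r capture.pcap` gives a comparable protocol-hierarchy breakdown, and EtherApe (as John suggests) shows the same thing graphically off a SPAN port.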