Berkeley CSUA MOTD:Entry 35370
2004/12/21 [Computer/SW/Unix] UID:35370 Activity:kinda low
12/21   uniq can get rid of two identical lines if they occur right after
        each other.  But how do you get rid of two identical lines when
        they don't occur right after each other?  Using sort works, but
        then all the lines are out of order, which is a problem.
        \_ perl.  Keep a counter for each unique pattern seen so far.
           Hell, you can do it with a temp file and just /bin/sh scripts.
           \_ perl -ne '$m{$_}++||print' <file>
               this keeps the first occurrence of each line (the uniq
               behavior); it doesn't kill all copies of duplicated
               lines. -vadim
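(Not from the thread: the same keep-the-first-occurrence behavior is often written as an awk one-liner, equivalent to the perl above.)

```shell
# Print each line only the first time it is seen, preserving input order.
# seen[$0]++ is 0 (false) the first time a line appears, so the pattern
# !seen[$0]++ is true exactly once per distinct line.
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'
# -> a, b, c (one per line)
```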
        \_ do it scalably w/ bash, e.g. let the sort/uniq tools do
           the heavy lifting:
           n=0
           while IFS= read -r line ; do echo "$n $line" ; n=$(($n + 1)); done \
           | sort -k 2 | uniq -f 1 | sort -n \
           | while read -r num rest ; do echo "$rest" ; done
           \_ cat -n <file> | sort -uk 1.8 | sort | cut -c8-  -vadim
        \_ to do what you really asked, you can replace sort -uk 1.8 with
           sort -k 1.8 | uniq -uf1. -vadim
        \_ another one (zsh):
           typeset -A m; while read l; do [ $m[$l] ] || echo $l && \
           m[$l]=1; done -vadim
        \_ /tmp/unique.c is something I wrote on SunOS5 a few years ago.
           --- yuen
           \_ waaaay unsafe. The least you could do is store md5s in the
              hash. -vadim
              \_ It's just some quick utility I came up with to discard
                 duplicated path names.  It wasn't meant to be secure. --- yuen
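For the "kill all duplicates" reading of the question (drop every line that appears more than once, keep the rest in input order), a two-pass awk sketch, not from the thread: the first pass counts occurrences, the second prints only lines whose count is exactly one.

```shell
# Pass 1 (NR==FNR, i.e. still reading the first file): count each line.
# Pass 2: print only lines that occurred exactly once, in original order.
f=$(mktemp)
printf 'a\nb\na\nc\n' > "$f"
awk 'NR==FNR { count[$0]++; next } count[$0] == 1' "$f" "$f"
# -> b, c (both copies of "a" are dropped)
rm -f "$f"
```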