You might want to look at the uniq and sort applications.

    ./yourscript.ksh | sort | uniq

(FYI, yes, the sort is necessary in this command line; uniq only strips duplicate lines that are immediately after each other.)

EDIT: Contrary to what has been posted by Aaron Digulla in relation to uniq's command-line options: given the following input:

    class
    jar
    jar
    jar
    bin
    bin
    java

uniq will output all lines exactly once:

    class
    jar
    bin
    java

uniq -d will output all lines that appear more than once, and it will print them once:

    jar
    bin

uniq -u will output all lines that appear exactly once, and it will print them once:

    class
    java
Does the job, thanks! – Brabster Mar 6 '09 at 10:35.
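The three behaviours described in the edit above can be checked directly; a minimal sketch (the input file name is made up):

```shell
# Reproduce the edit above: the sample input already has its duplicates
# adjacent, so uniq can be applied directly without sorting first.
printf 'class\njar\njar\njar\nbin\nbin\njava\n' > input.txt

uniq input.txt      # each line once: class jar bin java
uniq -d input.txt   # only repeated lines, once each: jar bin
uniq -u input.txt   # only never-repeated lines: class java
```

On input that is not already grouped, each of these must be preceded by sort.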
    ./script.sh | sort -u

This is the same as monoxide's answer, but a little less verbose.
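A quick check that the two forms agree, using a throwaway input:

```shell
# sort -u should produce exactly what sort | uniq produces.
a=$(printf 'b\na\nb\nc\na\n' | sort -u)
b=$(printf 'b\na\nb\nc\na\n' | sort | uniq)
echo "$a"                          # a, b, c -- one per line
[ "$a" = "$b" ] && echo identical
```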
For larger data sets where sorting may not be desirable, you can also use the following perl script:

    ./yourscript.ksh | perl -ne 'if (!defined $x{$_}) { print $_; $x{$_} = 1; }'

This basically just remembers every line output so that it doesn't output it again. It has the advantage over the "sort | uniq" solution in that there's no sorting required up front.
Note that sorting of a very large file is not an issue per se with sort; it can sort files which are larger than the available RAM+swap. Perl, OTOH, will fail if there are only a few duplicates. – Aaron Digulla Mar 6 '09 at 11:06

Yes, it's a trade-off depending on the expected data. Perl is better for a huge dataset with many duplicates (no disk-based storage required). A huge dataset with few duplicates should use sort (and disk storage). Small datasets can use either. Personally, I'd try Perl first, then switch to sort if it fails. – paxdiablo Mar 6 '09 at 11:33

Since sort only gives you a benefit if it has to swap to disk. – paxdiablo Mar 6 '09 at 11:34

I like the ability to not sort the list, thanks – Davide Mar 6 '09 at 16:51
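The same remember-what-you've-seen idea is often written as an awk one-liner; a sketch with a made-up sample input (this variant is not from the thread):

```shell
# awk '!seen[$0]++' prints a line only the first time it appears,
# preserving input order -- same effect as the perl snippet above.
printf 'java\ntar\njava\ngz\ntar\n' | awk '!seen[$0]++'
# -> java tar gz (one per line)
```

Like the perl version, it keeps every unique line in memory, so the same trade-off against sort applies.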
With zsh you can do this:

    zsh-4.3.9t% cat file
    tar
    gz
    java
    gz
    java
    tar
    class
    class
    zsh-4.3.9t% u=($(.
Pipe them through sort and "uniq". This removes all duplicates. "uniq -d" gives only the duplicates, "uniq -u" gives only the unique ones (strips single items).
Gotta sort first by the looks of it – Brabster Mar 6 '09 at 10:35

Yes, you do. Or more accurately, you need to group all the duplicate lines together. Sorting does this by definition though ;) – Matthew Scharley Mar 6 '09 at 10:37

Also, uniq -u is NOT the default behaviour (see the edit in my answer for details) – Matthew Scharley Mar 6 '09 at 10:49
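A small demonstration of why the grouping matters, with a made-up three-line input:

```shell
# Without sort, uniq only collapses adjacent duplicates,
# so the second "a" survives; after sort it is removed.
printf 'a\nb\na\n' | uniq          # -> a b a
printf 'a\nb\na\n' | sort | uniq   # -> a b
```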