AWK/BASH: how to match a field in one file from a field in another?

Awk:

#!/bin/bash
awk 'FNR==NR{a[$0]++;next}{b[$0]++}END{for(i in a){for(k in b){if(a[i]==1 && i ~ k){print i}}}}' file1 file2
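A runnable sketch of the command above, using hypothetical sample data (file1 holds full records, file2 holds just the names to match):

```shell
#!/bin/sh
# Demo of the awk answer above on made-up sample data.
# file1: full records (name plus data fields); file2: bare names.
tmp=$(mktemp -d); cd "$tmp" || exit 1

cat > file1 <<'EOF'
John Smith 10 20 30
Jane Doe 40 50 60
Bob Jones 70 80 90
EOF

cat > file2 <<'EOF'
John Smith
Bob Jones
EOF

# a[] counts each file1 line; b[] collects the file2 names.  In END,
# every file1 line that occurs exactly once (a[i]==1) and matches a
# name (i ~ k) is printed.  "for (i in a)" iterates in unspecified
# order, so pipe through sort for a stable result.
awk 'FNR==NR{a[$0]++;next}{b[$0]++}
     END{for(i in a) for(k in b) if(a[i]==1 && i ~ k) print i}' file1 file2 | sort
```

Note that the names from file2 are used as regular expressions here, so names containing regex metacharacters would need escaping.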

Expanding on codaddict's answer: grep -f file2 file1 | sort | uniq. This will remove lines that are exactly the same, but a side effect (which may be unwanted) is that your data file will now be sorted. It also requires the lines to be exactly identical, which is not the case in your example data: the names are the same, but the data after those names is different.

uniq can skip a fixed number of fields or characters, but that won't work on your data because your names have variable length and a variable number of fields. If you know your data fields are always the last 3 fields on a line, then you can do this: grep -f file2 file1 | sort | rev | uniq -f 3 | rev. Your output will contain only one line per name, but which one? The lexicographically lowest, because the input was sorted (sort is needed because uniq only collapses adjacent duplicates).
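Here is the rev/uniq trick in action on hypothetical sample data, assuming the last 3 fields are data and that rev (from util-linux) is available:

```shell
#!/bin/sh
# Demo of: grep -f file2 file1 | sort | rev | uniq -f 3 | rev
# Assumes each record ends in exactly 3 data fields.  Sample data is made up.
tmp=$(mktemp -d); cd "$tmp" || exit 1

cat > file1 <<'EOF'
John Allen Smith II 99 98 97
Jane Doe 40 50 60
John Allen Smith II 10 20 30
EOF

cat > file2 <<'EOF'
John Allen Smith II
Jane Doe
EOF

# rev reverses each line character-wise, so the 3 data fields come first;
# uniq -f 3 skips those 3 fields and compares on the (reversed) names,
# collapsing adjacent duplicates; a second rev restores the line.
grep -f file2 file1 | sort | rev | uniq -f 3 | rev
```

As the answer notes, the surviving "John Allen Smith II" line is the lexicographically lowest one (10 20 30), because sort ran before uniq.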

If you don't want to sort it first, or need to be careful about which of the lines are dropped, then an awk, perl, ruby, or python solution using associative arrays will probably work best.

Yeah, that's exactly my problem. The names can be of variable length and the data after the name is different. I am just looking to get the first occurrence of, say, John Allen Smith II.

I'll look into some associative arrays with awk. Thanks for the info. – Tomek Oct 16 '10 at 6:12.
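One way to do what this exchange asks for, keeping only the first line for each name, is an awk associative-array sketch (the sample data and the assumption that each record starts with a name from file2 are hypothetical):

```shell
#!/bin/sh
# Sketch: keep only the FIRST file1 line for each name listed in file2,
# without sorting, using awk associative arrays.  Sample data is made up.
tmp=$(mktemp -d); cd "$tmp" || exit 1

cat > file1 <<'EOF'
John Allen Smith II 10 20 30
John Allen Smith II 99 98 97
Jane Doe 40 50 60
EOF

cat > file2 <<'EOF'
John Allen Smith II
Jane Doe
EOF

# Pass 1 (FNR==NR): load the names from file2 into names[].
# Pass 2: for each file1 line, find the name it starts with
# (index()==1 means "prefix match"); seen[] ensures only the first
# occurrence is printed, and input order is preserved.
awk 'FNR==NR { names[$0]; next }
     { for (n in names)
         if (index($0, n) == 1 && !seen[n]++) { print; break } }' file2 file1
```

This keeps the first occurrence by input position rather than the lexicographically lowest, which matches what the comment above is after.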

You can use grep as: grep -f file2 file1 # file2 is the file with the names. The -f option of grep reads the patterns to search for from the given file. To remove exact duplicate lines from the output you can use sort as: grep -f file2 file1 | sort -u.
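A minimal demonstration of this answer, with hypothetical sample files:

```shell
#!/bin/sh
# Demo of grep -f plus sort -u on made-up sample data.
tmp=$(mktemp -d); cd "$tmp" || exit 1

cat > file1 <<'EOF'
Jane Doe 40 50 60
Jane Doe 40 50 60
Bob Jones 70 80 90
EOF

cat > file2 <<'EOF'
Jane Doe
EOF

# -f file2: take the search patterns, one per line, from file2.
# sort -u: sort and drop lines that are exact duplicates.
grep -f file2 file1 | sort -u
```

The duplicate "Jane Doe 40 50 60" line collapses to one; "Bob Jones" never matches a pattern. As discussed below, sort -u only helps when the duplicate lines are identical end to end.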

– Tomek Oct 16 '10 at 5:48

I specified the -F flag wrong (I replaced -f with -F), so the final command grep -f file2 -F file1 worked. Thanks for the help. – Tomek Oct 16 '10 at 5:56

For the unique part, I was looking to remove the lines from file1 that have just the names repeated (the other columns have different data). – Tomek Oct 16 '10 at 6:00

If there is a chance of duplicate lines being everywhere, sort is needed.

– ghostdog74 Oct 16 '10 at 6:04

@ghostdog74 Duplicate lines (names) only occur in the same place, right after each other; the problem is that the rest of the line (after the name) can be different. – Tomek Oct 16 '10 at 6:07

