Find unique lines

Tags: linux, sorting, unique, uniq Author: makaay_go Date: 2012-11-22

How can I find the unique lines and remove all duplicates from a file? My input file is

1
1
2
3
5
5
7
7

I would like the result to be:

2
3

sort file | uniq will not do the job: it prints every value exactly once, rather than dropping the duplicated values entirely.

Other Answer1

uniq has the option you need:

   -u, --unique
          only print unique lines
$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3

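If the input is not sorted and you want to preserve the original line order, a two-pass awk sketch can do the same job as uniq -u (the file name is illustrative):

```shell
# Pass 1 counts occurrences of each line; pass 2 prints only
# the lines that occurred exactly once, in their original order.
awk 'NR==FNR { count[$0]++; next } count[$0] == 1' file.txt file.txt
```

This avoids the sort entirely, at the cost of reading the file twice and holding the counts in memory.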
Other Answer2

uniq -u < file will do the job.

comments:

No need for the redirection.
Yeah, I know. I did it out of habit.
Only if the file is already sorted
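As the last comment notes, uniq only compares adjacent lines, so unsorted input must be sorted first. A minimal sketch, assuming an unsorted input file named file:

```shell
# sort brings duplicate lines next to each other;
# uniq -u then keeps only the lines that never repeat.
sort file | uniq -u
```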

Other Answer3

This was the first thing I tried:

skilla:~# uniq -u all.sorted  

76679787
76679787 
76794979
76794979 
76869286
76869286 
......

After running cat -e all.sorted:

skilla:~# cat -e all.sorted 
$
76679787$
76679787 $
76701427$
76701427$
76794979$
76794979 $
76869286$
76869286 $

Every second line has a trailing space :( After removing all trailing spaces, it worked!
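One way to apply that fix in a pipeline, assuming the file is named all.sorted as above, is to strip trailing whitespace with sed before handing the lines to uniq:

```shell
# Remove trailing whitespace so "76679787" and "76679787 "
# compare equal, then keep only the non-duplicated lines.
sed 's/[[:space:]]*$//' all.sorted | uniq -u
```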

thank you

Other Answer4

Use as follows:

sort < filea | uniq > fileb

comments:

This isn't correct, I think you meant: uniq -u filea > fileb
I copied your data and ran it, and it works: sort < filea.txt | uniq > fileb.txt. Maybe you left out the extensions. I am using Mac OS X; you have to go from filea.txt to some other fileb.txt.
There is no need for the redirection with sort, and what's the point of piping to uniq when you could just do sort -u file -o file? What you're doing is removing the duplicate values, i.e. your fileb contains 1,2,3,5,7. The OP wants only the unique lines, which are 2 and 3, and that is achieved by uniq -u file. The file extension has nothing to do with it; your answer is wrong.
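A small demonstration of the difference this comment describes, using the OP's sample data (file names are illustrative):

```shell
printf '1\n1\n2\n3\n5\n5\n7\n7\n' > filea   # the OP's input

sort -u filea          # deduplicates: prints 1 2 3 5 7
sort filea | uniq -u   # unique-only:  prints 2 3
```

sort -u collapses each run of duplicates down to one copy, while uniq -u discards any line that appears more than once, which is what the question asks for.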