
I want to count how many duplicate messages there are in my log file. For example, if the following were my log:

[2018-10-17 15:25:24,243] [ERROR] python - Users: Unable to retrieve 1  
[2018-10-17 15:25:24,272] [ERROR] python - Users: Unable to retrieve 2  
[2018-10-17 15:25:24,280] [ERROR] python - Users: Unable to retrieve 3  
[2018-10-17 15:25:24,281] [ERROR] python - Users: Unable to retrieve 2  
[2018-10-17 15:26:45,759] [ERROR] python - CATP: Unable to retrieve 1  
[2018-10-17 15:26:48,432] [ERROR] python - Users: Unable to retrieve 3  
[2018-10-17 15:26:48,460] [ERROR] python - Users: Unable to retrieve 1  

I want the output to be:

Users: Unable to retrieve 1 : 3  
Users: Unable to retrieve 2 : 2  
Users: Unable to retrieve 3 : 2

1 Answer


If I'm not misinterpreting your question, a single line of AWK suffices.

awk '{m[$NF]++} END{for (k in m) print "Users: Unable to retrieve", k, ":", m[k]}' test.txt

Where test.txt is your log file.

It's a straightforward one-liner: it uses the last field (`$NF`) of each line as an array key, increments a counter for that key, and prints the accumulated counts in the `END` block. Note that this keys on the trailing number only, so the `CATP` line is counted together with the `Users` lines, which matches the counts in your expected output.
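Here is a self-contained sketch you can run to verify the counts; it recreates the sample log from the question as `test.txt` (a filename assumed here, not fixed by the question) and pipes the result through `sort`, since `for (k in m)` in awk iterates in an unspecified order:

```shell
#!/bin/sh
# Recreate the sample log from the question.
cat > test.txt <<'EOF'
[2018-10-17 15:25:24,243] [ERROR] python - Users: Unable to retrieve 1
[2018-10-17 15:25:24,272] [ERROR] python - Users: Unable to retrieve 2
[2018-10-17 15:25:24,280] [ERROR] python - Users: Unable to retrieve 3
[2018-10-17 15:25:24,281] [ERROR] python - Users: Unable to retrieve 2
[2018-10-17 15:26:45,759] [ERROR] python - CATP: Unable to retrieve 1
[2018-10-17 15:26:48,432] [ERROR] python - Users: Unable to retrieve 3
[2018-10-17 15:26:48,460] [ERROR] python - Users: Unable to retrieve 1
EOF

# Count occurrences of the last whitespace-separated field,
# then sort for deterministic output.
awk '{m[$NF]++} END{for (k in m) print "Users: Unable to retrieve", k, ":", m[k]}' test.txt | sort
```

This prints `Users: Unable to retrieve 1 : 3`, `... 2 : 2`, and `... 3 : 2`, matching the expected output above.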
