Hello,
I Googled and checked several answer posts, but perhaps I am not wording it correctly in the search engines.
I have a lookup table and I want to remove duplicates from the table itself, not just when the table is being used at search time.
There are 3 fields: **ACCT**, **AUID**, **ADDR**.
It is quite possible that a user may log in from another PC, so I need to keep entries where the **ACCT** and **AUID** are the same but the **ADDR** is different. I am using *append=true* in my *outputlookup* command to add new entries. The issue is that all entries are being added to the lookup, including those that duplicate existing values of those 3 fields.
Here is my SPL (which is running in a dashboard).
index="linuxevents" AND host=rub.us AND source="/var/log/audit/audit.log"
AND acct="$userId_tok$"
| stats count by acct, auid, addr
| fields acct, auid, addr
| head limit=0
| table acct, auid, addr
| rename acct AS ACCT, auid AS AUID, addr AS ADDR
| table ACCT, AUID, ADDR
| outputlookup myAAAlookup.csv append=true
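Would it make more sense to merge the existing lookup contents back into the results and deduplicate before writing, instead of appending? A rough, unverified sketch of what I mean for the last lines of the search (using *inputlookup append=true* to pull the current lookup rows into the result set before *dedup*):
| table ACCT, AUID, ADDR
| inputlookup append=true myAAAlookup.csv
| dedup ACCT,AUID,ADDR
| outputlookup myAAAlookup.csv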
I am aware that I can run this to remove duplicates at search time.
| inputlookup myAAAlookup.csv
| dedup ACCT,AUID,ADDR
| outputlookup myAAAlookup.csv append=true
However, I want to remove all duplicate entries from the lookup table itself. At this point in testing the table should contain only 5 rows; instead, there are over 300 duplicate rows, and the count grows each time the dashboard is run.
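Would dropping *append=true* from that cleanup search be enough to overwrite the file with only the deduplicated rows? Something like this (unverified):
| inputlookup myAAAlookup.csv
| dedup ACCT,AUID,ADDR
| outputlookup myAAAlookup.csv
Or is there a better way to clean the existing duplicates out of the file?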
Thanks and God bless,
Genesius