I'm trying to do some least-common-occurrence hunting in our environment, and would like to build a search that shows me hosts with low counts of user logons (say, fewer than 5?).
So, if my machine had me log in 30 times and a PC tech once, the search would surface the PC tech on my machine, even though that logon is legitimate.
↧
How do I search for low counts of specific user logons per host?
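A minimal sketch of such a search, assuming Windows Security logon events (EventCode 4624) with `host` and `user` fields already extracted; the index, sourcetype, and field names here are assumptions to adjust for your environment:

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4624
| stats count by host, user
| where count < 5
| sort count
```

Counting by the host/user pair (rather than by user alone) is what lets a rare logon stand out even on a machine with a heavy legitimate user.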
↧
How do I calculate total time of employees from security card system?
I would like to create a report that verifies when and for how long each employee is in the building. Splunk indexes data from the security system, which supplies a CSV file nightly. I am running into a problem because each reader has an entry and an exit side, but an employee can come in one door and exit through a totally different door. Is there a way to correlate entry and exit for an employee, calculate the duration of that stay, and then calculate the total time the employee is in the building, assuming that the first event is an entry, the second is an exit, the third is an entry, the fourth is an exit, and so on?
The indexed data looks like this:
Timestamp, EventTable, extractedEventType, Controller, Full Name
2018-08-23 06:02:50.247,Events_268,515-0 ,VertX A-Interface 0-Reader 1, Barney Rubble
2018-08-23 07:14:53.500,Events_268,515-0 ,VertX B - V100 0 - Reader 2, Fred Flintstone
2018-08-23 09:19:05.897,Events_268,515-0 ,VertX A-Interface 0-Reader 1, Barney Rubble
2018-08-23 10:29:17.097,Events_268,515-0 ,VertX B - V100 4 - Reader 1, Fred Flintstone
2018-08-23 10:55:40.503,Events_268,515-0 ,VertX A-Interface 0-Reader 2 , Fred Flintstone
2018-08-23 10:59:22.877,Events_268,515-0 ,VertX B - V100 4 - Reader 1, Barney Rubble
2018-08-23 14:56:45.613,Events_268,515-0 ,VertX A-Interface 0-Reader 1 , Barney Rubble
2018-08-23 15:44:36.363,Events_268,515-0 ,VertX B - V100 0 - Reader 2, Fred Flintstone
What I would like to create is a report that shows
Date Full Name Total Time
2018-08-23 Barney Rubble 7.5 hours
2018-08-23 Fred Flintstone 8.0 hours
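Under the stated assumption that swipes strictly alternate entry/exit per person, one hedged sketch pairs each even-numbered swipe of the day with the swipe before it. The index and sourcetype names are placeholders; the `Full Name` field is taken from the sample:

```
index=badge sourcetype=security_csv
| eval date=strftime(_time, "%Y-%m-%d")
| sort 0 "Full Name" _time
| streamstats count as swipe by "Full Name" date
| streamstats current=f window=1 last(_time) as prev_time by "Full Name" date
| eval stay=if(swipe % 2 == 0, _time - prev_time, null())
| stats sum(stay) as total_secs by date, "Full Name"
| eval "Total Time"=round(total_secs/3600, 1)." hours"
| table date, "Full Name", "Total Time"
```

One caveat: an odd number of swipes in a day (a missed badge-out) leaves a dangling entry with no matching exit, and this sketch silently drops that stay from the total.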
↧
↧
Issue while attempting to restore a KVStore collection
Hi Everyone,
I am currently trying to achieve a quite simple process: set up a scalable way to backup/restore some KVStore collections from production Splunk servers.
Following the appropriate Splunk documentation (link below), I was able to successfully back up my collections as JSON-formatted files.
[https://docs.splunk.com/Documentation/Splunk/7.1.2/Admin/BackupKVstore][1]
However, when I try to restore them to the same production server, it fails with the following errors (from /opt/splunk/var/log/splunkd.log):
splunk: ERROR 1535113443.183 KVStorageProvider - An error occurred during the last operation ('dropCollection', domain: '5', code: '26'): ns not found
splunk: WARN 1535113443.188 KVStoreAdminHandler - No data found to restore matching specified parameters archiveName="backupfile.tar.gz", appName="all apps", collectionName="collection"
splunk: ERROR 1535113443.188 KVStoreAdminHandler - \n
This seems quite cryptic to me. Has anyone encountered a similar issue or error message, or does anyone know some troubleshooting tips that could help me solve this?
[1]: https://docs.splunk.com/Documentation/Splunk/7.1.2/Admin/BackupKVstore
↧
How do I search for all the IPs that are located in the domain controller?
This is my first time using Splunk and I don't know many commands. I am looking for a search that can get all the IPs in the domain controller along with their account names.
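A hedged sketch, assuming the domain controller's Windows Security logs are indexed and the Account_Name and Source_Network_Address fields are extracted (as the Splunk Add-on for Windows typically does; the index and sourcetype names are assumptions):

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4624
| stats values(Source_Network_Address) as IPs by Account_Name
```

EventCode 4624 is a successful logon; `stats values(...)` groups every source IP seen per account.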
↧
Getting incorrect host name
We have a server that was cloned and given a different hostname. The old server was shut down, and the team is now using the new server with the different hostname. Looking at the deployment server, the host is still listed under the old name, and events from the new cloned server still show the old server name from before it was cloned.
We want the new hostname to be reflected. Should we delete the server as a client and add it as a client again by restarting the forwarder? That should pick up the new hostname, right?
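If the clone kept the old machine's Splunk configuration, the forwarder may still be reporting a hard-coded host value rather than asking the OS. A hedged sketch of the settings worth checking on the cloned UF (`new-hostname` is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = new-hostname

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = new-hostname
```

Cloned forwarders can also carry over the original instance's GUID, which confuses the deployment server's client list; running `splunk clone-prep-clear-config` on the clone (then restarting the forwarder) is the documented way to reset that identity.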
↧
↧
Why is my JavaScript code not working?
Lifecycle Hygiene
↧
How to concatenate results from same field
Hi,
I want to concatenate the results from the same field into a single string. How can I do that?
e.g.
|inputlookup user.csv| table User
User
------------
User 1
User 2
User 3
Desired output: Users = User 1+User 2+User 3
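One hedged sketch using `stats list` plus `mvjoin`, assuming the lookup and field names from the example above:

```
| inputlookup user.csv
| stats list(User) as Users
| eval Users=mvjoin(Users, "+")
| table Users
```

`list()` preserves order and duplicates; swap in `values()` if you want the list deduplicated and sorted before joining.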
↧
Search for fields that contain exactly 6 digits
I need to search for fields that contain exactly 6 digits.
For example, it should return fields that contain "123456".
I'm currently trying regex_raw="\d{6}" but I think I'm missing something or doing something wrong. Any help would be appreciated!
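One thing to note: `\d{6}` also matches inside longer digit runs (it would match the first six digits of "1234567"), so anchoring the pattern usually helps. A hedged sketch, where `code` is a placeholder for your actual field name:

```
your_base_search
| regex code="^\d{6}$"
```

`| where match(code, "^\d{6}$")` is an equivalent eval-style form.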
↧
Google Apps for Splunk: Gsuite HttpError
We are seeing the following error in Splunk after configuring the Gsuite add-on. Is there some JSON that I need to change in any of the .py files?
"log_level": "ERROR", "modular_input_consumption_time": "Fri, 24 Aug 2018 14:35:43 +0000", "timestamp": "Fri, 24 Aug 2018 14:35:43 +0000", "errors": [{"input_name": "ga://domain.edu_gsuite", "exception_arguments": "error=http_error message=''", "filename": "GoogleAppsForSplunkModularInput.py", "msg": "error=http_error message=''", "exception_type": "unicode", "line": 324}]}
↧
↧
Choropleth map: Is there a setting that converts decimal data bin values into proper integer value ranges?
I am facing an issue where one of my Choropleth map dashboard panels is creating decimal data bin values in sequential color mode. Is there any setting to convert these into proper integer value ranges? Below is the format of the data bin ranges currently being created; my panel output gives two country count values, 2 and 3.
2 - 2.2
2.2 - 2.4
2.4 - 2.6
2.6 - 2.8
2.8 - 3
3 - 3.2
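For reference, a hedged Simple XML sketch of the options that govern this binning. `mapping.choroplethLayer.colorBins` controls only how many bins are drawn (it does not snap boundaries to integers), so with just two distinct counts, categorical mode, which colors discrete values, may fit better than sequential:

```
<option name="mapping.choroplethLayer.colorMode">categorical</option>
<option name="mapping.choroplethLayer.colorBins">2</option>
```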
↧
How to extract multi-valued fields from XML?
I have an XML file with multiple values in a specific tag (below).
![alt text][1]
I need to extract the attributes (NAME and CLASSORIGIN) and the VALUE, ignoring the rows without the VALUE tag.
I loaded the file as XML and was able to convert it to a multi-line result, but now I need to extract the fields. Any ideas?
![alt text][2]
[1]: /storage/temp/255809-xml.jpg
[2]: /storage/temp/255810-capture.jpg
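The actual XML is only visible in the screenshots, so this is a sketch under the assumption that the data contains CIM-style elements like `<PROPERTY NAME="..." CLASSORIGIN="..."><VALUE>...</VALUE></PROPERTY>` with the attributes in that order. A `rex` with `max_match=0` captures all three pieces at once and naturally skips properties that have no VALUE:

```
your_base_search
| rex max_match=0 "<PROPERTY[^>]*\sNAME=\"(?<name>[^\"]*)\"[^>]*\sCLASSORIGIN=\"(?<classorigin>[^\"]*)\"[^>]*>\s*<VALUE>(?<value>[^<]*)</VALUE>"
| eval rows=mvzip(mvzip(name, classorigin, "|"), value, "|")
| mvexpand rows
| eval NAME=mvindex(split(rows, "|"), 0), CLASSORIGIN=mvindex(split(rows, "|"), 1), VALUE=mvindex(split(rows, "|"), 2)
| table NAME, CLASSORIGIN, VALUE
```

The `mvzip`/`mvexpand` step keeps each NAME aligned with its own CLASSORIGIN and VALUE when one event contains several PROPERTY elements.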
↧
Seeing all the forwarded data on the indexer, but the universal forwarder says "configured but inactive"
Hi Splunkers,
I have forwarded data through a universal forwarder to a heavy forwarder and then to the indexer, where I am seeing all the data from the agent server. The problem is that I don't know why the UF still says "configured but inactive".
At the universal forwarder end, I am seeing this in splunkd.log:
08-14-2018 07:03:34.401 -0400 INFO TcpOutputProc - Initializing connection for non-ssl forwarding to 165.113.21.66:9997
08-14-2018 07:03:34.538 -0400 INFO TcpOutputProc - Connected to idx=165.113.21.66:9997, pset=0, reuse=0.
08-14-2018 07:14:15.696 -0400 INFO TcpOutputProc - Initializing connection for non-ssl forwarding to 165.113.21.66:9997
08-14-2018 07:14:15.814 -0400 INFO TcpOutputProc - Connected to idx=165.113.21.66:9997, pset=0, reuse=0.
08-20-2018 06:12:36.906 -0400 INFO TcpOutputProc - Initializing connection for non-ssl forwarding to 165.113.21.66:9997
08-20-2018 06:12:37.038 -0400 INFO TcpOutputProc - Connected to idx=165.113.21.66:9997, pset=0, reuse=0.
and also this (I don't know why):
[root@abc.com bin]# ./splunk list forward-server
Active forwards:
None
Configured but inactive forwards:
165.113.21.66:9997
and at the heavy forwarder end:
[root@def.com bin]# ./splunk display listen
Your session is invalid. Please login.
Splunk username: admin
Password:
Receiving is enabled on port 9997
in splunkd.log at heavy forwarder end :
08-14-2018 07:04:26.163 -0400 INFO TcpInputProc - clustering is enabled but ACK not enabled on forwarder=165.113.20.239
Everything is connected. But still, why am I seeing "Configured but inactive forwards:"? I have also tried telnet from the universal forwarder to the heavy forwarder server:
[root@abc.com bin]# telnet def.com 9997
Trying def.com...
Connected to def.com.
Escape character is '^]'.
Please help. Although I am receiving all my data at the indexer, I still want to know why I am seeing the "configured but inactive" entry on the universal forwarder.
↧
↧
timechart count for last status=up, each month
So, I've simplified my real problem down to this example with as few variables as possible. I wish I could simply alter the manner in which the data comes in, but I cannot, so I need a solution via SPL.
Here it goes:
Almost daily, Splunk indexes a set of data that has two important fields, system_id and system_status. system_id is a unique identifier for each system, and system_status can have the values "up" or "down". This data is indexed all at once, almost daily. An example of the events would look like this:
One day:
08/24/2018T01:00:00 5671 up
08/24/2018T01:00:00 5672 up
08/24/2018T01:00:00 5673 down
08/24/2018T01:00:00 5674 up
08/24/2018T01:00:00 5675 up
08/24/2018T01:00:00 5676 down
08/24/2018T01:00:00 5677 up
The next day:
08/25/2018T01:00:00 5671 up
08/25/2018T01:00:00 5672 up
08/25/2018T01:00:00 5673 up
08/25/2018T01:00:00 5674 up
08/25/2018T01:00:00 5675 up
08/25/2018T01:00:00 5676 down
08/25/2018T01:00:00 5677 up
My goal: a timechart which shows the count of the number of systems "up" for the last data indexed each month. If it helps, each system_id is guaranteed to be in each set of indexed data.
This seems deceptively difficult. Many thanks for any help!
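One hedged sketch: since each batch shares a single timestamp, keep only the events whose timestamp is the latest seen in their month, then count the "up" values. The index name is a placeholder:

```
index=systems
| eval month=strftime(_time, "%Y-%m")
| eventstats max(_time) as last_batch by month
| where _time = last_batch
| stats count(eval(system_status="up")) as systems_up by month
```

`eventstats` attaches the per-month maximum timestamp to every event, so the `where` clause filters down to just the final batch of each month before counting.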
↧
Why is restarting Splunk messing up my dashboard libraries?
I have added libraries to my search app, like jQuery UI and Font Awesome icons, that I use in my dashboards. But for some reason, every time I restart Splunk or the search head, the dashboards say they can't find these libraries. When I check the server, though, the files are still there. Then, if I restart again, the dashboards work.
Does anyone know why this might be happening or what I can do to avoid having to remember to restart it twice every time? Thanks
↧