Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Converting extracted information to 12 hour AM/PM format

Hello, I am extracting information from logs via rex, and some of it is in 24-hour ("military") time format (e.g. 13:15). I also extract times such as 11:15, but I want the output to be consistent in a 12-hour AM/PM format: 1:15 PM instead of 13:15, and 11:15 AM instead of 11:15. I was wondering if it is possible to convert the extracted value so that anything between 13:00 and 23:59 becomes PM. Here is my log: ![alt text][1] Here is my table currently: ![alt text][2] Here is my query so far:

    index=monitoring sourcetype=PEGA:WinEventLog:Application ( SourceName="RoboticLogging" OR SourceName="Application" ) ("Type=" "Information")
    | rex field=_raw "Department=\"(?<Department>.+?)\""
    | where Department = "HRSS_NEO" OR Department = "HRSS Daily NEO Report"
    | rex "Duration:\s*(?<hh>\d+):(?<mm>\d+):(?<ss>\d+\.\d+)"
    | rex "Number of supervisor reminder memos sent:\s*(?<memo>[^,]+)"
    | rex "Number of New Employees in NEO Report with job title Temporary Agy Svc Asst:\s*(?<yes>[^,]+)"
    | rex "Number of New Employees in NEO Report without job title Temporary Agy Svc Asst:\s*(?<no>[^,]+)"
    | rex "Number of supervisors found when searching AD:\s*(?<valid>[^,]+)"
    | rex "UserID=\"UNTOPR\\\(?<UID>.+?)\""
    | rex "Number of supervisors not found when searching AD:(?<invalid>[^,]+)"
    | rex "Email Received\s*Time:(?<received>.{5}?)"
    | rex "Email Process Started At:\s*(?<processed>.{5}?)"
    | eval processed = if(isnull(processed), "-", processed)
    | rex "StartTime:\s*(?<startTime>.{5})"
    | eval startTime = if(isnull(startTime), "-", startTime)
    | eval dur = round(((hh * 3600) + (mm * 60) + ss),0)
    | eval avghndl = round(dur/memo, 0)
    | eval dur = tostring(dur,"duration")
    | eval avghndl = tostring(avghndl,"duration")
    | eval Time = strftime(_time, "%m/%d/%Y at %r")
    | where dur != " "
    | eval valid = if(isnull(valid), "0", valid)
    | eval received = if(isnull(received), "-", received)
    | replace "" with "0"
    | eval strr = host." : ".UID
    | eval strr=upper(strr)
    | eval invalid = if(isnull(invalid), "0", invalid)
    | fields - _time
    | dedup Time
    | table strr, Time, dur, received, startTime, processed, memo, yes, no, valid, invalid, avghndl
    | rename strr as "Workstation : User", dur as "Duration (HR:MIN:SEC)", memo as "Supervisor Reminder Memos Sent", yes as "New Temporary Employees", no as "New Employees (Not Temporary)", valid as "Valid Aliases", invalid as "Invalid Aliases", avghndl as "Average Handle Time per Email", received as "Email Received Time", startTime as "Start Time", processed as "Email Processed Time"
    | sort Time desc

  [1]: /storage/temp/283609-test2.png
  [2]: /storage/temp/283610-table.png
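In SPL the conversion would typically be something like `eval received12 = strftime(strptime(received, "%H:%M"), "%I:%M %p")`; as a sketch of the same logic in Python (sample times are from the question; the leading-zero strip is purely cosmetic):

```python
from datetime import datetime

def to_12_hour(hhmm):
    """Convert a 24-hour 'HH:MM' string to 'H:MM AM/PM'."""
    t = datetime.strptime(hhmm, "%H:%M")
    # %I is zero-padded ("01:15 PM"); strip the pad to get "1:15 PM".
    return t.strftime("%I:%M %p").lstrip("0")

print(to_12_hour("13:15"))  # 1:15 PM
print(to_12_hour("11:15"))  # 11:15 AM
```

Anything from 13:00 to 23:59 comes out as PM automatically, so no explicit range check is needed.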

Can I set an alert in Splunk where the event ID is 4663, with these object specifications?

Object:

    Object Server:       Security
    Object Type:         File
    Object Name:         \Device\HarddiskVolume54\Tax\Confidential
    Handle ID:           0x1110
    Resource Attributes: S:AI

Should we use SAI 2.0.2 or App for Windows Infrastructure, regarding compatibility with the AWS Add-on?

Hi, I need to monitor Windows, Linux, and AWS resources (multiple AWS accounts). SAI 2.0.2 is no longer compatible with the AWS Add-on, as stated on Splunkbase. Should I use:

1. App for Windows Infrastructure + Add-on for Linux and Unix + App for AWS
2. App for Infrastructure + App for AWS
3. Some other combination?

Highlight specific string in JSON formatted events

Hello, I have a dashboard where I am displaying JSON-formatted events (a requirement not to have them in raw format), and I need certain keywords to be highlighted. Since the events are JSON formatted, I cannot simply use Splunk's highlight function; instead I have tried to use JavaScript and CSS. I can get it to work in online editors like jsfiddle.net, where I just copy-paste my .js and .css files together with the HTML file I download from my dashboard, and everything works fine. However, when I upload the .js and .css files to \Splunk\etc\apps\search\appserver\static, the highlighting works e.g. in the title of my panel, but it does not highlight the keywords in the displayed JSON-formatted events, which I also need. See image and code (note: the image doesn't include the real JSON data that I'm going to be using it for later; the keyword 'gustav' is something that should be highlighted in the events but is not). Does anyone have any idea what is causing this or how it could be fixed? ![alt text][1]

  [1]: /storage/temp/282609-capture.png

The code I'm using is the following. The .js file:

    function highlight(elem, keywords, cls = 'highlight') {
        const flags = 'gi';
        // Sort longer matches first to avoid
        // highlighting keywords within keywords.
        keywords.sort((a, b) => b.length - a.length);
        Array.from(elem.childNodes).forEach(child => {
            const keywordRegex = RegExp(keywords.join('|'), flags);
            if (child.nodeType !== 3) { // not a text node
                highlight(child, keywords, cls);
            } else if (keywordRegex.test(child.textContent)) {
                const frag = document.createDocumentFragment();
                let lastIdx = 0;
                child.textContent.replace(keywordRegex, (match, idx) => {
                    const part = document.createTextNode(child.textContent.slice(lastIdx, idx));
                    const highlighted = document.createElement('span');
                    highlighted.textContent = match;
                    highlighted.classList.add(cls);
                    frag.appendChild(part);
                    frag.appendChild(highlighted);
                    lastIdx = idx + match.length;
                });
                const end = document.createTextNode(child.textContent.slice(lastIdx));
                frag.appendChild(end);
                child.parentNode.replaceChild(frag, child);
            }
        });
    }

    var myElement = document.getElementById("events_highlighted");
    // Used document.body instead of the value of myElement
    highlight(document.body, ['is', 'Robotics', 'Top', 'gustav', 'failed', 'success', 'info', 'error', 'event', 'res']);

The .css file:

    .highlight {
        background: lightpink;
    }

captain SHC recommendation

Good afternoon. Is there Splunk documentation stating that in a SHC the servers must be identical at the hardware level? I ask because: what would be the disadvantage if my captain has 40 cores and 60 GB RAM compared to the other search head servers, which have 60 cores and 250 GB RAM? We currently have a server that acts as captain, but it only performs ad-hoc queries and users have almost no access to this machine; it therefore carries no load and is only dedicated to building the bundle when it is captain of the cluster. Any information is welcome. Regards

Need to assign an IP address to a description

I need to create a new field called ip_address_location and, for each IP address, perform an if. Like this:

    if ip = "1.1.1.*" assign "site_abc" in ip_address_location
    if ip = "1.1.2.*" assign "site_efg" in ip_address_location
    etc.

Any suggestions?
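In Splunk this kind of mapping is commonly done with an eval `case()`/`cidrmatch()` chain or a lookup table; below is a sketch of the prefix-matching logic in Python, using the two site names from the question plus a hypothetical "unknown" fallback:

```python
def ip_location(ip):
    # Ordered prefix -> site mapping; first matching prefix wins.
    sites = {
        "1.1.1.": "site_abc",
        "1.1.2.": "site_efg",
        # ...extend with the remaining prefixes...
    }
    for prefix, site in sites.items():
        if ip.startswith(prefix):
            return site
    return "unknown"  # hypothetical default for unmatched IPs

print(ip_location("1.1.1.42"))  # site_abc
print(ip_location("1.1.2.7"))   # site_efg
```

For a long list, a CSV lookup (prefix, location) is usually easier to maintain than a hard-coded case expression.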

Error: Invalid key stanza

Hi Team, I am getting the below error in my local Splunk instance.

**Error details:**

    Invalid key in stanza [tcp01] in C:\Program Files\Splunk\etc\apps\XYZ_test_local\default\indexes.conf, line 5: maxTotalDatasize (value: 1024MB).
    Invalid key in stanza [tcp01] in C:\Program Files\Splunk\etc\apps\XYZ_test_local\default\indexes.conf, line 7: maxWarmDBcount (value: 4).
    Your indexes and inputs configurations are not internally consistent.

indexes.conf:

    [tcp01]
    coldPath = $SPLUNK_DB/tcp01/colddb
    homePath = $SPLUNK_DB/tcp01/db
    thawedPath = $SPLUNK_DB/tcp01/thaweddb
    maxTotalDatasize = 1024MB
    maxHotSpanSecs = 243264
    maxWarmDBcount = 4
    maxHotbuckets = 3
    disabled = false

    [monitor://C:\Program Files\Splunk\XYZ_Alerts\*]
    sourcetype = st01
    index = tcp01
    blacklist = \.(gz|zip)$
    initCrcLength = 750

REST API Modular Input invalid header

I am trying to use the REST API Modular Input app, but I am getting this error:

    ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" Exception performing request: Invalid header name 'X-APIKeys: accessKey'

The header I configured is:

    X-APIKeys: accessKey=blah;secretKey=blah

Any ideas on how to fix this?
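The error says the header *name* contains a colon, which suggests the whole `Name: value` string was entered in the name field; a small Python sketch of the difference (the key/value strings are taken from the error message above):

```python
# The full "Name: value" string as it appears in the error message.
raw = "X-APIKeys: accessKey=blah;secretKey=blah"

# Wrong: the entire string used as the header name -> invalid header.
bad_headers = {raw: ""}

# Likely intent: split once on ": " so the name is 'X-APIKeys' and
# everything after the colon becomes the value.
name, _, value = raw.partition(": ")
good_headers = {name: value}

print(good_headers)  # {'X-APIKeys': 'accessKey=blah;secretKey=blah'}
```

In the modular input's header configuration, that means entering `X-APIKeys` as the name and `accessKey=blah;secretKey=blah` as the value, rather than the combined string.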

Sending uncooked data from indexer level

Hi all, I am sending data from an intermediate forwarder to an indexer, and during indexing I would like to send raw "uncooked" data to a 3rd-party application. Recently I tried the CEF app with index and forward; it works, but the data becomes cooked. Is there any way to handle this from the **indexer level**? Thanks

Schedule a custom search command to run for 100+ different variables

We have several searches that we run and have a manual backend process to load that data to each endpoint (100+ endpoints). I want to be able to schedule this custom search command to run daily and be able to have an editable list of 100+ endpoints to pass in to the search. Is this possible to do within Splunk?

How to make an alert for the status of a dashboard panel?

Hi there! I am trying to make an alert that tells me when a particular dashboard panel returns >0. Does anybody know how to reference a particular dashboard panel in the alert? Furthermore, then how to reference the return number of that dashboard panel?

Unable to create inputs for TA-Tenable add on

Hi, I am trying to set up inputs on the TA-Tenable add-on and it fails with the error "Argument validation for scheme=tenable_securitycenter: script running failed (killed by signal 9: Killed)." I installed "Tenable Add-on for Splunk" version 3.1.0 on one of our heavy forwarders. Does anyone have suggestions on what could be wrong here?

How do I count certain field values by row and convert the total found into two other tables to be used in time charts? =0(

![alt text][1] I've been plugging away at this for a few days and I'm stuck =0( Above is a lookup CSV (with dummy data inserted) I have from Nessus. I am trying to use Splunk to create totals of vulnerability severity levels in two separate tables, one by organization and another by system. Scans are run every day, so inevitably the totals will change over time, which is what I'm trying to capture with the timecharts. Below is what I want to do; any ideas how to do this? ![alt text][2] Lastly, I'm trying to use the newly created tables to make two time graphs of vulnerability severity level totals, one by organization/date and another by system/date. Any ideas? Thanks!

  [1]: /storage/temp/282614-1st-table.png
  [2]: /storage/temp/282615-2nd-and-3rd-table.png
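The two summary tables amount to a count of severities grouped by organization and by system (in SPL, roughly `stats count by organization, severity`); here is a sketch of that aggregation in Python with made-up rows, since the real data is only in the screenshots:

```python
from collections import Counter

# Hypothetical rows mimicking the lookup CSV: (organization, system, severity).
rows = [
    ("org_a", "sys_1", "Critical"),
    ("org_a", "sys_1", "High"),
    ("org_a", "sys_2", "High"),
    ("org_b", "sys_3", "Medium"),
    ("org_b", "sys_3", "High"),
]

# One counter keyed by (organization, severity), one by (system, severity).
by_org = Counter((org, sev) for org, _, sev in rows)
by_system = Counter((sys, sev) for _, sys, sev in rows)

print(by_org[("org_a", "High")])     # 2
print(by_system[("sys_3", "High")])  # 1
```

Running the same aggregation per scan date (adding the date to the group-by key) gives exactly the per-day totals a timechart needs.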

How to find the count of occurrences of each IP for the first 15 mins starting from the first occurrence of each IP?

Say I have an index A which has all the IPs logged during the day, so every event has an IP and the timestamp it was seen. What I need to find is the count of occurrences of each IP for the first 15 minutes starting from the timestamp of the first occurrence of that IP.

Example: say I find IP 1.2.3.4 at 10:00, 10:05, 10:12, 10:16, 10:20 and IP 9.8.7.6 at 11:00, 11:05, 11:10, 11:20. For IP 1.2.3.4 the first occurrence was at 10:00, so in the first 15 minutes (from 10:00 till 10:15) I get an occurrence count of 3; the occurrences at 10:16 and 10:20 are ignored. Similarly for IP 9.8.7.6 the first occurrence was at 11:00, so for the first 15 minutes (11:00 to 11:15) the occurrence count is 3; the 11:20 occurrence is ignored.

So basically I want a search query which will give me the count of occurrences of each IP for the first 15 minutes starting from the first occurrence of each IP. The search result here would be:

    1.2.3.4  3
    9.8.7.6  3
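In SPL this is usually approached with `streamstats`/`eventstats` to attach each IP's first timestamp to its events; as a sketch of the underlying logic in Python, using the sample times from the question:

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Sample events (ip, time) taken from the example in the question.
events = [
    ("1.2.3.4", "10:00"), ("1.2.3.4", "10:05"), ("1.2.3.4", "10:12"),
    ("1.2.3.4", "10:16"), ("1.2.3.4", "10:20"),
    ("9.8.7.6", "11:00"), ("9.8.7.6", "11:05"), ("9.8.7.6", "11:10"),
    ("9.8.7.6", "11:20"),
]

def counts_in_first_window(events, window=timedelta(minutes=15)):
    first_seen = {}
    counts = defaultdict(int)
    # Process in time order so the first event per IP is seen first.
    for ip, ts in sorted(events, key=lambda e: e[1]):
        t = datetime.strptime(ts, "%H:%M")
        first_seen.setdefault(ip, t)
        # Count only events within 15 minutes of the IP's first occurrence.
        if t - first_seen[ip] <= window:
            counts[ip] += 1
    return dict(counts)

print(counts_in_first_window(events))  # {'1.2.3.4': 3, '9.8.7.6': 3}
```

The SPL equivalent records `min(_time)` per IP and then counts only events where `_time` is within 900 seconds of that minimum.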

Need to get JOB.ID per instance per dashboard

Hi, I have a list of all the job IDs RUNNING per dashboard (but if someone else is running the same dashboard, I get those IDs as well; how can I narrow it down?). I run this SPL from the dashboard I want to narrow down to. In this case the dashboard is kpi_monitoring_robbie:

    | rest /services/search/jobs
    | search dispatchState="RUNNING" AND provenance="*kpi_monitoring_robbie*"
    | fields id provenance dispatchState

Output (id, provenance, dispatchState):

    https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search60_1581445194.220651  UI:Dashboard:kpi_monitoring_robbie  RUNNING
    https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search61_1581445194.220652  UI:Dashboard:kpi_monitoring_robbie  RUNNING
    https://127.0.0.1:8089/services/search/jobs/admin__admin__Murex__search63_1581445194.220654  UI:Dashboard:kpi_monitoring_robbie  RUNNING

But one of the above was from the second dashboard. I can't do it per user, as a lot of users have the same user name; if I were using LDAP I could. Thanks in advance, Robert

scheduled reports with zero values

We have 3 search heads in a cluster. We are observing scheduled reports with zero values for a few reports; the zero-value reports are generated from search head 3, and the issue is not consistent. We have one main search that runs every 15 minutes and 20 sub-reports that use the main search via loadjob. These reports also run every 15 minutes, i.e. 3 minutes after the main search. Now a few reports are delivered to recipients with zero values from search head 3. Looking at the scheduler log, the runtime is usually 0.3 sec for a successful report, but for failed reports the runtime shows 300 sec. Can anyone please help me understand how to troubleshoot this?

Microsoft Azure Sentinel integration with Splunk?

Does anyone know if there is a way to integrate Microsoft Azure Sentinel with Splunk? I'm specifically looking for events of interest/alerts/indicators from Sentinel into Splunk. It appears that the Microsoft Azure Add-on for Splunk provides access to many aspects of Azure including Security Center but I don't see anything specifically for Sentinel. Presumably Sentinel would take these various feeds and apply the Microsoft secret sauce to them to provide insight. Rather than having to reverse-engineer or build new in Splunk it would be good if there was a way to integrate the curated information from Sentinel into Splunk. I can't seem to find any information on a Sentinel API. There are data connectors to get data into Sentinel but I can't seem to find anything on getting data out. Thanks.

Crashplan Service Log Date timestamp incorrect

I've looked through a lot of the posts about date timestamp extraction and I think I'm decent enough at it, but for the life of me I can't figure out what is going on with my Crashplan logs. I found a post with a working example of [crashplan service][1] log props and mine matches almost exactly, but still no go.

props.conf:

    [crashplan_service]
    TIME_PREFIX = ^\[
    MAX_TIMESTAMP_LOOKAHEAD = 21
    TIME_FORMAT = %m.%d.%y %H:%M:%S.%3N
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true

inputs.conf:

    [monitor:///opt/crashplan/log/service.log.0]
    source = crashplan
    sourcetype = crashplan_service
    index = crashplan
    disabled = false

Here is one event where the day/month look to be swapped: ![1st screenshot of event and incorrect timestamp][2] With this event I have no idea how it's getting the day/month: ![2nd screenshot, wrong date but correct timestamp][3]

  [1]: https://answers.splunk.com/answers/481617/how-to-troubleshoot-why-time-format-is-not-being-a.html#answer-674391
  [2]: /storage/temp/283615-screen-shot-2020-02-11-at-124558-pm.png
  [3]: /storage/temp/283616-screen-shot-2020-02-11-at-124614-pm.png
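One quick sanity check is to parse a single timestamp with the month-first and day-first formats side by side and see which produces the date shown in the event. A Python sketch (Splunk's `%3N` millisecond directive is approximated with Python's `%f`, and the timestamp itself is made up):

```python
from datetime import datetime

# A hypothetical Crashplan-style timestamp with an ambiguous day/month pair.
stamp = "02.11.20 12:45:58.123000"

as_mdy = datetime.strptime(stamp, "%m.%d.%y %H:%M:%S.%f")  # month first
as_dmy = datetime.strptime(stamp, "%d.%m.%y %H:%M:%S.%f")  # day first

print(as_mdy.date())  # 2020-02-11
print(as_dmy.date())  # 2020-11-02
```

If the indexed date matches the day-first parse, the swap is coming from timestamp recognition rather than the log itself, which points back at TIME_FORMAT/TIME_PREFIX taking effect (or not taking effect, e.g. props applied on the wrong instance).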

How to keep the given log as a single event using props.conf?

I am trying to get the file below into a single event, but it gets broken into two or more events in Splunk.

Sample file:

    PING 20.152.32.XXX (20.152.32.XXX) 56(84) bytes of data.
    64 bytes from 20.152.32.XXX: icmp_seq=1 ttl=248 time=67.9 ms
    64 bytes from 20.152.32.XXX: icmp_seq=2 ttl=248 time=68.2 ms
    64 bytes from 20.152.32.XXX: icmp_seq=3 ttl=248 time=68.1 ms
    64 bytes from 20.152.32.XXX: icmp_seq=4 ttl=248 time=68.2 ms

    --- 20.152.32.XXX ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3003ms
    rtt min/avg/max/mdev = 67.926/68.153/68.276/0.134 ms

What needs to be changed in props.conf?

    [lala_pop]
    BREAK_ONLY_BEFORE = PING\s+\d+\.\d+\.\d+\.\d+
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = true

Appreciate your help. Thanks
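As a quick check that the `BREAK_ONLY_BEFORE` regex only matches the header line of each ping run (using a hypothetical unmasked IP, since the sample masks the last octet as XXX):

```python
import re

# The BREAK_ONLY_BEFORE pattern from the question's props.conf.
pattern = re.compile(r"PING\s+\d+\.\d+\.\d+\.\d+")

lines = [
    "PING 20.152.32.100 (20.152.32.100) 56(84) bytes of data.",
    "64 bytes from 20.152.32.100: icmp_seq=1 ttl=248 time=67.9 ms",
    "--- 20.152.32.100 ping statistics ---",
]

# Only the header line should trigger an event break.
print([bool(pattern.match(line)) for line in lines])  # [True, False, False]
```

Note that `\d+` will not match a literally masked octet like `XXX`, so the check has to be run against real log lines, not the redacted sample.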

capability admin all objects

Good afternoon. I have the following question: there are currently roles in our cluster with the restriction srchMaxTime = 3600, but we have verified that certain users are running searches for more than 1 hour, and I wonder whether this is due to the capability "admin_all_objects". Any help is appreciated. Cheers