Why is the Splunkd service timing out when trying to use an MSA?
Splunk Version: Splunk Enterprise 7.0.3
Local Host OS: Windows 7
I have been unable to start the Splunkd service successfully using an MSA (managed service account). The following is a summary of the steps taken:
- Installed Splunk via CLI to be run as Local System user
- Started Splunk successfully
- Switched the Log On option of the Splunkd service to my personal domain user account
- Started Splunk successfully
- Switched the Log On option of the Splunkd service to the MSA
- Splunk fails to start.
- CLI reports “Timed out waiting for splunkd to start.”
- Windows’ Services GUI reports "Error 1067: The process terminated unexpectedly."
- Investigation of $SPLUNK_HOME\var\log\splunk\splunkd-utility.log revealed MSA permission issues for accessing the following:
- File - splunk.secret
- Dir - $SPLUNK_HOME\etc\license
- I manually granted the MSA the permissions necessary to access these locations (see the icacls sketch at the end of this post). No more errors show up in splunkd-utility.log
- Splunk fails to start.
- No splunkd.log is created when the Service does not successfully start
- The only semi-suspicious entry found is in splunkd-utility.log, which states:
- ServerConfig – Found no hostname options in server.conf. Will attempt to use default for now.
- ServerConfig – Host name option is “”.
Could this be an issue? Is there some other location (.log) I should inspect to determine where my current issue resides?
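For reference, the manual permission grants were along these lines (a sketch only; the install path and the account name DOMAIN\splunk-msa$ are assumptions, not values from this environment):
icacls "C:\Program Files\Splunk\etc\auth\splunk.secret" /grant "DOMAIN\splunk-msa$":RX
icacls "C:\Program Files\Splunk\etc\license" /grant "DOMAIN\splunk-msa$":(OI)(CI)RX
The second command uses the (OI)(CI) inheritance flags so files created under the directory pick up the grant as well.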
↧
How do you calculate concurrency by second from start time and duration?
Newbie here... I have an index of data that represents calls. Each event has a start_time and a duration. I've been asked to take all of these events and calculate how many concurrent calls there are per second. It was suggested that I use Python and split the calls into different rows of a DB, but that sounds tedious.
Is there a way to take each event's start time and duration and chunk it up into seconds like this...?
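A minimal sketch using Splunk's built-in concurrency command (the index name is an assumption, and start_time is assumed to hold an epoch value):
index=calls
| eval _time=start_time
| concurrency duration=duration
| timechart span=1s max(concurrency) as concurrent_calls
One caveat: concurrency reports the overlap count at each event's start, so seconds in which no call starts will show no value; a strict per-second series would need something like makecontinuous or expanding each call into its component seconds.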
↧
How do I alter props.conf via Python SDK?
I can alter props.conf via the REST API using the following request:
curl -k -u admin:password https://localhost:8089/servicesNS/nobody/search/configs/conf-props -d name=source::/logs/mylog.log -d TRANSFORMS-null=setnull
This will add the following stanza to props.conf:
[source::/logs/mylog.log]
TRANSFORMS-null = setnull
However, is there a way I can get the same results using the Python SDK?
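A minimal sketch with the splunklib client might look like this (untested; the connection parameters simply mirror the curl example above):
import splunklib.client as client

# connect in the same namespace the curl example targets (nobody/search)
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="password",
    owner="nobody", app="search")

# create the stanza in props.conf and set its key
stanza = service.confs["props"].create("source::/logs/mylog.log")
stanza.submit({"TRANSFORMS-null": "setnull"})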
↧
How does the search head know which site's primary copies to search when search affinity is disabled in multisite clustering?
When search affinity is disabled, the search head can search across all indexers on all sites.
But the information about which bucket copies are primary on each indexer is provided to the search head by the indexers themselves. If that is the case, how does the search head search each primary copy only once, even though a primary for the same bucket is present on two sites? How does the search head manage to search a specific primary copy only one time when it is present on the other site as well?
Is there any mechanism by which the search head first gets the primary-copy details from the indexers on both sites, then decides which primary to search and accordingly initiates the search on those indexers?
↧
Using Streamstats to group the results happening within 10 seconds
Hi Experts,
I have a query which finds the total number of non-200 responses and the total responses, based on the web access logs, by api and application location.
Now I need to group/club/count all the non-200 requests that happen within 10 seconds (for a given location, api, and application) as one, since they are the results of the same problem.
index=web api=www.something.com | stats count as TOTAL count(eval(sc_status!=200)) as TOTAL_NON_200 count(eval(sc_status=404)) as TOTAL_404 by api location application
I saw in other answers that I can group using streamstats, but I'm not sure how that would fit my case. Can I get some help?
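A sketch of one way streamstats could do this (untested; it assumes events are sorted in time order and that a gap of more than 10 seconds starts a new group):
index=web api=www.something.com sc_status!=200
| sort 0 api location application _time
| streamstats current=f last(_time) as prev_time by api location application
| eval new_group=if(isnull(prev_time) OR _time - prev_time > 10, 1, 0)
| streamstats sum(new_group) as group_id by api location application
| stats count as events_in_group min(_time) as group_start by api location application group_id
Each burst of non-200s separated by more than 10 seconds gets its own group_id, so the final stats produces one row per burst per api/location/application.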
↧
Accessing dispatch via the API
Is there a way to access the dispatch directories, for gathering debug artifacts etc., via the REST API?
I'm working in a shared environment and have limited access to the CLI.
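The search job endpoints expose some dispatch artifacts; a sketch (credentials and host are placeholders, and <sid> must be replaced with the job's search ID):
curl -k -u admin:password https://localhost:8089/services/search/jobs
curl -k -u admin:password https://localhost:8089/services/search/jobs/<sid>/search.log
The first call lists the jobs and their sids; the second pulls a job's search.log out of its dispatch directory.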
↧
How to get the connection ID ("conn") for errors with reason "I/O Error" and display the full connection details
I want to get the connection ID ("conn") for the errors with the reason "I/O Error" from the log file. Then, using that connection ID, I need to display the full connection details from the client IP, which span multiple lines.
My log messages look like the one below:
[2018-09-20T08:31:44.642+00:00] [TRACE] [OID-24641547] [PROTOCOL] [host: hostname] [tid: 123] [userId: oracle] [conn: 16914] [reason: I/O Error] [msg: Client requested protocol SSLv3 not enabled or not supported] DISCONNECT
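A sketch of the two-step search as a subsearch (untested; the index and sourcetype names are assumptions):
index=oracle sourcetype=oracle:listener
    [ search index=oracle sourcetype=oracle:listener "reason: I/O Error"
    | rex field=_raw "\[conn: (?<conn>\d+)\]"
    | dedup conn
    | return 100 $conn ]
return 100 $conn hands back up to 100 bare conn values as keywords, so the outer search retrieves every line mentioning those connection IDs, including the multi-line connection details.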
↧
How to rename dbxquery fields
I am converting many dashboards from dbquery to dbxquery. I have a few hundred of these queries to convert, with thousands of fields involved.
One of the problems is that dbxquery returns fields in alphabetical order (unlike dbquery, which returns them in schema order). If you specify shortnames=false you can get them in schema order, but then the field names include the order number and the data type!
I am thinking I have to use shortnames=false to get the schema order, and then try to rename the fields back to their original names. The fields come out like "(001).FIELDA.VARCHAR2", "(002).FIELDB.NUMBER", etc.
I was hoping to make a macro that would rename these fields back to FIELDA, FIELDB etc. If I include all the different datatypes, I can do:
| rename *.VARCHAR2 as *, *.NUMBER as *, etc.
I can also have a long list of:
| rename "(001).*" as *, "(002).*" as *, etc. However, these matches don't work. If I remove the "." then they do work, but I wind up with fields named .FIELDA, .FIELDB, etc. (all starting with a dot). I also wonder whether these field names have a newline character in them -- they always appear on different lines, and maybe that's why the number and the dot together won't match.
My questions are: 1) Can I do this with renames at all? 2) Can I do this with more of a regex approach? 3) Is there some other way to get dbxquery to return fields in schema order while using shortnames=true?
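On question 2, one regex-style approach worth sketching is foreach with multiple wildcards, which strips both the numeric prefix and the type suffix in one pass (untested against real dbxquery output; it assumes every field follows the three-part (NNN).NAME.TYPE pattern):
| foreach "(*).*.*" [ rename "<<FIELD>>" as "<<MATCHSEG2>>" ]
Here <<FIELD>> expands to the full original field name and <<MATCHSEG2>> to whatever the second wildcard matched, i.e. the bare field name.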
I have seen other posts complaining about this behavior. It's also a problem if your SQL query lists fields in a particular order -- again, they get switched to alphabetical when made into Splunk fields.
Thanks, Rick
↧
AWS Config snapshots missing
Currently my Splunk index has only aws:config:rule and aws:config:notification events. There are no aws:config snapshot events, so the topology feature doesn't work. I have set up the old Config input, which takes an SQS queue per region. The Config service in every other account has its delivery channel send to a central SNS topic in the same region, which then sends to the SQS queue that Splunk polls.
The dev manager of the AWS app said:
> The initial inventory get populated by triggering a AWS Config Snapshot. When you add a Config input, the snapshot will be triggered automatically, unless your IAM user don't have such permission.
(See the answer at https://answers.splunk.com/answers/337327/splunk-app-for-aws-will-my-current-configuration-f.html.)
My IAM user has the proper permission (config:DeliverConfigSnapshot), but no snapshot was triggered or imported. I even manually triggered a Config snapshot via the CLI, as recommended in https://answers.splunk.com/answers/378001/aws-app-description-vs-config.html, but that did not do anything.
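For reference, the manual trigger was along these lines (the delivery channel name default is an assumption):
aws configservice deliver-config-snapshot --delivery-channel-name default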
For context, I also have some Config Rule inputs set up beforehand that I did not touch during this whole process.
Any thoughts on how I can get my Splunk app to populate with aws:config events?
↧
On the capture of structured data
By default, Splunk assumes two files are the same file when their first 256 bytes are the same (the initial CRC check).
How is this judged for structured data?
For example, a csv file with the following settings:
props.conf:
[testcsv]
INDEXED_EXTRACTIONS = csv
If the file names are the same, the second file seems to be regarded as the same file.
Does anyone know anything about this?
Is there a description in the manual?
For files with the same name, the first line of the second file is not recognized as a header line.
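If the goal is to force Splunk to treat same-named (or identically-prefixed) files as distinct, the usual knobs are in inputs.conf; a sketch, with the monitor path as an assumption:
[monitor:///data/csv]
# mix the source path into the CRC so files with identical leading bytes no longer collide
crcSalt = <SOURCE>
# or widen how many leading bytes the CRC covers
initCrcLength = 1024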
Please give me some help.
Thanks.
↧
Only splunkd.log for index=_internal is visible for my AWS UF servers
Hi,
I have around 1k hosts set up in AWS containers, and I don't have access to any of the forwarders.
All my forwarders point to my Splunk on-premises environment. When I try to look for the metrics log from my forwarders, I can't find any of it; I'm able to see only splunkd.log in my Splunk.
I'm pretty sure only splunkd.log is being ingested into Splunk.
Is this possible -- can Splunk internal logs like metrics.log be restricted from forwarding to the indexer?
If yes, where does that configuration need to be set? Kindly help.
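One configuration that would produce exactly this symptom is a blacklist on the forwarders' own log monitor in inputs.conf; a sketch of what to look for (illustrative only, not known to be your actual config):
[monitor://$SPLUNK_HOME/var/log/splunk]
# a blacklist like this would suppress metrics.log while still forwarding splunkd.log
blacklist = metrics\.log$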
Thanks,
Ramu Chittiprolu
↧
Token issue between a source dashboard and a target dashboard
Hi
I use the code below in a source dashboard:
host=$tok_filterhost$
index="windows" sourcetype="wineventlog:Application" "SourceName=Application Error" Type="Critique" OR Type="Avertissement" OR Type="Erreur" "Chemin d’accès de l’application défaillante"
| dedup _time SourceName
| table _time SourceName "Chemin d’accès de l’application défaillante"
| stats count by "Chemin d’accès de l’application défaillante"
| rename "Chemin d’accès de l’application défaillante" as Application, count as Erreurs
| rex field="Application" "(?[^\\\\]+)$"
| sort -Erreurs limit=10
When I click on a result line, I want to get the details of the event in a target dashboard.
I have described here exactly what I have done.
Could you tell me what is wrong, please, because it doesn't work. [link text][1]
[1]: /storage/temp/255002-test.pdf
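For comparison, a minimal drilldown that hands a clicked value to a target dashboard usually looks something like this in Simple XML (the dashboard name target_dashboard and the token form.tok_app are assumptions):
<drilldown>
  <link target="_blank">/app/search/target_dashboard?form.tok_app=$row.Application$</link>
</drilldown>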
↧
no events in gui
I'm facing a very strange issue and have wasted a lot of time trying to resolve it.
The search window doesn't show any events, and after I modify the search and run it again, the search bar turns gray and freezes.
1. I tested Chrome and Mozilla (Firefox)
2. On another PC it works fine (with the same browser versions)
3. On another Splunk instance (same version, 7.1.3) everything works fine on my PC too
1st search
![alt text][1]
2nd search
![alt text][2]
[1]: /storage/temp/255004-1.png
[2]: /storage/temp/255005-2.png
↧
no data shown in search GUI
The search window doesn't show any events, and after I modify the search and run it again, the search bar turns gray and freezes.
1. On another Splunk instance (same version, 7.1.3) everything works fine on my PC
2. On another PC all instances work fine (same browser versions)
![alt text][1]
![alt text][2]
[1]: /storage/temp/255006-1.png
[2]: /storage/temp/255007-2.png
↧
PagerDuty integration with Splunk not working
Dear All,
We have Prod and DR environments, and the Splunk search head of the Prod setup works well with PagerDuty, but the DR setup does not. I even tried replacing the PagerDuty integration URL of the DR setup with the Prod one, but I still got the same errors. Please advise.
**Pagerduty_incidents version 1.1, Splunk Enterprise v6.5.1**
Below are the errors:
Time Event
9/19/18
8:05:04.415 AM
09-19-2018 08:05:04.415 -0400 ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 1., search='sendalert pagerduty results_file="$SPLUNK_HOME/var/run/splunk/dispatch/scheduler__bishtk_d2Vic2l0ZV9tb25pdG9yaW5n__RMD5d360115fdafa9e4a_at_1537358700_29811/results.csv.gz" results_link="http://:8000/app/website_monitoring/@go?sid=scheduler__bishtk_d2Vic2l0ZV9tb25pdG9yaW5n__RMD5d360115fdafa9e4a_at_1537358700_29811"'
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.415 AM
09-19-2018 08:05:04.415 -0400 WARN sendmodalert - action=pagerduty - Alert action script returned error code=1
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.415 AM
09-19-2018 08:05:04.415 -0400 INFO sendmodalert - action=pagerduty - Alert action script completed in duration=72 ms with exit code=1
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - TypeError: object of type 'NoneType' has no len()
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - if len(url) == 32:
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - File "$SPLUNK_HOME/etc/apps/pagerduty_incidents/bin/pagerduty.py", line 18, in send_notification
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - success = send_notification(payload)
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - File "$SPLUNK_HOME/etc/apps/pagerduty_incidents/bin/pagerduty.py", line 43, in
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.409 AM
09-19-2018 08:05:04.409 -0400 ERROR sendmodalert - action=pagerduty STDERR - Traceback (most recent call last):
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
8:05:04.340 AM
09-19-2018 08:05:04.340 -0400 INFO sendmodalert - Invoking modular alert action=pagerduty for search="SEV3::PROD Website Monitoring Alert for DNS" sid="scheduler__bishtk_d2Vic2l0ZV9tb25pdG9yaW5n__RMD5d360115fdafa9e4a_at_1537358700_29811" in app="website_monitoring" owner="bishtk" type="saved"
host = source = $SPLUNK_HOME/var/log/splunk/splunkd.log sourcetype = splunkd
9/19/18
7:00:08.168 AM
09-19-2018 07:00:08.168 -0400 ERROR SearchScheduler - Error in 'sendalert' command: Al
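Reading the STDERR lines bottom-up: pagerduty.py calls len(url) at line 18 and url is None, which typically means the integration URL/key never resolved on the DR search head. A minimal defensive sketch of that spot (only the len(url) check is taken from the traceback; the helper and everything else is an assumed reconstruction, not the app's actual code):
import sys

def get_integration_url():
    # Hypothetical stand-in for the app's config lookup; returning None is
    # the case the DR search head appears to be hitting.
    return None

def send_notification(payload):
    url = get_integration_url()
    if not url:
        # Surface a clear message instead of the TypeError seen above.
        sys.stderr.write("PagerDuty integration URL/key is not configured\n")
        return False
    if len(url) == 32:  # the check quoted in the traceback (pagerduty.py line 18)
        pass  # the app's own handling would continue here
    return True
In practice the fix is usually re-entering the integration key in the alert action's settings on the DR search head rather than patching the script.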
Thanks in advance,
↧
How can I extract all fields from a log file, including a JSON definition?
I have a log file which entries/lines look like this:
12:17:35.4641 Info {"message":"TestKevin execution ended","level":"Information","logType":"Default","timeStamp":"2018-09-19T12:17:35.4641435+00:00","fingerprint":"2ee56795-e30b-4c98-b6cc-166249d18375","windowsIdentity":"RPA1\\Robotics","machineName":"RPA1","processName":"TestKevin","processVersion":"1.0.6828.23354","fileName":"Main_Alf_Production","jobId":"d415160d-8bfd-4374-b0c4-b03a35316b79","robotName":"ROBOTICS","totalExecutionTimeInSeconds":34,"totalExecutionTime":"00:00:34"}
First of all, I want to extract the time (12:17:35.4641), the status (Info), and all of the key-value pairs included in the JSON definition. It would be great to do this at indexing time, so I don't have to include this logic in my search. Any ideas?
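For the search-time half, a sketch like this should work (the sourcetype is an assumption):
sourcetype=my:robot:log
| rex "^(?<log_time>\d{2}:\d{2}:\d{2}\.\d+)\s+(?<status>\w+)\s+(?<json>\{.*\})$"
| spath input=json
spath then expands every key-value pair in the JSON part. An index-time variant would move the timestamp handling into props.conf, but note that INDEXED_EXTRACTIONS=json expects the whole event to be JSON, so the leading time/status prefix makes that route harder.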
↧
Extract number from a string
Hi,
I have a field which produces a value like this example: DB=HR_10_7_3043_TGTHRLIVE
I am trying to extract the number and write it in the following way: DB_Version=10.7.3043
How do I get Splunk to cut off everything before and after the number and then replace each _ with a . ?
Note: The strings before and after the numbers can vary in length, and the number can vary too.
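A sketch with rex plus replace (it assumes the version is the only run of digit groups joined by underscores in the value):
| rex field=DB "(?<DB_Version>\d+(?:_\d+)+)"
| eval DB_Version=replace(DB_Version, "_", ".")
On DB=HR_10_7_3043_TGTHRLIVE this yields DB_Version=10.7.3043; if the surrounding text can itself contain digit_digit runs, the regex would need tighter anchoring.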
Many thanks,
Sam
↧
Finding matching fields from 2 out of 3 sources
Hello, I hope someone can help.
I am attempting a subsearch that I am having difficulty with, and I hope someone here can assist.
I would like events from SourceA whose field1 also appears in SourceB or SourceC to be returned.
I'd previously had the following syntax:
**SourceA | table field1 | search [ | search SourceB table field1 ] | search [ |search SourceC field1 | table src]**
but now I need it to be interpreted more like this:
**SourceA field1 (SourceB field1 or SourceC field1)**
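A sketch of that OR-style match as a single subsearch (the source names and field1 come from the question; everything else is an assumption):
source=SourceA
    [ search source=SourceB OR source=SourceC
    | dedup field1
    | fields field1 ]
| table field1
The subsearch returns its field1 values as field1=value conditions OR'd together, so the outer search keeps only SourceA events whose field1 appears in SourceB or SourceC. The usual subsearch limits on result count and runtime apply if SourceB/SourceC are large.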
↧
How to send Splunk alert name in SNMP trap with snmp-ma app
We are trying to generate an SNMP trap from a Splunk alert to our HP BSM monitoring solution. We want to send the Splunk alert name in the SNMP trap. We are using Splunk version 7.1.2. However, we are having problems configuring the snmp-ma app to do this.
Can someone tell us what to put in the MIB Name and MIB Object fields to achieve this? Do we need to import Splunk alert MIBs somewhere?
↧
Getting Invalid key in stanza errors when running ./splunk btool check --debug
Checking: /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 3: port (value: 8088)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 4: enableSSL (value: 1)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 6: dedicatedIoThreads (value: 2)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 7: maxThreads (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 8: maxSockets (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 9: useDeploymentServer (value: 0)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 11: sslVersions (value: *,-ssl2)
Did you mean 'source'?
Did you mean 'sourcetype'?
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 12: allowSslCompression (value: true)
Invalid key in stanza [http] in /opt/splunk/etc/apps/splunk_httpinput/default/inputs.conf, line 13: allowSslRenegotiation (value: true)
Checking: /fs/untd-1/splunk/etc/apps/splunk_instrumentation/default/app.conf
Invalid key in stanza [ui] in /opt/splunk/etc/apps/splunk_instrumentation/default/app.conf, line 12: show_in_nav (value: 0)
Checking: /fs/untd-1/splunk/etc/apps/splunk_instrumentation/default/collections.conf
Invalid key in stanza [instrumentation] in /opt/splunk/etc/apps/splunk_instrumentation/default/collections.conf, line 10: type (value: internal_cache)
What I have identified is that after the Splunk server moved from CentOS 5 to CentOS 6, the new folders below were created:
drwxr-xr-x 3 31855 31855 4096 Feb 28 2018 splunk_httpinput
drwxr-xr-x 5 31855 31855 4096 Feb 28 2018 splunk_archiver
drwxr-xr-x 4 31855 31855 4096 Feb 28 2018 appsbrowser
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 alert_webhook
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 alert_logevent
drwxr-xr-x 7 31855 31855 4096 Feb 28 2018 splunk_instrumentation
drwxr-xr-x 11 31855 31855 4096 Feb 28 2018 splunk_monitoring_console
I'm getting alerts for all the files in the above directories. How can I fix them? I'm using Splunk version 6.2.2.
Thanks
Rajesh
↧