Splunk Add-on for Tenable: issues with inputs (error message in Splunk)
Hi guys,
I have a question: I'm seeing errors like this for my inputs:
msg="A script exited abnormally"
input=/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus.py
stanza=nessus://nessus_plugin status="exited with code 1"
and similar errors for other inputs.
I use Nessus version 6.11.1 and the Splunk Add-On for Tenable version 5.1.1.
Can someone explain why I'm getting these errors and how I can fix them?
P.S. I've tried reloading my inputs with new API keys, but it hasn't helped.
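For triage, the script's stderr is normally captured in the internal logs, so a search along these lines (a sketch, assuming default internal logging) may surface the underlying Python error:
index=_internal sourcetype=splunkd component=ExecProcessor nessus.py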
↧
Eval and multiple logic operators
Hi,
Can anyone explain why the following doesn't work?
....
| eval suppress=if((hour >=10 AND hour <=12, "yes","no") AND (dest="x.x.x.x"))
| where suppress="no"
...
The idea is not to produce results if the hour is between 10 and 12 AND the server equals x.x.x.x.
I still want to see results between 10 and 12 for devices other than that server.
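For reference, a corrected form that matches this intent might look like the following (a sketch; it assumes hour and dest are already extracted as fields):
| eval suppress=if(hour>=10 AND hour<=12 AND dest=="x.x.x.x", "yes", "no")
| where suppress="no"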
Thanks in advance.
↧
↧
update lookup table column
I have a lookup table that has several columns as follows, with no data in the "Manager" column:
![alt text][1]
I have an index that has two fields of interest: IP, Manager. The field IP in the index will be the same as that in the lookup table. What I need to accomplish is:
1. Query the index for all instances where the IP in the lookup table is found also in the index
2. Populate the lookup table column "Manager" with the field data found from the query above, in the appropriate row based on IP relationship
Hope somebody can help
[1]: /storage/temp/214593-capture.jpg
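For reference, one common write-back pattern looks like this (a sketch; my_lookup.csv and my_index are placeholders for your actual names):
| inputlookup my_lookup.csv
| fields - Manager
| join type=left IP
    [ search index=my_index | stats latest(Manager) as Manager by IP ]
| outputlookup my_lookup.csv
The left join keeps every lookup row and fills Manager only where the index has a matching IP.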
↧
One master/search head and one indexer
Hello
So I'm trying to build the following topology, where I have one master/search head and one separate indexer.
I've set up the master and the indexer. Data is being forwarded to the indexer, and the indexer has its indexes configured. The master is also configured to send all its data to the indexer.
The problem now is that I can't search the indexes on the indexer, except for the internal ones.
Is it possible to search the other indexes without defining them on the master, or do I also need to recreate them there?
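For what it's worth, searching a remote indexer generally requires adding it as a search peer on the search head; a sketch of the CLI (hostname and credentials are placeholders):
splunk add search-server https://indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme
As far as I know, the search head does not need the indexes recreated locally just to search them on the peer.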
↧
Palo Alto App - Traffic Dashboard - Real-time problem
Just installed Splunk 6.6.3 and the Palo Alto App 5.4.2 on Windows Server 2016. I'm facing an issue with real-time searches in the Traffic dashboard of the Palo Alto App. All relative time ranges like "last 15 min", "last 4 hours", etc. work fine. As soon as I choose a real-time search from the presets menu (it doesn't matter which one), I get the following error on all graphs: Error in 'tstats' command: This command is not supported in a real-time search
Any ideas?
Thanks Oliver
↧
Fail connecting with ODBC to Power BI
Hi Splunkers,
we get an error while connecting to Power BI using the 64-bit ODBC driver on Windows 2008 R2:
*The setup routines for the Splunk ODBC Driver. ODBC Driver could not be loaded due to system error code 193.*
PowerBI Details: "ODBC: ERROR [IM003] Specified driver could not be loaded due to system error 193: (Splunk ODBC Driver, )."
Everything works on a similar instance.
Any known or potential scenarios to solve the issue, or pointers on where to look?
Thanks in advance!
↧
Unable to make several independent tab areas in a dashboard
We've created a dashboard with tabs using steps from this post https://www.splunk.com/blog/2015/03/30/making-a-dashboard-with-tabs-and-searches-that-run-when-clicked.html
Now we need several areas with tabs in one dashboard, e.g. the first set of tabs shows data on sourcetypes, the next block of tabs shows data on sources, etc., and they do not depend on each other.
What we get is just one active panel on the dashboard: we see the chart either on sourcetypes or on sources, when we need to see both, each with its own tab selection.
Why the tabs construction is preferable to a link switcher: we need to reduce the number of searches that launch when the dashboard opens. As far as I know, tabs can do this trick, but if Splunk can do the same with links, I'd be glad to know how (see the sketch after the snippet below).
Thanks in advance!
....
|
All sourcetypes ... $link1$...
....|
All CR ... $link5$...
...
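Here's a minimal sketch of how two independent tab areas can coexist (the token names and searches are made up). The lazy-loading trick is that a search referencing an undefined token doesn't dispatch, so each area only needs its own token namespace:
<row>
  <panel depends="$st_view$">
    <chart>
      <search>
        <query>index=_internal sourcetype=$st_view$ | timechart count</query>
      </search>
    </chart>
  </panel>
</row>
<row>
  <panel depends="$src_view$">
    <chart>
      <search>
        <query>index=_internal source=$src_view$ | timechart count</query>
      </search>
    </chart>
  </panel>
</row>
The sourcetype tabs set only $st_view$ and the source tabs set only $src_view$, so selecting a tab in one area never hides or reruns the other area's panels.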
↧
Why is Splunk taking apiStartTime='Thu Jan 1 00:00:00 1970' despite earliest=-2m@m latest=-1m@m being explicitly set in the query? Please help
I have a simple query that searches the index for the last two minutes of data, but I am seeing the query fail because it took apiStartTime='Thu Jan 1 00:00:00 1970'.
Here is the full detail from the audit index:
Audit:[timestamp=09-15-2017 12:41:07.647, id=744860, user=admin, action=search, info=granted , search_id='1505479267.94198', search='search index=os sourcetype=cpu all earliest=-2m@m latest=-1m@m |dedup host| eval fields=split(_raw," ") | eval num=mvindex(fields,-1)| eval cpuUtilization = 100-num |eval human_readable_time=strftime(_time, "%Y-%m-%d %H:%M:%S") |table human_readable_time host cpuUtilization', autojoin='1', buckets=0, ttl=600, max_count=500000, maxtime=8640000, enable_lookups='1', extra_fields='', apiStartTime='Thu Jan 1 00:00:00 1970', apiEndTime='MIN_TIME', savedsearch_name=""][OOj/tZOTT67cXMJngBqHtmpymXMqPZk1wkW1X026icQsZ7ngXEcld/gYjUW4Lx2dAKstiykGXcD7JQcFxlZWS5+k9opZO04TntE8VP9ZbcAwwyJqgm6pVnJnHE0nwtExDgrn3tFxp33fs2Xgj15106f59VCvM39d5WHA7b6oD8c=]
↧
Can I detect a deleted bucket when I enable data integrity on the indexes?
If I configure an index with **enableDataIntegrityControl=true**, will I be able to recognize a bucket that has been deleted with bad intentions, to cover something up?
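For context, the hash-checking side of this feature is driven from the CLI; a sketch of verifying one index (the index name is a placeholder), which validates existing buckets against their hashes but, as far as I know, says nothing about buckets that were removed wholesale:
splunk check-integrity -index my_index
Detecting a silently deleted bucket would likely need something external, e.g. tracking bucket counts or IDs over time.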
↧
↧
NVD JSON feed parsing on Splunk
I am trying to import a JSON file into Splunk Enterprise; my sourcetype configuration is below:
CHARSET=UTF-8
INDEXED_EXTRACTIONS=json
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=timestamp
Below is also an example of the JSON file format:
"cve" : {
"CVE_data_meta" : {
"ID" : "CVE-2011-3177"
},
"affects" : {
"vendor" : {
"vendor_data" : [ ]
}
},
"problemtype" : {
"problemtype_data" : [ {
"description" : [ ]
} ]
},
"references" : {
"reference_data" : [ {
"url" : "https://bugzilla.suse.com/show_bug.cgi?id=713661"
}, {
"url" : "https://github.com/yast/yast-core/commit/7fe2e3df308b8b6a901cb2cfd60f398df53219de"
} ]
},
"description" : {
"description_data" : [ {
"lang" : "en",
"value" : "The YaST2 network created files with world readable permissions which could have allowed local users to read sensitive material out of network configuration files, like passwords for wireless networks."
} ]
}
},
"configurations" : {
"CVE_data_version" : "4.0",
"nodes" : [ ]
},
"impact" : { },
"publishedDate" : "2017-09-08T18:29Z",
"lastModifiedDate" : "2017-09-08T18:29Z"
},
Question: the sourcetype is on the indexer. Any ideas what is wrong?
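Not a definitive fix, but for comparison, a props.conf along these lines is a common starting point for single-event JSON like this (a sketch; the stanza name is hypothetical, and the timestamp field and format come from the sample above):
[nvd_json]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = publishedDate
TIME_FORMAT = %Y-%m-%dT%H:%MZ
Note that INDEXED_EXTRACTIONS is applied where the file is first read (on the forwarder, if one is used), so where this props.conf lives matters.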
↧
Imperva fields not generating after installing add-on
After installing the add-on, the Imperva fields are not being generated; the only thing that was added is the tag. How do I get it to generate the extra fields?
↧
How to display count of distinct values of one field by another field
I have this:
search... | stats values(interfaces) AS Interfaces by circuit
![alt text][1]
Thank you in advance!
[1]: /storage/temp/215586-cusersv907863documents3.jpg
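If the goal is the count of distinct interface values per circuit rather than the list itself (going by the title), a dc() variant of the same search is the usual approach, for example:
search... | stats dc(interfaces) AS interface_count by circuit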
↧
How to extract nested key value pairs from a specific JSON string field using spath and kvdelim?
I have JSON that looks like this. Within the "message" field, there can be one or more key-value pairs. How can I extract the key-value pairs that are inside the "message" field?
{
"severity":"INFO",
"logger":"controllers.offers.OfferController",
"thread":"application-akka.actor.default-dispatcher-297",
"message":"2017-09-14 15:12:56,980 [I] c.o.OfferController h5FCZGLPj95A7DPq 67b33d676699b9cab76c7f86 \/offers\/private\/offer\/saveOffer\/25 POST Successfully saved offerId=69 for productId=3 ",
"properties":{
"path":"\/offers\/private\/offer\/saveOffer\/25",
"http_method":"POST",
"request_id":"xxxxxGLPj95xxxxx",
"client_id":"xxxxxd676699b9cab76xxxxx"
}
}
I've tried this, but it doesn't work:
index=xyz | spath input=message | extract kvdelim="=" pairdelim=" " | table offerId, productId
I need to be able to do this at search time, since it's not possible for me to modify props.conf.
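For what it's worth, extract operates on _raw rather than on an arbitrary field, which may be why the attempt above fails. A rex-based sketch against the message field (assuming offerId/productId are the pairs of interest and their values contain no whitespace):
index=xyz
| spath
| rex field=message "offerId=(?<offerId>\S+)"
| rex field=message "productId=(?<productId>\S+)"
| table offerId, productId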
↧
↧
Having trouble extracting a timestamp
Hello all,
I'm having an issue in my environment while trying to index a set of logs I get from a file nightly: Splunk is not finding the timestamp, and is instead using the file mod time or the index time. I do not have this issue with the other logs sent from this same server (a syslog server sending many logs). At the bottom are three log lines as an example.
I'm trying to extract the epoch timestamp from the start of the line: AV - Alert - "**1504324797**". I'm not seeing any failed-to-parse-timestamp errors, so I'm confused as to why the timestamp is being bypassed in favor of the file mod time or index time.
The input stanza:
[monitor:///apps/alienvault/ossec-alerts-*.log]
whitelist=ossec-alerts
index = test
sourcetype = alienv
disabled = 0
Props.conf (I've commented out the field extractions to make sure they aren't the issue; for TIME_PREFIX I've also tried AV - Alert - " , \-\s\" , no time prefix, and others, and TIME_FORMAT is %s for the 10-digit epoch):
[alienv]
TIME_PREFIX = ^\w+\W+\w+\W+
TIME_FORMAT = %s
TZ = UTC
#REPORT-alienv = av-syslog-hdr, av-syslog-user, av-syslog-srcip, av-syslog-location1, av-syslog-location2, av-syslog-message
#REPORT-alienv-loc = av-syslog-location1, av-syslog-location2
#FIELDALIAS-signature = action as signature
#FIELDALIAS-src = src_ip as src
#TRANSFORMS-sev = av-syslog-sev
#TRANSFORMS-suppressions = av-win-suppress-detail
I have a distributed environment, so I've placed the props.conf/transforms.conf on the indexers and search heads for search-time field extractions. The indexers and search heads are version 6.5; the server I'm forwarding from uses a universal forwarder, version 6.4.1.
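For comparison, here is a variant I'd expect to anchor directly on that prefix (a sketch, untested; MAX_TIMESTAMP_LOOKAHEAD just keeps the parser from reading past the 10 digits):
[alienv]
TIME_PREFIX = AV\s-\sAlert\s-\s"
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
TZ = UTC
Since timestamp assignment happens at index time, this would need to live on the indexers (or a heavy forwarder), not the search heads, and only affects newly indexed data.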
Log line examples:
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFE96B24E23} Service Information: Service Name: krbtgt Service ID: S-1-5-21-2277870611-162051517-1830794436-502 Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65168 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFxxxxxxxxx} Service Information: Service Name: service$ Service ID: S-1-5-21-2277870611-162051517-1830794436-1296 Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65170 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
AV - Alert - "1504324797" --> RID: "700008"; RL: "2"; RG: "windows,authentication_success,"; RC: "A Kerberos service ticket was requested: Success."; USER: "user@server.com"; SRCIP: "None"; HOSTNAME: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; LOCATION: "(Host-xxx-xxx-xxx-xxx) xxx-xxx-xxx-xxx->WinEvtLog"; EVENT: "[INIT]2017 Sep 02 00:00:02 WinEvtLog: Security: AUDIT_SUCCESS(4769): Microsoft-Windows-Security-Auditing: user@server.com: server.domain: server.domain: A Kerberos service ticket was requested. Account Information: Account Name: user@server.com Account Domain: server.domain Logon GUID: {5DDE4BE2-4A37-D51B-77F1-CDFxxxxxxxxx} Service Information: Service Name: service$ Service ID: S-1-5-21-2277870611-162051517-183079xxxx-xxxx Network Information: Client Address: xxx.xxx.xxx.xxx Client Port: 65169 Additional Information: Ticket Options: 0x40810000 Ticket Encryption Type: 0x12 Failure Code: 0x0 Transited Services: - This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested. [END]";
↧
How to combine more than one macro into a single macro
Hi All,
I have 10 to 15 macros in my Splunk. I want to use all 15 macros in a single macro. Is this possible for this use case?
FYI: all of the macros are independent of each other; none are interrelated.
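If it helps, a macro definition can itself expand other macros, so a wrapper along these lines is one possibility (a sketch in macros.conf; the macro names are placeholders):
[all_my_macros]
definition = `macro_one` `macro_two` `macro_three`
Whether this actually works depends on what each macro expands to, since the combined expansion still has to form one valid search string.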
↧
Set multiple tokens using "condition match"
To set tokens, I have several "condition match" elements on a search, but if more than one condition is matched, only the first one seems to work.
To simplify my use case, the search is: index=_internal | stats count by host | table host, count (time range @d to now).
What I expect is that both tokens will be set (both result.host and result.count exist and have a value).
However, only "showtab1" is set.
To my surprise, if I swap the conditions order:
[...]
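That behavior matches the conditions acting like an if/else-if chain, where only the first matching <condition> fires. One workaround is to set every needed token inside each branch; a sketch (token names follow the showtab* pattern above, and the match expressions are illustrative):
<done>
  <condition match="$result.host$ != &quot;&quot; AND $result.count$ != &quot;&quot;">
    <set token="showtab1">true</set>
    <set token="showtab2">true</set>
  </condition>
  <condition match="$result.host$ != &quot;&quot;">
    <set token="showtab1">true</set>
  </condition>
</done>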
↧
REST API option for compressed file?
I want to set up a REST API call to an HTTPS GET request, but this site returns a zip file instead of XML, JSON, or text. Is there a way I could set it to index the zip file? If not, is there any workaround? This is the description from the site:
![alt text][1]
[1]: /storage/temp/215587-a.png
↧
What are the capabilities of "force_local_processing"?
Does anyone know the full effects of the new option "force_local_processing"? How does it change the following information: https://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F
What are the aggregator and regex replacement processors?
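For anyone else looking, the setting lives in props.conf on a universal forwarder; a sketch of its shape (the sourcetype name is a placeholder). As I understand it, it forces the UF to run the parsing pipelines, including the aggregator and regex replacement processors, locally instead of leaving them to the indexer:
[my_sourcetype]
force_local_processing = true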
↧
XML help - collection isn't showing up in this navigation
We have the following code -
For some reason the following doesn't show up -
What could it be?
↧