Hi All,
I'm having trouble forming the logic for sorting the latest and previous data so that I can compare them.
Field1 holds a status value and Field2 holds an ID; I want to sort by time and compare the latest values against the previous ones.
The searched and filtered data looks like this:
Event 1 -> Time=10:02AM , Field1=100 , Field2=1
Event 2 -> Time=10:01AM, Field1=50, Field2=2
Event 3 -> Time=9:25AM, Field1=80, Field2=1
Event 4 -> Time=9:24AM, Field1=40, Field2=2
Event 5 -> Time=9:05AM, Field1=70, Field2=1
Event 6 -> Time=9:02AM, Field1=20, Field2=2
Desired end result:
Total Field1 = 150 (the sum of 100 + 50), picking the latest event for each of Field2=1 and Field2=2.
Compare that against the previous result, Field1 = 120 (the sum of 80 + 40), picking the 2nd-latest event for each of Field2=1 and Field2=2.
My objective is to present the difference between the two values in a Single Value visualization.
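For illustration, the kind of logic I have in mind looks roughly like this (a sketch only; the index name is a placeholder for my real data):
index=my_index
| sort 0 - _time
| streamstats count as rank by Field2
| where rank <= 2
| eval period=if(rank==1, "Latest", "Previous")
| stats sum(Field1) as Total by period
That should give one row for the latest pair and one for the previous pair; I would still need to turn those two rows into a single difference value for the panel.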
Thanks.
↧
How to Sum Latest and Previous Field1 from multiple Field2.
↧
Cisco ASA add-on is not extracting fields
We recently upgraded the environment from 6.5 to 7.2, and ever since the upgrade the rule field is not extracted properly for Cisco message ID 106100, although it is extracted for message ID 106123. We have defined props and transforms but still cannot extract the rule field for message ID 106100. The rule field should be extracted by default, but some of the fields are not, so we have to extract them manually.
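For reference, the kind of search-time extraction I have been testing looks roughly like this; the regex is my own guess at the 106100 message layout, not something taken from the add-on:
props.conf
[cisco:asa]
# guess: pull the access-list name that follows the 106100 message ID into a field named rule
EXTRACT-rule_106100 = 106100:\s+access-list\s+(?<rule>\S+)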
↧
↧
Am I using modular regular expressions wrong?
Hey,
I need to route my data to a different index and append something to the host field if a certain regex matches, following the well-known method using props.conf and transforms.conf, for example documented [here][1] and also mentioned in [transforms.conf][2]. My transforms.conf looks like this (props.conf has `TRANSFORMS-class = route_host_by_foo,route_index_by_foo` applying this to the appropriate data):
[route_host_by_foo]
REGEX = foo
DEST_KEY = MetaData:host
FORMAT = $0_custom_suffix
# $0 already contains "host::", so no need to prepend
[route_index_by_foo]
REGEX = foo
DEST_KEY = _MetaData:Index
FORMAT = custom_index
This is working fine. Since I need to change two DEST_KEY values, host and index, and this requires two transforms.conf stanzas, I've tried to move my regex into a modular regular expression as documented [here][3] (search for "MODULAR REGULAR EXPRESSION") to avoid redundant config. It looks like this:
[foo]
REGEX = foo
[route_host_by_foo]
REGEX = [[foo]]
DEST_KEY = MetaData:host
FORMAT = $0_custom_suffix
# $0 already contains "host::", so no need to prepend
[route_index_by_foo]
REGEX = [[foo]]
DEST_KEY = _MetaData:Index
FORMAT = custom_index
Unfortunately, this doesn't work (same setup as before with props.conf), and I don't see why. Can someone explain?
[1]: https://docs.splunk.com/Documentation/Splunk/7.2.6/Indexer/Setupmultipleindexes#Route_specific_events_to_a_different_index
[2]: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Transformsconf
[3]: https://docs.splunk.com/Documentation/Splunk/7.2.6/Admin/Transformsconf#transforms.conf.example
↧
How to fix AttributeError caused by split() in ldapgroup.py
Hi,
I'm using the ldapgroup command from SA-ldapsearch (Splunk Supporting Add-on for Active Directory). It allows me to obtain nested users in AD groups. However, the command fails for certain groups in my AD. Although we can't "see it", there seem to be "list objects" in some of the AD groups and/or users, as opposed to "normal" string objects. We get the following error in Splunk when trying to use the ldapgroup command on these groups (some uninteresting lines cut away):
AttributeError at ".../ldapgroup.py": 'list' object has no attribute 'split'
Traceback:
...
netbios_domain_name = entry_attributes.get('msDS-principalName', ' ').split('\\',1)[0]
Does anyone have a solution to this problem other than manually going through AD and changing all the list objects? That would also require a way to actually identify these objects. In addition, are there any arguments against using list objects in AD? If not, then the ldapgroup script should support list objects as well, and I can file this as a bug/improvement request.
↧
Server Availability query from Incident data
I have a lookup table with the fields Application name and host, and I have real-time incident data with an index, sourcetype, and ServerName field. I need to retrieve two things:
1. If there is an incident whose ServerName matches the host in the lookup table and whose Summary field says the server is down, display the result as DOWN.
2. If there is no incident ticket for the host, the result should always show as UP (by default the result should be UP for the last 60 minutes unless there is a ticket matching the above criteria).
Below is the query I tried, but it shows results only when there is relevant data in the index:
| inputlookup ServerFile
| rename "host" as "ServerName"
| join type=inner ServerName
[ search index=*Data* sourcetype="incidents"
| eval Result=if(SHORT_DESCRIPTION like "host Down%", "DOWN","UP")
| table ServerName Result]
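One variation I was thinking of trying (a rough sketch, untested, assuming the incident summary lives in SHORT_DESCRIPTION) is a left join plus fillnull, so that hosts with no ticket default to UP:
| inputlookup ServerFile
| rename host as ServerName
| join type=left ServerName
    [ search index=*Data* sourcetype="incidents" earliest=-60m
      | eval Result=if(like(SHORT_DESCRIPTION, "%Server down%"), "DOWN", "UP")
      | stats latest(Result) as Result by ServerName ]
| fillnull value="UP" Result
| table ServerName Result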
↧
↧
'Configure Splunk forwarding to use your own certificates' possible documentation error
Hi,
I'm trying to configure Splunk forwarders and indexers to use our own certificates, and while checking the documentation (https://docs.splunk.com/Documentation/Splunk/7.2.6/Security/ConfigureSplunkforwardingtousesignedcertificates) I've seen the following:
**Configure your forwarders to use your certificates**
...
[tcpout:group1]
server=10.1.1.197:9997
disabled = 0
clientCert = The full path to the client SSL certificate in PEM format. If this value is provided, the connection will use SSL.
useClientSSLCompression = Disabling tls compression can cause bandwidth issues.
**sslPassword = The password for the CAcert**
I don't understand why the CA certificate's password would be needed here, as this is a private password.
Is this correct? Is the documentation okay? Could someone explain the reason for this?
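For context, this is roughly how I would expect to fill in that stanza with our own files (paths and password are placeholders, and what sslPassword actually refers to is exactly the part I'm unsure about):
[tcpout:group1]
server = 10.1.1.197:9997
disabled = 0
clientCert = /opt/splunkforwarder/etc/auth/mycerts/myClientCert.pem
useClientSSLCompression = true
# my assumption: the password protecting the private key bundled in myClientCert.pem
sslPassword = myKeyPassword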
Thanks.
↧
How does the independent Stream forwarder app on a Linux machine forward NetFlow data to indexers in a clustered environment?
Hi!
The Splunk environment has 2 Indexers (Clustered) and 1 Search Head.
There is a dedicated Linux machine which forwards the NetFlow data received on port 9998 to the indexers.
The streamfwd.conf is set up as below:
----------
[streamfwd]
httpEventCollectorToken = 06e31ecb-61e7-4f5d-bf7e-5651dbbc125a
ipAddr = 0.0.0.0
indexer.0.uri = http://10.23.0.14:8088
indexer.1.uri = http://10.23.0.15:8088
netflowReceiver.0.ip = 11.23.112.13
netflowReceiver.0.port = 9998
netflowReceiver.0.decoder = netflow
logConfig = streamfwdlog.conf
dedicatedCaptureMode = 0
netflowReceiver.0.protocol = udp
netflowReceiver.0.decodingThreads = 16
netflowElement.0.id = 258
tcpServer.0.address = 11.23.112.13
tcpServer.0.port = 80
----------
The data is being sent to both Indexers.
But around 97% of the data is being sent to Indexer no. 2.
Is there any logic to how streamfwd decides which data goes to which indexer, or does it need to send all the data to both indexers?
How does streamfwd work in clustered environments?
Thanks a lot.
↧
Bootstrapping a secure management configuration with company certificates
Distributing certificates to forwarders for the indexer configuration works fine in Splunk.
But what about the management communication?
It seems to be a chicken-and-egg problem.
Can this be done via the deployment mechanism, sending the forwarders appropriate configuration and certificates?
But then as soon as that configuration is active, the deployment server will no longer accept the connections until that is switched as well. Or is there a fallback mechanism to internal certs that allows a smooth transition?
thx
afx
↧
Problem with map command - Using search from lookup
Hi all,
I am trying to run a map command that will run searches from a lookup one by one, as follows:
| inputlookup "Correlation_searches.csv"
| head 1
| map search="$check_search$"
The head 1 is just for debugging purposes. The value of $check_search$ is the search string.
For some reason I get the following error:
Unable to run query '"| tstats `summariesonly` count from
datamodel=\"Change_Analysis.All_Changes\" where earliest=-7d@h latest=now
nodename=\"All_Changes.Account_Management\" \"All_Changes.tag\"=\"delete\""'.
But when I ran this search directly, it worked just fine:
| makeresults 1
| map search="| tstats `summariesonly` count from datamodel=\"Change_Analysis.All_Changes\"
where earliest=-7d@h latest=now nodename=\"All_Changes.Account_Management\"
\"All_Changes.tag\"=\"delete\""
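Comparing the two, the only difference I can see is that the value coming from the lookup still has literal double quotes wrapped around the whole query, so one thing I want to try is stripping them before the map (just a guess; I'm assuming the lookup column is called check_search):
| inputlookup "Correlation_searches.csv"
| eval check_search=trim(check_search, "\"")
| head 1
| map search="$check_search$"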
Thanks !
↧
↧
Counting distinct SerialNumbers with dc takes a lot of time
hello
I have this query:
index = amer_pj
| `SerialNumber`
| `Region`
| stats dc(SerialNumber) as SerialNumber by Region
| table SerialNumber
which is supposed to count the number of unique SerialNumbers per Region.
For the last 30 days it takes more than an hour to complete.
What am I doing wrong? Is there a better way to do it?
Is there a way to save the result of the last run?
(it is a dashboard, not a report)
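One thing I was wondering about for that last point is whether a scheduled saved search plus loadjob in the dashboard panel would do it, something like this sketch (the saved-search name is made up):
| loadjob savedsearch="me:search:serials_by_region"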
thanks
↧
Line breaking on an expression passed into the log
I'm trying to line-break on **"CIB"** being passed into the log. (I know, these logs are awful.) I'm having problems breaking on the **CIB** expression, though; Splunk wants to break on OFX. Any suggestions?
SHOULD_LINEMERGE=false
LINE_BREAKER=(^(?P\w+\s+))
TZ=America/Chicago
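What I'm guessing I actually need is a rule that breaks before each CIB, something along these lines (an untested guess, not what I currently have deployed):
# break only where a newline is immediately followed by CIB
LINE_BREAKER = ([\r\n]+)(?=CIB\s)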
**Log Format:**
**CIB** 2019-05-06 09:07:30,839] [THREAD: iner : 17] com.ffusion.ffs.ofx.servlets.OFXServlet - Mon May 06 09:07:30 CDT 2019: OFXServlet: OFXHEADER:100
DATA:OFXSGML
VERSION:151
SECURITY:NONE
ENCODING:USASCII
Show all 9 lines
20190506090730.831 19640191 ctaxnqidzgkzuete1557133644150B1732PK0400 ENG 426 051900395 CIB 0200 Y PROD -2b777cc0:16a8c453ac4:-2a0a 051900395 87836273 CHECKING 20190506 20190506 Y Y
20190506090730.796 13927199 wlipfswymcgvelcy1557133638179B1182PK0400 ENG 642 071901604 CIB 0200 Y PROD -12f8b87f:16a8c39e671:-e19 071901604 3332930001 CHECKING 20190506 20190506 Y Y
**CIB** 2019-05-06 09:07:30,724] [THREAD: iner : 40] com.ffusion.ffs.ofx.servlets.OFXServlet - Mon May 06 09:07:30 CDT 2019: OFXServlet: RQID:20190506140725.981_5323260_zoadfefhnclstbhc1557151644827B2797PK0900 user: null is authorized
**CIB** 2019-05-06 09:07:30,724] [THREAD: iner : 40] com.ffusion.ffs.ofx.servlets.OFXServlet - Inside New Parser processing
**CIB** 2019-05-06 09:07:30,885] [THREAD: iner : 40] com.ffusion.ffs.ofx.servlets.OFXServlet - Mon May 06 09:07:30 CDT 2019: OFXServlet: OFXHEADER:100
DATA:OFXSGML
VERSION:151
Show all 12 lines
**CIB** 2019-05-06 09:07:30,723] [THREAD: iner : 40] com.ffusion.ffs.ofx.servlets.OFXServlet - Mon May 06 09:07:30 CDT 2019: OFXServlet: RQID:20190506140725.981_5323260_zoadfefhnclstbhc1557151644827B2797PK0900 OFXHEADER:100
DATA:OFXSGML
VERSION:151
SECURITY:NONE
ENCODING:USASCII
Show all 9 lines
20190506090730.708 9866661 vfhntpuabsayykui1557133650682B1172PK0400 ENG 774 084201294 CIB 0200 Y PROD -12f8b87f:16a8c39e671:-e1b 19990101 Y Y N Y Y
20190506090730.670 11761432 zhecbsmbwliobrgk1557133646660B2948PK0400 ENG 144 111102758 CIB 0200 Y PROD -125ceb71:16a8c6053a2:-7e56 TRHST 500 111102758 261503081 CHECKING 20190410 20190506 Y
20190506090730.647 8480130 yxsidmahmlailtri1557133622247B2718PK0400 ENG 448 325081306 CIB 0200 Y PROD -125ceb71:16a8c6053a2:-7e5a ESP 20180406 20190506 2000510-1 LOAN
20190506090730.639 8964814 ooaxvqjugedktndw1557133650611B2878PK0400 ENG 092 211871691 CIB 0200 Y PROD -12f8b87f:16a8c39e671:-e1f 19990101 Y Y N Y
20190506090730.633 8437258 yqfixwpbmjyuxycs1557133650578B2585PK0400 ENG 158 071925567 CIB 0200 Y PROD 4c4e9ea8:16a8bfa5cde:-68b2 19990101 Y Y N Y
20190506090730.621 9516145 oaergmlhxnraymbb1557133647475B2893PK0400 ENG 446 096010415 CIB 0200 Y PROD -492b898c:16a8a6f9bd4:4ba5 TRHST 500 096010415 69833115 SAVINGS 20190429 20190506 Y
↧
Estimated date/release for end of support of Linux kernel 2.6
Hello,
I see that support for Linux kernel 2.6 has been deprecated for a year now (since April 2018, with Splunk 7.1.0).
https://docs.splunk.com/Documentation/Splunk/7.1.0/ReleaseNotes/Deprecatedfeatures#Platform_support
I would like to know which future version of Splunk will remove support for this kernel version, and whether there is an estimated release date.
Regards,
Christophe
↧
Add-on approval procedure
Hello everyone,
Is it correct that we no longer need to request add-on approval, and that it is now done automatically by AppInspect instead?
Thanks,
↧
↧
How to get rid of blank space in my linechart result when using timechart command?
I am trying to read CPU usage from a PC and present it using timechart. The chart has gaps wherever the machine is offline and there is no data for that period. How can I ignore those gaps and make the line continuous?
When I use cont=false, it removes the _time values from the x-axis and shows only the label _time. I need the time values on the axis as well as getting rid of the gaps in the timechart.
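To illustrate, what I'm running is roughly this (sourcetype and field names are placeholders for my setup):
index=perfmon sourcetype="Perfmon:CPU" counter="% Processor Time"
| timechart cont=false avg(Value) as cpu_usage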
Is there any option for this?
↧
Connecting IBM MQ to Splunk
Does anyone know how to load MQ queue data into Splunk? I have a series of events constantly arriving in IBM MQ, and I want to load that data into Splunk automatically and create dashboards for the customers. TIA.
↧
Splunk Enterprise deployment on AWS fails.
I followed the Splunk Enterprise Deployment guide and created a stack in my existing AWS VPC. I was in the middle of configuration when the CloudFormation process rolled back due to a "Failed to receive 1 resource signal(s) within the specified duration" exception. What signal did it miss? The UI was up and running and I had completed several configuration steps. Was there an installation step I missed?
↧
nslookup TXT queries with Splunk
I am trying to see if it's possible to run `nslookup -q=TXT domain 8.8.8.8` from Splunk so I can compare the output against an existing lookup CSV file.
↧
↧
JBoss server running on Linux - how to check whether it is running or not?
Hi,
I want to create a server status dashboard that checks whether a JBoss server running on Linux is up or not.
I cannot use any add-ons; this needs to be achieved using the standard Splunk Search & Reporting app.
Is this achievable? If yes, please help.
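One idea I had, assuming the Linux host already forwards some logs to Splunk, is to treat "no recent events from that host" as down (a rough sketch; the host name and the 10-minute threshold are placeholders):
| tstats latest(_time) as last_seen where index=* host="my-jboss-host" by host
| eval status=if(now() - last_seen < 600, "UP", "DOWN")
| table host status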
Thanks in advance!
↧
How to restrict access to indexed fields
I would like to restrict access to a specific indexed field. Here's my scenario:
- events contain usernames
- I use INGEST_EVAL to create the user field (user), create a hashed version of it (user_hash), and modify _raw to replace the username with the value of user_hash (see the sketch after this list)
- this is done at index time and fields are indexed
- my goal is to allow all users access to user_hash. This pseudo-anonymization allows for stats by user without exposing the actual username
- more privileged users will be allowed to access the user field to see the actual value
- I've set up the INGEST_EVAL extractions and they work fine
- what is the best way to restrict access to the user field to only a specific role?
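For reference, here is a sketch of the kind of INGEST_EVAL config I'm describing (illustrative only; the regex assumes events contain a literal user=<name> pair, and sha256 is simply the hash I happen to use):
# transforms.conf (sketch)
[hash_username]
INGEST_EVAL = user=replace(_raw, ".*\suser=(\S+).*", "\1"), user_hash=sha256(user), _raw=replace(_raw, "user=\S+", "user=" . user_hash)
# props.conf (sketch)
[my:sourcetype]
TRANSFORMS-hashuser = hash_username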
↧
How can I add a percentage sign to the radial gauge number that is displayed?
Hi splunkers!
I have this query and I would like to display the percentage symbol in a radial gauge, but it doesn't show the number with "%" inside the gauge.
What do I have to do to get this done?
index="db_archer2" sourcetype="db_archer2" "C_xE1lculo ponto Aberto _ Fechado"="Aberto" "Criticidad _CVSS"="Cr\\xEDtico" OR "Criticidad _CVSS"="Alta" "Risco Atual do ponto"="Real" "Gestion vulnerabilidades"="Web Gestionadas Internamente" OR "Gestion vulnerabilidades"="Webs Externalizadas" OR "Gestion vulnerabilidades"="Apps M\\xF3viles" "Respons_xE1vel T_xE9cnico pela Corre_xE7_xE3o"!="Equipe Fraudes" "Nome do Projeto"!="Cyber-Hunting" OR "Nome do Projeto"!="Purple Team" "Torre DTI"!="Coligada - Zurich" OR "Torre DTI"!="Coligada - SuperDigital" OR "Torre DTI"!="Coligada - Ole" OR "Torre DTI"!="Coligada - S3" OR "Torre DTI"!="Coligada - GetNet" OR "Torre DTI"!="Coligada - Universia"
| stats count as IFA
| eval IFA = round((IFA/156)*100, 2)
| eval IFA =tostring('IFA')."%"
| gauge IFA 0 1 3 4
Thanks in advance!
↧