For every record where the field Test contains the word "Please", I want to replace the value with "This is a test". Below is the logic I am applying, and it is not working. I tried using case and like, and I changed " to ' and = to ==, but I cannot get anything to work.
| eval Test=if(Test=="*Please*", "This is a test", Test)
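For reference (a sketch, untested): eval's == is an exact string comparison and does not expand * wildcards, which is why the query above never matches. The usual alternatives are like() with SQL-style % wildcards, or match() with a regex:

```
| eval Test=if(like(Test, "%Please%"), "This is a test", Test)
| eval Test=if(match(Test, "(?i)Please"), "This is a test", Test)
```

Either line alone does the job; (?i) in the match() version makes the comparison case-insensitive.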
↧
How to Build an If Statement based on if a field contains a string
↧
Security Key in server.conf
What is the difference between the security key in the [clustering] stanza and the key in the [general] stanza in server.conf?
Should the same key be used for both the search head cluster and the indexer cluster?
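For context, pass4SymmKey can be set per stanza, and a more specific stanza overrides [general]; the search head cluster key lives in [shclustering] and is independent of the indexer cluster key, so they do not have to match. A sketch of the layout (keys are placeholders):

```
# server.conf (illustrative; stanza placement only)
[general]
pass4SymmKey = <key1>    # general-purpose secret for this instance

[clustering]
pass4SymmKey = <key2>    # shared by the indexer cluster master and all peers

[shclustering]
pass4SymmKey = <key3>    # shared by all search head cluster members
```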
↧
↧
Subsearch for multiple sourcetypes and fieldnames
I need to do a search in two different sourcetypes and use the result to do additional searches in these queries.
But I have the problem that, while both sourcetypes have similar values, they use different prefixes. So in sourcetype=A the ip is called aIP and in sourcetype=B the ip is called bIP respectively.
So you could search with
aIP="192.168.0.1" OR bIP="192.168.0.1"
However, if you want to use these IPs from a subsearch over both of these sourcetypes, it becomes problematic, and I am not sure what the best solution is.
So let's assume I want to find the IPs used on a specific page called "MAINPAGE", and then use these IPs to search for other pages visited by them in both sources.
I tried to minimize the code as far as possible. It might not make sense anymore, but I hope it's enough to bring across my point.
index=web (sourcetype=a OR sourcetype=b)
[search index=web sourcetype=a apage=MAINPAGE | table aIP]
OR
[search index=web sourcetype=a apage=MAINPAGE | rename aIP as bIP | table bIP]
OR
[search index=web sourcetype=b bpage=MAINPAGE | table bIP]
OR
[search index=web sourcetype=b bpage=MAINPAGE | rename bIP as aIP | table aIP]
| eval page = coalesce(apage, bpage)
| eval ip = coalesce(aIP, bIP)
| table page, ip
So, because the output table of a subsearch automatically becomes search terms for the parent search, and I need to search for both result sets, I don't see a better way than running both searches twice and just renaming the field in the output table.
Is there any way to reduce this to two subsearches, e.g. by renaming the fields without running each search an additional time?
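One pattern worth trying (a sketch, untested against your data) is to let a single subsearch emit a field literally named search: its value is inserted verbatim into the outer query, so the coalesce and both field names are handled in one pass:

```
index=web (sourcetype=a OR sourcetype=b)
    [ search index=web ((sourcetype=a apage=MAINPAGE) OR (sourcetype=b bpage=MAINPAGE))
      | eval ip=coalesce(aIP, bIP)
      | dedup ip
      | eval search="(aIP=\"" . ip . "\" OR bIP=\"" . ip . "\")"
      | fields search ]
| eval page = coalesce(apage, bpage)
| eval ip = coalesce(aIP, bIP)
| table page, ip
```

Each subsearch row contributes one (aIP=... OR bIP=...) clause, so the four subsearches collapse into one.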
↧
Adding a new indexer to the indexer cluster
Can I please know the process of adding a new indexer to an indexer cluster?
Should the cluster be kept in maintenance mode while adding the new indexer?
Should the secret key be added in the [general] stanza or the [clustering] stanza?
What is the difference between splunk.secret and pass4SymmKey?
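For what it's worth, the usual sequence for adding a peer is roughly the following, run on the new indexer (URI, port, and key are placeholders); the -secret value ends up in the [clustering] stanza as pass4SymmKey:

```
splunk edit cluster-config -mode slave -master_uri https://<master-host>:8089 \
    -replication_port 9887 -secret <cluster pass4SymmKey>
splunk restart
```

On the splunk.secret question: splunk.secret is a per-instance file used to encrypt credentials stored on disk, while pass4SymmKey is the shared authentication key between cluster members; they are different things.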
↧
Help with the query that works with splunk server groups
Hi,
Below is the query I am using to get the hostname, IP address, and the time each host last reported to Splunk.
| metadata type=hosts index=apache_web splunk_server_group=abc
| search
    [ | makeresults
      | eval host=apacheweb123
      | table host
      | makemv host delim=" "
      | mvexpand host
      | eval host="*".host."*"
      | format ]
| table host
| append
    [ | makeresults
      | eval host=apacheweb123
      | table host
      | makemv host delim=" "
      | mvexpand host ]
| join
    [ search index=_internal hostname=*
      | stats count by hostname sourceIp
      | table hostname sourceIp
      | rename hostname as host ]
But the above search does not work when the server group is specified, and I need server groups to make the search faster over a large data set. Any help getting the hostname, IP address, and last-reported time while keeping splunk_server_group would be appreciated.
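A possible restructuring to compare against (a sketch, untested; the host pattern and the sourceIp field are taken from your query, and splunk_server_group is assumed to behave in tstats as it does elsewhere in your searches):

```
| tstats max(_time) as lastReport
    where index=apache_web splunk_server_group=abc host=*apacheweb123*
    by host
| eval LastReported=strftime(lastReport, "%m/%d/%y %H:%M:%S")
| join type=left host
    [ search index=_internal hostname=*
      | stats count by hostname sourceIp
      | rename hostname as host ]
| table host, sourceIp, LastReported
```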
↧
↧
Is there a way to clone already existing fields from one app to another
Hi, I have two apps: one is a normal app and the other is a machine learning app. I want to clone all the extracted fields from the normal app to the machine learning app. Is there any way to do this? I have gone through the machine learning documentation but couldn't find a solution.
Thanks,
Chandana
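One approach (a sketch, assuming the extractions are defined in the source app's props.conf/transforms.conf) is to share them globally rather than clone them: either via the UI (Settings > Fields > Field extractions > Permissions > All apps), or by exporting the app's objects in its metadata:

```
# $SPLUNK_HOME/etc/apps/<normal_app>/metadata/local.meta (illustrative)
[props]
export = system
```

With export = system, the extractions become visible to every app, including the machine learning one, without copying any files.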
↧
Data stopped coming into Splunk for the Splunk Add-on for Microsoft Cloud Services
We are running Splunk Enterprise 7.0.1
On our Splunk heavy forwarder we installed and configured the Splunk Add-on for Microsoft Cloud Services (current version 2.0.3).
We stopped receiving any data in Splunk from that add-on as of yesterday evening.
The troubleshooting page for the add-on looks OK. It shows "Certificate Status: Auto-generated and verified as valid".
There are a few errors and warnings in the Splunk internal index (samples below).
Any advice on how to approach this issue and possibly fix it would be appreciated.
Here are the patterns of errors and warnings:
1) ...File "/export/opt/splunk/lib/python2.7/ssl.py", line 653, in read v = self._sslobj.read(len) SSLError: ('The read operation timed out',)
source = $SPLUNK_HOME/var/log/splunk/splunk_ta_microsoft-cloudservices_management.log
2) File "/export/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/httplib2/__init__.py", line 1059, in connect raise SSLHandshakeError(e) SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)
source = $SPLUNK_HOME/var/log/splunk/splunk_ta_microsoft-cloudservices_management.log
3) Pipeline data does not have indexKey. [_path] = /export/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/ms_o365_management.py\n[_raw] = \n[_meta] = punct::\n[_stmid] = xeoUyu7qLzDHQE\n[MetaData:Source] = source::ms_o365_management\n[MetaData:Host] = host::dc1nix2p69\n[MetaData:Sourcetype] = sourcetype::ms_o365_management\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_conf] = source::ms_o365_management|host::dc1nix2p69|ms_o365_management|\n
source = /export/opt/splunk/var/log/splunk/splunkd.log
Thank you!
↧
Extracting field value gets encoded. Why?
I have extracted a value from the message log, so I have a custom field with its value.
In the log, the message displays "* myName=J&K *".
The extracted field is myName, and its value is now "J\u0026K".
Even when I export this to PDF or CSV, the encoded value is displayed.
Why is this occurring, and is there a way to prevent the automatic encoding?
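\u0026 is the JSON/Unicode escape for &, so the extraction is most likely running over an escaped copy of the message rather than the rendered one. As a post-hoc sketch (field name from your question; the number of backslashes may need adjusting depending on where the escaping happens):

```
| eval myName=replace(myName, "\\\\u0026", "&")
```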
↧
Can we find the events which are not matched by Lookup table?
I have a lookup table with which I am categorizing the error messages received from a particular sourcetype, "error".
Below is the SPL query that I have used to categorize the Error Messages:
sourcetype=error
| lookup Error_Wildcards "ErrorMessage" AS "Error Description" output "ErrorType" as "Error Type"
| stats count(Robot) as "Number Of Robots with Error" by "Error Type"
| table "Error Type", "Number Of Robots with Error"
I need to retrieve the events which are not matched by the lookup table and categorize them as "Not Categorized".
Is it possible in any way, by modifying the lookup CSV file or by modifying the search query? Please suggest!
Thanks
Maria Arokiaraj
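Events that miss the lookup simply come back with a null output field, so the gap can be filled right after the lookup (a sketch based on your query, untested):

```
sourcetype=error
| lookup Error_Wildcards "ErrorMessage" AS "Error Description" output "ErrorType" as "Error Type"
| eval "Error Type"=if(isnull('Error Type'), "Not Categorized", 'Error Type')
| stats count(Robot) as "Number Of Robots with Error" by "Error Type"
| table "Error Type", "Number Of Robots with Error"
```

No change to the CSV is needed; the eval gives every unmatched event the "Not Categorized" bucket before the stats.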
↧
↧
Where to Locate Custom Reports and Alerts for ES via the CLI
Good day. My team is in the process of migrating our instance of ES to a new server, and I am trying to locate my custom reports and alerts on the command line so that I can extract them and migrate them to the new server.
Can someone point me in the right direction regarding which directory these customizations live in? And if this isn't possible, now would be a good time to let me know that too :-)
Thank you in advance for the assist!
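For reference, reports and alerts are both stored as stanzas in savedsearches.conf; a sketch of the usual on-disk locations (app and user names are placeholders, and ES-created content often lands under its DA-/SA- apps or SplunkEnterpriseSecuritySuite):

```
$SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf           # shared within an app
$SPLUNK_HOME/etc/users/<user>/<app>/local/savedsearches.conf   # private to a user
```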
↧
How can I give access to only a single index to a custom Role that I've created?
I am trying to create a role that has access to only a single index and can only view the 'search' app.
The way I created the role was by copying all the capabilities and other settings from the 'user' role to my new role. The only differences are that the 'Indexes searched by default' and 'Indexes' list are limited to only the one index I want them to see. I then went to the 'Permissions' page for the Search app and gave the new role Read+Write permissions.
After creating a dummy user and placing it in the role, I logged in and found that indeed it only had access to the search app and could not see others. However when I attempt to execute a search, no results are returned. The search query I used was: 'index=my_custom_index'. The same query works if I run it as myself (an admin).
My Splunk setup is: 3 search heads, 3 forwarders, 4 indexers, and a license server. I made all of the above changes in the captain search head's UI.
Are there any steps that I am missing? And are there any other troubleshooting techniques I can use? I've tried looking at the search job logs but there are no clear indications of what permissions were missing or what caused 0 results to be returned.
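For comparison, the role you describe corresponds roughly to this authorize.conf sketch (the role name is invented); one thing to verify in a search head cluster is that the role definition is consistent across all members:

```
# authorize.conf (illustrative)
[role_single_index_user]
importRoles = user
srchIndexesAllowed = my_custom_index
srchIndexesDefault = my_custom_index
```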
↧
REGEX filter in transforms.conf file setting question
We're forwarding events to a third party. In our transforms.conf file, the filter looks like the following:
REGEX = .
For some reason, this filter only captures names without any hyphens. Here's what I'm talking about:
Success - Computer
Failure - Co-m-puter
We have computer names with '-' in them, but they don't get captured. Is there a better wildcard pattern that can be used to capture all computer names, regardless of which characters are in them?
Thanks!
↧
Query to grab the metadata of the host entered by the user
Hello,
Can someone please help me build a query that will display the hostname, IP address, and the time last reported by the forwarder?
If I use index=* host=*, that will put too much load on the indexers. Is there a better way to grab those metrics?
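One lighter-weight option is the metadata command, which reads index metadata instead of scanning events (the index name is a placeholder; note it returns the last event time per host but not the forwarder's IP, which would still need another source such as index=_internal):

```
| metadata type=hosts index=your_index
| eval LastReported=strftime(lastTime, "%m/%d/%y %H:%M:%S")
| table host, LastReported
```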
↧
↧
Notable Event Urgency issues
I have set up a few correlation searches whose notable events currently show up in the Incident Review console with urgency "unknown" if you uncheck all the urgency levels. I have checked the searches, and they have the correct input. I also set it up so that all three values (priority, severity, urgency) evaluate to "high", but the event still only fires as "medium". Does anyone know what could be causing these events to not show up as high? I have reviewed the articles about how urgency is assigned, and the lookup table is fine; it actually says the urgency should be set to high, but it still is not.
↧
How to count daily events with specific time?
Hi guys,
I need to count the number of events daily, from 9 AM to 12 midnight. Currently I have "earliest=@d+9h latest=now" in my search.
This works well if I select "Today" in the time picker. However, if I select "Yesterday", it still counts the events from today.
How can I fix this?
Thanks a lot!
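The cause is that earliest=@d+9h in the search string always resolves against the current day and overrides the time picker. One sketch (index name is a placeholder, untested) is to let the picker set the overall range and filter the hours inside the search instead:

```
index=your_index
| eval hour=tonumber(strftime(_time, "%H"))
| where hour>=9
| bin _time span=1d
| stats count by _time
```

With this, selecting "Yesterday" in the picker counts only yesterday's 9 AM to midnight events.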
↧
How to make search faster
Hello,
Below is my search. Since I am using join, the search is slow. Is there a way to increase the speed of the search other than specifying the index absolutely?
| tstats max(_time) as lastReport WHERE splunk_server_group=abc index=*_abc_* OR index=main by host
| eval LastReported=strftime(lastReport, "%m/%d/%y %H:%M:%S")
| table LastReported host
| join host
    [ search index=_internal hostname=*
      | stats count by hostname sourceIp
      | rename hostname as host ]
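One common join replacement (a sketch based on your query, untested) is to append the second data set and merge with stats, which usually scales better than join:

```
| tstats max(_time) as lastReport where splunk_server_group=abc (index=*_abc_* OR index=main) by host
| append
    [ search index=_internal hostname=*
      | stats count by hostname sourceIp
      | rename hostname as host ]
| stats max(lastReport) as lastReport, values(sourceIp) as sourceIp by host
| where isnotnull(lastReport)
| eval LastReported=strftime(lastReport, "%m/%d/%y %H:%M:%S")
| table LastReported, host, sourceIp
```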
↧
REST API to check persistent queue file size
Hi all,
As per the title, is there a REST API to get the persistent queue size on a heavy forwarder?
I have set up persistent queues on my forwarders. I would like to monitor the queue size from the Monitoring Console in case it exceeds a certain level (e.g. 1 MB, 10 MB, etc.), and fire an alert if that happens. So far I have been checking the REST API reference manual but have been unable to find anything.
http://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTprolog
Any help is much appreciated. Thanks!
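I'm not aware of a REST endpoint that reports persistent queue file size specifically. As a workaround sketch (a different technique than the REST API asked about), splunkd's metrics.log records queue fill levels that can be searched and alerted on; whether it covers the on-disk persistent portion is worth verifying on your version:

```
index=_internal source=*metrics.log* group=queue
| stats latest(current_size_kb) as current_size_kb by host, name
| where current_size_kb > 1024
```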
↧
↧
Getting Winsock error 10061
We are forwarding logs through a UF -> HF -> indexer setup, but the logs are not getting through.
We checked the splunkd log on the HF, and it has the error below:
01-08-2018 20:30:15.311 -0600 ERROR ApplicationUpdater - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: Winsock error 10061
What could be the possible issue here?
Thanks!
↧
tooltip without javascript in splunk panel title
Hi, I have a requirement to add a tooltip or mouse-hover capability to an image in a panel title. We have added the image to the title using the url option of the background property for the panel in CSS:
something like:
xyz.h2.panel{
background:right float url (////.png)
}
Now I need to show a textual description when hovering over this title. I have 20 panels in my dashboard and need to add an individual tooltip to each one. Any help or pointers appreciated.
P.S. I don't have access to put files in the app's /static folder; I can only use Splunk's inline CSS capability, and I have never used inline JS.
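Pure CSS can approximate a tooltip with a :hover pseudo-element; since content: is fixed per rule, each of the 20 panels would need its own rule keyed off an id or class. A sketch (selector and text are assumptions; adjust to your actual panel markup):

```
#panel1 h2.panel-title { position: relative; }
#panel1 h2.panel-title:hover::after {
    content: "Description for panel 1";
    position: absolute;
    left: 0;
    top: 100%;
    background: #333;
    color: #fff;
    padding: 4px 8px;
    white-space: nowrap;
    z-index: 10;
}
```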
Best,
↧
Location and site definition in Indexer Cluster
Hi,
I am trying to set up new indexer clusters which must comply with different regulators.
There are three locations (EMEA, ASIA, US). Each location has two sites.
What I would like is replication within a location, not across locations.
The setup in the config would look like:
site_replication_factor = origin:1, emea(site1:1, site2:1), asia(site3:1, site4:1), us(site5:1, site6:1), total:2
Does anyone know a way to manage that with a single indexer cluster master instead of having a master for every location?
Thanks for your help.
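As far as I know, site_replication_factor only accepts origin, explicit site terms, and total; there is no grouping construct like emea(site1:1, site2:1), so a single master cannot express "replicate within location only" in that form. For reference, the documented multisite shape looks like this (values illustrative):

```
# server.conf on the cluster master (illustrative)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2,site3,site4,site5,site6
site_replication_factor = origin:1, total:2
site_search_factor = origin:1, total:2
```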
↧