I have the following search:
....| eval "cs"=case(CallRate<=250,"Under 250 kps", CallRate<=500,"Under 500 kps", CallRate<=750,"Under 750 kps", CallRate<=1000,"Under 1000 kps", CallRate<=1250,"Under 1250 kps", CallRate>1250, "Above 1250 kps") | stats count by cs | eval cs=cs+" -- "+count+" calls"
I want to make the pie chart easy for my client to understand, but the fields in the pie chart organize themselves alphabetically. Is there a way to sort them in the original order, as above? The following is the resulting pie chart:
![alt text][1]
[1]: /storage/temp/209710-1.png
I want it to be in this order:
Under 250 kps, Under 500 kps, Under 750 kps, Under 1000 kps, Under 1250 kps, Above 1250 kps
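One workaround (an untested sketch reusing the search above): prefix each label with a number, so that the chart's alphabetical sort happens to match the intended order:

```spl
....| eval cs=case(CallRate<=250,"1. Under 250 kps", CallRate<=500,"2. Under 500 kps",
    CallRate<=750,"3. Under 750 kps", CallRate<=1000,"4. Under 1000 kps",
    CallRate<=1250,"5. Under 1250 kps", CallRate>1250,"6. Above 1250 kps")
| stats count by cs
| eval cs=cs." -- ".count." calls"
```

This relies on the pie chart sorting its labels lexicographically, which is exactly the behavior described in the question.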
↧
How can I change the order of the fields in my piechart?
↧
How can I change bar chart interval to time duration?
I have the following search:
...| convert dur2sec("Call Duration") as "CDinsec" | stats sum(CDinsec) as "totalCDsec", avg(CDinsec) as "avgCDinsec" by Company
which gives me this result:
![alt text][1]
[1]: /storage/temp/209711-22.png
How can I change the values of totalCDsec and avgCDinsec (currently in seconds) to durations so that I can put them in a double-column bar chart?
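For display, `tostring()` with the `"duration"` option renders seconds as `HH:MM:SS` (a sketch based on the search above; note the result is a string):

```spl
...| convert dur2sec("Call Duration") as CDinsec
| stats sum(CDinsec) as totalCDsec, avg(CDinsec) as avgCDinsec by Company
| eval totalCDdur=tostring(totalCDsec, "duration"), avgCDdur=tostring(round(avgCDinsec), "duration")
```

Because the chart itself needs numeric values, for the double-column bar chart it may work better to keep numbers in a friendlier unit, e.g. `| eval totalCDmin=round(totalCDsec/60,2)`, and use the `*dur` strings for tables.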
↧
↧
How do I write a search with a subsearch?
Hey everyone,
I'm trying to write a search to find firewall allows for previously dropped IPs.
I am very new to Splunk (love it so far) and am trying to write a search with a subsearch. Basically I want to find IP addresses that were denied at our firewall and then check if they were subsequently allowed at some point. So far I have:
tag=network action=allow [search tag=network NOT action=allow | dedup src_ip | table src_ip] | top src_ip by dest_ip
I *think* this should be correct but before I mess with it I wanted to see if there was a better way to do this.
Thank you, and sorry for the noobish question!
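For comparison, a subsearch-free sketch (tag and field names taken from the search above; the exact `action` values are an assumption). Since subsearches are capped at roughly 10,000 results, a single `stats` pass can scale better and also lets you check the ordering (denied first, allowed later):

```spl
tag=network
| eval allow_time=if(action="allow", _time, null()), deny_time=if(action!="allow", _time, null())
| stats min(deny_time) as first_deny, max(allow_time) as last_allow by src_ip
| where last_allow > first_deny
```

This keeps only source IPs whose most recent allow came after their earliest deny.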
↧
How to filter results based on user value or lack of user value?
I am looking to filter results based on users. The problem is that some of the events don't have a User value.
Currently, I am using the condition below:
User = $user_token$ OR NOT User = *
Condition 1: To extract all the results ($user_token$ = *) - Working fine
User = * OR NOT User = * (the "OR NOT User = *" part is for getting data that has no User value)
Condition 2: To extract results with specific user ($user_token$ = XYZ)
User = XYZ OR NOT User = *
In condition 2, along with the XYZ user it also extracts the data that doesn't have a User value. I am not sure how to modify the condition so that both conditions work.
**My Search Query:**
| tstats summariesonly=true max(All_TPS_Logs.duration) AS All_TPS_Logs.duration values(All_TPS_Logs.user) AS user FROM datamodel=MLC_TPS_DEBUG4 WHERE (nodename=All_TPS_Logs host=LCH_UPGR36-T32_LRBCrash-2017-08-08_09_44_32-archive (All_TPS_Logs.user=MUREXBO OR NOT All_TPS_Logs.user=*)) All_TPS_Logs.name =*** GROUPBY _time, All_TPS_Logs.fullyQualifiedMethod span=1s
Note: the user values come from a dropdown menu.
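One pattern that avoids the problem (a sketch, assuming the dropdown is a static Simple XML input; field and choice values follow the search above): put the entire WHERE fragment in the choice value, so the null-user clause is only present for the all-users case:

```xml
<input type="dropdown" token="user_token">
  <label>User</label>
  <choice value="(All_TPS_Logs.user=* OR NOT All_TPS_Logs.user=*)">All users</choice>
  <choice value="All_TPS_Logs.user=MUREXBO">MUREXBO</choice>
</input>
```

The tstats search then uses `$user_token$` directly in place of the whole `(All_TPS_Logs.user=... OR NOT All_TPS_Logs.user=*)` clause, so a specific user never pulls in the no-user events.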
↧
Alerts aren't sending emails: error - "501, 'Syntactically invalid HELO argument(s)') while sending mail to"
I have created an alert in Splunk which when triggered sends an email to a specified mail id. But sadly, the mail is not getting sent. I checked the python.log file to find this:
**501, 'Syntactically invalid HELO argument(s)') while sending mail to**
Has anyone encountered this before? What could be the reason for this issue? Any fixes?
Thanks
↧
↧
What port should I use to connect to a private server (Azure)?
I want to connect a server that is in Azure (private network) to the Splunk indexer server. Which port should be opened in order to establish the connection?
↧
Created a scheduled search containing dboutput, but the search is not running on schedule
I am trying to use Splunk DB Connect to copy data from Splunk to Database. The following are the steps that I followed:
1. Create an identity
2. Set up a connection to the database
3. Create a DB Connect output from Data Lab > Outputs
I have created the output while following all the steps, and I have also tried to schedule this output so that it automates the scheduling process of transferring data from Splunk to the database. I have also set up a cron scheduling job.
After the output is saved, it does not run the scheduler, even when I have set the properties to run that output every 10 minutes.
What can be done to get the scheduler to run automatically in Splunk and transfer Splunk data to the SQL Server database?
↧
Saving scheduled searches: what's the difference if it's saved as a report or as an alert?
I have some scheduled queries for which the only purpose is to maintain a lookup table (or maybe summary index after I figure out how to do those).
Splunk only allows me to save these scheduled searches as either an alert or a report. Is there any advantage to choosing one over the other if I don't need reporting or alerting on the search?
↧
Is there a way to trim URL string from a table?
So, I want to create a table where it shows the time, source IP, and URL.
sourcetype=* src_ip=* url=* | table _time, src_ip, url
The search runs fine; however, the url field comes back as a long string.
Example= https://www.google.com/xxx_xxx?atyp=csi&ei=tWelWaipKMOJmQGb_Lr4Cg&s=newtab&action=update&ima=1&ime=0&mem=ujhs.10%2Ctjhs.10%2Cjhsl.2190&rt=aft.7%2Cxhr.191%2Cwsrt.326%2Ccst.0%2Cdnst.0%2Crqst.11%2Crspt.1%2Crqstt.146%2Crnt.130%2Ccstt.130%2Cdit.219&zx=1504023844621
Is there a way to trim the string from the URL to only show up to google.com/xxx_xxx?
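A `rex` extraction can pull out just the scheme, host, and first path segment (a sketch; `short_url` is a made-up field name, and the regex assumes the part you want ends at the first `?`):

```spl
sourcetype=* src_ip=* url=*
| rex field=url "(?<short_url>https?://[^/]+/[^?]+)"
| table _time, src_ip, short_url
```

Against the example above, `short_url` would come out as `https://www.google.com/xxx_xxx`.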
↧
↧
Is there a built-in feature to show log records on a HTML page?
I would like to know whether Splunk has built-in web service features to show a few log records on an HTML page.
If yes, how is it possible?
I am pushing Log files to Splunk Enterprise.
I plan to write Python scripts and deploy them in Splunk to fetch log text based on some conditions, and to show the resulting log text on an HTML page.
Multiple Users will be accessing the HTML page, not only Admin.
Is it required to develop another web service and integrate it with Splunk to show log values on an HTML page? If that is the only way, then the job is going to be difficult.
Thanks!!!
↧
Getting response from REST API URL but Splunk log says "503 error"
Hi,
I installed the REST API Modular Input app, and when hitting a URL it returns this output: {"status":"DOWN"}. I would expect to see this in Splunk so I can create a dashboard, but it's not in there. Looking at the logs, it says the below.
08-29-2017 12:12:02.948 -0500 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" HTTP Request error: 503 Server Error: Service Unavailable
Why can I pull the URL up in the browser and see the DOWN status, while the log says it received a server error? This is a local install of Splunk on my computer.
↧
Tried to add a search peer: Error while sending public key to search peer: No route to host
(attempting 1 Indexer, +1 SH setup)
I tried following the instructions from Splunk:
1. Log into Splunk Web on the search head and click Settings at the top of the page.
2. Click Distributed search in the Distributed Environment area.
3. Click Search peers.
4. On the Search peers page, select New.
5. Specify the search peer, along with any authentication settings.
   Note: You must precede the search peer's host name or IP address with the URI scheme, either "http" or "https".
6. Click Save.
7. Repeat for each of the search head's search peers.
It produces: Encountered the following error while trying to save: Error while sending public key to search peer: No route to host
I've also tried the command line method and got the same result:
splunk add search-server https://IndexerIPADDRESS:8089 -auth admin:password -remoteUsername admin -remotePassword passremote
I am able to ping the machines from each other, as well as ping Google, from each of the machines.
I even checked the host-level firewall and enabled/permitted the port in the firewall using:
sudo firewall-cmd --zone=public --add-port=8089/tcp --permanent
and listed the config:
firewall-cmd --list-all
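One thing worth double-checking (an assumption on my part, since `--list-all` only displays the configuration): a rule added with `--permanent` does not take effect in the running firewall until firewalld is reloaded:

```shell
sudo firewall-cmd --zone=public --add-port=8089/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-ports
```

After the reload, `8089/tcp` should appear in the listed ports.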
What is the problem?
↧
Query to find underutilized indexes
I am looking for indexes that are utilizing only 10%-20% of the storage allocated to them. Is there a query to find that out? I know we can look in the DMC, but I specifically need only the indexes that used 10%-20% of their storage over the last 30 days.
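A starting point (a sketch; this compares current size against the configured maximum rather than true 30-day usage, which would need `dbinspect` or the introspection logs):

```spl
| rest /services/data/indexes
| eval pct_used=round(currentDBSizeMB / maxTotalDataSizeMB * 100, 1)
| where pct_used>=10 AND pct_used<=20
| table title, currentDBSizeMB, maxTotalDataSizeMB, pct_used
```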
↧
↧
return function on field with spaces
Hello - searched, but no answer found.
...| return 10 "Name of Field"
Gives:
Name="" of="" Field=""
I know that I can rename this field, but the goal is to get the actual correct name, with spaces, as it is used in another sourcetype with the same format.
Any ideas please?
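One workaround sometimes suggested (an untested sketch): rename the field to a space-free name first, then use `return`'s `alias=field` form to emit the original name:

```spl
...| rename "Name of Field" as NameOfField
| return 10 "Name of Field"=NameOfField
```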
↧
Like function overview?
I am new to Splunk. Can someone please explain what the query below is doing, what the 1 means at the end of the sourcetype and like() lines, and what 1=1 means?
Thanks in advance
| eval UseInSummary=case(
sourcetype="HTTP-RESULTS" AND Status="SUCCESS" AND Protocol="HTTP",1,
like(Type,"packet_%") AND ResultCalculated=1,1,
like(Type,"seeder_%") AND ResultCalculated=1 AND Result<200,1,
Type="latent" AND ResultCalculated=1 AND Result<1000,1,
1=1,0
)
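To unpack it: `case()` takes condition/value pairs, so the `,1` after each condition is the value assigned to `UseInSummary` when that condition matches, and `1=1,0` is a catch-all (always true) that returns 0 when nothing else matched. `like(Type,"packet_%")` is a SQL-style match where `%` is a wildcard. A minimal illustration:

```spl
| makeresults
| eval Type="packet_loss"
| eval UseInSummary=case(like(Type, "packet_%"), 1, 1=1, 0)
```

Here `UseInSummary` comes out as 1, because the first condition matches; with any `Type` not starting with `packet_`, the `1=1` branch would return 0.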
↧
Why does splunk give an error with my external scripted lookup when I import python module in the scripted lookup?
I have an external scripted lookup that works when I run `| lookup privuserlookup username AS USERNAME`. It is very similar to the dnslookup that comes with Splunk, described here: http://docs.splunk.com/Documentation/SplunkCloud/6.6.1/Knowledge/Configureexternallookups#External_lookup_example
So my scripted lookup is working, but now I am adding more functionality, and the moment I add `import ldap` to the top of the script, I can no longer run `| lookup privuserlookup username AS USERNAME`. I get a `returned error code 1` error in Splunk Web, which gives no info about the actual exception that happened in the script.
I need to use this python-ldap module in my scripted lookup, but Splunk is not letting me. I have other scripts in the same directory that use `import ldap`, and they work just fine. Something about adding the ldap module to a script that is used as a scripted lookup makes Splunk not work with it anymore.
Any ideas on why Splunk gives an error as soon as I add `import ldap`?
↧
IMAP Search Real Time?
Does the IMAP app search run in real time, or does it only work if you execute the script manually? Can I control how frequently the backend process checks for new mail?
↧
↧
Windows Infrastructure App Drilldown to Wrong Account
While using the Windows Infrastructure app, I discovered an odd behavior: at the Failed Logins screen, if you click on an account to drill down, it takes you to the account audit page. When you get there, it doesn't bring up the associated account but instead brings up the first account. Is this normal behavior, or something that needs to be fixed?
↧
How to Use Static Dropdowns in Dashboard
Hello,
So I have data with regions, and as of now I have a region called A that needs to be mapped as regions AA and AB. Regions AA and AB have different countries in them. I want to filter my report on region AA or region AB. I am trying to create a static dropdown that includes the specific countries in AA or AB. I also have regions C, D, and E that can remain the same in the data and use a dynamic filter. Is there any way to do this? I tried creating multiple static options, but then AA appeared multiple times in my dropdown filter. Thank you for the help!
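One way to sketch this in Simple XML (the country and field names here are placeholders, not taken from your data): give each static choice a value that is a complete search fragment, so AA appears only once in the dropdown:

```xml
<input type="dropdown" token="region_filter">
  <label>Region</label>
  <choice value="(Country=US OR Country=CA)">AA</choice>
  <choice value="(Country=UK OR Country=FR)">AB</choice>
  <choice value="Region=C">C</choice>
</input>
```

The panel search then includes `$region_filter$` as-is, e.g. `index=sales $region_filter$ | ...`.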
↧
Is it possible to create a view showing all events coming from an IP address and/or username?
I'd like to create a dashboard where I could easily search for events coming from a specific IP address or username.
For example:
It would show where that specific IP address logged on, the URLs it accessed, whether it was locked out, and so on, assuming all the security appliances (IPS, web gateway, endpoint protection, Active Directory, and the like) have been added to Splunk.
Just like how a SIEM should work.
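As a starting sketch (field names such as `src_ip` and `user` are assumptions that hold only if the data is normalized, e.g. via the Common Information Model, across those sources):

```spl
index=* (src_ip="$ip_token$" OR user="$user_token$")
| table _time, index, sourcetype, src_ip, user, action, url
```

With `$ip_token$` and `$user_token$` wired to text inputs on the dashboard, this would list every matching event across the onboarded security sources.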
↧