Hi Team,
Need your help/suggestions on the best way to handle the scenario below.
I am using the field extractor screen in the search head GUI to extract fields from the proxy log patterns below. For example, I am using the pattern below and extracting these 5 fields (field1 to field5):
`[06/Sep/2017:17:02:29] failure (13878): for host 141.139.7.270 trying to GET /ba_staticfiles/LeakingNews/LrnJsonRsp.txt, service-http reports: COBRE7740: unable to contact wa01.abc.xyz.com:9102 (Directory lookup error)`
field1 - 141.139.7.270
field2 - /ba_staticfiles/LeakingNews/LrnJsonRsp
field3 - COBRE7740
field4 - wa01.abc.xyz.com:9102
field5 - Directory lookup error
The challenge I am running into is that not all events in the proxy logs are the same length. Some log events have all the field information while others do not, and if one of the fields is missing from an event, none of the fields get extracted. Because of this, when I run stats on any of the extracted fields I miss events (total events = 100, but my stats show only 70).
Below are my possible proxy log statements. [The 3rd and 4th events are the same as the 1st and 2nd, except that they have an additional space after the "(".]
**Patterns**
1st event ->[06/Sep/2017:16:38:23] security (13878): for host 141.139.7.270 trying to GET /favicon.ico, deny-service reports: denying service of /favicon.icon
2nd event ->[06/Sep/2017:17:02:29] failure (13878): for host 141.139.7.270 trying to GET /ba_staticfiles/LeakingNews/LrnJsonRsp.txt, service-http reports: COBRE7740: unable to contact wa01.abc.xyz.com:9102 (Directory lookup error)
3rd event ->[06/Sep/2017:16:38:23] security ( 13878): for host 141.139.7.270 trying to GET /favicon.ico, deny-service reports: denying service of /favicon.icon
4th event ->[06/Sep/2017:17:02:29] failure ( 13878): for host 141.139.7.270 trying to GET /ba_staticfiles/LeakingNews/LrnJsonRsp.txt, service-http reports: COBRE7740: unable to contact wa01.abc.xyz.com:9102 (Directory lookup error)
Any suggestions on the best way to handle them? Should I use regex to handle it during indexing by modifying props/transforms.conf, or handle it through field extraction in the GUI? If using the GUI, how can this be achieved with field extraction or regex from the search head? Or is there another, better way?
Also, is there a dedicated built-in sourcetype for proxy error logs? (I am currently using the access_combined sourcetype.)
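One way to handle this (a sketch, not the only option): do the extraction with a single search-time `rex` whose trailing capture groups are optional, so events that lack the error details still yield the fields they do contain. The pattern below is an illustrative assumption built from your sample events, and the same regex could also be put into an `EXTRACT-` stanza in props.conf on the search head:

```
... | rex "for host (?<field1>\d{1,3}(?:\.\d{1,3}){3}) trying to GET (?<field2>[^,]+), \S+ reports: (?:(?<field3>[A-Z]+\d+): )?(?:unable to contact (?<field4>\S+) )?(?:\((?<field5>[^)]+)\))?"
```

Against the "security" events only field1 and field2 match; against the "failure" events all five match. Allowing an optional space after "(" in the process-id part of the event is not needed here since the pattern starts matching at "for host".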
Thank you in advance.
Splunk Version - 6.5.1
↧
Field Extractions from Proxy Logs
↧
Are there any Mulesoft dashboard apps or add-ons?
Is there a jar/tar/zip or something developed that can be installed on the Splunk servers to reuse with Mulesoft and enable some dashboards?
↧
How do I make my search summarize network throughput data?
Apologies, I'm not a Splunk administrator; I'm a capacity tool person who needs to extract some metrics from Splunk.
Mostly I'm doing fine, but this one has me stumped. I'm trying to collect network throughput data from F5 firewalls.
This is my search query:
| tstats first(all.clientside_bytes_in)
    from datamodel="bigip-tmstats-virtual_server_stat"
    by host all.name _time span=5m
| rename first(all.*) as * all.* as *
| `abs_to_rate("host name", "clientside_bytes_in")`
| sort host, name, _time
| fields host, name, _time, clientside_bytes_in, clientside_bytes_in_rate
I get network throughput data at a 5-minute rate at the host,name level, and the data looks correct.
But I need to roll that up to just the 'host' level, as host,name is too granular. I can't get it to work; when I take 'name' out of the query the results don't make any sense. How do I return data at the host level, summing all of the name-level data into one result per 5 minutes?
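One approach (a sketch against your query): keep `name` in the by-clause while the rate is computed, since a rate has to be derived per series, and only then sum the per-name rates up to the host level with a trailing `stats`:

```
| tstats first(all.clientside_bytes_in)
    from datamodel="bigip-tmstats-virtual_server_stat"
    by host all.name _time span=5m
| rename first(all.*) as * all.* as *
| `abs_to_rate("host name", "clientside_bytes_in")`
| stats sum(clientside_bytes_in_rate) as clientside_bytes_in_rate by host, _time
| sort host, _time
```

Taking `name` out before the `abs_to_rate` macro runs would mix counters from different virtual servers into one series, which is likely why those results made no sense.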
↧
Visual chart for how much free disk space is available?
When I run this query in Splunk search: host=tableau sourcetype="Perfmon:Free Disk Space"
I get the results below:
9/7/17
3:57:43.000 PM
09/07/2017 11:57:43.647 -0400
collection="Free Disk Space"
object=LogicalDisk
counter="% Free Space"
instance=_Total
Value=80.07256674174579
Can someone help me with a query that would give a visual chart of how much free disk space is available?
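A sketch of one way to do it, assuming `Value` is the "% Free Space" counter shown in the sample event: narrow the search to that counter and `timechart` it, then render the panel as a line or area chart (or a single-value/radial gauge for the latest reading).

```
host=tableau sourcetype="Perfmon:Free Disk Space" counter="% Free Space" instance=_Total
| timechart avg(Value) as pct_free_space by host
```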
↧
How to build a search using 4 different ad hoc searches
base-search earliest=-1h@m|
Desk
cli_attr="MOBILE_IND=N"
Mobile
cli_attr="MOBILE_IND=Y"
Emarketing
cli_attr="MOBILE_IND=Y" OR cli_attr="MOBILE_IND=N" PartnerCode=*
Non-Emarketing
cli_attr="MOBILE_IND=Y" OR cli_attr="MOBILE_IND=N" NOT PartnerCode=*
Using these, I am trying to build a base search:
|eval deskdev=if(cli_attr=="MOBILE_IND=N","MOBILE_IND=N",null())
|eval mobiledev=if(cli_attr!="MOBILE_IND=N","MOBILE_IND=N",null())
|eval eMarketing=if((cli_attr=="MOBILE_IND=Y") OR (cli_attr!="MOBILE_IND=Y") AND (PartnerCode=="*") , "MOBILE_IND=Y",null())
|eval NoneMarketing=if((cli_attr=="MOBILE_IND=Y") OR (cli_attr!="MOBILE_IND=Y") AND (PartnerCode!="*"),"MOBILE_IND=Y",null())
The search is not able to match the values from the original ad hoc searches. How can this be made to work?
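A sketch of one way to express the four segments in a single base search. Note that inside `eval`, `PartnerCode=="*"` compares against a literal asterisk rather than acting as a wildcard, which is a likely reason the counts do not match; an "any value" test needs `isnotnull()` instead. Field names are taken from your searches:

```
base-search earliest=-1h@m
| eval device=if(cli_attr=="MOBILE_IND=N", "Desk", "Mobile")
| eval marketing=if(isnotnull(PartnerCode), "Emarketing", "Non-Emarketing")
| stats count by device, marketing
```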
↧
Splunk Statistics table with totals column
**Below is my CSV Data :**
----------
Company, Model,Year
Honda, Civic, 2016
Toyota, Camry, 2017
Honda, Accord, 2016
Honda, Civic SE,2017
Honda, Fit, 2017
Honda, Fit EV, 2017
Toyota, Corolla, 2016
Toyota, Yaris, 2017
----------
The fields auto-extracted by Splunk are Company, Model, and Year.
When I run `chart count over Company by Year | addtotals` and display it as a statistics table in a Splunk Simple XML dashboard visualization, I get this result:
![alt text][1]
My requirement is to have the Total field as the second column.
**Expected:**
![alt text][2]
If that is not possible with the above approach, kindly suggest another way to achieve the expected stats table, with the Total field as the second column.
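One possible way (a sketch): reorder the columns explicitly after `addtotals`. `table` places the fields in the order listed, and the trailing wildcard picks up the remaining Year columns:

```
... | chart count over Company by Year | addtotals | table Company Total *
```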
Thanks in Advance
[1]: /storage/temp/209773-splunk-stats-totals.jpg
[2]: /storage/temp/209774-expected-stats.jpg
↧
Tabular report showing count based on time range
Hi,
I need to create a report in the ![alt text][1] format.
Could anyone help me achieve this?
I can use a time interval of 2 hours as well, if the exact format is not possible.
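Without seeing your exact layout, a sketch of a 2-hour-bucket count table (index name and the row labeling are assumptions):

```
index=your_index
| bin _time span=2h
| eval timerange=strftime(_time, "%H:%M") . " - " . strftime(_time + 7200, "%H:%M")
| stats count by timerange
```

`bin` snaps each event to its 2-hour bucket, and the `eval` turns the bucket start into a "00:00 - 02:00"-style label for the table rows.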
[1]: /storage/temp/211643-tabular-report.png
↧
How can I sum total memory used by a process?
I need to calculate the total memory used by a process. There are multiple processes with the same root and different suffixes, but the data sampling is not consistent: sometimes it comes in at 2 samples per minute, sometimes 4. Here is a sample:
09/07/2017 14:25:56.050 -0400 ,instance=server#1 ,Value=31827849216
09/07/2017 14:25:56.050 -0400 ,instance=server ,Value=30434951168
09/07/2017 14:25:11.065 -0400 ,instance=server#1 ,Value=31827849216
09/07/2017 14:25:11.065 -0400 ,instance=server ,Value=30434951168
09/07/2017 14:24:26.064 -0400 ,instance=server#1 ,Value=31827849216
09/07/2017 14:24:26.064 -0400 ,instance=server ,Value=30434922496
How do I sum it for server* by minute? I can't average across instances, as that would show half the memory used, and I can't simply sum all samples, as that would show double for minutes with 4 samples.
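One approach (a sketch against your sample fields): average the samples per instance within each minute first, which normalizes the 2-vs-4 sample counts, then sum the per-instance averages:

```
... instance=server*
| bin _time span=1m
| stats avg(Value) as mem_bytes by _time, instance
| stats sum(mem_bytes) as total_mem_bytes by _time
```

For the 14:25 minute in your sample this would yield 31827849216 + 30434951168 per instance pair, regardless of whether 2 or 4 samples landed in that minute.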
↧
How can I install a forwarder on a Sun Solaris 5.10?
I did the following -
bash-3.2$ uname -a
SunOS 5.10 Generic_Virtual sun4v sparc sun4v
bash-3.2$ tar -xvzf splunkforwarder-6.4.1-debde650d26e-SunOS-sparc.tar.Z
tar: z: unknown function modifier
Usage: tar {c|r|t|u|x}[BDeEFhilmnopPqTvw@[0-7]][bfk][X...] [blocksize] [tarfile] [size] [exclude-file...] {file | -I include-file | -C directory file}...
What can be done?
The following speaks about the `tar -z` option in Solaris - [Extract and Uncompress tar.Z file, One Command (Solaris)][1]
[1]: http://www.dreamincode.net/forums/topic/274562-extract-and-uncompress-tarz-file-one-command-solaris/
It suggests trying `uncompress -c foo.tar.Z | tar xvf -` (Solaris tar does not support the GNU `-z` flag, so the archive has to be decompressed first and piped in).
And `uncompress -c splunkforwarder-6.4.1-debde650d26e-SunOS-sparc.tar.Z` does uncompress it to stdout...
↧
How can I connect MS Excel to Splunk via Splunk ODBC after upgrading Splunk version?
After upgrading Splunk to 6.6.x I can no longer connect MS Excel (on a Windows 7 server) to Splunk via the Splunk ODBC driver 2.1.1.
When trying to make a connection following the steps below, the following error is displayed:
**"(40) Error with HTTP API, error code: SSL connect error":**
To use the Splunk ODBC Driver to get Splunk data into Microsoft Excel:
1. Open a new worksheet in Excel.
2. Click the Data tab.
3. In the Get External Data group, click From Other Sources, then click From Microsoft Query.
4. In the Choose Data Source window, click Splunk ODBC.
**Environment:**
(Windows 7 + Splunk ODBC 2.1.1) connecting to Splunk indexer 6.6.3
![alt text][1]
[1]: /storage/temp/211646-screen-shot-2017-08-29-at-44618-pm.png
↧
How to configure Splunk to collect nmon data and show analysed reports on AIX?
I have installed the splunk-6.2.13-278211-AIX-powerpc version and now want to configure Splunk so that it collects nmon data and lets me see past and current utilization of the servers.
↧
Cisco Networks App - Access Points Not Showing
I have switches, WLC and APs sending syslog to rsyslog.
Splunk is monitoring the folders and ingesting data properly (sourcetype for all 3: cisco:ios).
The IOS devices and the WLC are showing up in the overview, but not the APs.
Also, none of the detail dashboards have any info. Any idea what I might be missing?
↧
Automating bundle pushes from shcluster and index cluster
Simple question: has anyone been able to successfully solve this? I can think of a bunch of easy ways to accomplish it (e.g. Ansible), but what are others' experiences? What advice do you have? At this point I have resigned myself to doing it manually; it's not that hard yet, but the process is not scalable. I have no doubt that Splunk is working to solve this issue, so I don't want our team to develop some complicated process around it.
↧
How to choose one field value out of two?
Hi All,
If a field has two values but I want to pick only one, could you please suggest which command I can use to do that?
For example:
Field A = B,C
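If A is a multivalue field, `mvindex` picks one value by position (a sketch; index 0 is the first value, so this keeps B and drops C):

```
... | eval first_value=mvindex(A, 0)
```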
↧
Regular Expression
Hi All,
I am new to regular expressions. Could you please share a link that would help me understand regular expressions for Splunk?
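For orientation while you read up: most Splunk-side regex work happens in the `rex` command (search-time field extraction via named capture groups) and the `regex` command (event filtering). A minimal sketch with a hypothetical event format:

```
... | rex field=_raw "user=(?<user>\w+)"
| regex user="^[a-z]+$"
```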
↧
Using multiple geospatial lookups
Thanks in advance for any help.
I am currently using a geospatial lookup file to show devices inside or outside of a geofence.
Here is a small snippet of the search
| lookup geo_Example1 latitude longitude
| fillnull featureId value="outsideGeoFence"
| where LIKE(featureId, "outsideGeoFence")
| fillnull value="unknown" user
I can use any single geospatial file that I have loaded, such as Example1, Example2, or Example3, referencing the latitude and longitude, and it works as expected.
I would ideally like to use more than one geospatial lookup in the search instead of creating multiple reports or dashboards for each specific location.
I have tried simply adding another lookup to the search string in different ways, but it stops working once I add more than one geospatial reference.
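A likely cause (an assumption): successive `lookup` calls all write to the same `featureId` field, so the second lookup clobbers the first. A sketch that renames each lookup's output and then coalesces, using the lookup and field names from your post:

```
| lookup geo_Example1 latitude, longitude OUTPUT featureId as fid1
| lookup geo_Example2 latitude, longitude OUTPUT featureId as fid2
| eval featureId=coalesce(fid1, fid2, "outsideGeoFence")
| fillnull value="unknown" user
```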
↧
Send JSON file/txt file using HEC
Hello
I am trying to send a JSON file/text file through HEC to Splunk and getting stuck when adding `-d @data.json` to the curl command. I have created a new token, enabled it, and sent sample data like "Hello world", and that works. But I am not sure how to send a txt/json file using curl.
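A sketch of the file-based send (host, port, and token below are placeholders). The key detail is that HEC's `/services/collector/event` endpoint expects the payload wrapped in an `{"event": ...}` envelope, so the file must contain that wrapper, not just the raw JSON:

```shell
# Build a HEC-shaped payload file: the raw event goes under the "event" key
cat > data.json <<'EOF'
{"event": {"message": "Hello world from a file"}, "sourcetype": "_json"}
EOF

# Against a real HEC endpoint you would then run (placeholder host and token):
#   curl -k "https://splunk.example.com:8088/services/collector/event" \
#        -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
#        -d @data.json

# Sanity-check that the file carries the envelope HEC expects
grep -c '"event"' data.json    # prints 1
```

With `-d @data.json`, curl sends the file contents as the request body, which is why the envelope has to live inside the file itself.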
Thanks
SP
↧
Replace join with stats to merge events based on common field
My datasets are much larger, but these represent the crux of my hurdle:
sourcetype=sale_by
fields: sid, user
sourcetype=sale_made
fields: sid, amount
Where: `sale_made.sid = sale_by.sid`
I have this search that works:
sourcetype=sale_by | join sid [ search sourcetype=sale_made ] | stats sum(amount) by user
Can this be done more efficiently with stats?
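Yes; a common pattern (a sketch against your field names) is to search both sourcetypes at once and let `stats` stitch together the events that share `sid`, avoiding `join` and its result limits:

```
(sourcetype=sale_by) OR (sourcetype=sale_made)
| stats values(user) as user, sum(amount) as amount by sid
| stats sum(amount) as total_amount by user
```

The first `stats` merges each sale's `user` (from sale_by) with its `amount` (from sale_made) on `sid`; the second rolls the merged rows up per user.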
↧
Slack Notification Alert: errors behind proxy
Hi,
We have already whitelisted the Slack and webhook URLs in the proxy, but we are still getting errors in splunkd and the Slack alerts are not working.
Maybe something needs to change in the Python script slack.py:
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - Traceback (most recent call last):
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\etc\apps\slack_alerts\bin\slack.py", line 7, in
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - proxies={"http": "http://ipaddress"})
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\requests\api.py", line 55, in get
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - return request('get', url, **kwargs)
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\requests\api.py", line 44, in request
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - return session.request(method=method, url=url, **kwargs)
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\requests\sessions.py", line 456, in request
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - resp = self.send(prep, **send_kwargs)
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\requests\sessions.py", line 559, in send
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - r = adapter.send(request, **kwargs)
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\requests\adapters.py", line 378, in send
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - raise ProxyError(e)
09-08-2017 10:45:13.225 +1000 ERROR sendmodalert - action=slack STDERR - requests.exceptions.ProxyError: ('Cannot connect to proxy.', error(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
09-08-2017 10:45:13.727 +1000 INFO sendmodalert - action=slack - Alert action script completed in duration=22037 ms with exit code=1
09-08-2017 10:45:13.727 +1000 WARN sendmodalert - action=slack - Alert action script returned error code=1
09-08-2017 10:45:13.727 +1000 ERROR sendmodalert - Error in 'sendalert' command: Alert script returned error code 1.
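A possible culprit worth checking (an assumption, not a confirmed diagnosis): the traceback shows slack.py passing `proxies={"http": "http://ipaddress"}` only, and `requests` selects a proxy by URL scheme, so an `https://` webhook URL would not use that entry at all. A minimal sketch of the selection behavior, with a hypothetical proxy address:

```python
# requests picks the proxy entry whose key matches the URL scheme.
# slack.py sets only "http", but Slack webhooks are https://, so no
# proxy would be applied to them. Proxy address below is hypothetical.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",  # the entry an https webhook needs
}

def proxy_for(url, proxies):
    """Mimic the scheme-based proxy selection requests performs."""
    scheme = url.split("://", 1)[0]
    return proxies.get(scheme)

# An https webhook only finds a proxy when the "https" key is present
print(proxy_for("https://hooks.slack.com/services/T000/B000/XXX", proxies))
```

If that is the issue, adding an `"https"` key to the `proxies` dict in slack.py (pointing at the same proxy host) would be the corresponding fix to try.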
↧