I have a Splunk query whose output looks like this:
INFO :url="some_url": APIFilter.onComplete@87 : type=finalResponse;response_code=200;processing_time=21663;CommandType1_is_success=yes;CommandType1_exe_time=2758;CommandType2_is_success=yes;CommandType2_exe_time=8312;num_dependencies=2;is_all_dep_successful=true;dependencies_exe_time=11070;App_exe_time=10593;
I want to group by CommandType and produce output like this:
Command Type | Success | Average | Median
CommandType1 | yes     | 2758    | 2758
CommandType2 | yes     | 8312    | 8312
My question is: how can I group multiple fields within the same result?
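A minimal sketch of one possible approach (untested), assuming every command follows the `CommandTypeN_is_success=...` / `CommandTypeN_exe_time=...` pattern shown above; `<base search>` stands for the search that returns these events:

    <base search>
    | rex max_match=0 "(?<ct_name>CommandType\d+)_is_success=(?<ct_success>[^;]+)"
    | rex max_match=0 "CommandType\d+_exe_time=(?<ct_time>\d+)"
    | eval pair=mvzip(mvzip(ct_name, ct_success, "|"), ct_time, "|")
    | mvexpand pair
    | eval CommandType=mvindex(split(pair, "|"), 0), Success=mvindex(split(pair, "|"), 1), ExeTime=tonumber(mvindex(split(pair, "|"), 2))
    | stats values(Success) as Success, avg(ExeTime) as Average, median(ExeTime) as Median by CommandType

The two `rex` calls pull the command names, success flags, and execution times into multivalue fields, `mvzip`/`mvexpand` turn them into one row per command, and the final `stats` groups by CommandType.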
↧
GroupBy multiple fields within single result
↧
List of searches run in the X period and by whom?
Hi,
Is there a way to find out which searches have been run over a period of time and by whom, preferably listing the search string that was run as well?
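A minimal sketch of one place to look (untested), assuming you have access to the internal `_audit` index:

    index=_audit action=search info=granted search=*
    | table _time, user, search
    | sort - _time

This lists the time, the user, and the search string for searches recorded in the audit trail over whatever time range you pick.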
Thanks
↧
↧
No results DBX
I'd like to work with databases, so I set up a connection in DB Connect (DBX). With the query `| dbxquery query="SELECT * FROM \"XXX\".\"XX\".\"X\""` I can see my results, but I'd like to use them together with a search like `index=main sourcetype=Type source=Source`.
What should I do?
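A minimal sketch of one way to combine the two (untested), assuming there is a common key field in both the database rows and the indexed events; `id` below is a made-up placeholder for that key:

    index=main sourcetype=Type source=Source
    | join type=inner id
        [ | dbxquery query="SELECT * FROM \"XXX\".\"XX\".\"X\"" ]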
↧
Can we filter events coming from specific splunk_server?
The search head we use searches events from both the test and prod indexers. In prod, we only need to capture the events from the prod indexer. Can we filter events coming from a specific splunk_server, or how can we point a search head to get data only from the prod indexer?
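A minimal sketch of such a filter (untested); `prod*` is a made-up pattern for the prod indexer's server name, and `<your_index>` is a placeholder:

    index=<your_index> splunk_server=prod*

The `splunk_server` field holds the name of the indexer that returned each event, so filtering on it restricts results to that indexer.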
↧
Chart Drill Down changes Date time range
I have a dashboard with a chart in it.
The query of the chart is:
**base_search | eval _time = time| bucket _time span=24h | chart count over _time by app_risk| fields _time,Critical,High,Medium,Low**
The drill down settings are: On Click=Link to Search & Auto.
When clicking on "High" category on specific date, I would like to see the events related to this combination of risk and date.
For some reason, I have no results after drilling down.
**I see that the date/time range is changed from the whole day (24 hours) to 9/16/18 3:00:00.000 AM to 9/16/18 3:00:00.001 AM.**
Can someone tell me why the results are not related to the clicked column's date?
↧
↧
Field aliases don't work for CIM data
I am trying to map incoming events to CIM fields using aliases. I followed the documentation here, https://docs.splunk.com/Documentation/Splunk/7.1.3/Knowledge/Addaliasestofields, but it didn't work when I viewed the dataset's values. The docs don't even mention the "named" column, so it makes me wonder if I'm doing it right.
I tried to create an alias in the CIM app from my custom src_ip field (created in the Web.Web dataset) to src. When I view the dataset values from the "Datasets" tab, or search `| from datamodel:"Web.Web"`, the src field value always remains "unknown".
I created the same alias in the Search app and it didn't work either. What am I doing wrong?
thanks
![alt text][1]
[1]: /storage/temp/255083-capture.png
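For reference, a minimal props.conf sketch of the alias described above; `my_sourcetype` is a placeholder for the actual sourcetype, and the alias object needs permissions that make it visible wherever the Web data model is searched:

    # props.conf on the search head (sketch)
    [my_sourcetype]
    FIELDALIAS-src = src_ip AS src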
↧
Conditional Streamstats
Hi splunkers,
Suppose I have the following table:
Date      ItemsPurchased  UnitPrice
1/1/1111  20              0.5
2/1/1111  10              1
3/1/1111  -7              0
4/1/1111  8               0.2
Which is basically a representation of my stock, where the -7 means that 7 items have been sold.
So now I want to calculate the Median Unit Price, which I do by using the following query:
| streamstats sum(ItemsPurchased) as GTotal |streamstats sum(eval(ItemsPurchased*UnitPrice)) as UTotal |eval MedianUnitPrice= UTotal / GTotal |table date ItemsPurchased UnitPrice GTotal UTotal MedianUnitPrice
This works fine, calculating the MedianUnitPrice as required. HOWEVER, it also includes my sale (-7) in the calculation, which skews the results from then on:
Date      ItemsPurchased  UnitPrice  MedianUnitPrice
1/1/1111  20              0.5        0.5
2/1/1111  10              1          0.6667
3/1/1111  -7              0          (-0.475)
4/1/1111  8               0.2        (wrong result, since it's adding -0.475 to the calculation)
What I'd like to do is to keep calculating the MedianUnitPrice EXCEPT when ItemsPurchased is a negative value.
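A minimal sketch of what I mean (untested): wrap the aggregated expressions in `eval(if(...))` so negative rows contribute nothing to the totals used for the price, while GTotal still tracks the real stock level; `PurchasedQty` is just a made-up field name:

    | streamstats sum(ItemsPurchased) as GTotal
    | streamstats sum(eval(if(ItemsPurchased > 0, ItemsPurchased, 0))) as PurchasedQty
                  sum(eval(if(ItemsPurchased > 0, ItemsPurchased * UnitPrice, 0))) as UTotal
    | eval MedianUnitPrice = UTotal / PurchasedQty
    | table Date ItemsPurchased UnitPrice GTotal UTotal MedianUnitPrice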
Thanks!
↧
Tab-delimited file not getting split on the indexer
Hi
I am new to Splunk.
I am trying to split a tab-delimited file on the indexer.
Below are the entries from the different config files.
In spite of these, the data that gets ingested into Splunk is not split into the field names.
What am I doing wrong?
![alt text][1]
Entries :
/opt/apps/splunkforwarder/etc/apps/DtuApp/local>vi props.conf
[SplunkJobLog_csv]
SHOULD_LINEMERGE = False
pulldown_type = 1
REPORT-myname = getJobLogData
[SplunkDbLog_csv]
SHOULD_LINEMERGE = False
pulldown_type = 1
REPORT-myname = getDbLogData
/opt/apps/splunkforwarder/etc/apps/DtuApp/local>cat transforms.conf
[getJobLogData]
DELIMS = "\t"
FIELDS = "ORDERID","JOBNAME","TYPE","ODATE","STATE","STATUS","FILENAME","APPLICATION","SUBAPPLICATION","STARTED","ENDED","TIME_OF_LOG_GEN"
[getDbLogData]
DELIMS = "\t"
FIELDS = "coord_member","application_handle","application_name","session_auth_id","client_applname","elapsed_time_sec","activity_state","activity_type","total_cpu_time","total_cpu_time_ml","rows_read","rows_returned","query_cost_estimate","direct_reads","direct_writes","stmt_text","ts"
/opt/apps/splunkforwarder/etc/apps/DtuApp/local>cat inputs.conf
[default]
host=xxxxxxx
[monitor:///data/logs/splunk_logs/Job_status_logs/*.log]
_TCP_ROUTING = DtuSplunk
disabled=false
index = 140868736_dtu_idx3
sourcetype=SplunkJobLog_csv
crcSalt =
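One possible issue (worth verifying): `REPORT-`/`DELIMS` extractions happen at search time, so placing them on the Universal Forwarder has no effect; they would need to go into props.conf and transforms.conf on the search head instead. Alternatively, a sketch that keeps the work on the forwarder using structured (index-time) extraction, assuming the files have no header row:

    # props.conf on the Universal Forwarder (sketch)
    [SplunkJobLog_csv]
    SHOULD_LINEMERGE = false
    INDEXED_EXTRACTIONS = tsv
    FIELD_NAMES = ORDERID,JOBNAME,TYPE,ODATE,STATE,STATUS,FILENAME,APPLICATION,SUBAPPLICATION,STARTED,ENDED,TIME_OF_LOG_GEN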
↧
Change sourcetype via field extraction and transforms
Hi there,
One of our UFs is configured to send logs with sourcetype testData.
I'd like to push some of those logs matching a certain pattern (all logs matching the "[A][B]" pattern) to sourcetype testData_B.
Sample of log
[A][B] blabla
[A][C] blabla
I tried to use transforms and field extractions but couldn't make it work. I don't have SSH access, so I did it via the web interface.
**Transformation**
![alt text][2]
**Field extraction**
![alt text][1]
[1]: /storage/temp/255084-fieldextraction.png
[2]: /storage/temp/255085-transfo.png
What's wrong with my setup?
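For reference, a minimal .conf sketch of the kind of rewrite described above. Sourcetype rewriting is an index-time transform, so as far as I understand it has to run on the indexer or a heavy forwarder that parses the data, not through search-time field extraction; the stanza name `force_sourcetype_testData_B` is just a made-up label:

    # transforms.conf (sketch)
    [force_sourcetype_testData_B]
    REGEX = ^\[A\]\[B\]
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::testData_B

    # props.conf (sketch)
    [testData]
    TRANSFORMS-set_b_sourcetype = force_sourcetype_testData_B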
Thanks!
↧
↧
Join two stats searches and run stats/group on the result
I'd like to join two searches and run some stats on the combined result to see how many users change or update browsers, and how often.
In my IIS logs I have one search that gives me a user agent string (`cs_User_Agent`) and a `SessionId`, and another that has the `SessionId` and the `UserId`.
search 1 retrieves a table of `cs_User_Agent` and `SessionId`:
` host=HOST1 index=iis sc_status=200 getLicense | sistats dc(SessionId) count by cs_User_Agent`
SessionId cs_User_Agent count
0014D886099319E6 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/64.0.3282.140+Safari/537.36+Edge/17.17134 12
0014D953D99FD234 Mozilla/5.0+(Windows+NT+6.1;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/69.0.3497.100+Safari/537.36 5
0014D953D99FD234 Mozilla/5.0+(Windows+NT+6.3;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/69.0.3497.100+Safari/537.36 5
00154D82F471A7AA Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/69.0.3497.100+Safari/537.36 2
0015B3CAC0EC3940 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/68.0.3440.106+Safari/537.36 30
0015C53D737B2991 Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/68.0.3440.106+Safari/537.36+OPR/55.0.2994.61 16
search 2 retrieves a table of `SessionId` and `UserId`:
`host=HOST1 index=iis sc_status=200 authorize | stats count by SessionId, UserId`
SessionId UserId count
00061D40BFAB4208 4BJKKEAWGYXEAJH0F5F9DHSC0024 2
0008F091D8E8BE1A 1I7WKS9XIMZ92DCZF6CVKA4E001Q 2
000E5B538CC0A7B2 KQCZIHHPG9IOC9UD7MJICESS005B 1
000FC56381D4EA4B 3PH0F08V00SY9GFPGVCQBIQN006N 3
00106C907ED66683 JALM1LNJ8SV72BNHE1C5H0I50020 3
0013143CBC157C26 ETW9HL7L71PQJB7P492LLFEM006E 4
001E25B42A554F79 702EBB0O8MKG0VI94VIQ01ZE0031 1
I need to join these together to see how many different `cs_User_Agent` strings the users had during the period and count those.
Basically to see how many of the users change/update browsers very often.
So, my result should look like this:
Number of UA Strings in the period | Number of Users in grouping
>20 | 1
>10 | 3
>5  | 4
5   | 10
4   | 3
3   | 3
2   | 1
1   | 14
Searches 1 and 2 would return a LOT of data, and from reading the subsearch documentation it sounds like that's not ideal, as the whole subsearch would have to stay in RAM. Is there a better way than a subsearch to do this?
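A minimal join-free sketch (untested), assuming each `SessionId` maps to a single `UserId`; sessions that never hit the authorize endpoint simply drop out of the grouping:

    host=HOST1 index=iis sc_status=200 (getLicense OR authorize)
    | stats values(cs_User_Agent) as ua, values(UserId) as UserId by SessionId
    | stats dc(ua) as ua_count by UserId
    | stats count as user_count by ua_count
    | sort - ua_count
    | rename ua_count as "Number of UA Strings in the period", user_count as "Number of Users in grouping"

The first `stats` ties the user agents and the UserId together via the shared SessionId, the second counts distinct UA strings per user, and the third counts how many users fall into each distinct-UA bucket.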
↧
Can anyone help me with a shell script that checks which user the Splunk process runs as? If it is not running as the splunk user, we should get an email alert (Linux platform)
Can anyone help me with a shell script that checks which user the Splunk process runs as? If it is not running as the splunk user, we should get an email alert. Our Splunk runs on a Linux platform.
↧
Splunk DB Connect: ERROR org.easybatch.core.job.BatchJob - Unable to write records org.apache.http.conn.ConnectTimeoutException: Connect to XXX.X.X.X:8088 [/XXX.X.X.X] failed: Read timed out
Hi All,
We observed ConnectTimeoutException failures for some of our DB Connect inputs.
Can someone advise what may cause this error and how to resolve it?
[QuartzScheduler_Worker-32] ERROR org.easybatch.core.job.BatchJob - Unable to write records
org.apache.http.conn.ConnectTimeoutException: Connect to X.X.X.X:8088 [/X.X.X.X] failed: Read timed out
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEventBatch(HttpEventCollector.java:109)
at com.splunk.dbx.server.dbinput.recordwriter.HttpEventCollector.uploadEvents(HttpEventCollector.java:89)
at com.splunk.dbx.server.dbinput.task.processors.HecEventWriter.writeRecords(HecEventWriter.java:48)
at org.easybatch.core.job.BatchJob.writeBatch(BatchJob.java:203)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:79)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Thank you in advance.
Kind Regards,
Ariel
↧
Increasing indexer disk space
Hello,
I'm running my Splunk cluster in the cloud, and I'm running out of disk space. I'm planning to increase the available disk space, but I'm wondering if there might be any side effects of doing this that I should prepare for.
Since this would be done in a production environment, I need to avoid losing access to the indexed data at all costs.
I'll also perform a disk snapshot just in case.
All the indexes are set to:
`maxDataSize = auto_high_volume`
The steps involved would be:
**1.** Stop the Splunk Forwarder.
**2.** Stop the Splunk Indexer.
**3.** Perform Splunk Indexer disk Snapshot.
**4.** Increase the disk space on Splunk Indexer.
**5.** Wait for the change to be in effect.
**6.** Restart the Splunk Indexer.
**7.** Restart the consumers on the Splunk Forwarder.
Are there any other steps that I should perform?
Thanks in advance!
↧
↧
Joining 2 tables but showing what's not in table 2?
This successfully shows a combined table of users that are in both table1 and table2; however, I want to show all users in table1 that are NOT in table2.
How can I do that?
| inputlookup table1.csv
| join type=inner userColumn [ inputlookup table2.csv ]
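A minimal sketch of one way to do this (untested), using a left join and keeping only the rows with no match in table2; `in_table2` is just a made-up marker field:

    | inputlookup table1.csv
    | join type=left userColumn
        [ | inputlookup table2.csv | eval in_table2=1 | fields userColumn in_table2 ]
    | where isnull(in_table2)
    | fields - in_table2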
↧
Output stops all output routing when a 3rd-party server goes down
Hi,
I am seeing a weird issue: if the syslog server fails, it stops all data being indexed via the default TCP output; Splunk then fills its buckets and falls over. Am I missing a setting that would let it continue if it can't connect to an output?
cat outputs.conf
[syslog]
defaultGroup = xxxxx_indexers
[syslog:xxxxx_indexers]
server = xxx.xxx.xxx.xxx:9997
type = tcp
timestampformat = %Y-%m-%dT%T.%S
cat transforms.conf
[mehRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = xxx_cluster_indexers
[Routing_firewalls]
SOURCE_KEY = MetaData:Sourcetype
REGEX = (fgt_traffic|fgt_utm)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = xxxx_indexers
cat props.conf
[host::xxxxxxx1c]
TRANSFORMS-routing = mehRouting, Routing_firewalls
[host::xxxxxc]
TRANSFORMS-routing = mehRouting, Routing_firewalls
↧
Splunk architecture: between AWS accounts & VPCs: multi-site or single-site deployment
We are deploying hosting to various organisations in our "company". Each organisation may consist of numerous apps (100+) and 5,000+ employees. Our intention is to provide each of these organisations with an AWS account which will be consumed into our AWS deployment infrastructure. Each VPC/AWS account will hold various apps and types of data.
My question is: should I treat each of these accounts as a separate Splunk site (multisite deployment), with searches local to that VPC, or instead route log traffic to a separate "master" VPC deployment as a larger clustered deployment?
The quantity of apps/users is a sliding scale as our project grows. Today it's only 1 app; next year it could be 100 per organisation.
I had initially intended to route logs securely to a single enterprise cluster made up of, say, 1 search head and 2-3 indexers, and grow it out as demand grows. But on reading about multisite there seem to be quite a lot of benefits; however, I suspect the costs saved on VPC traffic would be lost to the oodles of nodes/indexers/search heads per AWS account. Or would it be better to view multisite as a longer-term strategy for the Splunk deployment as the project grows, and migrate the deployment at a later date?
Thoughts welcome.
↧
Chart Drill Down changes Date time range
I have a dashboard with a chart in it.
The query of the chart is:
**base_search | eval _time = time| bucket _time span=24h | chart count over _time by app_risk| fields _time,Critical,High,Medium,Low**
The drill down settings are: On Click=Link to Search & Auto.
When clicking on "High" category on specific date, I would like to see the events related to this combination of risk and date.
For some reason, I have no results after drilling down.
**For example: when I click on events from Sep 15, I expect the time range to be Sep 15 00:00:00,000 to Sep 15 23:59:59,999, but (!) the time range is Sep 15 00:00:00,000 to Sep 15 00:00:00,001.**
Can someone tell me why the results are not related to the clicked column's date?
↧
↧
Upgrade Splunk Universal Forwarder from 6.2 to 7.2
Hello,
Is it possible to upgrade the Universal Forwarder in one step from 6.2 to 7.1, or is an intermediate step (upgrade to 6.5) required?
Splunk Enterprise: 7.0.1
Yes or No (with workaround) should be enough information.
Greetings
↧
tstats count field pairs
Hello everybody
I want to count how often a specific src-dest pair appears,
something like:
src, dest, count
10.10.10.10 11.11.11.11 3
10.10.10.10 11.11.11.12 1
10.10.10.10 11.11.11.13 12
I use the following search:
| tstats summariesonly=true prestats=true count as boo from datamodel=Network_Traffic.All_Traffic where All_Traffic.x_src_zone="smth" All_Traffic.x_dest_zone="smth" by All_Traffic.x_src_zone All_Traffic.x_dest_zone| table All_Traffic.x_src_zone All_Traffic.x_dest_zone boo
Unfortunately, the whole boo column is always empty.
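For comparison, a sketch without `prestats=true` (untested). As far as I understand, `prestats=true` produces intermediate results meant to feed a later stats/chart command, so the count never becomes a normal field for `table`; dropping it, and grouping by the CIM `All_Traffic.src`/`All_Traffic.dest` fields, should give one row per src-dest pair:

    | tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic
        where All_Traffic.x_src_zone="smth" All_Traffic.x_dest_zone="smth"
        by All_Traffic.src All_Traffic.dest
    | rename All_Traffic.src as src, All_Traffic.dest as dest
    | sort - count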
↧
How do I get the next-to-last value (or field) of a record?
I have data that looks like this;
When I perform my search, the data returned by Splunk looks like this on the dashboard:
date="date" username="username filename="filename" 1000 bytes
You can see the problem: I can grab all of the "keyed" fields, but I can't get the value "1000 bytes" because it's not keyed. If I had awk, I could grab the second-to-last value of the string and be done.
Is there a way to grab the value "1000" above and place it into a field that I can use in my tables?
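A minimal sketch of one way (untested), assuming the number always sits right before the literal word "bytes" at the end of the event; `byte_count` is just a made-up field name:

    <your search>
    | rex "(?<byte_count>\d+)\s+bytes\s*$"
    | table date, username, filename, byte_count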
Thanks
↧