Hello,
I have created a scheduled search which populates a summary index from a custom index.
My main custom index has around 100 fields, but those fields are not appearing in the summary index. Only the host, source, and sourcetype fields are present in the summary index.
When I tried adding table field1, field2, etc. to the scheduled search query, those fields did appear in the summary index. But when I use table * in the search query, I don't get any fields in the summary index.
Currently, I have to explicitly specify "table field names" in the query, which is tedious considering the number of fields.
Is there any way to fix this issue?
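For reference, the scheduled search is of this general shape (a minimal sketch only; the index names my_custom_index and my_summary and the field names are placeholders, and collect stands in for however summary indexing is enabled on the saved search):
index=my_custom_index sourcetype=my_sourcetype
| table field1, field2, field3
| collect index=my_summary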
↧
Can you help me with a summary index field issue?
↧
Why am I getting the following "send failure" message in my internal logs: "pushing PK to search peer"?
Here is the complete warning message:
Send failure while pushing PK to search peer = https://*.*.*.*:8089 , Read Timeout
I'm getting the above warning messages in the internal Splunk logs every minute from each of our 3 search heads.
The search peer in question is in our secondary site (call it site B), while the search heads are in site A; however, there are two other search peers in the same site (B) for which we don't get any warning messages.
I've done a ping and netcat from each of the search heads in site A to each of the three search peers in site B, and the results are the same for each one: connection established and similar ping times.
It doesn't seem to be a connection issue, so I'm wondering what else could be causing it?
↧
How do you create a dashboard with dependencies between assets, like a tree or topology?
How to create a dashboard with dependencies between assets, like a tree or topology, something like the one used in the "IT Service Intelligence" app?
Thank you very much in advance.
↧
How do you combine multiple cron jobs into a single cron job for a single database (db) input?
Hi All,
I have a db input created in the Splunk DB Connect app. I want to execute a query based on a cron schedule. The problem is that I want the first job to run every 45 minutes from 0:00 to 12:00, and the other job to run every hour from 13:00 to 23:00.
Is there any way to express these two schedules in a single cron job? Any help will be much appreciated.
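For reference, in standard cron syntax the two schedules would look roughly like this (a sketch only; the expressions are illustrative):
# Hourly on the hour from 13:00 through 23:00:
0 13-23 * * *
# A true "every 45 minutes from 0:00 to 12:00" cannot be written as a single
# standard cron expression, because 45-minute intervals drift across hour
# boundaries (0:00, 0:45, 1:30, 2:15, ...). Running at minutes 0 and 45 of
# each hour is the closest single-expression approximation:
0,45 0-12 * * *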
Thanks
↧
After upgrading Splunk, why is the "View Capabilities" page missing?
We upgraded our Splunk instance and found that clicking "view capabilities" for a user on the Access Controls >> Users page takes you to a great picture of Buttercup on a 404 page. Does anyone know what to restore to fix this?
↧
Why is my JSON-format log getting truncated?
I have a log which has a JSON format line in the middle. Splunk is extracting the log but is truncating the JSON part to 26 lines. How do I get the full log without Splunk truncating the JSON lines?
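For context, event length and line-merging limits are normally governed by props.conf settings like these (a sketch only; the sourcetype name my_json_log is a placeholder and the values are illustrative, not recommendations):
[my_json_log]
# maximum number of bytes per event before truncation (default 10000)
TRUNCATE = 100000
# maximum number of lines merged into a single event (default 256)
MAX_EVENTS = 1000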
↧
Why is the search below throwing the following error: "Predict Error: Too few data points: -5."?
The search below throws this error whenever more than two hosts are searched for: **command="predict", Too few data points: -5. Need at least 1 (too many holdbacks (5) maybe?)**
If searching for just one host, the data is perfect. I have 700+ hosts that I need to run this against. Any ideas?
Here is the search that returns the error:
| inputlookup test_diskusage.csv
| search host=splunk-indexer-1
| eval _time=strptime(date, "%Y-%m-%d")
| timechart span=1d values("/opt/splunk") as "/opt/splunk", values(cold0) AS cold0, values(cold1) AS cold1, values(hot0) AS hot0, values(hot1) AS hot1, values(hot2) AS hot2
| predict "/opt/splunk" "cold0" "cold1" "hot0" "hot1" "hot2" algorithm=LLP5 holdback=5 future_timespan=25 upper95=upper95 lower95=lower95
↧
Will you help me fix my license usage by host query?
Hello All,
I am using Splunk version 7.1.0 for the Distributed Management Console (DMC), and I want to calculate the license usage by host. I am using the query below:
index=_internal source=*license_usage.log* type="Usage" |search h=*10d*
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b
How can I get the value of "b" in GB? I am confused by the value of "b". Is it in MB or some other unit?
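Assuming b here is the raw byte count that license_usage.log normally reports for type=Usage events, converting to GB is a straight division by 1024^3; for example, the last line of the search above could become:
| stats sum(b) as b by h
| eval GB=round(b/1024/1024/1024, 3)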
↧
Distributed Monitoring console unable to find indexers
I followed the instructions for setting up the monitoring console in distributed mode. I have added the cluster master, search heads, and deployment servers as search peers.
The monitoring console can see the cluster master and identify the number of buckets, amount of data, CPU utilization, etc. But none of the index cluster members show up.
It is a multi-site cluster with two sites. Does the monitoring console need to be in site0? Any other ideas on what might be causing this issue?
Thx.
↧
How do I pull multiple events from a large XML file?
Our vulnerability scanner is only able to provide XML output, and I would like to get this into Splunk. The problem I am running into is that each system can have multiple events called audits. I would like to know how to set up the BREAK_ONLY_BEFORE and MUST_BREAK_AFTER parameters to match the audits to each system.
The data format is:
`
10.12.60.24 CVE-1 CVE-2 10.12.60.25 CVE-4 CVE-8
`
I would then like to generate a table that looks like this:
System Audit1 Audit2
10.12.60.24 CVE-1 CVE-2
10.12.60.24 CVE-4 CVE-8
Regards,
Scott
↧
My alert isn't being triggered for some reason.
Hi everyone,
I'm trying to set up an alert for daily license usage that notifies me when it reaches a certain threshold.
| rest splunk_server=shaklee-splunk-enterprise /services/licenser/pools
| rename title AS Pool
| search [rest splunk_server=shaklee-splunk-enterprise /services/licenser/groups | search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| eval quota=if(isnull(effective_quota),quota,effective_quota)
| eval percentage=round(used_bytes/quota*100,2)
| where percentage >= 8
| fields percentage
This is my query for when the pool reaches 8%. The search works and pulls the number out for me, but the problem is that the alert will not trigger when I set the cron schedule to scan every second and set it to trigger when the number of results is greater than 1.
Any ideas?
Thanks,
Ryan
↧
Can you help me figure out why alert isn't being triggered?
Hi everyone,
I'm trying to set up an alert for daily license usage which would notify me when it reaches a certain threshold.
| rest splunk_server=shaklee-splunk-enterprise /services/licenser/pools
| rename title AS Pool
| search [rest splunk_server=shaklee-splunk-enterprise /services/licenser/groups | search is_active=1 | eval stack_id=stack_ids | fields stack_id]
| eval quota=if(isnull(effective_quota),quota,effective_quota)
| eval percentage=round(used_bytes/quota*100,2)
| where percentage >= 8
| fields percentage
This is my query for when the pool reaches 8%. The search works and pulls the number out for me. But the problem is that the alert will not trigger when I set the cron schedule to scan every second and set it to trigger when the number of results is greater than 0.
Any ideas?
Thanks,
Ryan
↧
The HTTP Event Collector (HEC) accepts but doesn't index a _json event with accented characters. Is this a bug?
Hi,
I've tracked down an issue we've been having where some events being sent through our HEC haven't been indexed, even though it responds with HTTP 200 and Success (0).
I've found two workarounds for this that resolve the issue, but I'm pretty sure the HEC should either have indexed the data anyway, or responded with some sort of error instead of Success.
My tests are done in PowerShell 5.1 with en-US culture.
The following snippet is an example of this, where it'll respond with 200 OK and not index the event:
Invoke-RestMethod -Method Post -Uri "https://splunk-hec.example.com:8088/services/collector/event" -Headers @{Authorization = "Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} -ContentType "application/json" -Body @"
{
"sourcetype": "_json",
"host": "TESTHOST",
"source": "test:encoding",
"event": {
"testtype": "charset not defined, sourcetype _json",
"characterdata": "Têst vâlué thät has ąccents"
}
}
"@
If you specify charset=iso-8859-1 or charset=windows-1252 instead, it likewise accepts and silently drops the event.
Workaround 1: Change the sourcetype to JSON (without the underscore)
# Change the sourcetype to json (no underscore)
Invoke-RestMethod -Method Post -Uri "https://splunk-hec.example.com:8088/services/collector/event" -Headers @{Authorization = "Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} -ContentType "application/json" -Body @"
{
"sourcetype": "json",
"host": "TESTHOST",
"source": "test:encoding",
"event": {
"testtype": "charset not defined, sourcetype json",
"characterdata": "Têst vâlué thät has ąccents"
}
}
"@
Workaround 2: Specify the charset with IRM
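# Keep sourcetype _json, but declare charset=utf-8 in the Content-Type header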
Invoke-RestMethod -Method Post -Uri "https://splunk-hec.example.com:8088/services/collector/event" -Headers @{Authorization = "Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"} -ContentType "application/json; charset=utf-8" -Body @"
{
"sourcetype": "_json",
"host": "TESTHOST",
"source": "test:encoding",
"event": {
"testtype": "charset utf-8, sourcetype _json",
"characterdata": "Têst vâlué thät has ąccents"
}
}
"@
This is on Splunk Enterprise 7.1.2 using a heavy forwarder. Thanks!
↧
Can you help me create a search query that would make a dynamic comparison of yesterday's data to last week's?
I wrote the following query for today's comparison with last week:
index=abc App_Name=xyz earliest=-0d@d latest=now
| multikv
| eval ReportKey="Today"
| append [search index=abc App_Name=xyz earliest=-7d@d latest=-6d@d | multikv | eval ReportKey="LastWeek" | eval _time=_time+60*60*24*7]
| eval _time=if(isnotnull(new_time), new_time, _time)
| timechart span=5m sum(TOTAL_TRANSACTIONS) as Transactions by ReportKey
I want the query to do the following: allow someone to compare yesterday's data with the same day last week, or the "day before yesterday" with its corresponding "last week" day, and so on.
So, could you please help me write the query for that?
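As a rough sketch of the same pattern shifted back one day (yesterday versus the same day last week; the time offsets are illustrative and would need to be parameterized, for example with a dashboard token, to cover the day before yesterday and so on):
index=abc App_Name=xyz earliest=-1d@d latest=@d
| multikv
| eval ReportKey="Yesterday"
| append [search index=abc App_Name=xyz earliest=-8d@d latest=-7d@d | multikv | eval ReportKey="LastWeek" | eval _time=_time+60*60*24*7]
| timechart span=5m sum(TOTAL_TRANSACTIONS) as Transactions by ReportKey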
↧
Change single panel color based on text result
I'm working on creating a dashboard with a single value panel. My search determines whether we're processing data in x, y, or x+y. I can get the dashboard to correctly display the location, but I'd like to have a color associated with each location. I've tried assigning a numeric value to each location, which lets me get the color to appear correctly, but then the associated number is shown in my dashboard instead of the actual location. I've pored over these forums and Google without much luck, at least with how I'm currently restricted by my company.
| eval Location=case(like(host, "%ksc%"),"KSC",like(host, "%stl%"),"STL")
| stats count by Location
| eval Location=case(count > 10,Location)
| dedup Location
| eventstats count as "LocCount"
| eval Location=if(LocCount > 1,"Co-Processing", Location)
| eval Location=case(Location="KSC", 0, Location="STL", 5, Location="Co-Processing", 10)
| stats values(Location)
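For reference, the numeric-mapping approach described above is usually paired with the single value panel's range options in Simple XML; a minimal sketch (the query placeholder and thresholds are illustrative, and this still displays the number rather than the location text):
<single>
  <search>
    <query>(the search above)</query>
  </search>
  <option name="useColors">1</option>
  <!-- boundaries between the color bands -->
  <option name="rangeValues">[4,9]</option>
  <!-- one more color than there are boundaries -->
  <option name="rangeColors">["0x65a637","0xf8be34","0xd93f3c"]</option>
</single>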
↧
Metadata TRANSFORMS not being applied after a series of transforms
I have a customer with a nightmare syslog server environment -- different sourcetypes in different log files on different syslog servers, shared unqualified hostnames used in different data centers, some logs have FQDNs, some don't, etc.
My understanding is that the order of precedence for TRANSFORMS is: source:: stanzas overwrite both sourcetype and host:: stanzas, and host:: stanzas overwrite sourcetype stanzas.
So... I have TRANSFORMS stanzas applied to each source:: stanza to put the appropriate data into the correct sourcetype. I then apply index and host metadata TRANSFORMS to each of the sourcetype stanzas.
But for some reason, the host and index TRANSFORMS don't seem to get applied once an event has had a TRANSFORM applied in a source:: stanza. Is that expected behavior, or is there a limitation that metadata rewrites must occur only in the stanza with the highest precedence for a particular event?
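To illustrate the layering being described, here is a sketch of the kind of configuration involved (all stanza names, paths, and regexes are hypothetical placeholders, not the customer's actual config):
# props.conf
[source::/var/log/remote/firewall/*.log]
TRANSFORMS-set_st = set_sourcetype_firewall

[syslog_firewall]
TRANSFORMS-meta = set_host_from_event, route_to_firewall_index

# transforms.conf
[set_sourcetype_firewall]
REGEX = .
FORMAT = sourcetype::syslog_firewall
DEST_KEY = MetaData:Sourcetype

[set_host_from_event]
# illustrative pattern: capture the hostname from the syslog header
REGEX = ^\S+\s+\d+\s+\S+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

[route_to_firewall_index]
REGEX = .
FORMAT = firewall
DEST_KEY = _MetaData:Index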
↧
Configure trace and audit log collection - no \local folder under splunk_app_db_connect
I cannot find a \local folder under %SPLUNK_HOME%\etc\apps\splunk_app_db_connect\ after installing the DB Connect add-on. I have restarted the SQL services and the Splunk service. We are running Windows Server 2012 R2. There is a \locale folder, but no inputs.conf is contained in it.
I am following this documentation to configure SQL audit log collection into Splunk:
http://docs.splunk.com/Documentation/AddOns/released/MSSQLServer/ConfigureDBConnectv1inputs
↧
Create a Table With Each Row Being a Log and Every Column Being a Recognized "Interesting Field"
I was wondering if there is an easy way to create a table that contains every single recognized interesting field, instead of using the usual `| table field1, field2...` method. To be clear, I want each row in the table to be a separate event/log, not a summary of counts. In other words, I would like a substitute for `| table` that captures every single interesting field that is recognized. Thanks!
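For reference, the wildcard form tables every field present in the search results (which is broader than just the fields the UI flags as "interesting"); a minimal sketch, with placeholder index and sourcetype names:
index=my_index sourcetype=my_sourcetype
| table *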
↧
Transaction Duration Issue
Hey all,
I wanted to see if someone can help me out with this. Basically, I'm trying to get the duration between two scenarios. I'm trying to get how long it takes for each user to get from scenario_1 to scenario_2, by service. This is what I have so far, and it seems to work when I run it for an individual service:
index=index_name (scenariotype="scenario_1" OR scenariotype="scenario_2") user_ID="*" service_name="*service_1*"
| transaction user_ID
| stats mean(duration) AS "Mean Duration(In Seconds)" by service_name
Stats table shows:
service_name | Mean Duration(In Seconds)
service_1 7.25
It returns a low number and when I manually checked the mean time by user_ID, it is correct.
However, when I want to get the mean duration for all services, I get a much higher number, especially for service_1 above. Keep in mind, I have 9 services I'm trying to get numbers from. So basically, when I run the following and don't specify a service_name, or include more than one service name, I get much higher numbers (for exactly the same period of time) as the mean duration for each service (note that service_1 is the same service as in the result above, but returns a much higher number):
index=index_name (scenariotype="scenario_1" OR scenariotype="scenario_2") user_ID="*"
| transaction user_ID
| stats mean(duration) AS "Mean Duration(In Seconds)" by service_name
Stats table shows:
service_name | Mean Duration(In Seconds)
service_1 189.57
service_2 5.75
service_3 5.75
service_4 1.35
service_5 6.25
service_6 10.40
service_7 4.53
service_8 8.78
service_9 6.72
I've also experimented with looking further back in time, and the mean duration goes up the further back I go if I don't specify one service, or if I include more than one service or all services.
Hopefully I made sense and someone can help me with what I'm doing wrong.
thx!!
↧