Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

How to send a report to different users based on a Splunk query

I have a Splunk query that generates the following table:

```
User_Name  Number  recipient
user_a     10      user_a@mail.com
user_b     20      user_b@mail.com
user_c     30      user_c@mail.com
```

How can I arrange for each recipient to receive an email containing only their own records? For example, user_a@mail.com should receive only:

```
User_Name  Number
user_a     10
```

Thanks.
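One possible approach (a sketch, not a tested solution): drive one `sendemail` per row with the `map` command, using `$token$` substitution from the outer results. `index=your_index` and the subject text are placeholders for your actual search; the "sendresults" add-on on Splunkbase is another commonly suggested option for exactly this use case.

```
<your base search>
| stats sum(Number) as Number by User_Name, recipient
| map maxsearches=10 search="search index=your_index User_Name=$User_Name$
    | stats sum(Number) as Number by User_Name
    | sendemail to=\"$recipient$\" inline=true subject=\"Your report\""
```

Note that `map` runs one search per row, so with many recipients you may hit `maxsearches` and scheduler-load concerns.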

Problem upgrading Splunk Enterprise 6.6.8 -> 7.1.1 (Windows Server)

Hi, to test the upgrade process we created a clone of our current Splunk server (6.6.8 running on Windows Server 2016) via vSphere (VMware 5.5). During the upgrade I get the following errors:

"The program can't start because SSLEAY32.dll is missing from your computer. Try reinstalling the program to fix this problem."

"The program can't start because LIBEAY32.dll is missing from your computer. Try reinstalling the program to fix this problem."

The upgrade is then rolled back. I did the upgrade from 6.6.3 to 6.6.8 just last week without any problems. What can I do to perform the upgrade successfully?

Regards, Bernd

Dashboard taking hours to complete or fails

The system I am working with receives about 500k log events per hour. I have a dashboard with multiple queries over these logs, and I am trying to produce a report for the last year. I expect it to take some time, but the dashboard not completing after some 5 hours, or just failing, seems plain ridiculous. I believe I am doing something really inefficient in the dashboard. I am new to this and am looking for advice on how to make my queries and charts deliver results faster without failing.

I have 5 of the following searches on the dashboard, each presented as a single-value chart:

```
service="this changes for the 5 different searches" | chart avg(REQUEST_DURATION) as "Service (ms)"
```

I have 5 of the following searches on the dashboard, each presented as a single-value chart:

```
market="this changes for the 5 different searches" | timechart span=5m avg(REQUEST_DURATION) as average | fillnull | sort average
```

I have 5 of the following searches on the dashboard, each presented as a line chart:

```
locale="this changes for the 5 different searches" | fields REQUEST_DURATION | eventstats avg(REQUEST_DURATION) as average | timechart span=5m avg(REQUEST_DURATION) as actual, first(average) as average | eval max = 500 | filldown
```
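One standard pattern that may help here (a sketch, with `index=myindex` and field names assumed from the searches above): run one base search in the Simple XML and let each panel post-process it, instead of 15 independent year-long scans. For a one-year window, report acceleration or summary indexing is usually also needed.

```
<form>
  <!-- one shared base search; panels post-process it instead of re-scanning the index -->
  <search id="base">
    <query>index=myindex | bin _time span=5m
      | stats avg(REQUEST_DURATION) as avg_dur by _time, service, market, locale</query>
    <earliest>-1y</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <search base="base">
          <query>search service="svc1" | stats avg(avg_dur) as "Service (ms)"</query>
        </search>
      </single>
    </panel>
  </row>
</form>
```

Post-processed searches operate on the base search's result set, so the expensive index scan happens once per dashboard load rather than once per panel.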

Splunk Join two searches

Hi, I am trying to join two of my searches in Splunk on a common field, uniqueId, but I am getting an error in the Splunk Job Inspector: "Subsearch produced more than 50000 results, truncating to max out 50000." I can't change limits.conf, and I have to use a query to get the desired result. I'd really appreciate any help on this. My query is something like this:

```
index="A" sourcetype="test*" requested_content="/index" | join uniqueId [ search [search B] ] | timechart span=1h count
```
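The usual way around the subsearch limit is to not use `join` at all: search both indexes in one pass and correlate with `eventstats` (or `stats`) on the shared field. A sketch, with `index=B` standing in for your second search's constraints:

```
(index="A" sourcetype="test*" requested_content="/index") OR (index="B")
| eventstats dc(index) as idx_count by uniqueId
| search idx_count=2 index="A"
| timechart span=1h count
```

`eventstats` counts how many distinct indexes each uniqueId appears in; keeping only `idx_count=2` retains the index A events whose uniqueId also exists in index B, which is what the `join` was doing, without any subsearch row limit.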

OTHER option in timechart doesn't work

Hi Splunkers, I have a search like this:

```
index="myindex" host="myhost" | timechart span=1month latest(all_cnt) as "Number of all" by code useother=true limit=100
```

As a result, all 100 values are listed in the timechart and the legend. If I change the query to:

```
index="myindex" host="myhost" | timechart span=1month latest(all_cnt) as "Number of all" by code useother=true limit=10
```

only the top 10 values are represented in the timechart, and no "OTHER" values appear in the timechart, although there is an OTHER label in the legend. Why? And how can I solve this issue? Thank you! Dragana

How would I filter out fields via Props.conf?

I am forwarding Windows events from Graylog to a UF, and then from the UF to an indexer. I have a props.conf that creates field aliases from the Graylog fields. Once I have these, I want to stop the Graylog fields from being indexed. Here is props.conf:

```
FIELDALIAS-winlogbeat_as_host = winlogbeat_fields_collector_node_id as host
FIELDALIAS-winlogbeat_as_eventid = winlogbeat_event_id as EventCode
FIELDALIAS-winlogbeat_as_processname = winlogbeat_event_data_ProcessName as Process_Name
FIELDALIAS-winlogbeat_as_logonid = winlogbeat_event_data_TargetLogonId as Logon_ID
FIELDALIAS-winlogbeat_as_user = winlogbeat_event_data_TargetUserName as user
FIELDALIAS-winlogbeat_as_src_user = user as src_user
FIELDALIAS-winlogbeat_as_action = winlogbeat_keywords as action
FIELDALIAS-winlogbeat_as_security_id = winlogbeat_event_data_TargetUserSid as Security_ID
FIELDALIAS-winlogbeat_as_account_domain = winlogbeat_event_data_TargetDomainName as Account_Domain
FIELDALIAS-winlogbeat_as_logontype = winlogbeat_event_data_LogonType as Logon_Type
FIELDALIAS-winlogbeat_as_srcip = winlogbeat_event_data_IpAddress as src_ip
FIELDALIAS-winlogbeat_as_src = winlogbeat_computer_name as src
FIELDALIAS-winlogbeat_as_destip = src_ip as dest_ip
```

How would I eliminate the winlogbeat fields from being indexed? Thanks!
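One clarification worth noting: FIELDALIAS is a search-time operation, so the winlogbeat_* fields are most likely search-time extractions, not indexed fields — there may be nothing to "un-index". If the goal is to strip the winlogbeat key=value text out of _raw before indexing, a SEDCMD on the parsing tier might work. This is a hypothetical sketch — the sourcetype name and the assumption that the fields appear as `key=value` text in the raw event are mine, and a plain UF will not apply SEDCMD (it needs an indexer or heavy forwarder):

```
# props.conf on the indexer/heavy forwarder -- sketch, adjust the regex to your raw format
[your:graylog:sourcetype]
SEDCMD-strip_winlogbeat = s/winlogbeat_\S+=\S+\s?//g
```

Be aware that rewriting _raw this way would also break the FIELDALIAS entries above, since they alias fields extracted from that same text.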

Chart Multiple Fields Over Time On One Graph and Creating a Table Summarizing Totals by User

Using the base search listed below, I am presented with all print jobs, one print job per user. I would like to chart the field "Total Pages" by "Full Name" for all users (all results) over time on one graph. In total there are around 37 users and various "Total Pages" counts. I'm thinking of using a line graph. What would be the best way to do this?

I would then like to create a table that totals all pages printed by user — that is, summing the "Total Pages" field by "Full Name" in a table.

Base search:

```
index="win_custom" sourcetype="print-job-accounting-report"
```

[screenshot: /storage/temp/251060-chart.jpg]

Thank you!
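A sketch of both pieces, assuming "Total Pages" and "Full Name" are already extracted as fields (the `span=1d` bucket size is an arbitrary choice): the first query draws one line per user over time, the second builds the per-user totals table.

```
index="win_custom" sourcetype="print-job-accounting-report"
| timechart span=1d sum("Total Pages") by "Full Name" limit=0

index="win_custom" sourcetype="print-job-accounting-report"
| stats sum("Total Pages") as "Total Pages Printed" by "Full Name"
| sort - "Total Pages Printed"
```

`limit=0` on timechart keeps all 37 users as separate series instead of collapsing the smaller ones into OTHER.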

Run a dashboard search in verbose mode through Simple XML?

I've created a dashboard with some panels, and I am getting different event counts than when I run the reports individually. The event counts from the dashboard are lower than the event counts from a report. I've read some posts mentioning settings in savedsearches.conf to run the dashboard in verbose mode, but I have the Splunk User role and don't have admin access to make those changes. Please suggest whether there is a way to resolve this through Simple or Advanced XML dashboard configuration.

How to get the Top 1 data per Host?

I have a log where the mount usage of every host gets logged, so there can be multiple mounts per host. The data can look like the following:

```
Host   Mount_Name  Usage
host1  /tmp        90
host1  /opt        92
host2  /opt        81
host2  /tmp        90
```

The desired result would be:

```
Host   Mount_Name  Usage
host1  /opt        92
host2  /tmp        90
```

That is, for every host I need the mount with the highest usage.
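A sketch of the standard pattern for "top 1 row per group" when you need to keep the other columns (the base search is a placeholder for yours):

```
<your base search>
| sort 0 - Usage
| dedup Host
| table Host Mount_Name Usage
```

Sorting descending by Usage and then deduplicating on Host keeps exactly the highest-usage row per host. A plain `stats max(Usage) by Host` would lose Mount_Name, which is why sort+dedup is preferred here.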

Can I do an if-else in props.conf?

I am using Graylog to forward my Windows events. All the event field names start with winlogbeat, but some are _event_data_TargetName and some are _event_data_SubjectName; this appears to differ by Windows event type. Can I do something like:

```
if winlogbeat_event_data_TargetDomainName is not null then
    FIELDALIAS-winlogbeat_as_account_domain = winlogbeat_event_data_TargetDomainName as Account_Domain
else
    FIELDALIAS-winlogbeat_as_account_domain = winlogbeat_event_data_SubjectDomainName as Account_Domain
```

Thanks!
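FIELDALIAS itself has no conditionals, but a search-time `EVAL-` in props.conf with `coalesce()` achieves the same effect — it picks the first of its arguments that is non-null. A sketch (the sourcetype stanza name is a placeholder):

```
# props.conf -- EVAL- runs at search time, like FIELDALIAS-
[your:graylog:sourcetype]
EVAL-Account_Domain = coalesce(winlogbeat_event_data_TargetDomainName, winlogbeat_event_data_SubjectDomainName)
```

This sets Account_Domain from the Target field when it exists and falls back to the Subject field otherwise, which is exactly the if-else you describe.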

How do I connect to a Splunk Enterprise instance through a proxy using the Splunk SDK?

I was not successful in running the example found on http://dev.splunk.com/view/java-sdk/SP-CAAAECX. I was told that everything is proxied for my company's Splunk Enterprise instance.
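One thing that may be worth trying (an assumption on my part, not a confirmed SDK feature): the Java SDK's HTTP transport is built on the JVM's standard URL connection classes, which honor the standard JVM proxy system properties. The host and port below are hypothetical placeholders for your company's proxy:

```
java -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 \
     -cp splunk-sdk-java.jar:. YourSplunkExample
```

If your proxy requires authentication, the JVM may also need an `Authenticator` configured in code; check with whoever administers the proxy for the exact requirements.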

mstats - spaces in metric names

Is there a way to use the improved mstats syntax introduced in 7.1 (changes described [here][1]) with metrics that have spaces in their names? I'm getting the error "Term based search is not supported" when I try. I'm trying out the new Splunk Add-on for Microsoft Windows version, which includes the transforms necessary for storing the perfmon data in metrics indexes. It works great, except for the cases where the perfmon counter name has spaces in it. For example, this search works:

```
| mstats avg("Threads") where index=my_metric_index span=1m
```

But this one produces the error mentioned above:

```
| mstats avg("% Processor Time") where index=my_metric_index span=1m
```

I can get the result I need using the deprecated syntax like this, but there's a reason it's deprecated:

```
| mstats avg(_value) where index=my_metric_index metric_name="% Processor Time" span=1m
```

Any good way to resolve this? Currently the only thing that comes to mind is removing or replacing the spaces using SEDCMD, but that doesn't seem very optimal.

[1]: http://docs.splunk.com/Documentation/Splunk/7.1.1/SearchReference/Mstats#Deprecated_syntax

Scheduled report based on a variable number of search results

I have a query template already made. I want to run this query for X (the number changes all the time) distinct descriptions returned by another query. How can I do that?

Query that runs for a single distinct description:

```
index="event_raw_data" description="somedescription" | fillnull value="NO Description" description | timechart count by description useother=f
```

Query that returns the descriptions:

```
index="event_raw_data" | table description | dedup description
```

I want to combine the two so that the first query runs for every distinct result of the second.
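A sketch of one way to combine them: use the second query as a subsearch, which expands into an OR of `description="..."` terms, and let `timechart ... by description` split the result per description in a single search:

```
index="event_raw_data"
    [ search index="event_raw_data" | dedup description | table description ]
| fillnull value="NO Description" description
| timechart count by description useother=f limit=0
```

Two caveats: subsearches are subject to result limits (typically 10000 rows by default), and if you genuinely need one scheduled report per description rather than one split timechart, the `map` command would be the looping alternative.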

Issues with splunk search behavior in version 7.0.4

Hi, I upgraded my system from Splunk 6.4.5 to Splunk 7 and found an issue with search behavior. The query:

```
index=testindex | where asset = "up%20asset"
```

produces results in 6.4.5 but not in 7.0.4. Has anything changed under the hood? Any help appreciated.
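Two things may be worth checking (sketches, not a confirmed diagnosis): `where` takes an eval expression, where `==` is the comparison operator, and it's also worth verifying whether the field still contains the literal URL-encoded text after the upgrade — `urldecode()` can normalize that:

```
index=testindex | where asset == "up%20asset"

index=testindex | eval asset_decoded=urldecode(asset) | where asset_decoded == "up asset"
```

Comparing the raw events in both versions for a known-matching record should show quickly whether the extraction of `asset` itself changed between 6.4.5 and 7.0.4.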

untable for multiple aggregate stats

Hello, I would like to plot an hourly distribution of aggregate stats over time. For instance, I want to see the distribution of the average value per hour of a field's 5-minute sums. An example to make this clear:

```
index="something" | timechart span=5m sum(nrcpt) as "Dest in 5m" by sasl_username limit=1 useother=f | fillnull | untable _time sasl_username "Dest in 5m" | eval date_hour=strftime(_time,"%1H") | chart avg("Dest in 5m") over date_hour by sasl_username
```

**timechart** lets me fill null values with 0, to obtain the desired average over time. But suppose I want more aggregate statistics. The following example doesn't work:

```
index="something" flow=outbound | timechart span=5m sum(nrcpt) as "Dest in 5m" count(sasl_username) as "Msg in 5m" by sasl_username limit=1 useother=f | fillnull | untable _time sasl_username "Dest in 5m" "Msg in 5m" | eval date_hour=strftime(_time,"%1H") | chart avg("Dest in 5m") avg("Msg in 5m") over date_hour by sasl_username
```

Ouch! **untable** supports only **one** series. So, to obtain the result, I have to run a complex search such as:

```
index="something" | timechart span=5m sum(nrcpt) as "Dest in 5m" by sasl_username limit=1 useother=f | fillnull | untable _time sasl_username "Dest in 5m" | append [search index="something" | timechart span=5m count(sasl_username) as "Msg in 5m" by sasl_username limit=1 useother=f | fillnull | untable _time sasl_username "Msg in 5m"] | eval date_hour=strftime(_time,"%1H") | chart max("Dest in 5m") max("Msg in 5m") OVER date_hour BY sasl_username
```

Why does **untable** support only one series? Is there a better way to write the last search? Thank you very much. Warm regards, Marco
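One way to sidestep untable entirely (a sketch): bucket time yourself with `bin` and compute both aggregates in a single `stats`, which already produces the long-format rows that timechart+untable was building:

```
index="something"
| bin _time span=5m
| stats sum(nrcpt) as "Dest in 5m" count(sasl_username) as "Msg in 5m" by _time, sasl_username
| fillnull
| eval date_hour=strftime(_time, "%H")
| chart avg("Dest in 5m") avg("Msg in 5m") over date_hour by sasl_username
```

One caveat: unlike timechart, `stats` emits no row at all for time buckets with zero events, so `fillnull` here only fills null cells in existing rows; if the zero buckets must count toward the average, the missing buckets would need to be generated (e.g. with `makecontinuous _time`) before the final chart.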

Splunk Search Query for Trimming and Grouping

Hi, I have a CSV named **Results2018** with fields **Group, Server, Issue**. The Issue field holds information about CPU and memory utilization from different sources. The CPU values are recorded in the CSV as "CPU bottleneck detected on Server A", "CPU bottleneck detected on Server B", and so on; likewise for memory utilization, "Memory utilization exceeded on Server A", "... on Server B", and so on. What I am trying to do is trim, match, and group the CPU bottleneck values in this field (regardless of Server A or B) and take a total count. For example, trim the field value down to just "CPU bottleneck detected" and do a total event count on that. I'm trying to get the top 10 issues with the highest counts across all the issues in this file. Thanks in advance for any assistance.
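A sketch of one approach: strip the server-specific tail with `replace()` and then count with `top`. The regex assumes every Issue value ends in "... on Server <name>"; adjust it if some rows use a different pattern.

```
| inputlookup Results2018.csv
| eval IssueType = replace(Issue, "\s+on Server\s+\S+$", "")
| top limit=10 IssueType
```

`replace()` turns "CPU bottleneck detected on Server A" into "CPU bottleneck detected", so all servers reporting the same issue collapse into one group before `top` ranks the counts.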

DB Connect: Can I use a single DB connection to read data from multiple databases?

I need to run the same query on multiple DB servers. Is there any option to use the same connection to read data from multiple databases? For example, could I pass the DB server names and port numbers from a lookup table to one connection and read the data, or do I need to create separate connections for each and every DB server?

AntiSpam update report

When I checked the reports under the Proofpoint app, I saw the field "type=mail" in the search string; however, this field is not in the search results, so it is breaking the reports. What should I do? Go through every report and remove the type from the search string, or is there an update that can fix this?

```
`get_pps_index` sourcetype="pps_filter_log" mod=spam type=mail cmd=refresh engine=* | table _time engine definitions
```

How can I change to a logarithmic timescale on the x-axis of the timeline?

I would like to change the x-axis of the timeline to a logarithmic timescale, like you can on the y-axis. So, for instance, the first inch would be the first 10 seconds, the second inch the first 100 seconds, etc. This way I could see what is happening right now as well as over the past day/week/month, all on one timeline. If traffic appears as a diagonal line from upper left to lower right, then I know that what I am seeing this minute is typical of the past hour, day, week, month, etc. If this doesn't exist, where should I go to create such a timeline?

Splunk Stats count discrepancy

Hello, how can searching in verbose mode over a strict time range for `index=foo host=bar | stats count` return a positive value while I don't see any events in the Events tab? Even if I search for just `index=foo host=bar` in the same time frame, I get no events. What is wrong? How can Splunk count the events with a specific host but then not return them? Any ideas? Thanks

