Channel: Questions in topic: "splunk-enterprise"

Stats count by a field when it exists, otherwise use another

I am trying to create a dashboard that graphs the parsing queue size for a heavy forwarder (HF) by `ingest_pipe`. I noticed that most of these logs have that field, but some don't (I'm not sure why).

**Sample logs**

    06-03-2020 12:21:30.964 -0400 INFO Metrics - group=queue, name=parsingqueue, max_size_kb=512, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0
    06-03-2020 12:21:27.144 -0400 INFO Metrics - group=queue, ingest_pipe=3, name=parsingqueue, max_size_kb=6144, current_size_kb=0, current_size=0, largest_size=2, smallest_size=0
    06-03-2020 12:21:27.142 -0400 INFO Metrics - group=queue, ingest_pipe=2, name=parsingqueue, max_size_kb=6144, current_size_kb=0, current_size=0, largest_size=11778, smallest_size=0

**Current SPL**

    index=_internal host=$hostToken$ group=queue name=parsingqueue
    | timechart avg(current_size_kb) by ingest_pipe

I can't add `ingest_pipe=*` to the search because I have tokenized the host field, and some of my HFs have only one ingest pipeline. In that scenario there is no `ingest_pipe` field at all, so hardcoding it into the search returns zero results for single-pipeline HFs.

The solution I came up with is to count the number of events where `ingest_pipe` exists (yesPipe), count the number of events where it does not exist (noPipe), and assign my `count by foo` value based on whichever is greater: if yesPipe is greater, `count by ingest_pipe`, else `count by host`. I don't have the query for these counts and checks. Alternatively, I thought I could use a lookup table with a "count by field" column, where per host I simply specify either `ingest_pipe` or `host` to count by. I feel like there is an easy solution and I'm overthinking it. Any ideas?
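A sketch of one way to avoid branching between two searches, assuming a single synthetic split field is acceptable (`pipe` is a made-up field name): `coalesce` falls back to `host` whenever `ingest_pipe` is absent, so single-pipeline HFs still chart.

    index=_internal host=$hostToken$ group=queue name=parsingqueue
    | eval pipe=coalesce(ingest_pipe, host)
    | timechart avg(current_size_kb) by pipe

With this, multi-pipeline HFs split by pipeline number and single-pipeline HFs produce one series named after the host, without needing the yesPipe/noPipe counting step.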

How to resolve error in rex command when parsing a long string with escaped double quotes?

Hi everybody,

When parsing a long string containing escaped double quotes, I get this error:

    Error in 'rex' command: regex="^(?<RESULT>([^"]|\")) has exceeded the configured depth_limit, consider raising the value in limits.conf.

Steps to reproduce:

    | makeresults
    | eval value="dolor commodo sit amet Sed fringilla nisi et augue condimentum, finibus hendrerit massa egestas Phasellus erat nunc, placerat vitae molestie quis, tempor ac ipsum Phasellus malesuada risus risus, sed lobortis purus vestibulum et Curabitur tempus tincidunt faucibus Aenean sed ipsum eleifend molestie, eros non at accumsan odio turpis sed Integer sed egestas nibh, nec fringilla quam orci eu ipsum Ut at ligula nec metus cursus condimentum eget id eros Pellentesque habitant morbi tristique senectus et \"netus\" et malesuada fames ac turpis egestas Nunc mollis neque eros, eu luctus augue iaculis a Aenean maximus varius erat sed auctor Duis consectetur luctus ligula fringilla quam"
    | rex field=value "^(?<RESULT>([^\"]|\\\")*)"
    | table RESULT

My regex `([^\"]|\\\")*` generates a capturing group that exceeds the configured depth limit. I tried disabling the capturing group with the syntax `(?:[^\"]|\\\")*`, but I still hit a limit, e.g.:

    | makeresults
    | eval value="dolor commodo sit amet Sed fringilla nisi et augue condimentum, finibus hendrerit massa egestas Phasellus erat nunc, placerat vitae molestie quis, tempor ac ipsum Phasellus malesuada risus risus, sed lobortis purus vestibulum et Curabitur tempus tincidunt faucibus Aenean sed ipsum eleifend molestie, eros non at accumsan odio turpis sed Integer sed egestas nibh, nec fringilla quam orci eu ipsum Ut at ligula nec metus cursus condimentum eget id eros Pellentesque habitant morbi tristique senectus et \"netus\" et \"malesuada\" fames ac turpis egestas Nunc mollis neque eros, eu luctus augue iaculis a Aenean maximus varius erat sed auctor Duis consectetur luctus ligula fringilla quam laoreet consectetur tempus, sapien mi pretium ipsum, et rutrum lorem orci vel ipsum. Vivamus \"vitae\" rhoncus erat, vel \"blandit\" libero. Vestibulum dictum arcu eu ligula \"dignissim\", eu efficitur \"ante\" faucibus. Nam eu lacus rhoncus, tempor lorem at, ultrices elit. Nulla facilisi. Ut feugiat lobortis \"orci\". Proin at ultricies metus. Donec pharetra justo nec sapien hendrerit lacinia. \"Vestibulum\" ornare nibh diam, in ullamcorper massa ultricies et. Quisque fringilla dolor ornare nibh rhoncus vestibulum. Integer porttitor enim nec elementum sagittis. Aliquam ut semper diam, non vehicula risus. Proin ut fringilla massa. Mauris eu dolor ex. Vivamus sit amet diam sapien."
    | rex field=value "^(?<RESULT>(?:[^\"]|\\\")*)"
    | table RESULT

If I remove two `\"` from the string, it works. Any ideas?
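For reference, the limit the error message points at is `depth_limit` in the `[rex]` stanza of limits.conf on the search tier. A minimal sketch of raising it (the value 10000 is an arbitrary illustration, not a tuned recommendation):

    # limits.conf on the search head -- 10000 is an example value, not a recommendation
    [rex]
    depth_limit = 10000

Another workaround often suggested for `(A|B)*`-style patterns that hit depth limits is the "unrolled loop" form, e.g. `[^"\\]*(?:\\.[^"\\]*)*`, which avoids backtracking into the alternation; whether it applies here depends on how the escaped quotes actually appear in the raw field.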

ERROR ScriptRunner - ERROR:root:Connection unexpectedly closed while sending mail to

    ERROR ScriptRunner - stderr from '/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://localhost.localdomain:8000/app/Splunk_CiscoISE/@go?sid=rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58" "ssname=CISE_Passed_Authentications" "graceful=True" "trigger_time=1591192208" results_file="/opt/splunk/var/run/splunk/dispatch/rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58/results.csv.gz" "is_stream_malert=False"': ERROR:root:Connection unexpectedly closed while sending mail to xx@qq.com

I want to move my unwanted logs into nullQueue, but no luck

Sample events (the `####<` prefix is what I want removed):

    ####< 2020-05-12 14:34:52,060
    ####< 2020-05-12 14:34:52,060
    ####< 2020-05-12 14:34:52,060

I want to remove `####<` from my events, so I used props.conf along with transforms.conf with the settings below, but `####<` is still not removed from the events.

My props.conf:

    [hast_sourcetype]
    BREAK_ONLY_BEFORE_DATE =
    CHARSET = UTF-8
    DATETIME_CONFIG =
    LINE_BREAKER = ([\r\n]+)
    MAX_TIMESTAMP_LOOKAHEAD = 29
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    TRANSFORMS-remove-hash = include-date-item
    category = Custom
    description = hash_sourcetype
    pulldown_type = true

My transforms.conf:

    [eliminate-hash-item]
    DELIMS = ####<
    DEST_KEY = queue
    FORMAT = nullQueue

Please help me solve this issue.
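For comparison, the null-queue routing pattern in the Splunk docs keys the transform off a REGEX rather than DELIMS, and the name on the props.conf `TRANSFORMS-` line has to match the transforms.conf stanza name exactly. A minimal sketch (stanza names here are illustrative, not the poster's):

    # props.conf
    [hash_sourcetype]
    TRANSFORMS-drop_hash = discard_hash_events

    # transforms.conf
    [discard_hash_events]
    REGEX = ^####<
    DEST_KEY = queue
    FORMAT = nullQueue

Note that this pattern discards the whole matching event at parse time on the indexer or heavy forwarder; it does not strip the `####<` prefix from events that are kept.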

How to use stats to identify largest number and use that as horizontal line on timechart?

I am trying to make an area chart which shows the average size of the parsing queue over time. I would like to add a horizontal bar as a threshold. I noticed that some logs have different values for `max_size_kb`, so I thought I could use `max` to get the value and set my threshold to that, but for some reason my search is returning zero results. I don't know why it's not working. If I hardcode a number for zzz, it'll work, but it doesn't seem to work the way it is written now. The value changes between my hosts, so I don't want to hard-code it.

**Current SPL**

    index=_internal host=$hostToken$ group=queue name=parsingqueue
    | stats max(max_size_kb) AS zzz
    | timechart avg(current_size_kb) by ingest_pipe
    | eval threshold = zzz
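A sketch of one way to carry the maximum along instead of collapsing the events, assuming the goal is a constant threshold column next to the average: `eventstats` annotates every event with the max, whereas `stats` reduces the search to a single row with no `_time` or `current_size_kb`, which is why the `timechart` afterwards returns nothing.

    index=_internal host=$hostToken$ group=queue name=parsingqueue
    | eventstats max(max_size_kb) AS zzz
    | timechart avg(current_size_kb) AS avg_queue_size max(zzz) AS threshold

Keeping the `by ingest_pipe` split in the same timechart is trickier, since a split-by clause only allows a single aggregation; in that case the threshold is usually added as a chart overlay instead.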

How to create an area chart that displays an average of data over time, using timechart and stats commands?

I am trying to make an area chart which shows the average size of the parsing queue over time. I would like to add a horizontal bar as a threshold. I noticed that some logs have different values for `max_size_kb`, so I thought I could use `max` to get the value and set my threshold to that, but for some reason my search is returning zero results. I don't know why it's not working. If I hardcode a number for zzz, it'll work, but it doesn't seem to work the way it is written now. The value changes between my hosts, so I don't want to hard-code it.

**Current SPL**

    index=_internal host=$hostToken$ group=queue name=parsingqueue
    | stats max(max_size_kb) AS zzz
    | timechart avg(current_size_kb) by ingest_pipe
    | eval threshold = zzz

How to join two sources with summary indexing to improve performance?

Hello,

I am quite green at Splunk and have a problem I could use some help with. My data comes from a Postgres database via the Splunk DB Connect app, where each input (source) in Splunk is a Postgres table. I am trying to join two sources, which I can do in a regular search, but my join search runs quite long, so I am looking at summary indexing to improve performance. The two sources are as follows:

**action_times**
- action_time
- act_id

**actions_table**
- act_id
- operation

Here is the base search that returns the expected results:

    source="action_times"
    | join type=inner act_id [search source="actions_table"]
    | stats count by operation

I have been able to set up a summary index and schedule a report which runs the search above, but `actions_table` really does not update often, so most subsequent runs of the scheduled report return no events, despite there being tens of thousands of events from `action_times`.

**What I would like to do**

I would like to use summary indexing to pull in the joined data, either with an actual join command or without. If there is any other helpful information I can provide, please let me know.

Thank you,
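One join-free sketch, assuming `actions_table` is small and changes rarely: snapshot it into a CSV lookup on its own schedule, then enrich the high-volume `action_times` events at summary time. The lookup file name `actions_map.csv` is made up here.

    source="actions_table"
    | table act_id operation
    | outputlookup actions_map.csv

and a second scheduled search that feeds the summary index:

    source="action_times"
    | lookup actions_map.csv act_id OUTPUT operation
    | sistats count by operation

If your Splunk version does not accept a bare CSV name in the `lookup` command, define a lookup definition for the file first. The `sistats` variant stores the partial results so that a later `| stats count by operation` over the summary index returns the same numbers as the original join search, without re-running the join.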

Field extraction from data within backslashes

Hi,

I have a dataset that contains IP addresses. The IP addresses come in variations, separated by backslashes, due to the ranges they are assigned to. I need them extracted into multiple fields regardless of how many variations there are. Sample data:

    1.2.3.4\n4.5.6.7\n8.9.1.2
    1.2.3.4\n4.5.6.7\n
    1.2.3.4\n4.5.6.7
    1.2.3.4\n4.5.6.7\n8.9.1.2

I need them like this:

    1.2.3.4\n4.5.6.7\n8.9.1.2
    Value1: 1.2.3.4
    Value2: 4.5.6.7
    Value3: 8.9.1.2
    Value4: ... and so on

So basically I need all the backslash-separated values split out into fields. Also, the letter "n" (or any other letters attached to an IP) needs to go.

Thanks in advance!
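A runnable sketch of one approach, using `rex max_match=0` to pull every IP-looking token into a multivalue field. The field name `ip_list` and the `Value*` columns are made up for the example, and the `makeresults` line just fakes one of the sample rows:

    | makeresults
    | eval ip_list="1.2.3.4\\n4.5.6.7\\n8.9.1.2"
    | rex field=ip_list max_match=0 "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | eval Value1=mvindex(ip, 0), Value2=mvindex(ip, 1), Value3=mvindex(ip, 2)

Because the regex matches only digits and dots, the stray "n" characters (and any other letters) around the separators are dropped automatically. If the number of addresses varies a lot, it may be easier to keep `ip` as a single multivalue field rather than spreading it across fixed `ValueN` columns.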

How to display the value of the difference result in Splunk?

Hi, how can I display the actual value of the difference in a new column? The value is "cts16k1sacc" (row 1 in the attached screenshot).

Splunk Enterprise Security: Add a Filter to the Traffic Size Analysis Dashboard

I'd like to add a filter to the Traffic Size Analysis dashboard. The filter I'd like to add is the `src_ip` field. Currently this dashboard doesn't allow you to search by a single IP, and I think having that filter would be very helpful. What would be the best way to go about adding this?

Using a macro causes count of 1 on single value panel

Splunk is 8.0.2.1. Somewhat similar to https://answers.splunk.com/answers/48050/strange-behaviour-with-count-in-stats-when-using-macros.html

My query ends in `| stats count` and works fine when run from Search (selecting the single value visualization). It also worked fine as a dashboard, until I turned it into a macro. Now it shows a count of 1, as if it's counting the number of fields returned. Tables work just fine. I am also using this app for dropdowns, although the panel in question is not the dropdown: https://splunkbase.splunk.com/app/3689/

Snippet from my dashboard:

    Location`getactiveuserscount(192.168.0.%)`-11m@mnow2mdelay/app/TA-myapp-it/dashboard__active_users

Why does search only display 24 hours of event data on Linux, but all-time on Windows?

1. There are approximately 1.5 billion ingested entries from 40 forwarders.
2. Performing a search with any criteria on Windows hosts lists all events (All Time).
3. Performing the same search on Linux hosts only returns 24 hours of data, regardless of the time/date range supplied. Each day, the data only covers the last 24 hours.

What settings could be causing this?

How do I loop through a list of regular expression patterns stored in a KV store in a search?

I am new to Splunk. The `cluster` command gives me the results I am looking for, and then some. I would like to filter the results of this command with a list of regular expression patterns that I have stored in a KV store, but I am having a tough time getting the answers I am looking for. When I run the `map` command below, it looks like `$payload$` ends up holding the value rather than the field name. The `app_critical_warning` KV store has a list of regex patterns, with one of the column names being `regexp_pattern`. Here's the search I have come up with:

    index="someindex" msgtype::warning
    | cluster t=0.9 showcount=true field=payload
    | table cluster_count payload
    | map [|inputlookup app_critical_warning | regex $payload$=regexp_pattern ] maxsearches=10

Does anybody have any suggestions on how to go about this task? I could compose the search with all the regex patterns inline, but I would like to maintain them in a KV store for logistical reasons. Thank you!
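One map-free sketch, assuming the KV store patterns can safely be OR-ed together into a single regular expression (`combined` is a made-up field name). The subsearch is expanded into a quoted string before the main search runs, so patterns containing double quotes or backslashes would need extra escaping:

    index="someindex" msgtype::warning
    | cluster t=0.9 showcount=true field=payload
    | table cluster_count payload
    | eval combined=[| inputlookup app_critical_warning | stats values(regexp_pattern) AS p | eval p=mvjoin(p, "|") | return $p]
    | where match(payload, combined)
    | fields - combined

This keeps the patterns maintained in the KV store while still filtering the clustered payloads in a single pass, instead of one `map` search per pattern.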

How to create a search that calculates the percentage between two rows?

Hello!

I need to calculate the percentage between the rows in my table. For example:

Search:

    | bucket span=10m _time
    | stats count by _time

Result:

    _time                  count
    2020-06-03 16:10:00    27656974
    2020-06-03 16:20:00    68834318
    2020-06-03 16:30:00    68160616
    2020-06-03 16:40:00    67655028
    2020-06-03 16:50:00    66023251
    2020-06-03 17:00:00    65418711
    2020-06-03 17:10:00    36918173

How can I calculate `perc1=row2/row1`, `perc2=row3/row2`, and so on?
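A sketch using `streamstats` to bring the previous row's count onto each row (`prev_count` and `perc` are made-up field names):

    | bucket span=10m _time
    | stats count by _time
    | streamstats current=f window=1 last(count) AS prev_count
    | eval perc = round(count / prev_count * 100, 2)

The first row has no previous value, so `prev_count` (and therefore `perc`) is null there, which matches the example: perc1 compares row 2 against row 1.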

Why doesn't Fundamentals 1 recognize some of my completed labs for the course?

I completed the entirety of Fundamentals 1 and it is not recognizing my lab 12 or lab 13 as being done. Any help as to why, or what I can do?

How to resolve ScriptRunner Error Message "ERROR:root:Connection unexpectedly closed while sending mail to..."?

    ERROR ScriptRunner - stderr from '/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/search/bin/sendemail.py "results_link=http://localhost.localdomain:8000/app/Splunk_CiscoISE/@go?sid=rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58" "ssname=CISE_Passed_Authentications" "graceful=True" "trigger_time=1591192208" results_file="/opt/splunk/var/run/splunk/dispatch/rt_scheduler__admin_U3BsdW5rX0Npc2NvSVNF__RMD5f595461cdff80ada_at_1591190776_4988.58/results.csv.gz" "is_stream_malert=False"': ERROR:root:Connection unexpectedly closed while sending mail to xx@qq.com

Can I upload raw SAR text files to Splunk?

Hi, I'm trying to upload raw SAR text files to Splunk. Is it possible? Is there an add-on or another method to do this directly in Splunk? Or is the only way to use sysstat, then the add-on for Linux (on the forwarder) and the GUI for SysStat? Thanks.
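If the goal is simply to get the raw text indexed, a plain file monitor on the forwarder is enough to start with. A minimal inputs.conf sketch (the path, sourcetype, and index names are assumptions); making the SAR columns searchable would still need props.conf work or one of the *nix/SysStat add-ons:

    # inputs.conf on the universal forwarder -- path, sourcetype, and index are illustrative
    [monitor:///var/log/sa/text/*.txt]
    sourcetype = sar:text
    index = os
    disabled = false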

How to search for AWS non-active users with active secret keys?

I would like to search for non-active AWS users: users who have not logged in or used their Access Key ID for more than 60 days, but who still have an active Access Key ID. I am very new to Splunk. Please help. Thanks.
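A rough sketch of the 60-day logic, assuming IAM credential-report data is already being collected (for example via the Splunk Add-on for AWS). The sourcetype and field names below follow the AWS credential report columns but are assumptions about how your data is indexed, so adjust them to match what you actually have:

    sourcetype="aws:iam"
    | eval key_last_used = strptime(access_key_1_last_used_date, "%Y-%m-%dT%H:%M:%S"),
           pwd_last_used = strptime(password_last_used, "%Y-%m-%dT%H:%M:%S")
    | eval last_activity = max(key_last_used, pwd_last_used)
    | where access_key_1_active="true" AND (isnull(last_activity) OR last_activity < relative_time(now(), "-60d"))
    | table user access_key_1_active password_last_used access_key_1_last_used_date

The `isnull(last_activity)` clause also catches users whose key has never been used at all, which usually belongs in the same report.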

How to fix error "The external search command 'xmlkv' did not return events in descending time order, as expected"?

I am getting the error **"The external search command 'xmlkv' did not return events in descending time order, as expected"** along with my search results. The dashboard functionality works as expected and the search results are displayed. Please see the code snippet for one panel below for reference and suggest a fix. There are six panels altogether, with different queries.
source="log.2020-05-08" | rex field=_raw "((?(\w*))\s(?(\d+))\s((([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9]))\s(?(\d{8}\s\d{6}))\s(?([\w\s.:,/()]*)))" | rex field=number "(?([\d]*))/\d" | xmlkv maxinputs=10000 | rename "SBT-type" as Mtracktype "SBT-exception-code" as MTrackECode | eval LogTimeStamp=strftime(strptime(TimeStamp,"%Y%m%d%H%M%S"),"%m/%d/%Y %H:%M:%S %p") | sort -LogTimeStamp
*
Results Export search (SBTnumber=$SBTNo$ OR Number=$SBTNo$ OR type=$SBTNo$ OR AWB=$SBTNo$) | table LogType LogTimeStamp Msg SBTtype SBTnumber$job.sid$1

Need to use a SQL query in Splunk

I need to convert my SQL query into Splunk via a dbx (DB Connect) query. Could someone help me? Here is my query:

    SELECT *
    FROM [Systems] AS D
    RIGHT JOIN (SELECT * FROM [Users] WHERE ProductName = 'Platform') AS C
        ON D.ComputerName = C.ComputerName

Thanks in advance.
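Two sketches, depending on where the join should happen. If the database should keep doing the work, DB Connect's `dbxquery` can run the SQL as-is (the connection name is a placeholder):

    | dbxquery connection="my_connection" query="SELECT * FROM [Systems] AS D RIGHT JOIN (SELECT * FROM [Users] WHERE ProductName = 'Platform') AS C ON D.ComputerName = C.ComputerName"

If both tables are already indexed in Splunk as separate sources, a rough SPL equivalent of the RIGHT JOIN (keep every filtered Users row and attach matching Systems fields) is:

    source="Users" ProductName="Platform"
    | join type=left ComputerName [search source="Systems"]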