Channel: Questions in topic: "splunk-enterprise"

SEDCMD JSON regex in props.conf doesn't work, but works in regex101 and on the sed command line

Hello, I have JSON that looks like this: https://pastebin.com/xHebS2x3 and I need to get rid of the field 'sql_queries'. I am using SEDCMD-whatever = s/regex//g with this regex:

regex101 (without escaping): s/,"sql_queries":"([^"\\]|\\.)*"//g
sed command line (with escaping): s/,"sql_queries":"\([^"\\]\|\\.\)*"//g

It works on the sed command line and in regex101, but not in props.conf. Can anyone help, please?
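For reference, a minimal sketch of how such a SEDCMD is usually placed in props.conf; the sourcetype name here is an assumption, the stanza has to live on the instance that parses the data (indexer or heavy forwarder), a restart is needed, and it only affects events indexed after the change:

    [my_json_sourcetype]
    # Strip the sql_queries key and its (possibly escaped) string value from _raw at parse time.
    # props.conf takes the unescaped regex form, not the backslash-escaped command-line sed form.
    SEDCMD-strip_sql_queries = s/,"sql_queries":"([^"\\]|\\.)*"//g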

Accelerated data model with higher event count than corresponding indexed data

Has anyone seen an issue where an accelerated data model has duplicate events in its tsidx files? Occasionally I encounter an issue where there are many more records in the accelerated data model than in the corresponding indexed data. I get the data model event count using | tstats count from datamodel= by _time and I also see the high count in the Pivot GUI. Rebuilding the data model will correct it so that the event counts match. I'd like to be able to prevent this from happening, or at least understand why it occurs.

Around the time that the data model gets 'corrupted', I've noticed that a lot of `sourcetype=splunkd_access` events are generated with a URI that looks like this:

/servicesNS/user1/myapp/admin/summarization/tstats%3ADM_myapp_mydatamodel/touch

The `user1` in this URI has limited permissions: they have access to `accelerate_search` but not to `schedule_search`, which I think is required for `accelerate_search` to be truly enabled. Anyway, this URI is as close as I've been able to get to any sort of explanation; I never see it except when this issue is occurring. Is there any way to block a user role from executing this 'touch' search? Is this a cause or just another symptom?

There are 12 data models in my app, and this is happening to all of them except the slimmest one or two. It occurs most often when the system has a lot of incoming data to crunch, so it does seem to be process- or resource-related.
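As an illustration of the comparison described above, a minimal sketch with placeholder names (mydatamodel and index=myapp), run over the same time range: when the acceleration is healthy, the data model count should not exceed the raw event count, since the data model only includes events matching its constraint searches.

    | tstats count from datamodel=mydatamodel

    index=myapp | stats count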

Problem with Fundamentals 1, Module 9, Task 6

For some reason the parentheses on avg(Duration) won't work. I have entered the answer "index=main sourcetype=db_audit | stats avg(Duration)", but when I enter it, I get an additional set of parentheses.

How to forecast for multiple hosts individually

Hello all, I was trying to get some predictive alerts working. My only problem is that the search I've written is limited to a single host, and I'm trying to manage 2,300 servers. This part of the search effectively identifies outliers in CPU usage:

index=perfmon host= counter="% Processor Time"
| timechart span=5min avg(Value)
| predict "avg(Value)" as prediction algorithm=LLP holdback=2 future_timespan=2 period=288 upper95=upper95 lower95=lower95
| `forecastviz(4, 2, "avg(Value)", 95)`
| eval isOutlier=if('avg(Value)' > 'upper95(prediction)', 1, 0)

The following isolates the search to the last 30 minutes:

| eval eTime=relative_time(_time, "-0M")
| eval lTime=relative_time(now(), "-30M")
| where eTime>=lTime

My plan is to schedule this search to run every 30 minutes and to alert/email when isOutlier=1. This works great if I only have one server, or if I want to group them all together as a single object. But does anyone know of a way to apply this with a wildcard and have it evaluate each host independently of the others?
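One pattern sometimes used to repeat a search per host is the map command; the sketch below is only that, a sketch: the driving tstats search, the maxsearches value, and the rename of avg(Value) to cpu are choices made here rather than part of the original search, and map launches a separate search for every host, which may be expensive across 2,300 servers.

    | tstats count where index=perfmon by host
    | fields host
    | map maxsearches=2500 search="search index=perfmon host=$host$ counter=\"% Processor Time\"
        | timechart span=5min avg(Value) as cpu
        | predict cpu as prediction algorithm=LLP holdback=2 future_timespan=2 period=288 upper95=upper95 lower95=lower95
        | eval isOutlier=if(cpu > 'upper95(prediction)', 1, 0)
        | eval host=\"$host$\"
        | where isOutlier=1"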

Interface with Outlook from Dashboard

I have a Splunk dashboard that displays an email address as one field. The client would like to be able to click on the email address and then launch an Outlook email form with the email and subject line populated. The user would be able to further edit the email before they initiate the send. Is this possible to do from a Splunk dashboard?
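A dashboard can at least hand the browser a mailto: link, which opens the user's default mail client (Outlook, if it is set as the default handler) with the address and subject pre-filled; a minimal Simple XML sketch, where the field name email, the query, and the subject text are all assumptions:

    <table>
      <search>
        <query>index=myindex | table name email</query>
      </search>
      <drilldown>
        <!-- Clicking a row opens the default mail client for that row's address -->
        <link target="_blank">mailto:$row.email$?subject=Follow%20up</link>
      </drilldown>
    </table>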

How to extract changing headers from multiline event?

Hi all, I am sending a multiline event to Splunk Enterprise. The first row contains metadata, the second row the field names, and the third row the actual values. It looks like this:

***SPLUNK*** host=hostname index=testindex source=testsource sourcetype=testsourcetype
Timestamp,"Arbitrary field name"
1550850412192,89

The field name is arbitrary, which means it is variable: in the first event it could be "Field name 1" and in the second "Field name 2". Currently, Splunk shows me two events:

1. Timestamp,"Arbitrary field name"
2. 1550850412192,89

What I want is for it to create a field from "Arbitrary field name" and assign it the value 89. My props.conf looks like this at the moment:

INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 2
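For context, those attributes normally sit in a sourcetype stanza in props.conf, and because INDEXED_EXTRACTIONS is applied where the file is read, the stanza has to be present on the forwarder that monitors the file (if one is used), not only on the indexer. A sketch, assuming the sourcetype name from the sample header above:

    [testsourcetype]
    # Structured (CSV-style) parsing of the incoming data
    INDEXED_EXTRACTIONS = csv
    FIELD_DELIMITER = ,
    # Field names are read from line 2, so the ***SPLUNK*** metadata line is not used as a header
    HEADER_FIELD_LINE_NUMBER = 2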

cisco_ios has a bad lookup

02-22-2019 10:32:44.260 -0600 WARN SearchResultsFiles - Corrupt CSV line, char #649: "PLATFORM_ENV","1","FRU_PS_SIGNAL_OK","[chars] signal on power supply [dec] is restored","The specified power supply signal is restored. [chars] is the input or output signal value, and [dec] is the power supply number.","Copy the message exactly as it appears on the console or in the system log. Research and attempt to resolve the error by using the Output Interpreter. Use the Bug Toolkit to look for similar reported problems. If you still need assistance, open a case with the TAC, or provide your Cisco technical support representative with your information. For more information about these online tools and about contacting Cisco, see the "Error Message Traceback Reports" section on page 1-5."

cisco_ios/lookups/cisco_ios_messages.csv

Difference in result with timechart span=1d dc(ip) vs timechart span=1h dc(ip)

I am using a distinct count with timechart for the whole day (yesterday). The result varies when the span is changed between 1h and 1d.

| timechart span=1h dc(SessionId) as "TotalCustomerSession" | addcoltotals labelfield=_time label=Total | sort -_time

gives me a result of 10802, while

| timechart span=1d dc(SessionId) as "TotalCustomerSession" | addcoltotals labelfield=_time label=Total | sort -_time

gives me a result of 10399. Confused!
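For what it's worth, dc() is not additive across buckets: a SessionId that appears in more than one hourly bucket is counted once per bucket, so the addcoltotals sum of the span=1h column can exceed the true number of distinct sessions for the day. A minimal sketch for the whole-day figure, with the index name as a placeholder and the time range set to yesterday:

    index=myindex earliest=-1d@d latest=@d
    | stats dc(SessionId) as TotalCustomerSession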

Setting up a Google/Outlook IMAP account for Splunk on Windows

I am trying to configure IMAP Mailbox on my Splunk (7.1.2) Windows OS, but I get the error below when I run `python get_imap_email.py --debug`:

Traceback (most recent call last):
  File "get_imap_email.py", line 333, in getMail
    M = imaplib.IMAP4(self.server, int(self.port))
  File "F:\Splunk\Python-2.7\Lib\imaplib.py", line 194, in __init__
    self.welcome = self._get_response()
  File "F:\Splunk\Python-2.7\Lib\imaplib.py", line 931, in _get_response
    resp = self._get_line()
  File "F:\Splunk\Python-2.7\Lib\imaplib.py", line 1031, in _get_line
    raise self.abort('socket error: EOF')
abort: socket error: EOF
None
Traceback (most recent call last):
  File "get_imap_email.py", line 717, in <module>
    parseArgs()
  File "get_imap_email.py", line 709, in parseArgs
    imapProc.getMail()
  File "get_imap_email.py", line 345, in getMail
    raise LoginError('Could not log into server: %s with password provided' % self.server)
__main__.LoginError: Could not log into server: imap.gmail.com with password provided

My settings are:

[IMAP Configuration]
debug = 1
deleteWhenDone = False
disabled = False
fullHeaders = False
includeBody = True
noCache = False
server = imap.gmail.com
useSSL = 0
password = mypassword
port = 993
includeBody = True
mimeTypes = text/plain
user = myid@gmail.com
splunkuser = admin
splunkpassword = password
folders = Inbox
imapSearch = UNDELETED SMALLER 204800
timeout = 10

I have also opened firewall port 993, so there shouldn't be any firewall issue... I'm not sure if this app is compatible with Splunk 7.1.2. Thanks,

How to change the timezone of a user profile using a dashboard

If we select a time range in Splunk, it brings back results based on that time range with the user's timezone applied, and it displays all the results with dates based on the user's profile. I have a dashboard where the user can select from a list of timezones. Based on that timezone selection, can we change the user profile's timezone (without going to Settings and adjusting preferences)? In summary: whatever Settings > Preferences does in the UI, can I do the same through the dashboard? And when we click Apply after changing the preferences, what actions does Splunk perform?

Reference another dashboard's searches

Is it possible to reference the searches used on one dashboard for use in another? Or would the search need to be saved first and referenced that way? Example-- Dashboard 1: index=flintstone yabbadabba=do Dashboard 2:
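One approach that is often used instead is to save the search as a report and reference it by name from any dashboard panel; a minimal Simple XML sketch, where the report name "Flintstone Search" is just an assumption:

    <panel>
      <table>
        <!-- Reuses a saved report instead of embedding the query in this dashboard -->
        <search ref="Flintstone Search"></search>
      </table>
    </panel>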

Palo Alto Syslog being Indexed, but not parsed

I saw the other forum posts, and they are not the same issue I am having. I have configured the PA to send syslog directly to the Splunk server. It's a single-node deployment. I installed the add-on as well as the PA dashboard app. I am using the default syslog format of BSD with no custom formats. I created a pan_logs index and a UDP data input on 5514 with a sourcetype of pan:log. I have also tried other sourcetypes such as pan:firewall, pan:traffic, etc. I can do a search on the index, and it comes up with all the syslog messages. The sourcetype is pan:traffic for most of them; config changes come in with pan:config. The index is configured in the app context of the add-on. None of the data is being parsed into the dashboard. A search for eventtype="pan_firewall" yields no results. What am I missing? I feel like it's a Splunk config I need.
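For comparison, a UDP input of the kind described usually looks something like the stanza below; the port, index, and sourcetype are taken from the description above, while no_appending_timestamp is an optional setting often recommended for syslog inputs so Splunk does not prepend its own timestamp to each packet:

    [udp://5514]
    index = pan_logs
    sourcetype = pan:log
    no_appending_timestamp = true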

Connection issues: Created a new index but the data is not showing up.

There are a couple of indexes in inputs.conf. I just added a new index with a new port. All the other indexes are working fine and servers can send data to them; the problem is with the newly added one. When I do a telnet from the universal forwarder to the indexer, all the other ports establish a connection, but I can't establish a connection to the new one. Am I missing something here? Can someone figure out where the problem is? Thanks a lot in advance.
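For reference, the kind of listener being described usually looks something like the sketch below; the port number, index name, and stanza type are assumptions (a raw [tcp://] input is shown, but cooked data from a forwarder would use [splunktcp://] instead), and the telnet test will only connect once the configuration has been reloaded or the instance restarted so the port is actually listening:

    # inputs.conf on the receiving instance (placeholder port and index)
    [tcp://9998]
    index = my_new_index
    disabled = false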

Splunk Add-on for JBoss - Version 7.1

Has anyone tried to use the Splunk Add-on for JBoss (https://splunkbase.splunk.com/app/2954/) with JBoss version 7.1 on Splunk Enterprise 7.1? Does anyone know if the app will be updated? Alternatively, there is a community JBoss app and add-on (https://splunkbase.splunk.com/app/1805/ and https://splunkbase.splunk.com/app/1804/). Has anyone tried these using the above versions? Thank you!

How to calculate the number of tickets opened in a past time range in Splunk

Hi, I want to calculate the total number of incidents opened by users over a time interval in a dynamic environment in Splunk (assuming the time range comes from the time picker and we have ServiceNow data in Splunk). For example, I want to calculate the total number of incidents opened by users from 15 Feb 19 to 20 Feb 19 (obviously some tickets will be in Closed, Resolved, In Progress, New, etc. states). We have dv_opened_at, dv_closed_at, sys_updated_on, and dv_number fields in Splunk, as below:

dv_number   Team_Name          dv_state             dv_opened_at     sys_updated_on
INC0346726  Desktop Computing  Updated by Customer  1/21/2019 7:34   2/22/2019 18:45
INC0349402  IAM                In Progress          1/23/2019 19:28  2/22/2019 16:57
INC0363170  Desktop Computing  On Hold              2/7/2019 20:19   2/22/2019 19:10
INC0368256  Desktop Computing  On Hold              2/13/2019 19:53  2/22/2019 18:58
INC0370984                     On Hold              2/16/2019 18:46  2/22/2019 18:17
INC0375322                     Updated by Customer  2/20/2019 16:13  2/22/2019 17:58
INC0375327  Endpoint Security  Updated by Customer  2/20/2019 16:18  2/22/2019 18:48
INC0375361  Desktop Computing  In Progress          2/20/2019 17:22  2/22/2019 16:58
INC0376457                     In Progress          2/21/2019 11:12  2/22/2019 18:48
INC0376813  Desktop Computing  In Progress          2/21/2019 22:33  2/22/2019 18:26
INC0377715  IAM                New                  2/22/2019 17:24  2/22/2019 17:27
INC0377755  Messaging          New                  2/22/2019 18:56  2/22/2019 19:14

This log was pulled into Splunk in the last 4 hours (22 Feb). Here we can see that we also have OLDER incidents; we are getting all of them because they were updated in this time interval. How can we exclude all those incidents? Thanks in advance :)
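A sketch of one way to key the count off dv_opened_at instead of the event time, so tickets that were merely updated in the window are excluded; the index and sourcetype are placeholders, and addinfo exposes the time picker boundaries as info_min_time and info_max_time:

    index=snow sourcetype=snow:incident
    | dedup dv_number
    | addinfo
    | eval opened_epoch=strptime(dv_opened_at, "%m/%d/%Y %H:%M")
    | where opened_epoch>=info_min_time AND opened_epoch<info_max_time
    | stats dc(dv_number) as opened_incidents by Team_Name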

Can you help me monitor httpd process through Splunk on a Linux Machine?

Greetings all, I want to monitor the "httpd" process on a Linux machine, and if the process is down or not running, I need to fire an alert. Could you please help me with the search query for this? Thanks in advance.
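A minimal sketch of the usual approach, assuming the Splunk Add-on for Unix and Linux is installed with its ps.sh scripted input enabled (the index and host names below are placeholders); the alert would be set to trigger when the search returns count=0:

    index=os sourcetype=ps host=my-linux-host "httpd" earliest=-10m@m
    | stats count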

Can you help me use start and end times in one search in a mapped subsearch?

I'm trying to take the sum of measurements from a certain process and connect them to work orders by the times the orders are in place. However, when I attempt to map the data using $StartTime$ and $EndTime$, the statistics table disappears and will not even show the fields from my first search. What am I doing wrong? The StartTime and EndTime are in epoch time for the first part of the search. My search:

index="all_usf_hardsurfaces_orderhistory" TargetOrg="XX" MachineName="YXD12" (Status="IP")
| stats values(_time) as StartTime by WorkOrderNumber, MaterialName, _time, Quantity
| eval Qty=round(Quantity,0)
| fields StartTime EndTime WorkOrderNumber MaterialName Qty
| sort by -StartTime
| delta _time as DeltaStart
| eval DeltaStart=abs(DeltaStart)
| eval EndTime=_time+DeltaStart
| fields Time EndTime WorkOrderNumber MaterialName Qty
| map search="search index=pltxx_da ProcessName="Defecting" ItemName="Current Length Output (No Waste)" earliest=$StartTime$ latest=$EndTime$ | dedup Measurement consecutive=true | stats sum(Measurement) as Measurment | eval Measurement=Measurement/304.8"
| table StartTime EndTime WorkOrderNumber MaterialName Measurement

How do you return a Boolean if today matches the dates listed in a lookup table?

I have a mydates.csv file uploaded to Splunk lookups. It looks like this:

Date
1/2/2019
2/5/2019
2/16/2019

I need to add a date-check function to my search, so it will check whether today's date is listed in the mydates.csv file. If it is, then set dayflag=YES; otherwise, set dayflag=NO. How can I do this?
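One possible sketch, normalizing both today's date and the lookup dates to the start of the day in epoch time so the non-zero-padded values in the CSV still match; the base search is a placeholder, and the lookup is assumed to contain just the Date column shown above:

    index=myindex
    | eval day_epoch=relative_time(now(), "@d")
    | join type=left day_epoch
        [| inputlookup mydates.csv
         | eval day_epoch=relative_time(strptime(Date, "%m/%d/%Y"), "@d")
         | eval match="YES"]
    | eval dayflag=coalesce(match, "NO")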