Channel: Questions in topic: "splunk-enterprise"

Help needed with a search and a lookup

All, I have this search:

index=ssn sourcetype="agent"
| rex field=_raw "Files:(?<customer>.*):/tmp/(?<file>.*):(?<filecount>.*)"
| stats sum(filecount) as filecount by customer host

It returns this data:

customer host filecount
CUST001 host001 782
CUST002 host002 150
CUST003 host003 10
CUST004 host004 15
CUST005 host005 3
CUST006 host006 44
CUST007 host007 997
CUST008 host008 87
CUST009 host009 3587
CUST010 host010 18
CUST011 host011 273
CUST012 host012 20227
CUST013 host001 18

I need one alarm for hosts that are in a lookup table AND whose filecount is 0. The lookup table is:

| inputlookup sldp-oo_customers

host customer
host001 CUST001
host001 CUST001
host001 CUST001
host020 CUST020

The output I need is:

customer host filecount
CUST020 host020 0

As you may notice, the host in question has no result in the first search; in this case it is missing data, but I want to be able to change the threshold (0 files) if needed. The only way I found to achieve this result is to run it as 2 searches:

1) Generate another lookup file with the result of the first search, scheduled to run a few minutes before the second one:

index=ssn sourcetype="agent"
| rex field=_raw "Files:(?<customer>.*):/tmp/(?<file>.*):(?<filecount>.*)"
| stats sum(filecount) as filecount by customer host
| outputlookup sldp-oo-filecount.csv createinapp=true

2) Run a search using the 2 lookup tables:

| inputlookup sldp-oo_customers
| lookup sldp-oo-filecount.csv customer as customer host as host OUTPUT filecount
| fillnull value=0 filecount
| search filecount=0
| fields customer host filecount

Is there a better way to do this? Today it is fine because the user wants it to run every 24h, but it may become a nightmare if I need to run it more often. Thank you very much for your help, Gerson
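
One hedged way to do this in a single scheduled search is to start from the customer lookup and left-join the live counts onto it; the subsearch is subject to join's default limits (50,000 rows), and the final where clause carries the adjustable threshold:

| inputlookup sldp-oo_customers
| dedup customer host
| join type=left customer, host
    [ search index=ssn sourcetype="agent"
    | rex field=_raw "Files:(?<customer>.*):/tmp/(?<file>.*):(?<filecount>.*)"
    | stats sum(filecount) as filecount by customer host ]
| fillnull value=0 filecount
| where filecount <= 0
| table customer host filecount

This is a sketch under the assumption that the lookup's column names are host and customer as shown above; raising the threshold is just a matter of changing the where comparison.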

Inbuilt token in email alert is only sending first result, not all results from the column

Hello Splunkers, I am trying to set up alerts that go to email and other integrations. When I use the inbuilt tokens like "$results.x$", I get only the first result from the search. How should I access the other rows from the search results? My search is something like this:

index=* "xxxxxx" | ... | stats count by domain, name, ip

This search usually gives 3-4 unique rows, like this:

| Domain | Name | IP |
| A | B | C |
| D | E | F |
| H | I | J |

The email alert should contain all the results (rows). Please help. Thanks in advance.
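
For reference, $result.fieldname$ tokens only ever expand to the first result row, so they cannot carry the whole table. The usual route is to embed the full result set in the email itself; a hedged savedsearches.conf sketch (the address is a placeholder, and the same options appear in the UI as "Include results inline"):

# savedsearches.conf (per-alert stanza); address below is a placeholder
action.email = 1
action.email.to = someone@example.com
action.email.sendresults = 1
action.email.inline = 1
action.email.format = table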

CSV lookup, search unstructured index, find matches and return the statistics

I posted a comment on https://answers.splunk.com/answers/468612/how-to-search-a-lookup-table-and-return-the-matchi.html but didn't get any answer, so I'm opening a new question about this issue. The provided solution works fine, but it uses a lot of resources as the number of rows in the CSV file and the index size grow. In my case, I have a structured data file like this:

Field-ID,Field-SourceType,Field-Substring
1,sourcetype1,Some text goes here
2,sourcetype1,Another other text with WILDCARD * here
3,sourcetype2,This is a different text for different sourcetype
...

I run the query mentioned there (returning the "Field-Substring" field) against some index data/events to count the number of occurrences of the substrings:

index="application_error"
    [| inputlookup my_lookup_table.csv | rename Field-Substring as search | fields search | format]
| rename _raw as rawText
| eval foo=[| inputlookup my_lookup_table.csv | stats values(Field-Substring) as query | eval query=mvjoin(query,",") | fields query | format "" "" "" "" "" ""]
| eval foo=split(foo,",")
| mvexpand foo
| eval foo=lower(foo)
| eval rawText=lower(rawText)
| where like(rawText,"%"+foo+"%")
| stats count by foo

As there is a huge number of unstructured events and quite a large number of substrings in the CSV file, it takes ages to return the result. I'm wondering if there's another method to expedite searching unstructured log files for all the values in my lookup CSV file and returning the stats/count/etc. These unstructured indexed data/logs are only categorised by sourcetype, and as you can see in the lookup CSV file, each line shows the substring and its corresponding sourcetype that needs to be searched (which is not being used in the above query).
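
One hedged way to cut the mvexpand fan-out is to test each event only against the substrings registered for its own sourcetype, using the lookup command on Field-SourceType instead of splicing every substring into every event. A sketch, assuming the CSV can be referenced directly as a lookup file (field names are quoted because of the hyphens, and the multivalue OUTPUT is subject to the lookup's max_matches setting):

index="application_error"
    [| inputlookup my_lookup_table.csv | rename Field-Substring as search | fields search | format]
| eval rawText=lower(_raw)
| lookup my_lookup_table.csv "Field-SourceType" as sourcetype OUTPUT "Field-Substring" as candidate
| mvexpand candidate
| eval candidate=lower(candidate)
| where like(rawText, "%".candidate."%")
| stats count by candidate

Each event now expands only into the candidate substrings for its sourcetype rather than into the entire CSV, which should shrink the intermediate result set considerably.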

Adding default search to your own app dashboard

So we needed a view where we had the default search look and capabilities, but within the "chrome" (look and feel) of the app we were creating. I searched high and low through the documentation, but there's no documented method to add CSS or JavaScript to the rendered search page in order to manipulate its look and feel. Finally, it dawned on me that everything on that page is driven by JavaScript, so I simply viewed the page source, copied the HTML from the search page into a new HTML page within our app, and then tweaked that HTML to add the CSS and JavaScript from our app (sample below with IDs replaced). Just be sure to copy the HTML out of your own app, as the app IDs, etc., will change.

How can I find out which email server Splunk uses?

Where can I find which email server Splunk uses? An advanced user is asking ;-)
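
For reference, the SMTP server is configured under Settings > Server settings > Email settings and stored in alert_actions.conf. A hedged way to read the effective value from the CLI:

$SPLUNK_HOME/bin/splunk btool alert_actions list email | grep mailserver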

Splunk Add-on for Microsoft Cloud Services: How to receive sign-in and audit log data

Hello, we have recently installed this Azure add-on. I can successfully receive data from the O365 Management APIs, Azure resources, etc. However, the only data we really need is the data that can be accessed in the Azure portal by going to Azure Active Directory -> Activity -> Sign-ins and Audit Logs. Has anyone had any success pulling in this data? Thanks!

Error message 'Regex: regular expression is too large' -- trying to pull a field from an event using a subsearch with a transaction

I am new to Splunk. I have a requirement to show a table summarizing events over time. I've created a Splunk query and several inputs to filter the data based on fields that I have defined. All of the fields are pulled from the last event in a set of common events, so I've filtered on the common field and then deduped the events. I've just been given a new requirement where I need to pull one field from the first event in the set. I've reviewed answers on Splunk Answers and am attempting to use a subsearch with a transaction to do this. While testing out the query, I am receiving a 'Regex: regular expression is too large' error. This should be relatively easy, but I'm not totally clear on how to put the query together. Here is what I have so far:

host="test"
    [ search host="test" earliest=0 latest=now()
    | eval "Event Id"='event.id'
    | transaction "Event Id" mvlist=t
    | eval createdTimes='event.details.updateTime.time'
    | eval created=mvindex('event.details.updateTime.time', 0) ]
| search 'event.id'="Event Id"
| dedup "Event Id"
| ...{query to extract remaining fields}
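
That error usually means the subsearch's results were expanded into one enormous implicit search expression. A hedged alternative that skips both the subsearch and the transaction, using eventstats to copy the first event's value onto every event in the set (eventId and updTime are names introduced here for readability; field names are taken from the question):

host="test"
| eval eventId='event.id', updTime='event.details.updateTime.time'
| eventstats earliest(updTime) as created by eventId
| dedup eventId
| ...{query to extract remaining fields}

Because eventstats annotates events in place, the dedup'd last event of each set still carries the created field from the first event.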

Best way to send Windows event logs from a Windows Server 2012 collector to indexers?

Unfortunately, I am not allowed to install a universal forwarder on Windows endpoints to send Windows event logs into Splunk; that would be my preferred method. So I configured the endpoints to forward their Windows event logs to a Windows Server 2012 machine configured as a Windows Event Collector (WEC). Now I am wondering what the best way is to send those events on to the indexers. Should one use a universal forwarder, a heavy forwarder, or some other method? Thank you.
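
A universal forwarder on the WEC box is generally sufficient, since the collected events are local there. A hedged inputs.conf sketch for picking up the forwarded events channel (the index name is an assumption):

[WinEventLog://ForwardedEvents]
disabled = 0
index = wineventlog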

Escaping a forward slash / in conditional statement

I have a conditional statement (part of an eval case) in which I need to check the value of a field. The desired value contains a forward slash (`/`): `| eval Bool = case(Reason=="Thing1 / Thing2", 0, ... 1=1, 1)`. This statement evaluates to `Bool = 1`. I've tried to escape the slash with a backslash, but that didn't work: `| eval Bool = case(Reason=="Thing1 \/ Thing2", 0, ... 1=1, 1)` still evaluates to `Bool = 1`. I can technically use a `like` statement, which is how I know the `/` is involved: `| eval Bool = case(Reason like "Thing1 % Thing2", 0, ... 1=1, 1)` evaluates to `Bool = 0`, and `| eval Bool = case(Reason like "Thing1%Thing2", 0, ... 1=1, 1)` also evaluates to `Bool = 0` (the only difference is no spaces around the `%` character). Is there a solution that will let me use an exact match vs. the like statement?
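
In SPL string literals a forward slash needs no escaping, so == failing while like with % succeeds usually points at invisible or doubled whitespace inside Reason rather than at the slash itself. A hedged sketch that normalizes whitespace before the exact match (reasonNorm is a name introduced here):

| eval reasonNorm = trim(replace(Reason, "\s+", " "))
| eval Bool = case(reasonNorm=="Thing1 / Thing2", 0, true(), 1)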

Time format throwing error

Hello all, I have a sourcetype with a timestamp like **"2017-10-10T18:55:47.425Z"**, and I defined TIME_FORMAT as **"%Y-%m-%dT%H:%M:%S.%3%Z"**, but there seems to be an issue; I am getting the following error: Bad strptime format value: '%Y-%m-%dT%H:%M:%S.%3%Z', of param: props.conf / [] / TIME_FORMAT. Can anyone help me correct it?
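
In Splunk's strptime, subseconds are expressed with %N (for example %3N for milliseconds), not a bare %3. A hedged props.conf sketch for this timestamp; the stanza name is a placeholder, and if %Z does not consume the trailing Z on your version, end the format with a literal Z instead and keep TZ = UTC:

[your_sourcetype]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%Z
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = UTC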

Create a new field from a field-extraction

Hello: I have an existing field named "filename" (extracted by Splunk) in this format: abcdefg.000000AB.DDD01A222222222222222222.xml. I want to create a new field that extracts the characters in the position of "DDD01A" in the value above. I do not want to lose the existing "filename" extraction; I want to add another column with the new value. The Extract New Fields GUI did not work. Can someone please advise? Thanks!
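
A hedged rex sketch that adds a second field without disturbing the existing filename extraction, assuming the target is always the first six characters of the third dot-separated segment (new_code is a name introduced here):

| rex field=filename "^[^.]+\.[^.]+\.(?<new_code>.{6})"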

How can I search for users that haven't logged into Splunk for 90+ days?

Any query help is highly appreciated; thanks in advance! I want to list accounts in Splunk that have not been used (no logon) for 90 days or more. Also, does Splunk automatically delete users after 90 days?
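
For the second part: no, Splunk does not delete idle accounts automatically. For the first, a hedged sketch against the audit index (it requires access to _audit and only sees accounts that have logged in at least once inside the search window):

index=_audit action="login attempt" info=succeeded earliest=-1y
| stats latest(_time) as lastLogin by user
| where lastLogin < relative_time(now(), "-90d")
| convert ctime(lastLogin)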

Splunk affecting log4j rolling when busy

On an active server, log4j is writing log files that Splunk is monitoring. Log4j is configured to roll log files over once they reach 10MB in size. Here are the log4j settings in use:

log4j.appender.out=org.apache.log4j.RollingFileAppender
log4j.appender.out.layout=org.apache.log4j.PatternLayout
log4j.appender.out.file=logs/output.log
log4j.appender.out.encoding=UTF-8
log4j.appender.out.append=true
log4j.appender.out.maxFileSize=10MB
log4j.appender.out.maxBackupIndex=10

Splunk is configured to monitor the log files. Here is the stanza being used:

[monitor://D:\Server*\logs\output.log*]
_TCP_ROUTING = large_pool
index = javaServer
sourcetype = javaAppLogs
disabled = false

All works as expected until the server gets really busy. Once the logs are being written very quickly (for example, filling 10MB in less than a minute), they no longer roll when they reach 10MB in size. The files have grown to 7GB and keep growing until the server load starts to slow down; then log4j does start rolling the files again. This only happens when Splunk is monitoring the log files. It would seem that Splunk is holding a read lock on the log file while log4j is trying to roll it, which stops log4j from rolling the file; log4j is still able to log to the file, just not roll it. I have tried changing the value of the **inputs.conf time_before_close** option. The default is 3 seconds and I have set it to 1, but the problem still occurs. Should I set it to 0? I get the impression that could have some bad side effects.

Has anyone configured Splunk to collect logs from Moodle?

I have to configure Splunk to collect logs from Moodle. On the net, there doesn't seem to be much in the way of resources or documentation on integrating Moodle with Splunk. So far, I have only found one plug-in, called Splunk Logstore, that is installed on Moodle. I configured this plug-in on a test Moodle site to send logs to a test Splunk instance; however, it didn't work. Hence, I'm wondering if anyone here has configured this, or knows how to collect logs from Moodle into Splunk. Please share your experience. Thank you.

Splunk Forwarder 7.0.0 -- Can I set the index that is selected before the Splunk forwarder is installed?

I installed the Splunk Forwarder x64 Windows version 7.0.0 today on a server, and the behavior appears to have changed. In version 6.x.x, the Windows event logs would go to the wineventlog index. With the new version of the forwarder, they went directly to the main index. I have two questions regarding this: - Is there a way to change the index that is selected before the Splunk forwarder is installed, so I don't have to move events from one index to another? - Second, why was there a change in behavior? Thank you in advance for any support you can provide.
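
One hedged workaround is to pre-stage an inputs.conf (or push one from a deployment server) that pins the index for the event log inputs before they start collecting, for example:

[WinEventLog://Application]
index = wineventlog

[WinEventLog://Security]
index = wineventlog

[WinEventLog://System]
index = wineventlog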

Email alert changes the time format

I have a search that works perfectly, but when I use it to generate an alert that sends an email, the time field "Last Event Time" is switched back to the Unix time format. Is there a way to keep it from changing back?

| metadata type=hosts index=ind1
| where recentTime < relative_time(now(), "-4h") AND recentTime > relative_time(now(), "-24h")
| rename recentTime as "Last Event Time" host as "IDS Sensor"
| fieldformat "Last Event Time"=strftime('Last Event Time', "%c")
| table "IDS Sensor" "Last Event Time"
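
fieldformat only affects how Splunk Web renders the field; emailed and exported results carry the underlying epoch value. A hedged rewrite of the same search that bakes the formatting in with eval instead:

| metadata type=hosts index=ind1
| where recentTime < relative_time(now(), "-4h") AND recentTime > relative_time(now(), "-24h")
| eval "Last Event Time"=strftime(recentTime, "%c")
| rename host as "IDS Sensor"
| table "IDS Sensor" "Last Event Time"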

Lookup File Editor - is there a limit to the number of backup files that will be created?

I installed the Lookup File Editor and I'm wondering if there is any kind of limit on the number of backups it will allow to be created, or if I need to implement some sort of cleanup script to limit the number of files. Thanks, and great app.

How to pull the index and app/workspace names on Splunk search heads?

Hi, can you please help me write a query, to run on the search heads, that will list the index and app/workspace names in a tabular format? It would be even more helpful if the query could show which indexes belong to which app/workspace. This is basically to give a new user an idea of how to get started: if I can build a dashboard that lists the indexes in each workspace, it would help them identify their workspace/app and the indexes in it. Thanks, Thippesh
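
A hedged starting point, assuming "workspace" here means a Splunk app. Splunk has no built-in mapping of indexes to apps, so tying the two together usually needs a manually maintained lookup, but each list is easy to get. The indexes visible to the current user:

| eventcount summarize=false index=* | dedup index | fields index

And the apps installed on the search head (requires permission to run the rest command):

| rest /services/apps/local splunk_server=local | table title label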

Got "Forbidden: Strict SSO Mode" after a fresh install

After a fresh install, when I try to log in to the web GUI, I get "Forbidden: Strict SSO Mode". I didn't enable any SSO at all. I tried http://localhost:8089/en-US/debug/sso and it also confirmed SSO is not enabled.
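
For what it's worth, the strict-SSO check is driven by the SSOMode and trustedIP settings in web.conf, so on a box where SSO was never meant to be on it may be worth checking etc/system/local/web.conf for a leftover pair like the following; the values are only illustrative, and removing trustedIP (or relaxing the mode) is the usual fix:

[settings]
SSOMode = permissive
# remove or comment out trustedIP if you are not using SSO:
# trustedIP = 127.0.0.1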

Can I limit the disk size of a Splunk instance to 300 GB within config files?

I'm looking to set up a stand-alone test Splunk instance and want to limit its disk usage to 300GB. Is this possible to do within the config files? Or do I need to install it on a separate partition that has 300GB and just let it run?
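
There is no single total-disk setting, but indexes.conf volumes come close: put the index homePath/coldPath on one volume and cap it. A hedged sketch (300 GB is roughly 300000 MB; thawedPath cannot reference a volume, and it is worth leaving some headroom for logs and other non-index files):

[volume:primary]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 300000

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb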