Channel: Questions in topic: "splunk-enterprise"

If field1 NOT LIKE field2

Hello, I am aware of the following search syntax: `field1 = *something*`, `field1 = field2`, `field1 != field2`. But I wish to write something like `field1 != *field2*`. This is meant to find events where field2 does not contain field1, but instead it just searches for "field2" as literal text, since it is set within stars. Can anyone provide me the syntax to search with this criterion? Thanks
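Wildcard matching against another field's value cannot be expressed in bare search syntax; it needs an eval expression. A minimal sketch of one approach, assuming both fields are already extracted and following the "field2 does not contain field1" reading:

```
... | where NOT like(field2, "%" . field1 . "%")
```

`like()` takes SQL-style wildcards (`%` matches any run of characters) and `.` concatenates strings in eval expressions. `match()` with a dynamically built regex is an alternative, but any regex metacharacters in field1 would then need escaping first.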

Is there a comparison of CPU consumption of HF and UF?

Hi all, I want to use a Splunk heavy forwarder (HF) in my company, but I wonder what it will cost me in resources compared to a universal forwarder (UF). Is there any published test or benchmark covering CPU, I/O consumption, etc.?
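One way to get numbers for your own environment rather than relying on a published benchmark: both forwarder types log their own resource usage to the _introspection index. A sketch, assuming the default introspection inputs are enabled and the forwarder ships its internal indexes; the host name is a placeholder:

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    host=my_forwarder data.process=splunkd
| timechart span=5m avg(data.pct_cpu) AS avg_cpu_pct avg(data.pct_memory) AS avg_mem_pct
```

Running the same inputs through a UF and an HF on comparable hosts and comparing the two timecharts gives a direct answer for your data mix.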

Regex: replacing '<' and '>' inside an XML <Text> tag

I have a log that contains some XML that I'm extracting into fields and then removing all empty tags at index time in props.conf. I'm having trouble with the stray `<` and `>` characters that can appear between the `<Text>` tags. I would like to replace them with `&lt;` and `&gt;`. Example data: `<Text>Some < text >and >more text<</Text>` I can use this regex with a capture group, but how can I do a replace only on this group? `<Text>(.*?)<\/Text>` Is it possible to do this in props.conf with SEDCMD?
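SEDCMD cannot scope a substitution to a capture group directly, but if the stray brackets never themselves form a `<Text>` or `</Text>` tag, lookarounds can exclude the real tags. A sketch, with the sourcetype name as a placeholder, and assuming your version's SEDCMD accepts PCRE lookarounds (worth verifying on sample data first):

```
# props.conf on the instance that parses this data (indexer or heavy forwarder)
[your_sourcetype]
# replace any '<' that does not start an opening or closing Text tag
SEDCMD-escape_lt = s/<(?!\/?Text>)/&lt;/g
# replace any '>' not preceded by "Text" (both <Text> and </Text> end in "Text>")
SEDCMD-escape_gt = s/(?<!Text)>/&gt;/g
```

If lookarounds turn out not to be supported on your version, a transforms.conf rewrite of _raw or a search-time `replace()` are possible fallbacks.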

Hiding panels with 2 depends

Hi all, I'm running Splunk 6.5.0 and trying to set up a dashboard with 2 checkboxes. The first one is to activate/deactivate panels in general ("On"), and the second one is to choose dedicated panels ("Panel 1", "Panel 2", default 1,2). If I add the depends option to the panel, I get an error message: "Invalid attribute name". The second problem is that I have no idea how to set up the depends with the different values (Panel 1 and Panel 2). Does someone have an idea how to solve this, and what should the syntax for the two depends look like? According to the documentation, multiple depends should be separated by a comma. Thanks in advance. Best regards, Thorsten
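A sketch of one way to wire this up, sidestepping the multi-choice checkbox by giving each panel its own single-choice checkbox: an unchecked checkbox leaves its token unset, and a panel hides while any token in its `depends` list is unset. Token and label names here are assumptions:

```
<fieldset>
  <input type="checkbox" token="show_all">
    <label>Panels on/off</label>
    <choice value="on">On</choice>
  </input>
  <input type="checkbox" token="show_p1">
    <label>Panel 1</label>
    <choice value="on">Show</choice>
  </input>
  <input type="checkbox" token="show_p2">
    <label>Panel 2</label>
    <choice value="on">Show</choice>
  </input>
</fieldset>
<row>
  <!-- comma-separated: the panel shows only when ALL listed tokens are set -->
  <panel depends="$show_all$,$show_p1$">
    <!-- panel 1 content -->
  </panel>
  <panel depends="$show_all$,$show_p2$">
    <!-- panel 2 content -->
  </panel>
</row>
```

`depends` is valid on `<panel>`, `<row>`, and individual visualization elements, so if the attribute was added somewhere else, that may explain the "Invalid attribute name" error.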

Token condition empty/not empty used in a panel

Hi all, I'm trying to have some panels whose searches depend on a token. Let me explain. I have a token called turn, where you can select between 4 turns (T1, T2, T3, T4). When first entering the dashboard, I want it to be null. How can I tell the panels to go on with their searches even though this token is null? The search works without any value for it; the token just adds a filter when the user wants one, and otherwise it stays a simple search. Here is the search I put in the panel:

index="XXXXXXX" | dedup CreatedAt, Ticket, Task | rename UserId as Id | lookup user.csv Id | search [|inputlookup user.csv | search role="SDO " Id=$id$ | dedup shift | fields shift] | search Team="DA" $shift$ | eval tout=if(match('role', "SDO"), "Operators", "Analysts") | top limit=10 tout | eval percent=round(percent,1)$time_angora.earliest$$time_angora.latest$$turn$

In the search I look up the shift of this user and then look for the information in that shift; the turns work like shifts. The person using Splunk can add more shifts ($shift$), but by default it should search with just the one belonging to the user. The part not working right now is the panel running while $turn$ is empty. (I don't know how to use JavaScript or any other language in Splunk yet.) Thank you in advance. Best regards.
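A sketch of the usual workaround: give the input a "match everything" default so the token always has a value and the panel search can run immediately. This assumes the turn token comes from a dropdown and that the search references it as a field filter (for example `| search turn="$turn$"`) instead of appending the bare token:

```
<input type="dropdown" token="turn">
  <label>Turn</label>
  <!-- the default choice matches everything, so the search runs before a turn is picked -->
  <choice value="*">All</choice>
  <choice value="T1">T1</choice>
  <choice value="T2">T2</choice>
  <choice value="T3">T3</choice>
  <choice value="T4">T4</choice>
  <default>*</default>
</input>
```

With this, an unselected turn degrades to `turn="*"`, which filters nothing; the field name `turn` is an assumption to adapt to your data.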

Why does the duplicated raw data of *nix increase as time proceeds?

Hello, I'm using the Add-on for *nix, with Universal Forwarder 6.4 as the forwarder and Splunk Enterprise 6.5 as the indexer. I found that the search results of "index=os" are heavily duplicated, so I investigated the details:

- the "os" rawdata file named "journal.gz" includes duplicates of all field data
- the number of duplicated events increases as time proceeds
- if I change the index name from "os" to something else, the data are not duplicated
- if I forward to a stand-alone Splunk Enterprise 6.4 configured the same as above, the data are not duplicated

This issue occurs only in the "os" index, so I'm guessing the cause of the duplication lies somewhere in the indexing process involving the *nix add-on, but I have no idea how to solve this problem, and I would prefer not to work around it at search time (e.g. with the dedup command). Could you kindly suggest any ideas? Thank you.
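A quick diagnostic sketch that may help narrow things down: count literal-identical events per input to see whether the duplicates come from one source or from everything in the index. Time range kept short on purpose:

```
index=os earliest=-1h
| stats count BY host, source, _raw
| where count > 1
| stats sum(count) AS duplicated_events dc(_raw) AS distinct_dup_raws BY host, source
```

If only specific sources duplicate, that points at the corresponding *nix add-on inputs rather than the index itself.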

Why is Splunk not indexing logs located in a directory containing multiple levels of subdirectories?

I set up a data input to index all the data from the following path: /pipeline/node. This directory contains multiple subdirectories, and each of these subdirectories contains 5 further subdirectories and a log file. Splunk is picking up and indexing the log files in the first level of subdirectories, but it is not indexing the log files contained within the other five subdirectories. Any thoughts on why this is happening?
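For comparison, a minimal monitor stanza: `[monitor]` inputs follow subdirectories recursively by default, so it is worth checking that `recursive` has not been set to false and that no whitelist/blacklist pattern excludes the deeper paths (both patterns match against the full path). A sketch:

```
# inputs.conf
[monitor:///pipeline/node]
recursive = true
# optional: pick up only .log files at any depth
whitelist = \.log$
```

Also worth a look: `splunk list inputstatus` on the forwarder shows which files the tailing processor has seen and why any were skipped.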

How to display a message in dashboard like "PROCESS IS DOWN" when certain events showed up in log?

I want to display a message in a dashboard, like "PROCESS IS DOWN", when certain events show up in the log: "sm waiting for threads to terminate" OR "sm proceeding with shutdown" OR "sm is down" OR "Thread pool stopped; proceeding with shutdown" OR "Released sm". If any of these strings occurs in the log, I want to show a message with the host and that the process is down. Thanks!
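A sketch of a panel search for this, with the index as a placeholder; it looks for any of the shutdown strings and emits one message row per affected host:

```
index=your_index ("sm waiting for threads to terminate" OR "sm proceeding with shutdown"
    OR "sm is down" OR "Thread pool stopped; proceeding with shutdown" OR "Released sm")
| stats count BY host
| eval message = host . ": PROCESS IS DOWN"
| table message
```

Rendered in a table or single-value panel, no matching events simply yields no rows; the message text and field names are assumptions to adjust.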

What happens when changing deployment servers?

In order to test configurations before going to production, we want to point forwarders at our development environment to get inputs and parsing configured properly. The idea is to script an install that points to our dev deployment server, which will then deliver apps, based on server classes, that point the forwarder at the indexer cluster and set some default forwarder settings. These apps will be similar across both environments but with addresses and the like changed. From there we would develop a TA in dev for the inputs and parsing for the sourcetype and create a new server class to receive the TA (it would also have to go into master-apps so the parsing portion is done on the indexers). Once everything looks good, we want to copy over the TA, create the server class in production, and change the forwarder's deploy poll to the production deployment server. When the deployment server address is changed, do the apps from the dev environment automatically get replaced by the ones from the production environment?
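For reference, the repoint itself is just the target-broker setting in deploymentclient.conf on the forwarder (the hostname below is a placeholder). Whether dev-delivered apps are then removed, kept, or replaced depends on how the production server classes match the client, so it is worth testing the switch with a single forwarder first:

```
# deploymentclient.conf on the forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = prod-deployserver.example.com:8089
```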

Forescout: How to gather a list of non-compliant hosts over a span of 30, 60, or 90 days?

We're currently running the following search, and it's returning every instance of when a host was non-compliant. Instead, we want a list of hosts that were non-compliant over a span of any 30 days (then 60 and 90) and report on:

- host
- number of non-compliant days (given that it's 30+, 60+, 90+; those that fall under 90, then 60, should be inclusive in the count for 30)
- current status (compliant, non-compliant)
- the initial date it first existed in the system
- the first date the host went non-compliant for the given span

index=forescout sourcetype=fs_av_compliance status="non-compliant" | table src_ip,sourcetype,status,_time,src_nt_host | eval days30=relative_time(now(),"-30d@d")

We'll want to do this for more than one sourcetype too, but we're starting with just one. Thanks for any help! Trista
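A sketch of one way to get the per-host day counts and span buckets; the current status and the "first existed" date would need a second pass over all statuses, so they are left out here. Field names follow the search above:

```
index=forescout sourcetype=fs_av_compliance status="non-compliant" earliest=-90d@d
| bin _time span=1d
| stats dc(_time) AS noncompliant_days min(_time) AS first_noncompliant BY src_nt_host
| eval span=case(noncompliant_days >= 90, "90+",
                 noncompliant_days >= 60, "60+",
                 noncompliant_days >= 30, "30+",
                 true(), "under 30")
| convert ctime(first_noncompliant)
```

Binning to one day and then taking `dc(_time)` counts distinct non-compliant days rather than raw events, which keeps the 30+/60+/90+ buckets inclusive the way you describe.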

Which is the better sourcetype (iis or web)

On my IIS system I have an inputs.conf for the IIS logs that pulls the logs in as sourcetype=web; then on the search head I have a sourcetype rename from web to iis. Should I remove the rename to sourcetype=iis? Can I search for either of the sourcetypes and see the same data? I am having trouble with sourcetype=web and event breaking. Would it be better to collect the logs as sourcetype=iis and do the breaking on the new sourcetype, rather than use the default [web] sourcetype breaking from the default props.conf?
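If you do switch, a sketch of assigning the sourcetype at input time (standard IIS log path assumed), so that any parse-time settings keyed to `[iis]`, such as line breaking and timestamping, actually apply instead of the `[web]` defaults plus a search-time rename:

```
# inputs.conf on the IIS host
[monitor://C:\inetpub\logs\LogFiles]
sourcetype = iis
disabled = 0
```

Events already indexed keep their old sourcetype, so during a transition you can cover both with `sourcetype=web OR sourcetype=iis`.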

Splunk App for Windows Infrastructure: Created "sendtoindexer" app per documentation, but why is outputs.conf not on the deployment client?

I have created a "sendtoindexer" app following the Splunk App for Windows Infrastructure 1.4 documentation, and I cannot seem to get the outputs.conf file to push down to the deployment client. The app shows as installed from the deployment server, but I do not see any outputs.conf file on the deployment client; the rest of the app's folders and files exist on the client, just no outputs.conf. I have restarted Splunk services on the deployment client, reloaded the deployment server, and restarted Splunk on the deployment server, but outputs.conf still will not push down. Thanks in advance.
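A sketch of what the app should look like on the deployment server, with group and indexer names as placeholders. Worth double-checking that the file really sits under `$SPLUNK_HOME/etc/deployment-apps/sendtoindexer/local/` (or `default/`) on the deployment server itself, then re-running `splunk reload deploy-server` so the app checksum changes and clients re-download it:

```
# $SPLUNK_HOME/etc/deployment-apps/sendtoindexer/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```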

How to edit my search to find all source_value_id fields where the d value is less than zero?

When running this search: `"low_seq=" "source_session_id" "-1177" | stats by _time,source_session_id,low_seq | delta low_seq as d | where d<0 | table _time, source_session_id, low_seq, d` I get what I want for one **source_session_id**:

_time | source_session_id | low_seq | d
1:00:01 PM | -1177 | 0 | -4584

However, I have multiple **source_session_id** values, so without "-1177" the search does not work: `"low_seq=" "source_session_id" | stats by _time,source_session_id,low_seq | delta low_seq as d | table _time, source_session_id, low_seq, d`. How do I make it work so it finds all **source_session_id** where d<0? I tried this: `"low_seq=" "source_session_id" | stats values(low_seq) by source_session_id`. It groups nicely for all **source_session_id**, but I could not make `delta` work on top of `stats values` to get d<0. Thank you.
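`delta` has no by-clause, but `streamstats` can emulate a per-session delta. A sketch building on the search above, assuming the events can first be ordered by session and time:

```
"low_seq=" "source_session_id"
| sort 0 source_session_id, _time
| streamstats current=f window=1 last(low_seq) AS prev_low_seq BY source_session_id
| eval d = low_seq - prev_low_seq
| where d < 0
| table _time, source_session_id, low_seq, d
```

`current=f window=1` makes `last(low_seq)` the previous event's value within each source_session_id, which is what `delta` computes globally; `sort 0` removes the default 10,000-row sort limit.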

How to show the current ERROR trend as a single value

Hi, I am trying to display an ERROR count as a single value, using the search below:

index=myindex ERROR co_name=$co_name$ env_name=$env_name$ | timechart span=1m count | eval _time=_time-now()%3600 | sort +_time
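For what it's worth, a simpler sketch: since Splunk 6.3 the Single Value visualization computes its own trend indicator and sparkline when fed a timechart, so the trailing `eval`/`sort` should not be needed. The time window is an assumed example:

```
index=myindex ERROR co_name=$co_name$ env_name=$env_name$ earliest=-60m@m
| timechart span=1m count
```

The trend and sparkline options live in the single value panel's format menu, where you can also pick the comparison period.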

All JSON fields are extracted properly using spath, but why not with INDEXED_EXTRACTIONS in props.conf?

**Issue:** Not all fields are extracted properly when using the INDEXED_EXTRACTIONS=json setting in my *props.conf*, but the fields ARE correctly extracted when using spath.

**Goal:** Extract each value in my JSON string with the name *MyKey1*, *MyKey2*, ..., *MyKeyN* as the field and the corresponding value as its value. In my example below, I should have Splunk fields called:

- *please_help{}.MyKey1* with a corresponding value of *25* **and** *150*
- *please_help{}.MyKey2* with a corresponding value of *50*
- *please_help{}.MyKey3* with a corresponding value of *75*
- *please_help{}.MyKey4* with a corresponding value of *100*
- *please_help{}.MyKey5* with a corresponding value of *125*

Currently, withOUT the spath command in my search (`index=json_test sourcetype=JSON_CAN`) I see about 77 total fields, so some of the JSON fields (*please_help{}.MyKey1*) are actually getting extracted properly. When I DO use spath in my search (`index=json_test sourcetype=JSON_CAN | spath`) I get over 2100 fields extracted properly.

props.conf:

[JSON_CAN]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true

limits.conf:

[spath]
extract_all = true
extraction_cutoff = 7000
max_mem_usage_mb = 200

**Repeating** JSON-formatted data:

{ "splunky_goodness": "Happy Trails", "please_help": [{ "MyKey1": "25", "Candy": "0", "Cane": "0", "Rap": "0" }, { "MyKey2": "50", "Candy": "0", "Cane": "0", "Rap": "0" }, { "MyKey3": "75", "Candy": "0", "Cane": "0", "Rap": "0" }], "community": { "john": "1800-200-11121", "jacob": 0, "jingleheimer": "schmidt" } }

{ "splunky_goodness": "Yellow Submarine", "please_help": [{ "MyKey1: "150", "Candy": "0", "Cane": "0", "Rap": "0" }, { "MyKey4": "100", "Candy": "0", "Cane": "0", "Rap": "0" }, { "MyKey5": "125", "Candy": "0", "Cane": "0", "Rap": "0" }], "community": { "john": "1800-200-11121", "jacob": 0, "jingleheimer": "schmidt" } }

Any help is appreciated. Please comment if more information is needed!
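Two things that may be worth checking, plus a search-time fallback to compare against. First, for structured data INDEXED_EXTRACTIONS runs at the input layer, so the `[JSON_CAN]` props stanza must also exist on whatever forwarder reads the file, not only on the indexer and search head. Second, the sample's second event contains `"MyKey1: "150"` with a missing closing quote on the key; if that reflects the real data rather than a paste typo, the indexed JSON parser may give up on that event while `spath` degrades more gracefully. A sketch of the search-time alternative:

```
# props.conf on the search head: extract JSON at search time
# (equivalent to running | spath automatically). If indexed extractions
# remain active on the forwarder, expect duplicated field values.
[JSON_CAN]
KV_MODE = json
```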

How to move data to a different index without creating duplicates or holes?

I've got some data in an index with an intentionally short retention time, but some of the data in that index is of higher value and I want to retain it for a longer period. I've been looking at setting up a scheduled search that uses `collect`, but I don't see a mechanism to run a scheduled search such that there's a high level of fidelity in the data: no duplicates and no holes. Since this data is more valuable, we want to make sure we get it all! Is there a simple mechanism to do such a thing? I'm thinking I want to make the base search reach far enough back in time to not miss any data that has shown up since the last run, then deduplicate against the existing data in the target index (which might be complicated without _raw), and then `collect` whatever is left into the target.
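A sketch of the standard pattern: schedule the search over a lagged, snapped, non-overlapping window instead of deduplicating after the fact. Consecutive windows tile exactly, so duplicates are avoided by construction, and the lag (5 minutes here, an assumption to tune against your indexing delay) protects against late-arriving events. Index and sourcetype names are placeholders:

```
index=shortlived sourcetype=valuable earliest=-65m@m latest=-5m@m
| collect index=longterm
```

Scheduled on an hourly cron, each run covers exactly the 60 minutes ending 5 minutes before the run; the `@m` snapping keeps consecutive windows aligned with no overlap and no gap.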

Replacing carriage return with special character

Hi, the results of a search return a computer name and IP address separated by a carriage return:

ComputerName [carriage return] IPAddress

I would like to either separate them into 2 separate fields, ComputerName and IPAddress, OR replace the carriage return with a special character.
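Both options in one sketch, assuming the combined value lives in a field called `result` (adjust the field name). The first line splits it into two fields; the second replaces the line break with a pipe character:

```
... | rex field=result "(?<ComputerName>[^\r\n]+)[\r\n]+(?<IPAddress>.+)"

... | eval result = replace(result, "[\r\n]+", "|")
```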

Upgraded from Splunk 6.3.1 to 6.5, why does the scheduled report not work?

After we updated Splunk from 6.3.1 to 6.5, the scheduled report doesn't work and we're unable to display an embedded report. We checked the logs and found this error message:

12/1/16 10:26:07.095 AM 2016-12-01 10:26:07,095 +0800 ERROR sendemail:1255 - 'module' object has no attribute 'which_pdf_service'
host = cn0-splunk-p06
source = /opt/splunk/var/log/splunk/python.log
sourcetype = splunk_python

Please help us solve this.

Splunk App for AWS: After updating app to 5.0.0, why is AWS config no longer getting data?

I updated to the Splunk App for AWS 5.0.0. I have aws:description data from the update time (I believe I generated a snapshot), but no subsequent config changes after the upgrade, even though things were changed in AWS.

Why am I unable to paste search queries into Splunk Web search bar?

Hi, I have an issue with a Splunk deployment on Windows (Server 2008 R2 Datacenter), with the end-user assets being Windows 10 VDI devices running IE11. When I try to paste anything from the clipboard into the main Splunk search bar, nothing happens. I can paste fine into the normal URL address bar, into Notepad, into any other application, and even into other text fields in Splunk (Settings, dashboard filter text fields), but not the main Splunk Web search bar. This can be quite annoying when you have to either save the search into savedsearches.conf on the backend or manually type it all out again.