Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles
Browse latest View live

Bamboo add-on cannot get the index or sourcetype in search; logs show "Not writing this event because it is already indexed"

I have configured the Bamboo add-on. In the logs it connects to the Bamboo server over HTTP and generates the API URL, but at the end it throws "Not writing this event because it is already indexed". I am also not able to get the index or sourcetype in search. This is just a POC; both Splunk Enterprise and Bamboo are installed on my local system.

How to get a complete pull of Qualys KnowledgeBase from the TA

Hi guys, I'm trying to get the full knowledge base downloaded from Qualys onto my search heads. I currently have the "basic" knowledge base downloading fine. However, I don't know which parameters in the Python scripts to change to download the full knowledge base. I've tried changing detail = Basic to All, and the new full knowledge base is downloaded into the tmp file; however, it's not parsed into the qualys_kb.csv file. The errors I'm seeing in the logs look like this (the same exception is logged twice, with the field names in a different order):

```
Exception while parsing. dict contains fields not in fieldnames: 'CVSS_REPORT_CONFIDENCE',
'CVSS_ACCESS', 'CVSS_REMEDIATION_LEVEL', 'CVSS_EXPLOITABILITY', 'CVSS_IMPACT',
'CVSS_AUTHENTICATION' ::
Traceback (most recent call last):
    return self.writer.writerow(self._dict_to_list(rowdict))
  File "/opt/splunk/lib/python2.7/csv.py", line 148, in _dict_to_list
ValueError: dict contains fields not in fieldnames: 'CVSS_REPORT_CONFIDENCE', 'CVSS_ACCESS',
'CVSS_REMEDIATION_LEVEL', 'CVSS_EXPLOITABILITY', 'CVSS_IMPACT', 'CVSS_AUTHENTICATION'

TA-QualysCloudPlatform: 2017-09-22T13:18:23Z PID=1552 [MainThread] DEBUG: TA-QualysCloudPlatform
[knowledge_base] - Exception while parsing. dict contains fields not in fieldnames:
'CVSS_AUTHENTICATION', 'CVSS_REMEDIATION_LEVEL', 'CVSS_IMPACT', 'CVSS_REPORT_CONFIDENCE',
'CVSS_EXPLOITABILITY', 'CVSS_ACCESS' ::
Traceback (most recent call last):
    return self.writer.writerow(self._dict_to_list(rowdict))
  File "/opt/splunk/lib/python2.7/csv.py", line 148, in _dict_to_list
ValueError: dict contains fields not in fieldnames: 'CVSS_AUTHENTICATION',
'CVSS_REMEDIATION_LEVEL', 'CVSS_IMPACT', 'CVSS_REPORT_CONFIDENCE', 'CVSS_EXPLOITABILITY',
'CVSS_ACCESS'
```

I've looked at the full knowledge base XML file in the temp directory and it does not contain any of the field names mentioned above (nor does the basic version), so I'm unsure where they're coming from. Any Python wizards out there who can help find what I need to change in the script for the full pull?
I've not explained it too well, so if anyone needs more info I'll try to explain it a bit better! Cheers!
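The ValueError above comes from Python's `csv.DictWriter`: the full KnowledgeBase rows carry extra CVSS_* keys that are not in the `fieldnames` list the script's writer was built with. One fix is to add the missing CVSS_* names to the script's fieldnames list; another is to pass `extrasaction="ignore"` so unknown keys are silently dropped. A minimal sketch of the behavior (the field names and row below are hypothetical, not the TA's actual ones):

```python
import csv
import io

# Hypothetical subset of KB fields the writer is configured with.
fieldnames = ["QID", "TITLE", "SEVERITY"]

# A row parsed from the *full* KnowledgeBase download, carrying extra
# CVSS_* keys that the "basic" field list does not know about.
row = {"QID": "105943", "TITLE": "Example QID", "SEVERITY": "3",
       "CVSS_ACCESS": "NETWORK", "CVSS_AUTHENTICATION": "NONE"}

buf = io.StringIO()
# extrasaction="ignore" makes DictWriter drop unknown keys instead of
# raising "ValueError: dict contains fields not in fieldnames".
writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

In the TA itself the equivalent change would be wherever the script constructs its `DictWriter`; extending the fieldnames list keeps the CVSS data in qualys_kb.csv, while `extrasaction="ignore"` merely stops the crash.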

How to make a stats sum return zero when the value is not found

Hi all, I have a search followed by stats:

```
Search ns=app1 Error | stats sum(eval(AcctNo="'1000394'")) as "FailedOccurences"
```

If that AcctNo is not found by the search, FailedOccurences is not displayed at all. How can I get a value of zero for FailedOccurences when the AcctNo is not found in the search result? Thank you.
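One common pattern (a sketch, assuming AcctNo is an extracted field) is to count matching events with a conditional eval, which yields 0 rather than a missing field when nothing matches, with fillnull as a safety net:

```
search ns=app1 Error
| stats count(eval(AcctNo=="1000394")) AS FailedOccurences
| fillnull value=0 FailedOccurences
```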

Extract value within quotes and plot on timechart

Hi, I am very new to Splunk and I have events like this:

```
"salary": "2000"
```

I have thousands of events like this, and I would like to extract only the integer 2000 and plot the value on a timechart. My search:

```
source="tcp:8050" | search salary| rex _raw=".*(?P\d+).*" | timechart count as "SAL"
```

This only returns the number of events containing salary, not the actual value of 2000. Help achieving this is appreciated! Thanks!
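In Splunk's rex, named capture groups use the `(?<name>...)` syntax and the target field is given as `field=_raw`; once salary is extracted as its own field, timechart can aggregate its value instead of counting events. A sketch, assuming the events really contain `"salary": "2000"` pairs:

```
source="tcp:8050"
| rex field=_raw "\"salary\":\s*\"(?<salary>\d+)\""
| timechart avg(salary) AS SAL
```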

How can I change the interval time between performance measures?

Hi, I'm editing nmon.conf to increase the time between performance measures. The default is:

```
fifo_interval="60"
fifo_snapshot="1440"
```

but I want to change the interval to **fifo_interval="180"**. The TA-nmon app is installed on my deployment server. After editing the nmon.conf file, I run:

```
/opt/splunk/bin/splunk reload deploy-server -class *serverclassname* -debug
```

I created nmon.conf in the 'local' directory of the app and edited the file there (on the deployment server). I have also tried editing nmon.conf directly in the 'default' directory, but it hasn't worked. My Linux servers received the app correctly, with the new value of the fifo_interval parameter, but performance measures are still collected every 60 seconds. Do I need to change this parameter in any other file? Thanks
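For reference, a minimal local override along the lines described above, using only the two settings quoted from the default file, would look like:

```
# <TA-nmon app>/local/nmon.conf on the deployment server (sketch)
fifo_interval="180"
fifo_snapshot="1440"
```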

Create a table with _time and a custom field

I'm lost. I'm trying to capture _time and UserName (a custom field) from one search and use the _time to find events within 1 second in another search. I would then report the result in a simple table: _time, UserName, real_ip_address. Any help would be appreciated! Thank you.
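One way to sketch this (the index names idx_a and idx_b are hypothetical stand-ins for the two searches) is to bucket both searches into whole-second bins and join on the bin. This matches events that fall within the same second; note it misses pairs that straddle a second boundary:

```
index=idx_a <first search>
| eval t=floor(_time)
| fields t UserName
| join t [ search index=idx_b <second search> | eval t=floor(_time) | fields t real_ip_address ]
| eval _time=t
| table _time UserName real_ip_address
```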

Add my CSV file into Splunk

I have: 1 search head, 1 deployment server, 4 indexers (non-clustered). This is the raw CSV file (the top line is the header):

```
date,name,capacity,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,compression_virtual_capacity,compression_compressed_capacity,compression_uncompressed_capacity
1470207600,myserver,62.00TB,16.67TB,163.02TB,41.80TB,45.24TB,262,86.72TB,34.97TB,69.88TB
1470207600,MigrationPool_8192,0,0,0.00MB,0.00MB,0.00MB,0,0.00MB,0.00MB,0.00MB
1470207600,MigrationPool_512,0,0,0.00MB,0.00MB,0.00MB,0,0.00MB,0.00MB,0.00MB
1470294000,myserver,62.00TB,16.67TB,163.02TB,41.81TB,45.25TB,262,86.72TB,34.99TB,69.88TB
1470294000,MigrationPool_8192,0,0,0.00MB,0.00MB,0.00MB,0,0.00MB,0.00MB,0.00MB
```

I have props.conf and transforms.conf on my UF alongside my inputs.conf.

/opt/splunkforwarder/etc/apps/myapp/local/inputs.conf:

```
[monitor:///usr/local/bin/reports/storage/emc_capacity.out]
disabled = false
index = zz_test
sourcetype = VMAX_capacity

[monitor:///usr/local/bin/reports/storage/tustin_svc_capacity_rpts.out]
disabled = false
index = zz_test
sourcetype = SVC_capacity

[monitor:///usr/local/bin/reports/storage/idc_svc_capacity_rpts.out]
disabled = false
index = zz_test
sourcetype = SVC_capacity
```

/opt/splunkforwarder/etc/apps/myapp/local/props.conf:

```
[VMAX_capacity]
REPORT -VMAX_capacity = VMAX_storage_csv

[SVC_capacity]
REPORT -SVC_capacity = SVC_storage_csv
```

/opt/splunkforwarder/etc/apps/myapp/local/transforms.conf:

```
[SVC_storage_csv]
DELIMS = ","
FIELDS = "date","name","capacity","free_capacity","virtual_capacity","used_capacity","real_capacity","overallocation","compression_virtual_capacity","compression_compressed_capacity","compression_uncompressed_capacity"

[VMAX_storage_csv]
DELIMS = ","
FIELDS = "Date","Array","Useable","Used","UsedPercent","UsedGrowth","Free","Subscribed","SubscribedMax","SubscribedPercent","SubscribedGrowth","Snapshot","compression","ExpansionNeeded"
```

When I run this search on my search head:

```
index=zz_test Sourcetype=SVC_capacity
```

the data is not parsed. My questions: do props.conf and transforms.conf need to be on my indexers? On the UF? Do my props.conf and transforms.conf look correct? Any assistance much appreciated.
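Two things worth checking: REPORT/DELIMS extractions are search-time settings, so props.conf and transforms.conf need to be on the search head (the universal forwarder does not parse events), and the REPORT setting name takes no space before the hyphen. A sketch of the corrected props.conf stanzas, reusing the class names above:

```
# props.conf on the search head (sketch)
[SVC_capacity]
REPORT-SVC_capacity = SVC_storage_csv

[VMAX_capacity]
REPORT-VMAX_capacity = VMAX_storage_csv
```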

Custom fields in alert action

Hello Support Team, I am trying to integrate the ServiceNow app/add-on with Splunk and would like to enable the ServiceNow Event Integration trigger action. However, I am wondering if there is any way for us to customize the fields. Currently we see five fields: Node, Type, Resource, Severity and Description. Does anyone know how to add or change the fields, similar to the ones in the ServiceNow Security Operations add-on for Splunk? The ServiceNow Security Incident has the fields Title, CI/Host, Category, Subcategory, Group, Source, Priority and Description. We do not want to use the security incident, as that is not enabled on the ServiceNow side yet. Kindly let us know if there is a way. Thank you.

Extraction failed when extracting "Caused by" entries: "try removing one or more fields. Start with extractions that are embedded within longer text strings"

Hi, I would like to extract the entries below from the error log (I am posting only some text from the entire log4j log here). Multiple "Caused by" entries will appear for each error, for example:

```
Caused by: org.apache.camel.FailedToCreateRouteException
Caused by: org.apache.camel.ResolveEndpointFailedException
Caused by: org.apache.camel.TypeConversionException
Caused by: org.apache.camel.RuntimeCamelException
Caused by: java.lang.NumberFormatException
```

For the TypeConversionException, for example, the description looks like "Error during type conversion from type: java.lang.String to the required type: int with value .........more text". When I try to extract these using the IFX, it fails with the error: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." Can someone suggest how to extract these "Caused by" entries and the related descriptions?
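A starting point (a sketch; the regex and field names are illustrative, not tested against the full log) is a multi-match rex that captures each exception class and any trailing description into multivalue fields:

```
... | rex max_match=10 "Caused by:\s+(?<caused_by>[\w.$]+(?:Exception|Error))(?::\s+(?<caused_desc>[^\r\n]+))?"
```

max_match lets rex apply the pattern repeatedly, so each "Caused by" line in an event lands in the caused_by / caused_desc multivalue fields.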

Question on TZ setting in props.conf

In the slave-apps directory on our 2 peers/indexers we have a custom app, created by the previous admin, that sets TZ to UTC for network devices that are on UTC. I am now adding a new data source (AD security logs) using UFs on the DCs, and our DCs are all in the Eastern time zone, so I need to set an Eastern TZ in props.conf. My questions:

1) Is this the right stanza for the Eastern time entry?

```
[WinEventLog://Security]
TZ = US/Eastern
```

I understand I will have to put this in the master-apps folder on the cluster master and then apply the config bundle.

2) Will this require a reboot of any peers?
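One caveat: `[WinEventLog://Security]` is an inputs.conf-style stanza; in props.conf, TZ is applied per sourcetype, source, or host. A sketch, assuming the events carry the classic `WinEventLog:Security` sourcetype:

```
# master-apps/<app>/local/props.conf on the cluster master (sketch)
[WinEventLog:Security]
TZ = US/Eastern
```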

Only run search query if token is filled

I have a dashboard with a textbox used for a search. I would think this is simple, but I don't see any examples of it out there. I only want to run the dashboard query if the token (textbox) has a value in it: if it does, run the query, which works on its own; if not, don't run it. My attempt:

```
|eval result = if(isnull($LotNumber$),"", sourcetype=xyx |search $LotNumber$) | table result
```
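In Simple XML this is usually done with the `depends` attribute rather than eval: a panel (or search) with `depends="$LotNumber$"` stays inactive until the token is set, and a text input with no default leaves its token unset until the user enters a value. A sketch (form layout and labels are illustrative):

```xml
<form>
  <fieldset submitButton="true">
    <input type="text" token="LotNumber">
      <label>Lot Number</label>
    </input>
  </fieldset>
  <row>
    <!-- Panel is hidden and its search does not run until $LotNumber$ is set -->
    <panel depends="$LotNumber$">
      <table>
        <search>
          <query>sourcetype=xyx $LotNumber$ | table _time _raw</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```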

Splunk real-time data input from HTML page not working

I've been trying to find a way for Splunk to input real-time data, and I came across the REST API, thinking it could be a solution to my problem. But after I set up a REST API input based on the instructions from Splunk, no data is being added to Splunk. Could anyone let me know what I did wrong here? For testing purposes, I used a Wikipedia site as the endpoint URL and did not set up any kind of handler; I just want to know if this REST API input can get me any type of information from the site. Below is a picture of the REST API input I set up: ![alt text][1] [1]: /storage/temp/217622-capture.png

Splunk ML - forecast time series

Hi there, I started using Splunk machine learning and am trying to understand how to use Forecast Time Series. Can someone please explain what holdback and future timespan are? When I read the documentation and try to set the values, it throws the error "External search command 'predict' returned error code 1." I am also trying to understand the methods LLP5, LL, LLP and LLT. It would be great if someone could help me understand Forecast Time Series in the ML toolkit.
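As a rough sketch of the two settings: holdback withholds the most recent N points from model fitting so the forecast can be checked against them, and future_timespan is how many points beyond the end of the data to predict. Among the algorithms, LL is local level, LLT is local level with trend, LLP is seasonal local level, and LLP5 combines LLT and LLP. The assistant ultimately runs the predict command, along these lines (the field and span are hypothetical; the "error code 1" message can surface when the series has too few points for the chosen settings):

```
... | timechart span=1d count AS events
| predict events algorithm=LLP5 holdback=10 future_timespan=20
```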

CPU Utilization by a process

I am trying to get CPU usage for a specific process in Windows. My search looks like this:

```
host=host1 AND sourcetype="Perfmon:Process" AND counter="% Processor Time" AND process_name="server*"
| table _time, counter, process_name, Value
```

My result shows mostly 100 for Value, which is not really true. Windows runs on a VM. The result looks like this:

```
2017-09-22T14:40:28.000-0400  % Processor Time  server    100
2017-09-22T14:39:43.000-0400  % Processor Time  server    100
2017-09-22T14:37:28.000-0400  % Processor Time  server    100
2017-09-22T14:32:58.000-0400  % Processor Time  server#1  100
2017-09-22T14:32:13.000-0400  % Processor Time  server    100
2017-09-22T14:38:13.000-0400  % Processor Time  server    100
2017-09-22T14:31:28.000-0400  % Processor Time  server#1  11.30968265
2017-09-22T14:21:43.000-0400  % Processor Time  server    100
2017-09-22T14:18:43.000-0400  % Processor Time  server#1  0.105369743
2017-09-22T14:36:43.000-0400  % Processor Time  server    0.034732856
2017-09-22T14:35:58.000-0400  % Processor Time  server#1  0.14049302
2017-09-22T14:29:13.000-0400  % Processor Time  server    100
2017-09-22T14:28:28.000-0400  % Processor Time  server#1  84.84122861
2017-09-22T14:20:58.000-0400  % Processor Time  server#1  100
2017-09-22T14:16:28.000-0400  % Processor Time  server    100
2017-09-22T14:14:58.000-0400  % Processor Time  server#1  100
```

What should I do? Why is it pulling all the 100s? 80% of events show 100. Is it an agent config issue?

Help with an eval statement?

I am trying to build a base search for the field message.device.category, which has 3 values: desktop, mobile and tablet. Using `eval`, I am trying to split the field into separate values:

```
search
| eval MobileUsers=if("message.device.category" == "Mobile", "Mobile",NULL)
| eval DesktopUsers=if("message.device.category" == "Desktop", "Desktop",NULL)
| eval tabUsers=if("message.device.category" == "Tablet", "Tablet",NULL)
| event stats count(DesktopUsers), count(MobileUsers), count(tabUsers)
```

It's not returning any values in the counts.
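A few issues stand out: in eval, double quotes make a string literal, so `"message.device.category" == "Mobile"` compares two constants; field names containing dots must be wrapped in single quotes. Also, `eventstats` is one word, `null()` is the function form of NULL, and if the stored values are lowercase (desktop, mobile, tablet) the comparisons must match that case. A sketch of the corrected pipeline:

```
search
| eval MobileUsers=if('message.device.category' == "mobile", "Mobile", null())
| eval DesktopUsers=if('message.device.category' == "desktop", "Desktop", null())
| eval tabUsers=if('message.device.category' == "tablet", "Tablet", null())
| eventstats count(DesktopUsers) AS Desktops, count(MobileUsers) AS Mobiles, count(tabUsers) AS Tablets
```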
