Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Splunk_TA_nessus stalls collecting from Security Center

Running the Splunk_TA_nessus (5.1.1) against Security Center works fine and collects event data correctly; however, it frequently (approx. weekly) stalls and requires either disabling/enabling the input or restarting the HF. The Python process appears to still be running, but it simply stops trying to connect to SC. It feels like the script is getting stuck somewhere. Has anyone else experienced the same?

How do searches work internally?

Hello, Splunkers. I have been looking for information about how Splunk searches work internally. Are they translated into another programming language, like Python? What is the workflow from the moment you execute a search until Splunk returns the results? I need a deeper understanding of how Splunk works. Thanks and regards.

How to hide x-axis values in the chart

I build a chart from `totaltime` (first highlight in the sample data below) and `duration` (second highlight) with:

```
| eval totaltime=mvindex(data,0)
| eval duration=mvindex(data,1)
| table totaltime duration
```

and would like to strip off the values populating the x-axis (values such as 18.054, 18.250, 1408.651, etc.). Attached is the screenshot ![alt text][1]

    **18.054:** [Full GC (System.gc()) 503347K->61384K(32156672K), **0.1963421** secs]
    **18.250:** [Full GC (System.gc()) 61384K->60461K(32156672K), **0.2033912** secs]
    **1408.651:** [GC (Metadata GC Threshold) 2828876K->81862K(32156672K), **0.0540273** secs]
    **1408.705:** [Full GC (Metadata GC Threshold) 81862K->29711K(32156672K), 0.0434399 secs]
    1412.793: [GC (Metadata GC Threshold) 1573326K->77878K(32156672K), 0.0172465 secs]
    **1412.810:** [Full GC (Metadata GC Threshold) 77878K->67453K(32156672K), 0.1902059 secs]

  [1]: /storage/temp/216814-values-striffoff-screenshot.png

What is the full process to migrate a full Splunk (7.0) installation from one server to another (Windows Server 2012 R2)?

My source Splunk server (version 7.0) is a physical Windows 2008 R2 machine. My target is a virtual Windows Server 2012 R2 machine. I want to migrate the full Splunk solution (apps, indexes, ...) from the source to the target.

Forwarding data to Splunk Free

I am trying to forward logs from a Linux server to a Splunk Free indexer instance. I know my forwarder is set up correctly because I can forward data to a fully licensed Splunk indexer OK, but when I switch the target server to the free-license indexer I don't receive anything. Q: Is it possible to use a universal forwarder to send data to a Splunk Free indexer (not a trial license)? I have seen a good few answers, but they all talk about forwarding FROM Splunk Free, not forwarding TO Splunk Free. I have seen the "More about Splunk Free" page (http://docs.splunk.com/Documentation/Splunk/latest/Admin/MoreaboutSplunkFree), but again the restrictions seem to be about forwarding **from**, not **to**, Splunk Free.
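For reference, the forwarder side of this setup is a plain outputs.conf target and contains nothing license-aware; the host and port below are placeholders:

```
# outputs.conf on the universal forwarder (sketch; server address is hypothetical)
[tcpout]
defaultGroup = free_indexer

[tcpout:free_indexer]
server = 10.0.0.5:9997
```

Since the same stanza works against the licensed indexer, one thing worth double-checking is the receiving side of the free instance (e.g. whether a listening port is actually enabled in its inputs.conf) rather than the forwarder config.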

How to find users from a lookup list, showing both those who have and have not logged in

I have a lookup list `yyyy` from which I want to show the latest login (based on max login time) and also the users that did not log in. How can I restructure the query to show both in one table?

```
index=main sourcetype=xxxx [ inputlookup yyyy | fields account_name | rename account_name as query ]
| search ACTION_NAME=LOGON RETURNCODE=0
| stats max(_time) as login_time by user, source
| eval login_time=strftime(login_time, "%Y/%m/%d %H:%M:%S")
| table user, source, login_time
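One possible shape for this — a sketch only, assuming the lookup's `account_name` values match the event field `user`: start from the lookup so users with no events survive, then left-join the login stats onto it:

```
| inputlookup yyyy
| rename account_name as user
| join type=left user
    [ search index=main sourcetype=xxxx ACTION_NAME=LOGON RETURNCODE=0
      | stats max(_time) as login_time by user ]
| eval status=if(isnull(login_time), "never logged in", "logged in")
| eval login_time=if(isnull(login_time), "-", strftime(login_time, "%Y/%m/%d %H:%M:%S"))
| table user, login_time, status
```

Users present in the lookup but absent from the events come through the left join with a null `login_time`, which the `eval` turns into a "never logged in" row.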

Data Retention Policy

Hi All, we have set the data retention to 1 year (365 days) on the cluster master, but when we search an index in the Search & Reporting app we can still fetch data older than a year. For audit purposes we need to track the exact data retention in effect; beyond that period there should be no logs. Is there a search query that can pull the exact retention configured for all indexes? These are the settings on the cluster master under /opt/splunk/etc/master-apps/mc_master_indexes/local:

```
[splunk@mon-prod-cm-1 local]$ cat indexes.conf
[default]
frozenTimePeriodInSecs = 31536000
maxTotalDataSizeMB = 20971520

[volume:hot]
path = /data/hot
maxVolumeDataSizeMB = 2831156

[volume:cold]
path = /data/cold
maxVolumeDataSizeMB = 12268340
```

We need your quick help to get the exact retention that has been set for all indexes.
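A search along these lines reports the retention each indexer actually has in effect (a sketch — it assumes the search head's `rest` command can reach all indexers via `splunk_server=*`):

```
| rest /services/data/indexes splunk_server=*
| table splunk_server, title, frozenTimePeriodInSecs, maxTotalDataSizeMB
| eval retention_days = round(frozenTimePeriodInSecs / 86400, 1)
```

Worth noting for the audit question: `frozenTimePeriodInSecs` is applied per bucket, and a bucket is only frozen once its *newest* event exceeds the retention period, so individual events older than a year can remain searchable until the whole bucket ages out.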

SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676) when trying to use add-on.

I have this add-on installed on a heavy forwarder. I just installed the 0 GB ingestion license (to allow the KV store to run), but I am now getting SSL errors?!? I can't see any SSL configuration element in the app/docs, so I am not sure where to start. Full error message:

```
2017-10-16 09:36:03,597 ERROR pid=869 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/input_module_ms_o365_message_trace.py", line 57, in collect_events
    r = requests.get(microsoft_trace_url, auth=requests.auth.HTTPBasicAuth(global_microsoft_office_365_username, global_microsoft_office_365_password))
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/requests/api.py", line 70, in get
    return request('get', url, params=params, **kwargs)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/requests/api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ta_ms_o365_reporting/requests/adapters.py", line 497, in send
    raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)
```

Why is this failing? And where do I start to fix it? Thanks.

How do I connect my Java program to the splunkd server?

I'm using the code below in Java:

```java
import com.splunk.*; // The entry point to the client library

public class Connection {
    public static void main(String[] args) {
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");
        loginArgs.setHost("apl10865gtapp14");
        loginArgs.setPort(8089);
        loginArgs.setScheme("http");
        HttpService.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
        Service service = Service.connect(loginArgs);
        for (Application app : service.getApplications().values()) {
            System.out.println(app.getName());
        }
    }
}
```

Below is the error I get:

```
Exception in thread "main" java.lang.RuntimeException: Connection refused: connect
    at com.splunk.HttpService.send(HttpService.java:409)
    at com.splunk.Service.send(Service.java:1293)
    at com.splunk.HttpService.post(HttpService.java:308)
    at com.splunk.Service.login(Service.java:1122)
    at com.splunk.Service.login(Service.java:1101)
    at com.splunk.Service.connect(Service.java:187)
    at Connection.main(Connection.java:13)
Caused by: java.net.ConnectException: Connection refused: connect
    at java.net.DualStackPlainSocketImpl.connect0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
    at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
    at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
    at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.SocksSocketImpl.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at java.net.Socket.connect(Unknown Source)
    at sun.net.NetworkClient.doConnect(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(Unknown Source)
    at com.splunk.HttpService.send(HttpService.java:403)
    ... 6 more
```

Not sure what the problem is — can you please help me out with this?

How to display the 3rd row's value in the 2nd row?

I have the following table:

    Month      Value
    September  12
    October    78
    November   88
    December   132

I want to display the value of October in September, November in October, and so on. The final output should look like this:

    Month      Value
    September  78
    October    88
    November   132
    December   (value of January)

How can this be achieved? Please help. Thanks in advance.
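One way to sketch this in SPL (assuming the table is already sorted by month, oldest first): reverse the rows so that each row's successor becomes its predecessor, pull the previous value with `streamstats`, then reverse back:

```
... | table Month Value
| reverse
| streamstats current=f window=1 first(Value) as NextValue
| reverse
| eval Value=NextValue
| fields - NextValue
```

December comes out empty here, since January's value is not in the data.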

Splunk App for Network Topology Visualisation

I am looking for a Splunk app to visualise network topology over a world map. I want to be able to display network nodes on the map by their geographic coordinates, draw the links between the nodes, and show other attributes, like alerts, traffic volumes, etc. I am not able to find anything like that on Splunkbase. Do you have any suggestions?

Dell Defender SYSLOG Field Extraction

This post is about combining field extractions. I am working with **Dell Defender syslogs** and want to extract different types of messages, but those syslogs differ in length and content, so **I have trouble building one regex to cover them all**. Also, if you use an OR statement you can't bind multiple parts to one field unless you rename it (regex error message: *two named subpatterns have the same name*).

**This regex would cover all raw events:**

    (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?.*)

**Message examples** (each with the prefix `Oct 12 15:54:14 IP Oct 12 15:54:14 SERVERNAME SERVER_NAME:`):

1. Radius Request from IP:PORT Request ID: REQUEST_ID
2. Radius request: Access-Request for USER_NAME from IP:PORT through NAS:AccessNode Request ID: REQUEST_ID Session ID: SESSION_ID
3. Radius response: Authentication Acknowledged User-Name: USER_NAME, Request ID: REQUEST_ID Session ID: SESSION_ID
4. User USER_NAME authenticated with token TOKENNUMBER Session ID: SESSION_ID
5. Radius response: Access-Challenge User-Name: USER_NAME Request ID: 5 Session ID: SESSION_ID
6. Requesting response USER_NAME
7. Info: User USER_NAME found as user CN=USER NAME,OU=Windows 7,OU=Users,OU=ORGANIZATION_UNIT,OU=WHERE,DC=COUNTRY,DC=DOMAIN,DC=local Session ID: SESSION_ID

**What I need are fields like these, to enable proper filtering and searching:**

- Timestamp (Oct 2 15:54:11 or Oct 12 15:54:11)
- IP (x.x.x.x)
- EventTime (Oct 2 15:54:11 or Oct 12 15:54:11)
- ServerName (SERVER_NAME)
- Type (Radius Request, Radius request, Radius response, authenticated, Requesting response, Info)
- Message (covering additional information)
- UserName (USER_NAME)
- RequestID (REQUEST_ID)
- SessionID (SESSION_ID)

**Here are the regexes that work for the individual lines:**

1. Radius Request from IP:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+ \w+) (?.*) (?\d+.\d+.\d+.\d+):(?\d+) Request ID: (?\w+)

2. Radius request: Access-Request:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+ \w+): (?\w+-\w+) \w+ (?\w+) \w+ (?\d+.\d+.\d+.\d+):(?\d+) through (?\w+:\w+) Request ID: (?\w+) Session ID: (?\w+)

3. Radius response:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+ \w+): (?\w+ \w+) User-Name: (?\w+), Request ID: (?\w+) Session ID: (?\w+)

4. User authenticated:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):User (?\w+) (?\w+) (?\w+ \w+ \w+) Session ID: (?\w+)

5. Radius response: Access-Challenge:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+ \w+): (?\w+-\w+) User-Name: (?\w+) Request ID: (?\w+) Session ID: (?\w+)

6. Requesting response:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+ \w+) (?\w+)

7. Info: User ...:

       (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\d+\.\d+\.\d+\.\d+) (?[A-Z][a-z][a-z]( \w+| \w) \d+:\d+:\d+) (?\w+) (?\w+):(?\w+): User (?\w+) (?.*) Session ID: (?\w+)

**Questions:** Is there a way to combine all 7 regex statements (plus the one that covers all) into one extraction? If so, what would it look like? What would be your way to configure this kind of input? Do I need to follow an order so that one regex doesn't shadow the others by accident?
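As a sketch of one way to wire this up (the sourcetype, transform names, and regex fragment below are assumptions, not the actual config): Splunk can apply several search-time extractions to the same sourcetype, so each message type can keep its own regex with its own named groups as a separate report class:

```
# props.conf (sourcetype name is hypothetical)
[dell:defender]
REPORT-defender = defender_prefix, defender_request, defender_access_request

# transforms.conf -- one stanza per message type
[defender_request]
REGEX = Radius Request from (?<DestIP>\d+\.\d+\.\d+\.\d+):(?<Port>\d+) Request ID: (?<RequestID>\w+)
```

Every class in the `REPORT-` list is applied to every event; a stanza whose regex does not match simply extracts nothing. That sidesteps the "two named subpatterns have the same name" error from a single giant OR regex, and it means ordering mainly matters if two stanzas could both match and write the same field.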

How to add a JavaScript file to a Splunk dashboard?

Hi, I am trying to load a JS file in my XML dashboard, but the JS data is not showing up and I am facing difficulties. I placed the file under the path (appname/appserver/static) and referenced it in the XML, but I am still not getting the JS data. Please help. Thanks and regards, Venu
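For reference, a minimal sketch of how a SimpleXML dashboard usually references a static JS file (the file name is a placeholder):

```
<form script="my_dashboard.js">
  <label>My Dashboard</label>
  <!-- rows and panels go here -->
</form>
```

After dropping a new file into appserver/static, Splunk typically needs a restart or a visit to the `/_bump` endpoint (e.g. http://host:8000/en-US/_bump) before it serves the new static asset — a browser hard-refresh alone is often not enough.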

Splunk Deployment Server and deployment client : error checksum

Hi All, when we try to deploy a new update of a deployment app, we get this error message on the deployment client:

    10-13-2017 18:26:28.736 +0200 WARN ClientSessionsManager - ip=10.22.192.187 name=3B9E25A9-75EE-4C7F-9978-DD6DE87F24FC Updating record for sc=monitoring-windows_fwd app=monitoring-windows_fwd: action=Install result=Fail checksum=10403769077432640354

How can we resolve this issue? Regards

Error in 'eval' command: The expression is malformed. Expected )

This is my search query REST API call:

```
curl -k -u admin:password https://api.splunk.ext.com/services/search/jobs/export \
  -d search="search index=cpaws source=PFT buildNumber=17 type=REQUEST | stats p98(wholeduration) as current | appendcols [search index=cpaws source=PFT buildNumber=16 type=REQUEST | stats p98(wholeduration) as n] | appendcols [search index=cpaws source=PFT buildNumber=15 type=REQUEST | stats p98(wholeduration) as m] | eval previousVal=(n+m)/2 | eval success=if(previousVal*0.1 <= (current),\"Good\",\"BAD\") | table current,n,m,previousVal,success" \
  -d earliest_time=-80h@h -d latest_time=now
```

I am getting this error:

    Error in 'eval' command: The expression is malformed. Expected ).

Please help me out with this. Thanks

How to show only certain results in the statistics table and hide the intermediate results?

Hello, I would like to hide the intermediate results (**Sales**, **Costs**, **CM**) and only show the final eval statement; I am only using the earlier fields for the calculation of the last one.

```
source="Dataset_Finance.csv" host="sample" index="dataintegration" sourcetype="SampleFinance" ObjectAccount="4*" OR ObjectAccount="5*"
| eval Sales=if(ObjectAccount="411010",DomesticAmount,0), Costs=if(like(ObjectAccount,"5%"),DomesticAmount,0)
| stats sum(Sales) as Sales, sum(Costs) as Costs
| eval CM=Sales+Costs
| eval CMPer=(CM/Sales)*100
```

Also, I noticed that I cannot put a `by` clause after the eval. Should I only include it in the stats section, and how can I then categorize CMPer by another value?
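A sketch of both pieces (`Region` below is a hypothetical grouping field, not one from the dataset): drop the intermediate columns with `fields` at the end, and do any grouping inside `stats`, since `eval` has no `by` clause of its own — the grouping then carries through every later `eval`:

```
...
| stats sum(Sales) as Sales, sum(Costs) as Costs by Region
| eval CM=Sales+Costs
| eval CMPer=(CM/Sales)*100
| fields Region, CMPer
```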

Problem Loading Modules

Hello, I cloned one of my views onto another Splunk machine. The original works fine on the new machine, but the clone doesn't load, and the Chrome browser console says: ![alt text][1] [1]: /storage/temp/217854-captura.png Can anyone give me an HTML/JS solution? Thanks

Splunk Fields Extract Usage Performance

Below is my sample log format:

    %timestamp% com_java_package1.subpackage someMessage exceptionMessage
    %timestamp% someText com_java_package2.v1.subpackage exceptionMessage
    %timestamp% com_java_package3_v2.subpackage exceptionMessage
    %timestamp% someText someOtherText someVeryBigText com_java_package4.subpackage someMessage exceptionMessage

Usage 1:

    index=someIndex sourcetype=someSourceType (packageName=com_java_package1 OR packageName=com_java_package2)

Usage 2:

    index=someIndex sourcetype=someSourceType ("com_java_package1" OR "com_java_package2")

The logs are in very bad shape, so I cannot write a generic regex to extract a packageName field; it would take a lot of effort to cover every combination. My question: do I really need a field extraction for packageName? Is there any potential performance benefit of one usage over the other?

How to set an alert schedule?

We have 15 different hosts and an enabled alert with the condition that we alert when a host is down. It has to check every 5 minutes; if any host is down it should alert, and it should not alert a second time for the same host. How should I do this? **Example:** at 10:00 AM HostA is down, so we alert on it. If at 10:01 HostA and HostB are down, I should get an alert saying HostB is down, but I should not receive an alert for HostA again for 15 minutes. Here is what we did — is it correct? ![alt text][1] [1]: /storage/temp/216818-splunk-alert.png
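For reference, per-host throttling is what alert suppression keyed on a field is for; a savedsearches.conf sketch (the alert name is a placeholder, and this assumes the search returns one result per down host with a `host` field):

```
[Host Down Alert]
cron_schedule = */5 * * * *
alert.track = 1
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 15m
```

Combined with triggering "for each result", a newly down HostB still fires its own alert while HostA remains suppressed for the 15-minute window — which matches the 10:00/10:01 example above. The same settings appear in the UI under the alert's throttle options.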

How to convert job duration to HH:MM:SS

I am trying to create a dashboard for the job status, and I want to convert the job duration to HH:MM:SS. I use the Splunk search below, which gives results, but when the duration is more than 24 hours it outputs `1+10:29:14.000000`, and with that I cannot sort the long-running jobs. I want the duration always in HH:MM:SS. Any suggestions?

```
My search
| eval starttime = strptime(start,"%m/%d/%Y %H:%M:%S")
| eval endtime = strptime(end,"%m/%d/%Y %H:%M:%S")
| eval Diff=tostring((endtime-starttime),"duration")
| search Status!=RU
| dedup job
| table job start end Diff
```

So instead of `1+10:29:14.000000` I want to output it as `34:29:14`.
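One way to sketch this is to skip `tostring(..., "duration")` entirely: keep the difference in raw seconds and format the pieces yourself, zero-padding minutes and seconds with `substr` so they stay two digits while hours grow without bound:

```
| eval Diff = endtime - starttime
| eval h = floor(Diff / 3600), m = floor((Diff % 3600) / 60), s = floor(Diff % 60)
| eval DiffHMS = h . ":" . substr("0" . m, -2) . ":" . substr("0" . s, -2)
```

For a 34h 29m 14s difference this yields `34:29:14`. For sorting, it is safer to sort on the numeric `Diff` field rather than the string, since the string ordering would only hold while all hour values have the same width.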

