Channel: Questions in topic: "splunk-enterprise"

join options

Could you please explain the following three options of the join command? I could not understand them from the documentation.

usetime
Syntax: usetime=<bool>
Description: A Boolean value that indicates whether to use time to limit the matches in the subsearch results. Used with the earlier option to limit the subsearch results to matches that are earlier or later than the main search results.
Default: true

earlier
Syntax: earlier=<bool>
Description: If usetime=true and earlier=true, the main search results are matched only against earlier results from the subsearch. If earlier=false, the main search results are matched only against later results from the subsearch. Results that occur at the same time (second) are not eliminated by either value.
Default: true

overwrite
Syntax: overwrite=<bool>
Description: Indicates whether fields from the subresults overwrite the fields from the main results, if the fields have the same field name.
Default: true
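
A minimal sketch of how these options interact (the index names, sourcetypes, and the order_id join field here are hypothetical, chosen only to illustrate):

index=web sourcetype=orders
| join order_id usetime=true earlier=true overwrite=false
    [ search index=app sourcetype=payments ]

With usetime=true and earlier=true, each main-search event is matched only against subsearch events whose _time is earlier (events in the same second are still kept); with earlier=false it would match only later subsearch events. overwrite=false means that when a field name exists in both, the value from the main search is kept instead of being replaced by the subsearch value.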

Cron question

Does this work? `*/10 19-23,0-6 * * 1-5` I have reason to believe that it only works for 19-23 and not for 0-6. It checks out fine on crontab.guru: https://crontab.guru/#*/10_19-23,0-6_*_*_1-5 We should have received an alert last night after midnight; checking the OOH alerts sent over the past week, it seems the only alerts we have were triggered before midnight, and none were sent after midnight. Is there a query I can run to see if the cron schedule triggered at 01:50? It would also be helpful to see whether an email was dispatched. It's strange, because our daytime alert has picked this up.
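
One way to check whether the scheduler actually fired at 01:50 is to search the internal scheduler logs; a minimal sketch (the saved search name is a placeholder for your alert's name):

index=_internal sourcetype=scheduler savedsearch_name="My OOH Alert"
| table _time status run_time result_count

Run this over the night in question: a row with status=success and result_count=0 would mean the schedule fired but the alert had nothing to report, while no row at all would point at the cron schedule itself.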

lookup table

I want to display the result in a graph based on the results of the following two joined queries. I can store these values in two lookup tables temporarily. Is there a way to read values from more than one lookup table at the same time? Or is there any other option in this situation? I may have to add more queries like these in the future.

index="index1" sourcetype="production-response"
| eval running_ok = if(response_status="Reponse test success","2","0")
| sort 0 - _time
| join running_ok
    [ search index="index1" sourcetype="production-monitor"
    | eval running_ok = if(monitor_status="Monitor running","2","0")
    | sort 0 - _time ]
| stats count(eval(running_ok="0")) AS result
| eval redCount = if(result > 2, result, 0)
| eval greenCount = if(result <= 2, result, 0)
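
One common way to read more than one lookup in a single search is to start from one lookup and append the others; a minimal sketch (the lookup file names are hypothetical, standing in for wherever each query's results were written with outputlookup):

| inputlookup production_response_results.csv
| append
    [| inputlookup production_monitor_results.csv ]

Each appended subsearch can pull a different lookup file, and the combined rows can then be charted or aggregated with stats. More lookups added later would just become additional append clauses.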

Cron Expression: How to make this expression: every 2 hours, 6am to 8pm, every day

Hello guys, can you help me with this cron expression: every 2 hours, 6am to 8pm, every day? I tried the one below, but it's not working. */120 6-20 * * * Thanks! Cheers,
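
The minute field only accepts values 0-59, so a 120-minute step there does not behave as "every two hours"; the step belongs in the hour field instead. A likely fix, assuming the alert should run at the top of the hour:

0 6-20/2 * * *

This fires at 06:00, 08:00, 10:00, and so on through 20:00, every day.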

Extract value behind java.lang

Hi, I am trying to retrieve the information behind the value "at java.lang. ..." I tried the following command, but without result:

java.lang | rex field=_raw "at java.lang. (?.*)"

Example:

2016-11-25T01:13:01.393Z ERROR 15204582 --- [][][] --- [DiscoveryClient-HeartbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_OAUTH/Server:oauth:port(-1655386256) - was unable to send heartbeat!
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:111) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.sendHeartBeat(EurekaHttpClientDecorator.java:89) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$3.execute(EurekaHttpClientDecorator.java:92) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.sendHeartBeat(EurekaHttpClientDecorator.java:89) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.DiscoveryClient.renew(DiscoveryClient.java:827) ~[eureka-client-1.4.6.jar!/:1.4.6]
at com.netflix.discovery.DiscoveryClient$HeartbeatThread.run(DiscoveryClient.java:1383) [eureka-client-1.4.6.jar!/:1.4.6]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522) [na:1.8.0-internal]
at java.util.concurrent.FutureTask.run(FutureTask.java:277) [na:1.8.0-internal]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153) [na:1.8.0-internal]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [na:1.8.0-internal]
at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]

Thanks for your help.
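
Two likely problems with the rex as written: the capture group has no name, and the literal space after "java.lang." does not appear in the event. A sketch that should extract everything after "at java.lang." (the index and the field name javalang are hypothetical placeholders):

index=your_index "java.lang"
| rex field=_raw "at java\.lang\.(?<javalang>\S+)"

Against the example event this would put something like Thread.run(Thread.java:785) into the javalang field.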

set hostname

Hi all, I have to create a Heartbeat Alert that contains three fields:
- TimeStamp
- HostName
- Message
My problem is HostName, because I have a Search Head Cluster with three search heads, so I cannot use a fixed value, and I don't know how to set in a search the name of the search head that is actually executing the search. Is there a way to do this? Thank you. Bye. Giuseppe
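
One way to pick up the name of the instance that runs the search is the local server-info REST endpoint; a minimal sketch (the TimeStamp/Message evals are only illustrative, to map the output onto the three fields above):

| rest /services/server/info splunk_server=local
| fields serverName
| rename serverName AS HostName
| eval TimeStamp=strftime(now(), "%Y-%m-%d %H:%M:%S"), Message="Heartbeat"

Because splunk_server=local is resolved at run time, each cluster member reports its own serverName rather than a fixed value.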

Logfile falling behind...

Hi, we recently enabled syslog for DNS devices, including query events. I checked this morning, and the events are about 4 hours behind. I'm looking for advice on how to fine-tune this. This particular logfile is huge: 254130888464  Nov 25 08:29  system-ftcnsrtp1.log - and it is growing rapidly. We have lots of files on this server, but none remotely close to the size of this one. When I run the "inputstatus" command, that feed is in batch mode. I don't see any messages about thruput warnings from this heavy forwarder.

/apps/logs/2016/11/25/system-ftcnsrtp1.log
file position = 133013964565
file size = 9895002842
parent = /apps/logs/2*/*/*/system-ftc*.log
percent = 1344.25
type = reading (batch)

Thoughts?
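
Even without explicit warnings, it can be worth checking whether the forwarder is sitting at its thruput ceiling; a sketch against the internal metrics (assuming the heavy forwarder's _internal data reaches your indexers, and with the host placeholder filled in by you):

index=_internal source=*metrics.log group=thruput host=<your_heavy_forwarder>
| timechart avg(instantaneous_kbps) max(instantaneous_kbps)

A flat line pinned at a constant value over the backlog period would suggest the forwarder's maxKBps limit, rather than file reading, is the bottleneck.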

How to collect "Analytic and Debug logs" from windows event log

Hi. I am trying to get Splunk to read an "AD FS 2.0 Tracing/Debug" log. When looking at the log in the Windows Event Viewer, you have to enable viewing by right-clicking on "Applications and Services Logs", selecting View, and enabling "Show Analytic and Debug Logs". The event log properties show the name as "AD FS 2.0 Tracing/Debug". I paste that name into inputs.conf, restart the universal forwarder, and expect the logs to show up in my Splunk instance; sadly, no logs show up. I have verified there are log entries when looking through the Windows Event Viewer. I do get both the Security log and the Admin log from the same server. Do I have to do something different when dealing with debug logs? My inputs.conf file for the server:

[default]
evt_dc_name =
evt_dns_name =

[WinEventLog://AD FS 2.0/Admin]
index = wineventlog
disabled = 0

[WinEventLog://Security]
index = wineventlog
disabled = 0

[WinEventLog://AD FS 2.0 Tracing/Debug]
index = wineventlog
disabled = 0

[WinEventLog://AD FS 2.0 Tracing-Debug]
index = wineventlog
disabled = 0

Thanks for the help.

How to get the first event from a search AND get 1 event in a timechart by source?

Hi all, how can I get the first event from a search AND get only 1 event in a timechart by source (and not "by source, span interval")? If I try this query:

index=blabla sourcetype=blabla source=blabla1 "MySpecificFilter" | table _time, source, mySpecificValue

I can get, for example, 10 events in source blabla1, 15 in source blabla2, and so on. I want to select, for each source, the first one, and chart them with a timechart command. Thanks in advance for your help.
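
One way to keep only the earliest event per source before charting; a minimal sketch (index, sourcetype, and field names are taken from the question, and the 1-day span is just an assumption):

index=blabla sourcetype=blabla "MySpecificFilter"
| stats earliest(_time) AS _time earliest(mySpecificValue) AS mySpecificValue BY source
| timechart span=1d latest(mySpecificValue) BY source

The stats step collapses each source down to its first event, so the timechart then plots at most one point per source rather than one per span per source.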

Lots of Splunk internal connections in CLOSE_WAIT status

Hi, we have a Splunk app which exposes a REST endpoint for other applications to request metrics. The main piece of Python code inside the method is:

service = self.getService()
searchjob = service.jobs.create(searchquery)
while not searchjob.is_done():
    time.sleep(5)
reader = results.ResultsReader(searchjob.results(count=0))
response_data = {}
response_data["results"] = []
for result in reader:
    if isinstance(result, dict):
        response_data["results"].append(result)
    elif isinstance(result, results.Message):
        mylogger.info("action=runSearch, search = %s, msg = %s" % (searchquery, results.Message))
search_dict["searchjob"] = searchjob
search_dict["searchresults"] = json.dumps(response_data)

The dependent application invokes the REST API at scheduled intervals. There are close to 150 calls spread across various time intervals. Note: at any point of time there will be a maximum of 6 search requests.

**Normal scenarios**
- Remote application and my Splunk app are both up and running - everything is fine.
- If I have to restart the remote application, and after the restart both are up and running - everything is fine.
- If I have to restart my Splunk process, and after the restart both applications are up and running - everything is fine.

**Problematic scenario:**
The problem starts when the system where the remote application runs is rebooted. After the reboot, the remote application starts making calls to the Splunk application, and in about 60 minutes the number of CLOSE_WAIT connections reaches 700+; eventually the Splunk system starts throwing socket errors. Splunk Web also becomes inaccessible.

Additional info:
- The remote application is a Python application written using the Tornado framework. It runs inside a Docker container that is managed by Kubernetes.
- ulimit -n on the Splunk system shows 1024. (I know that per the Splunk recommendation this is low, but I would like to understand why the issue occurs only after a remote system reboot.)
- During normal times, the searches take on average 7s to complete. When the remote machine is rebooted, they take on average 14s to complete. (It may not make sense to relate a remote system reboot to Splunk search performance on the Splunk system, but that's the trend.)

The CLOSE_WAIT connections are all internal TCP connections:

tcp 1 0 127.0.0.1:8089 127.0.0.1:37421 CLOSE_WAIT 0 167495826 28720/splunkd
tcp 1 0 127.0.0.1:8089 127.0.0.1:32869 CLOSE_WAIT 0 167449474 28720/splunkd
tcp 1 0 127.0.0.1:8089 127.0.0.1:37567 CLOSE_WAIT 0 167497280 28720/splunkd
tcp 1 0 127.0.0.1:8089 127.0.0.1:33086 CLOSE_WAIT 0 167451533 28720/splunkd

Any help or pointers is highly appreciated. Thanks, Strive

Error setting up Amazon Kinesis Modular Input

Hello, I'm looking for guidance on configuring the Amazon Kinesis Modular Input. I tried configuring it using the UI, with the "Data Output" option set to "Http Event Collector", and provided all the details. But when I hit Next, I get the following error message: Encountered the following error while trying to update: Validation Failed : A Kinesis connection can not be establised with the supplied propertys.Reason : INSTANCE I was, however, able to configure this with the "Data Output" option set to "STDOUT". But even though I enabled the input, it doesn't seem to be pulling data from the Kinesis stream into Splunk. I'm not sure what this STDOUT data output means. Can anyone point me in the right direction? Is there some documentation on how to configure and use this add-on? Thanks, Richard

I want to get the count of forwarders that are reporting from each application/Workspace

Hi Splunkers, I want to get the count of forwarders that are reporting from each application/workspace. Example: I have created 4 apps/workspaces for 4 different teams, and now I want to get the count of forwarders that are reporting for each of them. Is there any search which can give me this information in a single run? Thanks in advance, Thippesh
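
If each team's workspace maps to its own index, one sketch of a single search that counts reporting hosts per index (the workspace-to-index mapping is an assumption on my part):

| tstats dc(host) AS forwarder_count WHERE index=* BY index

This gives the number of distinct hosts seen per index over the selected time range; if "forwarder" should mean the forwarding Splunk instance rather than the originating host, the tcpin_connections group in the internal metrics.log is another place to count from.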

Specifying a value in one place, using it in several searches

I have several saved searches that contain `where vehicle_distance
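
If the goal is to define the threshold once and reuse it across the saved searches, a search macro is one common approach; a minimal sketch (the macro name max_vehicle_distance and its value are hypothetical):

macros.conf:
[max_vehicle_distance]
definition = 500

Each saved search can then use `... | where vehicle_distance < \`max_vehicle_distance\``, and changing the definition in one place updates every search that references it.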

How to do a basic search in Splunk (Anomali)?

I installed the Anomali app from Splunk and uploaded 2 sets of data: (1) Network.log, which has several IP addresses I am interested in, and (2) web.log, which has several URLs that I hope will show up after my search. However, after I click "Generating Network Summary", "Generating Web Summary", or "Generating and Uploading Summaries", and then click "Run" next to "Generating and Uploading Summaries", I am getting "Failed to run the search". I was told I should be able to see the search result and a graph. So, what am I doing wrong?

Is there any way to do stats counting over multiple time frames

Is there any way to do stats counting over multiple time frames? I am trying to replace something written in Perl that outputs to .xls format. I want to count IP addresses in each subnet; I have about 3500 subnets that I want to summarize across multiple time frames (current, -30 days, -60 days, -90 days). I have done the first part by doing a CIDR lookup to subnet and then counting. I am looking for alternate ideas to accomplish the same thing. Help? Tim

( index=network_dns OR index=network_bro ) earliest=-30d
| rex field=named_message "client (?<client_ip>\d+\.\d+\.\d+\.\d+)"
| fields _time id_orig_h id_resp_h client_ip
| eval ip=coalesce(id_orig_h, id_resp_h, client_ip)
| regex ip="\d+\.\d+\.\d+\.\d+"
| dedup ip
| lookup cidr_ranges subnet AS ip OUTPUT subnet
| eval ip_class=if(`is_my_network(ip)`, "MINE", "External")
| stats count(ip) as count by subnet, ip_class
| where ip_class="MINE"
| where subnet!="UNKNOWN"
| sort subnet
| table count subnet
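
One way to get several windows from a single search is to count conditionally by event age; a rough sketch building on the search above (the rex, regex, and is_my_network filters from the original would stay in front of the stats, and the exact bucket boundaries and output column names are assumptions):

( index=network_dns OR index=network_bro ) earliest=-90d
| eval ip=coalesce(id_orig_h, id_resp_h, client_ip)
| lookup cidr_ranges subnet AS ip OUTPUT subnet
| eval age_days=(now()-_time)/86400
| stats dc(eval(if(age_days<=30, ip, null()))) AS last_30d
        dc(eval(if(age_days<=60, ip, null()))) AS last_60d
        dc(eval(if(age_days<=90, ip, null()))) AS last_90d
        BY subnet

Using dc() per window also removes the need for the separate dedup, since each bucket counts distinct IPs on its own.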

Indexer peers down due to wrong home path

While pushing the cluster bundle from the cluster master to the indexers, there was a wrong home path in the indexer app being pushed. As a result, the peers could not restart and went down, and each indexer was manually restarted after removing the app from the slave-apps folder. Is that a good practice, or will there be any loss of buckets in the process?

what does "Error in TsidxStats": Could not find datamodel: TS_optic" mean and how do I fix it?

I am new to the Splunk world, but I was trying to use Splunk -> Anomali and a search, and got the following errors: (1) Error in "TsidxStats": Could not find datamodel: TS_Optic (2) The search job has failed due to an error. You may be able to view the job in the "Job Inspector". My question is: what is the datamodel TS_Optic? How do I create one?
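
The error generally means the search (here, presumably one shipped with the app) references a data model named TS_Optic that either does not exist or is not visible to the app/user running the search. A quick way to check which data models you can actually see; a minimal sketch:

| datamodel

With no arguments, datamodel returns one row describing each data model available to you, and `| datamodel TS_Optic` would show just that one if it exists. If it is missing, the model usually has to be provided by the app (reinstalling it or fixing its permissions) rather than built by hand.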
