Channel: Questions in topic: "splunk-enterprise"

Subsecond minspan in auto-span timechart?

(How) can I create an auto-span timechart that has a subsecond minimum span, such as 0.001s?

### Background to this question

My dashboards show logs from systems that process many transactions per second. I have deliberately designed the dashboards to "work" regardless of the duration specified by the time picker. Timecharts "auto span": they automatically infer the span from the data time range. (The X-axis of time-based charts automatically adjusts to match the duration.) These dashboards can be used to analyze logs across a wide variety of durations. In practice, users might initially be interested in a time range of several minutes, and then [zoom in][1] to analyze progressively narrower time ranges, down to subsecond time ranges.

As far as I can tell, the minimum span that you can specify in Splunk (6.6.3) is one second, `minspan=1s`. Smaller, subsecond values, such as `minspan=0.001s`, cause visualizations to display the error "Error in 'timechart' command: Invalid value for minspan".

Here, I am (re)developing in Splunk some dashboards that I have already developed in Kibana (5.5). In Kibana, I can zoom time-based charts to a minimum span of 0.001s. For example, I can zoom so far in that buckets are marked on the X-axis as 14:34:45.242, 14:34:45.243, 14:34:45.244, etc. In Splunk, the granularity "bottoms out" at 1 second; if I have a time range of 2 seconds, I'm looking at two buckets. Two fat bars. That's disappointing.

Am I missing something? Can I somehow increase this granularity to match what I can do in Kibana?

[1]: https://answers.splunk.com/answers/568487
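For what it's worth, `timechart` does accept explicit subsecond `span` values (the documented timescale units for `span` include `us`, `ms`, `cs`, and `ds`), even though `minspan` rejects them. A minimal sketch, with a placeholder index name, that buckets at one millisecond:

    index=my_transactions | timechart span=1ms count

So one possible workaround, assuming your dashboard can compute an appropriate span (for example, from the time picker's tokens), is to pass `span` explicitly instead of relying on auto span.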

Change search results color based on the head of the contents

Hi all, we made a form in Splunk. One of the columns we write to holds our update status (including the update time and content). Now we only want to give the updated rows a background color. Example contents:

    9/14: XXXXXXXXXXXXXXXXXXXX
    9/13: XXXXXXXXXXXXXXX
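A sketch of one possible approach, using Simple XML table color formatting with an expression-type color palette; the field name `update`, the pattern, and the colors are all assumptions, and whether `match()` is available in this expression context may depend on your Splunk version:

    <format type="color" field="update">
      <colorPalette type="expression">if(match(value, "^9/14"), "#65A637", "#FFFFFF")</colorPalette>
    </format>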

Forwarder Cluster

Hi, when you have a splunkforwarder on servers in a cluster (active/inactive), what can you do to stop the splunkforwarder on the server that is inactive, and start the forwarder on the active one when it is needed? I don't want to have duplicate data... Regards, Steve

Subsecond minspan in auto-span timechart?

(How) can I create an auto-span timechart that has a subsecond minimum span, such as 0.001s?

### Background to this question

My dashboards show logs from systems that process many transactions per second. I have deliberately designed the dashboards to "work" regardless of the duration specified by the time picker. Timecharts "auto span": they automatically infer the span from the data time range. (The X-axis of time-based charts automatically adjusts to match the duration.) These dashboards can be used to analyze logs across a wide variety of durations. In practice, users might initially be interested in a time range of several minutes, and then [zoom in][1] to analyze progressively narrower time ranges, down to subsecond time ranges.

As far as I can tell, auto-span timecharts don't create buckets smaller than one second. Specifying `minspan=1ms` on my `timechart` command seems to have no effect.

Here, I am (re)developing in Splunk some dashboards that I have already developed in Kibana (5.5). In Kibana, I can zoom time-based charts to a minimum span of 0.001s. For example, I can zoom so far in that buckets are marked on the X-axis as 14:34:45.242, 14:34:45.243, 14:34:45.244, etc. In Splunk, the granularity of auto-span timecharts "bottoms out" at 1 second; if I have a time range of 2 seconds, I'm looking at two buckets. Two fat bars. That's disappointing.

Am I missing something? Can I somehow increase this granularity to match what I can do in Kibana?

[1]: https://answers.splunk.com/answers/568487

Different results in iplocation

Hi at all, I have a strange behaviour with iplocation:

- I'm migrating some apps and indexes from an old infrastructure to a new one.
- I checked for differences in the data, and I have the same events in both indexes (old and new).
- Looking at geolocalization, I found differences in one event.
- I ran two searches: the first on the old server from the new one (the new one is configured as a search head and the old one as a search peer), the second on the local indexes of the new server.
- The same event, present in both indexes (old and new), has the same Ip_Source in both indexes but different lat and lon fields from the iplocation command in the two indexes.

Below are the two versions of the event (it's the same _raw with different metadata) with the interesting fields:

    30/08/17 09.56.00,000 2017-08-30 09:56:00.000, Data_Apertura="2017-08-30 09:56:00.0", Matricola="XXXXX", Cognome="XXXXX", SubArea="XX. Short_Message", Desc_lunga="Long_Message", Severity="Medium", Provenienza_Segnalazione="XXXXX", id="XXX", Ip_Source="xx.xxx.xx.x", Status="Chiuso"

- Ip_Source = xx.xxx.xx.x
- host = host1
- index = index1
- lat = 33.81810
- lon = -84.36040
- source = source1
- sourcetype = sourcetype1

    30/08/17 09.56.00,000 2017-08-30 09:56:00.000, Data_Apertura="2017-08-30 09:56:00.0", Matricola="XXXXX", Cognome="XXXXX", SubArea="XX. Short_Message", Desc_lunga="Long_Message", Severity="Medium", Provenienza_Segnalazione="XXXXX", id="XXX", Ip_Source="xx.xxx.xx.x", Status="Chiuso"

- Ip_Source = xx.xxx.xx.x
- host = host2
- index = index2
- lat = 38.00000
- lon = -97.00000
- source = source2
- sourcetype = sourcetype2

Is it possible that different servers (with different versions of Splunk) return different lat and lon values from the iplocation command? Does the iplocation command use a lookup located on the server where the search is executed, or on the indexers?

Bye.
Giuseppe
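For context: `iplocation` is a distributable streaming command that reads the GeoIP database bundled with Splunk (`$SPLUNK_HOME/share/GeoLite2-City.mmdb`), so when it runs on search peers, each peer uses its own local copy, and different Splunk versions ship different database snapshots. A sketch to see which server produced which coordinates (the Ip_Source value is a placeholder):

    index=index1 OR index=index2 Ip_Source="xx.xxx.xx.x"
    | iplocation Ip_Source
    | stats values(lat) AS lat values(lon) AS lon by index, splunk_server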

Cisco syslog and double timestamp

Hello, all. I have a new question. What we have:

1. Main Splunk server
2. Installed Cisco Security Suite and Splunk Add-on for Cisco ASA
3. Configured a UDP data input from the Cisco devices (created via the browser), with the index set and sourcetype cisco:asa
4. Two Cisco ASAs as data samples

After collecting some data, I found one problem. For example, two strings. From the first Cisco:

    Sep 11 17:25:45 xxx.xxx.xxx.xxx Sep 11 2017 17:25:46: %ASA-3-713902: Group = yyy.yyy.yyy.yyy, IP = yyy.yyy.yyy.yyy, Removing peer from correlator table failed, no match!

And from the second:

    Sep 11 17:27:00 yyy.yyy.yyy.yyy %ASA-3-710003: TCP access denied by ACL from xxx.xxx.xxx.xxx/54483 to INT-WAN2:xxx.xxx.xxx.xxx/22

As you can see, the first Cisco has a double timestamp, but the second Cisco is fine. I dumped the traffic to Splunk, and both Ciscos send correct, identical data over UDP. How can I fix it? Thanks!
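The second timestamp on the first ASA is typically the device's own `logging timestamp` setting (the ASA appends its own timestamp after the syslog header); disabling it on that ASA is one fix. Alternatively, a props.conf sketch telling Splunk to skip the syslog header before extracting the timestamp; the regex is an assumption based on the sample above:

    [cisco:asa]
    # Skip the syslog-injected "Sep 11 17:25:45 <host> " header so timestamp
    # extraction starts at the device-generated timestamp, when one is present
    TIME_PREFIX = ^\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\S+\s
    MAX_TIMESTAMP_LOOKAHEAD = 30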

File indexed only occasionally

**My inputs.conf file:**

    [monitor:///var/log/openvpn/*hostname*_vpnStatus.log]
    disabled = 0
    crcSalt = <SOURCE>
    index = iss-nipa-clients
    sourcetype = nipa:clients:status

**My props.conf file:**

    [nipa:clients:status]

    [source::/var/log/openvpn/*hostname*_vpnStatus.log]
    CHECK_METHOD = modtime
    DATETIME_CONFIG = NONE

**Extract from the forwarder splunkd.log:**

    09-13-2017 11:55:02.104 +0200 INFO WatchedFile - Modtime is newer than stored, will reread file='/var/log/openvpn/*hostname*_vpnStatus.log'.
    09-13-2017 11:55:02.110 +0200 INFO WatchedFile - Will begin reading at offset=0 for file='/var/log/openvpn/*hostname*_vpnStatus.log'.

**The file to be indexed:**

    File created at: 2017-09-13_11:59:01
    UNDEF,ip.ip.ip.ip:port,84,188,Wed Sep 13 11:58:16 2017,Tunnel_a
    c1115-ip.ip.ip.ip:port,19051077,18985566,Thu Aug 31 14:54:56 2017,Tunnel_a
    c1350,ip.ip.ip.ip:port,161253,160644,Wed Sep 13 09:24:57 2017,Tunnel_a
    c1255-1,ip.ip.ip.ip:port,176571,172050,Wed Sep 13 09:24:57 2017,Tunnel_a
    c1783-1,ip.ip.ip.ip:port,170017,175415,Wed Sep 13 09:24:59 2017,Tunnel_d
    c1215-1,ip.ip.ip.ip:port,167136,167643,Wed Sep 13 09:24:56 2017,Tunnel_d
    File created at: 2017-09-13_11:59:01

This file is created every minute, and according to **splunkd.log** it is also read every minute, but it is indexed only **periodically**. The created timestamp in the header and trailer changes every minute, matching the creation time of the file. Why is Splunk not indexing this file every minute?

Can I restrict permissions for text box and drilldown inputs?

Hi All, below is my requirement. I want to restrict permissions for a text box in a dashboard. Only users with admin access should be able to change the values in the text box; users with read-only access should not be able to change them. http://dev.splunk.com/view/webframework-developapps/SP-CAAAE88 is the reference I found. Which knowledge object should be used to set the permissions?
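One community pattern, sketched below with assumed names and not verified against your version: run a hidden REST search for the current user's roles and gate the input with a `depends` token. Note this only hides the input in the UI; it is not a real security boundary.

    <search id="role_check">
      <query>| rest /services/authentication/current-context splunk_server=local
    | eval is_admin=if(match(mvjoin(roles, ","), "admin"), "yes", "no")
    | fields is_admin</query>
      <done>
        <condition match="'result.is_admin' == &quot;yes&quot;">
          <set token="show_input">true</set>
        </condition>
      </done>
    </search>

    <input type="text" token="mytext" depends="$show_input$">
      <label>Admin-only value</label>
    </input>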

What happens when I update maxTotalDataSizeMB in a live environment?

Hi guys, we are running a multisite index cluster with 12 indexers (6 across 2 sites). Our goal is to limit the size of one of our indexes. We currently have an index that's sitting at 150 GB, and I was planning on using maxTotalDataSizeMB to limit it to around 75 GB. The remaining 75 GB should go to frozen, is this correct? (We don't actually freeze data, so I assume it just gets deleted.) If I apply this update to maxTotalDataSizeMB while the cluster is up and running, will it just delete the data while carrying on with regular tasks as normal? As it is our production environment, we can't take the cluster down for even a minute (rolling restarts are okay, as it'll need one when I apply the new cluster bundle). Does anyone know if it's okay to apply this config while the cluster is up and running? Cheers
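For reference, when `maxTotalDataSizeMB` is exceeded, the oldest buckets roll to frozen as part of normal housekeeping, and with no `coldToFrozenDir` or `coldToFrozenScript` configured, frozen means deleted. A minimal indexes.conf sketch for the cluster bundle (the index name is a placeholder):

    [my_index]
    # ~75 GB cap; once exceeded, the oldest buckets roll to frozen, which
    # means deletion unless coldToFrozenDir/coldToFrozenScript is set
    maxTotalDataSizeMB = 76800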

"Connection timed out" and "An existing connection was forcibly closed by the remote host"

Hi guys, I am just a newbie in Splunk, and this will be my first time performing troubleshooting. I'm having a **connection timed out** error with 6 of our servers, and I think this is the reason why no logs are being forwarded to our indexers. There is also an error saying "**An existing connection was forcibly closed by the remote host**". I hope someone can help me resolve this issue. Please see the screenshots below for reference. ![alt text][1] ![alt text][2] [1]: /storage/temp/214576-connection-timed-out.jpg [2]: /storage/temp/214577-existing-connection.jpg
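As a starting point, forwarding errors like these usually appear in Splunk's own internal logs; a sketch of a search over the internal index (run on the indexers or a search head that sees forwarder internals):

    index=_internal sourcetype=splunkd component=TcpOutputProc log_level=ERROR
    | stats count by host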

Search Heads in cluster are not able to replicate properly

Hi! There are 2 search heads in our production cluster. We have implemented the Alert Manager app on our search heads, and it incorporates alert-manager-specific lookups, data models, and event types. Some of the functionality of this app and its dashboards is not getting replicated properly across all our search heads. In addition, we are also facing a few scenarios where dashboard data is not replicated properly. We have increased distsearch's default bundle size to 3 GB, but we still face the above issue at times.
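For reference, the knowledge-bundle size limit lives in distsearch.conf; a sketch matching the 3 GB you mention (the value is in megabytes):

    [replicationSettings]
    # allow knowledge bundles (lookups, etc.) up to ~3 GB
    maxBundleSize = 3072

Where large lookups are the culprit, excluding them from the bundle via distsearch.conf's replication blacklist is often a better fix than raising the limit.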

How to detect users using a DNS server different than the organization's DNS

Dear all, good day. I need a search to detect users using a DNS server different from the organization's DNS. Please share your ideas and suggestions.
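A sketch of one approach, assuming you have firewall or network traffic data; the index, field names, and resolver IPs below are all placeholders: look for DNS traffic (port 53) going anywhere other than the organization's resolvers.

    index=firewall dest_port=53
    NOT (dest_ip="10.0.0.53" OR dest_ip="10.0.1.53")
    | stats count by src_ip, dest_ip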

Not all data is fetched and displayed

Hi, I have uploaded a CSV file where one of the column values is very large: more than 1000 characters (with special characters) and 41 words. However, not all of the data is displayed. Can you please tell me what changes are required to load all the data? Thank you in advance.
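If the long values are being cut off, two settings commonly involved are `TRUNCATE` (props.conf, maximum line length at indexing) and `maxchars` (limits.conf `[kv]` stanza, how many characters are scanned for field extraction). A sketch with assumed sourcetype name and values:

    # props.conf, on the sourcetype of the uploaded CSV
    [my_csv_sourcetype]
    TRUNCATE = 50000

    # limits.conf, on the search head
    [kv]
    maxchars = 40960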

Query using inputlookup as primary, with nested query

I have an inputlookup table that has a list of details, specifically IPs. The user wanted a list of all IPs that exist in both the index and the inputlookup, so I wrote a query similar to the following, which lists ONLY the IPs that exist in both locations:

    index= | dedup clientip | search [ | inputlookup file.csv | table clientip ] | table IP, host

Now they want a query that lists all IPs in the inputlookup file in the output, noting whether or not each was found in the index (an eval statement?). Essentially, list all hits AND misses.
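A sketch of the hits-and-misses version, using the inputlookup as the base and a left join; the index name is a placeholder, since it was elided above:

    | inputlookup file.csv
    | fields clientip
    | join type=left clientip
        [ search index=my_index | dedup clientip | eval found="yes" | fields clientip, found ]
    | eval found=coalesce(found, "no")
    | table clientip, found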

How do I scale my Splunk deployment to account for rising demand in indexing volume?

Hi Splunkers, My program is considering adding 600 more Linux UF endpoints to our current Splunk deployment (we have ~450 total UF endpoints now), and they're asking for a "wish list" of resources to support the additional volume. I have a pretty good idea of my licensing needs, and I've been using the Splunk online sizing tool to figure out how much additional disk capacity we need (based on our retention policies). Is there also a good sizing tool or document out there to help me figure out whether I need to increase RAM/CPU on my indexers, and possibly add another indexer? (and maybe add another deployment server) Just FYI - I currently have a 2 indexer cluster. Each indexer has 16 cores, 31 GB RAM

Is there a way to trigger an alert through a dashboard button?

I had an interesting request today from a team that was looking to enhance their Splunk dashboard by allowing for a manual trigger of an alert. We currently have a custom alert set up that essentially sends an SNMP trap over to some of our alert monitoring tools. Most teams are using this alert in the typical fashion (i.e. scheduled searches that trigger the alert on a specific value). However, this one team needs more ad-hoc alerting. They have their engineers analyze some of the data that they are reporting on, and until they come up with the appropriate algorithms and such to automate via schedules and the like, they would like to have a button that would actually kick off the alert action. Is this possible?
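One way a button could work, sketched with placeholder names: have it call the Splunk REST API to dispatch the saved search with `trigger_actions` enabled, which fires the configured alert actions immediately.

    # dispatch the saved search and fire its alert actions (placeholder names)
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/my_app/saved/searches/my_alert/dispatch \
        -d trigger_actions=1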

How to handle the same field with different values for an ID?

How can I work with the same field "A" having different values for a unique ID? The set of field "A" values is finite, and each ID can have multiple identical field values. After a few search strings I have a table; I'll try to explain with an image: ![alt text][1] My main difficulty is that I can't calculate the time difference between any two points of field "A", because there are identical field "A" values. I think an approach along these lines will help me. [1]: /storage/temp/214579-index-field.jpg
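If the goal is the time difference between consecutive occurrences of the same "A" value, `streamstats` can carry the previous event's `_time` forward per ID and value; a sketch with assumed field names:

    ... | sort 0 ID, _time
    | streamstats current=f last(_time) AS prev_time by ID, A
    | eval delta_seconds = _time - prev_time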

How to use metadata to find the last reporting time of a list of hosts from a lookup table without getting the "Metadata results may be incomplete: 100000 entries have been received from all peers" warning

The following is my query:

    | metadata type=hosts
    | search [ | inputlookup hostnames.csv | rename my_hostname as host | eval host=lower(host) | table host ]
    | eval lastTime=coalesce(lastTime,0)
    | eval timeDiff=now()-lastTime
    | eval last_seen_in_24_hours=if(timeDiff>86400,"NO","YES")
    | eval lastReported=if(lastTime=0,"never",strftime(lastTime,"%F %T"))
    | stats count by last_seen_in_24_hours

The issue is that I have around 1000 hosts in the CSV file, but with the above query I can only see information for about 400 hosts, and I also see the warning below on the job. ![alt text][1] [1]: /storage/temp/215578-warning.png How do I modify my current query to overcome that warning and display the reporting status of all 1000 hosts?
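One way around the `metadata` result cap, sketched under the assumption that searching all indexes is acceptable: start from the lookup, so hosts that never reported still appear in the output, and pull last-seen times from `tstats` instead.

    | inputlookup hostnames.csv
    | rename my_hostname AS host
    | eval host=lower(host)
    | join type=left host
        [| tstats max(_time) AS lastTime where index=* by host
         | eval host=lower(host) ]
    | eval lastTime=coalesce(lastTime, 0)
    | eval last_seen_in_24_hours=if(now() - lastTime > 86400, "NO", "YES")
    | stats count by last_seen_in_24_hours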

What is meant by "Spent xxxxxms reaping search artifacts in /opt/splunk/var/run/splunk/dispatch"?

We saw a spike in memory usage on one of the cluster search heads. The spike lasted around 12 hours. When comparing splunkd.log across all search heads, the impacted search head had something different. The warning in splunkd.log looks like this:

    Spent 10777ms reaping search artifacts in /opt/splunk/var/run/splunk/dispatch

Can anyone help me find out if the above could cause excessive memory use?

Search driven by KVStore parameters

I have a set of events similar to those below, and a working search for a single ID value of 133. My next step is to make the ID dynamic, driven by a KV Store. My attempts so far have been unsuccessful and I could use some help. I am not even positive this is the right approach. This is for a custom app for internal use, so options are wide open on how to best approach this. Ideas?

Events:

    date time : Process Start for core instance ID: 133
    date time : random message 1
    date time : random message 5
    date time : Process Ending ID: 133
    date time : Process Start for core instance ID: 145
    date time : random message 2
    date time : random message 4
    date time : random message 7
    date time : Process Ending ID: 145
    etc...

Working search:

    index=myindex source=mysource
        [search index=myindex ("Process Start" AND "ID: 133") | head 1 | eval earliest=_time | table earliest]
        [search index=myindex ("Process Ending" AND "ID: 133") | head 1 | eval latest=_time+1 | table latest]
    | eval StatusCode= if((like(_raw, "%Process Start%") AND like(_raw, "%ID: 133%")), 1, if(like(_raw, "%Process Ending%"), 2, 0))
    | stats sum(StatusCode) as StatusCode, min(_time) as StartTime
    | eval Started=if((StatusCode/1)>=1,"Success","Fail")
    | eval Finished=if((StatusCode/2)>=1,"Success","Fail")
    | eval Time=strftime(StartTime,"%c")
    | table StartTime, evalVal1, evalVal2

Desired results:

    ID   StartTime  Started  Finished
    133  datetime   Success  Success
    145  datetime   Success  Fail
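A sketch of a single search handling all IDs at once, assuming a lookup definition named `my_ids` over the KV Store collection with an `ID` field (both names are placeholders): extract the ID from each event, keep only IDs present in the collection, and aggregate per ID.

    index=myindex source=mysource ("Process Start" OR "Process Ending")
    | rex field=_raw "ID: (?<ID>\d+)"
    | search [| inputlookup my_ids | fields ID ]
    | eval is_start=if(like(_raw, "%Process Start%"), 1, 0)
    | eval is_end=if(like(_raw, "%Process Ending%"), 1, 0)
    | stats min(_time) AS StartTime, sum(is_start) AS starts, sum(is_end) AS ends by ID
    | eval Started=if(starts >= 1, "Success", "Fail")
    | eval Finished=if(ends >= 1, "Success", "Fail")
    | eval StartTime=strftime(StartTime, "%c")
    | table ID, StartTime, Started, Finished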

