Channel: Questions in topic: "splunk-enterprise"

Splunk Query Grammar

I have a system that receives data from other systems for auditing purposes. One of these systems uses Splunk, and I need to parse its queries. I am hoping someone can point me to a grammar for the Splunk search language (ANTLR, BNF, etc.).

How to trigger an alert when a Splunk dashboard shows "results not found" or a count of "0"?

Sometimes there are problems loading our Splunk dashboards (for example, a panel shows "results not found" or a count of "0"). We are trying to alert the team whenever there is an issue like this in Splunk. Please help: how can we achieve it?
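A sketch of one way to approach this (the index name and time window are placeholders): schedule a search that always returns exactly one row, and alert when the count is zero. `stats count` emits a single result even when no events match, so the saved alert can use the trigger condition "number of results is greater than 0":

```
index=your_dashboard_index earliest=-15m
| stats count
| where count = 0
```

When the underlying data is missing, this search returns one row (count=0) and the alert fires; when data is flowing normally, `where` filters the row out and nothing triggers.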

How can I move an index on *nix?

Hi, I have index A stored on my system disk (I know), and I have created a new index B on my data disk. How should I go about moving the index A events into index B so that I can delete index A? Or should I just move the index and restart Splunk? What is the best way to fix this: is it possible to merge the indexes, or only to move one? Does anyone have experience with this? The system is running Red Hat 7.*. Thanks in advance for any help.
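One common approach, sketched here with hypothetical paths: stop Splunk, copy the index directory to the data disk (preserving ownership by the splunk user), point the existing index's paths at the new location in indexes.conf, and restart. Merging two indexes bucket-by-bucket is not something Splunk supports directly, so repointing (or re-ingesting) is the usual route:

```
# indexes.conf -- hypothetical paths on the data disk
[indexA]
homePath   = /data/splunk/indexA/db
coldPath   = /data/splunk/indexA/colddb
thawedPath = /data/splunk/indexA/thaweddb
```

For the copy itself, something like `cp -a /opt/splunk/var/lib/splunk/indexA /data/splunk/indexA` while splunkd is stopped keeps permissions and timestamps intact.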

How do you find the difference between two different indexes with fields in common?

Hi all, I am working on reconciling trades between a database and a log. I thought the query below should work, but it does not: it shows about 9K differences when I run it against today's or yesterday's time range, yet when I run the two searches separately for the same trade, the version is identical. | set diff [search index=stdb sourcetype=stdbtype | dedup TRADEID sortby -AUD_VER | rename TRADEID as tradeId, AUD_VER as SMTVersion | table tradeId, SMTVersion] [search index=XXX_inbound SMT55/BOND_TR | dedup tradeId sortby -SMTVersion | table tradeId, SMTVersion] If I investigate individual trades, the version and the trade ID are the same, but they still show up as differences in the query above. I am not sure why, and I am quite confused. Any help is much appreciated.
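`set diff` compares whole result rows, so an invisible difference (leading/trailing whitespace, numeric vs. string formatting of the version) can make otherwise identical trades appear as differences. A sketch of an alternative that avoids `set diff` entirely, assuming the field names from the question:

```
index=stdb sourcetype=stdbtype
| dedup TRADEID sortby -AUD_VER
| rename TRADEID as tradeId, AUD_VER as SMTVersion
| eval src="db"
| append
    [ search index=XXX_inbound "SMT55/BOND_TR"
      | dedup tradeId sortby -SMTVersion
      | eval src="log" ]
| eval SMTVersion=trim(SMTVersion)
| stats dc(src) as sources dc(SMTVersion) as versions by tradeId
| where sources < 2 OR versions > 1
```

Rows where `sources < 2` exist on only one side; rows where `versions > 1` have mismatched versions. The `trim()` also helps diagnose whether whitespace was what broke the original `set diff`.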

How do you apply CSS definitions to a row of charts in a single panel?

Hello guys, I hope you can guide me on this, since I have been going around in circles for some time and I do not see a solution in sight. I have a panel with 3 charts in it, and I want to lay them out in a row. I have tried using CSS on .panel-element-row, but unfortunately it affects all the charts in the dashboard, which is not the desired result: .panel-element-row{ display: inline-block !important; width: 33% !important; } I also applied an id to scope it, but to no avail: #panel1 .panel-element-row{ display: inline-block !important; width: 33% !important; } Can you guide me the right way?
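One pattern that usually scopes CSS to a single panel in Simple XML is to give the panel an id and embed the style in a hidden html element inside that same panel, so the selector matches only descendants of that panel (the panel id and token name below are placeholders):

```
<panel id="row_of_charts">
  <html depends="$hiddenCSS$">
    <style>
      #row_of_charts .panel-element-row {
        display: inline-block !important;
        width: 33% !important;
      }
    </style>
  </html>
  <!-- the three <chart> elements go here -->
</panel>
```

The `depends` on an unset token keeps the html element from rendering as a visible block while still injecting the style.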

How come our scheduled CSV export for a saved search can't have more than 50,000 rows?

Hi, we need to keep a copy of a big SQL table in a CSV file to speed up some lookups. We retrieve the data with a saved search, scheduled to run every hour and save the result to a CSV file. The search looks like this: | dbxquery maxrows=0 query="query string" connection="db_connection" | fields field1, field2, field3, field4, field5, field6, field7, field8, field9 Adding maxrows=0 allows it to retrieve all the data. If we run the search through Splunk Web, we see 507,000 results. If we use the API to get the results as explained in this link: [Exporting Large Result Sets to CSV][1], we get the full CSV with 507,000 rows and can use it for lookups. However, if we schedule the saved search with a trigger to export to a lookup CSV file, we only get 50,000 lines. How can we save all 500,000+ lines to a CSV using the scheduler? Thanks in advance! [1]: https://www.splunk.com/blog/2013/09/15/exporting-large-results-sets-to-csv.html#
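The 50,000-row ceiling is typically a cap on the trigger/export path rather than on the search itself. A common workaround is to write the lookup from inside the scheduled search with `outputlookup`, which writes the full result set directly (the lookup file name below is a placeholder):

```
| dbxquery maxrows=0 query="query string" connection="db_connection"
| fields field1, field2, field3, field4, field5, field6, field7, field8, field9
| outputlookup big_sql_table.csv
```

With this, the scheduled search itself maintains the CSV every hour and no separate export trigger is needed.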

How do you send an alert when the number of records goes below 20% of the daily avg?

Hi, I am using the query below to get an average count, but how do I write a query that sends an alert when the number of records goes below 20% of the daily average? index=abc platform=xyz | stats avg(count) by _time
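A sketch of one way to phrase the alert, assuming the counts come from binning raw events per day: compute the average over a trailing window, then compare the most recent full day against 20% of it. Schedule it daily and trigger when the number of results is greater than 0:

```
index=abc platform=xyz earliest=-8d@d latest=@d
| timechart span=1d count
| eventstats avg(count) as daily_avg
| tail 1
| where count < 0.2 * daily_avg
```

`latest=@d` keeps the partially complete current day out of the comparison; `tail 1` leaves only the most recent full day, so the search returns a row (and the alert fires) only when that day fell below the threshold.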

log unsafe as it does not exist anymore, scheduling a oneshot timeout instead.

We are running syslog-ng with a heavy forwarder (HF); logrotate runs hourly. Around 16 web proxies send logs to the syslog-ng server with the HF. Sometimes one of the 16 proxy log sources stops being read by the HF, even though the proxy log file exists on the syslog-ng server and can be read. At the top of the hour it fixes itself and the HF reads the file again, but I am missing an hour of logs for my correlation rules. With DEBUG enabled, this is the log entry right at the time of the last event seen in Splunk: 01-25-2019 22:00:05.997 +0000 DEBUG TailingProcessor - Defering file=/var/syslog/proxy/192.168.251.141/proxy.log unsafe as it does not exist anymore, scheduling a oneshot timeout instead. ./splunk list inputstatus | grep -A4 proxy | grep -A4 192.168.251.141 /var/syslog/proxy/192.168.251.141 parent = /var/syslog/proxy/*/*.log type = directory /var/syslog/proxy/192.168.251.141/proxy.log file position = 30610690 file size = 30610690 parent = /var/syslog/proxy/*/*.log percent = 100.00 I have read that logrotate should not be used with syslog-ng. Has anyone seen this message before?
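The "does not exist anymore" message suggests logrotate is renaming the file out from under the tailer mid-rotation. One mitigation sketch is logrotate's `copytruncate`, which copies then truncates in place so the original inode (and the forwarder's open handle) survives (the path and retention below are placeholders):

```
# /etc/logrotate.d/proxy -- hypothetical stanza
/var/syslog/proxy/*/proxy.log {
    hourly
    rotate 24
    copytruncate
    compress
    delaycompress
    missingok
}
```

Trade-off: `copytruncate` can lose a few lines written between the copy and the truncate, so it is a judgment call against losing an hour of data.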

Setting up an indexer cluster

Hi, I was trying to set up an indexer cluster, but the console shows that the replication factor and search factor are not met. I got the errors below in splunkd.log: Eviction results: count=0, test_count=0, bytes_evicted=0, bytes_needed=5610983424, elapsed_ms=1 01-26-2019 07:43:28.260 +0000 WARN CacheManager - Unable to evict enough data. Evicted size=0 instead of size=5610983424 01-26-2019 07:43:29.261 +0000 INFO CacheManager - Eviction requested, bytes_needed=5610983424 partition of path=/opt/splunk/var/lib/splunk/audit/db 01-26-2019 07:43:29.261 +0000 WARN CacheManager - Last run failed to evict requested bytes. Performing eviction in urgent mode for path=/opt/splunk/var/lib/splunk/audit/db

ERROR UserManagerPro - Could not get info for non-existent user="tesla"

ERROR UserManagerPro - Could not get info for non-existent user="tesla" We have alerts set up to trigger .py scripts for some auto-remediation tasks. As of today I am seeing a lot of these errors, and not just for one user account, and my scripts are not pulling the .gz files. Any ideas what these errors mean? I reviewed the answers to other similar questions, but the cases don't match up: all of our users have active accounts and can log in to the web interface. My authentication is over LDAP. Any suggestions would be greatly appreciated.

Splunk query for license usage

Hi all, can you please help me with a search query to extract the license usage for the last year? The search query I am trying takes too long to return results. Please suggest an alternative. Thanks
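A common starting point (assuming the search runs somewhere with access to the license master's _internal index, and that _internal retention actually covers a year) is to sum the per-event byte counts from license_usage.log into one value per day, which keeps a one-year span manageable:

```
index=_internal source=*license_usage.log* type=Usage earliest=-1y@d
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| fields _time GB
```

If _internal does not retain a full year, a summary index fed by a daily scheduled version of this search is the usual fallback.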

Splunk Integration with Resilient.

I want to integrate Splunk Enterprise 7.1.1 with IBM Resilient in order to create effective IRPs. What is the best practice for doing this? I have tried the "Resilient Integration for Splunk and Splunk ES" app, but whenever I try to connect it says authentication failed and to check the Resilient org name and credentials, despite my entering the correct ones. Please let me know the best way to do it. Thanks! https://splunkbase.splunk.com/app/3861/

Query about Architect Exam (recertification)

Folks, for the final Architect exam, can you please confirm: 1. Is the Architect exam multiple choice (or some other format)? 2. Are there any books/materials to look at for sample questions? Thanks in advance

How to configure a search head on one of the indexer servers

Hi, I have the following setup: 1 x master node with 2 x indexers (clustered). How can I designate one of the indexers as my search head? Currently the default search head is my master node. Thanks for the advice.
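Cluster peers (indexers) are generally not supposed to double as search heads; the usual route is to point a separate Splunk instance at the master in searchhead mode. A sketch of the server.conf stanza on that instance (the master URI and key below are placeholders):

```
# server.conf on the search head instance -- hypothetical values
[clustering]
mode = searchhead
master_uri = https://master.example.com:8089
pass4SymmKey = changeme
```

The same result can be had via the CLI (`splunk edit cluster-config -mode searchhead -master_uri ...`); either way, `pass4SymmKey` must match the cluster's key.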

indexes.conf stopping my search heads from starting.

Hello, I just want to confirm/clarify whether what I am about to do is correct. I have read the index guides, indexes.conf guides, and cluster guides.

When my Splunk multisite clustered deployment died, my indexed data was no longer searchable, and my search heads in particular would not start; they gave me a db error. I did some fault-finding, and effectively the homePath to my indexed data was no longer writable and was also in the wrong place (the PS architect who built the environment put it in this specific location).

So my question: if I change the indexes.conf file so the indexed data locations are "servername/D:/Splunk/hotdb", "servername/D:/Splunk/colddb", and "servername/D:/Splunk/thaweddb" (servername being the specific network name of the new storage array), will that allow me to store the data there, and will it be searchable? Yes, I will ensure the ability to write to that location.

Part 2: what other specific files need to be changed in a multisite clustered indexer environment to make this work? I have a cluster master, license master, and deployment server, with 3 x indexers and 1 x search head in each location. It is still in test, so losing the data isn't actually a drama; I just want a correctly working environment first.

Part 3: due to monetary constraints, the second site does not currently have its own storage array and will for the moment use the first site's storage. When I get that storage, will I then have to change the indexes.conf file on the second site to "servernamesite2/D:/Splunk/hotdb", "servernamesite2/D:/Splunk/colddb", and "servernamesite2/D:/Splunk/thaweddb"?

Any help is greatly appreciated. willsy
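For what it's worth, indexes.conf paths are local filesystem paths as seen by each indexer (a `servername/D:/...` style path will not work), so if the storage array is mounted on every indexer as D:, the stanza would look something like this (the index name is a placeholder):

```
# indexes.conf -- hypothetical; D: is the mounted storage array
[my_index]
homePath   = D:\Splunk\my_index\db
coldPath   = D:\Splunk\my_index\colddb
thawedPath = D:\Splunk\my_index\thaweddb
```

In a cluster, this indexes.conf belongs in the master's master-apps bundle and is pushed to all peers, which means every peer (at both sites) needs the same drive letter or mount point pointing at its own storage.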

How do I calculate a percentage for PROCESSED and STARTED in the query below?

index=ciaudit eventname=* | spath "EventStreamData.response.verificationStatus" | search "EventStreamData.response.verificationStatus"=PROCESSED OR "EventStreamData.response.verificationStatus"=STARTED | rename "EventStreamData.response.verificationStatus" as verificationStatus | stats count by verificationStatus This returns: verificationStatus count PROCESSED 2 STARTED 187 I want to compute STARTED / PROCESSED * 100 as a percentage.
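A sketch of one way to get the two counts onto a single row so they can be divided, using the same base search with eval-based counting inside stats:

```
index=ciaudit eventname=*
| spath "EventStreamData.response.verificationStatus" output=verificationStatus
| search verificationStatus=PROCESSED OR verificationStatus=STARTED
| stats count(eval(verificationStatus="PROCESSED")) as processed
        count(eval(verificationStatus="STARTED")) as started
| eval pct = round(started / processed * 100, 2)
```

Swap the numerator and denominator in the final `eval` if the percentage is meant the other way around.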

Why can't I use a case-insensitive match in a lookup with WILDCARD?

My environment: standalone Splunk, version 7.2.3. I would like to extract usernames that match a lookup case-insensitively, and also match using WILDCARD. In 7.2.3 I cannot get this to work, although in 7.1.4 I can. The settings and search used for verification are as follows. `transforms.conf`: [test_case_insensitive] batch_index_query = 0 case_sensitive_match = 0 filename = test_case_insensitive.csv match_type = WILDCARD(status) Lookup table `test_case_insensitive.csv`: status,status2 "*AAAAA*","OK!" Example search: | makeresults count=3 | streamstats count as c | eval status=case(c=1, "###AAAAA###", c=2, "###aaaaa###", c=3, "###AAaaa###") | lookup test_case_insensitive status OUTPUT status2 Is this a bug? If anyone knows about it, please tell me, and also give me a workaround.

Integrate highcharts.js, D3.js, or charts.js into Splunk for advanced visualizations?

Hi all, I am currently looking to evaluate various JS chart libraries in Splunk. Which JS libraries would be easiest to integrate with Splunk without much development effort? I also need to know which JS chart libraries are licensed and which are free, so that I can budget the requirements accordingly. And how easy would integration be without much knowledge of JavaScript and XML? Please advise. I can add more questions once I get a reply on this. Thanks.

Splunk installation on Windows 10

Warning: overriding %SPLUNK_HOME% setting in environment ("C:\Program Files\Splunk\bin") with "C:\Program Files\Splunk". If this is not correct, edit C:\Program Files\Splunk\etc\splunk-launch.conf I want a solution to this error. Thanks & Regards,
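The warning means the SPLUNK_HOME environment variable on the machine points at the bin subdirectory instead of the install root. Two likely fixes: correct (or remove) the Windows environment variable, or pin the correct value in splunk-launch.conf so it wins:

```
# C:\Program Files\Splunk\etc\splunk-launch.conf
SPLUNK_HOME=C:\Program Files\Splunk
```

It is a warning rather than a fatal error: Splunk is already substituting the correct path, so fixing the variable mainly silences the message.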

How to disable methods from the https://localhost:8000 ?

When I make a curl -v request against https://127.0.0.1:8000, it returns Accept: all or any (*/*), so it seems like all methods are enabled: GET, POST, HEAD, and OPTIONS all work. But I want only GET and POST to work, for security reasons. How can I do that?
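I am not aware of a documented per-method whitelist setting in Splunk Web itself, so one common approach is to put a reverse proxy in front of port 8000 and reject unwanted methods there. A sketch using nginx (ports, certificate paths, and the upstream scheme are placeholders; use `proxy_pass https://...` if Splunk Web has SSL enabled):

```
# nginx in front of Splunk Web -- hypothetical values
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/splunk.crt;
    ssl_certificate_key /etc/nginx/certs/splunk.key;

    location / {
        # reject everything except GET and POST
        if ($request_method !~ ^(GET|POST)$) {
            return 405;
        }
        proxy_pass http://127.0.0.1:8000;
    }
}
```

With the proxy in place, Splunk Web can be bound to localhost only so clients cannot bypass the method filter.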