Channel: Questions in topic: "splunk-enterprise"

Quotes around first word in inputlookup value

I am using an inputlookup to exclude results from a search, e.g. `index=main NOT [| inputlookup test_lookup.csv | fields value]`. The values I am trying to exclude contain quotes, such as **"foo" bar bat**. It seems that if the first word in a lookup table value is surrounded by quotes, the lookup takes the quoted word as the value for that field and ignores the rest: a lookup of the example above returns only **foo**. Quotes appear to work fine around words, as long as they are not the first word in the value. I've cruised around looking for the answer and came across a number of posts suggesting triple quoting, using the hex char value for quotes, etc., and I've also tried a number of things on my own, without any success. Thus I have come here. The lookup result I am trying to get is: **"foo" bar bat**

Here are the contents of my lookup file:

    value,comment
    "foo" bar bat, double quotes around first word
    foo "bar" bat, double quotes around second word
    foo bar "bat", double quotes around third word
    """foo""" bar bat, triple-double quotes around first word
    \"foo\" bar bat, backslash-escaped double quotes around first word
    '"foo" bar bat', single quotes around the whole field

and here are the results of the lookup table:

![input_lookup][1]

Thanks in advance for any assistance.

[1]: /storage/temp/251083-lookup-with-quotes.jpeg
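A hedged note on the likely cause: lookup files are parsed as standard CSV, where a field that *begins* with a double quote is read as a quoted field ending at the next bare quote, which matches the behavior described. If so, the standard CSV fix (RFC 4180 style) is to quote the entire field and double each embedded quote; a sketch:

    value,comment
    """foo"" bar bat",whole field quoted with inner quotes doubled

This should parse to the field value **"foo" bar bat**. It is untested against this specific Splunk version, so treat it as a starting point rather than a confirmed fix.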

Populate dropdown menu using lookup and tokens with multiple field values

I am trying to populate a dropdown menu using a lookup table that contains all my servers' hostnames in one column and their category in another:

    | inputlookup UFlookups.csv | dedup Category | stats count by Category host | fields - count

The query above populates the dropdown with the category names as intended, but I'm only able to show one server per category when there should be several. Could this be due to my dedup? Another thing I read about was using eval tokens in the XML; if that's the preferred method, can someone help me understand how to show multiple hosts if my token is named Hosts?
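A hedged sketch, assuming the lookup columns are named Category and host as in the search above: `dedup Category` keeps only the first row per category, which would explain seeing one server each. Dropping the dedup and aggregating hosts per category may be closer to the intent:

    | inputlookup UFlookups.csv
    | stats values(host) AS hosts BY Category

Category then serves as the dropdown's label/value field, and the multivalue hosts field is available for a second input or an eval token (e.g. the Hosts token mentioned in the question).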

How do I change the owner of alerts in the Splunk Web UI or a conf file?

Dear all, I have around 100 alerts configured in Splunk under one AD user. Since this AD user has left the organization, I need to change the ownership of all the alerts under his name to my name. Is this possible in Splunk? I couldn't find any docs on this. I tried looking at savedsearches.conf under the app, but there is nothing like an owner field in any alert. Thanks, Ramu Chittiprolu
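A hedged pointer: ownership of a saved search is stored in the app's metadata files, not in savedsearches.conf, which is why no owner field shows up there. A minimal sketch of the relevant stanza in $SPLUNK_HOME/etc/apps/&lt;app&gt;/metadata/local.meta (the alert name is a placeholder; stanza names are URL-encoded):

    [savedsearches/My%20Alert%20Name]
    owner = my_username

Recent Splunk versions also provide a Reassign Knowledge Objects page under Settings, which changes ownership through the UI without hand-editing metadata.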

How to configure Load Balancing on Splunk Search Heads?

Hi! So I set up an F5 load balancer and listed all of my Splunk search heads as pool members. The load balancer performs a health check, and therefore requires a health-monitor URI and a health-monitor response. So I'm consulting you: which URI and response should I use? It just needs a simple request and response to check whether a search head is up. With the default configuration my server is considered down, of course. I have no experience with load balancers, so please be gentle.
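A hedged sketch, not an official recommendation: a common approach is an HTTP(S) monitor against Splunk Web's login page, which returns 200 whenever the search head is up and serving. Hypothetical F5 send/receive strings (adjust the port, scheme, and Host header to your environment):

    Send string:    GET /en-US/account/login HTTP/1.1\r\nHost: splunk.example.com\r\nConnection: Close\r\n\r\n
    Receive string: 200 OK

Probing the management port (8089) endpoint /services/server/info is another option, but it requires authentication, so the login-page probe is the simpler starting point.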

Avg on multiple fields

Hello, I am trying to compute an avg over multiple fields, but I haven't succeeded. For one field I use this:

    | stats avg(ReadOperationCount) BY host

But if I want to do the same for two fields (toto, for example), how can I do it, please?
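A minimal sketch: stats accepts several aggregation clauses in one pass, so the second field goes alongside the first (toto is the example field name from the question):

    | stats avg(ReadOperationCount) AS avg_read avg(toto) AS avg_toto BY host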

How to deploy a search head and an indexer

Hi, how do I deploy a search head and an indexer, with detailed steps? Regards, smdasim
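A hedged outline for the simplest non-clustered case: install Splunk Enterprise on both machines, then register the indexer as a search peer from the search head (hostnames and credentials below are placeholders):

    # On the search head: add the indexer as a distributed search peer
    splunk add search-server https://indexer01:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

    # On the indexer: open a receiving port for forwarders
    splunk enable listen 9997

The Distributed Search manual in the official docs walks through these steps in detail, including the clustered variants.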

Mail Alert Notification Is Not Working After One Month

Hi Experts, I have a real-time alert configured with an email notification. I received the last email notification on 30.06.2018; since then, the triggering error is visible in search results, but no mail notification is sent for it. Oddly, I see those errors when I search with the "All time" option, but not with the "Last 7 days" option. Can anyone help me with this? Please let me know if you have any queries.
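A hedged diagnostic, plus one observation from the question itself: if the triggering events show up under "All time" but not "Last 7 days", their timestamps may be parsed wrong (e.g., as a far-past or future date), which would also stop a real-time alert from matching them. The internal logs can narrow down whether the alert fires and whether the mail step fails (the alert name is a placeholder):

    index=_internal sourcetype=splunkd sendemail

    index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"

Comparing event time against index time on the triggering events (`| eval lag=_indextime-_time`) would confirm or rule out a timestamp problem.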

What is the optimum setting value?

In my environment, 800,000 emails are sent per day. When introducing the Microsoft Office 365 Reporting Add-on for Splunk, I am unsure about the following values:

1. interval
2. query_window_size
3. delay_throttle

Do optimal values for these exist? If not, on what basis should I decide them? Thank you
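A hedged illustration only, not a recommendation, since the right values depend on API latency and how fresh the data needs to be; the stanza name below is hypothetical and should match whatever the add-on's input page generates:

    [ms_o365_message_trace://tenant_input]
    interval = 300
    query_window_size = 60
    delay_throttle = 30

The general trade-off to reason from: a shorter interval and smaller query window fetch data sooner and keep each API response small (at roughly 800,000 messages/day, about 550 messages arrive per minute), but cost more API calls; the delay/throttle setting should allow for the lag before messages become visible in the Office 365 reporting API.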

MSSQL ERRORLOG problem

I am using Splunk to monitor the MSSQL ERRORLOG files. My goal is to list the failed and successful logons to MSSQL. Without using **DB Connect 2**, and with just **Splunk_TA_microsoft-sqlserver**, am I looking at the correct logs (ERRORLOG*) for this?

inputs.conf:

    [monitor://C:\Program Files\Microsoft SQL Server\MSSQL12\MSSQL\Log\ERRORLOG*]
    sourcetype = mssql:errorlog
    disabled = 0

The file path is correct; however, I realized there wasn't a stanza in props.conf for [mssql:errorlog]. My question is: am I looking at the correct log file, and do I have to do a manual field extraction for this sourcetype [mssql:errorlog] to get MSSQL failed and successful logons?
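A hedged sketch, assuming logon auditing is enabled in SQL Server so that ERRORLOG contains lines such as `Login failed for user 'sa'.` and `Login succeeded for user 'appuser'.`; a props.conf extraction for those messages might look like:

    [mssql:errorlog]
    EXTRACT-mssql_logon = Login (?<logon_status>failed|succeeded) for user '(?<user>[^']+)'

Note that ERRORLOG only records successful logins if the server's audit level is set to log both failed and successful logins, so check that setting if only failures appear.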

Why is the replication factor process not complete?

We have 3 indexers. Two weeks ago, 2 of the indexers went down; after 2 weeks of repair the servers came back up, but there is a delay in the replication factor process. Is this normal or not? The number of pending buckets is decreasing, but very slowly, and it may take days; we need a quick fix. Splunk version 6.6.5, installed on Linux servers (kernel 3.10.0-514.el7.x86_64). Please, we need your support. Thank you
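A hedged way to watch progress, assuming an indexer cluster with a master node: the master tracks the fixup tasks, and the CLI summarizes whether the replication and search factors are met:

    # Run on the cluster master
    splunk show cluster-status --verbose

Catch-up replication after peers rejoin is normally slow under live indexing load. Some clusters temporarily raise max_peer_rep_load under [clustering] in the master's server.conf to allow more concurrent replications, at the cost of extra I/O on the peers, so test carefully before relying on it.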

Display start and end time in results

I would like to write a query which starts with

    starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=* ...

taking starttime and endtime as parameters, and produces an epoch time in the result. Basically, every 1 minute I plan to execute:

    starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=* ...
    starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:02:00 index=* ...
    starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:03:00 index=* ...
    ...

and I want to get a table like:

    1533686460,1
    1533686520,1
    1533686580,1
    ...

Thanks
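A hedged sketch using the more common earliest/latest time modifiers, which accept the same %m/%d/%Y:%H:%M:%S format; since _time is already epoch seconds internally, binning and tabulating it yields the epoch,count pairs:

    index=* earliest="06/08/2018:00:00:00" latest="06/08/2018:00:03:00"
    | bin _time span=1m
    | stats count BY _time
    | eval epoch=_time
    | table epoch count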

Display the results of a search query at regular intervals with a fixed start date-time

Hi, currently I am running the Splunk search query below, where I use earliest=-0d@d latest=-2m:

    earliest=-0d@d latest=-2m | spath message | rex field=message "TradeID = (?\w+)" | dedup 1 id sortby -_time | timechart count

But my requirement is to run the above search query every 10 minutes, where earliest=-0d@d is always fixed and latest is the time when the search query runs (i.e., every 10 minutes), and to display the result of each run as a chart.
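A hedged sketch: a scheduled report with a cron schedule of `*/10 * * * *` and a time range of earliest=-0d@d latest=now re-runs the query every 10 minutes with a fixed start of midnight today. The rex capture-group name below (`id`) is an assumption, since it appears stripped in the question text:

    earliest=-0d@d latest=now
    | spath message
    | rex field=message "TradeID = (?<id>\w+)"
    | dedup id
    | timechart span=10m count

Placing this report on a dashboard panel with a 10-minute refresh gives the per-interval chart.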

Splunk DB Connect - Google BigQuery

Has anyone created a connection between Google BigQuery and Splunk DB Connect? If so, what did you use in your connection types conf file? Thanks
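A hedged, untested sketch: DB Connect's connection types are defined in db_connection_types.conf, and a BigQuery entry using Simba's JDBC driver might look like the following (the driver class and URL format come from the Simba driver documentation and vary by driver version; the serviceClass shown is DB Connect's generic JDBC service):

    [bigquery]
    displayName = Google BigQuery
    serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
    jdbcDriverClass = com.simba.googlebigquery.jdbc42.Driver
    jdbcUrlFormat = jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<project>;OAuthType=0;

The driver JAR goes into DB Connect's drivers directory, and authentication (service account key vs. OAuth) is configured through the JDBC URL properties.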

Geo Heatmap not showing data points on the map.

Hi all, I am having difficulty with geostats and the Geo Heatmap visualization. No matter what I input into the geostats command, nothing is displayed on the map. The same geostats results show up on all other mapping visualizations and display correctly. Has anyone else had these issues? Thanks in advance, Simon

This is a search example (it converts Dutch RD coordinates to WGS84 before geostats):

    index="ndov_chb" source="meetboek_2017-07-26.csv" earliest=1 latest=now
    | eval y = RD_y, x = RD_x
    | where y > 0 AND x > 0
    | eval dX = round((('x' - 155000) * 0.00001),5), dY = round((('y' - 463000) * 0.00001),5)
    | eval SomN = (3235.65389 * dY) + (-32.58297 * pow(dX,2)) + (-0.2475 * pow(dY,2)) + (-0.84978 * pow(dX,2) * dY) + (-0.0655 * pow(dY,3)) + (-0.01709 * pow(dX,2) * pow(dY,2)) + (-0.00738 * dX) + (0.0053 * pow(dX,4)) + (-0.00039 * pow(dX,2) * pow(dY,3)) + (0.00033 * pow(dX,4) * dY) + (-0.00012 * dX * dY),
           SomE = (5260.52916 * dX) + (105.94684 * dX * dY) + (2.45656 * dX * pow(dY,2)) + (-0.81885 * pow(dX,3)) + (0.05594 * dX * pow(dY,3)) + (-0.05607 * pow(dX,3) * dY) + (0.01199 * dY) + (-0.00256 * pow(dX,3) * pow(dY,2)) + (0.00128 * dX * pow(dY,4)) + (0.00022 * pow(dY,2)) + (-0.00022 * pow(dX,2)) + (0.00026 * pow(dX,5))
    | eval latitude = 52.15517 + (SomN / 3600), longitude = 5.387206 + (SomE / 3600)
    | where latitude < 60 AND longitude < 8 AND latitude > 0 AND longitude > 0
    | eval haltecodes = substr(Quaycode, 6, len(Quaycode) - 5)
    | geostats latfield=latitude longfield=longitude count

Here is an example of my geostats results:

    geobin                  latitude  longitude  count
    bin_id_zl_0_y_6_x_4     52.17996  5.42564    46659
    bin_id_zl_1_y_12_x_8    52.17996  5.42564    46659
    bin_id_zl_2_y_25_x_16   52.17996  5.42564    46659
    bin_id_zl_3_y_50_x_32   52.05388  4.82835    26785
    bin_id_zl_3_y_50_x_33   52.34348  6.23268    19760
    bin_id_zl_3_y_51_x_32   53.44860  5.62062    2
    bin_id_zl_3_y_51_x_33   53.45916  5.88079    112
    bin_id_zl_4_y_100_x_65  51.72564  4.75573    12924
    bin_id_zl_4_y_100_x_66  51.48612  5.91255    6521
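A hedged thought, since the geostats output above looks well-formed: some heatmap visualizations from Splunkbase expect raw latitude/longitude pairs rather than geostats' clustered bin output. If that applies here, feeding unclustered points may work (field names as in the search above):

    ... | stats count BY latitude longitude

If the visualization does require geostats, adding globallimit=0 removes the default category cap, which on some versions affects what gets rendered:

    ... | geostats latfield=latitude longfield=longitude globallimit=0 count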

tstats subsearch

Hi, I have a tstats query working perfectly; however, I then need to cross-reference a returned field with data held in another index. An example query, which I have shortened:

    | tstats summariesonly=t count FROM datamodel=Datamodel.Name WHERE earliest=@d latest=now datamodel.EventName="LOGIN_FAILED" by datamodel.EventName, datamodel.UserName

What I am after is then running some kind of subsearch against another index to return more information about the user. I thought of something like:

    | tstats summariesonly=t count FROM datamodel=Datamodel.Name WHERE earliest=@d latest=now datamodel.EventName="LOGIN_FAILED" by datamodel.EventName, datamodel.UserName
    | [search index=ad Name=datamodel.UserName]

However, it doesn't seem to like it. Can someone point me in the right direction? I am banging my head against the wall! Thanks!
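A hedged sketch: a subsearch can't be piped in that position, but renaming the data model field and joining on it can enrich each row (the ad index field names here are assumptions):

    | tstats summariesonly=t count FROM datamodel=Datamodel.Name WHERE earliest=@d latest=now datamodel.EventName="LOGIN_FAILED" BY datamodel.EventName datamodel.UserName
    | rename datamodel.UserName AS UserName
    | join type=left UserName
        [ search index=ad | rename Name AS UserName | fields UserName department manager ]

For larger result sets, a lookup or a stats-based merge over both datasets usually scales better than join.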

Where can I find documentation on the "Network Traffic App for Splunk"?

I'm looking for any documentation on the "Network Traffic App for Splunk". I have searched the Splunk wiki and Splunk Answers but have not found anything on this app. My apologies if my search-foo is not up to par. Thanks,

Splunk DB Connect Alternative

Hello everyone! My team and I are weighing our options for various ways to connect to our databases with Splunk; however, our main Splunk department does not have the DB Connect app installed. From what I've read, if the DB Connect app is installed on an intermediary heavy forwarder (set up strictly as a forwarder with no extraction), then the main Splunk instance must have it as well. That is not the case with us, so we are looking for alternatives. Does anyone have an alternative to the DBX app? I know that the SQLAlchemy Python library can connect to databases, but I'm not so sure how this would integrate with the heavy forwarder (maybe through Python inputs?). If anyone has any recommendations, please let me know!
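A hedged sketch of one alternative along the lines the question suggests: a scripted input on the heavy forwarder that queries the database with SQLAlchemy and prints rows to stdout, which Splunk then indexes like any other input. Everything below (DSN, table, and column names) is a placeholder:

    #!/usr/bin/env python
    # Minimal scripted-input sketch: Splunk runs this on a schedule and
    # indexes whatever is written to stdout.
    from sqlalchemy import create_engine, text

    # Placeholder connection string; swap in your driver/host/credentials.
    engine = create_engine("postgresql://user:pass@dbhost:5432/mydb")

    with engine.connect() as conn:
        # Placeholder query; a real input would checkpoint the last row seen.
        rows = conn.execute(text("SELECT ts, level, message FROM app_events"))
        for row in rows:
            # Emit key=value pairs so Splunk's automatic extraction picks them up.
            print('ts="%s" level="%s" message="%s"' % (row.ts, row.level, row.message))

The script would be registered in inputs.conf with an interval; the main gap versus DB Connect is checkpointing (tracking which rows were already indexed), which this sketch deliberately leaves out.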

Splunk Alert - No Delete Option

There is no delete option under the Edit menu for a Splunk alert. The alert is disabled now, but we need to delete it. The option is not showing up for the alert owner or the Splunk admin. Are there any other ways to delete the alert?
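A hedged alternative: alerts are saved searches, and saved searches can be deleted through the REST API regardless of what the UI shows (owner, app, and alert name below are placeholders; URL-encode the alert name):

    curl -k -u admin:changeme -X DELETE \
        "https://localhost:8089/servicesNS/<owner>/<app>/saved/searches/<alert_name>"

A missing delete option usually traces back to the object's sharing level or write access on its app, so the alert's stanza in the app's metadata/local.meta is worth checking too.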

Rex field extraction

1. Could someone help me extract the two bold words (**LRQ9923** and **PASSED**) from the following sample?

SAMPLE EVENT 1

    2018-07-02 08:51:44,648 https-nsse-nio-8663-exec-18 LRQ9923 531x698404x16 1kvc79 99.103.154.114,30.128.209.1 /best/madget/1.0/login The user 'LRQ9923' has PASSED authentication.

2. Could someone help me extract the three bold words (**JRA3620**, **FAILED**, and **3**) from the following sample?

SAMPLE EVENT 2

    2018-07-02 09:18:44,761 https-nsse-nio-8663-exec-90 anonymous 558x723020x25 5lqwk7 88.128.203.123,30.118.254.78 /best/madget/1.0/login The user 'JRA3620' has FAILED authentication. Failure count equals 3
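A sketch covering both events with one extraction (the field names user, auth_status, and failure_count are my choices, not from the original):

    ... | rex "The user '(?<user>[^']+)' has (?<auth_status>PASSED|FAILED) authentication(?:\.\s+Failure count equals (?<failure_count>\d+))?"

The final group is optional, so the same rex works for PASSED events (no count) and FAILED events (with a count).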

Splunk Python SDK error when updating KV store

I am getting this error when using batch_save to update a KV store collection through the Splunk Python SDK. This works fine when posting the same content through curl; I'm not sure if it is some limitation of the Splunk SDK. Does anyone have any idea for a quick fix?

    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/client.py", line 3724, in batch_save
      return json.loads(self._post('batch_save', headers=KVStoreCollectionData.JSON_HEADER, body=data).body.read().decode('utf-8'))
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/client.py", line 3615, in _post
      return self.service.post(self.path + url, owner=self.owner, app=self.app, sharing=self.sharing, **kwargs)
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/binding.py", line 289, in wrapper
      return request_fun(self, *args, **kwargs)
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/binding.py", line 71, in new_f
      val = f(*args, **kwargs)
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/binding.py", line 742, in post
      response = self.http.post(path, all_headers, **query)
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/binding.py", line 1208, in post
      return self.request(url, message)
    File "/opt/splunk/etc/apps/adp_cmdb/bin/splunklib/binding.py", line 1228, in request
      raise HTTPError(response)
    splunklib.binding.HTTPError: HTTP 400 Bad Request -- The provided query was invalid.
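Two hedged things to check, since a 400 from batch_save usually points at the payload rather than the SDK transport: batch_save takes the documents as separate arguments (it serializes them into one JSON array), and the server enforces a per-request limit (max_documents_per_batch_save in limits.conf, 1000 by default). A sketch that unpacks and chunks the list (the collection and records names are hypothetical):

    # Assumes an authenticated splunklib.client.Service instance `service`
    collection = service.kvstore["my_collection"]

    CHUNK = 500  # stay below the server-side batch_save document limit
    for i in range(0, len(records), CHUNK):
        # batch_save expects each document as its own argument, hence the *
        collection.data.batch_save(*records[i:i+CHUNK])

If single records still fail, verifying that every document is a plain dict that serializes cleanly to JSON (no sets, datetimes, etc.) is the other usual culprit.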