Channel: Questions in topic: "splunk-enterprise"

Reset local auth Splunk password with the Python SDK

I need to figure out how to reset a user password with the Python SDK. I see in the documentation where I can change the attributes of a user, but not the password. Any help is much appreciated!
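
A minimal sketch with splunklib (the Python SDK): user entities accept attributes through update(), and the users endpoint treats password as one of them. Host, credentials, and the username are placeholders; the connecting account needs the capability to edit users.

    import splunklib.client as client

    # Connect to the management port (8089) as an admin-level user.
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="adminpass")

    # update() POSTs the given attributes to the user's endpoint;
    # "password" resets the local auth password.
    user = service.users["someuser"]
    user.update(password="NewS3cret!")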

Splunk taking wrong time from logs

Splunk adds one hour to the timestamp when indexing logs.

Logs:

    9/18/17 3:46:01.000 PM --> time Splunk shows
    [][hello][please][help][18/Sep/2017:14:46:01 -0500] --> actual log

I have added the below in my props.conf:

    [host::xyz*]
    TZ = US/Eastern

I also tried TZ = America/New_York (GMT -5:00). The server shows this date: Sat Sep 30 15:22:18 EDT 2017
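
For comparison, a hedged props.conf sketch: the event above carries its own UTC offset (-0500), and a TIME_FORMAT that consumes that offset with %z generally takes precedence over TZ. The stanza name and lookahead are assumptions based on the sample.

    [host::xyz*]
    # %z reads the explicit -0500 offset embedded in the event itself
    TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
    MAX_TIMESTAMP_LOOKAHEAD = 64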

Setting field based on eventtype

I use **eventtypes.conf** to extract fields. Then in **tags.conf** I set **warning=enable** for some of the fields; some are **error** and others are **information**. In my search, this then shows up as **eventtype=xyz**, **tags=error**. I would like to change this so I get a new field called **severity**. How do I set the **severity** field based on **eventtype**?
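
One common approach, sketched here with made-up eventtype names (replace them with your own): derive the field inline with eval and case, or put the same expression in props.conf as a calculated field.

    index=main
    | eval severity = case(
        eventtype == "xyz_error",   "error",
        eventtype == "xyz_warning", "warning",
        true(),                     "information")
    | stats count by severity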

Cannot get a session key from the /services/auth/login REST API (returns 302 Found)

I am trying to write a microservice for my company for Splunk search through the REST API, but I am not able to get the session key from the /services/auth/login REST API, which returns 302 Found. If someone has had the same issue before, could you please help?
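
A minimal sketch of the login call with the requests library; host and credentials are placeholders. A 302 frequently means the request went to Splunk Web (port 8000) instead of the management port (8089), or used http against an https-only endpoint, so the URL below is worth checking first.

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.post(
        "https://splunk.example.com:8089/services/auth/login",
        data={"username": "admin", "password": "changeme"},
        verify=False,  # management port often uses a self-signed cert
    )
    resp.raise_for_status()
    # The response is XML: <response><sessionKey>...</sessionKey></response>
    print(ET.fromstring(resp.text).findtext("sessionKey"))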

Kind of inner join

Hello, hopefully you will understand what I mean... It was not clear how I could formulate a search to find some documentation. I have an index with a lot of fields [ f1, f2, f3, ... ]. Let's say that field f1 is the URL from the proxy and f2 is the source_ip of the request. What I would like is, starting from a set of specific "source_ip" values, all the URLs that have been accessed by these "source_ip", where each URL has been accessed by every single IP... Any idea how I can implement the query in Splunk? Thanks.
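
A sketch of one way to express "accessed by every IP in the set": count distinct source IPs per URL and keep only the URLs whose distinct count equals the size of the set. The index name and the three example IPs are placeholders.

    index=proxy (f2="10.0.0.1" OR f2="10.0.0.2" OR f2="10.0.0.3")
    | stats dc(f2) AS distinct_ips BY f1
    | where distinct_ips = 3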

Change Colors of Bar based on legend

It's a simple query; I am just trying to give a different color to each legend entry in my bar graph. Below is the search from the XML (time range -24h@h to now):

    | datamodel Incident_Management Notable_Events search
    | stats count by severity

But in the dashboard it gives just one color to all bars, i.e., the color code "0x40ff00". I think it's probably because the bar chart has just one legend entry, i.e., "count". Can somebody help me sort this out? I want the bars colored as follows: high = orange, severe = red, low = green, medium = blue.
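
A hedged Simple XML sketch: with `stats count by severity` there is only one series ("count"), so per-severity colors never apply. Reshaping the result so each severity becomes its own series lets charting.fieldColors map colors by name. The reshaping trick and hex values are assumptions; charting.fieldColors itself is a standard charting option.

    <chart>
      <search>
        <query>| datamodel Incident_Management Notable_Events search
    | eval x="count" | chart count over x by severity</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <option name="charting.chart">bar</option>
      <option name="charting.fieldColors">{"high": 0xFF8000, "severe": 0xFF0000, "low": 0x00FF00, "medium": 0x0000FF}</option>
    </chart>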

Splunk SH bundle push is very slow

Hey Splunkers, I am running into issues with applying a search head cluster bundle. The bundle is around 200 MB, includes Splunk Enterprise Security, and runs in AWS. When I run the usual apply shcluster-bundle command, everything works fine, except that it takes ~2 hours to push (3 SHs). The SH deployer is running on a t2.medium and the search heads on m4.xlarge instances. CPU is not overwhelmed during the push at all, and I have also verified the bandwidth with iperf3; it is more than all right (~500 Mb/s). There are no searches running at the moment and no data is being indexed; I am just building and testing the infrastructure. I tailed splunkd.log on the deployer during the push, and there were no WARN or ERROR entries about it either. Do you have any idea what else to test and where the root cause could be? Thank you for any feedback, Marek
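
For reference, the push command in question; the target host and credentials are placeholders.

    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme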

IIS log file parsing and removing load balancer health checks

I'm using the new 7.0.0 version of Splunk in my distributed installation (indexer, search head) and I'm trying to parse IIS logs from a Windows Server 2016. The parsing is working, but I've tried to avoid some noise (probe validation from the load balancer) using "nullqueue", and somehow that is not working; the noisy probe logs keep coming... Here we go:

**Part of the IIS log file:**

    #Software: Microsoft Internet Information Services 10.0
    #Version: 1.0
    #Date: 2017-09-30 18:22:33
    #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) cs-host sc-status sc-substatus sc-win32-status time-taken
    2017-09-30 18:22:33 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 718
    2017-09-30 18:22:38 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15
    2017-09-30 18:22:43 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15
    2017-09-30 18:22:48 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15
    2017-09-30 18:22:53 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15
    2017-09-30 18:22:58 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15
    2017-09-30 18:23:03 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 0
    2017-09-30 18:23:08 W3SVC6 10.50.10.40 GET / - 25002 - 168.63.129.16 Load+Balancer+Agent - 100.76.216.215:25002 200 0 0 15

**inputs.conf (Universal Forwarder, at C:\Program Files\SplunkUniversalForwarder\etc\system\local):**

    [monitor://C:\Logs\IIS\W3SV*\*.log]
    index = private_backend
    sourcetype = iis
    disabled = false
    ignoreOlderThan = 0d

**props.conf (at the indexer, /opt/splunk/etc/system/local):**

    [iis]
    TRANSFORMS-null = remove_log_probe

**transforms.conf (at the indexer, /opt/splunk/etc/system/local):**

    [remove_log_probe]
    REGEX = Load\SBalancer\SAgent
    DEST_KEY = queue
    FORMAT = nullQueue

I'm definitely missing something (maybe something silly, haha). Can somebody please help?

REST API Modular Input - 401 Client Unauthorized

I'm trying to get the REST input to work with the Google Nest API, which has a space in one of the headers; I think that is causing an issue. I can get other REST APIs to work on the same server. The header is the Authorization one, which includes "Bearer" and then a key. From Postman I can get to the Nest API from the server, so it's not a network issue. But splunkd.log is giving me:

    09-29-2017 15:10:52.452 +0000 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" HTTP Request error: 401 Client Error: Unauthorized

I've tried putting inverted commas around it, but that hasn't fixed it. I have also tried replacing the space with %20. The inputs.conf stanza is:

    [rest://Nest]
    auth_type = none
    endpoint = https://developer-api.nest.com/
    http_header_propertys = Authorization=Bearer c.hp9b{rest of key}
    http_method = GET
    index = nest
    index_error_response_codes = 0
    response_type = json
    sequential_mode = 0
    sourcetype = _json
    streaming_request = 0
    disabled = 0
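
A minimal sketch with the requests library to confirm the header itself is valid outside of the modular input; the token is a placeholder for the real key.

    import requests

    # The single space after "Bearer" is part of the header value.
    headers = {"Authorization": "Bearer c.hp9b..."}
    resp = requests.get("https://developer-api.nest.com/",
                        headers=headers, timeout=10)
    print(resp.status_code)  # 401 here would point at the token, not Splunk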

MissingSectionHeaderError when calling a command

I am trying to use the Splunk app MongoDB Commands to gather data from a MongoDB instance. I want to be able to query the data and display it in Splunk. When I call the command "| mongoshowdb" I get this error:

    MissingSectionHeaderError at "C:\Program Files\Splunk\Python-2.7\Lib\ConfigParser.py", line 512:
    File contains no section headers.
    file: C:\Program Files\Splunk\etc\apps\mongodb_commands\bin\..\local\mongodb.conf, line: 1
    '\xef\xbb\xbf\n'
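
For context, '\xef\xbb\xbf' is a UTF-8 byte-order mark, which Python 2's ConfigParser rejects as the first "line" of the file. A sketch of one way to strip it, assuming the rest of mongodb.conf is otherwise valid:

    # Python 2, as bundled with Splunk
    import io

    path = r"C:\Program Files\Splunk\etc\apps\mongodb_commands\local\mongodb.conf"

    # "utf-8-sig" consumes a leading BOM if present; rewriting the file
    # as plain UTF-8 leaves it BOM-free.
    with io.open(path, encoding="utf-8-sig") as f:
        text = f.read()
    with io.open(path, "w", encoding="utf-8") as f:
        f.write(text)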

Timepicker not working for base search in dashboard

Hi, we added a timepicker to a simple dashboard consisting of a base search as follows, but it's not working. Using a full search in the panels, the timepicker works properly. Would anyone please help? We're using 6.5.4. Thanks a lot.

    <search id="base">
      <query>sourcetype=syslog</query>
      <earliest>$time_tok1.earliest$</earliest>
      <latest>$time_tok1.latest$</latest>
    </search>

    <input type="time" token="time_tok1">
      <default>
        <earliest>-1mon@mon</earliest>
        <latest>@mon</latest>
      </default>
    </input>

    <search base="base">
      <query>stats count by Type</query>
      <earliest>$time_tok1.earliest$</earliest>
      <latest>$time_tok1.latest$</latest>
    </search>

Events not reflected in JMX add-on

I have added the JMX add-on to Splunk and connected to the Tomcat server via process ID; however, when I search for "sourcetype=jmx" it says 0 events returned. Also, I cannot see "jmx" as a data source in "Data Summary".
- The process ID is correct
- The Splunk single-instance setup is on the same server as the Tomcat application
- The index is the "Default" index
Any help is appreciated
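
A diagnostic sketch that is often a useful first step: modular input failures generally surface in Splunk's own _internal index, so searching it for the add-on's errors can show why no events arrive.

    index=_internal sourcetype=splunkd (ERROR OR WARN) jmx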

Failed to create a bundles setup with server name 'GUID'.

Hi, I'm trying to connect the SH cluster to the indexer cluster; I'm using Splunk version 7. All the statuses are OK, but every time I run a search this error shows up:

    [idx1] [idx2] [idx3] Failed to create a bundles setup with server name 'GUID'. Using peer's local bundles to execute the search, results might not be correct

In splunkd.log it shows:

    10-01-2017 22:01:43.115 +0800 WARN ISplunkDispatch - Gave up waiting for the captain to establish a common bundle version across all search peers; using most recent bundles on all peers instead

Please enlighten me. Thanks in advance.
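
A status check that is commonly run for this symptom, on any cluster member (credentials are placeholders):

    splunk show shcluster-status -auth admin:changeme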

Which user is my Splunk running as?

Not that familiar with *NIX, hence the question. I created a user and group called splunk and then ran Splunk for the first time as the splunk user. Now I want to ensure my Splunk is running as the splunk user and not as root. Can someone help me interpret the command and output below?

    -bash-4.2$ ps -af | grep splunk
    root      1658  1473  0 22:33 pts/0    00:00:00 su - splunk
    splunk    1659  1658  0 22:33 pts/0    00:00:00 -bash
    splunk    2121  1659  0 22:36 pts/0    00:00:00 ps -af
    splunk    2122  1659  0 22:36 pts/0    00:00:00 grep --color=auto splunk
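
A sketch of a more direct check: the listing above only shows the su session and the grep itself (ps -af omits processes without a terminal, like the daemon), so look for splunkd explicitly. The install path below assumes the default /opt/splunk.

    # [s]plunkd keeps grep from matching its own command line
    ps -ef | grep '[s]plunkd'

    # Or ask ps for the owner of the PID recorded by the daemon
    head -1 /opt/splunk/var/run/splunk/splunkd.pid | xargs ps -o user= -p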

Wrapper script to call two different scripts from an alert "run a script" action

I want to call two different scripts under the /bin/scripts folder when an alert job triggers the "run a script" action. Tips on converting it to a custom alert action are also welcome, since "run a script" is deprecated in recent Splunk versions.
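
A minimal wrapper sketch, under the assumptions that Splunk invokes alert scripts with its usual positional arguments (result count, search terms, results file path, and so on) and that SPLUNK_HOME is set in the script's environment. The two script names are placeholders.

    #!/bin/sh
    # Forward Splunk's alert arguments to both scripts in turn.
    SCRIPTS="$SPLUNK_HOME/bin/scripts"

    "$SCRIPTS/first_script.sh" "$@"
    "$SCRIPTS/second_script.sh" "$@"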

How to get a calculated column in a table

Hi Splunk Experts, I need to create a report to display the table record count difference between two databases during a period of time. Events are captured as follows:

    db_name  table_name  row_count
    x        a           4
    x        b           3
    y        a           4
    y        b           1

The report should look like this:

    table_name  x  y  rec_diff
    a           4  4  0
    b           3  1  2

Any help will be much appreciated.
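
A sketch of one way to pivot and diff, assuming db_name takes exactly the two values x and y as in the sample (the base search is elided):

    ... | chart latest(row_count) over table_name by db_name
        | eval rec_diff = abs(x - y)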

Extract values from JSON array

Hi everyone! I have a JSON output in raw format:

    {"result":{"addr":"456hR5drYrYrdY5wTYreYrdyerYe6y","workers":[["host04",{},29,1,"80000",0,22],["client3001",{"a":"0.27"},1,1,"80000",0,22],["host02",{"a":"0"},16,1,"80000",0,22],["host06",{"a":"0.27"},4,1,"80000",0,22],["client52",{"a":"0.27"},10,1,"80000",0,22],["host03",{"a":"0.54"},5,1,"80000",0,22],["host01",{"a":"0.54"},26,1,"80000",0,22],["host08",{"a":"0.53"},3,1,"80000",0,22],["f05",{},19,1,"80000",0,22],["client4004",{"a":"0.27"},76,1,"80000",0,22],["host05",{"a":"0.54"},36,1,"80000",0,22],["host07",{},6,1,"80000",0,22],["client5004",{},2,1,"80000",0,22],["client3002",{"a":"0.27"},7,1,"80000",0,22],["client4003",{"a":"0"},111,1,"80000",0,22],["host02",{"a":"0.54"},25,1,"80000",0,22],["client9006",{"a":"0.53"},21,1,"80000",0,22],["client6001",{"a":"0.55"},9,1,"80000",0,22],["P4003",{"a":"478.71"},1937,1,"256",0,24],["P6001",{"a":"349.75"},1936,1,"256",0,24],["p9006",{"a":"225.7"},1936,1,"128",0,24],["P5004",{"a":"369.91"},1936,1,"128",0,24],["P3002",{"a":"522.23"},1937,1,"256",0,24],["P52",{"a":"449.7"},794,1,"256",0,24],["P4004",{"a":"551.24"},1643,1,"256",0,24],["P6004",{"a":"406.18"},1936,1,"256",0,24],["P3001",{"a":"377.17"},1788,1,"256",0,24]],"algo":-1},"method":"stats.provider.workers"}

Here is another sample event in a more readable view:

    {"result": {
        "addr": "456hR5drYrYrdY5wTYreYrdyerYe6y",
        "workers": [
            ["host07",     {"a":"0.53"},  48, 1, "80000", 0, 22],
            ["client52",   {},             5, 1, "80000", 0, 22],
            ["host06",     {"a":"0.27"},  26, 1, "80000", 0, 22],
            ["client3002", {"a":"0"},      8, 1, "80000", 0, 22],
            ["client4004", {},             0, 1, "80000", 0, 22],
            ["host08",     {"a":"0.27"},   9, 1, "80000", 0, 22],
            ["host02",     {"a":"0.53"},  19, 1, "80000", 0, 22],
            ["client5004", {"a":"0.27"},  28, 1, "80000", 0, 22],
            ["host01",     {"a":"0.27"},  16, 1, "80000", 0, 22],
            ["client6001", {"a":"0.53"},  45, 1, "80000", 0, 22],
            ["client9006", {"a":"0.53"},  26, 1, "80000", 0, 22],
            ["host03",     {"a":"0"},    118, 1, "80000", 0, 22],
            ["host02",     {"a":"0.27"},  78, 1, "80000", 0, 22],
            ["f05",        {},             1, 1, "80000", 0, 22],
            ["host05",     {"a":"0.27"},  10, 1, "80000", 0, 22],
            ["client4003", {"a":"0.54"},  25, 1, "80000", 0, 22],
            ["host04",     {"a":"1.34"},  12, 1, "80000", 0, 22],
            ["client3001", {"a":"0.54"},  16, 1, "80000", 0, 22]
        ],
        "algo": 22
    },
    "method": "stats.provider.workers"}

I want to get the names and count the number of workers in each event. But Splunk automatically extracts the **result.workers{}{}** field, which contains all the values in one flat list:

    0 1 22 80000 24 256 1986 1987 29 host02

In the output I want to get a table like:

    Name        a     value1  value2  value3  value4  value5
    host07      0.53  48      1       80000   0       22
    client52    0.55  51      1       80000   0       22
    host06      0.27  26      1       80000   0       22
    ....
    client3002  0     8       1       80000   0       22
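
A sketch of one way to split the array out in SPL: pull each worker entry as its own multivalue item, expand, and parse the positional fields with rex. The regex is written against the worker layout shown above ([name, {optional "a"}, five numbers]), and the value1..value5 names follow the desired table.

    ... | spath path=result.workers{} output=worker
        | eval worker_count = mvcount(worker)
        | mvexpand worker
        | rex field=worker "^\[\"(?<Name>[^\"]+)\",\{(?:\"a\":\"(?<a>[^\"]*)\")?\},(?<value1>[^,]+),(?<value2>[^,]+),\"(?<value3>[^\"]+)\",(?<value4>[^,]+),(?<value5>[^\]]+)\]$"
        | table Name a value1 value2 value3 value4 value5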

How to extract fields at index time?

We have .NET logs from Serilog, and we would like to break them down into key-value pairs at index time and extract some fields. I have tried to follow the Splunk guides and blog posts, but my indexed fields are not available. (I can't post links yet, unfortunately.)

transforms.conf:

    [SerilogKVPairs]
    DELIMS = "{,}", ":"

    [LogLevel]
    REGEX = ^(?:[^ \n]* ){3}(?P<LogLevel>[^ ]+)

props.conf:

    # Extract fields from Serilog log inputs
    TRANSFORMS-KVPairs = SerilogKVPairs
    TRANSFORMS-LogLevel = LogLevel

fields.conf:

    [SerilogKVPairs]
    INDEXED = true

    [LogLevel]
    INDEXED = true

If I search with a pipe to "kv SerilogKVPairs", it all works; I have searchable values from my Serilog files. But the fields are not available in the UI unless I pipe through "kv SerilogKVPairs". We would like them to be available on all logs without having to pipe through the kv command. LogLevel does not seem to be extracted either. Is there a log which shows what is going on here? Thanks, Paul
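
For comparison, a hedged sketch of what an index-time transform typically needs in transforms.conf: WRITE_META = true plus a FORMAT that writes the indexed field, referenced from a sourcetype stanza in props.conf. Whether this regex fits the Serilog layout, and the stanza/field names, are assumptions.

    # transforms.conf
    [LogLevel]
    REGEX = ^(?:[^ \n]* ){3}([^ ]+)
    FORMAT = log_level::$1
    WRITE_META = true

    # props.conf (stanza name is a placeholder for your sourcetype)
    [your_serilog_sourcetype]
    TRANSFORMS-LogLevel = LogLevel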

Sudden excessive WinEventLog:Security events involving splunkd.exe

The Splunk Universal Forwarder is v6.4.x; the Splunk server is v6.5.x. In C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\local\inputs.conf, I have:

    [WinEventLog://Security]
    disabled = 0
    index = wmi

I would normally see about 240 WinEventLog://Security "splunkd.exe" events logged per hour (for weeks). Suddenly, that number jumped to over 4 million WinEventLog://Security "splunkd.exe" events logged per hour, and my indexing limit was exceeded. Here's what gets logged:

    TIMESTAMP
    LogName=Security
    SourceName=Microsoft Windows security auditing.
    EventCode=5156
    EventType=0
    Type=Information
    ComputerName=HOSTNAME
    TaskCategory=Filtering Platform Connection
    OpCode=Info
    RecordNumber=X
    Keywords=Audit Success
    Message=The Windows Filtering Platform has permitted a connection.
    Application Information:
        Process ID: XXX
        Application Name: \device\harddiskvolume2\program files\splunkuniversalforwarder\bin\splunkd.exe
    Network Information:
        Direction: Outbound
        Source Address: 10.X.X.X
        Source Port: XXX
        Destination Address: 172.X.X.X
        Destination Port: XXX
        Protocol: 6
    Filter Information:
        Filter Run-Time ID: XXX
        Layer Name: Connect
        Layer Run-Time ID: X

What could have possibly changed on a Windows machine that suddenly makes it log so many WinEventLog:Security "splunkd.exe" events? I could set disabled=1, but then I'd lose the ability to track who is logging in/out of that machine. Is there any way to just omit logging these kinds of "Audit Success" / "The Windows Filtering Platform has permitted a connection" events?
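
A hedged inputs.conf sketch: the WinEventLog input supports blacklisting by event code, so dropping 5156 at the forwarder keeps the rest of the Security log (including logon/logoff events) flowing.

    [WinEventLog://Security]
    disabled = 0
    index = wmi
    # Drop "Windows Filtering Platform has permitted a connection" (5156)
    blacklist = 5156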