Hi,
I am using the Splunk Add-on for AWS to ingest data from SQS/S3.
One of our requirements is that this traffic go through a proxy.
I noticed that the Splunk Add-on for AWS web UI has proxy settings. When set, it generates a passwords.conf file containing a hash; I am guessing that it is the proxy URL, hashed.
As we do everything via configuration management, it would be awesome to just have this in the conf file.
I did notice that the Splunk Add-on for AWS's `aws_settings.conf` file has a proxy stanza, but when I set the options there, nothing takes effect in the Web UI.
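For reference, this is roughly what I am setting; the key names are my guesses from the UI fields, so they may not match the add-on's actual schema:

[proxy]
# key names guessed from the UI fields -- may differ by add-on version
proxy_enabled = 1
host = proxy.example.com
port = 8080
username = svc_splunk_proxy
# presumably the value that ends up hashed into passwords.conf when set via the UI
password = ********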
Will the proxy settings in aws_settings.conf be used even though they don't show in the Web UI?
Thanks!
Hi, one of my DB inputs in DB Connect gets auto-disabled multiple times, even after I manually re-enable it. When I check the connection, everything seems fine. Is there a way to check the internal logs for errors or issues?
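What I've tried so far, though I'm not sure I'm looking in the right place (the source pattern is my guess at where DB Connect writes its internal logs; it may differ by version):

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count by source
| sort - count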
I am seeing messages like this:
09-05-2018 13:23:47.416 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.429 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.436 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.436 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
I searched for this message here, but others see it on search heads, while mine comes from a Universal Forwarder, which should not be dispatching any distributed searches.
Any thoughts? Thank you for any help.
I have search A, which gives results like field A, field B, field C, where field C is a combination of two halves, like part1.part2.
Now I want to compare/combine the results of this search with another search that gives columns like field D, field E, field C (here field C contains only part2 and does not have part1).
My question is:
1. How do I compare/combine the results of search 1 with the results of search 2 to see events where part2 of field C matches?
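To make the question concrete, here is the shape of what I'm after; the field names and search terms are placeholders:

(search 1 terms) OR (search 2 terms)
| eval part2 = mvindex(split(fieldC, "."), -1)
| stats values(fieldA) values(fieldB) values(fieldD) values(fieldE) by part2

The idea is that mvindex(split(fieldC, "."), -1) takes whatever follows the last dot, which should work for both shapes of field C, and rows where both searches contributed values would be the matches. I just don't know if that is the right approach.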
I am trying to read log files from a server. I have made all the configuration in Splunk, but data is not coming into Splunk search. When I checked Splunk's internal logs, I got a permission-denied error for that server. I logged in to the server and verified that all users have read permission on the path I am trying to monitor.
Can anyone suggest what the real cause of this issue could be?
Below is the inputs.conf configuration
[monitor:///usr2/oracle/saltlog/*logs.log]
sourcetype = oracle_os:healthcheck
index = os_na
interval = 600
crcSalt = <SOURCE>
Below is the props.conf configuration
[oracle_os:healthcheck]
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
BREAK_ONLY_AFTER = TIMESTAMP=
TRUNCATE = 9999
TZ = US/Eastern
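One more data point: I also checked that the user the forwarder runs as can traverse the whole path, since execute permission is needed on every parent directory, not just read on the files themselves. Assuming the forwarder runs as a `splunk` user:

# can the splunk user list the directory and read a file?
sudo -u splunk ls -l /usr2/oracle/saltlog/
# namei walks each path component and shows where permissions break
namei -l /usr2/oracle/saltlog/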
We have some problems with excess buckets from time to time, and I am writing a Python program to check for excess buckets; I then want to take that list and purge them. In the documentation I see the endpoint **/cluster/master/control/control/prune_index**. My questions: is this the correct API endpoint to call? Do I need to pass it a list of buckets? Or am I off track?
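For context, this is the shape of what I have so far; the host is made up, and I don't know what parameters (if any) the endpoint expects, which is exactly my question:

import requests

MASTER = "https://cluster-master.example.com:8089"  # hypothetical cluster master
ENDPOINT = MASTER + "/services/cluster/master/control/control/prune_index"

# Unclear whether this expects a bucket list in the POST body --
# that is what I am trying to find out.
resp = requests.post(ENDPOINT, auth=("admin", "changeme"), verify=False)
print(resp.status_code)
print(resp.text)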
Out of 19 Windows servers running the same services, there is one server that keeps blocking at the parsingQueue. I have increased its size to 30 MB while the others remain under 10 MB, but it keeps blocking.
I ran the following search to check how many events hit each server and found that they are all even:
index=_internal host=* group=queue name=parsingqueue | timechart span=60m limit=0 count by host
Next, I ran this search to check the size of the queue and found that, while the rest of the servers are at around 1,000, the blocking server is above 70K!
index=_internal host=* group=queue name=parsingqueue | timechart span=60m limit=0 sum(current_size) by host
They are all running with the same system specs: 64-bit, 3.07 GHz, 12 cores (6 per CPU), and 96 GB of RAM. They all have plenty of disk space as well. Is there a way to check the queue's fill/drain rate? Also, I am not sure how far I can increase the size before it becomes dangerous. Is there anything else I have missed?
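For reference, I also tried to watch how full the queue is relative to its ceiling, assuming the `current_size_kb` and `max_size_kb` fields in the queue metrics events are the right ones to use:

index=_internal host=* group=queue name=parsingqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=60m limit=0 max(fill_pct) by host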
TIA,
Skyler
Running the latest PAN FW app and add-on on Splunk 7.0.2.
I followed the troubleshooting steps to no avail. URLs are reported by a regular search. After a bit of investigation, it looks like there's no event type "pan_url", which is used in the datamodel. So if you run a simple query such as
eventtype="pan_threat"
the query returns results, including URLs, but `eventtype="pan_url"` comes up empty.
Any idea?
What's the best practice for the case of having different groups, where each group shouldn't see the other groups' logs, but they all have the same kinds of assets? All of them have Cisco switches, Linux servers, and so on.
How can we separate their logs?
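What I was considering so far is one index per group plus role-based index restrictions, roughly like this (group and index names are made up):

# indexes.conf on the indexers -- one index (or index prefix) per group
[groupa_network]
homePath   = $SPLUNK_DB/groupa_network/db
coldPath   = $SPLUNK_DB/groupa_network/colddb
thawedPath = $SPLUNK_DB/groupa_network/thaweddb

# authorize.conf on the search head -- each role searches only its own indexes
[role_groupa]
importRoles = user
srchIndexesAllowed = groupa_*
srchIndexesDefault = groupa_network

with each group's inputs routed to that group's index. Is that the right direction, or is there a better pattern?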
I have a list of event types I'm searching for based on a standard naming convention. I want to be able to return a list of event types that have not occurred in the given time frame. Right now, my search looks something like this:
eventtype=ps-*
From there, I am working with the list of returned events. I need a separate search to get a list of the event types that didn't return anything.
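To make it concrete, here is the shape I'm imagining; I'm assuming the saved/eventtypes REST endpoint can give me the full list of defined event types to diff against:

| rest /services/saved/eventtypes splunk_server=local
| search title="ps-*"
| rename title as eventtype
| fields eventtype
| append [ search eventtype=ps-* | stats count by eventtype ]
| stats sum(count) as count by eventtype
| where isnull(count)

Event types that are defined but never matched should come out with a null count. No idea if this is the idiomatic way, though.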
Thoughts?
Hi,
I installed and configured Cisco Nexus 9k Add-on.
However, I got an error like this:
**Nexus Error: Not able to Execute command through NXAPI: __init__() got an unexpected keyword argument 'context', DEVICE IP: xxxx.xxx.xx.xxx. COMMAND: show version**
Has anyone experienced this? If so, can you please share the resolution?
One possible reason I can think of is that this add-on issues NX-API calls over https while, on the switch's side, it is configured for http. Could this add-on provide an option to choose between http and https? Changing the code in collect.py is not very nice.
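For what it's worth, the switch-side transport is what I would check first; from memory (so please verify against your NX-OS version), NX-API transport on the switch is toggled like this:

switch(config)# feature nxapi
switch(config)# nxapi http port 80
switch(config)# nxapi https port 443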
Thanks in advance.
I'm having some serious difficulty figuring out how to escape a double backslash within the rex/regex SPL command.
The following regex works on regex101 `"title\\\\\"\:\\\\\"(?<title>[^\)].*)\\\\\"\,\\\\\"selection"` when extracting the log snippet below to get the "Button Title" text: > "partyId\\":\\"lahflkhasdljkflkf\\",\\"title\\":\\"Button Title\\",\\"selectionType\\":\\"button\
I found a suggestion in "Tricky behavior of escaping backslash in regex" to use \\\\ to match a single \, but that didn't do the trick. Does anyone have advice on how to escape a double backslash in the rex command? If so, please post the regex below!
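In case it helps anyone answer: the closest I've gotten is to sidestep the backslash counting entirely by skipping the escaped-quote junk between `title` and the value with `\W+` (the `button_title` field name is mine):

| rex "title\W+(?<button_title>[^\\\\\"]+)"

The character class is meant to reach the regex engine as `[^\\"]`, i.e., capture everything up to the next backslash or quote. My understanding from that earlier answer is that each literal backslash in the data needs four backslashes in the rex string (so a double backslash would need eight), but I may have that wrong, hence this post.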
Thanks!
trying to use "lookup dnslookup clientip as dvc OUTPUT clienthost AS dvc" within a search on a dashboard. Some of the "dvc" entries already show as hostname rather than IP which is causing issues. I also can't get the results to display on the dashboard graph although the previous IPs displayed no problem, however adding this string prevents it.
Total search is
index=sectool sourcetype=sectool* | fields bytes_in, bytes_out, dvc, user | stats count by bytes_in, bytes_out, dvc, user | eval total_bytes_in=bytes_in*count, total_bytes_out=bytes_out*count | stats sum(total_bytes_in) as "Bytes Inbound" sum(total_bytes_out) as "Bytes Outbound" by dvc | lookup dnslookup clientip as dvc OUTPUT clienthost AS dvc | sort dvc
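What I'm experimenting with to keep the entries that are already hostnames intact: write the lookup result to a scratch field instead of straight over `dvc`, then fall back to the original value (`dvc_resolved` is my name):

... | lookup dnslookup clientip AS dvc OUTPUT clienthost AS dvc_resolved
| eval dvc = coalesce(dvc_resolved, dvc)
| fields - dvc_resolved
| sort dvc

I'm not sure whether this is the right pattern, or whether the graph issue is separate.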
Hi all, I have been through the forums and I have made sure sysstat is installed and working. I am able to run all of the .sh commands in the bin directory, yet `index=os | head 20` comes up blank...
Specification
CPU: unknown - is cpu.sh enabled?
RAM: unknown - is vmstat.sh enabled?
Disk: unknown - is df.sh enabled?
Capacity: unknown - is df.sh enabled?
The commands do work. I am a stumped newbie.
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin # ./df.sh
Filesystem Type Size Used Avail UsePct MountedOn
/dev/mapper/vgroot-lvroot ext4 11G 7.8G 2.0G 80% /
/dev/sdc1 ext3 459G 165G 271G 38% /timesheetbk
etc. etc.....
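One more thing I checked, in case it matters: whether splunkd on the forwarder is actually launching the scripts. Searching the forwarder's internal logs where they land (my assumption is that the ExecProcessor component logs scripted-input activity):

index=_internal sourcetype=splunkd component=ExecProcessor Splunk_TA_nix
| head 20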
I have a column chart that needs to update based on the input selection (Hour/Weekday/Month, aka $field4$). I've managed to get it to update one part of the search query, but I need it to update two parts, not just one.
For example, this is my query:
index=os sourcetype=cpu cpu=all
**| eval date_wday=strftime(_time,$field4$)**
| stats avg(pctIdle) by date_wday
| rename avg(pctIdle) AS "Avg CPU"
**| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3, date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)**
| sort sort_field
| fields - sort_field
I can't seem to figure out how to also update the second part in bold (eval sort_field = case(date_wday...)) when a selection for $field4$ is made. I need it to change so that, if "Month" is selected, that part of the query becomes:
| eval sort_field = case(date_month=="January",1, date_month=="February",2, date_month=="March",3, date_month=="April",4, date_month=="May",5, date_month=="June",6, date_month=="July",7, date_month=="August",8, date_month=="September",9, date_month=="October",10, date_month=="November",11, date_month=="December",12)
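The direction I've been poking at, in case it's on the right track: have the input's change handler set a second token holding the whole case() expression, then write the query's sort step as | eval sort_field = $sort_expr$. A sketch, assuming $field4$ comes from a dropdown whose choice values are the strftime formats (an Hour condition would follow the same pattern):

<input type="dropdown" token="field4">
  <label>Group by</label>
  <choice value="%A">Weekday</choice>
  <choice value="%B">Month</choice>
  <change>
    <condition label="Weekday">
      <set token="sort_expr">case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3, date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)</set>
    </condition>
    <condition label="Month">
      <set token="sort_expr">case(date_month=="January",1, date_month=="February",2, date_month=="March",3, date_month=="April",4, date_month=="May",5, date_month=="June",6, date_month=="July",7, date_month=="August",8, date_month=="September",9, date_month=="October",10, date_month=="November",11, date_month=="December",12)</set>
    </condition>
  </change>
</input>

Is that the intended way to do this, or is there something simpler?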