Hi,
I am using the Splunk Add-on for AWS app to ingest data from SQS/S3.
One of our requirements is that the traffic goes through a proxy.
I noticed that the Splunk Add-on for AWS web UI has proxy settings. When set, it generates a passwords.conf file containing a hash; I am guessing it is the hashed proxy URL.
As we do everything via configuration management, it would be awesome to just have this in the conf file.
I did notice that in the Splunk Add-on for AWS's `aws_settings.conf` file there is a Proxy stanza. When I set the options there, they don't take effect in the Web UI.
Will the proxy settings in aws_settings.conf be used even though they don't show in the Web UI?
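For reference, a sketch of roughly what such a stanza might look like (the Proxy stanza itself is mentioned above, but the exact stanza and key names here are assumptions and may differ by add-on version, so check the add-on's README/spec files):

```
[proxy]
proxy_enabled = 1
host = proxy.example.com
port = 8080
```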
Thanks!
↧
Proxy Settings in Splunk Add-on for AWS
↧
SQL DB input gets auto-disabled multiple times
Hi, one of my DB inputs in DB Connect gets auto-disabled multiple times, even after I manually re-enable it. When I check the connection, everything seems fine. Is there a way I can check the internal logs for any errors or issues?
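One place to start is searching Splunk's own internal index for DB Connect errors around the time the input was disabled. A sketch (the `dbx*` sourcetype convention is an assumption and depends on your DB Connect version):

```
index=_internal sourcetype=dbx* (ERROR OR WARN)
| stats count by sourcetype, source
```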
↧
↧
On Forwarder: WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
I am seeing messages like this:
09-05-2018 13:23:47.416 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.429 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.436 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
09-05-2018 13:23:47.436 -0400 WARN AdminHandler:AuthenticationHandler - Denied session token for user: splunk-system-user
I searched for these here, but others see this message on search heads, while mine come from a Universal Forwarder, which should not be dispatching any distributed searches.
Any thoughts? Thank you for any help.
↧
How can I parse this JSON with SPATH
Hi, can someone help me parse the JSON below with spath? I haven't been able to get it to work. For example, I'd like to get the value of reporter.displayName = "Bob" parsed into a field.
{
u'aggregatetimeestimate':'null',
u'customfield_10400':'null',
u'fixVersions':'[]',
u'customfield_11801':'null',
u'customfield_11802':'null',
u'customfield_11711':'null',
u'versions':'[]',
u'customfield_12500':'null',
u'customfield_12501':'null',
u'customfield_11208':'null',
u'customfield_11209':'null',
u'customfield_11206':'null',
u'customfield_11207':'null',
u'customfield_11204':'null',
u'customfield_11205':'null',
u'customfield_11202':'null',
u'resolution':'null',
u'customfield_11200':'null',
u'customfield_11201':'null',
u'customfield_10007':'null',
u'priority':'{"iconUrl": "http://xxxx/xxx.png", "id": "10000", "self": "http://xxx/xxx", "name": "Normal"}',
u'customfield_10005':'null',
u'customfield_10004':'null',
u'customfield_10003':'null',
u'customfield_10002':'null',
u'customfield_10001': u'2017-11-08T11:01:06.758-0500',
u'customfield_12343':'null',
u'customfield_11604':'null',
'_time':1509976280,
u'aggregateprogress':'{"progress": 0, "total": 0}',
u'customfield_12308':'null',
u'customfield_10008':'null',
u'customfield_12342':'null',
u'aggregatetimespent':'null',
u'customfield_12340':'null',
u'customfield_12341':'null',
u'customfield_12100':'null',
u'customfield_12347':'null',
u'creator':'{"avatarUrls": {"24x24": "http://xxx/xxx", "48x48": "http://xxx/xxx", "32x32": "http://xxx/xxx", "16x16": "http://xxx/xxx"}, "emailAddress": "bob@bob.com", "timeZone": "America/New_York", "active": true, "self": "http://xxx/xxx", "name": "bob", "displayName": "Bob", "key": "bob"}',
u'customfield_12345':'null',
u'customfield_12346':'null',
u'customfield_12348':'null',
u'customfield_12349':'null',
u'timeestimate':'null',
'key':u'XXX-5249',
u'customfield_10800':'null',
u'customfield_11702':'null',
u'customfield_11701':'null',
u'customfield_11700':'null',
u'duedate':'null',
u'customfield_11705':'null',
u'environment':u'test',
u'customfield_11709':'null',
u'customfield_10901':'null',
u'subtasks':'[]',
u'progress':'{"progress": 0, "total": 0}',
u'customfield_11500':'null',
u'customfield_10200': u'1|hzx1kv:',
u'customfield_12314':'null',
u'customfield_12357':'null',
u'customfield_11302':'null',
u'issuetype':'{"iconUrl": "http://xxx/xxx", "subtask": false, "id": "10001", "self": "http://xxx/xxx", "description": "xxx", "name": "Story", "avatarId": 10615}',
u'customfield_12001':'null',
u'customfield_12364':'null',
u'description':u'xxx',
u'customfield_11501':'null',
u'customfield_12319':'null',
u'customfield_12339':'null',
u'customfield_12338':'null',
u'customfield_11203':'null',
u'customfield_12333':'null',
u'customfield_12332':'null',
u'customfield_12331':'null',
u'customfield_12330':'null',
u'customfield_12337':'null',
u'customfield_12336':'null',
u'customfield_12335':'null',
u'customfield_10100':'null',
u'status':'{"iconUrl": "http://xxx/xxx", "statusCategory": {"colorName": "blue-gray", "self": "http://xxx/xxx", "id": 2, "name": "To Do", "key": "new"}, "description": "", "id": "10002", "self": "http://xxx/xxx", "name": "To Do"}',
u'customfield_11712':'null',
u'customfield_11713':'null',
u'customfield_10006':u'9223372036854775807',
u'reporter':'{"avatarUrls": {"24x24": "http://xxx/xxx", "48x48": "http://xxx/xxx", "32x32": "http://xxx/xxx", "16x16": "http://xxx/xxx"}, "emailAddress": "bob@bob.com", "timeZone": "America/New_York", "active": true, "self": "http://xxx/xxx", "name": "bob", "displayName": "Bob", "key": "bob"}',
u'labels':'["Maintenance"]',
u'components':'[{"name": "Database", "self": "http://xxx/xxx", "id": "12307"}, {"name": "Veritica", "self": "http://xxx/xxx", "id": "12711"}]',
u'customfield_11900':'{"self": "http://xxx/xxx", "id": "11001", "value": "False"}',
u'watches':'{"self": "http://xxx/xxx", "isWatching": false, "watchCount": 2}',
u'customfield_12353':'null',
u'customfield_11100':u'0.0',
u'customfield_12302':'null',
u'resolutiondate':'null',
u'created': u'2017-11-06T08:51:20.000-0500', u'summary': u'xxx:xxx',
u'customfield_12352':'null',
u'timespent':'null',
u'assignee':'{"avatarUrls": {"24x24": "http://xxx/xxx", "48x48": "http://xxx/xxx", "32x32": "http://xxx/xxx", "16x16": "http://xxx/xxx1"}, "emailAddress": "bob@bob.com", "timeZone": "America/New_York", "active": true, "self": "http://xxx/xxx", "name": "bob", "displayName": "Bob", "key": "bob"}',
u'customfield_10700':'null',
u'workratio':'-1',
u'customfield_12202':'null',
u'customfield_12200':'null',
u'customfield_12328':'null',
u'customfield_12365':'null',
u'votes':'{"hasVoted": false, "self": "http://xxx/xxx", "votes": 0}',
u'customfield_12360':'null',
u'customfield_12361':'null',
u'customfield_12362':'null',
u'customfield_12363':'null',
u'customfield_12320':'null',
u'customfield_12321':'null',
u'customfield_12322':'null',
u'customfield_12323':'null',
u'customfield_12324':'null',
u'customfield_12325':'null',
u'customfield_12326':'null',
u'issuelinks':'[]',
u'customfield_10902':'null',
u'customfield_12306':'null',
u'customfield_11708':'null',
u'customfield_12307':'null',
u'project':'{"avatarUrls": {"24x24": "http://xxx/xxx", "48x48": "http://xxx/xxx", "32x32": "http://xxx/xxx", "16x16": "http://xxx/xxx"}, "projectCategory": {"description": "", "self": "http://xxx/xxx", "id": "10001", "name": "Bob"}, "id": "10005", "self": "http://xxx/xxx", "name": "Bob", "key": "XX"}',
u'customfield_12304':'null',
u'customfield_11211':'null',
u'customfield_12399':'null',
u'customfield_12398':'null',
u'customfield_12305':'null',
u'customfield_11602':'null',
u'customfield_11603':'null',
u'customfield_11213':'null',
u'customfield_11601':'null',
u'customfield_12329':'null',
u'aggregatetimeoriginalestimate':'null',
u'customfield_12397':'null',
u'customfield_12396':'null',
u'customfield_12311':'null',
u'customfield_12310':'null',
u'customfield_12313':'null',
u'customfield_12312':'null',
u'customfield_12315':'null',
u'customfield_11400':'null',
u'customfield_12351':'null',
u'customfield_10000':'null',
u'customfield_10300':'null',
u'updated': u'2018-09-04T14:30:28.000-0400',
u'customfield_12303':'null',
u'timeoriginalestimate':'null',
u'customfield_12301':'null',
u'customfield_12359':'null',
u'customfield_12358':'null',
u'customfield_12206':'null',
u'customfield_12355':'null',
u'customfield_12354':'null',
u'customfield_11210':'null',
u'customfield_12356':'null',
u'customfield_12309':'null',
u'customfield_12350':'null',
u'lastViewed':'null',
u'customfield_11504':'null'
}
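Worth noting: the event above is not strictly valid JSON; it looks like a Python dict repr (u'' prefixes, single quotes, nested JSON stored as quoted strings), which is likely why spath cannot walk it directly. One hedged sketch is to rex out just the reporter sub-string and feed only that to spath (the rex pattern is untested and the field names are taken from the sample above):

```
... | rex field=_raw "u'reporter':'(?<reporter_json>\{[^']+\})'"
    | spath input=reporter_json path=displayName output=reporter_displayName
    | search reporter_displayName="Bob"
```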
↧
I need to compare two results based on one part of a field (and not the entire field)
I have search A, which gives results like field A, field B, field C, where field C is a combination of two halves, like part1.part2.
Now, I want to compare/combine the results of this search with another search that gives columns like field D, field E, field C (here, field C contains only part2 and does not have part1).
My question is: how do I compare/combine the results of search 1 with the results of search 2 to find events where part2 of field C matches?
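As a rough sketch of one approach (the search and field names are placeholders from the question, and the split assumes part1 and part2 are separated by a dot): derive part2 from field C in the first search, then join on it:

```
<search A>
| eval part2=mvindex(split(fieldC, "."), 1)
| join type=inner part2
    [ search <search B> | rename fieldC AS part2 ]
```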
↧
↧
Why am I unable to read logfiles?
I am trying to read log files from a server. I have made all the configuration changes in Splunk, but the data is not showing up in Splunk search. When I checked Splunk's internal log, I found a permission-denied error for that server. I logged in to the server and verified that all users have read permission on the path I am trying to monitor.
Can anyone suggest what the real cause of this issue could be?
Below is the inputs.conf configuration:
[monitor:///usr2/oracle/saltlog/*logs.log]
sourcetype = oracle_os:healthcheck
index = os_na
interval = 600
crcSalt =
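To pin down the exact error, a search along these lines against the internal index may help (the component names are assumptions; file-monitoring permission errors typically come from the tailing processor):

```
index=_internal sourcetype=splunkd (TailReader OR TailingProcessor)
    "Permission denied" "/usr2/oracle/saltlog"
```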
↧
Remove excess buckets via REST API
We have problems with excess buckets from time to time, so I am writing a Python program to check for excess buckets; I then want to take that list and purge them. In the documentation, I see the endpoint **/cluster/master/control/control/prune_index**. My question is: is this the correct API endpoint to call? Do I need to pass it a list of buckets, or am I off track?
↧
Why is the parsingQueue blocking on only one server?
Out of 19 Windows servers running the same services, there is one server that keeps blocking at the parsingQueue. I have increased the queue size to 30MB while the others remain under 10MB, but it keeps blocking.
I ran the following search to check how many events hit each server and found that they are all even:
index=_internal host=* group=queue name=parsingqueue | timechart span=60m limit=0 count by host
Next, I ran this search to check the size of the queue and found that, while the rest of the servers are at about ~1,000, the blocking server is above 70K!
index=_internal host=* group=queue name=parsingqueue | timechart span=60m limit=0 sum(current_size) by host
They are all running with the same system specs: 64-bit, 3.07GHz, 12 cores (6 cores per CPU), and 96 GB of RAM. They all have plenty of disk space as well. Is there a way to check the fill/drain rate for the queue? Also, I am not sure how far I can increase the size before it becomes dangerous. Is there anything else I have missed?
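To see how full the queue is relative to its configured size, rather than the raw size, something like this may help (the field names are the usual ones from metrics.log queue lines and are assumed to be present in your version):

```
index=_internal host=* group=queue name=parsingqueue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=60m avg(fill_pct) by host
```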
TIA,
Skyler
↧
↧
Palo Alto Networks App: why is our Web Activity Dashboard empty?
Running the latest PAN FW app and add-on on Splunk 7.0.2.
I followed the troubleshooting steps to no avail. URLs are reported by a regular search. After a bit of investigation, it looks like there's no "pan_url" event type, which is used in the data model. So, if you run a simple query such as
eventtype="pan_threat"
the query returns results, including URLs, but `eventtype="pan_url"` comes up empty.
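One way to check whether the event type exists at all, and what it is defined as, is a sketch like the following, run from the search head where the add-on is installed:

```
| rest /services/saved/eventtypes
| search title="pan_url"
| table title, search, eai:acl.app, disabled
```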
Any idea?
↧
How do we separate Splunk logs from different groups?
What's the best practice when we have different groups, where each group shouldn't see another group's logs, but they have the same kinds of assets? All of them have Cisco switches, Linux servers, and so on.
How can we separate their logs?
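A common pattern is one index (or index prefix) per group, plus roles that restrict which indexes each group can search. A hedged sketch (the role and index names here are made up for illustration):

```
# authorize.conf
[role_group_a]
srchIndexesAllowed = group_a_*
srchIndexesDefault = group_a_*

[role_group_b]
srchIndexesAllowed = group_b_*
srchIndexesDefault = group_b_*
```

Each group's sources would then be routed to that group's indexes in inputs.conf via the `index` setting.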
↧
↧
Cisco Nexus 9k Add-on: collect.py ERROR: __init__() got an unexpected keyword argument ‘context’
Hi,
I installed and configured Cisco Nexus 9k Add-on.
However, I got an error like this:
**Nexus Error: Not able to Execute command through NXAPI: __init__() got an unexpected keyword argument ‘context’, DEVICE IP: xxxx.xxx.xx.xxx. COMMAND: show version**
Has anyone experienced this? If so, can you please share the resolution?
One possible reason I can think of is that this add-on issues the NX-API call over https while, on the switch's side, it is http. Could this add-on provide an option to choose between http and https? Changing the code in collect.py is not very nice.
Thanks in advance.
↧
How do you search for event types that return no results?
I have a list of event types I'm searching for based on a standard naming convention. I want to be able to return a list of event types that have not occurred in the given time frame. Right now, my search looks something like this:
eventtype=ps-*
And then from there, I am working with the list of returned events. I need a separate search to get a list of the event types that didn't return anything.
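A sketch of one common approach: pull the full list of matching event-type definitions via REST, then subtract the ones that actually returned events in the time range (the `ps-*` convention comes from the question; everything else is an assumption):

```
| rest /services/saved/eventtypes
| search title="ps-*"
| fields title
| rename title AS eventtype
| search NOT
    [ search eventtype=ps-*
      | stats count by eventtype
      | fields eventtype ]
```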
Thoughts?
↧
Escaping Double Backslash in Rex/Regex Command
I'm having serious difficulty figuring out how to escape a double backslash within the rex/regex SPL command.
The following regex works on regex101 `"title\\\\\"\:\\\\\"(?[^\)].*)\\\\\"\,\\\\\"selection"` when extracting the log snippet below to get the "Button Title" text: > "partyId\\":\\"lahflkhasdljkflkf\\",\\"title\\”:\\”Button Title\\”,\\”selectionType\\":\\"button\
I found a suggestion in "Tricky behavior of escaping backslash in regex" to use \\\\ to match a single \, but that didn't do the trick. Does anyone have advice on how to escape a double backslash in the rex command? If so, please post the regex below!
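For what it's worth, the usual rule of thumb is that the search-language parser unescapes the quoted rex string once before it reaches the regex engine, so matching one literal backslash in the raw event takes `\\\\` in the rex string, and a literal pair `\\` takes `\\\\\\\\`. An untested sketch against the snippet above (the capture-group name is a placeholder):

```
... | rex "title\\\\\\\\\":\\\\\\\\\"(?<button_title>[^\\\\]+)"
```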
Thanks!
↧
Can i use "job.resultCount" in Splunk 6.2.14?
Hi All,
I am new to Splunk and am facing an issue with assigning token value based on condition. I'm using the following code:
↧
↧
dnslookup not working
I'm trying to use `lookup dnslookup clientip as dvc OUTPUT clienthost AS dvc` within a search on a dashboard. Some of the `dvc` entries already show as hostnames rather than IPs, which is causing issues. I also can't get the results to display on the dashboard graph; the previous IPs displayed with no problem, but adding this string prevents it.
The total search is:
index=sectool sourcetype=sectool* | fields bytes_in, bytes_out, dvc, user | stats count by bytes_in, bytes_out, dvc, user | eval total_bytes_in=bytes_in*count, total_bytes_out=bytes_out*count | stats sum(total_bytes_in) as "Bytes Inbound" sum(total_bytes_out) as "Bytes Outbound" by dvc | lookup dnslookup clientip as dvc OUTPUT clienthost AS dvc | sort by dvc
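Since dnslookup expects an IP in clientip, one sketch (untested) is to resolve into a separate field and only overwrite dvc when a resolution actually came back, so entries that are already hostnames pass through unchanged:

```
... | lookup dnslookup clientip AS dvc OUTPUT clienthost AS dvc_resolved
    | eval dvc=coalesce(dvc_resolved, dvc)
    | fields - dvc_resolved
```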
↧
*nix app for Unix
Hi all, I have been through the forums and have made sure sysstat is installed and working. I am able to run all of the .sh commands in the bin directory, yet index=os | head 20 shows up blank.
Specification
CPU: unknown - is cpu.sh enabled?
RAM: unknown - is vmstat.sh enabled?
Disk: unknown - is df.sh enabled?
Capacity: unknown - is df.sh enabled?
The commands do work. I am a stumped newbie.
/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin # ./df.sh
Filesystem Type Size Used Avail UsePct MountedOn
/dev/mapper/vgroot-lvroot ext4 11G 7.8G 2.0G 80% /
/dev/sdc1 ext3 459G 165G 271G 38% /timesheetbk
etc. etc.....
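If the scripts run fine by hand, it is worth checking whether the forwarder's scheduled executions of them are erroring, and whether the data is landing under a different index or sourcetype than expected. Two hedged sketches (the ExecProcessor component is splunkd's usual scripted-input logger; adjust names as needed):

```
index=_internal sourcetype=splunkd component=ExecProcessor Splunk_TA_nix

| metadata type=sourcetypes index=os
```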
↧
How to change two parts of a search query based on input selection?
I have a column chart that needs to update based on the input selection (Hour/Weekday/Month, aka $field4$). I've managed to get it to update one part of the search query, but I need it to update two parts, not just one.
For example, this is my query:
index=os sourcetype=cpu cpu=all
**| eval date_wday=strftime(_time,$field4$)**
| stats avg(pctIdle) by date_wday
| rename avg(pctIdle) AS "Avg CPU"
**| eval sort_field = case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3, date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)**
| sort sort_field
| fields - sort_field
I can't seem to figure out how to also update the second bolded part (eval sort_field = case(date_wday...)) when a selection for $field4$ is made. I need it to change so that, if "Month" is selected, the second part of the query updates to:
| eval sort_field = case(date_month=="January",1, date_month=="February",2, date_month=="March",3, date_month=="April",4, date_month=="May",5, date_month=="June",6, date_month=="July",7, date_month=="August",8, date_month=="September",9, date_month=="October",10, date_month=="November",11, date_month=="December",12)
![alt text][1]
[1]: /storage/temp/255928-input-change-search.png
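One way to drive both parts from a single input is to have the input's change handler set a second token holding the whole sort expression, and then reference that token in the search. A hedged Simple XML sketch (the labels, token names, and choice values are illustrative):

```
<input type="dropdown" token="field4">
  <label>Group by</label>
  <choice value="%A">Weekday</choice>
  <choice value="%B">Month</choice>
  <change>
    <condition label="Weekday">
      <set token="sort_expr">case(date_wday=="Monday",1, date_wday=="Tuesday",2, date_wday=="Wednesday",3, date_wday=="Thursday",4, date_wday=="Friday",5, date_wday=="Saturday",6, date_wday=="Sunday",7)</set>
    </condition>
    <condition label="Month">
      <set token="sort_expr">case(date_wday=="January",1, date_wday=="February",2, date_wday=="March",3, date_wday=="April",4, date_wday=="May",5, date_wday=="June",6, date_wday=="July",7, date_wday=="August",8, date_wday=="September",9, date_wday=="October",10, date_wday=="November",11, date_wday=="December",12)</set>
    </condition>
  </change>
</input>
```

The search line would then become `| eval sort_field = $sort_expr$`. Note that the eval in the query names the field date_wday regardless of the selection, so the Month case() here tests date_wday rather than date_month.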
↧