Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

How do I extract my field using rex?

Hi, how do I extract my field using rex? Below is a sample log:

```
"{"xxxx":{"zzzz":"405","statusMessage":"Added","zzzzzzz":false}}"
```
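The sample is JSON-like, so a named capture group (which is what `rex` uses under the hood) can pull a field out by key. A minimal sketch in Python's `re`, using the `statusMessage` field from the sample above — the same pattern should carry over to `rex` with `(?<statusMessage>...)` syntax:

```python
import re

# Sample log line from the question (field names as given there)
log = '{"xxxx":{"zzzz":"405","statusMessage":"Added","zzzzzzz":false}}'

# rex uses PCRE-style named groups; Python spells them (?P<name>...)
pattern = r'"statusMessage":"(?P<statusMessage>[^"]+)"'
match = re.search(pattern, log)
status = match.group("statusMessage")
print(status)  # Added
```

`[^"]+` stops at the closing quote, so the capture contains only the field value.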

Palo Alto Networks App for Splunk 5.3.1: Why does "pancontentpack threats" return the error "External search command 'pancontentpack' returned error code 1"?

After we upgraded Panorama from 7.1.x to 8.0.5, the "pancontentpack threats" command from Splunk no longer works. However, "pancontentpack apps" still works and returns results.

**python.log**

```
2017-10-12 13:41:04,868 +0000 INFO panContentPack:185 - Getting apps from content pack on Palo Alto Networks device at ...
2017-10-12 13:41:16,827 +0000 INFO panContentPack:133 - Found 2440 apps
2017-10-12 13:41:20,820 +0000 INFO panContentPack:187 - Getting threats from content pack on Palo Alto Networks device at ...
2017-10-12 13:41:27,286 +0000 ERROR panContentPack:162 - Error parsing app: 10585
```

**Our environment**

- Splunk Enterprise 6.5.3
- SplunkforPaloAltoNetworks 5.3.1
- Splunk_TA_paloalto 3.7.1

I also upgraded the Palo Alto App and TA to the latest versions, but pulling the threats content still fails.

How can I use tstats to compare event counts with the same time window last week?

I have a search that works with stats, but fails when using tstats. Here is the search with stats:

```
index=wineventlog sourcetype="xmlwineventlog:security" earliest=-15m@m-1w latest=@m-1w
| stats count by host
| rename count as LastWeek
| appendcols [search index=wineventlog sourcetype="xmlwineventlog:security" earliest=-15m@m latest=@m | stats count by host | rename count as Today]
| table host Today LastWeek
```

Since this search takes some time, I thought I should use tstats instead, but somehow I can't make it work. The individual searches work, but not combined as a subsearch, as in this example:

```
| tstats count where index=wineventlog sourcetype="xmlwineventlog:security" earliest=-15m@m-1w latest=@m-1w by host
| rename count as LastWeek
| appendcols [search [|tstats count where index=wineventlog sourcetype="xmlwineventlog:security" earliest=-15m@m latest=@m by host | rename count as Today]]
| table host LastWeek Today
```

This search only returns values for "LastWeek" and nothing for "Today", even though the individual tstats searches work without problems. Anyone with a clue?
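One thing worth keeping in mind with this pattern: `appendcols` pastes the subsearch's rows next to the main search's rows by position, not by the `host` key, so if the two result sets contain different hosts or a different ordering, columns end up on the wrong rows. A toy Python sketch of the difference (hypothetical data, not from the question):

```python
# Toy counts per host for two time windows (hypothetical data)
last_week = {"hostA": 57, "hostB": 39, "hostC": 456}
today     = {"hostB": 40, "hostC": 470}  # hostA reported nothing today

# appendcols-style: paste rows side by side, ignoring the host key.
# Rows pair up purely by position, so counts can land on the wrong host.
pasted = list(zip(last_week.items(), today.items()))

# join-style (pairing by key, as `stats ... by host` over both windows
# or an explicit join would do): every host keeps its own numbers.
joined = {h: (last_week.get(h, 0), today.get(h, 0))
          for h in set(last_week) | set(today)}

print(pasted[0])        # hostA's LastWeek row paired with hostB's Today row
print(joined["hostA"])  # (57, 0)
```

The positional pairing is harmless when both sides return identical host lists, and silently wrong otherwise.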

How to write relative_time(now(), "$Value$") in a query

I'm using a summary index to capture all previous days and a regular index to capture today's data, to improve performance. I'm taking the earliest and latest timestamps of the regular index, and a time-range token is added. Now I want to set tokens for SIEarliest and SILatest using the regular index's timestamps. For example, if searching for the last 7 days, instead of hard-coding "+6days" I need a variable that automatically adjusts when I search for the last 30 days next time:

```
| eval SIEarliest = $time_range$ (earliesttimestamp) +)
| eval SILatest = $time_range$ + )
```

My search:

```
index=regular sourcetype=ABC
| chart earliest(_time) as RIEarliest latest(_time) as RILatest
| eval SIEarliest = $time_range$ + relative_time(now(),"-6d@d")
| eval SILatest = $time_range$ + relative_time(now(),"-d@d")
| appendpipe [ stats count | eval EarliestTime=0 | eval LatestTime=0 | where count=0 | table RIEarliest RILatest SIEarliest SILatest ]
```
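For intuition about what `relative_time(now(), "-6d@d")` computes: it steps back six days and then snaps to the start of that day (the `@d` part). A rough Python analogue of that snapping behavior, just to illustrate the semantics (not Splunk's implementation):

```python
from datetime import datetime, timedelta

def relative_time(now, days_back, snap_to_day=True):
    """Rough analogue of SPL relative_time(now(), "-<N>d@d"):
    step back N days, then snap to the start of that day."""
    t = now - timedelta(days=days_back)
    if snap_to_day:
        t = t.replace(hour=0, minute=0, second=0, microsecond=0)
    return t

now = datetime(2017, 10, 12, 13, 41, 4)
print(relative_time(now, 6))  # 2017-10-06 00:00:00
```

Because the offset is a parameter, the same function covers "-6d@d" and "-29d@d" alike, which is the kind of variable behavior the question is after.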

How can Splunk send data to a specified third-party server in RFC-compliant format?

We are trying to send Windows event logs from Splunk to RSA NetWitness.

1. We got an IP address and port number from the NetWitness team and configured TCP routing per the Splunk documentation below. Splunk did send the data via TCP routing, but the logs arrived at the NetWitness receiver in a different format, so it did not work.

   http://docs.splunk.com/Documentation/Splunk/6.6.1/Forwarding/Forwarddatatothird-partysystemsd#TCP_data
   http://docs.splunk.com/Documentation/SplunkCloud/6.6.3/Forwarding/Forwarddatatothird-partysystemsd

2. The second time we tried the Splunk App for CEF. We were able to send the data, but at the NetWitness receiver it is not in RFC-compliant format, so that did not work either.

   https://splunkbase.splunk.com/app/1847/

Is there a different approach to send data from Splunk to RSA NetWitness? Any suggestions are highly appreciated.
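"RFC-compliant" here most likely means RFC 3164 or RFC 5424 syslog framing (an assumption; the question does not name the RFC). For reference, an RFC 5424 message has the shape `<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG`, with `-` for nil fields. A minimal sketch with a hypothetical helper, just to show the layout a receiver would expect:

```python
# Minimal RFC 5424 syslog framing sketch. rfc5424() is a hypothetical
# helper, not a Splunk or NetWitness API; hostnames/values are made up.
def rfc5424(pri, timestamp, host, app, msg):
    # <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
    return f"<{pri}>1 {timestamp} {host} {app} - - - {msg}"

line = rfc5424(134, "2017-10-12T13:41:04Z", "winhost01", "splunk-fwd",
               "EventCode=4625 ...")
print(line)
```

Raw TCP routing from Splunk sends events as-is (no syslog header), which would explain a receiver rejecting the format; a syslog output with proper framing is what produces lines like the one above.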

Search for an account failing logons at repeating intervals

I'm trying to make a search that looks for an account trying to log onto a destination at a repeating interval. This will hopefully cover two potential uses:

1. Find rogue service accounts with bad passwords (or the like) repeatedly hitting a location.
2. Find bot activity / slow brute forces driven by scripts.

I'm actually not even sure where to begin. It sounds like a job for the `transaction` command, but something like `index=wineventlog EventCode=4625 | transaction Account_Name, dest` isn't returning events as I was expecting. At first I thought this would be easy and the trend would be the hard part, but even finding an account failing to log into the same machine twice in an hour is proving difficult for some reason.
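One common way to frame "repeating interval" is: compute the gaps between consecutive failure timestamps per account/destination pair, then flag pairs whose gaps have a very low standard deviation (machines retry on a clock; humans don't). A toy Python sketch of that test, with made-up timestamps:

```python
from statistics import pstdev

def looks_scripted(timestamps, max_jitter=5.0):
    """Flag a source as machine-like if the gaps between its failed
    logons are nearly constant (population stdev of gaps, in seconds,
    below max_jitter). Needs at least a few gaps to be meaningful."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return len(gaps) >= 3 and pstdev(gaps) <= max_jitter

bot   = [0, 300, 600, 900, 1200]   # every 5 minutes on the dot
human = [0, 40, 700, 1900, 2000]   # irregular retries
print(looks_scripted(bot), looks_scripted(human))
```

In SPL the analogous idea is usually a `streamstats` per account/dest to get the time delta between events, followed by `stats stdev(delta)` — but that mapping is a suggestion, not something from the question.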

Sum and average of values present in two columns of an output

Hi all, I would like to get the sum and average of Failed_Attempts and Passed_Authentications in the table below:

| _time | Failed_Attempts | Passed_Authentications |
|---|---|---|
| 2017-10-09 | 12345 | 28800 |
| 2017-10-10 | 22189 | 47452 |
| 2017-10-11 | 19697 | 50204 |
| 2017-10-12 | 4124 | 12054 |

Please suggest a query!
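Just to pin down the expected numbers from the table above, the arithmetic is a plain column sum and mean:

```python
from statistics import mean

# Column values copied from the table in the question
failed = [12345, 22189, 19697, 4124]
passed = [28800, 47452, 50204, 12054]

print(sum(failed), mean(failed))   # 58355 14588.75
print(sum(passed), mean(passed))   # 138510 34627.5
```

In SPL the equivalent aggregation is typically a `stats sum(...) avg(...)` over the two fields.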

Invalid key in default stanza for DB Connect app doc_title

I am running Splunk 6.5 and DB Connect 3.1.0. I know there is a 3.1.1, but it does not list this as a fixed issue in the changelog. Other than deleting/commenting out these bad lines, how would I fix this key? This is happening in the default configs, so I'm not sure if I somehow corrupted the install, or if something migrated from my old v2 DB Connect that shouldn't have.

In $SPLUNK_HOME/etc/apps/splunk_app_db_connect/default/checklist.conf:

```
Invalid key in stanza [dbx_java_installation_health_check] in /services/s-splunk/splunk/etc/apps/splunk_app_db_connect/default/checklist.conf, line 17: doc_title (value: DB Connect JVM installation). Did you mean 'description'? Did you mean 'disabled'? Did you mean 'doc_link'? Did you mean 'drilldown'?
Invalid key in stanza [dbx_driver_installation_health_check] in /services/s-splunk/splunk/etc/apps/splunk_app_db_connect/default/checklist.conf, line 41: doc_title (value: DB Connect JDBC Driver installation). Did you mean 'description'? Did you mean 'disabled'? Did you mean 'doc_link'? Did you mean 'drilldown'?
Invalid key in stanza [dbx_connections_configuration_health_check] in /services/s-splunk/splunk/etc/apps/splunk_app_db_connect/default/checklist.conf, line 69: doc_title (value: DB Connect connection configuration,DB Connect identity configuration). Did you mean 'description'? Did you mean 'disabled'? Did you mean 'doc_link'? Did you mean 'drilldown'?
Invalid key in stanza [dbx_data_lab_configuration_health_check] in /services/s-splunk/splunk/etc/apps/splunk_app_db_connect/default/checklist.conf, line 85: doc_title (value: DB Connect input configuration,DB Connect output configuration,DB Connect lookup configuration). Did you mean 'description'? Did you mean 'disabled'? Did you mean 'doc_link'? Did you mean 'drilldown'?
Invalid key in stanza [dbx_java_server_configuration_health_check] in /services/s-splunk/splunk/etc/apps/splunk_app_db_connect/default/checklist.conf, line 98: doc_title (value: DB Connect Java Server configuration). Did you mean 'description'? Did you mean 'disabled'? Did you mean 'doc_link'? Did you mean 'drilldown'?
```

Extract a string and place it in a new field

I'm trying to extract a string into a new field. A sample of the log is as follows:

```
productName = Special Day Argyle Socks for Men (Special Day Argyle Socks Size 10-13)
```

My extraction:

```
| rex "productName\s=\s(?<productName>\s[\w\W]+)"
```

Not sure where to go from here. When I test it in regex101, it works just fine, but when I move it to Splunk I get nothing.

**Update:** I added this, but now it doesn't stop at a new line:

```
| rex "productName\s=\s(?<productName>[\w\W]+\n)"
```
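Two details usually bite in this situation: the named group must carry a name (`(?<productName>...)` in rex — angle-bracket names often get stripped when pasted into forums), and `[\w\W]` matches newlines too, so the capture runs past the end of the line. Using `[^\n]+` stops at the line break. A Python sketch of the corrected pattern against a two-line sample (the second line is made up):

```python
import re

sample = ("productName = Special Day Argyle Socks for Men "
          "(Special Day Argyle Socks Size 10-13)\nnextField = foo")

# Named group: (?<name>...) in rex, (?P<name>...) in Python's re.
# [^\n]+ stops at the end of the line instead of swallowing the newline.
m = re.search(r"productName\s=\s(?P<productName>[^\n]+)", sample)
print(m.group("productName"))
```

The equivalent rex would presumably be `| rex "productName\s=\s(?<productName>[^\n]+)"`, though that SPL form is untested here.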

Deployment Clients Management

I want to use the SDK or REST API to read all the Windows deployment clients and then check the list against my CMDB to validate that I have all clients reporting; this would also include new client builds as they join the network. Do I need to connect to the deployment server to get this data? I am using Python and/or C#. Thanks!

Filter time-based values from inputlookup by time picker range

Hi Splunkers, I have CSV tables (inputlookups) with the latest time of a particular event for users, sources, etc., reflected in the field `_time`. These tables are used as filters for my statistics dashboard (`| inputlookup mylookup | fields user`). This helps reduce filtering time for long time ranges in the dashboard. Is it possible to filter values from the inputlookup output by the time range chosen in a time picker? Something like:

```
| inputlookup mylookup
| where _time>$timepicker.earliest$ AND _time<$timepicker.latest$
| fields user
```
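The `where` clause sketched in the question is exactly a range filter over epoch values — time-picker tokens typically resolve to epoch seconds, so the comparison is plain arithmetic. A toy Python version of the same filtering step, with made-up lookup rows:

```python
# Toy version of filtering lookup rows by a time-picker range
# (rows and epoch values are made up for illustration).
rows = [
    {"user": "alice", "_time": 1507500000},
    {"user": "bob",   "_time": 1507800000},
    {"user": "carol", "_time": 1508100000},
]
earliest, latest = 1507600000, 1508000000

# keep only users whose latest-event time falls inside the picked range
users = [r["user"] for r in rows if earliest <= r["_time"] < latest]
print(users)  # ['bob']
```

One caveat worth checking in the real dashboard: some picker presets emit symbolic values like `now` or `0` rather than epoch numbers, which would break a numeric `where` comparison.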

Why am I only seeing results from one search-peer?

I'm trying to confirm that replication and searching can happen on one NIC while ingesting happens over a different NIC. I have the following simple test setup:

- 3 indexers in a cluster, each with 2 NICs
- 1 master
- 1 search head
- 1 forwarder sending to all three indexers

The search head is connected to the master, and in Settings > Distributed search > Search peers, or on the command line, I see all three indexers in the cluster:

```
splunk list search-server
Server at URI "dsplunk-index-test-01.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-01-private.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-02-private.oit.duke.edu:8089" with status as "Up"
Server at URI "splunk-index-test-03-private.oit.duke.edu:8089" with status as "Up"
```

But I only see results from one indexer when I search from the web GUI on the search head, or from its command line. This is my command-line search (I use the same search in the web GUI, just everything inside the quotes):

```
splunk search "index=* | chart count by splunk_server"
```

If I run the command-line search on the indexers individually, I get results from that specific search peer. If I run it on the master, I get results from all three search peers:

```
splunk_server        count
-------------------- -----
splunk-index-test-01 57
splunk-index-test-02 39
splunk-index-test-03 456
```

If I run it from the search head, I get one result:

```
splunk_server        count
-------------------- -----
splunk-index-test-01 57
```

If I had configured the search head's connection to the master incorrectly, I wouldn't see the search peers in the `list search-server` results, or I wouldn't see any results at all. As it is, it makes no sense that one of the three indexers shows up and the other two don't. Firewalls are all open from the search head to both NICs on all three indexers; I can telnet to port 8089 from the search head to both NICs on all three boxes.

Here's the snippet from server.conf on the search head:

```
[clustering]
master_uri = https://splunk-master-test-01.oit.duke.edu:8089
mode = searchhead
pass4SymmKey = $1$7/FK0zLe7w3j3t4lkTuxrXaNBB9vpccQ==
```

And from the master:

```
[clustering]
cluster_label = oit
mode = master
pass4SymmKey = $1$bYZ2q5Vu//5VNuiwljjQlH9xYhGBKA==
replication_factor = 2
search_factor = 1
```

(pass4SymmKeys have been changed.) `show cluster-status` shows that everything is up and searchable, all green lights. How do I get my search head to believe that it actually should be able to see the other search peers?

How much of my indexed data is over 12 months old?

Hello, I'm looking for a report that shows the current size of my Splunk indexer and how much of that data is over 12 months old. Keeping in mind that Splunk compresses this data, I just need to know how much of the data I currently hold on my indexer (or its disk space) is 12+ months old. Can anyone assist me in figuring this out?

Which command works better to see lookup fields in fields sidebar?

In order to view lookup fields in the fields sidebar, which command would give faster results? I know to use inputlookup to verify the data, but for viewing the fields in the sidebar, which command should be used?

How to prevent directory bombs on forwarders?

I spent all day yesterday trying to figure out why a client's logs weren't indexing. Most of the time I had no access to the server in question, so I was troubleshooting from internal logs, configs, and the sporadic logs that would show up briefly after a restart. Finally, when I was just about to throw in the towel, I started poking around the directories above the target files. The monitor line had an asterisk at that point in the path, so even though most other dirs didn't match further down the line, Splunk had to check them. Several of them had 100k+ files in them, so Splunk was stuck trying to read those dirs; even a simple `ls | wc -l` took over 10 minutes on a few of them. I can find big directories with something like this (adjusting the size requirement as needed) and send the output into Splunk for alerting:

```
find /path -size +100k -type d
```

Is there a better way to avoid these landmines? Thanks, Jon
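Note that `find -size` on a directory measures the directory *entry's* size, which only loosely tracks the number of files inside it (and the correlation varies by filesystem). Counting entries directly is more reliable. A small sketch of such a scan, suitable for dumping into Splunk as a scripted input (the threshold and function name are illustrative):

```python
import os
import tempfile

def oversized_dirs(root, max_entries=1000):
    """Walk `root` and report directories whose direct entry count
    exceeds max_entries -- candidates for tightening monitor stanzas
    or adding blacklists before Splunk has to crawl them."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        count = len(dirnames) + len(filenames)
        if count > max_entries:
            hits.append((dirpath, count))
    return hits

# tiny demonstration with a temporary tree of 5 files
with tempfile.TemporaryDirectory() as root:
    for i in range(5):
        open(os.path.join(root, f"f{i}"), "w").close()
    print(oversized_dirs(root, max_entries=3))  # flags the temp root
```

Unlike the `find -size` heuristic, this counts actual entries, at the cost of walking the tree.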

How many gigs of license is this event using?

I have an event that is using X amount of space. The search is:

```
index=network default send string
```

I'd like to find out how many gigs of license this event has used over the last week. Is there a way to do that with a search?

How to extract the fields from json output and display as table

```
{
  "ERROR_CODE" : "XXX-XXX-00000",
  "ERROR_DESC" : "Success."
},
"accountBalances" : {
  "accountNumber13" : "22222222222",
  "siteId" : "200001005",
  "siteCode" : "HRD",
  "customerName" : "LiXX XXXXXX",
  "serviceAddress" : "XXXXXXXXXX, VA XXXXX-4849 ",
  "streetNumber" : "XXX",
  "streetName" : "XXXXXX",
  "city" : "CHESAPEAKE",
  "state" : "VA",
  "zip5" : "23320",
  "homeTelephoneNumber" : "XXX 000-0000",
  "acceptChecks" : "True",
  "acceptCreditCards" : "True",
  "pendingWODepositAmount" : "0.0",
  "statementInfo" : [ {
    "statementCode" : 1,
    "currentBalance" : "0.0",
    "serviceCategories" : [ "INTERNET", "CABLE", "TELEPHONE" ],
    "amountBilled" : "577.71",
    "minimumDue" : "270.6",
    "billDay" : "8",
    "statementDueDate" : "20171029",
    "totalARBalance" : "577.71",
    "ar1To30" : "307.11",
    "ar31To60" : "198.89",
    "ar61To90" : "71.71",
    "ar91To120" : "0.0",
    "ar121To150" : "0.0",
    "arOver150Days" : "0.0",
    "writeOffAmount" : "0.0",
    "totalUnappliedPayment" : "0.0",
    "totalUnappliedAdjustment" : "0.0",
    "depositDue" : "0.0",
    "depositPaid" : "0.0",
    "depositInterest" : "0.0",
    "totalMonthlyRate" : "174.23",
    "lastStatementDate" : "20171009"
  } ]
}
```
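Turning nested JSON like this into table columns is just a matter of walking the key paths (in SPL this is typically what `spath` plus `table` do, with `{}` denoting array elements in the path). A Python sketch over an assumed, trimmed version of the event — the paste above is missing its outer braces, so this snippet restores minimal valid structure for illustration:

```python
import json

# Assumed, trimmed version of the event (the original paste lacks its
# outer braces); only a few representative fields are kept.
event = json.loads("""
{
  "accountBalances": {
    "accountNumber13": "22222222222",
    "siteCode": "HRD",
    "statementInfo": [ { "currentBalance": "0.0", "amountBilled": "577.71" } ]
  }
}
""")

bal = event["accountBalances"]
row = {
    "accountNumber13": bal["accountNumber13"],
    "siteCode": bal["siteCode"],
    # statementInfo is an array; take the first statement's figure
    "amountBilled": bal["statementInfo"][0]["amountBilled"],
}
print(row)
```

The SPL analogue would presumably look like `| spath | table accountBalances.accountNumber13 accountBalances.statementInfo{}.amountBilled`, though the exact paths depend on how the event is actually indexed.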

How to make search Query using splunk Rest API

I have the following search query that I run in the Splunk search UI, where it works fine:

```
index=cpaws source=PFT buildNumber=14 type=REQUEST
| stats p98(wholeduration) as currentRunP98Duration
| appendcols [search index=cpaws source=PFT buildNumber=13 type=REQUEST | stats p98(wholeduration) as previousRunP98Duration1]
| appendcols [search index=cpaws source=PFT buildNumber=12 type=REQUEST | stats p98(wholeduration) as previousRunP98Duration2]
| eval avgP98=(previousRunP98Duration1+previousRunP98Duration2)/2
| eval success=if(currentRunP98Duration>=avgP98*0.1,"Good","BAD")
| table success
```

To print the "success" parameter, I was using the table command. Now I want to run the same query via the Splunk REST API and get back the success value. How can I do that? I went through the Splunk REST API documentation but couldn't find anything helpful. Please help me.
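The usual route is to POST the search string to one of the search-jobs REST endpoints, remembering two details: when sent over REST the query must begin with a generating command (prefix it with `search`), and the body is form-encoded. A sketch of building just the request body (host, credentials, and the choice of the export endpoint are assumptions, and only the first pipeline stage of the query is shown):

```python
from urllib.parse import urlencode

# Sketch of the form-encoded body for a Splunk search-jobs request
# (e.g. POST to /services/search/jobs/export); endpoint choice and
# any host/credentials are assumptions, not from the question.
search = ('search index=cpaws source=PFT buildNumber=14 type=REQUEST '
          '| stats p98(wholeduration) as currentRunP98Duration')
body = urlencode({
    "search": search,        # must start with a generating command
    "output_mode": "json",   # ask for JSON so `success` is easy to read
})
print(body[:60])
```

With `output_mode=json`, the `success` field would come back inside each result row of the response, ready to be parsed with any JSON library.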

Search for App Activity

I need to see if an app I have installed is actually doing anything. Is there a query I can use to check the activity of this app? It was originally scheduled as a one-time thing, so I changed that to 60 seconds initially and then 300 seconds, just to see if it would change anything. So far, I haven't received any data. Thanks!

Heavy forwarders not both showing in deployment server

Hi team, I am facing a very strange issue. I have two heavy forwarders, say host1 and host2. I am getting data from both host1 and host2 on the indexers, but only one of them gets listed on the deployment server at a time. If I restart host1, the deployment server lists host1; as soon as I go ahead and restart host2, host1 gets delisted and host2 starts showing up. So at no point can I get both listed together.

PS: Earlier, the inputs.conf on host2 mistakenly contained the stanza below. It was later fixed, but the problem remains:

```
[default]
host=host1
```

Since host2 had host=host1 configured (we realized this later and fixed it), the two forwarders have not been listed together on the deployment server since then. Any help on how to troubleshoot this issue?