Channel: Questions in topic: "splunk-enterprise"

Add-on for AWS missing regions

I go to add an AWS CloudWatch Logs input and I do not see the Canada (Central) region. I have log inputs from US East and US West working fine. When I select the AWS Region drop-down, I'm missing Canada, UK, Mumbai, and Ohio. ![alt text][1] ![alt text][2]

[1]: /storage/temp/209692-aws1.jpg
[2]: /storage/temp/209693-aws2.jpg

6.6.x summary indexing: Where'd it go? Where'd the indexes go?

In the rewrite of the searches and reports interface, summary indexing has been moved to its own menu item under Edit. Something else has changed, though: the drop-down list for choosing which summary index to use has been truncated. How do I control this? In 6.5.3 it listed all my indexes, about 150. It now shows only 3 or 4, and I have yet to figure out why those particular ones get listed.

Palo Alto Networks syslog: one host is ingested with an incorrect date

Pretty weird situation here. I'm bringing in multiple Palo Alto syslog sources, all going to the same main syslog directory and then divvied up by host name, e.g. /var/log/syslog/PaloAlto/host1/host1-PaloAlto.log.

Host 1 shows the correct date; the event's time matches the log:

    13:49:48,010108000857,TRAFFIC,end,1,2017/08/28 13:49:48,172.30.69.194,172.30.5.69,0.0.0.0,0.0.0.0,DC_Dea_Any,,,tanium,vsys3,DC_DEA_TRUSTED,DC_DEA_UNTRUSTED,ethernet6/4.1028,ethernet6/3.1028,Log_Fwd_PA-7050,2017/08/28 13:49:48,1343232963,1,54123,17472,0,0,0x5e,tcp,allow,3133,893,2240,14,2017/08/28 13:49:29,17,any,0,0,0x0,172.16.0.0-172.31.255.255,172.16.0.0-172.31.255.255,0,9,5,tcp-fin,43,0,0,0,DC-DEA,host1,from-policy

indexed as 8/28/17 1:49:48.010 PM, while host 2 shows:

    13:49:49,007801000317,TRAFFIC,end,0,2017/08/28 13:49:28,204.76.30.253,172.217.2.46,0.0.0.0,0.0.0.0,PUBLIC_TO_INTERNET,,,google-analytics,vsys10,IPS_IN,IPS_IN,ethernet1/1,ethernet1/1,Log_Fwd,2017/08/28 13:49:28,120421,1,57690,443,0,0,0x53,tcp,allow,6609,1706,4903,17,2017/08/28 13:46:38,168,computer-and-internet-info,0,31998418668,0x8000000000000000,United States,United States,0,9,8,tcp-fin,892,0,0,0,IPS_TEST,host2,from-policy,,,0,,0,,N/A

indexed as 8/2/17 1:49:49.007 PM. We're uncertain how long this has been going on. I've added the following props for the sourcetype, but it has had no effect:

    [pan:traffic]
    DATETIME_CONFIG =
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    TIME_FORMAT = %Y/%m/%d %H:%M:%S
    TIME_PREFIX = \S+\,\S+\,\S+\,\S+\,\S+\,
    category = Custom
    pulldown_type = true
    MAX_TIMESTAMP_LOOKAHEAD = 19

I tried it without MAX_TIMESTAMP_LOOKAHEAD, but no change. Any help here would be appreciated.
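A minimal props.conf sketch that may be worth testing, assuming the stanza is deployed where parsing actually happens (indexers or a heavy forwarder, not the universal forwarder) and that every pan:traffic event begins with the time,serial pair shown above. The anchored TIME_PREFIX regex is my assumption, not something from the Palo Alto add-on, and timestamp settings only affect newly indexed events. The blank `DATETIME_CONFIG =` line in the stanza above is also worth removing, since an empty value is not one of the documented options:

    [pan:traffic]
    SHOULD_LINEMERGE = false
    NO_BINARY_CHECK = true
    # Skip the first five comma-separated fields, anchored to the start of the
    # event, so the parser lands on the receive-time field (e.g. 2017/08/28 13:49:48)
    TIME_PREFIX = ^(?:[^,]*,){5}
    TIME_FORMAT = %Y/%m/%d %H:%M:%S
    # "2017/08/28 13:49:48" is exactly 19 characters
    MAX_TIMESTAMP_LOOKAHEAD = 19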

Splunk Windows UF Stops Sending Events After Certain Period

Hello guys, I have a bit of a curious case and it is really bugging our production environment. I have deployed around 12 Windows UFs to monitor Security event logs on AD servers located behind a firewall. The UF version is currently 5.0.2, and I have set the input and output configurations using a deployment server. From the first deployment, I could see all 12 servers sending logs just fine. After several hours, the number of servers dropped to 7, and the drop continued until no server was sending logs at all. As a test, I used just a single server and found that it only sends logs for about 3-4 hours at most before it stops sending completely. No errors or warnings are found in splunkd.log on the forwarder or my indexer; the splunkd.log entries were only "Connected to ...." and "... phone home ....". I also did not see any blocking events in metrics.log. My configurations are like this:

**inputs.conf**

    [WinEventLog://Security]
    disabled = 0
    index = app_ad
    sourcetype = tseladscrt
    start_from = oldest
    current_only = 0
    _TCP_ROUTING = loadheavyfwd

**outputs.conf**

    [tcpout:loadheavyfwd]
    compressed = true
    server = :9997
    sslCertPath = D:\Program Files\SplunkUniversalForwarder\etc\auth\cert.pem
    sslPassword = xxxxxxxxxxxxx
    sslRootCAPath = D:\Program Files\SplunkUniversalForwarder\etc\auth\CoreCA.pem
    sslVerifyServerCert = true

Where should I start to troubleshoot? Thank you.
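A couple of hedged starting points, assuming the install path shown in the outputs.conf above. On one affected forwarder, confirm what configuration is actually in effect and whether the output is still registered; these commands only report state and change nothing:

    REM On the affected Windows forwarder (path is an assumption based on the post)
    cd "D:\Program Files\SplunkUniversalForwarder\bin"
    splunk btool inputs list --debug
    splunk btool outputs list --debug
    splunk list forward-server

On the indexer side, a quick check of when each forwarder last held an open forwarding connection:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(_time) AS last_connection BY hostname
    | convert ctime(last_connection)

If forwarders consistently drop off after a few hours, it is also worth noting that UF 5.0.2 is very old relative to current releases; upgrading a single test forwarder is a low-risk experiment.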

How do I correct "timestamp parsing issues" for sourcetype=splunkd in Splunk_SA_CIM?

I am getting "timestamp parsing issues" in the Data Quality section of the Monitoring Console. I traced it to the Windows netlogon.log file. Its format is:

    08/28 12:43:01 COMPANY: NO_CLIENT_SITE: HQ-DT-1460 10.15.1.72

but it is indexed as being in 2015, causing the following error:

    DateParserVerbose - Time parsed (Fri Aug 28 12:43:01 2015) is too far away from the previous event's time (Sat Jan 24 12:43:01 2015) to be accepted.

Looking at the sourcetype, it is configured with Extraction = Advanced and Timestamp format = %m-%d-%Y %H:%M:%S.%l %z. I think that if I change the timestamp format to reflect this log's format it will stop the error, but because the sourcetype is part of CIM, I will just be shifting the error to other logs. Is there an easy fix for this, or am I going to have to go through the process of adding data and making a new sourcetype just for this log? Thank you in advance for any help.
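Giving netlogon.log its own sourcetype is usually the path of least resistance, since the shared sourcetype is referenced by Splunk_SA_CIM. A minimal sketch is below; the sourcetype name and the monitor path are assumptions, and because netlogon.log timestamps carry no year, telling Splunk exactly what to parse keeps it from borrowing a year from elsewhere in the event:

    # props.conf on the parsing tier (indexer or heavy forwarder)
    [windows:netlogon]
    SHOULD_LINEMERGE = false
    TIME_PREFIX = ^
    # e.g. "08/28 12:43:01" -- 14 characters, no year (Splunk assumes the current year)
    TIME_FORMAT = %m/%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 14

    # inputs.conf on the forwarder monitoring the file (adjust the path to your stanza)
    [monitor://C:\Windows\debug\netlogon.log]
    sourcetype = windows:netlogon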

How to migrate all fields from one sourcetype to a new sourcetype in the same app

There was a migration in the middleware environment, and as a result new sources, sourcetypes, and hosts are now being indexed in Splunk. I found that all the fields are lost: no field extractions exist for the new sourcetype, and extracting the fields one by one is a tedious task. Is there any way I can migrate all the fields from the old sourcetype to the new sourcetype?
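One low-effort approach, sketched below with hypothetical names, is to copy the search-time knowledge (EXTRACT-, REPORT-, FIELDALIAS-, EVAL- lines and so on) from the old sourcetype's props.conf stanza into a stanza for the new sourcetype in the same app on the search head. Nothing needs to be re-indexed, since these are all search-time settings:

    # props.conf in the app's local directory on the search head -- a sketch;
    # "middleware_old" and "middleware_new" are placeholders for your sourcetypes
    [middleware_new]
    # Copied verbatim from the [middleware_old] stanza (example lines only)
    EXTRACT-txn = ^(?<txn_id>\d+)\s+(?<status>\w+)
    FIELDALIAS-dest = dest_host AS dest

If the old fields were created through the UI, they live under Settings > Fields > Field extractions, where each one can be cloned and re-pointed at the new sourcetype; the net effect is the same props.conf change.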

Feedback wanted: Optimizing bucket sizes in a small-scale environment (<5GB/day)

Using Splunk Enterprise 6.6.1 in a small-scale (<5GB/day), cost-sensitive deployment. Right now our hot/warm/cold storage is all the same (straight EBS), and we roll frozen to S3 and archive it. As volume has come up we have seen our storage window drop to about two months, and we would really like to have a year online. To minimize costs we are considering mapping warm buckets to EBS Throughput Optimized (st1) volumes and cold buckets to EBS Cold (sc1) volumes. I've found references to doing this in a Splunk blog post (April 2017, on new AWS storage types) and the Splunk AWS tech brief (sorry, not enough points to post links), but the data there is either larger scale or a fairly shallow reference.

I'm wondering if anyone here has done this and found that Splunk access patterns match Amazon's intended usage patterns for Throughput Optimized and Cold volumes. For example, assuming we have a good enough definition of hot (say 60 days), with warm covering the balance of the year and cold covering some longer period, should we expect to see "streaming"-style requests to the throughput-optimized and cold volumes only periodically, and well within burst ranges (again, assuming the vast majority of queries fall within that "hot" window)? Looking for any positive or negative experiences, but also any gotchas that cropped up attempting such a storage configuration (for example, did it always blow burst limits and end up tragically slow?). Any perspectives appreciated!
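For what it's worth, the mechanical side of the split is just indexes.conf volumes; a minimal sketch is below. Volume names, mount points, and sizes are placeholders rather than sizing advice, and note that hot and warm buckets share homePath, so the practical split is hot+warm on st1 versus cold on sc1:

    # indexes.conf -- a sketch, not a recommendation
    [volume:warm_st1]
    path = /splunkdata/warm          # EBS st1 (throughput-optimized) mount
    maxVolumeDataSizeMB = 2000000

    [volume:cold_sc1]
    path = /splunkdata/cold          # EBS sc1 (cold HDD) mount
    maxVolumeDataSizeMB = 4000000

    [main]
    homePath = volume:warm_st1/main/db
    coldPath = volume:cold_sc1/main/colddb
    thawedPath = $SPLUNK_DB/main/thaweddb
    # Cap hot+warm so buckets roll to the sc1 volume after roughly your "hot"
    # window (the size below is a placeholder)
    homePath.maxDataSizeMB = 150000
    # Keep about a year searchable before buckets roll to frozen (S3 archive)
    frozenTimePeriodInSecs = 31536000

Whether sc1's burst model holds up then comes down to how often searches reach back past the hot+warm window, which is exactly the access-pattern question you're asking about.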

Splunk Add-on for Infoblox: extraction for DHCP events

We often look to see whether DHCP ranges are out of leases or just not configured correctly by looking for the messages that read "Peer holds all free leases" (which in Blox v8.x now reads "no permitted ranges with available leases"). However, the network that is being referenced doesn't get parsed out by the Infoblox TA, and it's a little hard to sort on. Here's the expression for a field "network" that works for us; hope it's helpful for others:

    ^(?:[^:\n]*:){9}\s+\w+\s+(?P<network>[^:]+)
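For anyone who wants to wire this up permanently rather than using rex inline, a props.conf sketch is below; the sourcetype name is an assumption, so use whatever sourcetype the Infoblox TA assigns to your DHCP syslog:

    # props.conf on the search head
    [infoblox:dhcp]
    # Capture the network referenced by "Peer holds all free leases" /
    # "no permitted ranges with available leases" messages
    EXTRACT-network = ^(?:[^:\n]*:){9}\s+\w+\s+(?P<network>[^:]+)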

How to correlate external data to data that is indexed in Splunk

We have transactional data in Splunk that we need to correlate to chargeback data that is manually downloaded from an external source. What apps/tools/commands can be used to ingest the external data and correlate it to the data already in Splunk? There are some fields that can be used to correlate (e.g. auth code, account number), but there are other fields exclusive to the chargeback file that are not indexed in Splunk. Is it possible to add those fields into Splunk without the use of a forwarder?
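Yes: the simplest path that avoids a forwarder entirely is to upload the chargeback CSV as a lookup table file (Settings > Lookups > Lookup table files) and join it to the indexed events at search time. A hedged sketch is below; the file name and field names are placeholders for whatever columns your chargeback export actually has:

    index=transactions sourcetype=payment_events
    | lookup chargebacks.csv auth_code, account_number OUTPUT chargeback_amount, chargeback_reason, chargeback_date
    | where isnotnull(chargeback_reason)

If you instead want the chargeback rows searchable as events in their own right, the same CSV can be indexed through Settings > Add Data (upload), which also does not require a forwarder.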

Splunk Add-on for F5 BIG-IP: "Fail to set active folder as partition '/Common'". How to open port 443?

I'm receiving the error in the title when I enable the F5 task in the add-on. As far as I can tell the server information is configured correctly, and I'm using the same account that is set on the F5 itself. One of the existing answers, [Fail to set active folder as partition "/Common" for Template][1], said to open port 443, but it, and the documentation, fail to mention where to open said port. I can telnet to the F5 from my heavy forwarder on port 443, but 443 isn't running in Splunk, nor is it allowed, as I'm running as the splunk user and not root, per best practices. Could someone please explain what to do here? Thank you.

[1]: https://answers.splunk.com/answers/386502/fail-to-set-active-folder-as-partition-common-for.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev

Summary indexing: is saved search ownership affecting whether events are saved to the summary index?

Hello, I have the following saved search configured to run daily on a cron schedule. The scheduled job appears to run on time as expected, but the search doesn't save any events to the summary index.

Saved search:

    index=index host=hosts sourcetype=sourcetype source=somelogfile.log | addinfo | eval _time = info_max_time | rename xheaders.X-NOTIFICATION-TYPE to "Notification Type" | sistats count by "Notification Type", reportField | sort - psrsvd_gc | collect spool=t uselb=t addtime=f index="summary" name="name" marker="report="name"

If I take out the *collect* clause and change *sistats* to *stats*, the query does return results. I know my account has permissions to write to the summary index. In the same environment I do have one job running and saving to the summary index as expected; the only difference I can see is that the working one has "nobody" as the owner, while the ones that are not functional have my username as the owner. Also, strangely, the same configuration works in our Pre-Production environment. The only real difference is that in Production our Splunk administrators use the Deployer to push the saved search configuration. Has anybody else run into this type of issue, or know what I may have misconfigured? Regards, Cory
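Two things may be worth checking. First, the marker quoting in the pasted search (marker="report="name") is unbalanced, which by itself can make collect misbehave. Second, an alternative that sidesteps hand-written collect options is to let the scheduled report's built-in summary indexing action do the writing. A hedged savedsearches.conf sketch is below; the stanza name, schedule, and marker field are placeholders:

    [notification_type_summary]
    enableSched = 1
    cron_schedule = 0 1 * * *
    search = index=index host=hosts sourcetype=sourcetype source=somelogfile.log \
      | addinfo | eval _time = info_max_time \
      | rename xheaders.X-NOTIFICATION-TYPE AS "Notification Type" \
      | sistats count by "Notification Type", reportField
    # Built-in summary indexing action writes to the target index and adds the marker field
    action.summary_index = 1
    action.summary_index._name = summary
    action.summary_index.report = name

Given that the only visible difference from the working job is ownership, reassigning the non-working reports to app-level sharing with owner "nobody" is also a cheap experiment.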

Possible to change the zoom depth on Clustered Single Value Map Visualization app?

We're plotting numerous objects on the map, and in some cases we'd like a bit more control over the zoom depth, meaning we'd like to be able to zoom to a level between the preset zoom "stops." Is that possible?

How can I add two new fields to my logs?

Hello, on my servers I use the combined Apache log format, but I have added two other fields at the end of the logs: SSL_PROTOCOL and X-Forwarded-For.

    LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" %{SSL_PROTOCOL}x %{X-Forwarded-For}i" combined

The logs look like this:

    192.168.1.1 - - [28/Aug/2017:22:27:26 +0200] "GET /production/file HTTP/1.1" 200 601 "-" "Ruby" TLSv1.2

or

    192.168.1.1 - - [28/Aug/2017:22:27:26 +0200] "GET /production/file HTTP/1.1" 200 601 "-" "Ruby" TLSv1.2 192.168.2.1 192.168.6.2

With the default access_combined sourcetype, the protocol and X-Forwarded-For values land in other fields. I would like to add two new fields for them, such as TLS_version and xforwarded. Any idea how to do this? Regards
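A hedged search-time sketch is below, assuming the events keep the access_combined sourcetype and the two extra values always trail the quoted User-Agent. The regex and field names are mine, not part of the built-in sourcetype; note that requests without TLS log "-" for %{SSL_PROTOCOL}x and would not match this particular pattern:

    # props.conf on the search head (e.g. $SPLUNK_HOME/etc/apps/search/local/props.conf)
    [access_combined]
    # TLS_version = the %{SSL_PROTOCOL}x token; xforwarded = everything after it
    # (X-Forwarded-For can hold several comma/space-separated addresses)
    EXTRACT-tls_xff = \s(?<TLS_version>TLSv[\d.]+|SSLv\d)(\s+(?<xforwarded>\S.*?))?\s*$

If you would rather leave the built-in extractions untouched, the same EXTRACT line can live under a custom sourcetype cloned from access_combined instead.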

Can I use data from a lookup table to display an error code description as a tooltip?

Hello everyone, I am trying to customize the mouse-over tooltips on the dashboard. I have imported a CSV file of error codes and error code descriptions as a lookup table. ![alt text][1]

[1]: /storage/temp/210688-cusersfumpictureserrorcode.png

I want to add one more column for the error code description on the tooltip. I just wonder whether I can use the data from the lookup table and display it as the error code description in the tooltip. I have struggled with this for a long time. Can anyone help me out?
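Assuming the CSV has been saved as a lookup table file, the usual approach is to join the description into the search that drives the panel, so it is available as an ordinary result field. The file and column names below are placeholders for whatever your lookup actually contains:

    index=app_logs sourcetype=app_errors
    | stats count BY error_code
    | lookup error_codes.csv error_code OUTPUT error_description
    | table error_code, error_description, count

Whether the extra column then shows up in the hover tooltip depends on the visualization; chart tooltips generally display only the plotted fields, so for a chart it may be simpler to fold the description into the category label (e.g. `| eval label=error_code." - ".error_description`) or expose it through a drilldown or table column.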

How to correlate the time of multiple searches for an ITSI glass table visualization?

Hi, I'm running 40 base searches on the [ITSI glass table](https://docs.splunk.com/Documentation/ITSI/2.6.1/User/BuildGlassTable): 4 groups, each containing 10 searches. In each group, the results of 9 of the searches need to line up with the time of the 10th (the total), and each base KPI search is scheduled to run every 5 minutes. My problem is that the searches all run at different times, so the total is not being compared against the other 9 searches from the same point in time. Is there any way to run all the searches at the same point in time? Thanks in advance.