Channel: Questions in topic: "splunk-enterprise"

DB Connect and sourcetypes

Good evening. I was using DB Connect and it was forwarding events to my indexers; searches were working and everything was great. However, the DBA then cleaned the source DB the events were coming from, and now my index is empty: no events and no sourcetype. A few questions:
1. Should I create my sourcetype on the SH as well, given that it is created on the HF (where DB Connect is installed)?
2. When the source DB is cleaned and all events are removed, is it expected behaviour that the events are removed from the Splunk indexes as well?
Thanks, R
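
For orientation, a minimal hedged sketch of how sourcetype settings are commonly split between the heavy forwarder and the search head; the sourcetype name my_db_sourcetype and the setting values are placeholders, not taken from the question:

    # props.conf on the heavy forwarder (index-time settings, applied where the data is parsed)
    [my_db_sourcetype]
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    SHOULD_LINEMERGE = false

    # props.conf on the search head (search-time settings, used when the events are searched)
    [my_db_sourcetype]
    KV_MODE = auto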

Log event skipped on read

Hi, I'm generating a stats (CSV) file that is updated every second. The log has no errors/skips, but I've found that if I don't specify an interval within inputs.conf it will randomly miss and/or partially read the last event as it's being written. If I set an interval, say 10 seconds, then I get a missed event every 10 seconds without fail. I'm using collectl to populate the file. So I guess my question is: is it possible to skip the last event, but read it on the next run? Here are my inputs and props for completeness.

inputs.conf

    [monitor:///var/tmp]
    whitelist = sysRes-log-\d{8}\.tab
    disabled = false
    index = os
    host = logserver
    sourcetype = sysStats
    #multiline_event_extra_waittime = true
    recursive = false
    #interval = 5

props.conf

    [sysStats]
    DATETIME_CONFIG =
    FIELD_DELIMITER = space
    INDEXED_EXTRACTIONS = csv
    KV_MODE = none
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    PREAMBLE_REGEX = (^##|^#\s)
    SHOULD_LINEMERGE = false
    TIMESTAMP_FIELDS = Date,Time
    TIME_FORMAT = %Y%m%d %H:%M:%S
    BREAK_ONLY_BEFORE_DATE = true
    category = Structured
    description = Space value format. Set header and other settings in "Delimited Settings"
    disabled = false
    pulldown_type = true
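
As a hedged aside, not taken from the post: time_before_close is an existing inputs.conf monitor setting that controls how long the tailing processor waits after reaching EOF before it closes the file, which is the kind of knob that relates to part-read last lines; a sketch reusing the same stanza, with the value chosen only for illustration:

    [monitor:///var/tmp]
    whitelist = sysRes-log-\d{8}\.tab
    index = os
    sourcetype = sysStats
    # wait longer after EOF before considering the file done,
    # so a line that is still being written has time to complete (the default is 3 seconds)
    time_before_close = 10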

Splunk UF Deployment - Possible Issues

Hello. We are planning on deploying UFs across our enterprise, roughly 3,000 systems. Currently we have deployed UFs to 50 systems and have seen no issues. Before doing a large deployment covering our entire enterprise, I was curious whether anyone has seen issues arise from deploying UFs at this scale?

Azure File Share and Splunk

Hello everyone. I have an Azure File Share folder containing log files. Is there a way to read all the files from the Azure File Share folder and show the logs in Splunk Web? Thanks.
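
One common pattern, offered here as an assumption rather than something stated in the question, is to mount the Azure File Share (for example over SMB) on a host running a forwarder and point a standard monitor input at the mount; the mount point, index, and sourcetype below are placeholders:

    # inputs.conf on the forwarder; /mnt/azurefiles and the index/sourcetype names are hypothetical
    [monitor:///mnt/azurefiles/logs]
    disabled = false
    recursive = true
    index = azure_logs
    sourcetype = azure_file_logs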

Replacing backslash not working in SEDCMD after re-directing through transforms.conf and applying it in props.conf.

Hi, I am trying to strip the backslash character from JSON data. It works when I apply the SEDCMD definitions in props.conf under sourcetype mysrc. But when I redirect the definitions to transforms.conf (custom_data_one and custom_data_two) to transform data matching a particular pattern and extract the required data from the JSON event, and then apply the SEDCMD to the resulting sourcetype (mysrc_two), it doesn't work. Please share your thoughts on this.

Data:

    {"docker":{"container_id":"852241528698541tzfjztdgtzjsxf"},"kubernetes":{"container_name":"a-kterminal","namespace_name":"kterminal","pod_name":"a-kterminal-555-85chghv","pod_id":"858gh-zgzh-gjh-ghg-896545213","labels":{"application":"a-kterminal","createdBy":"k-rass-template","deployment":"a-kterminal-555","deploymentConfig":"a-kterminal","deploymentconfig":"a-kterminal"},"host":"sdeb-gv-g58","master_url":"https://kubernetes.default.hgfbsjbgsk","namespace_id":"uzsefgvshj-dsgfvjhdv-ztfvsjhybv","namespace_labels":{"app_code":"mycode","network-policy":"true","splunk":"true","splunkindex":"myindex"}},"message":"2019-11-04 14:07:12.321 TRACE 1 --- [nio-8080-exec-4] c.k.k.d.trackinglogger.TrackingLogger : {\"timeStamp\":\"2019-11-04T14:07:12.321Z\",\"country\":\"DE\",\"environment\":\"at\",\"payload\":\"/bye/0\",\"loggingVersion\":\"1.0.0\",\"sessionId\":\"uzsefgvshj-dsgfvjhdv-ztfvsjhybv\",\"terminalId\":\"ABC-12345TST0103\",\"storeId\":\"8950\",\"floor\":\"0\",\"type\":\"System\"}\n","level":"info","hostname":"abc-555-g85","pipeline_metadata":{"collector":{"ipaddr4":"123.12.00.123","ipaddr6":"abc::abc5:abc54:a12:12a","inputname":"fluent-plugin-systemd","name":"fluentd","received_at":"2019-11-04T14:07:13.101993+00:00","version":"0.12.43 1.6.0"}},"@timestamp":"2019-11-04T14:07:12.321816+00:00","viaq_index_name":"project.kterminal.uzsefgvshj-dsgfvjhdv-ztfvsjhybv","viaq_msg_id":"uzsefgvshj-dsgfvjhdv-ztfvsjhybv","forwarded_by":"splunk-connect-1-854ik","source_component":"t01"}

Data from which all backslashes (\) need to be removed to view the data in proper JSON format:

    {\"timeStamp\":\"2019-11-04T14:07:12.321Z\",\"country\":\"DE\",\"environment\":\"at\",\"payload\":\"/bye/0\",\"loggingVersion\":\"1.0.0\",\"sessionId\":\"uzsefgvshj-dsgfvjhdv-ztfvsjhybv\",\"terminalId\":\"ABC-12345TST0103\",\"storeId\":\"8950\",\"floor\":\"0\",\"type\":\"System\"}

Configurations:

props.conf

    [mysrc]
    TRUNCATE = 0
    CHARSET = UTF-8
    KV_MODE=JSON
    SHOULD_LINEMERGE=false
    SEDCMD-remove_header = s/{\"docker.*\,\"message":.*\s+\:\s+//g
    SEDCMD-remove_footer = s/\\n"\,\"level"\:.*//g
    SEDCMD-replace_backslash = s/\\//g

    [mysrc_one]
    TRUNCATE = 0
    CHARSET = UTF-8
    KV_MODE=JSON
    SHOULD_LINEMERGE=false
    TRANSFORMS-kdt-one = custom_data_one
    TRANSFORMS-kdt-two = custom_data_two

    [mysrc_two]
    TRUNCATE = 0
    CHARSET = UTF-8
    KV_MODE=JSON
    SHOULD_LINEMERGE=false
    SEDCMD-replace_backslash = s/\\//g

transforms.conf

    [custom_data_one]
    REGEX = "splunkindex":"myindex"
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::mysrc_two

    [custom_data_two]
    REGEX = ({\"docker.*"splunkindex":"myindex"}},\"message":.*\s+\:\s+)(.*)(\\n"\,\"level"\:.*)
    DEST_KEY = _raw
    FORMAT = $2

Thanks!

Splunk Add-on for AWS

Hello. The add-on configured for AWS runs from 3 HFs to get data from an SQS queue; however, on the SQS side "Messages Available" grows to 999K+ and is not getting cleared, while "Messages in Flight" appears to be around 30. I tried increasing the interval to 20 secs on the CloudTrail input to see if that helps, but it did not. The queue still grows, and I don't see any errors in splunk_ta_aws_cloudtrail_main.log:

    "processing 20 records in s3:logs*/AWSLogs/*..json.gz"
    "fetched 20 records, wrote 20, discarded 0, redirected 0 from s3:logs*/AWSLogs/*..json.gz"

Any suggestions on how to ensure the queue is read so that "Messages Available" gets cleared? Thanks

No events indexed from REST API for Twitter

I am very new to Splunk, and I have just connected the Twitter API to my Splunk data source. This is how my configuration looks: ![alt text][1] ![alt text][2] [1]: /storage/temp/275155-1.png [2]: /storage/temp/275156-2.png I am able to index tweets starting from a search date and the dates that come after, but I am not able to index tweets that happened on a certain earlier date. For example, if I go to the Search & Reporting app and run a search, I can see events that happened after my search began. I tried to modify the date and check again, but I still see no events.

Does `maxTotalDataSizeMB` apply to all indexes on one indexer?

I am a beginner in Splunk and I have a doubt related to the `maxTotalDataSizeMB` property. Assume I have only one indexer. I have created many indexes, like `web_app`, `iot`, etc., so a separate index DB is created for each of them on our only indexer. When we set a value for `maxTotalDataSizeMB`, does it apply to the combined size of all index DBs on our only indexer, or individually to each one of them?
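
For context, and not taken from the question itself: `maxTotalDataSizeMB` is defined per index stanza in indexes.conf, so each index carries its own limit; a minimal sketch with placeholder values:

    # indexes.conf: each stanza has its own maxTotalDataSizeMB (the default is 500000 MB per index)
    [web_app]
    homePath   = $SPLUNK_DB/web_app/db
    coldPath   = $SPLUNK_DB/web_app/colddb
    thawedPath = $SPLUNK_DB/web_app/thaweddb
    maxTotalDataSizeMB = 100000

    [iot]
    homePath   = $SPLUNK_DB/iot/db
    coldPath   = $SPLUNK_DB/iot/colddb
    thawedPath = $SPLUNK_DB/iot/thaweddb
    maxTotalDataSizeMB = 50000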

Change Cluster Map Color to solid color with error

Hello, I am trying to make my cluster map pie chart turn entirely one color when there is an event containing an error. So instead of being mostly green with a little bit of red, I would like the whole pie to turn red. Thanks!

Appinspect in CI Pipeline - Memory use?

I'm using AppInspect (2.0) in my Bitbucket Pipelines step as a check on merge. I don't remember this happening in the past, but now it's taking even longer than usual to run (from about 5 minutes to now never finishing) and my build step is running out of memory. I am running it in test mode. Any ideas?

Why doesn't a > WHERE clause work when an = does?

I cannot seem to get my search to return results when comparing a property with a greater-than comparison, even though an equals comparison does work. The 'elements' property in my message occurs zero or more times per event, and each element has a 'y' value. What I'm trying to accomplish is to count each event where any of its elements has a y value greater than some threshold.

Example. This search returns 2:

    index="lab" source="*-test" | eval y='line.message.space-document.design.elements{}.y' | where y="1664" | stats count

This search returns 0, when it should return at least as many as the search above:

    index="lab" source="*-test" | eval y='line.message.space-document.design.elements{}.y' | where y>"1663" | stats count
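
As a hedged aside that is not part of the question: comparing against a quoted value can behave as a string comparison in SPL, and tonumber() makes the numeric intent explicit; a sketch along the lines of the searches above, assuming y holds a single value per event (a multivalue y would need extra handling, for example with mvfilter):

    index="lab" source="*-test"
    | eval y='line.message.space-document.design.elements{}.y'
    | where tonumber(y) > 1663
    | stats count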

How to split Cluster Master/Deployment server into two separate servers?

Hi - I am migrating Splunk to new hardware and looking for a way to split the combined cluster master/deployment server into two separate servers, as recommended. Can anyone advise which files need to go to which instance? It would also be very helpful to see the CLI commands for deploying indexes to indexers and TAs/apps to forwarders. Thank you,
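
For orientation, a hedged sketch of where each role keeps its deployable content and how each pushes it out; the directories and commands below are the standard defaults rather than anything taken from the question:

    # Cluster master: content destined for the indexer peers lives here
    $SPLUNK_HOME/etc/master-apps/
    # push the bundle to the peers
    splunk apply cluster-bundle

    # Deployment server: apps destined for forwarders live here
    $SPLUNK_HOME/etc/deployment-apps/
    # serverclass.conf maps apps to clients; reload after changes
    splunk reload deploy-server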

How to extract a field with a NULL/blank value

I am working with Windows event logs for failed logons (EventCode 4625) and I have a log that has null/blank values for Account Name and Account Domain. When I try to extract the fields, I can see in the IFX that they are being captured as what seems to be a null/blank value using my regex below. When I save the extraction, all of the other fields I am extracting work, but the Account and Domain fields are not assigned a value and do not show up as extracted fields. My question is: how do I extract a null/blank value from a log and have Splunk still recognize it as a field with a null/blank value?

The regex I am using is:

    (?s)EventCode=4625.+?ComputerName=(?[^\s]+).+?Logon Type:\s+(?\d).+?Account Name:\s+(?[^\r\n]*)\sAccount Domain:\s+(?[^\r\n]*)(Failure Reason:).+?Caller Process Name:\s+(?[^\s]+).+?Workstation Name:\s+(?[^\s]+).+?Source Network Address:\s+(?[^\s]+).+?Source Port:\s+(?[^\s]+)

The log looks like this:

    11/15/2019 12:36:54 PM
    EventCode=4625
    ComputerName=somehost
    Message=An account failed to log on.
    Security ID: DOMAIN\someuser
    Account Name: someuser
    Account Domain: DOMAIN
    Logon ID: 0x0000000
    Logon Type: 3
    Account For Which Logon Failed:
    Security ID: NULL SID
    Account Name:
    Account Domain:
    Failure Reason: An Error occured during Logon.
    Status: 0x00000000
    Sub Status: 0x0
    Caller Process ID: 0x0000
    Caller Process Name: C:\Windows\System32\someprocess.exe
    Workstation Name: somehost
    Source Network Address: -
    Source Port: -
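
As a hedged, search-time workaround not taken from the post: Splunk generally does not create a field for an empty capture, so one option is to backfill the missing values after extraction with fillnull; the index and field names below are placeholders for whatever the extraction actually produces:

    index=wineventlog EventCode=4625
    | fillnull value="(blank)" Account_Name Account_Domain
    | stats count BY Account_Name Account_Domain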

Contingency table using dictated column fields

I am currently looking to make a table that shows how values from 5 fields (the first five rows that Splunk says have the biggest count) end up being spread into 5 new fields. As of now, I have maxcols and maxrows set to 5. I know the 5 new fields that I want to specifically look at. Is there any way to call these fields out when I am doing the search? My current search looks like this:

    index=name | 'data' | contingency group newgroup maxcols=5 maxrows=5 usetotal=false

I was hoping there would be some way to replace maxcols=5 with something like col1=fielda col2=fieldb, etc.
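
One hedged workaround, not taken from the question: restrict the column field to the specific values of interest before contingency instead of relying on maxcols; fielda, fieldb, and so on are placeholders for the actual values:

    index=name
    | search newgroup IN ("fielda", "fieldb", "fieldc", "fieldd", "fielde")
    | contingency group newgroup usetotal=false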

When no_priority_stripping = true is set, the host changes

Hi, when I set no_priority_stripping = true, the host changes from the IP address to the host name when performing a search in Splunk. Example: Host="10.10.10.170" becomes Host="ABC-DEVICE".

Before setting no_priority_stripping = true in inputs.conf, below is the syslog event sent to Splunk:

    2:31:50.000 PM
    <134> 1 2019-11-15T14:31:50-08:00 ABC-DEVICE server - - [meta sequenceId="13" enterpriseId="2634.1.17.16" vendorId="WTI"] CPM: ABC-DEVICE, (AUDIT LOG) DATE-TIME: 11/15/19 14:31:50
    host = ABC-DEVICE   source = udp:514   sourcetype = syslog

After removing no_priority_stripping = true from inputs.conf:

    Nov 15 14:07:57 192.168.100.170 1 2019-11-15T14:07:57-08:00 ABC-DEVICE server - - [meta sequenceId="8" enterpriseId="2634.1.17.16" vendorId="WTI"] CPM: ANTHONY-TEST, (AUDIT LOG) DATE-TIME: 11/15/19 14:07:57,
    host = 10.10.10.170   source = udp:514   sourcetype = syslog

Does anyone have any idea why Splunk strips the IP address and replaces it with the host name instead?
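
For reference, a sketch that is not taken from the post: no_priority_stripping is a per-stanza setting on the UDP input, and its companion no_appending_timestamp controls whether Splunk prepends a timestamp and host header to each received packet; the port and sourcetype below simply mirror the question:

    # inputs.conf: UDP syslog listener
    [udp://514]
    sourcetype = syslog
    # keep the syslog <PRI> field instead of stripping it
    no_priority_stripping = true
    # do not prepend a "date host" header to each received packet
    no_appending_timestamp = true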

Why am I losing events when neither the cold path usage nor maxage is being hit?

I have an index I'm using to backfill a bunch of data, and as I track the event count by source, I'm seeing Splunk throw away events literally by the millions, seemingly at random (I'll note the event count for one of my sources, then check again 5 minutes later and the number is more than a million lower than it was). None of the limits set should be getting hit. The only thing being hit is the warm path, but that should just roll into the much larger allotment I gave for cold, and cold isn't even near being full, yet I'm getting events thrown out left and right. ![alt text][1] [1]: /storage/temp/276096-screen-shot-11-15-19-at-0424-pm.png What can I look into here? I'm having trouble trusting the integrity of my data when I see the event counts moving backwards even though the cold path isn't close to being filled up, and the data age isn't close to being hit either.
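
For reference, and hedged because none of these values come from the post: these are the indexes.conf settings that normally decide when buckets roll and when data is frozen (removed by default); the stanza name and numbers are placeholders:

    [my_backfill_index]
    # overall size cap for the index; the oldest buckets freeze once it is exceeded
    maxTotalDataSizeMB = 500000
    # age-based retention; events older than this are frozen (default is about 6 years)
    frozenTimePeriodInSecs = 188697600
    # size caps for the hot/warm and cold paths individually
    homePath.maxDataSizeMB = 100000
    coldPath.maxDataSizeMB = 400000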

How to define which heavy forwarder instances to deploy apps to?

Hello - I have 3 HFs, about 150 UFs, 1 deployment server, and other instances. In a new configuration, how can I use the DS to deploy apps only to these 3 HFs and the UFs, and not to the other instances? Thank you,
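
A hedged sketch of the usual mechanism, serverclass.conf on the deployment server, with host patterns and app names as placeholders rather than anything taken from the question:

    # serverclass.conf on the deployment server
    [serverClass:heavy_forwarders]
    # match only the 3 HFs by hostname pattern (placeholders)
    whitelist.0 = hf01*
    whitelist.1 = hf02*
    whitelist.2 = hf03*

    [serverClass:heavy_forwarders:app:my_hf_app]
    restartSplunkd = true

    [serverClass:universal_forwarders]
    whitelist.0 = uf-*

    [serverClass:universal_forwarders:app:my_uf_outputs]
    restartSplunkd = true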

How to read different time slots from lookup table

Hi splunkers, I have a situation where I need to read different operational hours, with the same bin size, for the last 3 days.

Scenario (hour bins 9-10, 10-11, 11-12, 12-13, 13-14, 14-15, 15-16, ... 23-24):

    Today          1 2 3 4 5
    1 day before   1 2 3 4 5
    2 days before  1 2 3 4 5
    3 days before  1 2 3 4 5

As per today's train schedule, it starts at 10 and operates till 12, then takes a rest and starts again at 1 pm.

Example: if today the train is in its 1st hour of operation, I need to count the alarms opened in the 1st hour of operation for each of the last 3 days and divide by 3 to compute the average. If today's count of alarms opened is greater than the average, it should raise an alert. The same applies to every hour of operation.

Question: my problem is how to take the same time slot for the previous 3 days. If the train is now in its 2nd hour of operation, how can I get the 2nd hour of operation for each of the last 3 days?

Note: the bin size is the same every day, running for 5 hours. TIA
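
A hedged sketch of one way to bucket alarms by day and clock hour so the same hour can be compared across days; the index, field names, and the mapping from clock hour to "hour of operation" are assumptions, since the post does not show them:

    index=alarms earliest=-3d@d latest=now
    | eval day = strftime(_time, "%Y-%m-%d")
    | eval hour = strftime(_time, "%H")
    | stats count AS alarms_opened BY day hour
    | eventstats avg(eval(if(day!=strftime(now(), "%Y-%m-%d"), alarms_opened, null()))) AS avg_previous_days BY hour
    | where day == strftime(now(), "%Y-%m-%d") AND alarms_opened > avg_previous_days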

Overwriting _time with a field shows all entries in timechart, ignoring the selected timeframe

Hi, I need to perform a timechart count on a particular field. The dates in the field aren't related to the timestamp the log was received and can go back a few years, so I overwrite _time with the field converted to epoch. This works well and the figures in the graph are accurate. However, if I select a timeframe of 'last 7 days' or 'last 30 days', for example, the timechart still shows all entries, including those going back to 2017.

    index=example sourcetype=examplesource | eval epoch_logged_time=strptime('Date Logged',"%d/%m/%Y") | eval _time=epoch_logged_time | timechart count span=7d

What's going on here? TIA
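
As a hedged note, not from the post: the time range picker filters on the indexed _time before the eval runs, so one workaround is to filter explicitly after overwriting _time; relative_time() is a standard eval function and "-7d@d" is just an example offset:

    index=example sourcetype=examplesource
    | eval _time = strptime('Date Logged', "%d/%m/%Y")
    | where _time >= relative_time(now(), "-7d@d")
    | timechart count span=7d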

Where does Splunk store the output of shell scripts?

Hi, on our Splunk forwarders we have a few shell scripts in SPLUNK_HOME/etc/apps/my_app/bin/ that are being run. I'm wondering where the output of these shell scripts is stored. The scripts don't name an output file themselves, so I tried looking in SPLUNK_HOME/var/log/splunk, but no luck. Is the output stored in the "*.dat" files, which we can't read? Thanks
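
For reference, and hedged since the post does not show the input configuration: when a script runs as a Splunk scripted input, its stdout is normally indexed as events rather than written to a local file; a minimal sketch of such an input, with the interval, index, and sourcetype as placeholders:

    # inputs.conf in the app
    [script://./bin/myscript.sh]
    interval = 300
    index = os
    sourcetype = my_script_output
    disabled = false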