Questions in topic: "splunk-enterprise"

Microsoft Office 365 Reporting Add-on for Splunk: data got indexed only once, and I can't see the latest message traces. Not sure what's wrong with the add-on

After installing the add-on, I could see data only once. I am not able to see the latest logs, and I don't see any errors, so I'm not sure what's wrong with the add-on. Can anyone help here?
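As a generic, hedged first step (not specific to this add-on), Splunk's own internal logs can show whether the input is erroring or simply not collecting; the search below assumes nothing beyond the standard _internal index:

    index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
    | stats count by component, log_level
    | sort - count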

Indexes not visible on search head in Splunk in a non-clustered environment

Hi Team, I have one search head (also the deployment server), two indexers, and two forwarders in the network. I created a "web" index on both indexers. When I try to add data from the search head into the web index, the web index does not show on the forwarder, and it is not present on the search head, even though I created it on both indexers. Please let me know why my web index on the indexers is not visible from the search head. Thanks, smdasim
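In a non-clustered setup the search head only sees indexes that exist locally or on indexers it has been added to as search peers. A minimal, hedged sketch of the distributed-search configuration on the search head, with placeholder hostnames (adding the peers through the UI or CLI instead also handles the authentication key exchange, which editing the file alone does not):

    # distsearch.conf on the search head (hostnames are placeholders)
    [distributedSearch]
    servers = https://idx1.example.com:8089,https://idx2.example.com:8089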

Run a search based on alert result

Hi, I would like to run a search (to collect data into a summary index) triggered by an alert that is checking for new data. For example, when the start of a new dataset comes in, I would like to enrich, manipulate, and collect the previous (last complete) dataset into a summary index. If the collect search only runs on a time schedule, I may get inconsistencies within the collected dataset because the schedule can cut a dataset in the middle. I'm looking for something like a custom alert action that triggers another saved search. Thanks in advance.
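Not an answer to the triggering question itself, but a hedged sketch of what the collecting search might look like once it is kicked off (index, field names, and the time window are placeholders): gather the last complete dataset, enrich or aggregate it, and write it to the summary index with `collect`.

    index=my_source earliest=-60m@m latest=@m
    | stats count avg(duration) AS avg_duration BY dataset_id
    | collect index=my_summary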

One shot search with Python SDK

I am reading the documentation to create a simple search script:

    #!/usr/bin/env python
    import os
    import sys
    import json
    import argparse
    import datetime
    from random import choice

    try:
        import splunklib.client as client
        import splunklib.results as results
    except:
        print('')
        print('Please install the Splunk Python SDK via # pip install splunk-sdk [http://dev.splunk.com/python]')
        print('')
        quit(1)

    #################################################
    ### Deal with arguments vars and file handles ###
    #################################################
    token = ''.join([choice('abcdefghijklmnopqrstuvwxyz0123456789') for i in range(64)])

    parser = argparse.ArgumentParser(description='Python Script to test Splunk functionality')
    parser.add_argument('-H', help='Hostname to target', required=True)
    parser.add_argument('-u', help='Splunk Username', required=True)
    parser.add_argument('-p', help='Splunk Password', required=True)
    parser.add_argument('-P', help='API Port, default = 8089', default="8089")
    args = parser.parse_args()

    ## Connect to Splunk
    try:
        sdk = client.connect(host=args.H, port=args.P, username=args.u, password=args.p)
    except:
        print "Error connecting..."

    kwargs_oneshot = {"earliest_time": "2018-08-132T12:00:00.000-07:00",
                      "latest_time": "2018-09-13T12:00:00.000-07:00"}
    searchquery_oneshot = "search * | head 10"

    oneshotsearch_results = sdk.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

    # Get the results and display them using the ResultsReader
    reader = results.ResultsReader(oneshotsearch_results)
    for item in reader:
        print(item)

This produces no results. What am I missing? This does not seem to be a fully functioning search. I should say that the only index that has events is _internal.
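Two things in the posted script may be worth checking (hedged observations, not a confirmed diagnosis): `earliest_time` is set to `2018-08-132T12:00:00.000-07:00`, which is not a valid date (day 132), and the query `search *` only covers the indexes searched by default for the user's role, which normally excludes `_internal`. Since `_internal` is the only index with events here, a query that names it explicitly should return rows, for example:

    search index=_internal | head 10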

New Index not searchable

Hi everyone, I'm new to Splunk and this is the first index I created, so hopefully this question isn't too noobish ;) This is my inputs.conf:

    [monitor:///var/log/app/retry.log]
    disabled=false
    sourcetype=log4j
    index=retry
    multiline_event_extra_waittime = true

indexes.conf:

    [retry]
    homePath=$SPLUNK_DB/retry/db
    coldPath=$SPLUNK_DB/retry/colddb
    thawedPath=$SPLUNK_DB/retry/thaweddb
    repFactor=autor
    maxDataSize=auto

Cluster bundle status:

    master
      cluster_status=None
      active_bundle
        checksum=2924BEA962D9C72179B8CF4D03846EAB
        timestamp=1533281547 (in localtime=Fri Aug 3 09:32:27 2018)
      latest_bundle
        checksum=2924BEA962D9C72179B8CF4D03846EAB
        timestamp=1533281547 (in localtime=Fri Aug 3 09:32:27 2018)
      last_validated_bundle
        checksum=2924BEA962D9C72179B8CF4D03846EAB
        last_validation_succeeded=1
        timestamp=1533281547 (in localtime=Fri Aug 3 09:32:27 2018)
      last_check_restart_bundle
        last_check_restart_result=restart not required
        checksum=
        timestamp=0 (in localtime=Thu Jan 1 01:00:00 1970)
    splunkidx2  3F5EEC11-8718-4C0D-AEF7-0F54DABB1D01  default
        active_bundle=2924BEA962D9C72179B8CF4D03846EAB
        latest_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_validated_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_bundle_validation_status=success
        restart_required_apply_bundle=0
        status=Up
    splunkidx3  79FD9BAC-9F72-46CB-A043-EDCA31DE8EB7  default
        active_bundle=2924BEA962D9C72179B8CF4D03846EAB
        latest_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_validated_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_bundle_validation_status=success
        restart_required_apply_bundle=0
        status=Up
    splunkidx1  D2077BB4-988A-46F2-BB00-E261EBF94BC9  default
        active_bundle=2924BEA962D9C72179B8CF4D03846EAB
        latest_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_validated_bundle=2924BEA962D9C72179B8CF4D03846EAB
        last_bundle_validation_status=success
        restart_required_apply_bundle=0
        status=Up

I can see the new "retry" index in Splunk and can add it to roles, but I can't search it and find no events when searching for "index=retry". I can, however, see the rawdata/db on the indexers, so the data is there. Any idea what I could have missed? Thanks in advance!
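A couple of quick, hedged checks that don't assume anything about this particular setup: confirm from the search head that events actually exist in the index and on which peers, and confirm the searching role includes "retry" in its searched indexes (or always name `index=retry` explicitly). The `tstats` query below is a generic sketch:

    | tstats count WHERE index=retry BY splunk_server, sourcetype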

How to keep one column of a table fixed when using the horizontal scroll bar (moving left to right and right to left)?

I have a table with 34 columns, so I need to keep the first column fixed (frozen) while scrolling the bar left to right or vice versa.

Field extraction from XML file

I have one XML file. (The sample was posted without escaping, so the tags were stripped; only fragments such as "-xxx3,25.10742916222947", "Intexxxon", and "23333" survive.) I want the date and time and the field names to be captured from the XML, and the corresponding value shown against each field name.
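Without the original tags it is hard to be specific, but for well-formed XML events two generic, hedged options are automatic XML key-value extraction in props.conf or `spath` at search time (the sourcetype and index names below are placeholders):

    # props.conf on the search head
    [my_xml_sourcetype]
    KV_MODE = xml

    # or, ad hoc at search time
    index=my_index sourcetype=my_xml_sourcetype | spath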

Extract fields at search time through props.conf

I have W3C-format logs and I want to create the fields through props.conf, using EXTRACT-xxx = for search-time field extraction. Below is my sample event:

    2014-01-02 22:12:37 5209 1x3.xxx2.xx.xxx 200 TCP_MISS 209383 546 GET http daxxx.clxxxnt.net 80 /photos/show_resized/137406/12/4/41.jpg - - - - daxxx.clxxxnt.net image/jpeg;%20charset=utf-8 http://daxxx.clxxxnt.net?&utm_source=email&utm_medium=sf&utm_term=Second%20Email%20SF%201/2&utm_content=loot_position1_michael_macdonald_18&utm_campaign=second_email_sf_01_02_14# "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)" OBSERVED "Content Servers" - 1x3.xx2.xx.xxx 5x.xxx.1xxx.2xxx 52 006

and the field list:

    #Fields: date time time-taken c-ip sc-status s-action sc-bytes cs-bytes cs-method cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-username cs-auth-group s-hierarchy s-supplier-name rs(Content-Type) cs(Referer) cs(User-Agent) sc-filter-result cs-categories x-virus-id s-ip r-supplier-ip c-port
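A hedged sketch of one alternative to a single EXTRACT regex: a delimiter-based REPORT extraction. Stanza and sourcetype names are placeholders; fields containing parentheses are renamed because Splunk field names cannot include them, and the quoted User-Agent value contains spaces, so a pure delimiter split may misalign the trailing fields and need a regex-based EXTRACT instead.

    # transforms.conf
    [w3c_proxy_fields]
    DELIMS = " "
    FIELDS = "date", "time", "time_taken", "c_ip", "sc_status", "s_action", "sc_bytes", "cs_bytes", "cs_method", "cs_uri_scheme", "cs_host", "cs_uri_port", "cs_uri_path", "cs_uri_query", "cs_username", "cs_auth_group", "s_hierarchy", "s_supplier_name", "rs_content_type", "cs_referer", "cs_user_agent", "sc_filter_result", "cs_categories", "x_virus_id", "s_ip", "r_supplier_ip", "c_port"

    # props.conf
    [my_w3c_sourcetype]
    REPORT-w3c = w3c_proxy_fields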

Splunk Add-on for Microsoft Windows, ingesting zip files not working

Hello all, I'm using the "Splunk Add-on for Microsoft Windows" to monitor a blob storage (which is a great feature). It works fine for text files, but it doesn't handle zip files well. If I monitor zip files on the file system I have no problems; if I use the same sourcetype to monitor the same zip files stored in a blob, Splunk reads them in as if they were text (,�{7�~4)Tk�tȺ��Ќ����8>�D3��QԈ0T�� -- the same lines you would get if you opened the zip file in a text editor). Does anyone have suggestions on how to solve or work around this? Thanks

Question regarding summary index with saved search

Hello, I have created a saved search to populate a summary index, and I run it every 5 minutes. What I want is: the first time the saved search runs, it should run with a time range of all time; from the second run onwards, it should run with a time range of "last 5 minutes" (i.e. latest=now and earliest=the last time the saved search ran successfully), so that I avoid duplicate data in the summary index. How can I achieve this? Thanks in advance.
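For the steady-state part, a common pattern (sketched below with placeholder names and a placeholder search) is to schedule the search on snapped 5-minute windows so consecutive runs neither overlap nor leave gaps; the one-time historical backfill can then be run manually over all time before the schedule is enabled.

    # savedsearches.conf sketch (names, schedule, and search are placeholders)
    [populate_my_summary]
    cron_schedule = */5 * * * *
    dispatch.earliest_time = -5m@m
    dispatch.latest_time = @m
    search = index=my_source | stats count BY host | collect index=my_summary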

Workflow action to update a lookup table

I would like to be able to use a POST workflow action to update a lookup table. Any direction on how to do this is appreciated.
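One hedged workaround, if a POST action proves awkward: point the workflow action at a search that appends the clicked event's values to the lookup. The lookup and field names below are placeholders; $host$ and $user$ are the field tokens a workflow action substitutes from the event.

    | inputlookup my_lookup.csv
    | append [| makeresults | eval host="$host$", user="$user$" | fields host user]
    | dedup host user
    | outputlookup my_lookup.csv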

How to convert a single event into an outputlookup CSV file?

We are receiving a CSV file as an event (the whole CSV file as a single event). This is configured correctly, e.g.:

    [custom:csv_event]
    BREAK_ONLY_BEFORE = NEVER_OCCUR_TAG
    MAX_EVENTS = 100000
    DATETIME_CONFIG = NONE
    CHECK_METHOD = modtime

Example message:

    hostname,user
    host1,user1
    host2,user2
    host3,user3

If I do a quick extraction, the event comes through correctly but as a single line (\n is preserved as far as I can see):

    index=* sourcetype=custom:csv_event
    | stats latest(_raw) as csv_raw by sourcetype
    | rex field=csv_raw "(?<header>.+)(\r\n|\r|\n)(?<rest_of_event>[\S\s]+)"

What's the best method to convert the above event into a CSV file, so we can do an outputlookup into a CSV file? I know an ugly method, but was wondering if you have better ideas; the inelegant solution is:

    index=* sourcetype=custom:csv_event
    | stats latest(_raw) as csv_raw by sourcetype
    | rex field=csv_raw "(?<header>.+)(\r\n|\r|\n)(?<rest_of_event>[\S\s]+)"
    | eval header=rest_of_event
    | rename header as "hostname,user"
    | fields "hostname,user"
    | outputlookup hostname_user.csv
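A hedged alternative that avoids the renamed-header trick: split the raw event into rows, extract the two columns from each row, and write a normal two-column lookup (field names follow the example above).

    index=* sourcetype=custom:csv_event
    | stats latest(_raw) as csv_raw by sourcetype
    | makemv tokenizer="([^\r\n]+)" csv_raw
    | mvexpand csv_raw
    | search csv_raw!="hostname,user"
    | rex field=csv_raw "(?<hostname>[^,]+),(?<user>.+)"
    | table hostname user
    | outputlookup hostname_user.csv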

Forwarder on AMI in AWS for Auto Scaling groups

I'm setting up an Auto Scaling group in Amazon using an AMI. I want my logs, specifically Apache logs, to be pushed into my Splunk server, but I want to make sure I do this properly. The current setup spins up the AMI and runs a user-data script to prep the system for either test or prod. We can have anywhere from 4 to 18 servers depending on load. Would it be best to install the forwarder on the AMI, or is there another way to do this? Thanks in advance for your help, Josiah
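If the forwarder is baked into the AMI, one common, hedged pattern is to also bake in a deploymentclient.conf pointing at a deployment server, so each new instance phones home and pulls its inputs/outputs apps automatically (the hostname below is a placeholder):

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = deploy.example.com:8089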

Why are some of the Linux timestamps not parsing?

I recently added several servers to our Splunk system, and they are all reporting as `sourcetype=linux_audit` (which I do not believe is overridden from something else). Looking at the logs, I am pretty sure they are from Red Hat (or similar), as a log looks like [this][1]:

    type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=500 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=pts0 ses=1 comm="cat" exe="/bin/cat" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sshd_config"

But when I go through my logs, I see that every log from this sourcetype is using a default timestamp generated by Splunk, and I have about a million logs indexing in a single second at the beginning of a minute. Looking into it further, I see that Splunk is not even trying to parse the timestamp `msg=audit(1364481363.243:24287)` (there are no "failed to parse timestamp" errors). The rest of the message seems to be parsing correctly -- all of the "key=value" pairs are showing up in verbose mode -- but the msg=audit value is showing up in "msg" and not as a timestamp.

Being a RHEL log, it seems like something Splunk would automatically identify, but I don't even see a "linux_audit" sourcetype in the pretrained sourcetypes: http://docs.splunk.com/Documentation/Splunk/7.1.2/Data/Listofpretrainedsourcetypes

What can I do from here to nudge these logs back into automatically parsing? Is this a situation where I need to override the sourcetype with some other syslog type? (Again, I see no "msg=audit(unixtimestamp)" format in the pretrained sources.)

[1]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sec-understanding_audit_log_files

***Further investigation shows:*** A few logs are coming in just fine -- same host, same source, same sourcetype -- but then, bam: a huge influx of non-parsed timestamps. I see nothing different about them.
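A hedged props.conf sketch for explicitly telling Splunk where the epoch timestamp lives in auditd records (applied wherever this sourcetype is parsed, i.e. indexers or heavy forwarders). These values are a common pattern for audit logs, not a confirmed fix for this case:

    [linux_audit]
    TIME_PREFIX = msg=audit\(
    TIME_FORMAT = %s.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 20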

How does _TCP_ROUTING work in inputs.conf?

We will soon be required to send our Windows Event Security logs to a separate Splunk server owned by our organization's Security group. To test this, I installed a test Splunk server ("testsplunk" in the files below). I first tested that I could send all events to both Splunk indexers. Here are outputs.conf and inputs.conf from the Splunk Universal Forwarder client I used in this first test.

$SPLUNK_HOME/etc/system/local/outputs.conf:

    [tcpout]
    defaultGroup = mysplunk, testsplunk

    [tcpout:mysplunk]
    server = mysplunk.com:9997

    [tcpout:testsplunk]
    server = testsplunk.com:9997

$SPLUNK_HOME/etc/apps/WinEvt_Logs/local/inputs.conf:

    [WinEventLog://Security]
    disabled = 0
    index = winevent

In this case both servers received all events as expected (including events from three other apps not shown here). In the next test I wanted mysplunk to continue receiving all events and testsplunk to get only [WinEventLog://Security]. To accomplish this I took testsplunk out of the defaultGroup and modified inputs.conf as shown below.

$SPLUNK_HOME/etc/system/local/outputs.conf:

    [tcpout]
    defaultGroup = mysplunk

    [tcpout:mysplunk]
    server = mysplunk.com:9997

    [tcpout:testsplunk]
    server = testsplunk.com:9997

$SPLUNK_HOME/etc/apps/WinEvt_Logs/local/inputs.conf:

    [WinEventLog://Security]
    _TCP_ROUTING = mysplunk, testsplunk
    disabled = 0
    index = winevent

After restarting the SplunkForwarder, mysplunk did keep receiving all events, but testsplunk now got nothing. What am I missing?

What stanza would I need to only monitor the Notification Packages string within the Lsa hive?

Hey guys, I have another request. I can monitor hives without issue: with the first stanza below, if I add anything into that hive it gets picked up. However, when it comes to monitoring a specific value (a String or DWORD), I'm having trouble; see the second example below.

    [WinRegMon://Registry1]
    proc = .*
    hive = \\REGISTRY\\USER\\.*\\SOFTWARE\\MICROSOFT\\WINDOWS\\CURRENTVERSION\\RUN\\.*
    type = create|delete|set|rename
    baseline = 1
    index = main

    [WinRegMon://Registry11]
    proc = .*
    hive = \\REGISTRY\\MACHINE\\SYSTEM\\CURRENTCONTROLSET\\CONTROL\\LSA\\Notification Packages.*
    type = create|delete|set|rename
    baseline = 1
    index = main

I also tried:

    \\NotificationPackages.*
    \\Notification Packages\\.*

If I remove the "Notification Packages" part, then the stanza does sort of work, in that the baseline is taken of all items within the Lsa hive, but when I add the Notification Packages item I get nothing at all. I have read that I can monitor via key_path and also process_image, however I don't want to narrow the changes down to specific processes, and again adding a .* doesn't seem to bring back any values. Can anyone advise on the stanza I would need to monitor only the Notification Packages string within the Lsa hive?

How to set up volumes for a Splunk deployment?

OK, basically I think I'm confusing myself. I have a Helm deployment on Kubernetes and originally had volumes for etc and var. I want to have separate volumes for hot/warm, cold, frozen, and thawed. I created some PVCs/volumes for each, e.g. mapping to var/cold, var/hot, etc., but is this correct? I know that in indexes.conf you set paths per index, but can this be .... var/hotwarm/index1/? Is it OK to have 3-4 volumes, one per temperature tier, and put the indexes on each, or do I need a volume per index? I'm just getting confused; any help appreciated. I'm also guesstimating the sizes of the volumes -- currently we don't use Splunk much, but I suspect it's going to grow rapidly! E.g. my Helm script includes this:

    volumeMounts:
      - name: splunk-etc
        mountPath: /opt/splunk/etc
      - name: splunk-var
        mountPath: /opt/splunk/var
      - name: splunk-var-hotwarm
        mountPath: /opt/splunk/var/log/splunk/hotwarm
      - name: splunk-var-cold
        mountPath: /opt/splunk/var/log/splunk/cold
      - name: splunk-var-frozen
        mountPath: /opt/splunk/var/log/splunk/frozen
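For reference, indexes.conf does support exactly this shape: one volume per storage tier, with each index's homePath/coldPath pointing into a per-index subdirectory of the volume. A hedged sketch with placeholder paths and sizes (note that thawedPath and coldToFrozenDir cannot reference volume: paths and must be plain directories):

    [volume:hotwarm]
    path = /opt/splunk/var/hotwarm
    maxVolumeDataSizeMB = 500000

    [volume:cold]
    path = /opt/splunk/var/cold
    maxVolumeDataSizeMB = 1000000

    [index1]
    homePath = volume:hotwarm/index1/db
    coldPath = volume:cold/index1/colddb
    thawedPath = $SPLUNK_DB/index1/thaweddb
    coldToFrozenDir = /opt/splunk/var/frozen/index1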

Splunk stats count for several searches

Hello, I have ~15 nearly identical queries with one small difference:

    index=SOME_INDEX sourcetype=SOME_SOURCE source=...
    | eval API=case(searchmatch("xxx"), "yyy", ...)
    | search API=WebResponse
    | eval Status=case(...)
    | stats avg(dur) AS Avg by status_code
    | stats count by status_code
    ...

    index=SOME_INDEX sourcetype=SOME_SOURCE source=...
    | eval API=case(searchmatch("xxx"), "yyy", ...)
    | search API=AppResponse
    | eval Status=case(...)
    | stats avg(dur) AS Avg by status_code
    | stats count by status_code

So all my queries differ in only one place -- `| search API=XXX` -- and return results like:

    | status_code | count |
    | 201         | 10    |
    | 404         | 28    |

etc. How can I combine all of the above queries into one and get a result like this (or something similar)?

    | status_code | count(AppResponse) | count(WebResponse) | count(Other) |
    | 201         | 10                 | 0                  | 0            |
    | 404         | 28                 | 3                  | 0            |
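A hedged sketch of one way to collapse these into a single search: keep the shared base search and eval, then pivot the count by API with `chart` (the case() conditions below are placeholders carried over from the question):

    index=SOME_INDEX sourcetype=SOME_SOURCE source=...
    | eval API=case(searchmatch("xxx"), "WebResponse", searchmatch("zzz"), "AppResponse", true(), "Other")
    | chart count over status_code by API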

What ports are used as source ports by the Splunk Universal Forwarder agent?

Let's say we have Splunk Universal Forwarder agents installed on Windows servers. Is it known which ports the Windows servers use to send data FROM (not send TO) the Splunk deployment server? In the following example, source port 61616 is used. Could it be something like 8180?

    TCP    windows_server_source_ip:61616    splunk_deployment_server:8089    ESTABLISHED    3232