Hi Team,
I have one search head (which is also the deployment server), two indexers, and two forwarders in the network. I created the web index on both indexers.
When I try to add data into the web index from the search head, the web index does not show up on the forwarder and is not present on the search head, even though I did create the web index on both indexers.
Please let me know why the web index on the indexers is not visible from the search head.
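For reference, a minimal sketch of the stanzas involved, assuming default paths: in a non-clustered setup the search head only lists indexes defined in its own indexes.conf, so an index created only on the indexers will not show up in the search head's UI, even though events in it are still searchable through distributed search. Also note that data added through the search head's Add Data wizard is indexed locally on the search head unless the search head is configured to forward to the indexers.
# indexes.conf on each indexer - paths shown are the defaults, adjust as needed
[web]
homePath   = $SPLUNK_DB/web/db
coldPath   = $SPLUNK_DB/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
# indexes.conf on the search head - the same stanza name needs to exist locally
# for "web" to appear in the search head's index dropdown and Add Data wizard
[web]
homePath   = $SPLUNK_DB/web/db
coldPath   = $SPLUNK_DB/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb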
thanks
smdasim
↧
Indexes not visible on search head in Splunk in a non-clustered environment
↧
Find events either side of a matched event
I am trying to find the best way to identify the event before and after a matched event, for each SessionID.
Example data:
time | SessionID | UserID | Match | Data
12/08/2018 11:12:27 | 1 | 123 | Y | a
12/08/2018 11:12:28 | 1 | 123 | N | b
12/08/2018 11:12:29 | 2 | 789 | Y | c
12/08/2018 11:12:30 | 1 | 321 | N | d
12/08/2018 11:12:31 | 1 | 321 | Y | e
12/08/2018 11:12:32 | 2 | 987 | N | f
12/08/2018 11:12:33 | 1 | 123 | N | g
12/08/2018 11:12:34 | 1 | 321 | N | h
12/08/2018 11:12:35 | 2 | 987 | N | i
12/08/2018 11:12:36 | 1 | 321 | N | j
12/08/2018 11:12:37 | 1 | 321 | N | k
12/08/2018 11:12:38 | 2 | 987 | Y | l
12/08/2018 11:12:39 | 2 | 789 | N | m
12/08/2018 11:12:40 | 1 | 123 | N | n
12/08/2018 11:12:41 | 1 | 123 | N | o
12/08/2018 11:12:42 | 2 | 789 | N | p
12/08/2018 11:12:43 | 1 | 321 | N | q
12/08/2018 11:12:44 | 1 | 123 | Y | r
The data I am trying to identify should look like this:
time | SessionID | UserID | Match | Data
12/08/2018 11:12:27 | 1 | 123 | Y | a
12/08/2018 11:12:28 | 1 | 123 | N | b
-------------------------------------------------------
12/08/2018 11:12:29 | 2 | 789 | Y | c
12/08/2018 11:12:32 | 2 | 987 | N | f
-------------------------------------------------------
12/08/2018 11:12:30 | 1 | 321 | N | d
12/08/2018 11:12:31 | 1 | 321 | Y | e
12/08/2018 11:12:33 | 1 | 123 | N | g
-------------------------------------------------------
12/08/2018 11:12:35 | 2 | 987 | N | i
12/08/2018 11:12:38 | 2 | 987 | Y | l
12/08/2018 11:12:39 | 2 | 789 | N | m
-------------------------------------------------------
12/08/2018 11:12:43 | 1 | 321 | N | q
12/08/2018 11:12:44 | 1 | 123 | Y | r
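A minimal SPL sketch of one way to get each matched event plus its immediate neighbours per SessionID; the index and sourcetype names are assumptions, and the field names are taken from the sample. streamstats is run once in each direction to look at the previous and the next event within the same SessionID:
index=main sourcetype=session_data
| sort 0 SessionID _time
| streamstats current=f window=1 last(Match) as prev_match by SessionID
| reverse
| streamstats current=f window=1 last(Match) as next_match by SessionID
| reverse
| where Match="Y" OR prev_match="Y" OR next_match="Y"
| table _time SessionID UserID Match Data
The where clause keeps the matched events themselves plus any event whose immediate neighbour within the same SessionID is a match; the visual separators between groups in the example output would still need an extra eval or formatting step.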
↧
Is there any way to monitor CyberArk logs?
Hello! I installed the CyberArk add-on in order to monitor CyberArk.
I already have a syslog server which produces .log files from CyberArk. Is there any way to monitor the data directly from the .log files, or do I absolutely have to do it the way specified in https://docs.splunk.com/Documentation/AddOns/released/CyberArk/Setup (translating the files)?
It would just be so much easier if I could simply translate the log files locally on the syslog server. Thank you!
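If the data is already landing in .log files on the syslog server, a plain file monitor from a forwarder on that box is technically possible. A minimal inputs.conf sketch, where the path, index, and sourcetype are assumptions — check the add-on's props.conf for the sourcetype it actually expects, so its field extractions still apply:
# inputs.conf on the forwarder running on the syslog server
# (path, index and sourcetype below are assumptions - adjust to your setup)
[monitor:///var/log/cyberark/*.log]
index = cyberark
sourcetype = cyberark:epv:cef
disabled = 0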
↧
Configuring reception of logs from a Universal Forwarder
I am currently forwarding firewall logs from a Universal Forwarder to a Splunk instance used for indexing and search.
I want to configure the receiving side (index & search) to ingest the logs with a specified sourcetype and index.
Which file should the settings go in, and how should they be written?
Universal forwarder:172.16.11.11
Splunk indexer:172.16.11.12
Receiving port: 9997
Desired sourcetype name: FW_Traffic
Desired index name: firewall
Any guidance would be appreciated.
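A minimal sketch of the usual approach, assuming the monitored log path below (that path is an assumption): the sourcetype and index are normally assigned in inputs.conf on the Universal Forwarder, and the receiving side only needs the splunktcp input enabled and the firewall index to exist.
# inputs.conf on the Universal Forwarder (172.16.11.11) - monitored path is an assumption
[monitor:///var/log/firewall/fw.log]
sourcetype = FW_Traffic
index = firewall
# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = 172.16.11.12:9997
# inputs.conf on the indexer (172.16.11.12) - enables the receiving port
[splunktcp://9997]
disabled = 0
# indexes.conf on the indexer - the firewall index must exist; this relies on
# default paths, so add homePath/coldPath/thawedPath explicitly if you prefer
[firewall]
Overriding the sourcetype or index on the indexer side instead (keyed on the source or forwarding host with props.conf/transforms.conf) is also possible, but it is more involved than setting them at the input.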
↧
Tokenization features in Splunk?
All,
I have never seen docs or a .conf talk or anything about this, so I guess it doesn't exist, but I thought I would ask anyway, just in case it's a feature I somehow missed.
Basically, we have email addresses and some other PII coming into a small instance of Splunk segmented from the main one. My boss wants the data coming into Splunk tokenized, and detokenized based on Splunk user role.
Anything like that available?
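As far as I know there is no built-in tokenize/detokenize-by-role feature; the closest native options are masking PII at index time and restricting who can see the events via role-based index access. Purely as an illustration of the masking half (the sourcetype name and pattern are assumptions, and this is irreversible masking, not reversible tokenization), a props.conf sketch:
# props.conf - sourcetype name is an assumption; this irreversibly masks
# the local part of email addresses at index time
[my_pii_sourcetype]
SEDCMD-mask_email = s/[A-Za-z0-9._%+-]+@/xxxxx@/g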
↧
TcpOutputProc - Cooked connection to Forwarder IP:9997 timed out
Hi,
We have two indexers, two forwarders, and one search head in our environment.
I am seeing the output below on the search head:
TcpOutputProc - Cooked connection to ip=x.x.x.x:9997 timed out
TcpOutputProc - Cooked connection to ip=x.x.x.x:9997 timed out
Please advise how I can debug at each level to figure out the issue.
Data is not reaching the newly created web index.
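As a starting point — assuming the message comes from outputs.conf on the search head or forwarders pointing at the indexers — check that each indexer is actually listening on 9997 ([splunktcp://9997] enabled, port not blocked), then look at the forwarding and receiving components in _internal. A hedged sketch of the kind of search that helps narrow down which hop is failing:
index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) (component=TcpOutputProc OR component=TcpInputProc)
| stats count latest(_time) as last_seen by host component log_level
| convert ctime(last_seen)
If only the search head and forwarders show TcpOutputProc errors while the indexers show nothing on TcpInputProc, the connection is likely not reaching the indexers at all (firewall, wrong IP/port, or the receiving input disabled).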
Regards,
smdasim
↧
JSON index field extraction fails with large events (> 10k bytes)
I'm using indexed field extraction to ingest JSON data over the HTTP Event Collector.
It works great, except that once the event is > 10k bytes, the fields within the JSON are not indexed automatically. For example, if I submit a 15k event and then search for it via `host`, I find it; however, if I search for it via a field within the JSON, it does not come up.
Is it possible to configure this limit? I haven't seen anything in the documentation yet. I'm still new to this particular functionality.
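One limit worth checking — this is an assumption to verify against the limits.conf spec for your version, not a confirmed fix for HEC indexed extractions — is the automatic key/value extraction cap, which defaults to roughly 10 KB and matches the symptom of field searches failing only on large events:
# limits.conf - maxchars under [kv] caps how much of an event is scanned
# for automatic field extraction; the default is 10240 bytes
[kv]
maxchars = 65536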
Thanks
↧
logs not complete
Hi ,
I am having trouble figuring out why the Splunk logs are incomplete/cut off. In the past few months, logs were coming in consistently complete,
but now they are cut off and show only the header and no other information.
![alt text][1]
[1]: /storage/temp/255682-capture.png
The data comes from a server that monitors the logs.
Can somebody tell me why this happens?
What should I investigate?
Also, what is the solution for this problem?
Thanks in advance.
↧
Can you please help me with building a regular expression for the following situation?
I have the following data: adfasdf1234567890dfa adfasdf17890dfa
I need a regular expression which matches " to ".
I have tried the following regex, but it matches to
<[^>]*>.*?\d{9,}.*?<[^>]*>
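Part of the post appears to have been stripped, so the exact target text is unclear. Assuming the goal is to capture each token that contains nine or more consecutive digits (like adfasdf1234567890dfa but not adfasdf17890dfa), a hedged rex sketch, where the field name long_digit_token is made up for the example:
your_base_search
| rex field=_raw max_match=0 "(?<long_digit_token>[A-Za-z]*\d{9,}[A-Za-z]*)"
| table long_digit_token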
↧
Is there a way to add more than one time filter to splunk reports?
Hello,
Can we add more than one time filter to Splunk reports?
I am trying to do this for pivot reports.
Thanks in advance.
↧
Does Splunk offer a Universal Forwarder compatible with the HP NonStop OSS environment?
Hi,
We have a couple of servers in an HP NonStop OSS environment, which is not 100% Unix. Instead, OSS is “Unix-like”, where most Unix commands will work.
I have a requirement to forward the HP NonStop OSS application logs to Splunk.
I have gone through the documentation but couldn't find any details. Can this be achieved?
Thanks,
Ramu Chittiprolu
↧
XML file is not read completely
Hi,
We have set up monitoring on the directory where XML files are placed. We would like to have one event per XML file, but it is getting split into multiple events. Also, only a few lines of each XML file are getting into Splunk.
Can anyone suggest how we can get the complete file into Splunk?
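A minimal props.conf sketch for the sourcetype of those XML files (the stanza name and attributes below are assumptions to test). By default Splunk stops merging at 256 lines (MAX_EVENTS) and truncates events at 10,000 bytes (TRUNCATE), which would explain both the splitting and the missing lines:
# props.conf - applied on the first full Splunk instance that parses the data
# (indexer or heavy forwarder); stanza name is an assumption
[custom:xml_file]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ZZZZ_pattern_that_never_matches_ZZZZ
MAX_EVENTS = 100000
TRUNCATE = 0
DATETIME_CONFIG = CURRENT
DATETIME_CONFIG = CURRENT skips timestamp extraction from the file content; adjust it if the XML contains a usable timestamp you want Splunk to parse.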
Thanks,
Prashant
↧
Set up Splunk alerting
What is the procedure to set up alerting through Splunk? I would like to track when users are added to or removed from our security group.
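Alerting isn't a single command: you build a search and save it as an alert (Save As > Alert in Splunk Web), choosing the schedule and trigger condition there. Assuming Windows Security logs are already being collected into an index such as wineventlog (the index name and the Account_Name/Group_Name field names are assumptions that depend on your Windows add-on), a sketch of the kind of search to save, using the standard group-membership event codes (4728/4729 global, 4732/4733 local, 4756/4757 universal groups):
index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4728 OR EventCode=4729 OR EventCode=4732 OR EventCode=4733 OR EventCode=4756 OR EventCode=4757)
| table _time host EventCode Account_Name Group_Name
| sort - _time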
↧
splunkd keeps crashing with uberAgent app
Splunk version 7.1.2
uberAgent version: 5.0.1
We have Splunk Search Head + Splunk Indexer + Splunk Heavy Forwarder all running on Windows 2012R2.
We also have the uberAgent app installed on the Search Head and the uberAgent_Indexer app installed on the Indexer. It looks like uberAgent is frequently crashing the Splunk service on the Indexer. The issue seems to be related to uberAgent, because after disabling the app the service isn't crashing anymore.
However, we would assume that even if the uberAgent app is buggy, it should not crash Splunk completely, because this completely stops anyone from using the Search Head to search anything (even indexes not related to uberAgent).
Something is very odd here. It looks like the Splunk service on the Indexer sometimes recovers automatically, because uberAgent crashes the service e.g. 10 times a day without our intervention, so something must be restarting the Splunk service there. Unfortunately, sometimes the service is not restarted, and hence any searches from the Search Head stop working. Then we have to restart the Splunk service on the Indexer and re-add the Indexer to the Distributed Search servers to make searches work again (otherwise the Indexer's status is shown as "Sick").
As a side effect of the frequent crashes, dump files are created alongside the crash log files. Each dump file takes about 2 GB of disk space, and since these are not maintained and cleared up automatically, they filled up the disk, causing Splunk to stop due to "not enough disk space". It was the disk space issue that led us to investigate what was going on and find the root cause: uberAgent was crashing Splunk, and the dump files had generated 100 GB+ of data.
10/08/2018 04:40 4,449 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-04-40-23.log
10/08/2018 05:11 8,507 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-05-11-39.log
10/08/2018 05:40 8,615 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-05-40-24.log
10/08/2018 05:50 8,508 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-05-50-30.log
10/08/2018 05:56 4,078 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-05-56-30.log
10/08/2018 06:50 4,546 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-06-50-25.log
10/08/2018 07:00 2,435 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-06-50-28.log
10/08/2018 06:50 8,611 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-06-50-34.log
10/08/2018 07:06 4,359 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-07-06-30.log
10/08/2018 07:26 6,603 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-07-25-31.log
10/08/2018 10:10 5,211 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-10-10-37.log
10/08/2018 10:35 2,706 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-10-35-17.log
10/08/2018 12:10 8,595 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-12-10-31.log
10/08/2018 12:40 4,372 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-12-40-20.log
10/08/2018 13:35 4,450 E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-13-35-32.log
...
Sample crash log:
[build 8f0ead9ec3db] 2018-08-10 04:40:23
Access violation, cannot write at address [0x0000000000000000]
Exception address: [0x00007FF766E0FA53]
Crashing thread: rjreaderthread
MxCsr: [0x0000000000001F80]
SegDs: [0x000000000000002B]
SegEs: [0x000000000000002B]
SegFs: [0x0000000000000053]
SegGs: [0x000000000000002B]
SegSs: [0x000000000000002B]
SegCs: [0x0000000000000033]
EFlags: [0x0000000000010202]
Rsp: [0x0000000E249FB420]
Rip: [0x00007FF766E0FA53] ?
Dr0: [0x0000000000000000]
Dr1: [0x0000000000000000]
Dr2: [0x0000000000000000]
Dr3: [0x0000000000000000]
Dr6: [0x0000000000000000]
Dr7: [0x0000000000000000]
Rax: [0x0000000000000000]
Rcx: [0x0000000E5BDC4AC0]
Rdx: [0x0000000E11BA0AC0]
Rbx: [0x0000000E5BDC4A50]
Rbp: [0x0000000E4FA4EB80]
Rsi: [0x0000000000000000]
Rdi: [0x0000000E249FB558]
R8: [0x00007FFBD216F610]
R9: [0x00007FFBD216F618]
R10: [0x5000BB77A2A6EB15]
R11: [0x0000BB705D19EB74]
R12: [0x0000000E4D015A38]
R13: [0x0000000E506C22C8]
R14: [0x0000000E11BA0B00]
R15: [0x0000000E4F65C228]
DebugControl: [0x0000000E591E4E74]
LastBranchToRip: [0x0000000000000000]
LastBranchFromRip: [0x0000000000000000]
LastExceptionToRip: [0x0000000000000000]
LastExceptionFromRip: [0x0000000000000000]
OS: Windows
Arch: x86-64
Backtrace:
[0x00007FF766E0FA53] ?
Args: [0x0000000E4F65C1F0] [0x0000000E00000002] [0x0000000000000063]
[0x00007FF766CEDA7A] ?
Args: [0x0000000E249FB558] [0x00007FFBD20D419B] [0x0000000E4D809480]
[0x00007FF766ABEA09] ?
Args: [0x0000000E001FBDA0] [0x0000000E00000006] [0x0000000000000063]
[0x00007FF76666C8FA] ?
Args: [0x0000000E4FA4EB80] [0x0000000E4FA4EB80] [0x00000000FFFFFFFF]
[0x00007FF766668ED3] ?
Args: [0x0000000E5AA94830] [0x0000000E5AA94830] [0x0000000E5AA7B940]
[0x00007FF766D4E922] ?
Args: [0x0000000000000000] [0x00007FFBF21416A0] [0x00007FFBF21416A0]
[0x00007FFBD212BE1D] crt_at_quick_exit + 125/784
Args: [0x00007FFBF21416A0] [0x0000000E5AA7B940] [0x0000000000000000]
[0x00007FFBF21416AD] BaseThreadInitThunk + 13/48
Args: [0x0000000000000000] [0x0000000000000000] [0x0000000000000000]
[0x00007FFBF2AC54F4] RtlUserThreadStart + 52/1008
Args: [0x0000000000000000] [0x0000000000000000] [0x0000000000000000]
Crash dump written to: E:\Programs\Splunk\var\log\splunk\E__Programs_Splunk_bin_splunkd_exe_crash-2018-08-10-04-40-23.dmp
Splunk ran as local administrator
HXP33715 /Windows Server 2012 R2
GetLastError(): 8
Threads running: 15
Executable module base: 0x00007FF7662F0000
Runtime: 65.111172s
argv: [splunkd search --id=remote_hxp33714_scheduler__nobody__uberAgent__RMD5e28e2a5bd72887c9_at_1533872164_93340 --maxbuckets=0 --ttl=60 --maxout=0 --maxtime=0 --lookups=1 --streaming --sidtype=normal --outCsv=true --user=splunk-system-user --pro --roles=admin:db_connect_user:dbx_user:itoa_admin:itoa_analyst:itoa_user:power:splunk-system-role:user]
Thread: "rjreaderthread", did_join=1, ready_to_run=Y, main_thread=N
First 4 bytes of Thread token @0xe5aa94844:
00000000 8c 0d 00 00 |....|
00000004
x86 CPUID registers:
0: 0000000D 756E6547 6C65746E 49656E69
1: 000306F0 04010800 FFFA3203 0FABFBFF
2: 76036301 00F0B5FF 00000000 00C30000
3: 00000000 00000000 00000000 00000000
4: 00000121 01C0003F 0000003F 00000000
5: 00000000 00000000 00000000 00000000
6: 00000077 00000002 00000009 00000000
7: 00000000 000027AB 00000000 00000000
8: 00000000 00000000 00000000 00000000
9: 00000000 00000000 00000000 00000000
A: 07300401 0000007F 00000000 00000000
B: 00000000 00000001 00000100 00000004
C: 00000000 00000000 00000000 00000000
D: 00000007 00000340 00000340 00000000
80000000: 80000008 00000000 00000000 00000000
80000001: 00000000 00000000 00000021 2C100800
80000002: 65746E49 2952286C 6F655820 2952286E
80000003: 55504320 2D354520 30393632 20347620
80000004: 2E322040 48473036 0000007A 00000000
80000005: 00000000 00000000 00000000 00000000
80000006: 00000000 00000000 01006040 00000000
80000007: 00000000 00000000 00000000 00000100
80000008: 0000302A 00000000 00000000 00000000
terminating...
↧
Registry Value Monitoring Assistance
Hey Guys,
So I have another request. I can monitor hives without issue, as in the first stanza directly below; if I add anything into this hive, it gets picked up. However, when it comes to monitoring a specific String or DWORD value, I'm having trouble; see the second example below.
[WinRegMon://Registry1]
proc = .*
hive = \\REGISTRY\\USER\\.*\\SOFTWARE\\MICROSOFT\\WINDOWS\\CURRENTVERSION\\RUN\\.*
type = create|delete|set|rename
baseline = 1
index = main
[WinRegMon://Registry11]
proc = .*
hive = \\REGISTRY\\MACHINE\\SYSTEM\\CURRENTCONTROLSET\\CONTROL\\LSA\\Notification Packages.*
type = create|delete|set|rename
baseline = 1
index = main
I also tried with:
\\NotificationPackages.*
\\Notification Packages\\.*
If I remove "Notification Packages", then the stanza does kind of work, in that a baseline is taken of all items within the Lsa hive; but when I add the Notification Packages item, I get nothing at all. I have read that I can also filter via key_path and process_image; however, I don't want to narrow the changes down to specific processes, and again, adding a .* doesn't seem to bring back any values.
Can anyone advise on the stanza I would need to monitor only the Notification Packages string value within the Lsa hive?
↧
Parsing SQL Queries
I am analyzing SQL queries executed by users. Is there any way to parse these queries? For example, in an insert query, the schema and values will be dynamic every time.
Sample event:
> insert into UtilityConnectivityHandler(ErrorCode,InstanceName,MailHost,MailBox,IssueDesc,IssueDateTime)VALUES ('A','B','C','D,E,F,G','H ',GETDATE())
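A hedged rex sketch for pulling the table name, column list, and values out of INSERT statements like the sample. The field names (table_name, column_list, value_list) are made up for the example, and the pattern assumes a single-line statement with a simple VALUES clause:
your_base_search
| rex field=_raw "(?i)insert\s+into\s+(?<table_name>[\w\.\[\]]+)\s*\((?<column_list>[^\)]+)\)\s*values\s*\((?<value_list>.+)\)"
| eval column_list=split(column_list, ","), value_list=split(value_list, ",")
Note that splitting the values on commas is naive: a quoted value that itself contains commas, like 'D,E,F,G' in the sample, would be split into multiple pieces, so a more careful tokenizer is needed if those matter.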
↧
Is the VMware app compatible with Splunk v7?
Hi there
I'm planning to install the VMware App for Splunk.
The documentation says:
The Splunk App for VMware version 3.3.2 is compatible with Splunk 6.3.0 and above and VMware vSphere versions 5.0 and above.
Can you tell me if this application is compatible with our Splunk Enterprise?
Splunk Enterprise
Version: 7.1.2
Best regards
Pierre
↧
Setting up volumes for splunk deployment
OK, basically I think I'm confusing myself. I have a Helm deployment on K8s and originally had volumes for etc and var. I want to have separate volumes for hot/warm, cold, frozen, and thawed. I created PVCs/volumes for each, e.g. mapping to var/cold, var/hot, etc., but is this correct? I know that in indexes.conf you set paths per index, but can this be something like var/hotwarm/index1/? Is it OK to have 3-4 volumes, one for each of the temperatures, and put the indexes on each, or do I need a volume per index? I'm just getting confused. Any help appreciated. I'm also guesstimating volume sizes - currently we don't use Splunk much, but I suspect it's going to grow rapidly!
E.g. my Helm chart includes this:
volumeMounts:
- name: splunk-etc
mountPath: /opt/splunk/etc
- name: splunk-var
mountPath: /opt/splunk/var
- name: splunk-var-hotwarm
mountPath: /opt/splunk/var/log/splunk/hotwarm
- name: splunk-var-cold
mountPath: /opt/splunk/var/log/splunk/cold
- name: splunk-var-frozen
mountPath: /opt/splunk/var/log/splunk/frozen
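For what it's worth, a hedged indexes.conf sketch of how volumes usually map onto this kind of layout: one volume per storage tier is fine, and many indexes can live on the same volume, each in its own subdirectory, so you do not need a volume per index. The paths below match your mount points and the sizes are placeholders; note that thawedPath and the frozen destination cannot use the volume: syntax:
# indexes.conf - paths and sizes are assumptions, adjust to your actual mounts
[volume:hotwarm]
path = /opt/splunk/var/log/splunk/hotwarm
maxVolumeDataSizeMB = 400000
[volume:cold]
path = /opt/splunk/var/log/splunk/cold
maxVolumeDataSizeMB = 800000
[index1]
homePath   = volume:hotwarm/index1/db
coldPath   = volume:cold/index1/colddb
thawedPath = /opt/splunk/var/log/splunk/thawed/index1/thaweddb
coldToFrozenDir = /opt/splunk/var/log/splunk/frozen/index1
As an aside, index data is more conventionally placed under /opt/splunk/var/lib/splunk rather than var/log/splunk, but the volume mechanism works either way as long as the paths are consistent.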
↧
Single event coming into Splunk as CSV; need to convert it into a lookup
We are receiving a CSV file as an event (the whole CSV file as a single event). This is configured correctly, e.g.:
[custom:csv_event]
BREAK_ONLY_BEFORE=NEVER_OCCUR_TAG
MAX_EVENTS=100000
DATETIME_CONFIG = NONE
CHECK_METHOD = modtime
Example message:
hostname,user
host1,user1
host2,user2
host3,user3
If I run a search like the one below, the event comes back correctly, but as a single block of text (\n is preserved as far as I can see):
index=* sourcetype=custom:csv_event| stats latest(_raw) as csv_raw by sourcetype| rex field=csv_raw "(?<header>.+)(\r\n|\r|\n)(?<rest_of_event>[\S\s]+)"
What's the best method to convert the above event into a CSV lookup, so we can do an outputlookup into a CSV file? I know an ugly method, but I was wondering if you have better ideas.
The ugly solution is (not elegant):
index=* sourcetype=custom:csv_event| stats latest(_raw) as csv_raw by sourcetype| rex field=csv_raw "(?<header>.+)(\r\n|\r|\n)(?<rest_of_event>[\S\s]+)"| eval header=rest_of_event| rename header as "hostname,user"| fields "hostname,user"| outputlookup hostname_user.csv
↧
extract fields at search time through props.conf file
I have W3C-format logs. I want to create the fields through props.conf.
I want to use EXTRACT-<class> = <regex> [in <src_field>] for search-time field extraction.
Below is my sample event:
2014-01-02 22:12:37 5209 1x3.xxx2.xx.xxx 200 TCP_MISS 209383 546 GET http daxxx.clxxxnt.net 80 /photos/show_resized/137406/12/4/41.jpg - - - - daxxx.clxxxnt.net image/jpeg;%20charset=utf-8 http://daxxx.clxxxnt.net?&utm_source=email&utm_medium=sf&utm_term=Second%20Email%20SF%201/2&utm_content=loot_position1_michael_macdonald_18&utm_campaign=second_email_sf_01_02_14# "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)" OBSERVED "Content Servers" - 1x3.xx2.xx.xxx 5x.xxx.1xxx.2xxx 52
006
=========
#Fields: date time time-taken c-ip sc-status s-action sc-bytes cs-bytes cs-method cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-username cs-auth-group s-hierarchy s-supplier-name rs(Content-Type) cs(Referer) cs(User-Agent) sc-filter-result cs-categories x-virus-id s-ip r-supplier-ip c-port
=====================
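A hedged props.conf sketch for the leading fields of that event; the sourcetype/stanza name is an assumption, and only the first columns are shown. For full W3C coverage, a REPORT- class pointing at a transforms.conf stanza with DELIMS and FIELDS is usually less painful than one long regex:
# props.conf on the search head - search-time extraction; stanza name is an assumption
[proxy:w3c]
EXTRACT-w3c_leading = ^(?<date>\S+)\s+(?<time>\S+)\s+(?<time_taken>\d+)\s+(?<c_ip>\S+)\s+(?<sc_status>\d+)\s+(?<s_action>\S+)\s+(?<sc_bytes>\d+)\s+(?<cs_bytes>\d+)\s+(?<cs_method>\S+)\s+(?<cs_uri_scheme>\S+)\s+(?<cs_host>\S+)\s+(?<cs_uri_port>\d+)\s+(?<cs_uri_path>\S+)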
↧