Channel: Questions in topic: "splunk-enterprise"

How to use a field extracted with the rex command in a search query

I have a field named "content" with multiple values, such as content=value.deva, content=value.devb, content=value.devc, and so on. I have written a rex expression in my search query, `.........| rex field=content "\.(?<Environment>.*)"`, to extract the Environment from the content field. Now I want my results to include only Environment=deva. How can I use the Environment field in my query? I tried `.........| rex field=content "\.(?<Environment>.*)" | Environment=deva`, but it did not work. Can someone help me with this?
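
A minimal sketch of the working version, assuming the extraction itself succeeds (`index=myindex` is a placeholder): every pipe must be followed by a command, so the filter needs `search` (or `where Environment="deva"`) rather than a bare field=value pair:

```
index=myindex
| rex field=content "\.(?<Environment>.*)"
| search Environment=deva
```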

On a heavy forwarder, can I forward a subset of data to syslog and drop everything else?

Here is my situation: I have a Windows heavy forwarder (HF) that is collecting a lot of different data: some via PowerShell scripts, some via WMI, some via file monitoring locally and over UNC paths. All of that data is being forwarded to two indexers. A few weeks ago I configured one of the file monitoring inputs to send a copy of the data it collected to a syslog server. I now need to send that data (collected via file monitoring) to the syslog server and NOT to the indexers. In other words, I want all data collected by this HF to go to the indexers, EXCEPT this data, which should be sent to the syslog server ONLY. How do I do that? I've read through this, which helped me get the current configuration: http://docs.splunk.com/Documentation/SplunkCloud/6.6.1/Forwarding/Forwarddatatothird-partysystemsd

Here are my config files.

.../etc/apps/myapp/local/props.conf:

```
[WinDNS]
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE_DATE = True
MAX_EVENTS = 1000
EXTRACT-Domain = (?i) .*? \.(?P<Domain>[-a-zA-Z0-9@:%_\+.~#?;//=]{2,256}\.[a-z]{2,6})
EXTRACT-src = (?i) [Rcv|Snd] (?P<src>\d+\.\d+\.\d+\.\d+)
EXTRACT-Threat_ID,Context,Int_packet_ID,proto,mode,Xid,type,Opcode,Flags_Hex,char_code,ResponseCode,question_type = .+?[AM|PM]\s+(?<Threat_ID>\w+)\s+(?<Context>\w+)\s+(?<Int_packet_ID>\w+)\s+(?<proto>\w+)\s+(?<mode>\w+)\s+\d+\.\d+\.\d+\.\d+\s+(?<Xid>\w+)\s(?<type>(?:R)?)\s+(?<Opcode>\w+)\s+\[(?<Flags_Hex>\w+)\s(?<char_code>.+?)(?<ResponseCode>[A-Z]+)\]\s+(?<question_type>\w+)\s
EXTRACT-Authoritative_Answer,TrunCation,Recursion_Desired,Recursion_Available = (?m) .+?Message:\W.+\W.+\W.+\W.+\W.+AA\s+(?<Authoritative_Answer>\d)\W.+TC\s+(?<TrunCation>\d)\W.+RD\s+(?<Recursion_Desired>\d)\W.+RA\s+(?<Recursion_Available>\d)
TRANSFORMS-droplocal2 = droplocal2
TRANSFORMS-dropbach = dropbach
#TRANSFORMS-dropall = dropall
SEDCMD-win_dns = s/\(\d+\)/./g
TRANSFORMS-dns = send_to_syslog
```

.../etc/apps/myapp/local/transforms.conf:

```
[dropbach]
REGEX = \[.+?\]\s+\w+\s+.+?BACH
DEST_KEY = queue
FORMAT = nullQueue

[droplocal2]
REGEX = \[.+?\]\s+\w+\s+.+?local
DEST_KEY = queue
FORMAT = nullQueue

[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

#[dropall]
#REGEX = .
#DEST_KEY = queue
#FORMAT = nullQueue
```

.../etc/system/local/outputs.conf:

```
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 0

[tcpout-server://splunk-01:9997]

[tcpout:default-autolb-group]
disabled = false
server = splunk-01:9997,splunk-02:9997

[tcpout-server://splunk-02:9997]
# not sure why this is here....

[syslog:my_syslog_group]
server = 1.1.1.5:514
```

As you can tell, I tried to add a 'dropall', but that just dropped everything without sending a copy to the syslog server first. I then found this forum post: https://answers.splunk.com/answers/4083/can-i-route-some-data-as-syslog-output-to-multiple-destinations.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev which seems to imply that to do what I want, I need to modify outputs.conf so that defaultGroup=nothing, then modify the configuration for all my other inputs to point to the "default-autolb-group" in outputs.conf that sends to the indexers, and then, for this app only, have the output reference point to "my_syslog_group" in outputs.conf. Is that correct, or is it something else?
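
For reference, a minimal sketch of the approach that forum post describes, under these assumptions: the group name "nothing" deliberately matches no stanza, so unrouted data is not forwarded; every input that should still reach the indexers gets an explicit `_TCP_ROUTING` in its inputs.conf stanza; and the WinDNS data is routed only by the existing `send_to_syslog` transform. The monitor stanza below is a placeholder:

```
# outputs.conf -- with no real default group, data is only forwarded
# where a transform or an input stanza explicitly routes it
[tcpout]
defaultGroup = nothing

[tcpout:default-autolb-group]
server = splunk-01:9997,splunk-02:9997

[syslog:my_syslog_group]
server = 1.1.1.5:514
```

```
# inputs.conf -- repeat for each input that should still go to the
# indexers (the path is a placeholder)
[monitor://D:\logs\example_input.log]
_TCP_ROUTING = default-autolb-group
```

The WinDNS props/transforms stay as they are: `send_to_syslog` sets `_SYSLOG_ROUTING = my_syslog_group`, and with no usable default tcpout group that data should go to the syslog server only.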

How do I search for a partial string?

Can someone help explain why a "partial" search doesn't work for me? It's ASA syslog data. When I search for the full message ID "%ASA-4-713903", it finds it; when I search "%ASA-4-", "%ASA-4-713903" is among the results; but when I search "%ASA-4-71390", it finds nothing. Thanks!
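
A likely explanation: Splunk matches whole indexed terms, and the message ID is split into terms at minor breakers such as the hyphens. So "713903" is an indexed term (which is why "%ASA-4-" matches), but the prefix "71390" is not a term on its own. A wildcard makes the prefix match; a minimal sketch (the index name is a placeholder):

```
index=asa "%ASA-4-71390*"
```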

HTTP event collector not working?

I have a token set up in HTTP Event Collector and tried a curl command to test whether it works. I read the instructions on this site, http://dev.splunk.com/view/event-collector/SP-CAAAE7F, which indicated that I could use a curl command to send data to the HTTP event collector. This is the command I used:

```
curl -k https://<host>:8088/services/collector -H 'Authorization: Splunk <token>' -d '{"event":"Hello, World!"}'
```

But the result I got was:

```
{"text":"Token is required","code":2}curl: (7) Failed to connect to Splunk port 80: Connection refused
curl: (6) Could not resolve host: B86C5445-76D4-4FAF-A0FA-D8FE2FA49F79'
```

Does this mean the HTTP event collector doesn't work? What can I do to resolve the issue?
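
The error output suggests the shell split the command apart rather than HEC being broken: curl treated "Splunk" and the raw token as standalone arguments, which happens when quoting fails (single quotes are not quoting characters in the Windows shell). A minimal sketch using double quotes throughout, assuming Splunk runs locally on the default HEC port:

```
curl -k https://localhost:8088/services/collector -H "Authorization: Splunk B86C5445-76D4-4FAF-A0FA-D8FE2FA49F79" -d "{\"event\": \"Hello, World!\"}"
```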

Append Domain name at index time?

All, I have logs coming in from /var/log/messages and /var/log/maillog in which the host field is the short hostname, not the FQDN. There is just too much change control and politics to get them fixed at the source, so I'm looking for a way to make the correction at index time. Server names are well formed: 12 characters ending in three digits. So I need to create a props.conf/transforms.conf pair on my indexer; I'm just not sure what it will look like. Roughly: if host matches .*\d\d\d, then append mycompany.com. Any ideas what that might look like?
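
A minimal sketch of that pair, assuming hostnames are exactly nine word characters followed by three digits as described, and that the events arrive as sourcetype syslog (the stanza and transform names are illustrative):

```
# transforms.conf -- rewrite the host metadata key at index time
[append_domain]
SOURCE_KEY = MetaData:Host
# MetaData:Host holds the value as "host::<name>"; capture the short name
REGEX = ^host::(\w{9}\d{3})$
DEST_KEY = MetaData:Host
FORMAT = host::$1.mycompany.com

# props.conf -- attach the transform to the incoming data
[syslog]
TRANSFORMS-append_domain = append_domain
```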

Counting a value out of a lookup table that does not exist in the logs

Hi, I have a search that works just fine, showing a list of users in a lookup table who have not logged into Splunk in the last 7 days:

```
| inputlookup user_role_lookup.csv
| rename userName AS user
| table user
| eval count=0
| join type=left user
    [ search index=_audit action="login attempt" info=succeeded earliest=-7d@d
      | stats count by user ]
| where count=0
```

The lookup table is simply 'userName' and 'roles' with about 190 entries. Roles, of course, is not a value in the _audit logs. I want to be able to show whether no one from a particular role logged into Splunk in the last 7 days, but replacing 'user' with 'roles' in the query above doesn't give me what I need. If it matters, the field 'roles' contains the actual roles we created in Splunk, pulled out via the REST command and put into a lookup table. Any help is appreciated.
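
One way to pivot the same idea to roles, sketched under the assumption that the lookup's roles column is single-valued per row: keep roles from the lookup, sum the per-user login counts by role, and keep the roles whose sum is zero:

```
| inputlookup user_role_lookup.csv
| rename userName AS user
| join type=left user
    [ search index=_audit action="login attempt" info=succeeded earliest=-7d@d
      | stats count by user ]
| fillnull value=0 count
| stats sum(count) AS logins by roles
| where logins=0
```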

Our indexers' hot/warm bucket storage is almost full -- what should we do?

The hot and warm bucket storage on our primary indexers is almost full. Please help us solve this issue. We are trying to shrink the hot and warm storage on our primary indexers; the retention period for hot+warm is 30 days. What is the best-practice process to get out of this situation?
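
For reference, a minimal indexes.conf sketch of the settings that usually govern this, assuming the goal is to roll warm buckets to cold sooner so the hot/warm volume stops filling (the index name and values are illustrative):

```
# indexes.conf -- per-index limits on hot/warm (homePath) storage
[main]
# cap the total size of hot + warm buckets in MB; when the cap is hit,
# the oldest warm buckets roll to cold
homePath.maxDataSizeMB = 100000
# alternatively, cap how many warm buckets are kept before rolling
maxWarmDBCount = 200
```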

Can one master node manage a 50-indexer cluster?

Can one master node be used to manage a cluster of 50 indexers? The Splunk documentation specifies 30 indexers per master node. Would having two cluster master nodes imply two separate clusters? What is the best way to manage a 50-indexer cluster? Thanks.

HTTP event collector -- error with data format?

I want to try sending a simple event to the HTTP event collector just to test whether it works. I think it was able to find the web address and authenticate with the token value, but I get an "invalid data format" error. What can I do to fix it? I have the following command:

```
curl -k -H "Authorization: Splunk B86C5445-76D4-4FAF-A0FA-D8FE2FA49F79" https://localhost:8088/services/collector/event -d '{"event":"testing"}'
```

With the following result:

```
{"text":"Invalid data format","code":6,"invalid-event-number":0}
```
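
The JSON itself is valid for the /services/collector/event endpoint, so one likely culprit is shell quoting: in the Windows shell, single quotes are passed through literally, which mangles the request body into something HEC cannot parse. A sketch of the same command with double quotes and escaped inner quotes:

```
curl -k -H "Authorization: Splunk B86C5445-76D4-4FAF-A0FA-D8FE2FA49F79" https://localhost:8088/services/collector/event -d "{\"event\": \"testing\"}"
```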

Creating a chart based on time values, not epoch time

Is it possible to create a chart using time values "4:53:43" vs. converting them to epoch time "1505930393"? I'd like the Y-Axis to be time (3:41:32) - (6:43:21) and the X-Axis to be a name. Basically these are race results and I'd like the chart to show the times of each participant. Do I have to convert the time value to a real number? Thanks!
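
Chart axes need numbers, so one approach is to convert each "H:MM:SS" value to seconds for plotting. A sketch, assuming the time lives in a field called race_time and the runner's name in participant (both hypothetical field names):

```
| eval parts = split(race_time, ":")
| eval seconds = tonumber(mvindex(parts, 0)) * 3600
               + tonumber(mvindex(parts, 1)) * 60
               + tonumber(mvindex(parts, 2))
| chart values(seconds) AS finish_seconds by participant
```

For a readable table you could format the value back with tostring(seconds, "duration"), but the chart itself plots the numeric seconds.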

Why is my stats by command so slow and how can I speed it up for longer time intervals?

I'm working on some statistics-related queries. I'm trying to get the security ID, date, and count of hosts connected to:

```
index=wineventlog sourcetype="WinEventLog:Security" 4624
| fields host, Security_ID, _time
| bucket _time span=1d
| stats dc(host) by Security_ID, _time
```

This works perfectly until I start adding Security_ID: with no `by` clause, or grouping only by time, it's fast. I also tried a `dedup Security_ID, _time, host` before the stats dc command, but it didn't help the overall speed. It takes well over 10 minutes to complete this search for a week, and I'd like to be able to run it for 30, 60, or 90 days. What do I need to do for that to be viable?
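
One common fix for long ranges is summary indexing: a scheduled search computes the expensive daily partial results once, and reports run over the small summary instead of the raw events. A sketch, assuming a search scheduled daily with summary indexing enabled (the summary source name is a placeholder):

```
index=wineventlog sourcetype="WinEventLog:Security" 4624 earliest=-1d@d latest=@d
| bin _time span=1d
| sistats dc(host) by Security_ID, _time
```

The 30/60/90-day report then reads the summary, and the si-prefixed partial results let the same stats call produce correct distinct counts:

```
index=summary source="daily_logon_summary"
| stats dc(host) by Security_ID, _time
```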

How to create a dashboard showing in progress, completed, and pending status

Hi, I am preparing a dashboard for WebSphere team job monitoring. I have 29 jobs. The server logs a "started" kind of message and a "completed successfully" kind of message for each job. I have to show three tables: in-progress jobs, completed jobs, and pending jobs. My logic: the in-progress table shows the latest job that started, using head 1 sorted by _time. The completed table shows all jobs that completed today (no other dates). The pending table shows jobs not yet seen (neither started nor completed). Example: I have 5 jobs, a, b, c, d, e. In-progress displays only one, say b. Completed displays, say, a and c. So pending will show d and e. I hope I'm clear. If all are complete, i.e. completed has a, b, c, d, e, then in-progress and pending will be NIL. Remember, I have to pick up logs for TODAY only. I planned to keep a master list; in-progress will display head 1, and completed will display all of today's completions. Using mvappend, I can combine in-progress and completed, then subtract that from the master list to show pending. Please help.
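
A sketch of the pending-table logic, under stated assumptions: the master list lives in a lookup named master_jobs.csv with a job column, and the log events are searchable as below (the index, sourcetype, and match strings are placeholders):

```
| inputlookup master_jobs.csv
| join type=left job
    [ search index=was sourcetype=job_log earliest=@d ("started" OR "completed successfully")
      | eval status = if(searchmatch("completed successfully"), "completed", "in progress")
      | stats latest(status) AS status by job ]
| fillnull value="pending" status
| where status="pending"
```

The same base search, filtered with `| where status="completed"` or sorted by _time with head 1, could feed the other two panels.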

Help with search to show the top 5 results

index="all_eqt" Plant=15 ProcessCode=T DefectCode="*" MachineNumber<26 | stats sum(TotalSquareYards) as "Total Square Yards" by StyleName | sortStyleName I'm trying to limit the data shown on the chart to the top 5 styles with the highest TotalSquareYards

Automatic lookup on a fieldalias field -- Is it possible?

My automatic lookups are not working on fields that were created via FIELDALIAS. I have automatic lookups in my "search" app's local/props.conf running on fields like "src" and "dst". These are global, i.e. at the top of props.conf, not defined under a sourcetype or anything. Example:

```
LOOKUP-auto-dst-lookup = subnets Subnet AS dst OUTPUT Description AS dst_description
LOOKUP-auto-dest-lookup = subnets Subnet AS dest OUTPUT Description AS dest_description
LOOKUP-auto-src-lookup = subnets Subnet AS src OUTPUT Description AS src_description
```

I also want it to work on the "dest" field you see above, which is the field most Splunk TAs alias their destination IP field to. Example:

```
# grep -R "FIELDALIAS-" /opt/splunk/etc/apps
/opt/splunk/etc/apps/Splunk_TA_cool_waf/default/props.conf:FIELDALIAS-alias_for_dst = dst as dest
/opt/splunk/etc/apps/Splunk_TA_cool_av/default/props.conf:FIELDALIAS-alias_for_ComputerName = ComputerName as dest
```

However, only src and dst are working, not dest. Is there some kind of order of precedence here that I'm missing, or is it impossible for automatic lookups to work on field names created by FIELDALIAS? Edit: it seems like there is [precedence][1], and that I should edit the system default to achieve what I want. However, system/default/props.conf says NOT to edit that file. So what am I supposed to do? [1]: http://docs.splunk.com/Documentation/Splunk/6.6.3/Admin/Wheretofindtheconfigurationfiles
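
One thing to try, sketched under the assumption that stanza scoping is the issue: define the dest lookup in a local props.conf under the same sourcetype stanza where the TA's alias applies, instead of only in the global stanza (the sourcetype name below is a placeholder):

```
# local/props.conf -- scope the lookup to the sourcetype that
# produces the aliased dest field (stanza name is illustrative)
[cool_waf_events]
LOOKUP-auto-dest-lookup = subnets Subnet AS dest OUTPUT Description AS dest_description
```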

Dashboard like ITSI

I would like to make a dashboard close to ITSI (especially Deep Dives!). Is there a visualization app that is similar to this? I think ITSI is very good, but this time it was over spec... It would be helpful if you could give me some hints.

Can I use a sourcetype other than syslog with the Splunk Add-on for *nix?

Hello, I want to know whether this add-on works with sourcetypes other than syslog. I have changed the base sourcetype from syslog to syslog_linux, so that I can address it as its own stanza in props.conf and transforms.conf.

Input app for the forwarders, inputs.conf:

```
[monitor:///var/log/splunk.messages]
disabled = false
index = infrastrukturlog_linux
sourcetype = syslog_linux
_TCP_ROUTING = wwiForwarderProd
```

The original syslog props.conf stanza, renamed:

```
[syslog_linux]
pulldown_type = true
maxDist = 3
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
TRANSFORMS = syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
category = Operating System
description = Output produced by many syslog daemons, as described in RFC3164 by the IETF.
```

Filter app for the heavy forwarder, props.conf:

```
[syslog_linux]
TRANSFORMS-null = null_queue_filter_syslog,null_queue_filter_syslog1
```

transforms.conf:

```
[null_queue_filter_syslog]
REGEX = (?m)caa:
DEST_KEY = queue
FORMAT = nullQueue

[null_queue_filter_syslog1]
REGEX = ^(?=.*\bifconfig\b)(?=.*\buser:info\b).*$
DEST_KEY = queue
FORMAT = nullQueue
```

I actually get no data from the add-on. Any ideas? Gerd