
JOIN to list user per DeviceId

The search is trying to show all users within the companyOu that have Mobile Iron set up (Status=Allowed) and those that do not ("Mobile Iron not setup"). The search below shows only one user listed, but some users have more than one DeviceId configured. I can't work out why all DeviceIds are not showing for a user.

    index=ad source=aduserscan 
    | table samAccountName, companyOu, displayName 
    | search samAccountName=*_cp companyOu=dacp.com 
    | rename samAccountName as MailboxId 
    | join type=left MailboxId 
        [ search index=msexchange source=otl_mobileiron MailboxId=*_cp 
        | dedup DeviceId 
        | search Retired="false" ] 
    | rename companyOu as Company, MailboxId as "User ID", DeviceFriendlyName as Model, displayName as "Display Name", SyncStatus as Status 
    | fillnull value="Mobile Iron not setup" Status 
    | table Company, "User ID", "Display Name", Status, DeviceId 
    | sort Company asc

![alt text][1]

 [1]: /storage/temp/251693-screenshot.png
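One likely culprit, worth checking against the join documentation for your version: `join` keeps only the first matching subsearch result per key by default (`max=1`), so additional DeviceIds for the same MailboxId are dropped. A minimal sketch of two things to try, using the field names from the question (untested against this data):

    ... | join type=left max=0 MailboxId 
        [ search index=msexchange source=otl_mobileiron MailboxId=*_cp Retired="false" 
        | dedup DeviceId ]

or avoid join entirely and let stats collect every device per user:

    index=msexchange source=otl_mobileiron MailboxId=*_cp Retired="false" 
    | dedup DeviceId 
    | stats values(DeviceId) AS DeviceId values(SyncStatus) AS Status by MailboxId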

token clash when working with dashboard and fields

Hi all, I am trying to pass a text box input in a dashboard as a token, $ID1$, which works fine. The issue is that I am using the $ sign again for a second search: I extract a field ID3 from index idx1, then run a map search on index idx2 using the extracted ID3. I think the dashboard is expecting an input for ID3 because I am using the $ sign, as the search works fine when run in Splunk search by manually passing ID1. How do I make the dashboard understand that it doesn't need an input for ID3?

    index=idx1 $ID1$ 
    | rex field=_raw "(?.*?)<.*?someString"(?.*?)">(?.*?)somemorestring" 
    | dedup ID3 
    | map search="search (\"string1 with spaces\" OR string2 OR string3 OR \"string 4 with spaces\" OR \"string 5 with spaces\" OR \"string6\") index=idx2 $ID3$"

Hope this makes sense. Thanks for your help. Tom
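In Simple XML, a literal dollar sign inside a dashboard search is written as a doubled `$$`, so the dashboard does not treat $ID3$ as an input token. A minimal sketch of the panel search (the panel structure is assumed, not from the original post; note that `<` inside a query must be XML-escaped as `&lt;`):

    <search>
      <query>
        index=idx1 $ID1$
        | rex field=_raw "(?&lt;ID3&gt;...)"
        | dedup ID3
        | map search="search index=idx2 $$ID3$$"
      </query>
    </search>

Here $ID1$ is still substituted by the dashboard, while $$ID3$$ reaches the search pipeline as the literal $ID3$ that map needs.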

ERROR AdminManager - Argument "action.email" is not supported by this handler.?

Hello, I encountered the error below when I tried to create an alert. I restarted the SHC as suggested on Splunk Answers, but it didn't work.

    ERROR AdminManager - Argument "action.email" is not supported by this handler.

Can anyone please help me resolve this issue? Regards, AbSe

Extract wrapped field for auto extraction

Today we have messages from our application like this:

    2018-May-1 12:00:00.000 [Thread=4d2ce108-c322-49ff-bcc0-380d777f939f] INFO com.MyClass - method=search,customer=1234,user=Tester,time=0.044

Splunk auto-extraction handles the `key=value` pairs perfectly without the need to define specific extractions. We are moving these apps into cloud hosting (PCF specifically), which in turn wraps our own logs in a JSON object, as follows:

    {"app_id":"ABC1234","app_name":"myApp","msg":"2018-May-1 12:00:00.000 [Thread=4d2ce108-c322-49ff-bcc0-380d777f939f] INFO com.MyClass - method=search,customer=1234,user=Tester,time=0.044","source":"APP/PROC/WEB"}

While Splunk's auto-extractor recognizes this as JSON and parses out fields from the JSON wrapper (app_id, app_name, msg and source), auto-extraction of our real "meat" fields within msg is not performed. Defining each specific `key=value` extraction isn't something I want to do, since they change. Performing

    | rename msg as _raw | extract pairdelim=",", kvdelim="="

inline in a search effectively does what I want: it keeps the field extractions from the wrapper (which contain important metadata) while re-parsing the msg field (my understanding is that Splunk's auto-extractor only works on _raw, hence the rename of msg to _raw). However, I would like to configure this in props.conf or transforms.conf so it is automatic. Is that possible? That is, can the auto-extractor effectively run twice, first parsing the JSON and then picking out one of the fields and re-parsing that field?
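It may be possible at search time with a REPORT-based extraction. One sketch that sidesteps the question of whether msg exists yet when REPORT transforms run: match the key=value pairs directly in _raw. The sourcetype name and regex here are assumptions, not from the post:

    # props.conf
    [my_pcf_sourcetype]
    KV_MODE = json
    REPORT-msg_kv = msg_kv_pairs

    # transforms.conf: repeatedly match key=value pairs anywhere in the raw JSON
    [msg_kv_pairs]
    REGEX = ([A-Za-z0-9_]+)=([^,"]+)
    FORMAT = $1::$2
    REPEAT_MATCH = true

KV_MODE = json keeps the wrapper fields (app_id, app_name, msg, source), while the REPORT transform picks up method, customer, user, and time from inside msg. Only the inner pairs use `=` (the JSON wrapper uses `:`), so the regex should not double-extract the wrapper fields.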

Setup Secure (Encrypted) Syslog

Has anyone had luck setting up secure (encrypted) syslog with this add-on? It only mentions creating a TCP input, which would not be encrypted. Our Proofpoint is hosted in their cloud, so encryption between their cloud and our on-site heavy forwarder is imperative.
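For what it's worth, Splunk itself can terminate TLS on a TCP input via a `[tcp-ssl:<port>]` stanza plus an `[SSL]` stanza in inputs.conf; whether Proofpoint's cloud can be pointed at such an input is the part to confirm with their support. A sketch, with the port, sourcetype, and certificate paths as assumptions:

    # inputs.conf on the heavy forwarder
    [tcp-ssl:6514]
    sourcetype = pps_log
    index = proofpoint

    [SSL]
    serverCert = /opt/splunk/etc/auth/mycerts/server.pem
    sslPassword = <private key password>
    requireClientCert = false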

Best way to handle repeating fields in a single event?

Hi all, what would be the best way for Splunk to handle repeating fields in a single event? For instance, one of my logs has a repeating field; for the sake of demo, let's call it field1. So a log event can have:

    field1=123 field1=234

But when Splunk auto-extracts the field/value pairs, it only sees field1=123. What do I need to do to allow it to interpret both values for field1 in that single event? Preferably looking for a way to do this inline in the search if possible. Thanks!
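An inline option worth trying (the field name comes from the example above; the value pattern `\d+` is an assumption about the actual log format):

    ... | rex max_match=0 field=_raw "field1=(?<field1>\d+)"

With max_match=0, rex keeps matching and returns field1 as a multivalue field, so both 123 and 234 land in the same event; a following `| mvexpand field1` would split them into separate rows if needed.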

Unicode characters in dashboards/searches

Hi, is it fine to use Unicode characters as a quick way to set checklist marks and for various other formatting/make-pretty scenarios? For example:

    index=_internal host=splunk* 
    | chart count over source by host 
    | foreach splunk* [eval <<FIELD>>=if('<<FIELD>>'>0,"✔","✘")]

(The `<<FIELD>>` tokens were stripped when this was posted; restored here from the foreach template syntax.)

![alt text][1]

 [1]: /storage/temp/251695-checklist.png

Restoring specific source from frozen

Hello guys, we need to restore frozen data. Is it possible to choose which source to restore (not all sources), and if yes, how? Thanks.
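For reference, the standard frozen-restore path works per bucket rather than per source: copy the frozen bucket into the index's thaweddb directory, rebuild it, and filter by source at search time. A sketch with illustrative paths and bucket names:

    # copy a frozen bucket back into the index's thaweddb (paths are placeholders)
    cp -r /backup/frozen/myindex/db_1526342400_1526256000_42 $SPLUNK_DB/myindex/thaweddb/
    # rebuild the bucket's index files so it becomes searchable
    splunk rebuild $SPLUNK_DB/myindex/thaweddb/db_1526342400_1526256000_42
    splunk restart

Since a bucket can contain many sources, selecting a single source happens afterwards in the search (`source=...`); bucket directory names encode their time range, which helps narrow down which buckets to thaw.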

Problem with monitor file

Hi Community! I have a problem with a big logfile. This log:

1. produces ~250 events per minute, and
2. rolls every ~2:15 hours at a size of 10 MB.

If I run a real-time search for that specific source, some events are missing. I recorded some of these missing events and found them later in the index with a delay of more than 2 hours. On my indexer I sometimes see the following error for that sourcetype:

    AggregatorMiningProcessor - Too many events (300K)

It looks like the universal forwarder doesn't send new events to the indexer, and then after a while a huge load is sent. Do you have an idea what I can do? Thanks, Rob
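The AggregatorMiningProcessor warning usually points at event-breaking and timestamping work piling up on the indexer. A sketch of props.conf settings that often relieve it, assuming single-line events; the sourcetype name and timestamp format are placeholders to adapt:

    # props.conf on the indexer (or heavy forwarder) for this sourcetype
    [my_big_logfile]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19

With an explicit LINE_BREAKER and TIME_FORMAT, Splunk no longer has to guess boundaries and timestamps for every line, which is typically what drives that warning and the delayed indexing.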

Migration from Windows Single Instance Deployment to Small Enterprise Distributed Deployment

The scenario is the following: I work for a small company that installed Splunk initially for a small user base as a standalone deployment. Demand has expanded to multiple departments, and we need to convert to a distributed deployment: one dedicated search head and one indexer. My question is whether this would work as a conversion process:

1. Enable index clustering on the current standalone instance.
2. Make the current standalone instance the master node.
3. Bring up the new indexer as a peer node.
4. Replicate the data from the standalone instance to the new indexer.
5. Make the new indexer the master node.
6. Convert the current standalone instance to a dedicated search head.

Is this a valid process?
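For reference, the CLI shape of steps 1 through 3 in this Splunk era looks roughly like the sketch below; the factors, secret, and hostnames are placeholders, and whether an index cluster is the right vehicle for a one-indexer migration is exactly the open question here:

    # steps 1-2: on the current standalone instance, acting as master
    splunk edit cluster-config -mode master -replication_factor 1 -search_factor 1 -secret mykey
    splunk restart

    # step 3: on the new indexer, enrolling it as a peer
    splunk edit cluster-config -mode slave -master_uri https://master-host:8089 -replication_port 9100 -secret mykey
    splunk restart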

Forwarding events from Splunk DB Connect and Splunk OPSEC LEA

Hi there, I'm trying to set up forwarding from Splunk to a 3rd-party tool, and I've spent a lot of time searching for an answer to why Splunk doesn't forward events that are collected with the Splunk OPSEC LEA Connector or Splunk DB Connect. Other events, like Windows events collected by a universal forwarder, are forwarded fine to the 3rd party. I've reread the [Splunk docs][1] many times but didn't find any issue on my side. My installation looks like: ***heavy forwarder*** with Splunk OPSEC LEA and Splunk DB Connect installed **>** ***indexers*** with the config files shown below **>** ***3rd-party tool***. I've got the following configuration files:

*[props.conf][2]*

    [WinEventLog:Security]
    TRANSFORMS-routing = dst_2024

    [WinEventLog:System]
    TRANSFORMS-routing = dst_2024

    [WinEventLog:Application]
    TRANSFORMS-routing = dst_2024

    [opsec]
    TRANSFORMS-opsec = dst_2025

    [opsec:vpn]
    TRANSFORMS-routing = dst_2025

    [opsec:smartdefense]
    TRANSFORMS-routing = dst_2025

*[transforms.conf][3]*

    [dst_2024]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = dst-sensor-2024

    [dst_2025]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = dst-sensor-2025

*[outputs.conf][4]*

    [tcpout]
    defaultGroup = nothing
    indexAndForward = 1

    # Windows
    [tcpout:dst-sensor-2024]
    disabled = false
    server = XX.XX.XX.XX:2024
    sendCookedData = false
    dropEventsOnQueueFull = 1

    # Checkpoint
    [tcpout:dst-sensor-2025]
    disabled = false
    server = XX.XX.XX.XX:2025
    sendCookedData = false
    dropEventsOnQueueFull = 1

Does someone have any idea what mistake I've made, or might it be a bug? I tried setting up the Check Point input on an indexer instead, and found that Splunk started forwarding the data, but I still don't understand what the problem is.

 [1]: https://docs.splunk.com/Documentation/Splunk/7.1.0/Forwarding/Forwarddatatothird-partysystemsd
 [2]: https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Propsconf
 [3]: https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Transformsconf
 [4]: https://docs.splunk.com/Documentation/Splunk/7.1.0/Admin/Outputsconf
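One avenue worth testing (an assumption about the cause, not confirmed in the post): data that enters through a heavy forwarder is parsed there, and downstream indexers do not re-run index-time TRANSFORMS on already-cooked events. That would explain why moving the Check Point input onto an indexer made forwarding work. If that is the cause, the same routing stanzas would need to live on the heavy forwarder itself:

    # on the heavy forwarder: etc/system/local/props.conf
    [opsec]
    TRANSFORMS-routing = dst_2025

    # with matching stanzas in the heavy forwarder's own
    # transforms.conf and outputs.conf, as on the indexers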

DUO authentication for SPLUNK, how does it work?

We performed a test this morning with Duo/Splunk and it worked fine; however, it also forced our local Splunk accounts to use Duo. We are also not sure how this would impact the local accounts that are used in scripts that access the API. Is there a way around this, or is it all or nothing? Thanks!

Report to see how much time a user spends on a PC

I have a search that captures when a user logs in to and out of his PC:

    index=win* (EventCode=4800 OR EventCode=4801) Account_Name=Batman

The results show consecutive events like these (from top to bottom):

    EventCode=4801 The workstation was unlocked.
    EventCode=4800 The workstation was locked.
    EventCode=4801 The workstation was unlocked.
    EventCode=4800 The workstation was locked.
    EventCode=4801 The workstation was unlocked.
    EventCode=4800 The workstation was locked.

Basically, I want to run a report each day (last 24 hours) where I subtract the _time of the first, second, and third pair of events (duration) and then add the duration values together, so it shows how long a user has not been on the computer. My current search finds the difference between consecutive events; in the results I see the right time-difference values, but they also include wrong data that I cannot remove:

    ... | delta _time p=1 
    | rename delta(_time) AS timeDeltaS 
    | eval timeDeltaS=abs(timeDeltaS) 
    | eval "Duration"=tostring(timeDeltaS,"duration") 
    | table Account_Name, _time, "Duration"
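A sketch of a pairing approach that only measures locked-to-unlocked spans and then totals them; untested against this data, and it assumes events should be processed oldest first (the results above are listed newest first, hence the sort):

    index=win* (EventCode=4800 OR EventCode=4801) Account_Name=Batman earliest=-24h 
    | sort 0 _time 
    | transaction Account_Name startswith="EventCode=4800" endswith="EventCode=4801" 
    | stats sum(duration) AS locked_seconds by Account_Name 
    | eval "Time away"=tostring(locked_seconds,"duration")

transaction emits a duration field per locked/unlocked pair, so unmatched or out-of-order events no longer produce the spurious deltas that delta p=1 picks up.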

Query comparing EPOCH times

I have the query below that checks the expiration date of a certificate, converts it to epoch time, and then changes the value of the result as the certificate 'degrades' (gets closer to expiration). I have a feeling this is really messy and could be improved on, so I'm looking for general recommendations on a better way of doing it. It works, but to me it looks excessive.

    index=test host=mycertificateauthority 
    | rex field=Line "(?<date>\d{1,2}\/\d{1,2}\/\d{4})" 
    | stats count by _time, host, date 
    | eval dateepoch=strptime(date,"%m/%d/%Y") 
    | eval thirtydays=relative_time(dateepoch,"-30d") 
    | eval fifteendays=relative_time(dateepoch,"-15d") 
    | eval fivedays=relative_time(dateepoch,"-5d") 
    | eval result=case(
        (now()<=thirtydays), "0",
        (now()>=thirtydays) AND (now()<=fifteendays) AND (now()<=fivedays) AND (now()<=dateepoch), "1",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()<=fivedays) AND (now()<=dateepoch), "2",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()>=fivedays) AND (now()<=dateepoch), "3",
        (now()>=thirtydays) AND (now()>=fifteendays) AND (now()>=fivedays) AND (now()>=dateepoch), "4")
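Since case() returns the value of the first clause that matches, the overlapping AND chains can collapse into ordered thresholds. A sketch of an equivalent, shorter form using the same field names as above:

    ... | eval dateepoch=strptime(date,"%m/%d/%Y") 
    | eval days_left=floor((dateepoch - now())/86400) 
    | eval result=case(
        days_left > 30, "0",
        days_left > 15, "1",
        days_left > 5,  "2",
        days_left >= 0, "3",
        true(),         "4")

The true() catch-all replaces the last fully spelled-out clause, and the three relative_time() fields are no longer needed.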

Adapt search filter to a query

Hello, I need to insert IP addresses here:

![alt text][1]

 [1]: /storage/temp/250689-cattura1.png

coded as follows: `*`, to be applied to:

    | pivot Cisco_IOS_Event Device last(hostname) AS "Hostname" last(site_id) AS "Site ID" last(model) AS "Model" last(serial_number) AS "Serial number" count(Device) AS "Count of events" SPLITROW host AS host

and many other panels. I need to understand where, in each panel's query, to insert the reference to the search box shown above. For example, in this query the search box works fine and outputs only IPs in the 10.29.10.x subnet:

    | pivot Cisco_IOS_Event Cisco_IOS_Event count(Cisco_IOS_Event) AS "Count of Cisco IOS Event" SPLITROW host AS host SPLITCOL severity_id_and_name FILTER host is "$host$" SORT 500 host ROWSUMMARY 0 COLSUMMARY 0 NUMCOLS 100 SHOWOTHER 1 
    | sort -"0 - emergency", -"1 - alert", -"2 - critical", -"3 - error", -"4 - warning", -"5 - notification", -"6 - informational", -"7 - debugging" 
    | table host "0 - emergency" "1 - alert" "2 - critical" "3 - error" "4 - warning" "5 - notification" "6 - informational" "7 - debugging"

I hope I've explained this clearly. Thanks.

UPDATE: I'm starting to think this filter can only be applied to PIVOT queries, because after working a little more on the PIVOT query posted above I got successful results. But on the following NON-PIVOT query, for example, I still have issues:

    eventtype=cisco_ios-duplex_mismatch 
    | dedup dvc, cdp_local_interface 
    | table _time, dvc, cdp_local_interface, cdp_neighbor, dest_interface, message_text 
    | rename cdp_local_interface AS "Local Int.", cdp_neighbor AS "CDP Neighbor", dest_interface AS "Remote Int.", message_text AS "Message" 
    | sort dvc
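Assuming the search box sets a token named host (as the `FILTER host is "$host$"` clause suggests), a non-pivot search can consume the same token as an ordinary search term; here dvc is assumed to be the field carrying the device IP in this eventtype:

    eventtype=cisco_ios-duplex_mismatch dvc="$host$" 
    | dedup dvc, cdp_local_interface 
    | table _time, dvc, cdp_local_interface, cdp_neighbor, dest_interface, message_text

If the token should match the host field instead, `host="$host$"` works the same way; the key point is that in non-pivot SPL the token drops straight into the base search rather than into a FILTER clause.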

Most efficient way to use dashboard tokens for filtering

I have a dashboard where I am using tokens to filter the results of the individual panels. The use cases for the filters are:

1. Token = anything (*)
2. Token = specific_value
3. Token = anything BUT specific_value

I have the first two tested and working, but can't figure out the best way to handle the third scenario. I have been incorporating the token into my searches like this:

    | fillnull value=NULL field   (this ensures the field always has a value, even when absent from an event)
    | search field=$token$

This works great for scenarios 1 and 2, but there is (I think?) no way to leverage field=value when, in the last case, I want the opposite (!=). Is there a better way to use the token in my search so I can filter on all three scenarios: all values, a specific value, and everything NOT a specific value? Thanks!
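One pattern that covers all three cases: have the input emit the entire predicate rather than just the value, so the comparison operator travels with the token. A Simple XML sketch (the field and value names are placeholders):

    <input type="dropdown" token="filter">
      <label>Field filter</label>
      <choice value="field=*">All values</choice>
      <choice value="field=specific_value">Only specific_value</choice>
      <choice value="field!=specific_value">Everything but specific_value</choice>
      <default>field=*</default>
    </input>

and in each panel:

    | fillnull value=NULL field 
    | search $filter$

The fillnull stays important for the third case, since `field!=x` would otherwise silently drop events where the field is missing entirely.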

Port Not Listening - Receiving Data - Splunk Enterprise - Free License

Hello Team Splunk! I am trying to receive data from a remote machine on the local network. To do so, I configured a receiver to listen on port 9997, as shown below in Figure 1. However, when I check *netstat* I see that the port is not actually listening for incoming connections. Does anyone know what is going wrong? Also, I should mention that I am using *Splunk 6.0* on the Windows 7 operating system.

![alt text][1]

*Figure 1*: Splunk set to listen on 9997

 [1]: /storage/temp/250691-splunk-receivingdata-portnotlistening.png
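Under the assumption that the UI change didn't persist or splunkd wasn't restarted, it may help to verify the setting directly: enabling receiving is stored as a splunktcp stanza in inputs.conf.

    # %SPLUNK_HOME%\etc\system\local\inputs.conf (or an app's local dir)
    [splunktcp://9997]
    disabled = 0

On Windows, the listener can then be checked with `netstat -ano | findstr 9997`. If the stanza exists but the port still isn't listening after a restart, splunkd.log is the next place to look (for example, another process or a firewall holding the port).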

Splunk sometimes not recognizing Scapy generated packets

Before the question, a bit of background. I have a setup with two machines. The first collects data from various devices and sends it directly to the second over UDP, targeting port 5005. The second runs Splunk and is configured to listen on port 5005 for UDP messages and record them in a "sandbox" index. These machines are isolated from the internet and are connected by Ethernet cables to a hub switch right next to them. For brevity, let's call them M1 and S, short for Machine 1 and Splunk Machine. S has been assigned IP 192.168.0.5, while M1 has 192.168.0.6.

There are three ways I can transmit packets from M1 to S. The first is by running from the terminal on M1:

    echo -n "{'Message':'hello'}" > /dev/udp/192.168.0.5/5005

This message is successfully sent from M1 to S and shows up in the sandbox index. The second is by running scripts that emulate our desired behavior and form packets using Scapy, again on M1 targeting S. **This is Scapy, not SciPy.** This process also successfully completes the full loop and shows up in the sandbox index.

The final method, and the one this question centers on, is to open Scapy on M1 and generate and send packets interactively. Assuming we want to emulate sending packets from a Docker container on M1 with an IP of 10.10.12.9, the command used to generate these packets is as follows, with some slight editing, namely substituting text for the actual MAC addresses and placing each field on its own line:

    sendp(Ether(dst="", src="")
          /IP(src="10.10.12.9", dst="192.168.0.5")
          /UDP(dport=5005, sport=33017)
          /Raw(load="{'Message':'Hello.'}"),
          iface="veth201")

If I execute this command in Scapy, I'm told that it sends the packet. If I run tcpdump on S, I can see that the packet generated by the command does in fact travel from M1 to S and is received. However, this packet is entirely ignored by Splunk. By "entirely ignored" I mean that if I open Splunk Web Search in a browser on S and start a real-time search with a 5-minute window for all events in the "sandbox" or "main" indexes, sending the Scapy packet as described above does not produce an event, while the other two methods trigger events in the "sandbox" index as expected. How do I make Splunk recognize this packet?
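One diagnostic avenue, offered purely as an assumption rather than a confirmed cause: the UDP input stanza has settings that affect whether packets from an unexpected source address are accepted and how the sending host is resolved. It may be worth checking these in inputs.conf on S:

    [udp://5005]
    index = sandbox
    # resolve the sender by IP rather than reverse DNS; a spoofed 10.10.12.9
    # source has no PTR record, and DNS stalls can delay event creation
    connection_host = ip
    # explicitly allow the spoofed source range
    acceptFrom = *

connection_host and acceptFrom are documented inputs.conf settings for UDP inputs; whether either is actually responsible here is a guess.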

Timechart with many values

Hello! I'm trying to make a timechart like the one below, but I have some hosts for which I need to show their average CPU usage per hour (0am to 11pm). I'm pulling one month of data and trying to show the per-hour average, but I can only get the average across all hosts, and I need the average for each one.

![alt text][1]

 [1]: /storage/temp/250692-screenhunter-012.jpg

My search so far:

    earliest=04/01/2018:00:00:00 latest=04/30/2018:23:59:00 index="summary" instance="cpu.usage.average" source=Summary_VMhost 
    | rename media as Value 
    | table * 
    | where VMhost="" OR like(VMhost,"hostname00020.somecorp.net") OR like(VMhost,"hostname00021.somecorp.net") OR like(VMhost,"hostname052073.somecorp.net") OR like(VMhost,"hostname052074.somecorp.net") OR like(VMhost,"hostname052075.somecorp.net") OR like(VMhost,"hostname052076.somecorp.net") OR like(VMhost,"hostname631.somecorp.net") OR like(VMhost,"hostname632.somecorp.net") OR like(VMhost,"hostname641.somecorp.net") OR like(VMhost,"hostname642.somecorp.net") 
    | eval date_hour=strftime(_time,"%H") 
    | eval Horario_critico=if((date_hour>=7 AND date_hour<11) OR (date_hour>=13 AND date_hour<17),100,null) 
    | stats avg(Value) max(Horario_critico) by date_hour
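To get one series per host, the split just needs to move into the aggregation. A sketch keeping the same fields (the Horario_critico overlay would need its own series or a chart overlay):

    ... | eval date_hour=strftime(_time,"%H") 
    | chart avg(Value) over date_hour by VMhost

`chart avg(Value) over date_hour by VMhost` yields one column per host with that host's hourly average, which a column or line chart then draws as separate series.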

How to Pivot Certain Fields

I need to pivot only certain fields in a set of logs. Sample data below. Can you tell me how to go about doing this?

**Sample Data**

    _time          Id  Server   DetailId  Name   Value
    5/15/18 11:00  1   Server1  1         Name1  Value1
    5/15/18 11:00  1   Server1  2         Name2  Value2
    5/15/18 11:00  1   Server1  3         Name3  Value3
    5/15/18 11:01  2   Server1  4         Name1  Value4
    5/15/18 11:01  2   Server1  5         Name2  Value5
    5/15/18 11:01  2   Server1  6         Name3  Value6

**Desired Results**

    _time          Id  Server   Name1   Name2   Name3
    5/15/18 11:00  1   Server1  Value1  Value2  Value3
    5/15/18 11:01  2   Server1  Value4  Value5  Value6
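A sketch of one way to do this with xyseries, which pivots on a single row field; since three fields (_time, Id, Server) define a row here, they can be packed into a temporary key and unpacked afterwards:

    ... | eval row_key=_time."|".Id."|".Server 
    | xyseries row_key Name Value 
    | eval _time=mvindex(split(row_key,"|"),0), Id=mvindex(split(row_key,"|"),1), Server=mvindex(split(row_key,"|"),2) 
    | fields - row_key 
    | table _time, Id, Server, Name1, Name2, Name3

`xyseries row_key Name Value` turns each distinct Name into a column holding its Value, and the split/mvindex calls restore the original row fields.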