Channel: Questions in topic: "splunk-enterprise"

Can you help me format a table that would generate the highest CPU users per hour over a day?

G'day. I've got some data I'm pulling out of some events with a search:

HOUR - two-digit hour of the day
PROCESS - name of a running process
CPU_USAGE - the CPU the process used during the hour

What I want is a table with the hour in the first column, then the 10 processes with the highest CPU usage within that hour. Not the most frequent process (which is what `top` seems to give me), but the ones with the highest CPU usage. So 240 rows in the finished table, 10 per hour. I can get the top 10 in the first hour, and I can get the 10 highest users overall, but I can't seem to get the highest 10 users within each hour. Something like:

00 ProcessA 75%
00 ProcessB 60%
...
00 ProcessG 10%
01 ProcessC 90%
01 ProcessA 45%
01 ProcessG 40%
...
01 ProcessF 3%
02 ProcessB 80%
...

Any hints would be appreciated. The second part is creating a chart to show the same...
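One way to keep the "highest by usage" ranking within each hour is to sort by CPU descending and keep the first 10 rows per hour with `streamstats`. A minimal sketch, assuming the three fields above are already extracted and that one row per hour/process is wanted:

```
... base search ...
| stats max(CPU_USAGE) AS CPU_USAGE BY HOUR PROCESS
| sort 0 HOUR -CPU_USAGE
| streamstats count AS rank BY HOUR
| where rank <= 10
| table HOUR PROCESS CPU_USAGE
```

`| dedup 10 HOUR` after the sort is a shorter equivalent of the streamstats/where pair. For the chart, the same result can feed `| xyseries HOUR PROCESS CPU_USAGE` or a column chart split by PROCESS.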

SNMP -- Correcting date/time output and rogue AP MAC address

Hello, I just configured an SNMP trap handler on an RHEL box to send to Splunk. I'm getting the following output:

Agent Hostname: (hostname) \N
Date: 5 - 8 - 8 - 9 - 6 - 4461316
CISCO-LWAPP-AP-MIB::cLApRogueApMacAddress.0 = STRING: 0:d:67:83:2a:f2

Is there a way to correct the date format to show a proper time? I want to make rogue AP detections actionable. It also seems that the format tosses the first hex digit of the MAC address onto cLApRogueApMacAddress itself (the .0 prior to the STRING value). I'm using the following format options:

format2 %V\n% Agent Address: %A \n Agent Hostname: %B \n Date: %H - %J - %K - %L - %M - %Y \n Enterprise OID: %N \n Trap Type: %W \n Trap Sub-Type: %q \n Community/Infosec Context: %P \n Uptime: %T \n Description: %W \n PDU Attribute/Value Pair Array:\n%v \n -------------- \n
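On the Splunk side, the MAC can be normalized at search time: net-snmp prints each octet without leading zeros (`0:d:67:83:2a:f2` rather than `00:0d:67:83:2a:f2`). A sketch that extracts the value and zero-pads single-digit octets; the field name `rogue_mac` is just an illustration:

```
... | rex "cLApRogueApMacAddress\.\d+ = STRING: (?<rogue_mac>[0-9a-fA-F:]+)"
| rex mode=sed field=rogue_mac "s/\b([0-9a-fA-F])\b/0\1/g"
```

Note that the `.0` before `= STRING:` is the OID instance suffix on the varbind, so it may not be a stray MAC digit at all.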

How do I exclude log from sending to Splunk to save quota?

Hi guys. I have a daily quota of 3 GB, but there is too much log volume, so I'm trying to exclude some logs, like heartbeats, from being sent to Splunk to save some of the quota. I'm trying to use Splunk Filter Rules -> Exclude Patterns: I clicked exclude on some keywords, but I am still able to see these words when I search in Splunk. Can anyone help? Thanks.
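If the excluded patterns are still searchable, those events were already indexed and have already counted against the quota. The standard way to drop events before they reach an index is a nullQueue transform on the indexer or heavy forwarder. A minimal sketch, assuming a sourcetype of `my_sourcetype` and heartbeat events containing the literal string `heartbeat`:

```
# props.conf
[my_sourcetype]
TRANSFORMS-drop_heartbeat = drop_heartbeat

# transforms.conf
[drop_heartbeat]
REGEX = heartbeat
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the REGEX are routed to the null queue at parsing time, so they are never indexed and never count against the license.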

HEC: Share a basic hello world script with Ruby?

All, I need to send some data from a Ruby script to HEC collectors. Does anyone have a basic hello-world script they can share that sends a string to a HEC endpoint with Ruby? It doesn't need to be fancy.
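A minimal sketch using only Ruby's standard library; the hostname, the default HEC port 8088, and the token are placeholders to swap for your own:

```ruby
require "net/http"
require "uri"
require "json"
require "openssl"

# HEC event endpoint (8088 is the default HEC port).
uri = URI("https://splunk.example.com:8088/services/collector/event")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
# Only for a lab box with a self-signed certificate; verify in production.
http.verify_mode = OpenSSL::SSL::VERIFY_NONE

request = Net::HTTP::Post.new(uri.request_uri)
request["Authorization"] = "Splunk 00000000-0000-0000-0000-000000000000"
request.body = { event: "hello world" }.to_json

response = http.request(request)
puts "#{response.code} #{response.body}"
```

A successful send comes back as `{"text":"Success","code":0}`.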

SPL: How to perform a SQL-like MINUS operation?

I am trying to remove certain logs from a base query of one type based on the results of another query over a different type of log. Both are connected by the user field. Specifically, I have identified instances where a user has 4 or more failed login attempts, and I am trying to remove the instances where they successfully changed their password afterwards. This leaves a list of users, and their associated logs, who have a large number of failed logins but did not update their password. Here is the base query:

index=1234 logger_name=auth message="user failed to login" earliest=-24h latest=now | stats count by user | search count>=4 | join user [search index=1234 logger_name=auth message="user failed to login*" earliest=-24h latest=now]

Here is what I am essentially trying to include; however, SPL only handles left, right, and inner joins:

| MINUS user [search index=1234 logger_name=passwordchange message="Update Password:Success" earliest=-24h latest=now]

How might I accomplish this? ![alt text][1] [1]: http://sql.sh/wp-content/uploads/2013/03/sql-ensemble-minus-300.png
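The usual SPL idiom for a set difference is a subsearch-driven `NOT` rather than a join. A sketch of that shape, reusing the fields from the post:

```
index=1234 logger_name=auth message="user failed to login" earliest=-24h latest=now
| stats count by user
| where count>=4
| search NOT [ search index=1234 logger_name=passwordchange message="Update Password:Success" earliest=-24h latest=now
    | dedup user
    | fields user ]
```

The subsearch expands to `(user="a" OR user="b" ...)`, so the `NOT` keeps only users with no successful password change; the surviving users can then drive the existing join back to the raw failed-login events.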

Why does Splunk keep crashing when I try to download software?

Hello, my Splunk keeps crashing when I try to download software, ever since I added the [proxyConfig] stanza with http:// and https:// entries to the server.conf file... When it's not in there, it doesn't crash.

[proxyConfig]
http_proxy=http://hostname:9997
https_proxy=https://hostname:9997

[build a0c72a66db66] 2018-08-30 12:41:25 Received fatal signal 6 (Aborted). Cause: Signal sent by PID 7157 running under UID 0. Crashing thread: TcpChannelThread
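For comparison, a sketch of the documented shape of the stanza, assuming an actual HTTP proxy listening at the hypothetical `proxy.example.com:3128`; port 9997 is normally splunkd's receiving (forwarding) port, not a proxy:

```
# server.conf
[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
no_proxy = localhost, 127.0.0.1
```

Note that `https_proxy` usually still takes an `http://` URL; it names the proxy used for HTTPS traffic, not the scheme of the proxy itself.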

Color coding for values by rows

Hi All, I have two to three rows, something like below:

ABC 98 97 67
DEF 50 45 23
GHI 3 2 1

Three rows of a table, as shown above. Now I need to apply three color codings to each row based on ranges. For example:

First row: >90 green, 80 to 90 yellow, <80 red
Second row: >45 green, 30 to 45 yellow, <30 red
Third row: >3 green, 2 to 3 yellow, <1 red

Kindly suggest the easiest way to achieve this. TIA Regards, BK
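Simple XML can color table cells with an expression palette, but the expression sees only the cell's own value, not the row label, so per-row thresholds usually mean either custom table-row JavaScript or restructuring the data. A sketch of the mechanism for one column (the field name `value1` is hypothetical):

```
<format type="color" field="value1">
  <colorPalette type="expression">if(value &gt; 90, "#65A637", if(value &gt;= 80, "#F7BC38", "#D93F3C"))</colorPalette>
</format>
```

Since the thresholds attach to a column, one workaround is to `transpose` the result so each original row (ABC, DEF, GHI) becomes its own column carrying its own thresholds.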

How to detect the beginning/end of Daylight Saving Time?

I have a report in which a date/time field is converted from GMT to MST/MDT, depending on whether Daylight Saving Time is currently in effect. Since DST begins/ends on a different date every year, how do I automatically change which timezone to convert to in the query?
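If the field can be held as an epoch value, `strftime` applies the DST rules automatically when the effective timezone is a location rather than a fixed offset. A sketch, assuming a hypothetical epoch field `gmt_epoch` and a user/search-head timezone of `America/Denver`:

```
| eval local_time = strftime(gmt_epoch, "%Y-%m-%d %H:%M:%S %Z")
```

`%Z` renders as MST or MDT depending on the date, so the query needs no changes at the transitions.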

How do I search for an IP address hitting a specific port + any other ports?

I think this should be within my grasp, but I don't seem to be able to create a search that returns what I'm looking for. I'm trying to return from syslog any IP address that hits a specific port (say 12345), but *also* attempts connecting to any other ports other than 12345. In my scenario, a well-behaved host should exclusively connect to port 12345 and nothing else. What I'm coming up with either returns no results or only results matching DPT=12345; it does not return anything in between. Thanks
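One shape that captures "talked to 12345 and to something else" is to aggregate destination ports per source address and filter on the set. A sketch, assuming iptables-style `SRC` and `DPT` fields in the syslog events (the index name is a placeholder):

```
index=syslog SRC=* DPT=*
| stats dc(DPT) AS distinct_ports values(DPT) AS ports BY SRC
| search ports=12345 distinct_ports>1
```

`ports` is multivalue, so `ports=12345` matches any source whose port list includes 12345, and `distinct_ports>1` requires at least one other port.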

Time token changes in a comparison timechart

I successfully put together a graph that compares bandwidth consumption over a period of time (currently hardcoded to 60 minutes) with that of the previous week. Now I'm having trouble hooking my query up to the time range picker on the Splunk dashboard. My current query looks like:

```
index=xxx earliest=-60m@m latest=-0m@m | eval period="today"
| append [search index=xxx earliest=-10140m@m latest=-10080m@m | eval period="last_week" | eval new_time=_time+(60*60*24*7)]
| eval _time=if(isnotnull(new_time), new_time, _time)
| timechart span=5m sum(bytes) by period
```

While researching how to do this, I found these posts:

https://answers.splunk.com/answers/453444/how-to-input-time-using-earliest-and-latest-tokens.html
https://answers.splunk.com/answers/475557/how-to-dynamically-compare-two-time-ranges.html

Then I made the following changes:

```
index=xxx
| eval earliest=if(isnum("$time_token.earliest$"), "$time_token.earliest$", relative_time(now(), "$time_token.earliest$"))
| eval latest=if(isnum("$time_token.latest$"), "$time_token.latest$", relative_time(now(), "$time_token.latest$"))
| eval period="today"
| append [search index=xxx
    | eval earliest=if(isnum("$time_token.earliest$"), relative_time("$time_token.earliest$", "-10080m@m"), relative_time(relative_time(now(), "$time_token.earliest$"), "-10080m@m"))
    | eval latest=if(isnum("$time_token.latest$"), relative_time("$time_token.latest$", "-10080m@m"), relative_time(relative_time(now(), "$time_token.latest$"), "-10080m@m"))
    | eval period="last_week"
    | eval new_time=_time+(60*60*24*7)]
| eval _time=if(isnotnull(new_time), new_time, _time)
| timechart span=5m sum(bytes) by period
```

Unfortunately, my graph does not look right. It appears to span a 7-day range, and both series seem to sum the same bytes. See image below. Anyone with ideas? Thanks in advance. ![alt text][1] [1]: /storage/temp/255871-screen-shot-2018-08-30-at-50043-pm.png
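One thing to note: `eval`-ing fields named earliest/latest does not actually constrain which events the search retrieves; those only work as search-time directives. Depending on the version, `timewrap` may sidestep the token arithmetic entirely: run a single search over whatever range the picker supplies and let the command overlay the week-over-week series. A minimal sketch (`timewrap` ships as a built-in command in recent Splunk releases; it started life as an app):

```
index=xxx | timechart span=5m sum(bytes) | timewrap 1week
```

The panel then honors the dashboard's time picker directly, with no token evals in the query.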

Tokens: Why is the search element with depends attribute not working?

I am trying to define a chained search where filters are applied only if the corresponding token is set. But in the example below, the depends attribute does not seem to work as expected: the search keeps waiting for input as long as fooFilter or barFilter is not set. The Splunk version is 6.6.8, and according to the docs the depends attribute should be supported on searches. What did I miss?

    <search id="base">
      <query>index=a | stats count by foo bar</query>
      <earliest>$globalTimePicker.earliest$</earliest>
      <latest>$globalTimePicker.latest$</latest>
    </search>
    <search id="withFoo" base="base" depends="$fooFilter$">
      <query>where foo=$fooFilter$</query>
    </search>
    <search base="withFoo" depends="$barFilter$">
      <query>where bar=$barFilter$ | sort -count</query>
    </search>

How do you calculate average time between transaction groups by two fields?

I have logs from a SIP proxy server and I'm trying to get metrics from the SIP transactions. I have the following events:

Peer AAA events:

Time, call id A, message A.1, peer_name "AAA", resource "111"
Time, call id A, message A.2, peer_name "AAA", resource "111"
Time, call id A, message A.3, peer_name "AAA", resource "111"
Time, call id C, message C.1, peer_name "AAA", resource "112"
Time, call id C, message C.2, peer_name "AAA", resource "112"
Time, call id C, message C.3, peer_name "AAA", resource "112"
Time, call id I, message I.1, peer_name "AAA", resource "111"
Time, call id I, message I.2, peer_name "AAA", resource "111"
Time, call id I, message I.3, peer_name "AAA", resource "111"
Time, call id J, message J.1, peer_name "AAA", resource "112"
Time, call id J, message J.2, peer_name "AAA", resource "112"
Time, call id J, message J.3, peer_name "AAA", resource "112"
(...)

Peer BBB events:

Time, call id B, message B.1, peer_name "BBB", resource "111"
Time, call id B, message B.2, peer_name "BBB", resource "111"
Time, call id B, message B.3, peer_name "BBB", resource "111"
Time, call id D, message D.1, peer_name "BBB", resource "112"
Time, call id D, message D.2, peer_name "BBB", resource "112"
Time, call id D, message D.3, peer_name "BBB", resource "112"
Time, call id F, message F.1, peer_name "BBB", resource "111"
Time, call id F, message F.2, peer_name "BBB", resource "111"
Time, call id F, message F.3, peer_name "BBB", resource "111"
(...)

Peer CCC events:

Time, call id E, message E.1, peer_name "CCC", resource "113"
Time, call id E, message E.2, peer_name "CCC", resource "113"
Time, call id E, message E.3, peer_name "CCC", resource "113"
Time, call id G, message G.1, peer_name "CCC", resource "114"
Time, call id G, message G.2, peer_name "CCC", resource "114"
Time, call id G, message G.3, peer_name "CCC", resource "114"
Time, call id H, message H.1, peer_name "CCC", resource "113"
Time, call id H, message H.2, peer_name "CCC", resource "113"
Time, call id H, message H.3, peer_name "CCC", resource "113"
(...)

Notes:
- Every peer can have N resources.
- Different peers can have the same resource name.
- There are N different peers.
- In the timeline, messages from different peers may be interleaved.

Order in the timeline (only AAA and BBB messages shown, to simplify):

1. Time, call id A, message A.1, peer_name "AAA", resource "111"
2. Time, call id B, message B.1, peer_name "BBB", resource "111"
3. Time, call id C, message C.1, peer_name "AAA", resource "112"
4. Time, call id A, message A.2, peer_name "AAA", resource "111"
5. Time, call id A, message A.3, peer_name "AAA", resource "111"
6. Time, call id D, message D.1, peer_name "BBB", resource "112"
7. Time, call id I, message I.1, peer_name "AAA", resource "111"
8. Time, call id B, message B.2, peer_name "BBB", resource "111"
9. Time, call id I, message I.2, peer_name "AAA", resource "111"
10. Time, call id C, message C.2, peer_name "AAA", resource "112"
11. Time, call id C, message C.3, peer_name "AAA", resource "112"
12. Time, call id J, message J.1, peer_name "AAA", resource "112"
13. Time, call id B, message B.3, peer_name "BBB", resource "111"
14. Time, call id F, message F.1, peer_name "BBB", resource "111"
15. Time, call id F, message F.2, peer_name "BBB", resource "111"
16. Time, call id I, message I.3, peer_name "AAA", resource "111"
17. Time, call id J, message J.2, peer_name "AAA", resource "112"
18. Time, call id D, message D.2, peer_name "BBB", resource "112"
19. Time, call id D, message D.3, peer_name "BBB", resource "112"
20. Time, call id J, message J.3, peer_name "AAA", resource "112"

My goal is to know the average time between transactions from the same peer/resource pair.

Peer AAA and resource 111:
- Call id A, peer AAA, resource 111
- Call id I, peer AAA, resource 111
- Call id ..., peer AAA, resource 111

Peer AAA and resource 112:
- Call id C, peer AAA, resource 112
- Call id J, peer AAA, resource 112
- Call id ..., peer AAA, resource 112

Peer BBB and resource 111:
- Call id B, peer BBB, resource 111
- Call id F, peer BBB, resource 111
(...)

At the end I would like to get a table like:

|| Peer || Resource || Avg time between transactions ||
|| AAA || 111 || 2s ||
|| AAA || 112 || 3.5s ||
|| BBB || 111 || 1s ||
|| BBB || 112 || 5s ||
|| CCC || 113 || 1s ||
|| CCC || 114 || 5s ||

I created a query that gives almost what I want, but only if I limit it to a specific peer and resource. Otherwise the query pays no attention to the peer/resource grouping and calculates the difference between all transactions:

index="index" sourcetype="sourcetype" ("SUBSCRIBE" OR "NOTIFY")
| transaction call_id maxspan=3s
| eval success=if(searchmatch("404"),1,0)
| where success=1
| extract resource>
| where peer_name="ABC"
| where resource="123"
| eval initial_time=_time
| autoregress _time AS previous_time
| delta previous_time AS difference
| chart avg(difference) AS ratio BY peer_name resource

|| field 1 || field 2 || avg time ||
| ABC | 123 | -5.031163865546219 |

Any ideas? I'm using Splunk version 7.0.3.4. Thanks in advance.
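One way to keep the deltas separate per group is `streamstats`, which, unlike `autoregress`/`delta`, accepts a `by` clause. A sketch built on the post's own pipeline:

```
index="index" sourcetype="sourcetype" ("SUBSCRIBE" OR "NOTIFY")
| transaction call_id maxspan=3s
| eval success=if(searchmatch("404"),1,0)
| where success=1
| sort 0 peer_name resource _time
| streamstats current=f window=1 last(_time) AS previous_time BY peer_name resource
| eval difference=_time-previous_time
| stats avg(difference) AS avg_gap BY peer_name resource
```

The `sort` puts each peer/resource group in time order, `streamstats ... BY` carries the previous transaction's _time within that group only, and the final `stats` produces exactly the Peer/Resource/average table sketched above.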

Single Value: Trellis View - color change based on string values

Hello Ninjas - I am not sure if I am having a brain fart or just not grasping this. Seeking some help, please; I have searched for a good few hours now and have read several of the docs. I have a simple index populated by REST API calls that return a single word, "alive" or "dead", as the field "state" (Splunk version 7.0.3). I am trying to build a dashboard panel that will sit on a KPI dashboard. We have a rather vanilla system, so no apps that would likely do this for me. The result I want for each host is a single value. The trellis view is appealing because I wouldn't have to go through using a base search (but maybe I have to) and then create multiple single values. So, a Single Value: the value displayed is the "state", and the color should be representative of the state. But I see the ranges must be numerical, so I eval'd a numerical field based on the state value, called "state_sev". The code snippet below returns the "state" but does not change colors. If I change the `
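A sketch of the numeric-severity route, since Single Value range coloring keys off the number the panel actually displays; the index name is hypothetical and the field names follow the post:

```
<single>
  <search>
    <query>index=health_checks
| stats latest(state) AS state BY host
| eval state_sev = if(state=="alive", 1, 3)
| fields host state_sev</query>
  </search>
  <option name="trellis.enabled">1</option>
  <option name="trellis.splitBy">host</option>
  <option name="rangeValues">[1,2]</option>
  <option name="rangeColors">["0x65a637","0xf7bc38","0xd93f3c"]</option>
</single>
```

The catch is that `rangeValues`/`rangeColors` color the value being displayed, so this panel shows 1 or 3 rather than the words; rendering the literal string with a state-driven color generally takes a custom visualization or dashboard JavaScript.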

Field extractions

When using a curl GET, I receive a JSON response, however with no field extraction. `spath` is not working, and neither are my manual regexes. I would like the extracted fields from the JSON, or the JSON broken into multiple events. Am I missing something? A simple, single-value result is:

{ "origin" : "NYC", "currency" : "USD", "results" : [ { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-17", "price" : "742.85", "airline" : "IB" } ] }

A multi-value event example is:

{ "origin" : "NYC", "currency" : "USD", "results" : [ { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-17", "price" : "742.85", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-18", "price" : "742.85", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-19", "price" : "742.85", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-13", "price" : "746.85", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-14", "price" : "746.85", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-16", "price" : "931.78", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-11", "price" : "959.92", "airline" : "BA" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-10", "price" : "1062.46", "airline" : "AA" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-15", "price" : "1195.56", "airline" : "IB" }, { "destination" : "AGP", "departure_date" : "2018-09-09", "return_date" : "2018-09-12", "price" : "1394.32", "airline" : "AT" } ] }
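If the whole JSON document arrives as one event, one search-time pattern is to address the `results` array with `spath` and then expand it into one row per element. A sketch, assuming the raw event is exactly the JSON shown above:

```
... | spath path=results{} output=result
| mvexpand result
| spath input=result
| table destination departure_date return_date price airline
```

If `spath` returns nothing at all, it is worth confirming that the event really begins with the bare JSON (no curl progress output or prepended timestamp), since `spath` needs valid JSON in the field it reads.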

Need a hand writing a subsearch

Firstly, I am trying to separate 1) cachekey=false in one query, 2) cachekey=true in another query, and 3) both combined in one query. I also want the average response time and the 90th-percentile response time. Where I am facing difficulty: the counts of queries 1 and 2 combined are not equal to the count in query 3, and I don't know where it went wrong. Experts, please spare some time to pull me out of this. Below are the queries as I wrote them:

1) index=datapower CVSEVENT=EXIT opName="ABCD" status="SUCCESS" [search index=datapower opName="1234" cache="false" | dedup grid | fields grid ] | dedup grid | table respTime, grid | stats avg(respTime), perc90(respTime), count

2) index=datapower opName="ABCD" status="SUCCESS" [search index=datapower opName="1234" cache="true" | dedup grid | fields grid ] | dedup grid | table respTime, grid | stats avg(respTime), perc90(respTime), count

3) index=datapower opName="ABCD" OR opName="1234" status="SUCCESS" | stats avg(respTime), perc90(respTime), count by opName

Thanks,
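One way to see where the counts diverge is to classify each ABCD event by whether its grid appears with cache=true, cache=false, or neither, and aggregate once; the `no_match` label makes events that the subsearch filters silently drop visible. A sketch:

```
index=datapower CVSEVENT=EXIT opName="ABCD" status="SUCCESS"
| join type=left grid [ search index=datapower opName="1234" cache=* | dedup grid | fields grid cache ]
| eval cache=coalesce(cache, "no_match")
| stats avg(respTime) perc90(respTime) count BY cache
```

Also worth remembering: subsearches and `join` truncate at around 50,000 results by default, which by itself can make the two filtered counts fall short of the combined query.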

I'm looking for a query to find users logging in remotely, via Remote Desktop, a VM in ESXi, or an SSH terminal, to the domain our Splunk server is in

We had a user log in remotely, either to ESXi with a VM, with Remote Desktop, or at the command prompt using SSH. Our Splunk server is on a domain, and we are trying to determine who logged in and made changes. I have searched the forum and cannot find a definite answer in the community. I'm fairly new to Splunk and to writing queries, so I appreciate any help and/or advice anyone can give. Thanks,
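Two starting points, assuming Windows Security and Linux secure logs are already being indexed; the index and sourcetype names below are common defaults to adjust for your environment. RDP sessions show up as Windows EventCode 4624 with logon type 10 (RemoteInteractive), and OpenSSH records accepted logins in /var/log/secure:

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4624 Logon_Type=10
| table _time user src_ip host

index=os sourcetype=linux_secure ("Accepted password" OR "Accepted publickey")
| table _time host _raw
```

ESXi can also forward its own syslog (including Shell and SSH login messages) to Splunk, which would cover access through the hypervisor.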