Channel: Questions in topic: "splunk-enterprise"

How to show a legend for rangemap in a single value panel?

Hi Team, I have created a single value panel with rangemap in the query. It displays text in the center, and the panel's background color changes based on the range. Is there any way I can show a legend for the rangemap colors at the bottom of this panel, or an alternative way to indicate that green means this and red means that?
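
One workaround is to place a small html element under the single value in the dashboard's Simple XML. A sketch (the range names and hex colors are placeholders; match them to your rangemap):

<panel>
  <single>
    <search>
      <query>... | rangemap field=value low=0-400 default=severe</query>
    </search>
  </single>
  <html>
    <p>
      <span style="color:#65a637">&#9632;</span> green = within range
      <span style="color:#d93f3c">&#9632;</span> red = threshold exceeded
    </p>
  </html>
</panel>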

SharePoint 2007 logs Splunk app

Hi Splunkers, is there any Splunkbase app specifically for monitoring SharePoint 2007? I know there are apps for SharePoint 2010 and SharePoint 2013. I would like to ingest SharePoint 2007 logs and monitor them; please suggest options. Also, SharePoint 2007 stores all its logs in the database. Can I use only the DB Connect option to ingest these logs?
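
If the logs live in the database, DB Connect can pull them regardless of whether an app exists for that SharePoint version. As a quick feasibility check, a dbxquery sketch (the connection name and table are hypothetical; SharePoint 2007's actual schema will differ):

| dbxquery connection="sharepoint2007" query="SELECT TOP 10 * FROM dbo.EventLog"

A scheduled DB Connect input with a rising column (e.g. an event timestamp or ID) would then ingest new rows continuously.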

How to get repeated events for the same host and instance over a period

I have some events on my server. I want to find events that occur repeatedly for the same host and same instance over a period of 1 or 2 weeks.

DB_HOST_NAME | DB_INSTANCE_NAME | EVENT | TIME_WAITED | WAIT_CLASS | Timestamp
n3pvdo1001 | cogcsprd | enq: TX - row lock contention | 8527.248862 | Application | 24/SEP/2018 16:00:26 UTC
r1pvdo1025 | rcmprd | enq: TX - row lock contention | 1572.800453 | Application | 24/SEP/2018 16:00:35 UTC
a9pvdb001 | obpd1 | enq: TX - row lock contention | 880.093816 | Application | 24/SEP/2018 16:00:29 UTC
n3pvdb020 | truth | latch: cache buffers chains | 345.201907 | Concurrency | 24/SEP/2018 16:00:33 UTC
tckdbp1 | epps1 | enq: TM - contention | 171.47339 | Application | 24/SEP/2018 16:00:35 UTC
svodbp1 | svoprod | enq: TM - contention | 168.377045 | Application | 24/SEP/2018 16:00:23 UTC
n1pvdb1008 | slmprod | enq: TX - row lock contention | 161.56093 | Application | 24/SEP/2018 16:00:30 UTC
svodbp1 | svoprod | enq: TM - contention | 156.281149 | Application | 24/SEP/2018 16:00:23 UTC
n3pvdb020 | truth | db file sequential read | 125.486741 | User I/O | 24/SEP/2018 16:00:33 UTC
teldbp1 | dw1 | SQL*Net message from dblink | 121.329347 | Network | 24/SEP/2018 16:00:35 UTC
n3pvdb020 | truth | db file sequential read | 95.950487 | User I/O | 24/SEP/2018 16:00:33 UTC
au2dbp1 | auth2 | SQL*Net message from dblink | 88.267973 | Network | 24/SEP/2018 16:00:33 UTC
cldbp3 | amaprod | SQL*Net message from dblink | 84.309742 | Network | 24/SEP/2018 16:00:36 UTC
tckdbp1 | epps1 | enq: TX - row lock contention | 83.137442 | Application | 24/SEP/2018 16:00:35 UTC
n3pvdb020 | truth | latch: cache buffers chains | 67.086209 | Concurrency | 24/SEP/2018 16:00:33 UTC
svodbp1 | svoprod | enq: TX - row lock contention | 61.439869 | Application | 24/SEP/2018 16:00:23 UTC

If the same event has occurred on the same host and instance for the last 7 days, I want to set the status flag to Amber; if for more than 2 weeks, Red; and if for less than a week, Green.
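
A minimal sketch of one way to flag this (the index name is a placeholder, and it assumes DB_HOST_NAME, DB_INSTANCE_NAME, and EVENT are extracted fields):

index=oracle_waits earliest=-30d
| stats earliest(_time) as first_seen, latest(_time) as last_seen, count by DB_HOST_NAME, DB_INSTANCE_NAME, EVENT
| eval span_days=(last_seen-first_seen)/86400
| eval status=case(span_days>14, "Red", span_days>=7, "Amber", true(), "Green")

This measures how long each host/instance/event combination has kept recurring and maps the span onto the three flags.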

I need to create a dashboard in Splunk which gives information on the CPU and memory utilization of the devices installed on the server (Cisco Prime application)

I need to create a dashboard in Splunk which gives information on the CPU and memory utilization of the devices installed on the server (Cisco Prime application).

AD request and file audit

Hello! I need to audit access to a file in Windows, restricted to a certain group in AD. For example, there is a file file_for_test.doc. To view the latest audit data, I use the following search:

host="hostname" sourcetype="WinEventLog" Object_Name="*file_for_test.doc" Accesses="ReadData*" | head 10000 | stats first(_time) as _time by Account_Name,Accesses,EventCode,Object_Name | table _time, Account_Name, Accesses, EventCode, Object_Name

Result:

_time | Account_Name | Accesses | EventCode | Object_Name
2018-09-25 13:24:07 | User_1 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc
2018-09-25 10:59:32 | User_2 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc
2018-09-25 08:41:39 | User_3 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc
2018-09-24 18:14:33 | User_4 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc

But I need to display data only for users in a certain AD group, for example only User_1 and User_4. This gets the list of those users:

| ldapsearch domain=dom_name search="(&(objectClass=group)(CN=group_name))" | ldapgroup | table member_name

Result:

member_name
User_1
User_4

How do I combine these 2 requests to get the following result:

_time | Account_Name | Accesses | EventCode | Object_Name
2018-09-25 13:24:07 | User_1 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc
2018-09-24 18:14:33 | User_4 | ReadData (or ListDirectory) | 4663 | \Device\file_for_test.doc
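
One way to combine them is to run the ldapsearch as a subsearch whose results become an Account_Name filter on the main search (a sketch; it assumes member_name values match Account_Name exactly):

host="hostname" sourcetype="WinEventLog" Object_Name="*file_for_test.doc" Accesses="ReadData*"
    [| ldapsearch domain=dom_name search="(&(objectClass=group)(CN=group_name))"
     | ldapgroup
     | rename member_name as Account_Name
     | fields Account_Name ]
| head 10000
| stats first(_time) as _time by Account_Name, Accesses, EventCode, Object_Name
| table _time, Account_Name, Accesses, EventCode, Object_Name

The subsearch expands to ( Account_Name="User_1" OR Account_Name="User_4" ), so only events for group members survive.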

Splunk App for Windows Infrastructure

I installed and configured Splunk App for Windows Infrastructure. With it I installed: Splunk Add-on for PowerShell, Splunk Supporting Add-on for Active Directory (and configured it; "Connection test for default succeeded"), Splunk Add-on for Microsoft Active Directory, Splunk Add-on for Microsoft Windows DNS, and Splunk Add-on for Microsoft Windows. When I configure the app and complete all requirements, I see only one server (the Splunk host itself), but I don't see any domain controllers. Where must I add domain controllers?

Not able to add a new member to the search head cluster; it shows the error "Failed to proxy call to member"

I am trying to add a new member to an existing cluster, but it shows the error "Failed to proxy call to member https://xxx:80809". I tried both ways from the Splunk docs: splunk add shcluster-member -current_member_uri https://xxxx:8089 and splunk add shcluster-member -new_member_uri https://xxxx:8089. My pass4SymmKey is the same in both places, but I am still facing the issue. Can anyone help me fix it? Thanks in advance.
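
Two quick checks using the standard CLI may help. Note that the port in the quoted error, 80809, is not a valid TCP port, which suggests a mistyped mgmt_uri of 8089 somewhere in the member configuration:

# On an existing member: list members, captain, and their mgmt URIs
$SPLUNK_HOME/bin/splunk show shcluster-status

# On the new member: verify the effective [shclustering] settings, including mgmt_uri
$SPLUNK_HOME/bin/splunk btool server list shclustering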

How to compare avg of first 10 results to avg of last 10 results and apply a calculation

I need to return the average of the earliest 10 results **(OG)** in an index and the average of the latest 10 results **(FG)** in the same index. I then need to apply a calculation to get the result **(ABV)**, i.e.: **ABV = [average of earliest 10 results] minus [average of latest 10 results], multiplied by 131.25**. I can calculate **OG** by using this search: **| streamstats window=10 earliest(SG) as SGStart | stats avg(SGStart) as OG** ...and I can calculate **FG** by using this search: **| streamstats window=10 latest(SG) as SGEnd | stats avg(SGEnd) as FG** ...and I can also calculate **ABV** by appending: **| eval stepG = 'OG'-'SG' | eval ABV=stepG*131.25 | table ABV** ...but obviously some events are lost in the pipeline due to filtering, and I can't figure out how to put it all together. Any help would be greatly appreciated!
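
One way to get both window averages in a single pipeline (a sketch; it assumes SG is the extracted gravity field and the index name is a placeholder):

index=brewing sourcetype=gravity
| sort 0 + _time
| streamstats count as n
| eventstats max(n) as total
| stats avg(eval(if(n<=10, SG, null()))) as OG, avg(eval(if(n>total-10, SG, null()))) as FG
| eval ABV=(OG-FG)*131.25
| table OG, FG, ABV

Numbering every event oldest-first lets one stats call average the first 10 and the last 10 without losing events between two separate searches.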

How to get the logs from the servers

Dear All, I am new to Splunk. I just installed Splunk on my servers. Kindly let me know how I can start receiving logs from the other servers. Thanks & Regards, Siraj
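
The usual pattern is to enable receiving on the indexer and install a universal forwarder on each source server. A minimal sketch (the indexer hostname and monitored path are placeholders):

# On the indexer: listen for forwarded data
$SPLUNK_HOME/bin/splunk enable listen 9997

# On each source server, after installing the universal forwarder:
$SPLUNK_HOME/bin/splunk add forward-server indexer.example.com:9997
$SPLUNK_HOME/bin/splunk add monitor /var/log

Data from the monitored paths should then start arriving in the default index on the indexer.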

How can one determine on system level if a Splunk install is a Heavy Forwarder or an Indexer?

Hi team, I'm looking for a way to identify whether a Splunk server is a heavy forwarder or an indexer in an automated way. Is there a way to find out, by looking at filesystems, processes, or running commands on the system, what the server's role in Splunk is? I'm looking forward to your feedback; it's highly appreciated. Thanks.
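
One heuristic is to inspect the effective configuration with btool: a heavy forwarder typically has a tcpout group in outputs.conf, while an indexer typically has a splunktcp listener and its own indexes. A sketch, assuming a default install path:

# Forwarding configured? (typical of a heavy forwarder)
$SPLUNK_HOME/bin/splunk btool outputs list tcpout

# Receiving configured? (typical of an indexer)
$SPLUNK_HOME/bin/splunk btool inputs list splunktcp

This is a convention rather than a guarantee, since an indexer can also forward, so it is worth checking both outputs.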

I get "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times," even though I already have a 30-day license installed

I get the error "Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times." However, I already have a 30-day license installed.
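
It can help to check what the license manager itself reports; a sketch using standard tooling:

# Show installed licenses with their status and expiry:
$SPLUNK_HOME/bin/splunk list licenses

# Or, from the search bar, list outstanding licenser messages (warnings/violations):
| rest /services/licenser/messages

If the 30-day license shows as expired, or the instance sits in a pool with past violations, that would explain the litsearch error despite the new license.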

Best method to add a field based upon another field

Hi, I have a number of pre-existing date fields from Nessus that are reported in epoch format. I'd like to add a new field that translates each of those fields into Julian format. How would I do that? The link below had the same issue, but I don't see an answer. I know that this can be done at search time, but I want it done automatically, retaining the original field and adding a new one with the converted date. https://answers.splunk.com/answers/499710/how-to-convert-epoch-to-human-readable-in-kv-mode.html
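
A calculated field in props.conf is applied automatically at search time, keeps the original field, and adds the new one. A sketch (the sourcetype and field name are placeholders; %j is strftime's day-of-year, the usual "Julian" day):

# props.conf on the search head
[nessus]
EVAL-scan_date_julian = strftime(scan_date, "%Y-%j")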

Why is my return token not working properly with subsearches?

Hello all, I have the following search:

index=mon1 data{}.testType!="https" data{}.id="95809" source="*LOAD*" | stats latest(data{}.status) as status | lookup mon-status status OUTPUT value as value_full | eval value_fyc=[ search data{}.id="167934" source="*FYC" | stats latest(data{}.status) as status | lookup mon-status status OUTPUT value | return $value ] | eval value=$value_full$ + $value_fyc$ | rangemap field=value low=0-400 severe=401-999 default=low

[The dashboard XML was stripped on submission; the panel set tokens from $result.value$, $result.range$, $field1.earliest$, and $field1.latest$, and the single value displayed "SYS1".]

If I run the query in the search app, it runs fine and I have a table with all the values populated. ![alt text][1] In my dashboard I use CSS to display an icon based on range (i.e. if "severe", display a red cross), but this is not working anymore after I added the subsearch to my query. I'm not sure the token contains the right value; is there a way to debug it? Thanks, Fausto [1]: /storage/temp/255034-rangemap.png
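
A simple way to debug a token is to echo it somewhere visible, e.g. in an html element in the dashboard (a sketch; substitute your actual token names for value and range):

<html>
  <p>debug: value token = $value$ | range token = $range$</p>
</html>

One common culprit: if the subsearch returns no rows, return $value produces nothing, the outer eval breaks, and any tokens set from the result never populate.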

correlation search

Hello everybody, how can I write correlation searches for these use cases, please: detection of access to password hash files, connections from multiple IPs to the same accounts, unauthorized devices on the network, and logs deleted from the source? I want a general-purpose search for each, and I will try to adapt it to my data. Thanks in advance.
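
As an illustration of the shape such searches take, here is a sketch for the second use case (the index, sourcetype, field names, and threshold are assumptions to adapt to your data):

index=auth action=success earliest=-24h
| stats dc(src_ip) as distinct_ips, values(src_ip) as ips by user
| where distinct_ips > 3

The other use cases follow the same pattern: select the relevant events, aggregate by the entity of interest, and alert on a threshold or an unexpected value.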

XML/HTML code for reports page and datasets page

Hello, where can I see the source code for the Reports page and the Datasets page (the Reports and Datasets tabs which appear on top)? Can I change the source code of these pages at the app level? Thanks.

How do I create an alert that triggers when a specific event is found for the first time in a day, but is ignored if the same event is found a second time?

We are indexing web service errors in Splunk. Here are the cases we need to handle: 1. We need to create an alert the first time we find an error text for a web service in a day. If we find the same error text for the same web service again that day, an alert shouldn't be created. 2. This scenario is the tricky one: an alert run finds 2 error texts. For one error text, we have already raised an alert, as it was the first occurrence that day. For the other error text, we still need to send an alert, as it is new. Please help me figure out how we can handle this.
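
One common pattern is to schedule the alert frequently over a since-midnight window and fire only for error texts whose first occurrence falls inside the latest run. A sketch (the index and field names are placeholders), assuming a 15-minute schedule:

index=web_errors earliest=@d
| stats earliest(_time) as first_seen by web_service, error_text
| where first_seen >= relative_time(now(), "-15m@m")

Each web_service/error_text pair then triggers only on the run that first sees it that day, which also covers case 2, where one of two error texts is new.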

Can you help me create a search that efficiently filters consecutive events?

Some of my logs are generated via automatic jobs and I want to filter them away. What is the best way to filter away a sequence of consecutive events after sorting? For example, these are my events:

sessionID | logNo | logText
sess1 | 1 | abc
sess1 | 2 | def
sess1 | 3 | ghi
sess2 | 1 | abc
sess2 | 2 | def
sess2 | 3 | ghi
sess3 | 1 | keep
sess3 | 2 | this
sess4 | 1 | abc
sess4 | 2 | def
sess4 | 3 | ghi
sess4 | 4 | something else

base search | sort sessionID, logNo | ...

The end result is that I only want to retain sess3 and sess4 events, while filtering away sess1 and sess2. To explain, I have identified a set of logs that are generated automatically with the same sessionID. This is what I intend to stop from showing up:

logNo | logText
1 | abc
2 | def
3 | ghi

However, I do not want to filter away sess4, as it contains an additional logNo 4, even though its logNo 1 to 3 match what I want to filter away. I have an idea to parse the events into something like:

sessionID | combined
sess1 | 1 abc 2 def 3 ghi
sess2 | 1 abc 2 def 3 ghi
sess3 | 1 keep 2 this
sess4 | 1 abc 2 def 3 ghi 4 something else

Then use a where combined!="1 abc 2 def 3 ghi" to filter away the automatically generated logs in their entirety. The solution has to scale to at least 3..n consecutive events, and the combined logText can be rather long, to the tune of >1000 characters each. If this is a good approach, how can I go about doing it? If not, are there any better and computationally efficient ways to achieve this? Thanks in advance!
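
The combined-signature idea maps naturally onto stats list() and mvjoin(); a sketch, assuming sessionID, logNo, and logText are extracted fields:

base search
| sort 0 sessionID, logNo
| eval entry=logNo." ".logText
| stats list(entry) as entries by sessionID
| eval combined=mvjoin(entries, " ")
| where combined!="1 abc 2 def 3 ghi"

Two caveats: list() keeps at most 100 values per group, and very long combined strings cost memory, so for large sessions comparing a hash of the signature (e.g. md5(combined)) against a lookup of known automatic signatures may scale better.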