Channel: Questions in topic: "splunk-enterprise"

Will buckets created in a single-site cluster be replicated across sites after migrating to multisite?

Will buckets created in a single-site cluster be replicated across sites when the cluster is migrated to a multisite cluster? I briefly tested this, and the buckets did not appear to be replicated. Can anyone confirm whether that is the expected behavior?
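If I remember the multisite migration behavior correctly, buckets created before the migration keep their single-site replication policy by default and are not replicated across sites. A minimal server.conf sketch for the cluster master, assuming the constrain_singlesite_buckets setting applies to your version (verify against the multisite migration docs before changing anything):

[clustering]
mode = master
multisite = true
constrain_singlesite_buckets = false

With the setting relaxed, legacy single-site buckets should follow the new site_replication_factor instead of staying pinned to their original site.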

I am using the Microsoft Log Analytics Add-on, but data stops coming into Splunk after firewall rules are modified in OMS.

Are there any specific ports or permissions this add-on requires, so that I can inform the team and make sure the data flow is not interrupted when modifications are made? I have configured the Microsoft Log Analytics Add-on on a heavy forwarder, which forwards the received logs to the indexer. There is no clustering. I would like to hear from @jkat54 and @dpanych: any ideas why this keeps happening? I used

index=_internal log_level=err* OR log_level=warn loganalytics*

The latest event I am getting from this query is:

09-05-2018 18:24:24.168 +0200 ERROR ExecProcessor - message from "python F:\Splunk\etc\apps\TA-ms-loganalytics\bin\log_analytics.py" ERROR('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
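A small SPL sketch for spotting when the feed stops, assuming the add-on writes to a known index and sourcetype (both names below are placeholders, not the add-on's actual defaults):

| tstats latest(_time) as last_seen where index=oms_index sourcetype=ms:loganalytics by sourcetype
| eval minutes_since_last_event = round((now() - last_seen) / 60, 1)

Alerting when minutes_since_last_event exceeds the expected polling interval gives the firewall team a fast feedback loop after rule changes.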

Best practice to index Oracle database audit logs (.xml)

We are trying to index Oracle database audit logs, which are in .xml format, in Splunk. The docs suggest this can be done through the Splunk universal forwarder or DB Connect, but we're unable to see any templates in DB Connect to query audit logs; we can see templates only for unified audit logs. We are using DB Connect 3.1.3 with the Oracle add-on 3.7.0. Is it possible to fetch these logs through DB Connect, or should we be using universal forwarders?
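For the universal forwarder route, a minimal inputs.conf sketch; the adump path and the sourcetype name are assumptions for illustration, not values taken from the Oracle add-on:

[monitor:///u01/app/oracle/admin/ORCL/adump/*.xml]
sourcetype = oracle:audit:xml
index = oracle_audit
disabled = false

Monitoring the audit file directory directly avoids the DB Connect template gap entirely, at the cost of needing a forwarder on the database host.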

Is there a way to make every alert global by default?

Hi guys, we use alerts all the time and I always want my entire team to be able to see every alert, which is why I get annoyed changing every permission one by one. Is there any way to make alerts global by default, or to change everything to global at once? Thanks
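One approach worth testing in a dev app first: a metadata/local.meta sketch that exports all saved searches in an app globally (alerts are saved searches under the hood); whether this fits your permission model is an assumption:

[savedsearches]
export = system
access = read : [ * ], write : [ admin, power ]

Note that alerts saved through the UI get their own per-object stanzas in local.meta, and those more specific stanzas override the blanket one, so existing alerts may still need a one-time cleanup.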

Help to combine multiple queries into one

Hello, I have multiple queries with small differences; is it possible to combine them? Here is an example:

index=some_index sourcetype=some_source host=*host* (span_name=SomeSpanName1) | eval duration=span_duration/1000 | stats p99(duration)

index=some_index sourcetype=some_source host=*host* (span_name=SomeSpanName2 OR span_name=SomeSpanName3) | eval duration=span_duration/1000 | stats p99(duration)

index=some_index sourcetype=some_source host=*host* (span_name=SomeSpanName4) | eval duration=span_duration/1000 | stats p99(duration)

The result of each query is only one column `p99(duration)` with a value. Is it possible to combine these queries and get a result with three columns with different names? (I need to know which column corresponds to which condition.)
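A sketch of one way to do this with eval-filtered stats, so a single pass over the data produces three named columns; the output column names are placeholders:

index=some_index sourcetype=some_source host=*host* (span_name=SomeSpanName1 OR span_name=SomeSpanName2 OR span_name=SomeSpanName3 OR span_name=SomeSpanName4)
| eval duration = span_duration / 1000
| stats p99(eval(if(span_name=="SomeSpanName1", duration, null()))) as p99_name1,
        p99(eval(if(span_name=="SomeSpanName2" OR span_name=="SomeSpanName3", duration, null()))) as p99_names2_3,
        p99(eval(if(span_name=="SomeSpanName4", duration, null()))) as p99_name4

Each p99(eval(...)) aggregates only the events matching its condition, which preserves the mapping from column to condition.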

Difference between two date values with subtraction of the weekend days

Hi All, I need to find the difference between these two dates with the removal of the weekends. I have two date value fields, ASSIGNED_DT = 2018-08-30 15:33:51 and ANSWER_DT = 2018-09-03 16:59:48.

| makeresults
| eval ASSIGNED_DT = "2018-08-22 15:33:51"
| eval ANSWER_DT = "2018-09-03 16:59:48"
| eval Assigned_Time = strptime(ASSIGNED_DT, "%Y-%m-%d %H:%M:%S")
| eval Answer_Time = strptime(ANSWER_DT, "%Y-%m-%d %H:%M:%S")
| eval start = relative_time(Assigned_Time, "@d")
| eval end = relative_time(Answer_Time, "@d")
| eval Date = mvrange(start, end+86400, 86400)
| convert ctime(Date) timeformat="%+"
| eval WeekendDays = mvcount(mvfilter(match(Date, "(Sun|Sat).*")))
| eval diff = tostring((Answer_Time - Assigned_Time), "duration")
| table ASSIGNED_DT, ANSWER_DT, diff, WeekendDays

Everything is working fine and the results are:

ASSIGNED_DT: 2018-08-22 15:33:51, ANSWER_DT: 2018-09-03 16:59:48, diff: 12+01:25:57.000000, WeekendDays: 4

Now I just need help with:
1. removing the WeekendDays from the diff, and
2. converting diff minus WeekendDays to a number of days in decimal: for example, here it should be 8.01 days, or 8 days 1 hour 25 mins only.

Thanks for your help
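A sketch that extends the search above, assuming WeekendDays may be null when no weekend day falls in the range:

| eval diff_secs = (Answer_Time - Assigned_Time) - (coalesce(WeekendDays, 0) * 86400)
| eval diff_days = round(diff_secs / 86400, 2)
| eval diff_readable = tostring(diff_secs, "duration")

For the example above, diff_secs is the 12-day span minus 4*86400 seconds, diff_days comes out near 8.06, and diff_readable renders as 8+01:25:57.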

'if like' help

Hi, I'm struggling to get this to work. I'm trying to create a new field called 'severity' with specific values returned should a particular file extension be detected. Two example values would be as follows:

bigdog.exe
bigcat.bat

with the above values found within the field 'threat'. The logic I'm trying is below, the idea being that the .bat file will return a severity of high, and the .exe a severity of low. But when trying this, both come out as low.

| eval severity=if(like(threat, "*.bat"), "high", "low")

I suspect the problem is something to do with the use of the asterisk, which is needed as the values change with the exception of the file extension, but I can't work out how to fix it. Any ideas? Thanks
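The likely culprit: like() uses SQL-style wildcards (% for any characters, _ for exactly one), not *, so "*.bat" never matches and the else branch always fires. Two sketches:

| eval severity=if(like(threat, "%.bat"), "high", "low")

or, with a regex anchored to the end of the value so only a true .bat extension matches:

| eval severity=if(match(threat, "\.bat$"), "high", "low")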

Regex in search to split each line of a log event into separate events

Hi Splunk Gurus - I am new to Splunk and need your help with the below. This is how the events are getting into Splunk; every event has multiple lines. I need a rex or regex to split every line into an individual event.

15:44:26,951 INFO ALPSessionListener:21 - Session destroyed
15:44:27,437 INFO HomeController:121 - mapping -----/home
15:44:27,451 INFO AccessCardUtility:98 - In query payment method {https://alp.doc.company.com/doc/ccpwebservice/ServiceWeb.svc}
15:44:27,586 INFO HomeController:497 - User roles ----[Supervisor]
15:44:27,617 INFO ALPFilter:49 - User name:{InitialLogin}, Session Id:{x71d4QsDMRp0tpUAYH-LnEn-KRPdDPmsbgQpBLi7}, Login Date Time:{2018-09-05T15:44:27.617}, Resource accessing:{http://alp.doc.company.com/doc/WEB-INF/layout/GenericLayout.jsp}, Time Taken:{181ms}
15:44:27,904 INFO ALPInterceptor:70 - User has access to the URL/alp/ReconcileCashDrawer:{true}
15:44:27,904 INFO ReconcileCashDrawerController:121 - mapping -----/ReconcileCashDrawer
15:44:27,932 INFO ALPFilter:49 - User name:{JP19630}, Session Id:{fVrI3lxJKtjsd-IsoEr7An-14xrq}, Login Date Time:{2018-09-05T15:44:27.932}, Resource accessing:{http://alp.doc.company.com/doc/WEB-INF/layout/GenericLayout.jsp}, Time Taken:{28ms}
15:44:28,152 INFO ALPSessionListener:15 - ALP session created
15:44:28,207 INFO HandleDlsPaymentController:634 - payment response is ---org.datacontract.schemas._2004._07.Common_Payment_Common.GetPaymentInfoResponse@468bfb00
15:44:28,214 INFO RecPaymentController:71 - XML recieved {
15:44:28,214 INFO XMLUtility:51 - IN XML UTILITY
15:44:28,234 INFO ALPFilter:49 - User name:{InitialLogin}, Session Id:{gg6KJGawjksfdklafklto9ju8aQTzvaP2PLRum}, Login Date Time:{2018-09-05T15:44:28.234}, Resource accessing:{http://alp.doc.company.com/doc/settleSuccessful}, Time Taken:{783ms}
15:44:28,266 INFO ALPProductLlpsDAO:130 - number of products passed are {2}
15:44:28,346 INFO ALPSessionListener:15 - ALP session created
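A search-time sketch that splits each timestamped line into its own result row (the HH:MM:SS,mmm pattern is taken from the sample above):

... your base search ...
| rex max_match=0 "(?m)^(?<line>\d{2}:\d{2}:\d{2},\d{3}\s+.*)$"
| mvexpand line
| table line

If you control the ingest, fixing the line breaking at index time is usually cleaner; a props.conf sketch for the sourcetype (the stanza name is a placeholder):

[alp_app_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2},\d{3})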

Splunk eval of a row using the previous row's field

Hello Splunkers, I need to eval a field using the previous row's value together with the current row. For example:

field1   field2   field3 (result)
1        1        field1 + field2
3        4        (previous field3 + current field1) - current field2
7        2        (previous field3 + current field1) - current field2

Numeric example:

field1   field2   field3
1        1        (1 + 1) = 2
3        4        (2 + 3) - 4 = 1
7        2        (1 + 7) - 2 = 6

Thanks!!!
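A sketch using accum: since every row after the first adds field1 - field2 to the previous field3, the whole thing is a cumulative sum of per-row deltas, where the first row's delta is field1 + field2 per the example above:

... your base search ...
| streamstats count as row
| eval delta = if(row == 1, field1 + field2, field1 - field2)
| accum delta as field3
| fields - row, delta

On the numeric example this yields field3 = 2, 1, 6.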

How to investigate DL and Windows group membership

Team, if we have Windows events and AD is synced with Splunk, how can I search/investigate who modified a DL, or who was added to an AD group and who added them? Is there a query for this, or how else can I investigate this? I'd appreciate any help. Ambris.
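A sketch against Windows Security events, assuming the standard group-membership EventCodes (4728/4732/4756 for members added to security groups; distribution lists log their own counterparts) and field names that vary by Windows TA version:

index=wineventlog sourcetype=WinEventLog:Security (EventCode=4728 OR EventCode=4732 OR EventCode=4756)
| table _time, EventCode, Group_Name, Member_Name, src_user

Here src_user (the account that made the change) and Group_Name/Member_Name are assumptions to verify against the fields your TA actually extracts.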

Office 365 logs are being indexed intermittently. Could you please help us out?

The Splunk version we use is 7.0.5 and the installed add-on is version 1.0.1. This has worked in the past.

How to search a lookup table and return the unmatched term?

I am trying to search for allowed URLs (passthrough) that are not in my uploaded CSV lookup called url. The CSV has only one column, with a header called hostname.

`fgt_webfilter` profile=* status=passthrough NOT [ inputlookup url ]

I am not getting the correct output.
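Two likely issues: inputlookup is a generating command, so inside a subsearch it needs a leading pipe, and the NOT only excludes events if the lookup's field name matches a field on the events. A sketch:

`fgt_webfilter` profile=* status=passthrough NOT [| inputlookup url | fields hostname ]

If the events carry the hostname under a different field name, add a rename in the subsearch (the target name below is a placeholder): | rename hostname as web_hostname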

How to change the colors in the Timeline custom visualization

index="_internal" | timechart span=15m count(name) as name | eval Status=if(name>1500, "RED", if(name>100,"AMBER","GREEN")) | eval user="NA" | table _time, user, Status This is a sample query i used, need to show green color circle when status is green and red for red and so on. Any help? Thanks, Sunith.

How to rebuild the Timeline custom visualization app on Windows?

I need to change the date format for the timeline graph and found a solution. I updated the two JS files for the app accordingly and restarted Splunk, but the change was not reflected in the app. From a solution posted on Splunk Answers, I learned the app needs to be rebuilt for changes to take effect. But the real question is: how do I rebuild the app on Windows? On Linux we can build the visualization by running "$ npm run build"; how can we do the same on Windows?
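npm runs the same way on Windows as on Linux; given a Node.js installation, a sketch from cmd or PowerShell (the app path is an assumption, so adjust it to wherever the visualization's package.json lives):

cd %SPLUNK_HOME%\etc\apps\timeline_app\appserver\static\visualizations\timeline
npm install
npm run build

The "$" in the Linux instructions is just the shell prompt, not part of the command.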

How do I return values that don't match a column in a lookup table?

I am trying to search for URLs that are not in my allowed-list lookup CSV. My CSV file is named url and has one column with a header called hostname. Below is the search, which gives the wrong output:

`fgt_webfilter` profile=* status=passthrough NOT [ inputlookup url ]

Indexer fails on startup

When I try to restart one of my indexers after an OS upgrade, I see the following messages. My two other indexers are up and running. How do I fix this? I found one article where they talk about fixing the offending buckets, but it doesn't say how, and I am not positive this is the same issue.

09-06-2018 06:37:17.576 -0400 ERROR DatabaseDirectoryManager - idx=main bid=main~392~F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E bucket=392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E Detected directory manually copied into its database, causing id conflicts [path1='/opt/splunk/var/lib/splunk/defaultdb/db/rb_1535649215_1535592644_392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E' path2='/opt/splunk/var/lib/splunk/defaultdb/db/392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E'].
09-06-2018 06:37:17.579 -0400 ERROR IndexerService - Error intializing IndexerService: idx=main bid=main~392~F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E bucket=392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E Detected directory manually copied into its database, causing id conflicts [path1='/opt/splunk/var/lib/splunk/defaultdb/db/rb_1535649215_1535592644_392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E' path2='/opt/splunk/var/lib/splunk/defaultdb/db/392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E'].
09-06-2018 06:37:17.584 -0400 FATAL IndexerService - One or more indexes could not be initialized. Cannot disable indexes on a clustering slave.
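The error says the same bucket ID (392) exists twice: once as a replicated copy (the rb_ prefix) and once without it. A commonly suggested workaround, sketched with heavy caveats: this assumes the rb_ directory is the stray copy, moves rather than deletes it, and should only be attempted after a backup (and ideally confirmation from Splunk support):

# with splunk stopped on this indexer
mkdir -p /tmp/bucket_conflict_backup
mv /opt/splunk/var/lib/splunk/defaultdb/db/rb_1535649215_1535592644_392_F18EA0F4-48F1-4D8C-8209-5B40F0B66E1E /tmp/bucket_conflict_backup/
/opt/splunk/bin/splunk start

In a cluster, the master can re-replicate the bucket afterwards if the copy count drops below the replication factor.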

Modifying the data values before indexing

Hi All, I want to remove the extra white space (runs of two or more spaces) from event values at the heavy forwarder before ingesting to the indexer. Can anyone guide me with this change so that I can fix the issue?

**Current State:** field1="xxxxxx", field2="xxx ", field3="xxx ", field4="x", field5="xxxx ", field6="xxx ", field7="xxx ", field8="xxxx ", field9="xxxxx ", field10="xxxxx"

**Required State:** field1="xxxxxx", field2="xxx", field3="xxx", field4="x", field5="xxxx", field6="xxx", field7="xxx", field8="xxxx", field9="xxxxx", field10="xxxxx"
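A props.conf sketch for the heavy forwarder, using SEDCMD to strip whitespace appearing just before a closing quote, which matches the required state above (the sourcetype name is a placeholder; test on sample data, since this also touches any legitimately space-padded values):

[your_sourcetype]
SEDCMD-strip_trailing_spaces = s/\s+"/"/g

SEDCMD runs at parse time, so it must live on the first heavy parsing tier the data passes through, which in this setup is the heavy forwarder.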

savedsearch not working, getting an error

The index query runs as a base query, and I want to append a saved search to the base query; the saved search is just a filtering query. Since I have many panels drawing from the same index, I tried to use it like this:

index="******" host="****" source="Perfmon" counter="Available MBytes" sourcetype="Available_Memory" | savedsearch Prem_test

It throws the error below:

Error in 'SearchParser': The savedsearch command can only be used as the first command on a search

The savedsearch query is:

| eval Value=round(Value/1024,1) | timechart span=1h eval(round(avg(Value),2)) As "Available"

Please give suggestions if any are available.
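As the error says, savedsearch can only start a search. Since the saved search here is just a pipeline fragment, a search macro is the usual fit; a macros.conf sketch (the macro name is a placeholder):

[prem_test_filter]
definition = eval Value=round(Value/1024,1) | timechart span=1h eval(round(avg(Value),2)) As "Available"

Then each panel can do:

index="******" host="****" source="Perfmon" counter="Available MBytes" sourcetype="Available_Memory" | `prem_test_filter`

The macro expands inline, so it behaves exactly like pasting the filter after the pipe.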

How to update certain time fields of a lookup table without overwriting old table entries?

So I put together a search not too long ago, with help from the community on here, that runs hourly to update a lookup table I have. The table holds a list of suspicious IPs with a field for the last date seen. I previously had a search doing exactly what I wanted, updating that field with the most recent date seen, but for some reason it is no longer working and I can't figure out why. Instead of updating to the latest date, it shows a date from almost a month ago, despite the IP still being seen as recently as today. Here is the layout of the search I used last time:

sourcetype=blah [| inputlookup suspect_list.csv | table Susp_IP | rename Susp_IP as src_ip ]
| search Ticket_num=*
| rename src_ip as Susp_IP
| eval date_last_seen=_time
| table Susp_IP, Ticket_num, date_last_seen
| inputlookup append=t suspect_list.csv
| dedup Susp_IP
| outputlookup suspect_list.csv

Essentially it is supposed to input the lookup, search on those IPs and update the date-last-seen field, and then input the lookup again so that old entries are kept (rather than removed) in the event those IPs haven't shown up in the last hour. Then it combines them and outputs everything back to the same lookup.
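One sketch of a fix: dedup keeps the first row it encounters, so if the appended old lookup rows happen to land ahead of the fresh rows, the stale date wins. Sorting by date_last_seen before the dedup makes the newest entry win deterministically (this assumes date_last_seen is stored as epoch time so it sorts numerically):

...
| inputlookup append=t suspect_list.csv
| sort 0 - date_last_seen
| dedup Susp_IP
| outputlookup suspect_list.csv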

SPL to see all indexes and retention

Technically, this is two questions in one, with the goal of solving a single problem: I need an SPL query that returns *ALL* the indexes I can search and the associated retention time for each. Here is how far I've gotten:

| rest /services/data/indexes
| eval yr = floor(frozenTimePeriodInSecs/86400/365)
| eval dy = (frozenTimePeriodInSecs/86400) % 365
| eval ret = yr . " years, " . dy . " days"
| stats list(splunk_server) list(frozenTimePeriodInSecs) list(ret) by title

The query above is very, very close, but it only returns a subset of the indexes: technically, it returns only 32 index names, and I have many more than that. (Note: starting with "| rest /services/admin/indexes ..." makes no difference either.) My second query is this:

| eventcount summarize=false index=* index=_* | dedup index | fields index

That returns all 250+ index names, but I can't seem to find any way to get back to the retention period. So my two questions are:
1) Why is the rest command only pulling a subset (<15%) of the indexes that are returned by the eventcount query?
2) How can I get a single SPL query that shows all 250+ indexes and their associated retention settings?
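A sketch worth trying; the two tweaks (splunk_server=* to query every search peer rather than just the search head, and count=0 to defeat any endpoint paging) are the usual suspects when | rest returns fewer indexes than exist, though whether they explain this exact gap is an assumption:

| rest /services/data/indexes count=0 splunk_server=*
| eval yr = floor(frozenTimePeriodInSecs/86400/365)
| eval dy = (frozenTimePeriodInSecs/86400) % 365
| eval ret = yr . " years, " . dy . " days"
| stats values(splunk_server) as servers, max(frozenTimePeriodInSecs) as frozenSecs, values(ret) as retention by title

Indexes defined only on the indexers never show up without splunk_server=*, which is a common reason the REST list comes back much shorter than the eventcount list.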

