Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Lost date_* field from my data after porting my environment to production

I developed my Splunk environment in a lab, complete with reports to find non-business-hour logins. It was working fine. I ported my environment to production and blew away the data in my indexes to start with a clean copy. It has been running fine for several months, but I just noticed that none of my non-business-hour reports are catching events. My SPL uses: (date_hour>18 OR date_hour<7) OR (date_wday=Sunday OR date_wday=Saturday). It is the same report that tested successfully in my lab. Why are the date_* fields no longer working? Could this be a configuration file on my indexers? What creates those fields? -Jeff
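For context, the date_* fields are created at parse time only when Splunk extracts a timestamp from the raw event text; if timestamps come from another source (for example, DATETIME_CONFIG = CURRENT, or a forwarder-supplied time), they can be absent. A more robust pattern is to derive the hour and weekday from _time instead (a sketch; index and sourcetype are placeholders):

```
index=your_index sourcetype=your_sourcetype
| eval hour=tonumber(strftime(_time, "%H")), wday=strftime(_time, "%A")
| where hour>18 OR hour<7 OR wday="Saturday" OR wday="Sunday"
```

Unlike date_hour, strftime(_time, ...) works for every event regardless of how the timestamp was assigned, though note it uses the search head's timezone interpretation of _time.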

Forwarder management: deployment server vs Chef

I have very little experience with Chef. I have a client with very high security requirements, and I was wondering if anyone in the community can explain the costs and benefits of using the deployment server vs. Chef for forwarder management. I'm assuming the customer would want to use client certificates with the deployment server. Thanks.

Help with extraction of field created at index time?

All, I'm testing an indexed-time field extraction in a test environment. It SEEMS to have worked, but randomly the field I am extracting (pool) just disappears from search results. That is, if I just search, pool is extracted the 400 or so times I expect; but once I try to USE that field, it's simply missing except for one host. The other 400 hosts in the test setup are not getting the field extracted.

The heavy forwarder has this:

# transforms.conf
[pool_transform]
REGEX = slcs\d\d(...)\d\d\d
FORMAT = pool::"$1"
WRITE_META = true

# props.conf
[host::*]
TRANSFORMS-indextimepooltransform = pool_transform

# fields.conf
[pool]
INDEXED=true

The search head has this:

[pool]
INDEXED=true

The indexer has this:

# fields.conf
[pool]
INDEXED=true

Any idea why the field would sorta... disappear randomly?
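One way to narrow a problem like this down (a sketch; the index name is a placeholder): tstats reads only indexed fields, so comparing it against a plain search separates "not indexed" from "not extracted":

```
| tstats count where index=your_index by pool
```

If hosts show a pool value in a plain `index=your_index | stats count by pool` but are missing from the tstats output, the field is reaching search results via some search-time extraction rather than being written to the index at parse time, which usually means the events for those hosts are being parsed somewhere that lacks the transforms.conf stanza.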

Splunk App for Unix and Linux: Help creating a dashboard that shows servers using 20% more CPU than previous week

All, I have 400+ servers with Splunk for Nix installed and collecting metrics to index=os. What I'd like to do is create a dashboard that determines which servers are showing 20% more CPU usage than they were last week. The final result should be just a table of servers whose CPU usage has increased 20% or more compared with the previous week. I really have no idea where to start. Any ideas?
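A starting-point sketch, assuming the *nix add-on's cpu sourcetype with a pctIdle field (adjust the sourcetype and field names to what your deployment actually collects): bucket the last 14 days into two weeks, average CPU per host per week, then keep hosts where the current week is at least 1.2x the previous week.

```
index=os sourcetype=cpu earliest=-14d@d latest=@d
| eval week=if(_time >= relative_time(now(), "-7d@d"), "current", "previous")
| eval cpu_used=100 - pctIdle
| stats avg(cpu_used) as avg_cpu by host week
| xyseries host week avg_cpu
| where current >= previous * 1.2
| table host previous current
```

The xyseries step pivots the two week rows into previous/current columns so the 20% comparison becomes a simple where clause.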

How do I configure firewall when forwarding from on-premise to Cloud?

I am building firewall policies for an on-premises Splunk Enterprise deployment and need to forward some data to a Splunk Cloud instance. Which communication ports are used?

Trying to sum values of fields with similar names

All, I have data where the field names and values are:

20A1  20A2  20A3  20B1  20B2  20B3  20C1  20C2  20C3
1     3     4     5     5     5     6     6     6

I am trying to sum the fields 20A1, 20A2, 20A3 to get the value 8 as 20A; 20B1, 20B2, 20B3 to get 15 as 20B; and 20C1, 20C2, 20C3 to get 18 as 20C.

Final result:

20A  20B  20C
8    15   18

Thanks, Stephen Robinson
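When the field names are known up front, a direct eval works (a sketch; note that field names beginning with a digit must be single-quoted when referenced on the right-hand side of eval):

```
... | eval 20A='20A1'+'20A2'+'20A3',
       20B='20B1'+'20B2'+'20B3',
       20C='20C1'+'20C2'+'20C3'
| table 20A 20B 20C
```

With the sample values above this should yield 20A=8, 20B=15, 20C=18. If the set of field names is large or not known in advance, the foreach command with its <<FIELD>> token is the usual way to generalize this.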

Splunk Add-on for Oracle Database: What role/permissions are required from Oracle dba to use this add-on?

When creating an identity in Splunk DB Connect to be used with the Splunk Add-on for Oracle Database, what roles/permissions within Oracle are required for the Oracle user provided? I need to let my Oracle DBAs know how to configure the user I am asking for.

Is my configuration wrong? Values are no longer showing up for this field.

I've been struggling with this all day. This search:

index=blah sourcetype=blah | rex max_match=0 "Recipient:\s(?<orig_recipient>\S+)" | eval recipient_count = mvcount(orig_recipient)

yielded multiple values of orig_recipient if they existed, and a recipient count. Initially I had this in my props.conf and it worked:

EXTRACT-orig_recipient = Recipient:\s(?<orig_recipient>\S+)

but when I realized I needed something that behaved like max_match, I moved the regex to transforms.conf:

[get_recipients]
REGEX = Recipient:\s(?<orig_recipient>\S+)
MV_ADD = true
DEST_KEY = _raw

and put this in my props.conf:

[blah]
TRANSFORMS-recipients = get_recipients
EVAL-recipient_count = mvcount(orig_recipient)

The app is on my search head and heavy forwarder, and app permissions are set to global. orig_recipient is not showing up at all. I can see get_recipients in the admin GUI -> Field Transformations. Help? Suggestions? Thanks!
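For reference, a search-time multivalue extraction is normally wired with REPORT- rather than TRANSFORMS- (TRANSFORMS- stanzas run at index time), and without DEST_KEY, which is an index-time setting. A sketch, reusing the names from the question:

```
# transforms.conf
[get_recipients]
REGEX = Recipient:\s(?<orig_recipient>\S+)
MV_ADD = true

# props.conf
[blah]
REPORT-recipients = get_recipients
EVAL-recipient_count = mvcount(orig_recipient)
```

Search-time props/transforms like these only need to live on the search head; index-time TRANSFORMS- stanzas belong on whichever instance parses the data (indexer or heavy forwarder).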

Logs from rsyslog server stopped indexing

My setup is FW, WAF, and web-proxy logs being pushed to my rsyslog forwarder, which has a UF installed to push to my indexers. The logs that were coming from the rsyslog server stopped mysteriously around 3am a few nights back, but the UF installed on that server is still sending its metrics logs, just no firewall logs. I can't figure out what the issue is. What's even weirder is that the logs didn't all stop at one time but over the course of a few hours; they had been coming in consistently for a few weeks, and this new deployment had been running for about 4-5 weeks. There was a sharp increase in logs the day they stopped, and after that the logging levels dropped to almost none, with only the UF metrics getting indexed.

• Host OS: Red Hat Linux 7.3
• Syslog software used: rsyslogd 7.4.7
• Splunk software used: Splunk Universal Forwarder 6.6.3 for Linux
• Configuration changes to get syslog data from sources were made in /etc/rsyslog.d/rsyslog-splunk.conf.
• Log rotation for syslog data was configured in /etc/logrotate.d/rsyslog-splunk

Any ideas? ![alt text][1] [1]: /storage/temp/216783-capture.png

How to figure out which lookup .csv file a certain index is using?

In Splunk, how do I figure out which lookup .csv file a certain index is using? In other words, how do I find which index is using a certain lookup file?
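Strictly speaking, indexes don't reference lookups; saved searches, dashboards, and automatic lookups do. One way to hunt for usage is to query the REST API for knowledge objects whose definition mentions the file (a sketch; my_lookup.csv is a placeholder name):

```
| rest /servicesNS/-/-/saved/searches
| search search="*my_lookup.csv*"
| table title eai:acl.app search
```

Automatic lookups bound to a sourcetype can be listed the same way via the /servicesNS/-/-/data/props/lookups endpoint, which shows which lookup definition is applied and in which app.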

How to subset top N records from the number generated from eventstats

Hi Splunk friends, I am new to the Splunk community and currently facing a question. I have the table below, which was generated from some raw log-line data. stats2 is the aggregated sum of stat1 grouped by ID (via eventstats):

ID  stat1  stats2 (eventstats sum of stat1 by ID)
1   1      6
1   2      6
1   3      6
2   4      9
2   5      9
3   6      21
3   7      21
3   8      21
4   9      10
4   0      10
4   1      10

What I am looking for is the subset of this table containing only the top N=2 IDs in terms of stats2, for example:

ID  stat1  stats2
3   6      21
3   7      21
3   8      21
4   9      10
4   0      10
4   1      10

I tried several methods that all failed, and I do not want to use a join statement, which is not efficient in Splunk. Thanks so much for the help. Jay
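One join-free pattern (a sketch, assuming the eventstats step described above): sort descending by the aggregate, use streamstats to count how many distinct IDs have been seen so far, and keep rows while that count is within N:

```
... | eventstats sum(stat1) as stats2 by ID
| sort 0 - stats2 ID
| streamstats dc(ID) as id_rank
| where id_rank <= 2
| fields - id_rank
```

With the sample data this keeps every row for ID 3 (stats2=21) and ID 4 (stats2=10). The `sort 0` avoids the default 10,000-row sort limit.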

Is there a rest call to get license pools and members of those pools?

Hi, is there a REST or search call that will show me the license pool names and the members (real names, not GUIDs) of those pools?
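The licenser endpoints on the license master expose this (a sketch; run against the license master, and note that exact field names can vary by Splunk version):

```
| rest /services/licenser/pools splunk_server=local
| table title description quota slaves
```

The slaves field holds GUIDs. The /services/licenser/slaves endpoint maps each GUID to its server name (label), so a second call can produce pool membership by name:

```
| rest /services/licenser/slaves splunk_server=local
| mvexpand pool_ids
| rename pool_ids as pool, label as member
| table pool member
```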

Splunk search result to csv format

Hello, we have a requirement to feed Splunk search/dashboard result data, in CSV format, into another tool. There should not be any manual process: the search should run at a scheduled time and produce a result to be picked up by the other tool. Options:

1) The tool can pick up the csv file generated from the search using the outputcsv command, but the docs say this is not supported in a distributed environment. If another node runs the search, the file is written locally to that node, so the tool would need to check all the nodes for the file. This could be done with a script that checks the timestamp of the generated file and pulls it.

2) Could the query write the data into a separate index in csv format, so the tool can use the Splunk API to pull the data?

Are there any other ways to implement this, and which might be the best option? Thanks, Hemendra
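One common alternative (a sketch; the search and file name are placeholders): end the scheduled search with outputlookup instead of outputcsv. Lookup files are written on the search head under $SPLUNK_HOME/etc/apps/<app>/lookups/, giving the external tool a single predictable path regardless of which indexers did the work:

```
index=blah sourcetype=blah
| stats count by field1 field2
| outputlookup feed_for_tool.csv
```

The tool can then pull that one file from the search head, or retrieve the same rows over the REST API with an inputlookup search.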

Is there any workaround in Splunk to make a star to be considered as constant instead of wild card?

I have some fields as follows:

sql="Select * from & ABC"
sql="Select * from xyz.ABC"
sql="Select * from gh2_ABC"
sql="Select * from 34,rABC"
sql="Select * from xyz.gfr"

Now I am trying to create an event type as follows:

eventtype name: test
sourcetype="web" sql="Select * from *ABC"

I want the first star to be treated as a literal constant and the second star as a wildcard. Is there any workaround in Splunk to make a star be treated as a constant instead of a wildcard?
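In the search string itself every * is a wildcard, but a post-filter with the regex command can enforce a literal star by escaping it (a sketch based on the event type above):

```
sourcetype="web" sql="Select * from *ABC"
| regex sql="^Select \* from .*ABC$"
```

The initial search still uses both stars as wildcards to narrow the event set cheaply; the regex then requires the first star to be a literal * while leaving the part before ABC free. With the sample values, this matches the first four events but not xyz.gfr.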

How to display total count of ssh login failures in dashboard?

I have this search that looks for ssh login failures in real time. How can I display the total count of multiple-user logon failures as a single-value visualization in the dashboard? Currently it only shows the name of the user.

index=os process=sshd eventtype=failed_login | stats count by user | search count >= 3

![alt text][1] ![alt text][2] [1]: /storage/temp/217815-capture.png [2]: /storage/temp/217816-capture2.png
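Appending one more stats collapses the per-user counts into a single number, which the single value visualization will render (a sketch built on the search above):

```
index=os process=sshd eventtype=failed_login
| stats count by user
| where count >= 3
| stats sum(count) as total_failures
```

The first stats keeps the per-user threshold logic intact; the final stats sums only the users that passed the >= 3 filter.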

How to show connectivity status icons, right below App's navigation menu bar?

I have a requirement to show multiple DB connectivity statuses right below the app's navigation bar. These statuses are returned from a simple search query (inputlookup). I also need this to be visible throughout the app, irrespective of which dashboard I navigate to. Thanks in advance!

How does Splunk calculate license usage for packet collection?

Hi all, I want to know how Splunk calculates license usage for packet collection. Currently we set up monitor sessions on Cisco switches and mirror the VLAN traffic we are interested in to packet collectors. For example, I have one packet-capture device with one NIC capturing packets; below is 24 hours of collected packets:

EM2: 8749745734122 bytes = 1018 GB

Will all of that 1018 GB be counted against license usage? BR, Nelson

How can I pass field values from one search to my subsearch or to another search?

I want to do something like this, in a single search: take the aa values produced by one search and use them in another, e.g.

index=xx sourcetype=yy | eval ... | table aa [| search index=xx1 sourcetype=yy1 yy=<aa values> | table yy zz ff]
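The usual SPL pattern is the reverse nesting: the subsearch runs first, and its results are substituted into the outer search as field=value terms (a sketch using the index and field names from the question):

```
index=xx1 sourcetype=yy1
    [ search index=xx sourcetype=yy
      | fields aa
      | rename aa as yy
      | format ]
| table yy zz ff
```

Each value of aa from the inner search becomes a yy=<value> term in the outer search; the rename makes the subsearch emit the field name the outer search filters on. Note the default subsearch limits (result count and runtime) apply.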

Splunk Query optimization for drop and Spike

Hello everyone, how can I optimize this query? It is taking too much time. I am creating 4 windows in a day and getting the average number of events over 14 days for each window; this becomes a baseline against which I can check for spikes or drops in events of 300%. Please suggest something:

index=* | stats sum(eval(date_hour>=0 AND date_hour<10)) as "L1", sum(eval(date_hour>=10 AND date_hour<12)) as "L2", sum(eval(date_hour>=12 AND date_hour<18)) as "L3", sum(eval(date_hour>=18 AND date_hour<24)) as "L4" by device_name | eval L1=round(L1/14,0) | eval L2=round(L2/14,0) | eval L3=round(L3/14,0) | eval L4=round(L4/14,0) | fillnull | outputlookup device_threshould_baseline.csv
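If device_name is an indexed field (or you can pivot on host instead), tstats counts events from the index metadata without reading raw events and is usually dramatically faster. A sketch that keeps the same four windows and 14-day averaging:

```
| tstats count where index=* earliest=-14d@d latest=@d by device_name _time span=1h
| eval date_hour=tonumber(strftime(_time, "%H"))
| eval window=case(date_hour<10, "L1", date_hour<12, "L2", date_hour<18, "L3", true(), "L4")
| stats sum(count) as events by device_name window
| eval events=round(events/14, 0)
| xyseries device_name window events
| fillnull
| outputlookup device_threshould_baseline.csv
```

The hourly tstats buckets are re-aggregated into the L1-L4 windows with case(), so the output shape matches the original lookup. Scoping index=* down to the indexes you actually need would speed this up further.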

Unable to read the Windows MSI status from the log file in Splunk

Hi, we are trying to create a dashboard in Splunk to show the status of an MSI installation, and we have configured the log file as an input in Splunk. However, everything from the source file is being displayed in Splunk except the two lines below. Can someone suggest what we can do about this?

MSI (s) (78:B8) [04:31:24:556]: Product: XXXXX -- Configuration completed successfully.
MSI (s) (78:B8) [04:31:24:556]: Windows Installer reconfigured the product. Product Name: XXXXX. Product Version: 15.0.6. Product Language: 1033.

P.S. I have removed the product name for security reasons.

