Channel: Questions in topic: "splunk-enterprise"

Removed users from LDAP authentication but didn't remove them from Splunk users

Hello, I see there is documentation on this topic, but it is unclear how it is supposed to work. I am using LDAP authentication for Splunk, and I removed a large group of users from my LDAP authentication step in a separate application. However, this didn't remove the users from my list of Splunk users. I then removed one specific user's folder in splunk/etc/users, and the user is still not removed from the Splunk user list in the UI. How is all of this supposed to work? If I remove a user from LDAP authentication in my separate app, will that user no longer be able to log in, even though they are still listed as a Splunk user in the Access Controls > Users list on the web? Thanks for the help!
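One way to check which users Splunk itself still reports, independent of the web UI page, is to query the users endpoint with the rest command. This is only a diagnostic sketch to run from the search bar on the affected instance; it can help show whether the lingering entries are local accounts or are still being returned via the LDAP strategy:

    | rest /services/authentication/users splunk_server=local
    | table title roles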

How to find the difference in a field total over time?

I have event data in the format below:

    Sep 15 2017 07:06:07 app=yahoo dataconsumed=50
    Sep 15 2017 08:16:07 app=skype dataconsumed=150
    Sep 14 2017 10:26:07 app=facebook dataconsumed=10
    Sep 14 2017 12:26:07 app=facebook dataconsumed=5
    Sep 13 2017 7:26:07 app=yahoo dataconsumed=10
    Sep 13 2017 9:26:07 app=skype dataconsumed=50
    Sep 12 2017 3:26:07 app=facebook dataconsumed=80
    Sep 12 2017 1:26:07 app=facebook dataconsumed=0

How should I perform the following tasks?

1. For any given time range, split the events into two halves by "day" or by "hours", i.e. if "All Time" is selected in the time picker, I should be able to split the events above into two halves by day (firsthalf = Sep 15-Sep 14 and secondhalf = Sep 13-Sep 12) or by hour (firsthalf = 48 hours, secondhalf = 48 hours).

2. After splitting the events into two halves, sum dataconsumed by app in each half, i.e.

        time         app        total_dataconsumed
        -------------------------------------------
        firsthalf    yahoo      50
                     skype      150
                     facebook   15
        secondhalf   yahoo      10
                     skype      50
                     facebook   80

3. Find the difference in total_dataconsumed by app between firsthalf and secondhalf, i.e. firsthalf - secondhalf:

        app        difference
        ----------------------
        yahoo      40
        skype      100
        facebook   -65

I am still stuck on step 1; I don't understand how to split the search events into halves/spans and apply stats to both halves.
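A minimal sketch of one possible approach to all three steps, assuming the events live in an index named appdata (a placeholder -- adjust the base search to yours): addinfo exposes the boundaries of the selected time range, so the midpoint can be computed and each event assigned to a half.

    index=appdata
    | addinfo
    | eval midpoint=info_min_time + (info_max_time - info_min_time) / 2
    | eval half=if(_time >= midpoint, "firsthalf", "secondhalf")
    | stats sum(dataconsumed) AS total_dataconsumed BY half app
    | chart values(total_dataconsumed) OVER app BY half
    | eval difference=firsthalf - secondhalf

Note that with "All Time" selected, info_min_time can be 0 (the epoch), so the midpoint is only meaningful when the time picker has a concrete earliest time.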

Index configuration sanity check

I am setting up a multisite cluster, and this is the first time I have messed with indexes away from the defaults. My goals:

- All data must be kept for 5 years.
- When a certain amount of data is in warm, roll it off to cold.
- When any data reaches the 5-year mark, delete it.

indexes.conf in my cluster config:

    #global
    #My data will stay in warm until it reaches 500GB
    homePath.maxDataSizeMB = 500000
    #My warm data will purge after 5.1 years from ingest date
    homePath.frozenTimePeriodInSeconds = 160833600
    #My data will stay in cold indefinitely
    coldPath.maxDataSizeMB = 0
    #My cold data will purge after 5.1 years from ingest date
    coldPath.frozenTimePeriodInSeconds = 160833600

    #define volumes
    [volume:hot]
    D:\splunk-hot
    [volume:cold]
    E:\splunk-cold

    #indexes
    [myindex]
    repFactor = auto
    homePath = volume:hot\myindex\db
    homePath = volume:cold\myindex\db
    thawedPath = $SPLUNK_DB\myindex\db

Does this conf accomplish my index management goals? I am a little uncertain about the "homePath.frozenTimePeriodInSeconds = 160833600" line -- does this really dump the data straight from warm to frozen?
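For comparison, here is a minimal indexes.conf sketch aimed at the same goals -- not a verified answer, and it assumes the same paths and index name as above. It gives each volume the path attribute it needs, references the volumes with forward slashes, and sets frozenTimePeriodInSeconds at the index level, since it is an index-wide setting rather than a per-path one:

    [volume:hot]
    path = D:\splunk-hot

    [volume:cold]
    path = E:\splunk-cold

    [myindex]
    repFactor = auto
    homePath = volume:hot/myindex/db
    coldPath = volume:cold/myindex/colddb
    thawedPath = $SPLUNK_DB/myindex/thaweddb
    # warm rolls to cold once homePath reaches roughly 500 GB
    homePath.maxDataSizeMB = 500000
    # buckets freeze (deleted, unless coldToFrozenDir is set) about 5.1 years after their newest event
    frozenTimePeriodInSeconds = 160833600

Note that frozen time is measured against the newest event in a bucket, not the ingest date.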

How do I install the Splunk Security Essentials app?

Hello. Is there any documentation on how to properly install and configure the Security Essentials app? My setup is on CentOS and consists of a heavy forwarder, an indexer, a search head, a deployment server, and some universal forwarders. No clustering or load balancing.

Not using timechart or bucket -- I want this query to display the trend daily, weekly, and monthly in a single query

Mongo collection data:

    Id: 1  StartDate: some date  EndDate: some date  X: Foo: "foo1"  Count: 10
    Id: 2  StartDate: some date  EndDate: some date  X: Foo: "foo2"  Count: 5
    Id: 3  StartDate: some date  EndDate: some date  X: bar: "bar1"  Count: 20

get percentage of eval case fields

I'm looking at a specific email recipient. I want to see the percentage of emails they receive from specific senders. I think my current query gets all the fields I need, but I'm having trouble breaking the results down into stats by month. Here is my current query:

    index=msexchange (recipients="user@domain.org") eventtype="smtp-mail"
    | eval sender_username=lower(sender_username)
    | eval valid_sender=case(
        sender_username=="mailer-daemon" OR sender_username=="postmaster", "Bounceback",
        sender_username!="mailer-daemon" OR sender_username!="postmaster", "Valid")
    | eval Month=strftime(_time,"%b")

Now what I would like to do is get a total count of emails sent to the recipient each month, and another column with the percentage of those emails per month where valid_sender="Bounceback". The end result would hopefully look something like this:

    | Recipient       | Month | Count | Bounceback% |
    | user@domain.org | May   | 500   | 25%         |
    | user@domain.org | June  | 1000  | 30%         |
    | user@domain.org | July  | 750   | 20%         |
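A minimal sketch of one way to finish this off, appended to the query above. Field names are taken from the question; the eval() inside stats counts only the rows where valid_sender is "Bounceback":

    | stats count AS Count count(eval(valid_sender="Bounceback")) AS bounce_count BY recipients Month
    | eval "Bounceback%"=round(100 * bounce_count / Count, 0) . "%"
    | rename recipients AS Recipient
    | table Recipient Month Count "Bounceback%"

One caveat: month names like "May" sort alphabetically; using strftime(_time, "%Y-%m") for Month instead keeps the rows in chronological order.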

Are search terms case sensitive?

What exactly is meant by "search terms"? Are they case sensitive or insensitive? Can anyone help with this?

Can pivot work without the Search Processing Language?

Can pivot work without the Search Processing Language? Does it work only with data models/datasets? Can anyone clarify this, please?

How do I find the biggest losers and gainers in the last 24 hours compared to the 24 hours before that, with respect to a variable, e.g. total_dataconsumed?

I have event data in the format below:

    Sep 15 2017 07:06:07 app=yahoo dataconsumed=50
    Sep 15 2017 08:16:07 app=skype dataconsumed=150
    Sep 14 2017 10:26:07 app=facebook dataconsumed=10
    Sep 14 2017 12:26:07 app=facebook dataconsumed=5
    Sep 13 2017 7:26:07 app=yahoo dataconsumed=10
    Sep 13 2017 9:26:07 app=skype dataconsumed=50
    Sep 12 2017 3:26:07 app=facebook dataconsumed=80
    Sep 12 2017 1:26:07 app=facebook dataconsumed=0

For the dataset above I want something like:

    ...| if( ((total_dataconsumed by app in the last half of the time range) - (total_dataconsumed by app in the previous half of the time range)) > 0, "gainer", "loser")

and the result for the sample dataset would be:

    app        gainer_or_loser   dataconsumed
    ------------------------------------------
    yahoo      gainer            40
    skype      gainer            100
    facebook   loser             -65
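A minimal sketch of one way to express this, assuming an index named appdata (a placeholder) and a fixed 48-hour window ending on the hour; adjust the base search and boundaries to fit. It sums per app in each 24-hour half, pivots the halves into columns, and labels each app:

    index=appdata earliest=-48h@h latest=@h
    | eval half=if(_time >= relative_time(now(), "-24h@h"), "last24h", "previous24h")
    | stats sum(dataconsumed) AS total BY app half
    | chart values(total) OVER app BY half
    | eval difference=last24h - previous24h
    | eval gainer_or_loser=if(difference > 0, "gainer", "loser")
    | table app gainer_or_loser difference

Adding | fillnull value=0 last24h previous24h before the difference handles apps that only appear in one half.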

Slack Webhook Alert - Token for Inline (table)

Hi, in my email alerts the option Inline (Table) is checked. What is the corresponding Splunk token I need to use to display this table in the Slack message?

After installation of Splunk Enterprise, got a localhost error and am not able to install again

After the error, I deleted the Splunk folder and tried to install again, but now I am not able to install.

How do I use the username from events returned by a search of index "A" to look up the user in index "B", and only return the events where the user from index "A" exists in index "B"?

    #####This part of my query gets me on the street I want to be on for this report######
    index="A"
    | rex mode=sed field=User_Full_Name "s/ //g"
    | eval User_Full_Name = LOWER(User_Full_Name)
    | rex mode=sed field=Emergency_Contact1 "s/ //g"
    | eval Emergency_Contact1 = LOWER(Emergency_Contact1)
    | eval results = if(match(Emergency_Contact1,User_Full_Name), "match", "no match")
    | dedup User_Full_Name
    | search results="match"
    | eval Service_Areas=split(Patient_Service_Areas, ",")
    | search Service_Areas="50*"

    ######This syntax does not return any results even though I know I have matches in my testing data#############
    | eval User_Logon_ID = LOWER(User_Logon_ID)
    | search index="B"
    | eval HSCNET_ID = LOWER(HSCNET_ID)
    | eval results = if(match(User_Logon_ID,HSCNET_ID), "USF", "no USF")
    | search results="USF"
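A minimal sketch of one common way to do this kind of cross-index filter, assuming HSCNET_ID in index "B" corresponds to User_Logon_ID in index "A" (as the question implies): a subsearch against index "B" returns the list of IDs, and the outer search keeps only events matching them. This would replace the second block above; a | search index="B" in the middle of a pipeline only filters events already retrieved and does not pull in a second index.

    index="A"
    | eval User_Logon_ID=lower(User_Logon_ID)
    | search
        [ search index="B"
          | eval User_Logon_ID=lower(HSCNET_ID)
          | dedup User_Logon_ID
          | fields User_Logon_ID ]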

Splunk Stream: split a persistent TCP stream into separate events, disable TCP reassembly, or some other method?

Hi everyone, I am using Splunk Stream (packet stream) to capture data from the source and destination content fields. For a persistent TCP connection I just cannot seem to break/split it into separate events or lines. Is there no way to do this? Other advice is appreciated -- I am willing to check other alternatives, truncate the data, etc.

Sample event, single TCP connection open/close:

    {"endtime":"2017-09-17T15:30:47.271015Z","timestamp":"2017-09-17T15:30:36.440073Z","ack_packets_in":4,"ack_packets_out":5,"app":"tcp","bytes":645,"bytes_in":353,"bytes_out":292,"client_rtt":16,"client_rtt_packets":1,"client_rtt_sum":16,"connection":"192.168.100.3:65534","data_packets_in":1,"data_packets_out":0,"dest_ip":"192.168.100.3","dest_port":65534,"duplicate_packets_in":0,"duplicate_packets_out":0,"missing_packets_in":0,"missing_packets_out":0,"network_interface":"lo0","packets_in":6,"packets_out":5,"protocol_stack":"ip:tcp:unknown","server_rtt":40,"server_rtt_packets":2,"server_rtt_sum":81,"src_ip":"192.168.100.3","src_port":51448,"tcp_status":0,"time_taken":10830958,"SRCCNT":"68656c6c6f"}

Sample event, persistent TCP stream:

    {"endtime":"2017-09-17T15:32:06.278243Z","timestamp":"2017-09-17T15:30:57.342570Z","ack_packets_in":3,"ack_packets_out":158,"app":"tcp","bytes":18484,"bytes_in":9624,"bytes_out":8860,"client_rtt":14,"client_rtt_packets":1,"client_rtt_sum":14,"connection":"192.168.100.3:65534","data_packets_in":153,"data_packets_out":0,"dest_ip":"192.168.100.3","dest_port":65534,"duplicate_packets_in":0,"duplicate_packets_out":0,"missing_packets_in":0,"missing_packets_out":0,"network_interface":"lo0","packets_in":157,"packets_out":158,"protocol_stack":"ip:tcp:unknown","server_rtt":33,"server_rtt_packets":154,"server_rtt_sum":5226,"src_ip":"192.168.100.3","src_port":51475,"tcp_status":0,"time_taken":68935687,"SRCCNT":"68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f68656c6c6f"}

I need the stream above to be broken up into separate events, sort of like a Wireshark view. Thank you, and I appreciate any and all ideas/assistance. Pinaki

How to find the difference between table rows

I have results in the following table format:

    half          app_name   dataconsumed
    ---------------------------------------
    first_half    skype      50
    first_half    facebook   90
    first_half    yahoo      10
    first_half    bing       30
    second_half   skype      150
    second_half   facebook   100
    second_half   yahoo      5
    second_half   bing       50

How should I find the difference in dataconsumed, e.g. difference = second_half - first_half, and exclude an app if the difference is negative? For the table above the result should be:

    app        difference
    ------------------------
    skype      100
    facebook   10
    bing       20

**Note**: In the result table above, yahoo is excluded since its difference is negative.
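A minimal sketch, assuming the table above is what the current search already produces: chart pivots the halves into columns so both values land on one row per app, then a where clause drops the negative differences.

    ... | chart values(dataconsumed) OVER app_name BY half
    | eval difference=second_half - first_half
    | where difference >= 0
    | rename app_name AS app
    | table app difference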

chain 2 search queries and get the earliest and latest of different fields

    search string1 - [ field1 ]
    search string2 - [ field1, field2 ]
    search string3 - [ field1, field2 ]

I want the results of search string 1 to be matched with search string 2 by the common field (which is field1), and the results of that to be matched with search string 3 where the common field is field2. Then I want those results as output with the earliest of field1 and the latest of field2. I've tried a subsearch with join, but it doesn't generate the required results. I also tried append. Please help!
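join has subsearch row and time limits that often explain missing results; a common alternative is to run the searches together and let stats and a join on the second key stitch them. A rough sketch only, under the assumptions that search string 2 carries both field1 and field2 and that "earliest of field1" / "latest of field2" mean the earliest and latest event times associated with them (the placeholder search strings and field names are kept from the question):

    (search string1) OR (search string2)
    | stats earliest(_time) AS earliest_field1 values(field2) AS field2 BY field1
    | mvexpand field2
    | join type=inner field2
        [ search search string3
          | stats latest(_time) AS latest_field2 BY field2 ]
    | table field1 field2 earliest_field1 latest_field2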

How to install R Analytics on Windows?

Kind of an answer to my own question: on Windows the app also runs (at least partially) as follows. Install the latest RStudio, use Tools > Install Packages to install OpenCPU, then run the R command **opencpu::opencpu$start()**. It should give output like:

    Creating new config file: C:/Users/username/Documents/.opencpu.conf
    OpenCPU started.
    [httpuv] http://localhost:4711/ocpu

In the R-Analytics app, specify only the base URL, here http://localhost:4711. Now the example http://localhost:8000/de-DE/app/ita_r/r_script works, including R graphics. So far, the demo views did not display anything. Any ideas for troubleshooting here? Thanks!

Predictive analysis using linear regression and Kalman filter

Hi all, I am trying to predict the CPU utilization of servers using the Splunk Machine Learning Toolkit app. While using the app I found that the "Predict Numeric Fields" showcase, using the linear regression algorithm, was doing a perfect prediction of the given field, but it cannot be used to forecast that field into the future. I tried merging the Splunk queries for linear regression and the Kalman filter to forecast the predicted field -- is this approach correct? Below is the query I used; let me know your thoughts and suggestions. I am trying this because I am not sure about the prediction results of the Kalman filter alone.

    index=main sourcetype=cpumetric metric_name=CPUUtilization Environment="WEB" Average>2.00
    | apply "Predict_CPUUtilization"
    | table _time, "Average", "predicted(Average)"
    | rename predicted(Average) as Avrg
    | timechart span=15m avg(Avrg)
    | predict "avg(Avrg)" as prediction algorithm="LLP5" future_timespan="3" holdback="0" lower"50"=lower"50" upper"50"=upper"50"
    | `forecastviz(3, 0, "avg(Avrg)", 50)`
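For comparison, a minimal sketch of forecasting with predict alone on the raw metric, skipping the learned regression model. The field and source names are taken from the query above, and this is only a baseline to sanity-check the combined approach against, not a recommendation:

    index=main sourcetype=cpumetric metric_name=CPUUtilization Environment="WEB" Average>2.00
    | timechart span=15m avg(Average) AS avg_cpu
    | predict avg_cpu AS prediction algorithm=LLP5 future_timespan=3 holdback=0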

terminate called after throwing an instance of 'FileAccessException'

One of our forwarder hosts often encounters the error below, based on the crash log/splunkd_stderr.log:

    terminate called after throwing an instance of 'FileAccessException' what(): Failed to get file size from fd for file=''.

We temporarily resolve this by manually starting splunkd back up, but can anyone suggest a permanent fix? Thank you, much appreciated.

Splunk Independent Stream forwarder - Can we control the balance of data between indexers/receivers?

So I have recently installed the Splunk independent Stream forwarder as per the current [documentation][1]. This works great and I can use it to collect netflow data. However, the default Stream HEC setup sends the HTTP Event Collector data to the indexers, and it appears to use some kind of persistent connection to *an* indexer: for the last 24 hours it has sent *all* the data to 1 of the 6 indexers. While I'm not expecting a perfect load-balancing algorithm, at least switching between indexers once every X seconds (even if that means keeping multiple TCP connections open) would be preferable! I can re-point the stream to localhost and let the local Splunk heavy forwarder receive it there, except that each time I restart the HF the system goes down for a couple of minutes. I can also re-point the stream to a load balancer that talks to a forwarding layer before reaching the indexers, but I'd like to know if there is a way to avoid this using just the independent Stream forwarder... [1]: https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/InstallStreamForwarderonindependentmachine

Add capacity to indexer cluster

Hello guys, we have 3 'hardware' indexers in a clustered environment (RAID), all physical disk slots are full, the replication factor is 3, and we may be running out of space in the near future. So, is it possible to add new indexers with more storage to this existing cluster in order to **add capacity**? Also, is it possible to create new indexes ONLY on those NEW indexers, and if so, how? Thanks.