Channel: Questions in topic: "splunk-enterprise"

Sending cooked but unparsed data from Heavy Forwarder

Hi Splunkers, We are ingesting data using the Splunk TA for AWS, which is installed on a heavy forwarder. While this works great within Splunk, we'd also like to forward data from the indexer cluster to a 3rd-party system, using a *props.conf* sourcetype match and a *transforms.conf* regex to route the specific events. We've done that numerous times and it works well for other sources (coming from Universal Forwarders). Here's our ingestion pipeline for AWS events:

AWS S3 -> Splunk TA for AWS (on HF) -> indexer cluster -> 3rd-party system

Unfortunately, we can't find a way at this point to route events based on the sourcetype at the indexing layer. Our understanding is that the HF cooks and parses the events, so the indexers skip straight to the indexing queue. The question is: is there any way to get the data from the HF sent cooked but unparsed, exactly the way a UF sends it, so that the indexing layer can run the events through all the parsing pipelines? Thanks!
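
For reference, the sourcetype-based routing we use elsewhere looks roughly like this (a sketch; the sourcetype aws:cloudtrail and the destination host are placeholders). These settings only take effect on the first Splunk instance that parses the data, which is the crux of the problem here:

    # props.conf (on the parsing tier)
    [aws:cloudtrail]
    TRANSFORMS-route_3rdparty = route_to_3rdparty

    # transforms.conf
    [route_to_3rdparty]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = 3rdparty_group

    # outputs.conf
    [tcpout:3rdparty_group]
    server = thirdparty.example.com:9997
    sendCookedData = false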

Put stats(values) and stats(count) in the same table (with tstats)

Hello, I need help with a dashboard panel I need to make for a client. He wants a failed-logins table, merged with a count of the same data for each user. My data is coming from an accelerated data model, so I have to use tstats. Let me give you an example of what I need to do. I need to merge this query:

    | tstats summariesonly=true allow_old_summaries=true count from datamodel=Authentication.Authentication where Authentication.action="failure" by Authentication.user

with this one:

    | tstats summariesonly=true allow_old_summaries=true values from datamodel=Authentication.Authentication where Authentication.action=failure by _time Authentication.user Authentication.src Authentication.dest Authentication.app
    | `truncate_name("Authentication")`
    | eval Time = strftime(_time, "%d-%b-%Y %H:%M:%S")
    | table Time user app dest src
    | rename user AS User src AS From dest AS "Destination" app AS "App"
    | sort -Time

in the same table. Thanks!
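
One way to sketch this (untested, assuming the same Authentication data model): run a single tstats split by all the detail fields, then use eventstats to attach the per-user failure count to every row:

    | tstats summariesonly=true allow_old_summaries=true count from datamodel=Authentication.Authentication where Authentication.action="failure" by _time Authentication.user Authentication.src Authentication.dest Authentication.app
    | eventstats sum(count) as "Failure Count" by Authentication.user
    | `truncate_name("Authentication")`
    | eval Time = strftime(_time, "%d-%b-%Y %H:%M:%S")
    | table Time user app dest src "Failure Count"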

Splunk Add-on Builder: storing API-requested data in KV stores

Using Add-on Builder with modular input Python code: I fetch API data as JSON and need to store it in a KV store. But the Add-on Builder Python helper functions ([https://docs.splunk.com/Documentation/AddonBuilder/2.2.0/UserGuide/PythonHelperFunctions][1]) only describe a write_to_index approach. Is there a way to write to KV stores instead? [1]: https://docs.splunk.com/Documentation/AddonBuilder/2.2.0/UserGuide/PythonHelperFunctions
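
One hedged sketch, outside the Add-on Builder helpers: use the Splunk SDK for Python (splunklib) to write to a KV store collection over the management port. The collection name my_collection and the session-key plumbing are assumptions, and the collection must already be defined in collections.conf:

    import json
    import splunklib.client as client

    def save_to_kvstore(session_key, records):
        # Connect back to splunkd with the session key the modular input receives
        service = client.connect(host="localhost", port=8089, token=session_key)

        # The collection must already exist (defined in collections.conf)
        collection = service.kvstore["my_collection"]

        # Insert each record as a JSON document; _key is auto-generated if omitted
        for record in records:
            collection.data.insert(json.dumps(record))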

Splunk Infoblox data segregation

There are two zones in the environment, running different Infoblox versions:

    Zone 1 = 6.10
    Zone 2 = 8.3.3

The issue is that the data for Zone 2 does not parse correctly: it gets logged with sourcetype infoblox:file, so it is not segregated into the DNS and DHCP source types, and the field extractions don't work either as a result. Is the add-on built for version 8.3.3? What is the resolution for this issue? - RR
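
For context, this kind of segregation relies on a sourcetype override at parse time, along these lines (a sketch of the mechanism only, not the add-on's actual stanzas; the regexes are illustrative guesses based on the process names in Infoblox syslog):

    # props.conf
    [infoblox:file]
    TRANSFORMS-force_sourcetype = force_infoblox_dns, force_infoblox_dhcp

    # transforms.conf
    [force_infoblox_dns]
    REGEX = named\[
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::infoblox:dns

    [force_infoblox_dhcp]
    REGEX = dhcpd\[
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::infoblox:dhcp

If the 8.3.3 log format no longer matches the add-on's regexes, events fall through and stay as infoblox:file, which matches the symptom described.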

Field extractions aren't working as expected since upgrading to 7.2.3

Since upgrading to Splunk version 7.2.3, some field extractions aren't showing up in searches properly, particularly with "Splunk_TA_bluecoat-proxysg", the TA app for Blue Coat ProxySG. In this example I would like to focus on the "http_user_agent" field. This was working just fine and we could see data prior to upgrading.

    [bluecoat:proxysg:tcp]
    FIELDALIAS-user_agent = cs_User_Agent as http_user_agent

Can you please assist us in figuring this out and getting the field extractions correct again?
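
As a quick diagnostic (a sketch; field names taken from the stanza above), it can help to check whether the source field is still being extracted at all, independent of the alias:

    sourcetype=bluecoat:proxysg:tcp
    | head 100
    | stats count(cs_User_Agent) as has_source count(http_user_agent) as has_alias

If has_source is populated but has_alias is zero, the alias itself is not being applied; if both are zero, the underlying extraction broke.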

Status over Time Multi-Value Alert Not working as expected

Perhaps I am just misunderstanding the concept behind Status over Time, but I set a KPI to trigger if it is at Critical 90% of the time for the last 24 hours. When I open it to set that 90% (see screenshot), it shows that in the last 24 hours it was WELL below 90%, yet this notable event triggers over and over again, almost non-stop. Am I misunderstanding the concept behind an alert that triggers if the KPI is at Critical 90% of the last 60 minutes? ![alt text][1] [1]: /storage/temp/270739-screen-shot-2019-03-07-at-14516-pm.png

How do you add the average and the standard deviation as a new field?

Hi, this might be a trivial question, but I am having a hard time figuring it out. Any help is greatly appreciated. Here is the problem: I have logs from remote VPN servers reporting the data sent and received in each session for each user. I am trying to calculate the average of the data sent and its standard deviation over a month, then add the average and twice the standard deviation together as the alerting threshold for the user. However, I cannot add the value of the average and the stddev! Here is the SPL I have developed for this:

    eventtype=RAS AND (EventCode=20272) AND ConnectionID!="NA" AND UserID="XYZ"
    | dedup ConnectionID
    | bucket _time span=1mon@mon
    | stats sum(Data_Sent) as Monthly_Total_Sent stdev(Data_Sent) as Monthly_Sent_Stdev by _time UserID
    | eval Monthly_Avg_Sent(MB)=round(Monthly_Avg_Sent/(1024*1024),2), Monthly_Sent_Stdev(MB)=round(Monthly_Sent_Stdev/(1024*1024),1), Abnormal_Sent_Limit(MB)=2*Monthly_Sent_Stdev(MB)+Monthly_Avg_Sent(MB)

However, Splunk errors out on the Abnormal_Sent_Limit(MB) calculation! The error I see is:

> Error in 'eval' command: The 'monthly_sent_stdev' function is unsupported or undefined.

I have also tried values(), with the same result, i.e.:

    Abnormal_Sent_Limit(MB)=2*values(Monthly_Sent_Stdev(MB))+values(Monthly_Avg_Sent(MB))

I am pretty sure I am doing something wrong, but I don't know what it is!
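
In eval, anything of the form name(...) is parsed as a function call, which is why parenthesized field names like Monthly_Sent_Stdev(MB) trigger the "unsupported or undefined function" error (they would need to be wrapped in single quotes to be read as field references). A sketch that sidesteps this by keeping parentheses out of field names, and that also computes the avg() the original stats never produced:

    eventtype=RAS EventCode=20272 ConnectionID!="NA" UserID="XYZ"
    | dedup ConnectionID
    | bucket _time span=1mon@mon
    | stats avg(Data_Sent) as Monthly_Avg_Sent stdev(Data_Sent) as Monthly_Sent_Stdev by _time UserID
    | eval Monthly_Avg_Sent_MB = round(Monthly_Avg_Sent/(1024*1024),2)
    | eval Monthly_Sent_Stdev_MB = round(Monthly_Sent_Stdev/(1024*1024),1)
    | eval Abnormal_Sent_Limit_MB = 2*Monthly_Sent_Stdev_MB + Monthly_Avg_Sent_MB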

Dashboard conditional search not working

Hello experts, I am trying to dynamically change my dashboard view based on 3 dropdown inputs. My show_tab1 results stay hidden even when the condition matches. Any help tweaking the code is appreciated. The two panel searches are:

    | from datamodel:"0DP_T_common"
    | search C_Category=$selected_cat$ C_endpoint="$selected_endpoint$" C_Response=$selected_response$
    | table C_Day C_StartTime C_Category C_endpoint C_Response duration

    | from datamodel:"0DP_T_selected"
    | search C_Category=$selected_cat$ C_endpoint="$selected_endpoint$" C_Response=$selected_response$
    | table C_Day C_StartTime C_Category C_endpoint C_Response duration
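
For comparison, a minimal Simple XML pattern for toggling panels from a dropdown (a sketch; the token names and the dropdown value "common" are assumptions, not taken from the actual dashboard):

    <input type="dropdown" token="selected_cat">
      <label>Category</label>
      <change>
        <condition value="common">
          <set token="show_tab1">true</set>
          <unset token="show_tab2"></unset>
        </condition>
        <condition>
          <set token="show_tab2">true</set>
          <unset token="show_tab1"></unset>
        </condition>
      </change>
    </input>

    <panel depends="$show_tab1$">
      <!-- rendered only while show_tab1 is set -->
    </panel>

A common gotcha: depends checks whether the token is set at all, not whether it equals true, so a token that is never set (or never unset) keeps the panel permanently hidden (or visible).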

stacked bar chart in timeline series

Hi! I want to create a stacked bar chart in a timeline series, like this:

    |[----RUN TIME----]|[----IDLE TIME----]|[----RUN TIME----]|
    |[----10:00 AM----]|[-----10:55 AM----]|..................|

How will I be able to do this, aside from using the Timeline app? Thanks!
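
One possible sketch, assuming each event carries a state field (RUN/IDLE) and a duration in seconds (both field names are assumptions about the data): bucket time, stack the per-state durations, and set the chart's stack mode to "stacked":

    index=machine_logs
    | timechart span=15m sum(duration) as seconds by state

This gives a stacked time-bucketed approximation rather than the exact contiguous segments in the mock-up above.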

Dashboard for storage Capacity planning

Hi All, we are planning to build a dashboard that shows how much storage is being used on a server (web or database server) and what % of storage capacity will be used in the future, i.e. we want to predict how much storage space will be needed based on the current trend. 1) How do we get the used and free storage from the server into Splunk? 2) Based on that data, the dashboard should show how much storage space will be required in the future, and what the usage is currently.
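
For (1), the Splunk Add-on for Unix and Linux (or the Windows TA's perfmon inputs) can collect disk usage. For (2), a sketch of a trend-plus-forecast search, assuming the *nix TA's df sourcetype and its UsePct field (both are assumptions about the environment):

    index=os sourcetype=df host=webserver01
    | eval used_pct = tonumber(replace(UsePct, "%", ""))
    | timechart span=1d avg(used_pct) as used_pct
    | predict used_pct future_timespan=30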

How To Investigate Hiccups (With Evidence)

Hello everyone! I've tried looking at the _internal splunkd logs but couldn't make sense of them. My boss is asking why there was, suddenly, an abnormal gap in the events we're ingesting for a particular period in time. It's back to normal now, but we're trying to figure out what happened, and I couldn't work it out. Thanks in advance for any input you can provide.
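
A starting point (a sketch built on the standard indexing metrics in _internal): plot indexing throughput per sourcetype around the gap, and check whether any queues were blocking at the same time:

    index=_internal source=*metrics.log group=per_sourcetype_thruput
    | timechart span=5m sum(kb) by series

    index=_internal source=*metrics.log group=queue blocked=true
    | timechart span=5m count by name

A throughput dip with no blocked queues usually points upstream (forwarders, network, the source itself); blocked queues point at the indexing tier.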

TA-asngen lookup - does it actually work?

I've been looking for a replacement for the GeoASN app that used to exist on Splunkbase, and TA-asngen (https://splunkbase.splunk.com/app/3531/) seemed to fit the bill. However, even though it installs fine, and the initial asngen command generates the asn.csv correctly, I'm not able to get a lookup to actually work. This is on 7.0.5 and 7.2.4.2 - same result on either. I have log data with a field extracted as src_ip, which is an IPv4 address. I then do: ... | lookup local=t asn ip AS src_ip But alas, whilst I certainly see my src_ip, I don't get the other fields from the lookup enriching the output. I've also tried renaming my src_ip to just "ip", but that doesn't cut it either. The TA defines the match_type as CIDR(ip), which makes sense, but I can't seem to get the fields shown. I have also tried an explicit OUTPUT for some of the field names, but that does not work either. Clearly I'm missing something trivially obvious. Permissions are correct, the files are the correct mode, I can see the content on disk, and running the command generates no errors. It also doesn't generate the expected output!
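
For isolating where it breaks (a sketch; the output field names depend on what asn.csv actually contains, so treat them as assumptions): test the lookup against a fixed IP, bypassing the event data entirely:

    | makeresults
    | eval src_ip = "8.8.8.8"
    | lookup asn ip AS src_ip

If the enrichment fields appear here but not in the real search, the problem is on the event side; if they don't appear even here, look at the lookup definition itself (the match_type = CIDR(ip) in transforms.conf, and which app exports it). Note that CIDR matching only works through the lookup definition, not by pointing lookup directly at the .csv file.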

How to use timechart command for the below query

This is the query I'm using:

**query1:**

    sourcetype=tanium earliest=-24h query="User-Sessions-and-Boot-Time-Details-from-Windows" OR query="User-current-session-details-&-Last-Boot-Time---Mac-OSX-to-Splunk" Uptime="1 days" OR Uptime="Less than 1 day" NOT Last_Logged_In_User="*adm"
    | table Computer_Name Last_Logged_In_User OS_Boot_Time Last_Reboot
    | eval LastReboot = coalesce(OS_Boot_Time, Last_Reboot)
    | dedup LastReboot, Last_Logged_In_User
    | stats count by Computer_Name, Last_Logged_In_User
    | where count>2

I need a trend analysis of this query over the last 30 days. I also tried this:

**query2:**

    sourcetype=tanium query="User-Sessions-and-Boot-Time-Details-from-Windows" OR query="User-current-session-details-&-Last-Boot-Time---Mac-OSX-to-Splunk" NOT Last_Logged_In_User="*adm"
    | eval LastReboot = coalesce(OS_Boot_Time, Last_Reboot)
    | dedup LastReboot, Last_Logged_In_User
    | timechart span=1d count
    | eval day = strftime(_time,"%d %b %y , %a")
    | chart sum(count) by day

But this gives me the entire number of events. Can anyone help me add the required condition from query1 to query2?
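
A sketch of one way to combine them (untested; it keeps query1's count>2 filter but evaluates it per day, then trends the number of machines that match):

    sourcetype=tanium earliest=-30d query="User-Sessions-and-Boot-Time-Details-from-Windows" OR query="User-current-session-details-&-Last-Boot-Time---Mac-OSX-to-Splunk" NOT Last_Logged_In_User="*adm"
    | eval LastReboot = coalesce(OS_Boot_Time, Last_Reboot)
    | dedup LastReboot, Last_Logged_In_User
    | bin _time span=1d
    | stats count by _time, Computer_Name, Last_Logged_In_User
    | where count > 2
    | timechart span=1d dc(Computer_Name) as machines_rebooting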

How to add the _time field in the output

Hello everyone. I just want to add the time (_time) field in the output. Can anyone help? Below is the query I am using. ![alt text][1]

    index=app sourcetype=mq_metric host=prod* QUEUE=DEAD.LETTER.Q OR MESSAGE.ACTION.Q
    | stats p90(CURDEPTH) AS "Queue Depth" p90(MSGAGE) as "Message Age" by QUEUE, _time
    | stats max(*) AS *
    | table _time, QUEUE, "Message Age", "Queue Depth"

[1]: /storage/temp/270741-raj.png
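
One reason _time vanishes here is that wildcards in stats (the * in max(*) AS *) do not match fields whose names start with an underscore, so _time never survives the second stats. A sketch that carries it through under a different name (also splitting the second stats by QUEUE and parenthesizing the OR, both assumptions about the intent):

    index=app sourcetype=mq_metric host=prod* (QUEUE=DEAD.LETTER.Q OR QUEUE=MESSAGE.ACTION.Q)
    | stats p90(CURDEPTH) as "Queue Depth" p90(MSGAGE) as "Message Age" by QUEUE, _time
    | eval time = _time
    | stats max(*) as * by QUEUE
    | eval _time = time
    | table _time, QUEUE, "Message Age", "Queue Depth"

Note that max(time) gives the latest sample time per queue, not necessarily the time at which the maximum depth occurred.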

creating a stacked bar chart using Gantt chart app

Hi! I am currently working on a project where I need to show the duration of a machine's run time, down time, and stop time through a stacked bar chart. Is it possible using the Gantt chart app? Or is there an easier way? I want the machine names on the y-axis and the time on the x-axis. Thank you so much!
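
Without the Gantt app, a plain stacked bar chart can get close (a sketch assuming each event has machine, state, and duration fields; all three names are assumptions): chart total duration per machine split by state, then pick the horizontal bar chart with stacking so the machines land on the y-axis:

    index=machine_logs
    | chart sum(duration) as total_seconds over machine by state

This shows the total time per state per machine, though not the chronological sequence a true Gantt view would give.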

average response time for each service call, per day

sourcetype="source_log" | rex field=_raw .... ........

Expected output:

    Service_call   Avg for 03/04   Avg for 03/05   ...
    addBook        125             180
    addpens        60              70
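
A sketch of the shape of such a search (assuming the rex above extracts fields named service_call and response_time; both names are placeholders):

    sourcetype="source_log"
    | rex field=_raw "..."
    | eval day = strftime(_time, "%m/%d")
    | chart avg(response_time) as avg_response over service_call by day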

streamfwd will not start as root user - SnifferReactor failed to open pcap adapter for device

Stream is installed and the set_permissions.sh bash script has been run as the root user. However, when splunkd is started, the streamfwd process also starts as the 'splunk' user and fails to capture.

Notes: /opt/splunkforwarder/etc/splunk-launch.conf specifies SPLUNK_OS_USER=splunk. If this is changed to SPLUNK_OS_USER= (empty), then splunkd and streamfwd both start as root and Stream WORKS. However, we do not want this: we want splunkd to run as the splunk user and only streamfwd to run as root. This is supposed to happen once the set_permissions.sh script has been run (yes, we ran it as the root user as well), but it does not work.

Migrating splunk folder to another LVM file system

I need to migrate the Splunk folder (/opt/splunk/var/lib/splunk) to another LVM volume, as the current file system is filling up. This folder holds the index databases and some other files. Any suggestions or steps? I found this, but the steps are not refined: https://answers.splunk.com/answers/3390/how-do-i-move-all-log-data-to-another-filesystem.html
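
For reference, the usual shape of this move (a hedged outline, not verified against any specific setup) is: stop splunkd, copy /opt/splunk/var/lib/splunk to the new volume preserving ownership and permissions, then either mount the new volume at the same path or point SPLUNK_DB at the new location before starting Splunk again. SPLUNK_DB is a standard splunk-launch.conf setting; the path below is a placeholder:

    # /opt/splunk/etc/splunk-launch.conf
    SPLUNK_DB=/mnt/newlvm/splunk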

Top of field with multiple values

Hi, I'm trying to do a simple search that returns the most repeated values of a field. The problem is that this field has multiple values, and when I try to execute the search, it returns 0 results. With a single-valued field, this problem doesn't happen. For example, suppose we have two fields, level and groups. The level field contains a single value, for example 7, but the groups field can contain multiple values [foo, bar, cir, ...]. If I execute:

    *query* | top level limit=5

it returns the top 5 levels, but if I execute:

    *query* | top groups limit=5

it does not return anything. How can I get the top values of a field with multiple values? Thanks
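
A sketch of the usual workaround: expand the multivalue field into one row per value first, so top has single values to count:

    *query*
    | mvexpand groups
    | top groups limit=5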

