Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Can I upgrade my CentOS to Python 3?

Hello all! I got a security alert from my company about my CentOS server using Python 2.7. As it stands, I know that my version of Splunk uses 2.7.5 in its own /bin/python directory; however, if I were to upgrade the OS's version of Python to 3, would this affect my on-premises Heavy Forwarder (which uses 2.7.5)? Or is everything contained within Splunk's own /bin/python? Thanks!

Can you help me color coordinate different charts in a dashboard?

Dear Team, could you please help me get the same colors in the charts below the line chart, each one using the color of its corresponding line in the line chart? The code I have so far:

(host=jp) source="/home/jp/pings/targets/googledns.txt" OR source="/home/jp/pings/targets/defaultGateway.txt" | timechart avg(time) by source | rename "/home/jp/pings/targets/googledns.txt" as "Google DNS", "/home/jp/pings/targets/defaultGateway.txt" as "Default Gateway"

This search reads the data from 4 different .txt files in real time. The data is a ping to four different IPs. The legend on the right is also customized. ![alt text][1] Thank you in advance. Best regards, JP

[1]: /storage/temp/254999-linechart.png

Google meet Performance measure via Splunk is it possible??

I wanted to check whether there is a way to measure performance issues related to Google Meet, as we have a few performance issues and Meet video calls hang intermittently. Please advise.

How do I change bar color based on y axis value in timechart?

Hi there, I have already found several answers about how to apply color ranges to a column chart, but I didn't manage to get them to work with a `timechart`. My search looks like this:

index="index" startupTime=* | timechart span=1hour count(startupTime) by host limit=0

I have around 100 hosts, and I want to mark hosts green when they have only one restart an hour, yellow for 2-4 restarts, and red for more than 4. Is this somehow possible using a timechart? Thanks in advance

Microsoft Azure Billing Add-on: How do I convert a field to a token that I can use In the dashboard checkbox input search?

My goal is the following:
- I have "billing periods" coming in from the Azure Billing Add-on
- I'm converting the billing period value to epoch in the dashboard checkbox input search
- I want the user to select a billing period, and for that to determine how far back the base search goes
- I want to use this field (epoch value) to specify the "earliest" time of the base search in the dashboard

How can I go about doing this? This is what I have so far, and it's a mess, as I've tested multiple things, but hopefully it will give some context for what I'm trying to do. I'm missing some step on how to convert the evaluated field to a token that I can use in the base search.

Checkbox input population search ("billingperiod"):
index=test sourcetype=azure:billing | rex field=properties.billingPeriodId "(?:\/subscriptions\/mysubid\/providers\/Microsoft\.Billing\/billingPeriods\/)(?<billingperiod>\d+)" | rex field=billingperiod "(?<year>\d{4})(?<month>\d{2})(?<day>\d{2})" | eval earliestdate=month."/".day."/".year." 00:00:00" | eval earliestdate = strptime('earliestdate', "%m/%d/%Y %H:%M:%S") | stats count by billingperiod, earliestdate | fields *

Base search (earliest=$earliestdate$, latest=now), panel "Test":
index=test sourcetype=azure:billing | fields * | timechart sum("properties.pretaxCost") span=1d

Can you help with a Splunk query for filtering a destination port count to a table?

Hello everyone, I need some help with a firewall rule that I am trying to analyze via Splunk. What I am trying to do is produce a sorted output of the source and destination IPs along with the top 200 most-used destination ports. The problem is that when I sort by the count, I lose the source IP and destination IP details. The table should cover the complete output of the top 200 ports along with the source and destination IPs involved in the communication. What I have so far, for example:

index=firewall dvc="Devicename*" message_tag="RT_FLOW_SESSION_CREATE" rule="RULENAME" | stats count by dest_port | sort -count

HeavyForwarder to Indexer using two separate ports and indexes?

So my issue is that I am not sure how to get Splunk to separate data on the indexer. I am listening on the forwarder on port 514 (for Linux syslog) and 6161 (for Windows event logs), and I use _TCP_ROUTING to send each input to a tcpout target group associated with the indexer ports 9997 and 9998, which lets me set a splunktcp:// index= for each port. Am I doing this all wrong, and how can I get Splunk to separate the Windows and Linux logs into two different indexes?

Forwarder inputs.conf:
[scripts://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled=0
[tcp://514]
_TCP_ROUTING=Linux
[tcp://6161]
_TCP_ROUTING=Windows

Forwarder outputs.conf:
[tcpout]
defaultGroup=Windows, Linux
[tcpout:Windows]
server=(server ip):9997
[tcpout:Linux]
server=(server ip):9998

Indexer inputs.conf:
[default]
host = somehost1
[tcp://9997]
index=windowseventlogs
connection_host=dns
[tcp://9998]
index=linuxauditlogs
connection_host=dns

Splunkd Service timing out

Splunk Version: Splunk Enterprise 7.0.3
Local Host OS: Windows 7

I have been unable to start the Splunkd Service successfully using an MSA. The following is a summary of the steps taken:
- Installed Splunk via CLI to run as the Local System user
- Started Splunk successfully
- Switched the Log On option of the Splunkd Service to my personal domain user account
- Started Splunk successfully
- Switched the Log On option of the Splunkd Service to the MSA
- Splunk fails to start. The CLI reports "Timed out waiting for splunkd to start." The Windows Services GUI reports "Error 1067: The process terminated unexpectedly."
- Investigation of $SPLUNK_HOME\var\log\splunk\splunkd-utility.log revealed MSA permission issues accessing the following: the file splunk.secret and the directory $Splunk_Home\etc\license
- I manually granted the MSA the permissions necessary to access these locations. No more errors show up in splunkd-utility.log.
- Splunk still fails to start.
- No splunkd.log is created when the service does not start successfully.

The only semi-suspicious log entry found in splunkd-utility.log states: "ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now. ServerConfig - Host name option is ''." Could this be an issue? Is there some other log I should inspect to determine where my current issue resides?

REGEX HELP

I have an event of the below format from a firewall source. I need to extract the field named "FieldsChanges" from it. There are multiple fields separated by pipes, so I could use the delimiter function, but I already have all the other fields extracted except the above-mentioned one. Can you suggest a better way of extracting the field values, with a regex or any other alternative?

loc=8270|time=20Sep2018 13:10:57|action=accept|orig=application_WOB|i/f_dir=outbound|i/f_name=|has_accounting=0|product=SmartDashboard|ObjectName=Current_Policy_8|ObjectType=firewall_policy|ObjectTable=fw_policies|Operation=Modify Object|Uid={XYZ}|Administrator=ghansen|Machine=ABC|FieldsChanges=Rule 132 UID = {sample data} (sample data) Destination: added 'xyz';Rule 113 UID = {SAMPLE DATA} (XYZ) Source: added 'N1XYZ_18_EDC_Network' ;|session_id=kflow Automatic Session|Subject=Object Manipulation|Operation Number=1|client_ip=ABC

Thanks in advance.
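One approach: since the FieldsChanges value may contain spaces and braces but the other keys follow a `|key=` shape, anchor on the field name and stop at the next `|key=` boundary. A minimal sketch in Python (the same lookahead pattern should also work in Splunk's `rex`, using a named capture group); the event below is abridged from the question:

```python
import re

# Abridged sample event from the question.
event = (
    "loc=8270|time=20Sep2018 13:10:57|action=accept|product=SmartDashboard"
    "|FieldsChanges=Rule 132 UID = {sample data} (sample data) Destination: added 'xyz';"
    "Rule 113 UID = {SAMPLE DATA} (XYZ) Source: added 'N1XYZ_18_EDC_Network' ;"
    "|session_id=kflow Automatic Session|Subject=Object Manipulation"
)

# Capture everything after "FieldsChanges=" lazily, up to the next
# pipe-delimited key ("|" followed by a word and "=") or end of string.
match = re.search(r"FieldsChanges=(.*?)(?=\|\w+=|$)", event)
fields_changes = match.group(1) if match else None
print(fields_changes)
```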

splunk query for excluding hosts which are there in lookup table

Hello Splunkers, I need help with one query. I get all hosts back when I run `index=*`, and I have some other hosts in a CSV file which I have loaded as a static lookup. I want to run `index=*` again, but I don't want the hosts that are in the CSV to show up in my results. In short: at search time I want to exclude all hosts that are in the CSV static lookup file. I am guessing that the join command would work, but I don't know how to use it. Please help

How come the sum(len(_raw)) of my data does not correlate with my license usage reports

OK, I am working to trim back some of our indexed data. I initially tried to drill down using a basic sum(len(_raw)) for all indexes, broken down by various other fields. The problem is that the sums don't match the counts from Splunk license usage for the index. In this specific test case, I am comparing the Splunk license usage for ONE index for ONE day against the byte sum of all of the _raw records for that SAME index for the SAME day. I expected the counts to at least be similar. My query against a Splunk source to get license info:

index=_internal sourcetype=splunkd source=*license_usage.log [| rest splunk_server_group=dmc_group_indexer /services/server/info | rename guid AS i | fields i ] | eval gb=b/1024/1024/1024 | join i [| rest splunk_server_group=dmc_group_indexer /services/server/info | rename guid AS i | fields serverName i] | search serverName=*rtp* idx=xyzzy_logs | stats sum(gb) by serverName idx

yields between 50 GB and 53 GB per indexer for that ONE index for that ONE day. Versus:

index=xyzzy_logs splunk_server=*rtp* | eval leng=len(_raw)/1024/1024/1024 | stats sum(leng) as totalgb by splunk_server | table splunk_server, totalgb

which yields only 14.7 GB to 15.66 GB per indexer for the SAME index for the SAME day. Again, I expected them not to be exactly the same, but I thought they should be closer than 300%+ apart. What is Splunk licensing counting that does not seem to show up in my indexes? I tried looking for answers to this and found other posts using similar accepted answers with sum(len(_raw)) as a "brute force" way to drill down on sizes. See Splunk Answer: **How to get license usage data for a particular index with a breakdown of usage by a field?**



Why are changes based on drop-down input in panels not updating in the following dashboard?

The following dashboard is not updating its panels when a different option is selected in the drop-down; it only works on initial load. Any ideas? Is there a way to trigger the base search to run again when a drop-down input changes?
Base search (earliest=$earliestdate$, latest=now):
index=test sourcetype=azure:billing | fields *

Drop-down input population search ("billingperiod", value $result.earliestdate$):
index=test sourcetype=azure:billing | rex field=properties.billingPeriodId "(?:\/subscriptions\/oursubid\/providers\/Microsoft\.Billing\/billingPeriods\/)(?<billingperiod>\d+)" | rex field=billingperiod "(?<year>\d{4})(?<month>\d{2})(?<day>\d{2})" | eval earliestdate=month."/".day."/".year." 00:00:00" | eval earliestdate = strptime('earliestdate', "%m/%d/%Y %H:%M:%S") | dedup billingperiod | fields *

Panel "Overall Azure Cost for billing period":
| timechart sum("properties.pretaxCost") span=1d | rename "sum(properties.pretaxCost)" as "Azure Cost (US)"

Panel "Overall Azure Cost by Azure Service (Top 10)":
| timechart sum("properties.pretaxCost") span=1d by "properties.consumedService" limit=10 useother=f | rename "sum(properties.pretaxCost)" as "Azure Cost (US)"

Panel "Total Cost for Billing Period":
| stats sum("properties.pretaxCost") as "Azure Total Cost (US)" | eval "Azure Total Cost (US)"=round('Azure Total Cost (US)',2)

Can you edit a CIDR-enabled lookup table in the Lookup Editor?

If I have a CIDR-based column, can I edit it in the Lookup Editor without removing the CIDR capabilities of that column?

Where can I find documentation on how to update a macro by using the API?

I have successfully used the code below to create a macro (a POST using the Python 'requests' library). However, I have been unable to find any documentation that states this is possible. Based on the error messages I came across, "definition" is known as a "handler" within the Splunk API. I am trying to find any other "handlers" that I can target for updating macros. The main thing I would like to accomplish now is to change the permission level of a newly created macro to the app it is inside of (since it defaults to owner only).

payload = {'definition': 'query here'}
URL = 'root/servicesNS/username/app_name/admin/macros/macro_name'

Thank you for your time. -Randall
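For the sharing change specifically, the Splunk REST API exposes an `acl` sub-endpoint on knowledge objects; POSTing `sharing=app` together with the owner should promote a private macro to app-level sharing. A sketch of that request (the base URL, owner, app, and macro names are placeholders carried over from the question, and the credentials are hypothetical — verify the endpoint against your Splunk version's REST documentation):

```python
def macro_acl_request(base, owner, app, macro_name):
    """Build the URL and payload to share a macro at app level via the
    object's /acl sub-endpoint."""
    url = f"{base}/servicesNS/{owner}/{app}/admin/macros/{macro_name}/acl"
    payload = {
        "sharing": "app",   # one of "user", "app", "global"
        "owner": owner,     # the acl handler requires an owner
        "perms.read": "*",  # optional: grant read to all roles
    }
    return url, payload

url, payload = macro_acl_request(
    "https://localhost:8089", "username", "app_name", "macro_name"
)
print(url)
# The actual call would then be (hypothetical credentials):
# import requests
# requests.post(url, data=payload, auth=("admin", "changeme"), verify=False)
```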

How do we identify which splunk search is consuming more memory on the splunk indexers ?

How do we identify which Splunk search is consuming the most memory on the Splunk indexers?


Calculate concurrency by second from start time and duration

Newbie here... I have an index of data that represents calls. Each event has a start_time and a duration. I've been asked to take all of these events and calculate how many concurrent calls there are per second. It was suggested that I use Python and split the calls into different rows of a DB, but that sounds tedious. Is there a way to take each event's start time and duration and chunk it up into per-second buckets like this?
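Within Splunk itself, the built-in `concurrency` command can compute overlap counts from a start time and a duration field without exporting anything. If the per-second expansion is what's wanted, the idea can be sketched in a few lines of Python, using made-up call records:

```python
from collections import Counter

# Hypothetical call records: (start_time as epoch seconds, duration in seconds).
calls = [
    (100, 5),  # active during seconds 100..104
    (102, 3),  # active during seconds 102..104
    (104, 2),  # active during seconds 104..105
]

# For every second a call is active, bump that second's counter;
# the result maps each second to the number of concurrent calls.
concurrency = Counter()
for start, duration in calls:
    for second in range(start, start + duration):
        concurrency[second] += 1

print(concurrency[104])  # -> 3: all three calls overlap at second 104
```

This brute-force expansion is fine for modest volumes; for large call sets, a sweep over sorted start/end events scales better.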

How do I use Splunk to find which users have logged into VPN in the past X days?

I'm new here, so please forgive me if this question has already been answered and I couldn't find it. I'm looking for some help in taking VPN source logs from the network team and creating a report showing which users have logged into VPN in the past X days. I've read through several of the 370 posts about VPN but can't seem to find exactly what I'm looking for. Any help would be greatly appreciated!