Channel: Questions in topic: "splunk-enterprise"

Linux monitoring: ps.sh normalizes CPU usage above 100% to 0

I have the Splunk_TA_nix add-on installed to monitor Linux systems (all VMs). While researching a recent server issue, I found a process running at 500% CPU usage, which is only possible because it's a VM. What I've noticed is that sourcetype=top collects the CPU usage correctly, whereas sourcetype=ps normalizes it: if the usage is below 0 or above 100, it is set to 0. From ps.sh:

`NORMALIZE='(NR>1) {if ($4<0 || $4>100) $4=0; if ($6<0 || $6>100) $6=0}'`

In this case it's a Java container, and to figure out which container, I need to look at the ARGS field, which is collected by ps, not top. So now, instead of just using results from ps, I need to combine both top and ps to see the history of CPU usage. Is there a reason for resetting CPU usage greater than 100 to 0?
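In the meantime, one way to stitch the two sourcetypes together is to take the CPU figures from top and pull ARGS in from ps, keyed on PID. A minimal sketch, assuming the Splunk_TA_nix field names `pctCPU`, `PID`, and `ARGS`, and a hypothetical host value:

```
index=os sourcetype=top host=myhost
| stats max(pctCPU) as peak_cpu by PID
| join type=left PID
    [ search index=os sourcetype=ps host=myhost
      | stats latest(ARGS) as ARGS by PID ]
| sort - peak_cpu
```

The join is per PID, so the command line reported by ps lands next to the un-normalized CPU history reported by top.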

How to overlay/combine line charts with two different time spans?

I have two line charts I'd like to display in one view, but I'm having trouble combining them **because they're using different time spans.** The first chart is

`index=os | search sourcetype=cpu cpu=all host=$server$ | eval Percent_CPU_Load = 100 - pctIdle | timechart avg(Percent_CPU_Load) as "Avg CPU"`

which gives me this: ![alt text][1]

The second chart is

`index=os | search sourcetype=cpu cpu=all host=$server$ | eval Percent_CPU_Load = 100 - pctIdle | timechart values(Percent_CPU_Load) as "Actual CPU" span=5min`

which gives me this: ![alt text][2]

**I'd like to combine the two, so that my users can see the actual CPU activity for this server, but also see the trend when it is averaged out. Any help would be much appreciated!!**

[1]: /storage/temp/255650-2018-08-08-1104.png
[2]: /storage/temp/255651-actual.png
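One way is to compute both series in a single search at the finer span and derive the smoothed line with `trendline`. A sketch, assuming a 12-bucket moving average (one hour at 5-minute spans) is an acceptable stand-in for the coarser averaged chart:

```
index=os sourcetype=cpu cpu=all host=$server$
| eval Percent_CPU_Load = 100 - pctIdle
| timechart span=5min avg(Percent_CPU_Load) as "Actual CPU"
| trendline sma12("Actual CPU") as "Avg CPU"
```

Both columns come out of one timechart, so they share the same x-axis and render as two lines on one chart.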

How do I place a hyperlink in a dashboard PDF report?

I am creating a Splunk dashboard with a few reports. In one report (output as a table), I want a long URL to be replaced by a short number. When clicking that number (via the PDF dashboard report), I want it to go to the external URL. Placing just the URL works, but the link is broken because the URL is too long and won't open in full. I followed this guide: https://answers.splunk.com/answers/542437/how-can-i-show-customized-hyperlink-text-on-a-dash.html It works while viewing the dashboard in a browser, except I still don't know how to edit the link correctly. The table fields are ReportName, QueueTime, FinishTime, Result, BuildId.

`$row.URL$` # Not working. URL is the name of the field with the long URL that I want replaced by another field with a short number.
`https://google.com` # Works
`index="reports" | where (source LIKE "%Report%") | eval BuildId = Id + "" | table Name, QueueTime, FinishTime, Result`
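For reference, the pattern from the linked answer is a table drilldown whose target comes from a row field; a minimal Simple XML sketch (the `|n` filter stops Splunk from URL-encoding the token value):

```xml
<table>
  <search>
    <query>index="reports" | where (source LIKE "%Report%") | table Name, QueueTime, FinishTime, Result, URL</query>
  </search>
  <drilldown>
    <!-- open the long URL stored in the clicked row's URL column -->
    <link target="_blank">$row.URL|n$</link>
  </drilldown>
</table>
```

One caveat: Splunk's PDF export renders panels as static content, so drilldown hyperlinks generally survive only in the browser view, not in the generated PDF.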

XML token defaults to * for a field and the need is to initialise * to output of a lookup

I have a dropdown that populates the list of servers in the environment. The default value of the server token is *, which matches all the servers plus some extras as $server$=*, whereas I need * to cover only the servers in the lookup. Here is my code: the dropdown has a static choice "All servers" with value `*`, label field `serverName`, value field `SERVER`, and is populated by this search:

`search OPEN="Y" AND | search TimeZone=* AND Territory=* AND Region=* AND District=* AND STATE=* | sort SERVER | rex mode=sed field=SERVER "s/(\d+)/000\1/" | rex mode=sed field=SERVER "s/0*([0-9]{4})/\1/" | eval storeName = SERVER+"-"+SERVER_NAME+"-"+STATE | table SERVER serverName`

As you can see, the lookup search spits out all the servers I require, and I want the default value (*) to be restricted to only these values (coming from the lookup).
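One way to make `*` behave as "only the servers in the lookup" is to leave the token alone and constrain the panel search with a subsearch over the same lookup. A sketch, where `servers.csv` stands in for the actual lookup file and `index=os` for the base search:

```
index=os host=$server$
    [ | inputlookup servers.csv
      | where OPEN="Y"
      | rename SERVER as host
      | fields host
      | format ]
| ...
```

The subsearch expands to `(host="A" OR host="B" OR ...)`, so even when the token defaults to `host=*`, only hosts that also appear in the lookup can match.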

Cannot get custom sourcetype to do line breaks correctly

We have Splunk Enterprise with a SH, clustered IXs (2), a HF, and many UFs. I have created an app in the deployment-apps folder (with inputs.conf and props.conf) on the deployment server and deployed it to a server running a UF. Ingestion begins as expected but does not line break as desired. The log looks like this:

ipro Trace started on Thursday, July 12, 2018 at 8:16:12 PM Central Daylight Time(en-US)
Machine: P-XXXXXXX, Culture: en-US, UI Culture: en-US
Ini Settings:
20:16:12.672 Tid=4,Log file created.
20:16:12.679 Tid=4,Running module as Windows Service
20:16:12.680 Tid=4,Product version: 7.99.999.9331
20:16:12.813 Tid=9,Conn=1,ElapseMs=0,ipro:Received RequestOnly, PingToClient, HeaderSize=4, DataSize=0
20:16:12.813 Tid=4,Conn=1,ElapseMs=1,ipro:Sent RequestResponse, Connect, HeaderSize=43, DataSize=269

Splunk appears to get that the date stamp is in the first row of the log file and that the time stamps appear at the beginning of each row. The problem is line breaking. I want it to break at each time stamp, allowing multi-line log entries to merge into one event. I have tried a number of different options in props.conf and specified several different regexes. Nothing I do seems to change the outcome. I wonder if I am deploying this correctly. It seems to break lines randomly: most of the time there are two or more log entries in each Splunk event, and the number of log entries per event is not consistent, so I do not know what it is breaking on.

Here is inputs.conf:

[monitor://c:\ProgramData\XXX\XXXXXXX\ipro\XXXX\]
disabled = false
index = ipro
followtail = 0
sourcetype = appx:ipro
whitelist = \.txt$
ignoreOlderThan = 0d

Here is props.conf:

[appx:ipro]
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = [0-9]{2}:[0-9]{2}:[0-9]{2}\.
DATETIME_CONFIG =
MAX_TIMESTAMP_LOOKAHEAD = 80
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
description = ipro logs
disabled = false
pulldown_type = true

Any suggestions would be helpful.
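Two things are worth noting here. First, line-breaking props are applied on the first full Splunk instance in the data path (the HF or the indexers), not on the UF, so deploying this app only to the forwarder will not change breaking behavior. Second, a more deterministic way to break before each timestamp is LINE_BREAKER with a lookahead. A sketch, assuming every log entry starts with HH:MM:SS.mmm followed by Tid=:

```
[appx:ipro]
SHOULD_LINEMERGE = false
# break on the newline(s) preceding a timestamp; the lookahead keeps
# the timestamp itself inside the next event
LINE_BREAKER = ([\r\n]+)(?=\d{2}:\d{2}:\d{2}\.\d{3}\s*Tid=)
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
```

Continuation lines that don't start with a timestamp never match the lookahead, so they stay merged into the preceding event, which is the multi-line behavior you want.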

Splunk not storing time in milliseconds

I am extracting the timestamp from events in microseconds (%Y-%m-%d:%H:%M:%S.%6N). But after indexing, the event timestamp does not show subseconds; I always see .000 in the timestamp. Is there any way to override this other than with props?
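For reference, subsecond parsing happens only at index time, so there isn't much to override afterwards; the usual checklist is a props.conf stanza on the parsing tier (HF or indexers) whose prefix and lookahead actually reach the fractional digits. A sketch, with the sourcetype name hypothetical:

```
[my:sourcetype]
# anchor just before the timestamp so %6N has digits to consume
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d:%H:%M:%S.%6N
# "2018-08-08:11:04:59.123456" is 26 chars; too small a lookahead
# silently truncates the fractional part to .000
MAX_TIMESTAMP_LOOKAHEAD = 30
```

If these settings sit only on a UF, they are ignored, which produces exactly the always-.000 symptom described.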

Problem with lookup for disabling alerts during maintenance

Sorry for the simple question, I am new to the Splunk world. I have a CSV loaded (StandardMaintenance.csv) which has two rows:

UnderMaintenance
NO

I want to add a check to each alert so that they will not fire during maintenance. Here is my code:

`...query goes here... | lookup StandardMaintenance.csv UnderMaintenance | search NOT UnderMaintenance="NO"`

What am I doing wrong, or how might I better accomplish this? Thanks.
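The `lookup` command needs a field in your events to match against, which a one-flag CSV doesn't have. One workaround is to pull the flag in with `inputlookup` and spread it across all rows; a sketch, assuming the flag is flipped to YES during maintenance:

```
...query goes here...
| appendcols
    [ | inputlookup StandardMaintenance.csv
      | head 1
      | fields UnderMaintenance ]
| eventstats first(UnderMaintenance) as UnderMaintenance
| where UnderMaintenance="NO"
```

`appendcols` attaches the flag to the first row only; `eventstats` copies it onto every row, and the final `where` suppresses all results (and therefore the alert) whenever the flag isn't NO.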

Hi. I am indexing data from a ticketing tool.

I need to see what tickets were opened at end of each month. I've done a initial charge of the database, because of this, I can't use the _time indexed, otherwise I have to use open_date and close_date. Basically, the logic that I need to apply is: Make a count of all tickets that were opened before end of month and were closed after the end of that month. I need show like timechart with this info by month. Any idea about the way to get this info? Maybe could be useful the gentimes command?
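One subsearch-free way is the running-total trick: emit +1 at each open and -1 at each close, bucket by month, and accumulate. A sketch, where `index=tickets` and the strptime format are assumptions about your data, and still-open tickets are assumed to have no close_date:

```
index=tickets
| eval _time=strptime(open_date, "%Y-%m-%d %H:%M:%S"), delta=1
| append
    [ search index=tickets close_date=*
      | eval _time=strptime(close_date, "%Y-%m-%d %H:%M:%S"), delta=-1 ]
| timechart span=1mon sum(delta) as net_change
| streamstats sum(net_change) as open_at_month_end
```

After each monthly bucket, `open_at_month_end` is exactly the number of tickets opened on or before that month's end and not yet closed.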

OSSEC server not seeing/reporting file changes in Splunk

I've configured the agent on my machine to monitor file changes for a specific folder and validated that Splunk's OSSEC Reporting and Management app is seeing my agent; my workstation shows up in regular entries. It also noticed my changes in the config file, so I'm fairly certain the agent is reporting some things. But when I created, modified, and deleted a file inside the newly monitored folder, there were no entries in Splunk for this. Am I missing something simple? I have tried two different directory entries in the config, both pointing at O:\10GBTest, yet no love from Splunk.
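Two OSSEC behaviors could explain this: syscheck only scans on its schedule (by default roughly every 22 hours) unless realtime is enabled, and newly created files are only alerted on when alert_new_files is set. A hedged ossec.conf sketch combining both:

```xml
<syscheck>
  <!-- realtime="yes" reports changes as they happen instead of
       waiting for the next scheduled scan -->
  <directories check_all="yes" realtime="yes">O:\10GBTest</directories>
  <!-- without this, file creations are recorded but not alerted -->
  <alert_new_files>yes</alert_new_files>
</syscheck>
```

If the agent only just picked up the new directory, a manual restart of the agent (so it rebuilds its baseline) is also worth trying before digging deeper.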

Can I Build A Dashboard Using Data Pulled From DB2 Using DB Connect?

I am potentially working on building a Splunk dashboard. It is meant to take data that lands in a DB2 database every day and put it into a dashboard. I've watched some DB Connect videos, but they just show the data as a report. If I need the dashboard to show data from today, yesterday, and possibly a longer date range, would I be able to do that with Splunk/DB Connect pulling that information from DB2?
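In principle, yes: a scheduled DB Connect input indexes the rows (with a rising column so each run only picks up new records), and the dashboard is then driven by ordinary time-ranged searches. A sketch of the dashboard side, with the index and field names hypothetical:

```
index=db2_sales earliest=-7d@d latest=now
| timechart span=1d sum(amount) as daily_total
```

For ad-hoc pulls without indexing, DB Connect's `dbxquery` command runs SQL directly, e.g. `| dbxquery connection="my_db2" query="SELECT * FROM SALES"`, but then the date range has to live in the SQL itself rather than in the time picker.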

Get list of VMs from Splunk

Is there a way to get the list of VMs that are forwarding data to Splunk?
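If the VMs run forwarders, the receiving indexers record them in their connection metrics; a sketch (`hostname` and `version` come from the tcpin_connections metrics events in _internal):

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as splunk_version latest(_time) as last_seen by hostname
| convert ctime(last_seen)
```

Alternatively, `| metadata type=hosts index=*` lists every host that has sent events, though it cannot tell a VM from a physical machine.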

Splunk Drill Down Option Issue

Hi, I am trying to create a dashboard for `Error OR fail*` from application logs. There are three hosts reporting data to the Splunk instance. I have run the search `Error OR fail*` and from the output created three panels in a single dashboard:

1. Pie chart showing the count per host
2. Total number of events that have Error or fail*
3. Events per host that have Error or fail*

I also have a dropdown that lists all three hosts. I have defined a token "drop_box" and pass that value into the search of each panel. When I select a host, all three panels show data for that host only. ![alt text][1] [1]: /storage/temp/255653-capture.png

Now my requirement is: when I click the pie chart for a particular host, I should get the data for that host only, but instead I get the error "could not create search". Below is the code.
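For reference, the usual way to wire a pie-slice click back into the token is a drilldown that sets it from `$click.value$`; a minimal Simple XML sketch (the index name is hypothetical, the token name follows the description above):

```xml
<chart>
  <search>
    <query>index=app_logs (Error OR fail*) host=$drop_box$ | stats count by host</query>
  </search>
  <option name="charting.chart">pie</option>
  <drilldown>
    <!-- the clicked slice's host name becomes the new token value -->
    <set token="drop_box">$click.value$</set>
  </drilldown>
</chart>
```

"Could not create search" typically means a malformed `<query>` or a token that is unset at render time in one of the panel searches, so checking each panel with the token hard-coded is a quick way to isolate the broken one.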

Is there a link to filter on apps with an additional pricetag?

Is there a link to filter on apps with an additional price tag? I'd like a list of premium apps, not only those made by Splunk (ITSI, ES, UBA...) but also those from partners like Sideview apps, Qmulos apps, etc. Thanks!

Splunk is not working. localhost refused to connect.

The browser reports: "This site can't be reached. localhost refused to connect. Did you mean http://localhost8000.com/? Search Google for localhost 8000. ERR_CONNECTION_REFUSED." OS: Windows Server 2016. ![alt text][1] [1]: /storage/temp/255654-annotation.png
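ERR_CONNECTION_REFUSED usually means nothing is listening on port 8000, i.e. splunkd/Splunk Web isn't running. A first check from an elevated command prompt, assuming the default install path:

```
cd "C:\Program Files\Splunk\bin"
.\splunk.exe status
.\splunk.exe start
```

If `start` fails, the errors in `C:\Program Files\Splunk\var\log\splunk\splunkd.log` normally name the cause.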

Hosts sending logs to a UF

Dears, I have one UF that is receiving logs from many servers and forwarding them to my indexer. How can I see which devices are sending through this UF? I tried the following search:

`index=_internal host=myforwarder group=tcpin_connections | stats sum(kb) by sourceIp`

Is there any other way? Thanks a lot.
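A variant of the same search that resolves names instead of raw IPs, assuming the upstream senders are themselves Splunk forwarders (tcpin_connections metrics then carry a `hostname` field):

```
index=_internal host=myforwarder group=tcpin_connections
| stats sum(kb) as total_kb latest(_time) as last_seen by hostname sourceIp
| convert ctime(last_seen)
| sort - total_kb
```

If the servers send plain syslog rather than Splunk-to-Splunk traffic, only the source IP is available, and the original search is about as good as it gets.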

Using Splunk DB Connect to join a Splunk index to a table in SQL Server and fetch relevant data

Hi, I am new to SPL and Splunk. I use the following query to find PTP violations per server:

`index=indexwintimesynclogs | eval offset=Delta | where offset>0.0001 and like(ServerName,"%PRD%") | stats max(offset) as offset, count(offset) as violations by ServerName, TimeSource | sort -offset`

Now I want to join the results of this query to the CMDB database and get the OS and application details for each server reported by the above query. What I have done so far:

- Wrote the SQL query required to fetch data from CMDB
- Installed the DB Connect app and configured the connection to CMDB
- Tested the connectivity and DB query under Splunk DB Connect -> Data Lab -> SQL Explorer
- Tried using dbxlookup to join:

`index=indexwintimesynclogs | eval offset=Delta | where offset>0.0001 and like(ServerName,"%PRD%") | stats max(offset) as offset, count(offset) as violations by ServerName, TimeSource | sort -offset | dbxlookup connection="CMDB" query="SELECT [T0].[COMPUTERNAME] As 'Server', [T0].AD_DOMAINNAME, [T0].[COMPUTERENVIRONMENT] As 'Environnement', [T0].[OSNAME] As 'OS', [T0].[Id_location] AS 'Location', [T2].[APPLICATIONNAME] As 'ApplicationName', [T2].[APPLICATIONCODE] As 'ApplicationCode', [T2].ID_MANAGEMENTTEAM As 'Management team (Application)' FROM [Publishing].[dbo].INFRASTRUCTURE_Server AS T0 JOIN [Publishing].[dbo].GENERAL_Relations AS T1 ON T0.ID_SERVER = T1.ID_PARENT AND T1.CIT_PARENT = 'Server' AND LINKTYPE = 'CST_APP2SRV' JOIN [Publishing].[dbo].APPLICATION_Application AS T2 ON T2.ID_APPLICATION = T1.ID_CHILD AND T1.CIT_CHILD = 'Application' LEFT JOIN [Publishing].[dbo].ORGANIZATION_BusinessLine AS T3 ON T2.ID_BUSINESSLINE_OWNER = T3.ID_BUSINESSLINE WHERE T0.AM_ASSIGNMENT='0' and T0.COMPUTERENVIRONMENT = 'PROD' and T1.CRSTATE = 'In Service' ORDER BY T0.COMPUTERNAME" Server as ServerName OUTPUT OS AS OS, ApplicationName AS Application`

It does not work. I also tried creating a lookup under Splunk DB Connect -> Data Lab -> Lookups, but it gets stuck at the first step, "Set Reference Search":

`index=indexwintimesynclogs | eval offset=Delta | where offset>0.0001 and like(ServerName,"%PRD%") | stats max(offset) as offset, count(offset) as violations by ServerName, TimeSource | sort -offset`

The search hangs with the message "No fields found, forget to run the search?" Please help me.
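As a cross-check that the SQL side works inside SPL at all, `dbxquery` plus a plain `join` avoids `dbxlookup` entirely. A sketch, with the SELECT trimmed to the columns the join needs (the full FROM/WHERE clauses from the query above go where the `...` is):

```
index=indexwintimesynclogs
| eval offset=Delta
| where offset>0.0001 AND like(ServerName,"%PRD%")
| stats max(offset) as offset, count(offset) as violations by ServerName, TimeSource
| join type=left ServerName
    [ | dbxquery connection="CMDB"
        query="SELECT T0.COMPUTERNAME AS ServerName, T0.OSNAME AS OS, T2.APPLICATIONNAME AS Application FROM ..."
      | fields ServerName OS Application ]
| sort - offset
```

Running the subsearch on its own first confirms the connection and the column names Splunk actually sees; quoted aliases like `'Management team (Application)'` are a common source of field-name mismatches once the rows land in SPL.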

How to find out if someone modified an index or deleted event data from an index?

I had an index called test_index where I was sending all test data. However, out of nowhere, today I see all the data gone from it. How can I find out which user tampered with this index?
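The _audit index records every search each user runs, so a piped `| delete` would show up there; a sketch (requires permission to search _audit):

```
index=_audit action=search info=completed
| search search="*delete*" search="*test_index*"
| table _time user search
```

It is also worth checking whether the data simply aged out: if the index's `frozenTimePeriodInSecs` (or its size limits) in indexes.conf were hit, Splunk rolls the buckets to frozen and they disappear without any user action.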

Help me with a rex regular expression

Hello All, I have a file with data:

--------------server1 2018-07-----SQL2008--
Number of Success Logins:
SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 13303433
FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 258857
Log_chat - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 214180
FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 184989
NT AUTHORITY\SYSTEM - WINDOWS AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 12684
FOR0001\Login112 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 1166
1SSA - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 841
Log_chat - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 271
FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 46
SQLLSS01 - SQL SERVER AUTHENTICATION - xx.xxx.x.xxx - xxxxxxx.xxx.xxxx.com - 37
SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - SQL SERVER AUTHENTICATION - ::1 - server01.citytown01.alls.com - 1
Number of Failed Logins:
Log_chat - - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 73
FOR0001\Login118 - - xx.xxx.xxx.xxx - xxxxxxx.xxx.xxxx.com - 10
Log_chat - - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 3
SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - - xx.xxx.xxx.xx - server01.citytown01.alls.com - 1
------------------------------------------

I need to extract the Success Logins and then the Failed Logins. I tried

`rex "^\s+(?<Success_Login>\S+)" | eval New=Success_Login | stats count by New`

but it extracts only the first login.
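rex keeps only the first match unless you pass max_match=0, and with the whole file in one event the ^ anchor needs (?m) to fire at every line. A sketch that first splits the two sections, then captures every login name in each (the group names are mine, and the login pattern is lazy so names containing spaces, like NT AUTHORITY\SYSTEM, still work):

```
...your base search...
| rex "(?s)Number of Success Logins:(?<success_block>.*?)Number of Failed Logins:(?<failed_block>.*)"
| rex field=success_block max_match=0 "(?m)^\s*(?<Success_Login>.+?)\s+-\s"
| rex field=failed_block max_match=0 "(?m)^\s*(?<Failed_Login>.+?)\s+-\s"
| stats count by Success_Login
```

Success_Login and Failed_Login come out as multivalue fields, so a `stats ... by` or an `mvexpand` on either one turns them into one row per login.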

When is it necessary to upgrade universal forwarders?

We are planning to upgrade our Splunk instances and are wondering if it's necessary to upgrade the forwarders as well, and if not, when? Both are running Splunk 7.0 and the environment is distributed, with clustered indexers.
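Whatever the upgrade policy, it helps to know which forwarder versions are actually deployed before deciding; the receiving indexers record them in their connection metrics. A sketch:

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(version) as forwarder_version latest(_time) as last_seen by hostname
| convert ctime(last_seen)
| sort forwarder_version
```

This inventories every forwarder still phoning in, so the oldest versions surface first when planning the rollout order.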