Channel: Questions in topic: "splunk-enterprise"

Relative time search and plotting in a timechart

I currently have a search that calculates the maximum, average, and median CPU usage of a server over the past two hours, using the NMON data models, in real time:

    | tstats `CPU_ALL(max)` from datamodel=NMON_Data_CPU where (nodename = CPU.CPU_ALL) (host=myhost) (CPU.frameID="*") (CPU.OStype="*") `No_Filter(CPU)` groupby _time, host prestats=true span=1m
    | stats dedup_splitvals=t max("CPU.cpu_PCT") AS CPU.cpu_PCT by _time, host
    | fields *
    | sort +str(host)
    | stats max("CPU.cpu_PCT") AS max, avg("CPU.cpu_PCT") AS avg, median("CPU.cpu_PCT") AS median by host
    | eval max=round(max,2)
    | eval avg=round(avg,2)
    | rename max as "Max (%)", avg as "Avg (%)", median as "Min (%)"

I would like to plot a timechart of the last two hours, where each point shows the avg, max, and median CPU usage over the two hours ending at that timestamp. For example, assuming the current time is 07:00, I would like the line chart to show:

- at 05:00: avg, max, median of CPU usage from 03:00 to 05:00
- at 05:01: avg, max, median of CPU usage from 03:01 to 05:01
- at 05:02: avg, max, median of CPU usage from 03:02 to 05:02
- ...
- at 06:59: avg, max, median of CPU usage from 04:59 to 06:59
- at 07:00: avg, max, median of CPU usage from 05:00 to 07:00

Is there a way to do that? Thanks in advance.
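One possible approach (a sketch, not taken from the thread): `streamstats` with a `time_window` can compute rolling statistics over a trailing window, so the per-minute values from the `tstats` search above could be turned into a rolling two-hour max/avg/median. The base search is copied from the question; the rolling portion and the exact function support in `streamstats` are assumptions to verify. Running the search over the last four hours ensures each point in the last two hours has a full two-hour window behind it.

    | tstats `CPU_ALL(max)` from datamodel=NMON_Data_CPU where (nodename = CPU.CPU_ALL) (host=myhost) `No_Filter(CPU)` groupby _time, host prestats=true span=1m
    | stats max("CPU.cpu_PCT") AS cpu_pct by _time, host
    | streamstats time_window=2h max(cpu_pct) AS rolling_max avg(cpu_pct) AS rolling_avg median(cpu_pct) AS rolling_median
    | eval rolling_max=round(rolling_max,2), rolling_avg=round(rolling_avg,2), rolling_median=round(rolling_median,2)
    | timechart span=1m max(rolling_max) AS "Max (%)" max(rolling_avg) AS "Avg (%)" max(rolling_median) AS "Median (%)"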

Alerts using Splunk Search Queries

Hi everyone. Does anyone have an idea of how to use conditional logic within a search query? I need to create an alert that fires as soon as the number of events in the past hour drops more than 20% below the average hourly number of events over the past 20 hours. So far I have a query that parses my log data and displays the number of events. PS: I'm pretty new to Splunk and still learning the basics. It would be great if anyone could help me with this. Thanks!
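One possible shape for the alert search (a sketch, not taken from the thread; `index=your_index` is a placeholder and the way the 20%-below-average condition is encoded is an assumption): count events in the last complete hour, compare against the average hourly count over the 20 hours before it, and configure the alert to trigger when the number of results is greater than zero.

    index=your_index earliest=-21h@h latest=@h
    | eval period = if(_time >= relative_time(now(), "-1h@h"), "last_hour", "baseline")
    | stats count(eval(period="last_hour")) AS last_hour_count count(eval(period="baseline")) AS baseline_count
    | eval baseline_hourly_avg = baseline_count / 20
    | where last_hour_count < 0.8 * baseline_hourly_avg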

What happened to all the Dashboards in the latest version of the App?

Hi, some of the dashboards from previous versions are missing: Billing, Azure AD, and the nice Topology feature. Can these be re-added? Thanks.

Whitespace before closing bracket: An Issue?

My forwarder app is (1) deployed, (2) reloaded, and (3) phoning in, but still no logs are coming in. Here's the inputs.conf deployed a few minutes ago:

    [monitor:///Some/Directory/*.logs ]
    index = some_index
    sourcetype = some_sourcetype
    blacklist = .(gz|tar|tgz|zip|bkz|arch|etc|tmp|swp|nfs|swn)$

Is the whitespace after `*.logs` and before the `]` our culprit? I need confirmation. Thanks in advance.

P.S. To those who would advise "why not just remove it and see what happens": yes, we will, but our DevOps process will not be able to pull the change into master until Monday and deploy until Tuesday next week. Thank you for understanding.

P.P.S. The directory does have logs in it.
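For comparison, here is the same stanza without the trailing space before the closing bracket (a sketch of the intended configuration; escaping the leading dot in the blacklist regex is also an assumption about intent, since an unescaped `.` matches any character):

    [monitor:///Some/Directory/*.logs]
    index = some_index
    sourcetype = some_sourcetype
    blacklist = \.(gz|tar|tgz|zip|bkz|arch|etc|tmp|swp|nfs|swn)$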

Search Head Cluster: Lookups definitions not replicated to indexers

I have a search head cluster with an indexer cluster. On a search head, I created a new file-based lookup. On the search head I ran a dummy search (one that didn't involve the indexers) and confirmed that the lookup works. However, when I run a search that does involve the indexers, the lookup fails. On my indexer, I found that the lookup file itself was successfully replicated (it appears somewhere under $SPLUNK_HOME/var/run/searchpeers/). Looking at search.log on the indexer, I see that it cannot find the lookup definition, so the lookup definition itself does not seem to be replicated. Are lookup definitions replicated to the indexers by default? On my indexer, in which file would the replicated lookup definitions appear?
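For reference, a file-based lookup definition is the transforms.conf stanza that points at the lookup file; a minimal sketch (the stanza name and filename are placeholders) looks like the following. Whether it reaches the indexers in the knowledge bundle generally depends on how the definition is shared (app or global permissions), which may be worth checking alongside the replication question.

    # transforms.conf in the app that owns the lookup
    [my_lookup_definition]
    filename = my_lookup.csv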

How to specify a list in WHERE condition?

Hi All,

* I want to display only results that are present in a given list, for example: `....... | xmlkv | stats count by "ApplicationFunction" | WHERE "ApplicationFunction" IN ("Price", "History", "Notify")`
* There are around 10 values that I want to keep out of 30-40 possible values, so the list in **IN** will have about 10 values.
* I want to use this to build an overview dashboard (pie chart).

**Is this possible with Splunk?** If yes, please help me; otherwise, please suggest another way to achieve the same result. Thanks in advance!
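One version-safe way to do this (a sketch, not taken from the thread; the leading `...` stands for the original base search) is to filter with the `search` command before the `stats`, using OR:

    ... | xmlkv
    | search ApplicationFunction="Price" OR ApplicationFunction="History" OR ApplicationFunction="Notify"
    | stats count by ApplicationFunction

The resulting count by ApplicationFunction can feed a pie chart directly. Recent Splunk versions also accept the more compact `| search ApplicationFunction IN ("Price", "History", "Notify")`.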

KVStore error displayed when upgrading to 7.2.x

When upgrading Splunk from 7.2.1 to 7.2.3, the following error appears while the migration script is running, and the upgrade fails: > ERROR while running mongod-fix-voting-priority migration. How can I get past this?

Remove duplicate rows in a table

I run my search and use the `table` command to get the results and fields into a table. The table I get looks like this:

    field1 | field2 | field3 | field4
    1      | 2      | 3      | 4
    1      | 2      | 3      | 4
    1      | 2      | 3      | 4
    1      | 2      | 3      | 4
    5      | 6      | 7      | 8
    5      | 6      | 7      | 8
    5      | 6      | 7      | 8
    5      | 6      | 7      | 8

The result I want is:

    field1 | field2 | field3 | field4
    1      | 2      | 3      | 4
    5      | 6      | 7      | 8

My search query is **mySearchCriteria | table field1,field2,field3,field4**
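One way to collapse the duplicates (a sketch): add a `dedup` on the four fields after the `table` command.

    mySearchCriteria
    | table field1, field2, field3, field4
    | dedup field1 field2 field3 field4

Alternatively, `| stats count by field1 field2 field3 field4 | fields - count` returns the same distinct rows.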

Reload App Failure

Hi team, could anyone tell me a query that shows which apps failed to reload after I run the command `splunk reload deploy-server`?

Alert search with subsearch

Hello, I have an alert that selects from a database, and whenever entries come back the alert is triggered. Now I would like to add a subsearch, so that the main part of the alert only triggers depending on whether the subsearch returns any results.

The main search for the alert:

    | dbxquery query="select * from zkpiv_lstm_score" connection="HANA_MLBSO"
    | table RCA_TO_REPORT SYSID HOST TIMESTAMP CPU_CONSUMERS MEMORY_CONSUMERS CPU SYSTEM_CPU MEMORY_USED MEMORY_ALLOCATION_LIMIT PING_TIME CONNECTION_COUNT BLOCKED_TRANSACTION_COUNT STATEMENT_COUNT COMMIT_ID_RANGE CS_READ_COUNT CS_WRITE_COUNT CS_MERGE_COUNT CS_UNLOAD_COUNT ACTIVE_THREAD_COUNT WAITING_THREAD_COUNT

When a result comes back it means our anomaly detection algorithms found an issue and the alert should be triggered; so far so good. But at the same time we also have an alert searching for system crash dumps. Obviously, when we find a crash dump we no longer need to alert on the anomalies. So what I would like to achieve is: if the subsearch for the crash dump returns results, then the main anomaly detection search should NOT return results and the alert should not be triggered.

The subsearch for the crash dump:

    | search [index=mlbso_changelog (crash_context OR crash_stack OR crash_shortinfo) sourcetype = BWP_crashdumps NOT "Table of contents" earliest=-60m latest=now | reverse]

How would I do this? Kind regards, Kamil
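One pattern that may work here (a sketch, not taken from the thread): embed the crash-dump search as a subsearch that returns only its event count via `return $count`. The subsearch is expanded before the main search runs, so the `where` clause only lets the anomaly results through when that count is zero.

    | dbxquery query="select * from zkpiv_lstm_score" connection="HANA_MLBSO"
    | eval crash_count = [ search index=mlbso_changelog (crash_context OR crash_stack OR crash_shortinfo) sourcetype=BWP_crashdumps NOT "Table of contents" earliest=-60m latest=now | stats count | return $count ]
    | where crash_count = 0
    | table RCA_TO_REPORT SYSID HOST TIMESTAMP CPU_CONSUMERS MEMORY_CONSUMERS CPU SYSTEM_CPU MEMORY_USED MEMORY_ALLOCATION_LIMIT PING_TIME CONNECTION_COUNT BLOCKED_TRANSACTION_COUNT STATEMENT_COUNT COMMIT_ID_RANGE CS_READ_COUNT CS_WRITE_COUNT CS_MERGE_COUNT CS_UNLOAD_COUNT ACTIVE_THREAD_COUNT WAITING_THREAD_COUNT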

Microsoft Windows Defender data not coming in

Hi, I already have the Log Analytics add-on installed; it is working fine and able to get OMS logs. Now a new requirement has come up to get Windows Defender ATP logs into Splunk. I have configured an input for it but am unable to receive data in Splunk.

1. Could it be because Log Analytics is using port 443 and the TA for Microsoft Windows Defender is trying to use the same port? If yes, how can I change the port?
2. Is it required to set a proxy?
3. Is it required to turn SSL on? When is that required, given that SSL is set to true by default?
4. I am getting the log below:

    2019-02-08 11:02:39,280 DEBUG pid=15232 tid=MainThread file=connectionpool.py:_make_request:400 | https://wdatp-alertexporter-eu.securitycenter.windows.com:443 "GET /api/Alerts//api/alerts?sinceTimeUtc=2019-02-01%2011:02:39.097000 HTTP/1.1" 404 1245

From this I thought it might be trying to use the same port 443. Also, does 404 here mean "not found"? The endpoint URL I am using is slightly different: https://wdatp-alertexporter-eu.securitycenter.windows.com/api/Alerts

@thambisetty, could you please give me some insight here? Thanks,

After log rotation, UF does not forward logs.

My environment: Splunk 7.2.3, Universal Forwarder (UF) 7.2.3. The UF monitors `/var/log/messages` and forwards it to Splunk, but after log rotation at `02-01-2019 00:05:00`, the UF no longer forwards it. In the internal log there is a message like the one below:

    02-01-2019 00:05:07.503 +0900 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/var/log/messages). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

But I wonder whether it is really possible that the first 256 bytes of the rotated file are the same as those of a file that was already read (the file from one generation ago). Another odd thing is that **there is a message just before the CRC error saying it will begin reading the file**, and **only the first 20 lines** from the beginning of the rotated file have been **indexed in Splunk**:

    02-01-2019 00:05:04.500 +0900 INFO WatchedFile - Logfile truncated while open, original pathname file='/var/log/messages', will begin reading from start.

I can't solve this by myself. **If somebody knows about this, please tell me.**
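For reference, the two settings the error message points at live in inputs.conf on the forwarder; a minimal sketch (assuming the monitor stanza matches the existing one) might look like this:

    [monitor:///var/log/messages]
    # include the source path in the checksum so files with identical leading bytes but different paths are treated separately
    crcSalt = <SOURCE>
    # or, instead of crcSalt, checksum more than the default 256 bytes of the file header
    initCrcLength = 1024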

How to get all matching and non-matching rows from a Splunk search and a lookup

Hi, I am working on a query where I have to match the responseCode from my search against the ResponseCode in a lookup I created. The lookup contains the response code and its description. There are a few cases where a response code in the search does not match anything in the lookup table. I want the count of all response codes: if a code matches the lookup, show its description; if it doesn't match, the description can be null, but I still want the count. My current search does not return counts for the unmatched response codes:

    index="test" sourcetype="test_log"
    | dedup time, host, source, _raw
    | lookup Response_Codes_Desc ResponseCode
    | stats count by ResponseCode Description
    | sort - count

Could someone please help with this?
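One possible fix (a sketch, not taken from the thread): fill the empty descriptions before the `stats`, so unmatched response codes keep their rows and counts. The output field name `Description` follows the original query; the `"Unknown"` placeholder value is an assumption.

    index="test" sourcetype="test_log"
    | dedup time, host, source, _raw
    | lookup Response_Codes_Desc ResponseCode OUTPUT Description
    | fillnull value="Unknown" Description
    | stats count by ResponseCode, Description
    | sort - count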

Radial Gauge coloring question

Suppose that out of 100, 75 is compliant and 25 is not. I would like to dynamically show 75 as yellow and 25 as red, and if it is 100% compliant, show green. How can this be done with a radial gauge?

Any difference in information levels using REST API input vs the Workday add-on

Hello team, using the Workday add-on, the logs in some cases do not have the level of detail we see in the Workday UI (for audit). For example, we may see that an account has been changed/edited, but not which privilege group was added to it, etc. A question was raised whether we might get more detailed information by using the REST API directly. However, my gut feeling is that the Workday add-on already uses the same REST API endpoint(s), so the level of information seen through the add-on would be the same as with any bespoke work done against the REST API. Is that right?

gcp splunk error: Unexpected error "" from python handler: "Daily limit exceeded. Try again later.". See splunkd.log for more details.

I am getting the following error while using the GCP Splunk add-on to integrate GCP audit logs:

    02-08-2019 11:24:44.073 +0530 ERROR AdminManagerExternal - Stack trace from python handler:
    Traceback (most recent call last):
      File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 130, in init
        hand.execute(info)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 594, in execute
        if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
      File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunktalib/common/pattern.py", line 44, in __call__
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/resthandlers/cloud_monitor_metrics.py", line 39, in handleList
        metric_descs = monitor.metric_descriptors(project)
      File "/opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/bin/splunk_ta_gcp/modinputs/cloud_monitor/wrapper.py", line 128, in metric_descriptors
        raise ValueError('Daily limit exceeded. Try again later.')
    ValueError: Daily limit exceeded. Try again later.

I need help troubleshooting this issue!

Splunk App for VMware - Licence

Hi, I have installed this app and configured it using the add-on. I was able to see the data; however, I am exceeding the trial licence's daily limit of 2 GB. Currently I have 5 GB of data coming in, and as a result I cannot view anything. Can you please advise how I can reduce what is coming in from the add-on, so that I can use the app and evaluate whether it suits our needs? Thanks, Abdul

Migrate from single-site indexer cluster to multi-site

Hi guys. I had a single-site indexer cluster with replication_factor = 3. I migrated to a multisite cluster with these parameters:

    site_replication_factor = origin:2,total:3
    site_search_factor = origin:1,total:2
    constrain_singlesite_buckets = false

After the migration I have 4 replicated bucket copies: 3 in the old site and 1 in the new site. I have already added some configuration manually, like this:

    search_factor = 2
    replication_factor = 2

But there are still 3 copies in the old site. What can I do to remove the extra data from the cluster? Splunk Enterprise version: 7.2.1, build be11b2c46e23.
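One thing that may be worth checking (a sketch, not taken from the thread): the cluster master has a CLI command for removing bucket copies that exceed the configured replication and search factors; whether it applies here depends on the multisite policy in effect.

    # run on the cluster master
    splunk remove excess-buckets
    # or restrict it to a single index (the index name is a placeholder)
    splunk remove excess-buckets my_index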

Issue with saved search access using a custom role on a custom app

Hi, we have a Splunk server instance and we have developed several custom apps. To limit access, we are creating custom roles restricted to the related custom app. Everything is working fine except viewing saved search results. Every time a user with the custom role tries to view a saved search, the result is a "Web page not found" error. I have already modified permissions to grant the custom role read and write access, and I changed the savedsearches.conf of the custom app to dispatch as the user and with the custom app as the dispatch app. I also tried changing the capabilities of the custom role, but the only one that seems to fix the issue is admin_all_objects. Assigning that capability to the custom user lets them see all the other apps as well, which is not acceptable. Any suggestions? Thanks and regards, Tomaso

Create a dashboard from multiple CSV files by using a lookup file with multiple dropdowns

Hi All,

**I have data in multiple CSV files. I would like to create a dashboard from the CSV files (dynamic values) by using a lookup file (static values). The dashboard should show the daily inbound and outbound traffic usage of each node.**

First CSV, node1.csv:

    Time            | Node Name | Inbound    | Outbound   | Received Bandwidth | Transmit Bandwidth
    1/23/2019 15:03 | node1     | 170323.766 | 208175.859 | 20.00 Mbps         | 20.00 Mbps
    1/23/2019 15:08 | node2     | 58398.6836 | 117372.133 | 20.00 Mbps         | 20.00 Mbps

Second CSV, node2.csv:

    Time            | Node Name | Inbound   | Outbound   | Received Bandwidth | Transmit Bandwidth
    1/23/2019 15:03 | Node2     | 133894.9  | 171775.438 | 100.00 Mbps        | 25.00 Mbps
    1/23/2019 15:08 | node2     | 78438.25  | 156584.391 | 100.00 Mbps        | 25.00 Mbps

The lookup file is also in CSV format, lookup.csv:

    SNO | uid       | start_hour | end_hour | receivebandwidth | transmitbandwidth | node  | location | tiers  | threshold | start_wday | end_wday
    1   | Node1.csv | 8:00       | 17:00    | 40               | 40                | node1 | US       | tiers1 | 70%       | 1          | 7
    2   | node2.csv | 8:00       | 17:00    | 40               | 40                | node2 | Canada   | tiers2 | 70%       | 1          | 7
    3   | node3.csv | 0:00       | 23:59    | 10               | 10                |       | India    | tiers3 | 70%       | 1          | 7

I have tried the following, but with no luck:

    | eval date_wday=strftime(_time,"%u")
    | eval start_h=strptime(start_hour,"%H:%M")
    | eval start_e=strftime(start_e,"%H:%M")
    | eval end_h=strptime(end_hour,"%H:%M")
    | eval end_e=strftime(end_e,"%H:%M")
    | where time_custom>="start_h" AND time_custom< "end_h" AND date_wday>= "start_wday" AND date_wday<= "end_wday"
    | eval Outtraffic= Outbound/1048576
    | timechart span=1d MAX(Outtraffic) AS MAXOuttraffic, values(Transmit Bandwidth) as MAXOUT-Bandwidth

I ingest the data via inputs.conf like this:

    [monitor:///C:/solar/*.csv]
    disabled = false
    host_regex = solar\\(?\w+.+)
    index = main
    sourcetype = lookup
    host = vm1

Thanks in advance. Regards, Karteek Korrapolu
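One possible direction (a sketch, not taken from the thread; the index, sourcetype, and field names follow the inputs.conf and CSV headers above, and it is assumed that lookup.csv is uploaded as a lookup table file): enrich each event with the lookup row for its node, filter on the lookup's hour and weekday windows, and then timechart the outbound traffic per node.

    index=main sourcetype=lookup source="*node*.csv"
    | lookup lookup.csv node AS "Node Name" OUTPUT start_hour end_hour start_wday end_wday threshold location
    | eval event_hour = tonumber(strftime(_time, "%H"))
    | eval event_wday = tonumber(strftime(_time, "%u"))
    | eval start_h = tonumber(mvindex(split(start_hour, ":"), 0))
    | eval end_h = tonumber(mvindex(split(end_hour, ":"), 0))
    | where event_hour >= start_h AND event_hour < end_h AND event_wday >= tonumber(start_wday) AND event_wday <= tonumber(end_wday)
    | eval out_mb = Outbound / 1048576
    | timechart span=1d max(out_mb) AS max_outbound_mb by "Node Name"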