Channel: Questions in topic: "splunk-enterprise"

XML validation: Option "fields" is deprecated in my dashboard

I am upgrading Splunk from version 6.3 to the latest release and am seeing an XML validation issue in one of my dashboards. Can anyone suggest an alternative? The dashboard contains `<option name="fields">$show_fields$</option>` and the validator shows the warning: Option "fields" is deprecated.
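A possible replacement, as a rough sketch only (the search query shown is a placeholder): newer Simple XML versions use a <fields> child element on the table or event element instead of the deprecated option, and it should accept a token as well:
<table>
  <search>
    <query>index=_internal | stats count by sourcetype</query>
  </search>
  <fields>$show_fields$</fields>
</table>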

Pagination cursor with GET REST API

If I set up the REST API modular input, it properly reads the API, but I can't figure out how to get it to paginate. The API response contains a field called next-cursor, whose value should be specified in the NEXT API query that the REST modular input makes. I'm thinking it is either the Response Handler Arguments or the token substitution in the Endpoint URL, but it's not clear how to use either of these.

ldapsearch not returning list of all AD groups and users

I'm trying to create a lookup of the domain, AD group, and user using the `ldapsearch` command from the `Active Directory Add-on`. The query below is scheduled as a report and generates the lookup. If I manually verify the data, some groups, and all users from those groups, are missing from the lookup. `| ldapsearch domain="test_domain" search="(&(objectClass=group))" attrs="sAMAccountName,member,groupType,sAMAccountType" | search groupType=SECURITY_ENABLED | spath | rename sAMAccountName as sAMAccountName1 | mvexpand member | ldapfetch domain="test_domain" dn="member" attrs="sAMAccountName,distinguishedName"` If I include the group names in the query, it generates the required lookup, but for the specified groups only. `| ldapsearch domain="test_domain" search="(&(objectClass=group)(|(cn=grp_prefix1*)(cn=grp_prefix2*)))" attrs="sAMAccountName,member,groupType,sAMAccountType" | search groupType=SECURITY_ENABLED | spath | rename sAMAccountName as sAMAccountName1 | mvexpand member | ldapfetch domain="test_domain" dn="member" attrs="sAMAccountName,distinguishedName"` I'm not able to figure out why the first query is not returning results for particular groups. I also checked that groups are not being ignored or skipped in the lookup due to some limit or alphabetical ordering. Let me know if any other details are required.
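One quick check, as a sketch: count how many security-enabled groups the base ldapsearch returns before the mvexpand/ldapfetch stages and compare it with AD. If groups only disappear after mvexpand, the mvexpand memory limit (max_mem_usage_mb in limits.conf) silently truncating output is a common culprit.
| ldapsearch domain="test_domain" search="(&(objectClass=group))" attrs="sAMAccountName,groupType" | search groupType=SECURITY_ENABLED | stats count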

Help me create a lookup file in the Lookup Editor

Greetings! Please help me create a lookup file in the Lookup Editor. I currently see a field called host that is identified by a source IP, and I want to add another column that describes that IP with a name (e.g. xyz), so I can see events by name and not only by IP. For example, if the IP is 10.12.1.5 I want to see its name as well. How can I do this using the Lookup Editor? Kindly help and guide me, I'm new to Splunk. Thank you!
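A minimal sketch, assuming a lookup file named host_names.csv is created in the Lookup Editor with two columns, host (the IP) and name (the file and field names are placeholders).
Contents of host_names.csv:
host,name
10.12.1.5,xyz
Then in a search:
index=your_index | lookup host_names.csv host OUTPUT name | table _time host name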

_indextime is 5 hrs ahead of event time (_time)

Hi, we have Splunk Enterprise 7.2.6 in our environment. I noticed there are latencies (differences between _time and _indextime) of 1 hour to 10 hours. My Splunk heavy forwarders are in the GMT timezone, so I set TZ = UTC for a few of the sourcetypes in props.conf on the HF, and it worked. I am still seeing a time difference of 5 to 10 hours on a few hosts for specific sourcetypes. I am unsure what is creating the latency in _indextime. A screenshot was attached for reference. Can someone please assist me in fixing this issue?
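To narrow down where the offset comes from, it may help to measure the lag per host and sourcetype directly (the index name is a placeholder):
index=your_index | eval lag_hrs=round((_indextime - _time)/3600, 1) | stats count min(lag_hrs) max(lag_hrs) by host sourcetype
If the lag is a constant offset for a given sourcetype, it is usually a timezone mismatch; TZ only takes effect in props.conf on the first full Splunk instance that parses the data (HF or indexer), for example (sourcetype and timezone are placeholders):
[your_sourcetype]
TZ = US/Eastern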

sort multi value field by word length

Is it possible to sort a multi-value field by word length? If yes, then how?
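One possible approach, as a sketch (myfield and the id field used to regroup the values are placeholders): expand the values, compute each value's length, sort, and rebuild the multi-value field with stats list(), which preserves the sorted order:
... | mvexpand myfield | eval wordlen=len(myfield) | sort 0 wordlen | stats list(myfield) AS myfield by id
On newer releases that support mvmap, prefixing each value with its zero-padded length, applying mvsort, and then stripping the prefix is an alternative that avoids mvexpand.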

Distributed management console

What is the DMC in Splunk? Why would I need to install it, and how is it installed? Is it an app? Is the Monitoring Console the same thing? I'm configuring a distributed environment, so in the general settings should I configure it as distributed rather than standalone? When I do, I get something like an error saying not to configure the DMC on a production search head.

Remove custom CSV file of threat intelligence

Hi, I uploaded a custom CSV file containing IP addresses, referring to the link "https://docs.splunk.com/Documentation/ES/latest/API/ThreatIntelligenceAPIreference". It looks like I have to remove data rows one by one, and there are 400 IPs in it. So do I have to execute this command 400 times, or is there some other way to remove these feeds from the threat intelligence KV store?

Total amount of IPs

Hello. I have this query: `index=XX | stats count by ipaddress`. It creates a table showing how many log entries there are for each IP address, but what I need to know is how many different IP addresses there are. I don't care how many entries there are for each IP; I just need the total number of distinct IP addresses. Any ideas? Thanks in advance! Regards,
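A distinct count should give that directly (ipaddress as in the query above):
index=XX | stats dc(ipaddress) AS distinct_ip_count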

Query Help

Hi, I am trying to search logs from a specific source with a specific name, and then search for the IP found in that first search across all indexes. Example: `index=firewall name="malicious IP"` (this returns a log containing an IP address, and I want to search for that IP address in all indexes). Thanks in advance. Best, Gozde
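One common pattern is a subsearch, sketched below; src_ip is an assumption and should be replaced with whatever field holds the IP in the firewall events. Renaming the field to query makes the subsearch return the bare IP values, so they match events in any index regardless of field name:
index=* [ search index=firewall name="malicious IP" | dedup src_ip | rename src_ip AS query | fields query ]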

Data retention

Where should data retention be set: on the indexers, or, in my distributed environment, on the search head? I've seen that it must be set in indexes.conf, but that file is only present in etc/system/default, and we know we should not edit files in the default folder. How can I do that? Do I create a file in local, and will Splunk then use it instead of the default folder?
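Retention is configured on the indexers (or pushed from the cluster master in an indexer cluster), not on the search head, and default files are never edited; a new file in a local directory overrides them. A minimal sketch, with the index name and the 90-day value as placeholders:
# $SPLUNK_HOME/etc/system/local/indexes.conf on each indexer (or in an app's local directory)
[my_index]
frozenTimePeriodInSecs = 7776000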

Splunk App for Infrastructure on Search Head

Splunk App for Infrastructure data collection on a search head.
Followed: https://docs.splunk.com/Documentation/InfraApp/2.0.0/Admin/ManualInstalLinuxUF
Environment: Search Head 7.3.0, Indexer 7.3.0
Setup: collectd -> localhost UDP port 5000 -> indexer (via system/local/outputs.conf)
Issue: Data flows from collectd to localhost UDP port 5000 (verified with tcpdump, including viewing the data), and the search head forwards the data to the indexer. The indexer has the add-on installed as instructed in the documentation, but I get the following error: "Metric value = unset is not valid for source=5000 sourcetype=em_metrics_udp. Metric event data with an invalid metric value would not be indexed. Ensure the input metric data is not malformed." Thanks. Jeremy

Spath for getting date json field

Hi all, I'm trying to use spath to extract JSON data where a field name represents a date: { "field1": { "2019-01-02": [] } }, but when I try **spath input=message output=result field1.2019-01-02** it's not working. I also tried replacing the field name with other text (i.e. replacing "2019-01-02" with "targetDate") and passing the replaced result in the spath input parameter, but it is still not working. Thank you for your help.
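One workaround, as an untested sketch (message is assumed to contain the raw JSON): rewrite the date key to a fixed name with replace() before calling spath, so the path no longer contains the date:
... | eval message=replace(message, "\"\d{4}-\d{2}-\d{2}\":", "\"targetDate\":") | spath input=message output=result path=field1.targetDate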

Rest command for Index Size

Hi, I am using the REST command below to create 30+ indexes, but they are getting created with the default size of 500 GB. How can I pass an argument to restrict the total index size to 5 GB?
curl -k -u admin:pass https://localhost:8089/services/data/indexes -d name=mymetricsindex -d datatype=metric
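The data/indexes endpoint accepts the same settings as indexes.conf, so the overall size cap can be passed as maxTotalDataSizeMB (5 GB = 5120 MB); a sketch based on the command above:
curl -k -u admin:pass https://localhost:8089/services/data/indexes -d name=mymetricsindex -d datatype=metric -d maxTotalDataSizeMB=5120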

Does Splunk have an app you're supposed to use for the Splunk Add-on for F5 Big-IP?

Does Splunk have an app you're supposed to use with the Add-on for F5 BIG-IP to view dashboards of the data it collects, or is there just the add-on?

Help with keeping data formatting after a scheduled search

Hi, in my dashboard I format the results as shown below when the search is directly embedded in the dashboard. But if I use a scheduled search and call it from my dashboard with a loadjob command, I lose the data formatting defined in the XML. Is that normal? What do I have to do to keep the data formatting even when a scheduled search is used? Thanks.

Infoblox modular input

Hi all, I'm trying to create a new input for Infoblox with the Infoblox BloxOne Threat Defense Cloud Input Add-on (https://splunkbase.splunk.com/app/3860). When I click Next while creating the input, this message appears: *Encountered the following error while trying to save: Invalid configuration specified: global name 't1' is not defined*. (Screenshot attached.) What are the correct values for the input? Also, can someone provide an example of the inputs.conf file for Infoblox? Thanks in advance.

Why am I getting Access is denied errors on indexers?

I have a cluster with 2 indexers, RF=2, running Splunk version 7.1.2 on Windows Server 2012. I often get the following error: Indexer Clustering: too many bucket replication errors to target peer. In splunkd.log on both indexers I found similar errors:
ERROR TimeInvertedIndex - Failed to rename from="C:\Splunk\var\lib\splunk\audit\db\hot_v1_278\.rawSize_tmp" to="C:\Splunk\var\lib\splunk\audit\db\hot_v1_278\.rawSize": Access is denied.
ERROR LMApplyResponse - failed to rename C:\Splunk\var\lib\splunk\fishbucket\rawdata\1322324208-C:\Splunk\var\lib\splunk\fishbucket\rawdata\1322324208.old [1,1,1] (Access is denied.)
ERROR HotBucketRoller - Unable to rename from='C:\Splunk\var\lib\splunk\_internaldb\db\hot_v1_572' to='C:\Splunk\var\lib\splunk\_internaldb\db\db_1570774736_1570503894_572_F0C749EE-B861-4598-B107-5358365E79D8' because The system cannot find the file specified.
and so on, and no buckets are in fixup state. Splunk is installed under a local OS admin account (no AD). I checked file permissions for the fishbucket\rawdata\*.old files and found that no user or group has any access to them, not even SYSTEM. My steps:
1. Removed the fishbucket\rawdata\1322324208.old file.
2. Executed icacls.exe commands to fix the Splunk directory permissions: icacls.exe "C:\Splunk" /inheritance:e /T and icacls.exe "C:\Splunk" /T /Q /reset
3. Initiated a rolling restart on the master node. The indexers entered maintenance mode and restarted.
This caused a huge number of fixup tasks after the restarts succeeded and fixed all issues for some time, but today I got the same issues with other buckets. Why does Splunk still create files with bad access rights? What can I do in this situation? Maybe reinstall Splunk? Note: I know that running Splunk on Windows is a pain, and I can't move to Linux servers. I have 2 other clusters with indexers running on Windows and they do well.

How to get timechart to work in a search with multiple calculations

Hello, I am trying to make a timechart for my field "finalProfit" in the search below. I have tried doing timechart per_hour(finalProfit), eval commands in my timechart search, and a number of other options but I'm having no luck. If anyone can help me reorganize the search to work with the timechart command I would greatly appreciate it. Thanks! index=main sourcetype=marketapi | foreach name [ eval baseprice = pricePerOne] | eval savageDraught = case(name=="Wolf Blood", baseprice *4, name=="Blue Umbrella Mushroom", baseprice *4, name=="Bottle of River Water", baseprice *4, name=="Weeds", baseprice *1, name=="Monk's Branch", baseprice *16, name=="Moss Tree Sap", baseprice *16, name=="Powder of Darkness", baseprice *2, name=="Powder of Flame", baseprice *10, name=="Powder of Time", baseprice *6, name=="Red Tree Lump", baseprice *10, name=="Sky Blue Flower", baseprice *2, name=="Spirit's Leaf", baseprice *2, name=="Sunrise Herb", baseprice *1, name=="Thuja Sap", baseprice *12, name=="Violet Flower", baseprice *2, name=="Volcanic Umbrella Mushroom", baseprice *2) | eval savageDraught = savageDraught/2.5 | search savageDraught!='' | eval hammertime=_time | bucket span=1h hammertime | stats sum(savageDraught) AS craftedCost by hammertime | appendcols [search index=main sourcetype=marketapi name="Savage Draught" | eval Time=_time | eval purchaseCost = pricePerOne ] | eval profit=purchaseCost - craftedCost - 100000 | eval finalProfit=profit*.85
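One way to make this timechart-friendly, as a rough sketch built from the search above (only two of the case() branches are shown, and averaging pricePerOne per hour is an assumption): let timechart do the hourly bucketing on both legs instead of bucket plus stats by hammertime, then compute the profit fields on the joined columns:
index=main sourcetype=marketapi | eval baseprice=pricePerOne | eval savageDraught=case(name=="Wolf Blood", baseprice*4, name=="Sunrise Herb", baseprice*1) | eval savageDraught=savageDraught/2.5 | where isnotnull(savageDraught) | timechart span=1h sum(savageDraught) AS craftedCost | appendcols [ search index=main sourcetype=marketapi name="Savage Draught" | timechart span=1h avg(pricePerOne) AS purchaseCost ] | eval profit=purchaseCost - craftedCost - 100000 | eval finalProfit=profit*0.85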

How to format results by joining column values based on another column

Hi everyone, I am new to Splunk and I have a requirement as given below. I have a result, produced by combining two different input lookups, that looks like this:
Country index servers
Argentina win_ar serverA
Argentina win_ar serverB
Argentina win_ar serverC
Argentina win_ar serverD
Barbodos win_bb serverE
Barbodos win_bb serverF
Barbodos win_bb serverG
Bermuda win_bm serverH
Bermuda win_bm serverI
Bermuda win_bm serverJ
Bermuda win_bm serverk
I am looking for a way to combine this result so it looks like the table below, so that I can use it for dashboard creation. I tried nomv, and it worked for one row, but I want to group on the Country column and combine the servers column.
Country index servers
Argentina win_ar serverA,serverB,serverC,serverD
Barbodos win_bb serverE,serverF,serverG
Bermuda win_bm serverH,serverI,serverJ,serverK
Regards, Naresh
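A possible approach, as a sketch (assuming the field names Country, index, and servers from the tables above):
... | stats values(servers) AS servers by Country index | eval servers=mvjoin(servers, ",")
stats values() dedups and sorts the server names; list() can be used instead if the original order must be kept.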

