Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

I have been trying to get the GB used over 30 days per index, but the search keeps coming back with only the first 10 indexes. I need it for all indexes. What do I need to do to make that happen?

That is the search I have going:

index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=30d useother=0 sum(GB) by idx
| rename idx as Index, sum(GB) as GIGbyte
| sort - GIGbyte

I'm not certain why it brings back only the first 10 indexes.
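A likely culprit is the default series limit on timechart: the split-by clause caps output at 10 series, and useother=0 hides the "OTHER" bucket instead of showing the rest. A sketch of the same search with the limit lifted (limit=0 means unlimited):

```
index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=30d limit=0 useother=0 sum(GB) by idx
| rename idx as Index, sum(GB) as GIGbyte
| sort - GIGbyte
```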

Fully removing search head clustering

Hello Splunkers, I currently have three search heads in a search head cluster. I want to remove each of them from clustering. Will removing the entries below from server.conf and restarting each instance do the trick, or not?

[replication_port://xxxx]

[shclustering]
disabled = 0
mgmt_uri = https://XXX.xxx.xxx.x:8089
pass4SymmKey = $1$IfXxB0RZxuxw==
id = BFE7xFF-00x7-4x9E-845C-24591xB19

Note: the indexers are not clustered. Please let me know if I am missing something.
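One alternative worth considering (a sketch, not verified against your version; back up server.conf first): rather than deleting the stanza, explicitly disable clustering on each member and restart, which avoids leaving a half-removed stanza behind:

```
# server.conf on each former cluster member (sketch)
[shclustering]
disabled = true
```

The [replication_port://xxxx] stanza can then be removed as well once clustering is disabled.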

Get hosts' last activity when there are duplicate host names

We have a bunch of hosts. Some of them, not all, are sort of duplicates, in that some are just the host name and some are the FQDN. Example:
- Server012
- Server012.example.tld

I know how to use the metadata command to get a list of hosts and the last time they were seen. I also know how to strip out the domain part of the FQDN:

| metadata type=hosts
| rex field=host "^(?<hostname>.+)\.example\.tld"
| table hostname, lastTime

The question: how can I collapse the duplicates and get the most recent time? Example data:

| host | lastTime |
| Server012 | 1541663236 |
| Server012.example.tld | 1541689264 |

I want a query that would return:

| Server012 | 1541689264 |

Any insights?
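One sketch (assuming all duplicates differ only by the .example.tld suffix): make the suffix optional in the rex so bare host names also populate the field, then take the max per name:

```
| metadata type=hosts
| rex field=host "^(?<hostname>.+?)(\.example\.tld)?$"
| stats max(lastTime) as lastTime by hostname
```

With the example data above, Server012 and Server012.example.tld both collapse to hostname=Server012, and stats keeps the larger (more recent) lastTime.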

Is there a way to ignore the 2nd value in a table cell?

Hi, in our search query output we add some columns from a CSV file. Sometimes there are duplicate entries in the CSV file, which causes multiple values to display in a single table cell. Is there a way to remove/ignore duplicate values in a table cell? ![alt text][1] [1]: /storage/temp/256576-duplicates.png
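If the duplicates show up as a multivalue field, the eval function mvdedup removes repeated values. A sketch, where "..." stands for the existing search and my_lookup_field is a placeholder for whichever column comes from the CSV:

```
... | eval my_lookup_field = mvdedup(my_lookup_field)
    | table host, my_lookup_field
```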

File can't be indexed

Hi Team, I have one file (shown in the picture) that I am unable to capture and index. ![alt text][1] [1]: /storage/temp/257575-1.png

I have this configuration in my inputs.conf:

[monitor://D:\eo\contLive\logs\job*.log]
sourcetype = progress:inter
index = progress
crcSalt =
disabled = false

[monitor://D:\eo\contLive\logs\*.log]
sourcetype = progress:contlive
index = progress
disabled = false

The sourcetype progress:inter was created in a specific TA (props.conf below):

[progress:inter]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
CHARSET = UTF-8
REPORT-intervention-status = REPORT-intervention-status
category = Structured
disabled = false
TIME_FORMAT = %d/%m/%Y %H:%M:%S.%3N

I already tried keeping only the input below, and the specific file (jobstatus.log) was still not indexed:

[monitor://D:\eo\contLive\logs\*.log]
sourcetype = progress:contlive
index = progress
disabled = false

Many thanks for your help
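One thing that stands out (a guess, not a confirmed diagnosis): crcSalt with an empty value. The usual setting is crcSalt = <SOURCE>, which mixes the full file path into the initial-CRC check so files that share the same leading bytes are still tracked as distinct. A sketch of the first stanza with that value:

```
[monitor://D:\eo\contLive\logs\job*.log]
sourcetype = progress:inter
index = progress
# <SOURCE> adds the full file path to the CRC so files with
# identical leading bytes are not mistaken for already-read files
crcSalt = <SOURCE>
disabled = false
```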

ERROR SummarySizeManager

Hi there, I am seeing the following error in Splunk:

ERROR SummarySizeManager - Cannot compute size on disk for dir="/opt/splunk/var/lib/splunk//datamodel_summary/414_C827F652-385D-4F20-B8FE-45F266F00C90": No such file or directory

There are a number of these during the week; however, on weekends, if there is a rolling restart or upgrade, the system is flooded with these messages. Has anyone come across this before? Thanks!

Expected field value for x_exception_id not present

Hello, I'm installing the ProxySG app for a client. I have it mostly covered, but the only panels I can't get to work are the ones expecting the value x_exception_id=virus_detected. I've found lots of x_exception_id=policy_denied events instead, with malware detections associated with them, but no "virus_detected", so I can't populate those panels. The fields I'm sending from ProxySG's side are:

date time time-taken c-ip cs-username cs-auth-group x-exception-id sc-filter-result cs-categories cs(Referer) sc-status s-action cs-method rs(Content-Type) cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-uri-extension cs(User-Agent) s-ip sc-bytes cs-bytes x-virus-id x-bluecoat-application-name x-bluecoat-application-operation

Thanks for the help!

Are '.' characters in KV lookup field names supported?

I notice that whenever I create a KV store lookup definition with a field name containing a '.' character, it does not work properly. Surrounding the field name with quotes does not help. Writing to the lookup with the outputlookup command results in the message: "Could not write to collection 'my_collection': An error occurred while saving to the KV Store." When I remove the '.' character(s) from the field names in the lookup definition, it works again.
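A common workaround (a sketch; the field and lookup names here are hypothetical): rename the dotted fields to underscore versions before writing, since the KV store's backing store is picky about '.' in field names:

```
... | rename "product.id" as product_id, "order.total" as order_total
    | outputlookup my_collection_lookup
```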

Use subsearch for two correlated queries

I need two searches. One: based on the hosts in Index2, run a query on Index1 and show only the hosts that appear in both, i.e. Server1–Server3 (excluding Server4). Two: based on the hosts in Index2, run a query on Index1 and show only the hosts that are NOT in Index2, i.e. only Server4.

P.S. This is an enterprise-class system, the hostname columns are a moving target, and the hostnames live in different field names in each index.

Index1: Server1, Server2, Server3, Server4
Index2: Server1, Server2, Server3
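Both cases can be sketched with a subsearch whose results are expanded into an OR'd list of field=value terms in the outer search. hostA and hostB below are placeholders for the actual hostname fields in Index1 and Index2. Query one (only hosts present in both indexes):

```
index=Index1
    [ search index=Index2 | fields hostB | rename hostB as hostA ]
| stats count by hostA
```

Query two (only Index1 hosts absent from Index2) negates the same subsearch:

```
index=Index1 NOT
    [ search index=Index2 | fields hostB | rename hostB as hostA ]
| stats count by hostA
```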

Windows Infrastructure app - Active Directory Error

Hi Experts, I have installed and configured the Splunk App for Windows Infrastructure on our search head per the instructions in the Splunk docs. I can see all events in the indexes (wineventlog, windows, msad, perfmon, etc.), but I can't see any Active Directory related information in the app. When I run the "Customize Feature" option, I see the results below:

Active Directory: Domains not found. Detecting Domain Controllers ...
Active Directory: Domains not found. Detecting Domain Controllers ...
Active Directory: Domain Controllers not found. Detecting DNS ...
Active Directory: Domains not found.

Any idea what might be the reason? Many thanks.

File not indexing

Hi Team, I have an issue when I try to index one file (the one in the picture). ![alt text][1] [1]: /storage/temp/257574-1.png

My inputs file looks like this:

[monitor://D:\eo\contLive\logs\job*.log]
sourcetype = progress:inter
index = progress
crcSalt =
disabled = false

[monitor://D:\eo\contLive\logs\*.log]
sourcetype = progress:contlive
index = progress
disabled = false

The sourcetype progress:inter was successfully created in a specific TA and deployed to all my servers. All the files are indexed except jobStatus.log. I already tried removing jobstatus.log from my input and catching *.log with the sourcetype progress:contlive, but the file was still not indexed. In splunkd.log on the UF I can see this:

11-08-2018 15:36:26.387 +0100 INFO WatchedFile - Will begin reading at offset=5797312 for file='D:\eo\contLive\logs\jobStatus.log'.

But it is still not indexed. Can you help me? Many thanks

Cannot push new index cluster bundle after rollback

I recently had to roll back a config bundle on our index master, which runs Windows Server 2016. Now every time I try to push the updated bundle I get this error:

Recent rollback bundle operation failed to remove contents under path=C:\Program Files\Splunk\etc\master-apps. Please untar active bundle manually from bundle=[id=DEDFCEB031E7377224B4E3EBCD608C80, path=C:\Program Files\Splunk\var\run\splunk\cluster\remote-bundle\d52190c329a27ebe07129537c6adbd76-1541522317.bundle] onto path=C:\Program Files\Splunk\etc\master-apps and delete dirty marker file=C:\Program Files\Splunk\etc\master-apps.dirty in order to proceed with new config bundle push.

I am not able to find the master-apps.dirty file that it refers to, and I have tried restarting the index master with no luck. What should I do to correct this error?

How to manually control app config reload of a specific deployment client

Hello, I have several critical UFs/HFs that provide an equivalent service in a load-balanced topology. I would like to leverage the deployment server/client feature to push configuration to them, but I'd like to control when a given DS client actually downloads/applies the config. For example, I have 3 nodes acting as load-balanced HTTP Event Collectors. I would like to trigger the update through the DS on one node and wait for it to apply the change before proceeding with the others. Questions: - Is it possible to disable automatic config updates on a given instance (Splunk Enterprise or UF)? - Is there a command that can be run locally or remotely to trigger the usual DS client refresh on demand/manually? Thanks.
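One hedged approach: toggle the deployment client itself in deploymentclient.conf on the node you want to hold back, then re-enable it (and restart splunkd, or wait for the next phone-home interval) when you are ready for that node to pull the new config. The targetUri below is a placeholder:

```
# deploymentclient.conf on the node being held back (sketch)
[deployment-client]
# true = stop this node phoning home to the DS;
# flip back to false and restart when ready to update
disabled = true

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```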


Splunk and MS Teams

I am looking for documentation on how to use Splunk with MS Teams. I want to forward alerts to groups in MS Teams.

Basic "IN" command not working in Splunk

I am trying to use a simple "IN" operator in Splunk, basically filtering the results to show only those entries whose "product_id" appears in another table's "product_number" attribute. But Splunk throws an error saying "Error in 'search' command: Unable to parse the search: Comparator 'IN' has an invalid term on the right hand side: NOT"

sourcetype=Order product_id IN [ search host=product | table product_number ] | stats count by order_id

Any help in understanding what I am doing wrong would be great.
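The error suggests the IN operator does not accept a subsearch as its right-hand side. A common workaround (a sketch; it assumes renaming product_number to product_id gives the match you want): drop IN and let the subsearch expand into an OR'd list of product_id=value terms in the outer search:

```
sourcetype=Order
    [ search host=product | fields product_number | rename product_number as product_id ]
| stats count by order_id
```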

Does Anyone Have Experience with Running Splunk Enterprise 7.x on a Cisco UCS SP HX240c (v1 circa 2016)?

Hello, we have two Cisco HyperFlex UCS SP HX240c servers. We believe our vendor is trying to force us to upgrade unnecessarily ($$$) to a more recent version of the HyperFlex. Is there anyone in the community who has installed Splunk Enterprise 7.x (all components: indexers, search heads, license master, monitoring console, deployment server) on the HyperFlex platform mentioned above? If not, does anyone have experience with running just the indexers and search heads on this older version of the Cisco HyperFlex? Thanks for a very quick response, as this answer will make or break our Splunk installation. God bless, Genesius. PS: I am Googling for an answer to this question as well; hopefully we'll find one quickly.

OpenSSL 1.0.2o-fips

Running Splunk Enterprise version 7.2.0, build 8c86330ac18, on Windows Server 2012 R2. I ran a Nessus Professional version 8.0.1 (#155) Windows scan and received this low-risk finding:

OpenSSL AES-NI Padding Oracle MitM Information Disclosure. "It was possible to obtain sensitive information from the remote host with TLS-enabled services." "The remote host is affected by a man-in-the-middle (MitM) information disclosure vulnerability due to an error in the implementation of ciphersuites that use AES in CBC mode with HMAC-SHA1 or HMAC-SHA256. The implementation is specially written to use the AES acceleration available in x86/amd64 processors (AES-NI). The error messages returned by the server allow a man-in-the-middle attacker to conduct a padding oracle attack, resulting in the ability to decrypt network traffic." Remediation: upgrade to OpenSSL version 1.0.1t / 1.0.2h or later.

I checked the OpenSSL version like this:

E:\Splunk\bin>splunk cmd openssl version
OpenSSL 1.0.2o-fips 27 Mar 2018

If I'm already using OpenSSL 1.0.2o-fips, why is the vulnerability still flagged?

Why am I getting these socket errors?

HttpListener - Socket error from while accessing /services/streams/search: Broken pipe

Here's my ulimit info:

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31866
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 10240
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31866
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Please help!
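Not a diagnosis, but broken-pipe errors often coincide with exhausted file descriptors, and the open files value here is 10240 while common Splunk sizing guidance suggests 64000. A sketch of raising the limits for the service account (assumed here to be `splunk`) in /etc/security/limits.conf:

```
# /etc/security/limits.conf (sketch; "splunk" = assumed service account)
splunk  soft  nofile  64000
splunk  hard  nofile  64000
splunk  soft  nproc   16000
splunk  hard  nproc   16000
```

Restart splunkd from a fresh login shell afterwards so the new limits take effect.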

Deploy Splunk UBA in Mixed Server OS env?

My current environment is two Splunk servers: one acting as a search head/indexer and one acting as a heavy forwarder. I have multiple UF clients pointing to the HF, which filters and forwards to the indexer. Both servers are Windows Server 2016. I am looking into what would be needed to start using UBA. I see that it requires a *nix server. Could I deploy a second indexer on a *nix server, use that for UBA, and continue to use the Windows search head?