Channel: Questions in topic: "splunk-enterprise"

How to group or follow fields with different values

Hi guys, I am new to Splunk. I have multiple events that look like this:

- 2020-02-07 07:21:20 action_time="2020-01-02 07:21:20.39", id_client="1234", ticket="1"
- 2020-02-07 07:21:20 action_time="2020-01-02 07:22:20.39", id_client="4567", ticket="2"
- 2020-02-07 07:21:20 action_time="2020-01-02 07:23:20.39", id_client="1234", ticket="2"
- ...

I would like to build transactions like this: in all events, find the first event where id_client="1234" and ticket="1"; if it matches, find the next event with the same id_client but ticket="2". So, for the same client, find ticket=1 first, followed by ticket=2. I tried with `...| transaction action startwith='1' endwith='2'` but it does not work. How can we do this in Splunk? Thank you in advance.
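A minimal sketch of one way to express this with `transaction` (the index/sourcetype are placeholders, and it assumes `id_client` and `ticket` are already extracted as fields):

```
index=my_index sourcetype=my_sourcetype
| transaction id_client startswith=(ticket="1") endswith=(ticket="2")
```

Grouping on `id_client` keeps events for different clients in separate transactions, and the `startswith`/`endswith` filters (note the spelling, with an "s") mark where each transaction opens and closes.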

Using a child data model to reduce search

I'm trying to create a data model with child datasets and call a child dataset in a search. However, the searches are scanning the whole index rather than the subset. How do I need to adjust the setup so it stops searching everything?
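A minimal sketch of referencing a child dataset directly (the model and dataset names are placeholders); the child only narrows the results if it has its own constraint defined on top of the root dataset's constraint:

```
| datamodel My_Model My_Child_Dataset search
| stats count by sourcetype
```

If the child has no additional constraint it inherits only the root constraint and will still pull everything the root matches; accelerating the model and querying it with `tstats` is the usual way to avoid scanning raw events entirely.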

Run CLI command on mac to connect to splunk and retrieve result of query

How can I run Python or CLI commands from my Mac to retrieve data from Splunk? I am going through the Splunk documentation: https://docs.splunk.com/Documentation/Splunk/6.2.1/Admin/AbouttheCLI. Settings > Server settings does not show a General settings page where I can find the installation path.
1. How do we install Splunk on a Mac?
2. Which path should we use to run the CLI?
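A minimal sketch of running a search through the CLI, assuming a default macOS install under /Applications/Splunk (adjust the path to wherever Splunk is actually installed; the credentials and search are placeholders):

```
/Applications/Splunk/bin/splunk search 'index=_internal | head 5' -auth admin:changeme
```

The same `splunk search` command also accepts `-uri https://remote-host:8089` to run the search against a remote Splunk instance instead of a local one.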

Defining string for use with inputcsv using dashboard token

I would like to define the value of a variable, let's call it 'infile', based on the value of a token selected via a radio button. Pseudocode: if rbutton=yes then infile=inputfileA.csv; if rbutton=no then infile=inputfileB.csv. Any help appreciated.
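A minimal sketch in Simple XML, assuming the two CSVs already exist as lookup files; the radio choices carry the filename directly in the token, so no separate mapping step is needed:

```
<input type="radio" token="infile">
  <label>Input file</label>
  <choice value="inputfileA.csv">Yes</choice>
  <choice value="inputfileB.csv">No</choice>
  <default>inputfileA.csv</default>
</input>
...
<query>| inputcsv $infile$ | stats count</query>
```

Alternatively, a `<change>` handler with `<condition>`/`<set token>` elements can map arbitrary choice values to filenames if the displayed labels and the filenames need to differ.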

Splunk app for infrastructure script error

Hi team, I have a problem with the Splunk App for Infrastructure. When I launch the script on the command line of my host (Ubuntu 16), I always get this error: "Failed to install libcurl package. exiting ..". Script: `export SPLUNK_URL=X.X.X.X && export HEC_PORT=8088 && export RECEIVER_PORT=9997 && export INSTALL_LOCATION=/opt/ && export HEC_TOKEN=eb8d0b7d-1a8c-4ba2-8997-107acf610cf7 && export SAI_ENABLE_DOCKER= && export DIMENSIONS= METRIC_TYPES=cpu,uptime,df,disk,interface,load,memory,processmon METRIC_OPTS=cpu.by_cpu LOG_SOURCES=/etc/collectd/collectd.log%collectd,\$SPLUNK_HOME/var/log/splunk/*.log*%uf,/var/log/syslog%syslog,/var/log/daemon.log%syslog,/var/log/auth.log%syslog AUTHENTICATED_INSTALL=Yes && wget --no-check-certificate http://172.17.1.51:8000/static/app/splunk_app_infrastructure/unix_agent/unix-agent.tgz && tar -xzf unix-agent.tgz || gunzip -c unix-agent.tgz | tar xvf - && cd unix-agent && bash install_uf.sh && bash install_agent.sh && cd .. && rm -rf unix-agent && rm -rf unix-agent.tgz` Any help please?
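The error suggests the installer's attempt to pull libcurl from the system package manager failed. A sketch of checking and installing it manually on Ubuntu 16.04 before re-running the script (the package names are stock Ubuntu ones, assumed here; the exact dependency the installer wants may differ):

```
# Check whether libcurl is already present, then install it if not.
dpkg -l | grep -i libcurl
sudo apt-get update
sudo apt-get install -y libcurl3 curl
```

If the host has no direct internet or repository access, the apt-get step will fail the same way the installer does, which would point to a proxy/repository problem rather than a Splunk one.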

Amazon WorkSpaces

We are running Sysmon on Amazon WorkSpaces. We are trying to get the Sysmon (and other) logs into Splunk. We are currently trying to use a forwarder on the hosts. We run "splunk clone-prep-clear-config" before creating the bundles, however the problem we are encountering is that when we create our bundles AWS is doing something behind the scenes (likely a start up and reboot) that is setting the GUIDs before the bundle is finalized. This means that every forwarder has the same GUID. Has anyone else worked with WorkSpaces logs and how did you overcome this challenge? The other option I was thinking of trying was seeing if we can install the CloudWatch Agent and get the logs from CloudWatch instead of directly from the host.
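A sketch of one workaround pattern, rather than a confirmed fix: instead of relying on the GUID state at bundle time, clear the cloned identity again on first boot of each WorkSpace so every forwarder generates its own. The install path and the first-boot hook are assumptions:

```
REM Hypothetical first-boot step on a provisioned Windows WorkSpace:
REM wipe the cloned identity before the forwarder starts for real.
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" stop
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" clone-prep-clear-config
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start --accept-license --answer-yes --no-prompt
```

Wired into whatever startup/user-data mechanism the image supports, this makes the GUID independent of whatever AWS does while finalizing the bundle.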

Editing inputs on multiple Universal forwarders at a time

We have around 600 servers where we need to edit inputs.conf on the universal forwarders. Is there a way we can do this all at once? All of these servers run Windows. I was told it could be done with a script; if so, how can it be done? Thanks
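The usual pattern for this is a deployment server rather than editing each host: ship the inputs.conf inside an app and let every forwarder pull it. A sketch with hypothetical app and host names:

```
# serverclass.conf on the deployment server
[serverClass:windows_uf]
whitelist.0 = *

[serverClass:windows_uf:app:my_windows_inputs]
restartSplunkd = true

# deploymentclient.conf on each forwarder (can be baked into the UF install
# with the DEPLOYMENT_SERVER flag of the MSI)
[target-broker:deploymentServer]
targetUri = my-deployment-server.example.com:8089
```

The edited inputs.conf then lives in $SPLUNK_HOME/etc/deployment-apps/my_windows_inputs/local/ on the deployment server, and one change there reaches all 600 forwarders.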

Heavy Forwarder Slow to Start Forwarding Syslog

Hello, my current setup is: Device Syslog --> Syslog Server w/ Splunk Heavy Forwarder --> Splunk Indexer. When I restart my Heavy Forwarder server or splunkd, it takes up to 30 minutes to begin forwarding syslog to the indexer. Is this due to the number of devices and folders stored on the syslog server, and is there a way to speed this process up? Thanks.
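If the delay comes from the tailing processor re-scanning a very large tree of old syslog files at startup, one common mitigation is to stop tracking files that have not been written to recently. A sketch for inputs.conf on the heavy forwarder (the path and threshold are assumptions):

```
[monitor:///var/log/remote-syslog]
recursive = true
# Skip files whose modification time is older than 2 days; they stay on
# disk but are no longer checked at every startup.
ignoreOlderThan = 2d
```

Archiving or pruning rotated files out of the monitored directories has a similar effect and also keeps the file-tracking state small.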

Need help with inputs.conf

Hello, I have some directories that I need to monitor. Using updated inputs for the TA_nix app, I am adding syslog / linux:audit data from specific paths. It mostly works as expected, BUT I have a few outliers.

Here's the basic directory structure: /var/log is standard, BUT the messages coming from other hosts go to /var/log/remote. Under this path are the two types of logs, syslog and linux:audit, as well as .bz2 files which we never want indexed from any path:

/var/log/remote/202*/02/*/messages/
/var/log/remote/202*/02/*/audisp/

Within each of these is an archive directory as well; it contains files still being written to and .bz2 files which we never want indexed from any path:

/var/log/remote/202*/02/*/messages/archive/
/var/log/remote/202*/02/*/audisp/archive/

So the inputs I created look like this:

[monitor:///var/log]
whitelist=(\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist=(lastlog|anaconda\.syslog|\.bz2$|audisp|\_audisp.log|\audisp.log\-)
index=nix_os
disabled = 0

[monitor:///var/log/remote/*]
whitelist=(messages|\_messages\.log|_messages\.log\-)
blacklist=(\.bz2$|audisp|\_audisp.log|\audisp.log\-)
index=nix_os
sourcetype = syslog
disabled = 0
recursive=true

[monitor:///var/log/remote/*]
whitelist=(audisp|\_audisp.log|\audisp.log\-)
blacklist=(\.bz2$|messages|\_messages\.log)
index=nix_os
sourcetype = linux:audit
disabled = 0
recursive=true

What I have found is that there are files whose sourcetype is set to the filename, when it should be either syslog or linux:audit. For example, the file at /var/log/remote/2020/02/corp/messages/archive/hostname.domain.com_messages.log-20200206 got the sourcetype set to the file name: hostname.domain.com_messages.log-20200206.

Also, these did not index at all, in /var/log/remote/2020/02/corp2/audisp/archive/: _messages_audisp.log-20200204, _messages_audisp.log-20200205, _messages_audisp.log-20200206

Can anyone tell me:
1. Why did the messages file hostname1234.domain.com_messages.log-20200206 get the sourcetype set to the file name (some are set to "too-small" as well), e.g. sourcetype=hostname1234.domain.com_messages or sourcetype=hostname1234.domain.com_messages-too_small?
2. Why didn't the /audisp directory and the corresponding files index? For example: /var/log/remote/2020/02/corp2/audisp/archive/_messages_audisp.log-20200204

Thanks for your assistance
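One thing worth noting: the two remote stanzas share the exact same stanza name, [monitor:///var/log/remote/*], so they are likely merged into a single effective input rather than applied as two (btool will confirm). A sketch of splitting them by path instead, so each sourcetype gets its own non-overlapping monitor (the wildcard paths follow the layout described above and are assumptions):

```
[monitor:///var/log/remote/*/*/*/messages]
whitelist = (messages|_messages\.log|_messages\.log-)
blacklist = \.bz2$
index = nix_os
sourcetype = syslog
disabled = 0

[monitor:///var/log/remote/*/*/*/audisp]
whitelist = (audisp|_audisp\.log|_audisp\.log-)
blacklist = \.bz2$
index = nix_os
sourcetype = linux:audit
disabled = 0
```

On the forwarder, `splunk btool inputs list monitor --debug` shows how the stanzas actually merged, and `splunk list inputstatus` shows which files each input is picking up.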

How to set a Token when the dashboard is 100% loaded

Hi, I need to be able to know when a dashboard is 100% loaded; how can I get a token for this? I had an idea of doing something like this after each search: a handler that sets a PAGE_LOADED token to "LOADED". However, this is a very manual way of doing it (PAGE_LOADED1 + PAGE_LOADED2 + PAGE_LOADED3 > 0, etc.); I was hoping for an easy piece of code that I can put at the top of each dashboard. Thanks, Robert
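A sketch of the per-search approach being described, in Simple XML (the search ID, query, and token names are placeholders):

```
<search id="search1">
  <query>index=_internal | stats count</query>
  <done>
    <set token="PAGE_LOADED1">1</set>
  </done>
</search>
<!-- A <done> handler on each search sets its own token; a further handler
     or a depends= attribute can then combine them into one "all loaded" token. -->
```

As far as I know there is no built-in "whole dashboard finished" token in Simple XML, which is why this usually ends up as either per-search `<done>` handlers like the above or custom JavaScript listening for search completion events.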

How to count lookup matches by the field values in the Lookup?

Hi, I was given a request to use CSV lists (i.e. lookups) with keyword values to find USB writes in an index, where a field named "file-name" holds info about the file written to USB. The file-name values are not consistent, and most often the value is a file path, like "D:/Downloads/foo/bar/foo-bar.txt" or something like that. So file-name is actually a file path. I was asked to use a CSV supplied to me as the lookup criteria, like this (keyword.csv is the lookup name):

keyword    keyword-ID
*red*      34948-kjas
*green*    89050-kjec
*blue*     89008-nkme

The column header fields are "keyword" (which is a wildcard string) and "keyword-ID" (which is a random ID). I wrote a query like this:

index=foo sourcetype=bar [|inputlookup keyword.csv |fields keyword | rename keyword as file-name] |stats count by file-name

and I get the counts of each unique file-name, which is what I thought the requestor wanted, but that is not the case. They want to know the count by keyword, like red=5, green=1, blue=3, etc. So I am stuck getting the results from my query piped back into a lookup to count by the keywords, and I am not sure how to get this done. I was advised in Slack to use wildcard matching to reverse the lookup, but I could not get it to work:

index=foo sourcetype=bar [|inputlookup keyword.csv |fields keyword | rename keyword as file-name] | >>>> ?

Any advice appreciated!!
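A sketch of one way to do this with a wildcard lookup definition, so the lookup itself reports which keyword matched (field names follow the post; the lookup definition name is a placeholder). First, a lookup definition with wildcard matching, in transforms.conf or via the "Match type" advanced option when defining the lookup in the UI:

```
[keyword_lookup]
filename = keyword.csv
match_type = WILDCARD(keyword)
```

Then the search matches each event's file-name against the wildcard patterns and counts by the keyword that matched:

```
index=foo sourcetype=bar
| lookup keyword_lookup keyword AS file-name OUTPUT keyword AS matched_keyword
| stats count by matched_keyword
```

Events whose file-name matches none of the patterns get no matched_keyword and simply drop out of the stats, which gives the red=5, green=1, blue=3 style result being asked for.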

Calculate difference of fields where certain field value exists

For each Digit I have below (Digit 0,2,3,4,5,7,8), I want to calculate the difference in time between the TXN endTime and the FW endTime for that digit. How can I group this so I get one calculated value for each digit?

index= jobName = (all job names here) | lookup digit_processing.csv jobName as jobName output Digit as Digit | eval endTimeEpoch = strptime(endTime, "%Y-%m-%d %H:%M:%S") | table jobName Digit endTime endTimeEpoch status | sort -Digit

(screenshot: /storage/temp/282621-capture.png)
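A sketch of collapsing this to one row per Digit, assuming the TXN/FW distinction can be recognised from jobName (the like() patterns are placeholders for however the job names are actually distinguished):

```
index=... jobName IN (...)
| lookup digit_processing.csv jobName AS jobName OUTPUT Digit AS Digit
| eval endTimeEpoch = strptime(endTime, "%Y-%m-%d %H:%M:%S")
| stats latest(eval(if(like(jobName, "%TXN%"), endTimeEpoch, null()))) AS txn_end
        latest(eval(if(like(jobName, "%FW%"),  endTimeEpoch, null()))) AS fw_end
        by Digit
| eval diff_seconds = txn_end - fw_end
| table Digit txn_end fw_end diff_seconds
```

If several TXN or FW jobs exist per digit, swap latest() for max() or min() depending on which end times should be compared.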

REST API Search Limits TTL

I have a search being executed via a script hitting the REST API. Occasionally it returns no results, and looking at the associated events in _internal we see the following (screenshot: /storage/temp/283626-capture.png): once the search hits around 300000 ms (5 min) it times out. Anything below that and we get data returned, as shown by the non-zero values after each 200 status code. I've been looking through the spec files for a setting that might be imposing this limit but have not had any luck finding one that changes this value. I've gone through looking via `grep " 300 " /opt/splunk/etc/system/README/*spec` as well as other variations of that time format. In addition, I have sent arguments with the POST for auto_cancel and ttl and it does not appear to affect this 5-minute timeout. Any thoughts as to where this limit is being imposed?
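For hunting the effective value rather than the spec-file defaults, btool shows what is actually in force and which file it comes from; a sketch, assuming a default /opt/splunk install:

```
/opt/splunk/bin/splunk btool limits list --debug | grep -i -E "ttl|timeout"
/opt/splunk/bin/splunk btool server list --debug | grep -i -E "timeout"
```

Note that a 5-minute cutoff can also come from the client side, e.g. an HTTP read timeout in the script or in a proxy/load balancer between the script and splunkd, which would never show up in any Splunk .conf file.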

BucketMover - aborting move because failed to rename src to dest failed (reason='Directory not empty')

Trying to send frozen buckets to an ECS Windows shared drive mounted via CIFS on a Splunk Linux indexer. The Splunk service account has full modify access on the frozen share. Is there anything else we can troubleshoot for the errors below? It looks like Splunk copies the buckets to the frozen location as inflight-db-*** and then fails to rename the inflight folders on the mount, retrying every few seconds:

ERROR BucketMover - aborting move because failed to rename src='/data/frozen/index/name/inflight-db_***_***_** to dst='/data/frozen/index/name//db_***_***_**' (reason='Directory not empty')
ERROR BucketMover - aborting move because could not remove existing='/data/frozen/index/name/inflight-db_***_***_** (reason='Directory not empty')
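If the CIFS mount cannot handle the rename/remove steps BucketMover performs, one workaround is to take over the archiving step with a script instead of a direct coldToFrozenDir. A sketch for indexes.conf (the index name and the freeze_to_cifs.py script are hypothetical; the script receives the bucket path as its argument and must copy the data out itself):

```
[name]
# Instead of coldToFrozenDir pointing at the CIFS mount:
# coldToFrozenDir = /data/frozen/index/name
# ...hand the bucket to a script that copies it without the rename step:
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/freeze_to_cifs.py"
```

Splunk ships an example archiving script (coldToFrozenExample.py in $SPLUNK_HOME/bin) that can serve as a starting point for one that writes to the CIFS share without relying on an atomic rename.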

Update Universal Forwarder

How can I update 300 forwarders quickly? Is there any method?
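The deployment server can push apps and configuration, but upgrading the forwarder binary itself is usually done with whatever the OS fleet tooling is (SCCM/GPO for Windows, a package manager plus ssh or a configuration-management tool for Linux). A minimal sketch for Linux hosts, with hypothetical package and host-list names:

```
#!/bin/sh
# Hypothetical sketch: push and install a new UF package on every host in hosts.txt.
PKG=splunkforwarder-<version>-linux-2.6-amd64.deb
while read -r host; do
  scp "$PKG" "$host:/tmp/" && \
  ssh "$host" "sudo /opt/splunkforwarder/bin/splunk stop; \
               sudo dpkg -i /tmp/$PKG && \
               sudo /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt"
done < hosts.txt
```

Installing the new package over the existing one preserves the forwarder's configuration, so inputs and outputs carry over across the upgrade.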

How many apps can I deploy in Universal Forwarder?

Hi everybody, I'm trying to deploy 2 apps to a universal forwarder from a deployment server. The problem I'm encountering is that when the deploy finishes and the Splunk Universal Forwarder service restarts, the deployed apps don't work; however, if I deploy only 1 app, that app works and I receive the logs. My configuration is the following. In my universal forwarder I have App1 and App2. The inputs.conf file from App1 has this config:

[WinEventLog://ForwardedEvents]
index=index1
sourcetype=sourcetype1
whitelist= 4100,4104,4103
evt_resolve_ad_obj=1
renderXml=0

And App2 has the same configuration but changes the events received:

[WinEventLog://ForwardedEvents]
index=index2
sourcetype=sourcetype2
blacklist= 4100,4104,4103
evt_resolve_ad_obj=1
renderXml=0

These apps work separately but not together. Is there any kind of limit on using several apps on a single universal forwarder?
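There is no practical limit on the number of apps, but both apps define the same stanza name, [WinEventLog://ForwardedEvents], so the forwarder merges them by configuration precedence into one effective input (whichever app wins precedence overrides the other's index, sourcetype, and white/blacklist) rather than running two separate inputs. A quick way to see the merged result on the forwarder, sketched with a default Windows install path:

```
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list WinEventLog://ForwardedEvents --debug
```

The --debug flag prints which app each surviving line came from, which makes the collision visible.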

edit server.conf on multiple servers

I want to edit server.conf on around 600 servers. Is there any way we can edit them all at once?
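As with inputs.conf, the deployment-server pattern applies here too: put the shared server.conf settings into an app and push that app to all 600 hosts instead of editing each file. A sketch with a hypothetical app name; the [sslConfig] line is only a stand-in for whichever keys actually need changing:

```
# Hypothetical app pushed from the deployment server:
# $SPLUNK_HOME/etc/deployment-apps/all_server_base/local/server.conf
[sslConfig]
sslVersions = tls1.2
```

One caveat: settings written directly into each host's etc/system/local/server.conf take precedence over anything delivered in an app, so existing local overrides would still win.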

How to Configure Splunk Heavy Forwarder to Consume Kafka Topics based on SSL/TLS

Hello, we are using a Splunk Heavy Forwarder to consume data from Kafka topics (flow #1) and forward it to the Splunk server (flow #2), i.e. Kafka Cluster --- (1) ---> Splunk HF --- (2) ---> Splunk backend system. The Kafka cluster has been configured to support SSL/TLS encryption on port 9093, e.g. bootstrap-endpoint:9093. Could you please give me some guidance on how to configure the Splunk Heavy Forwarder to consume the Kafka topics over SSL/TLS? Thank you very much for your guidance in advance. Best regards, Yongyuth
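Whichever Kafka add-on or modular input does the consuming on the HF, the SSL side ultimately comes down to standard Kafka client properties pointing at a truststore (and a keystore if the brokers require client authentication). A sketch of those properties, with hypothetical paths and passwords; how they are supplied (a consumer properties file, add-on UI fields, or connector config) depends on the specific add-on in use:

```
bootstrap.servers=bootstrap-endpoint:9093
security.protocol=SSL
ssl.truststore.location=/opt/splunk/etc/auth/kafka-client.truststore.jks
ssl.truststore.password=changeit
# Only needed when the brokers enforce mutual TLS:
ssl.keystore.location=/opt/splunk/etc/auth/kafka-client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```

The truststore must contain the CA that signed the brokers' certificates; a quick `openssl s_client -connect bootstrap-endpoint:9093` from the HF is a handy first check that the TLS endpoint is reachable and presents the expected certificate chain.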

DB Connect not able to process results

I am able to test the connection in Data Lab using the Splunk DB Connect app. I can fetch results when I run the query, and I successfully created the connection. But I am not able to see any data when I click "find events" on the connection.

Extract pairdelim kvdelim JSON problems

I have JSON data that I'm trying to extract into fields and I'm unable to get all the data extracted correctly. My query is:

index=myindex |spath |extract pairdelim="," kvdelim=":{}[,]"

My data looks like this:

{"version":"1.0.0","integrationType":"metric","action":"created","metrics":{"General":{"tapid":1,"port":2,"length":16,"timestamp":1580164559,"packet_id":626910,"protocol":"test","Indexes":{"Address1":[0],"FCode":[1],"AddressOut1":[2,3],"outputValue":[4,5],"checksum":[6,7]}},"ApplicationLayer":{"Rail":{"Rail16":1}},"TransportLayer":{"Address2":3,"FCode2":{"code":5,"string":"Read Single Values"},"type":"response","crc":56253}}}
{"version":"1.0.0","integrationType":"metric","action":"created","metrics":{"General":{"tapid":1,"port":2,"length":30,"timestamp":1580164556,"packet_id":626904,"protocol":"test","Indexes":{"Address1":[0],"FCode":[1],"RValues":[2],"reg1":[3,4],"reg2":[5,6],"reg3":[7,8],"reg4":[9,10],"reg5":[11,12],"reg6":[13,14],"reg7":[15,16],"reg8":[17,18],"reg9":[19,20],"reg10":[21,22],"reg11":[23,24],"reg12":[25,26],"reg13":[27,28],"checksum":[29,28]}},"ApplicationLayer":{"Registering":{}},"TransportLayer":{"Address2":3,"FCode2":{"code":3,"string":"Read Multiple Values"},"type":"response","crc":18279}}}

The query does fine for most of the data but fails on multi-value fields. For example, "AddressOut1":[2,3] will only give me AddressOut1 = 2; it's not extracting the 3, and I was expecting AddressOut1=2,3. "checksum":[6,7] again will only give me checksum = 6 and skip the 7. The same with "reg1":[3,4]: I'm only getting the 3. Whenever there are multiple values, I only get the first entry in the array. I suspect it's because the "," is used to separate the keys, and since the array values also use "," as a separator, extract can't handle it. Is there a better way to extract these, or am I missing something? Thanks
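Since the data is well-formed JSON, spath alone already extracts the arrays as multivalue fields (named with a {} suffix); it is the extract command with character delimiters that loses the second element. A sketch of working with the spath output directly, using one of the fields from the sample events:

```
index=myindex
| spath
| eval AddressOut1 = mvjoin('metrics.General.Indexes.AddressOut1{}', ",")
| table AddressOut1 metrics.General.Indexes.checksum{}
```

mvjoin() flattens the multivalue field into the "2,3" style string that was expected; leaving the field as a multivalue is usually more useful for further stats or mvexpand work.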

