Channel: Questions in topic: "splunk-enterprise"

Is it possible to get the value of a specific row of $result.fieldname$?

Given that we have `index=foo sourcetype=bar | table Aaa Bbb Ccc Ddd` in a dashboard, is it possible to get (say, for example) the 4th row of `$result.Ccc$`? According to Splunk, `$result.Ccc$` only retrieves the first row.
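
One workaround (a sketch, not a documented token feature) is to reorder the search so the row you want becomes the first row, since `$result.fieldname$` always reads row one:

```
index=foo sourcetype=bar
| table Aaa Bbb Ccc Ddd
| streamstats count AS row_num
| where row_num=4
| fields - row_num
```

`streamstats` numbers the rows in output order; after the `where`, the 4th row is the only (and therefore first) row, so `$result.Ccc$` picks it up.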

Getting 404 using axios call to REST API

I am trying to connect to Splunk's REST API. On the command line, when I run `curl -k https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=pass`, I get a response with a session key. But when I try `axios.get({ url: 'https://localhost:8089/services/auth/login', data: { username: 'admin', password: 'pass' } })`, I get a 404 status back.
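
Two things differ from the curl call: `--data-urlencode` makes curl send a POST, and `axios.get` takes a URL as its first argument rather than a config object, so the call above never reaches the endpoint as written. A minimal sketch of the equivalent request in Node.js (the `-k` flag maps to a permissive https agent; credentials are placeholders):

```
const axios = require('axios');
const https = require('https');
const qs = require('querystring');

// POST form-encoded credentials, mirroring curl's --data-urlencode
axios.post(
  'https://localhost:8089/services/auth/login',
  qs.stringify({ username: 'admin', password: 'pass' }),
  {
    // equivalent of curl -k: accept Splunk's self-signed certificate
    httpsAgent: new https.Agent({ rejectUnauthorized: false }),
  }
)
  .then(res => console.log(res.data)) // XML body containing <sessionKey>
  .catch(err => console.error(err.response ? err.response.status : err.message));
```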

How to add our own IP locations to GeoLite2-City.mmdb

Hello, I successfully applied the tool from the GitHub wiki [Customizing-Maxmind-IP-Geo-DB-for-Internal-Networks](https://github.com/threatstream/mhn/wiki/Customizing-Maxmind-IP-Geo-DB-for-Internal-Networks) to add our own IP ranges for an important Enterprise Security project. But somehow the database created by `python csv2dat.py -w mmcity.dat mmcity GeoLiteCity-and-mynetworks.csv` differs from Splunk's internal GeoLite2-City.mmdb:

```
>>> import pygeoip, json
>>> geo = pygeoip.GeoIP('GeoLite2-City.mmdb')
>>> print json.dumps(geo.record_by_addr('182.236.164.11'), indent=4, sort_keys=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/mmutils/env/lib/python2.7/site-packages/pygeoip/__init__.py", line 544, in record_by_addr
    raise GeoIPError(message)
pygeoip.GeoIPError: Invalid database type, expected City
```

Is there a better method? Did I miss another conversion step? Thanks!
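
For what it's worth, the error itself may just be a reader mismatch: pygeoip only reads the legacy GeoIP `.dat` format, while GeoLite2-City.mmdb is in the newer MaxMind DB v2 format. A minimal sketch to inspect the `.mmdb` file, assuming the `maxminddb` package (`pip install maxminddb`):

```
# Read the MaxMind DB v2 file with the maxminddb reader
# instead of pygeoip (which expects the legacy .dat format).
import json
import maxminddb

reader = maxminddb.open_database('GeoLite2-City.mmdb')
print(json.dumps(reader.get('182.236.164.11'), indent=4, sort_keys=True))
reader.close()
```

Note that `csv2dat.py` writes legacy `.dat` output, so it cannot produce a drop-in replacement for Splunk's `.mmdb` file; building a custom `.mmdb` needs a MaxMind DB v2 writer.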

Back-end query to pull the active searches running on a search head

I want to check which searches are currently running, finalizing, or done on our back-end search head server, which is a Unix machine. Is there any back-end command to pull that sort of information?
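
One back-end option (a sketch, assuming the default management port 8089 and admin credentials) is to query the search jobs REST endpoint on the search head itself; each job entry carries a `dispatchState` such as QUEUED, PARSING, RUNNING, FINALIZING, or DONE:

```
curl -k -u admin:changeme \
    "https://localhost:8089/services/search/jobs?output_mode=json&count=0"
```

From within Splunk, `| rest /services/search/jobs | table label dispatchState runDuration` should surface the same information as a search.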

Regex for string followed by a number (1/2/3 digits)

Below are my 3 logs. I want to write a query to get all 3 of them, matching **EXT_CODE[0-9]** with 1, 2, or 3 digits following EXT_CODE:

```
index="zync*"|EXT_CODE2=AB003|EXT_CODE35=BC003|EXT_CODE4=CA010|GEN_CODE14=CD010
index="zync*"CDT|EXT_CODE4=XY005|EXT_CODE42=DE040|EXT_CODE4=ZQ019|GEN_CODE11=PY016
index="zync*"|EXT_CODE5=PC099|EXT_CODE22=BC054|EXT_CODE4=ZC018|GEN_CODE11=ZV010
```

Can someone please suggest a query? My query: `index="zync*" EXT_CODE[0-9]*="*"`
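
One way to match this (a sketch: field-name patterns in the search bar are not treated as regex, so match the raw text instead):

```
index="zync*"
| regex _raw="EXT_CODE\d{1,3}="
```

`\d{1,3}` allows 1 to 3 digits between EXT_CODE and the equals sign.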

count(var) by "a list of values within a field"

First of all, sorry if I am missing something really obvious here, but after hours of googling I am still stuck with the following problem. Basically I have a list of URLs and a score, in a format like this:

```
http://www.abc.com/abc/abc.html 50
http://www.abc.com/abc/abc.html 30
www.xyz.org/asd/ 12
qwer.com/asd 7
```

What I am trying to achieve is to group some of the URLs and display the sum of the score in a table. For example, grouping abc.com & xyz.org as "External Sites" would lead to the following table:

```
Site name      | Sum
---------------|----
External Sites | 92
```

The approach so far is to have `| eval siteName = if(match(url, [some regex], ...)` add a new field with the site name, which works. The interesting part is that some of the groups might not have events present all the time, and `| stats sum(score) as Sum by siteName` obviously gives me only the sum of the groups that are present. Is there any way to get a table for a list of site names that "could" be there, like the following?

```
Site name      | Sum
---------------|----
External Sites | 92
Internal Sites | 0
```

Thank you very much in advance, Andreas
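
One way to force every expected group to appear (a sketch; the base search, regex, and site names are placeholders) is to append a zero-valued row per group and then sum, so absent groups still show up with 0:

```
index=foo sourcetype=bar
| eval siteName = if(match(url, "abc\.com|xyz\.org"), "External Sites", "Internal Sites")
| stats sum(score) AS Sum BY siteName
| append
    [| makeresults
     | eval siteName=split("External Sites,Internal Sites", ",")
     | mvexpand siteName
     | eval Sum=0
     | fields siteName Sum]
| stats sum(Sum) AS Sum BY siteName
```

The subsearch contributes one zero row per expected site name; groups with real events sum to their actual total, and groups without events fall back to 0.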

We have observed an error from one forwarder server to the indexer

We have observed one error from one forwarder server to the indexer. Error message:

```
08-20-2018 13:34:39.963 +0200 ERROR TcpInputProc - Message rejected. Received unexpected 842019128 byte message! from src=192.168.1.71:37694. Maximum message allowed: 67108864. (::)
```
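
This message usually means that whatever connected to the receiving port is not speaking the expected Splunk-to-Splunk protocol (for example, a raw TCP or third-party sender hitting the splunktcp port), so the indexer misreads the first bytes as an absurd payload length. A sketch for surveying which sources trigger it, using the indexer's own internal logs:

```
index=_internal sourcetype=splunkd "TcpInputProc" "Message rejected"
| rex "src=(?<src_ip>[^:]+):(?<src_port>\d+)"
| stats count BY src_ip
```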

Enable and disable REST endpoint

Hi Experts, I am trying to disable an alert using the REST API example below, provided in the documentation. It returns an XML response with all the attributes of the alert, but does not disable the alert.

Example:

```
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/TestSearch/disable -X POST
```

My curl command:

```
curl -X POST -k -u admin:xxx https://server:9099/servicesNS/admin/search/saved/searches/test1234/disable
```

Reference: http://docs.splunk.com/Documentation/Splunk/6.6.5/RESTUM/RESTusing

Any inputs on what is wrong here?
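
For comparison, here is the documentation example with the line wrap removed, plus an equivalent form that posts `disabled=1` to the entity itself (a sketch; only the object name differs from the question):

```
# POST to the saved search's disable action (no space or line break in the URL)
curl -k -u admin:pass -X POST \
    "https://localhost:8089/servicesNS/admin/search/saved/searches/TestSearch/disable"

# Equivalent: set the disabled attribute on the entity directly
curl -k -u admin:pass \
    "https://localhost:8089/servicesNS/admin/search/saved/searches/TestSearch" \
    -d disabled=1
```

If the response is the entity XML rather than the action result, the request likely hit `.../TestSearch` (or `.../TestSearch/`) instead of `.../TestSearch/disable`; also worth verifying that 9099 really is the management port on that server, since 8089 is the default.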

Searching strings with accented characters

Hello, I'm having an issue when trying to filter events based on accented characters. For instance, if I look at the ingested events with `index=my_index sourcetype=my_source`, I am able to see the events that have the field value I'm looking for:

```
... asset_name ...
... D. João ...
```

If I try to filter the events at search time with `index=my_index sourcetype=my_source asset_name="D. João"`, the "**No results found.**" message is displayed; the same applies if I select the desired field value from the fields list on the left. How can I get this to work? I've looked for similar questions here on the Splunk Answers forum, but they mainly point to the sourcetype encoding, which I think might not be the issue, since the events seem to be properly encoded. Thanks in advance!
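
Two things worth trying (both sketches; `my_source` is the sourcetype from the question). First, a wildcard search such as `index=my_index sourcetype=my_source asset_name="D. Jo*"` isolates whether the accented character is the only part failing to match. If it is, declaring the source encoding explicitly may help despite the events looking correct in the UI:

```
# props.conf on the instance doing the parsing
# Assumption: the feed is UTF-8; adjust CHARSET to the actual source encoding.
[my_source]
CHARSET = UTF-8
```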

I need help pulling a report for a specific date

Hi Team, Case 1: I want to pull data on a daily basis, starting from the first week. If the 1st falls on a Saturday or Sunday, it should be excluded and the following Monday taken as the start date. Data should then keep coming for 8 days, after which it should stop (freeze) for the remaining days.

Sample event:

```
2018-08-10 15:22:12 S_DATE="20180810", S="Actual", YEAR="2017", PERIOD="Dec", VIEW="ABC", ENTITY="ABC_DEFG_HIJ_K", WELCOME_TEST="INVESTING, Inc._DFG_C", ACCOUNT="123456", ACCOUNT_DESC="1213456 - P/L Intermitent Offset", INTER_ENTITY="1234", VALUE="[Parent Total]", AB="876543", PASE="000000", INTER_RC="100076", COUNT="000000", CUSTOM5="ZXY", SYSTEM_TYPE="Total_Late", DATA="0"
host = hostname
source = logs
sourcetype = new
```
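
The full start/freeze window logic probably needs to live in the time range picker or a scheduled search, but the weekend-exclusion part can be expressed directly in SPL. A sketch (the field name `S_DATE` is from the sample event; the base search is a placeholder):

```
index=foo sourcetype=new
| eval wday=strftime(strptime(S_DATE, "%Y%m%d"), "%a")
| where wday!="Sat" AND wday!="Sun"
```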

How to Blacklist on UF with a TCP input

I have a UF running on a Linux device, with a TCP input. The input is coming from a Graylog forwarder, and all the Windows events come in with a `winlogbeat_` prefix. I want to blacklist Windows events by event code, and normally I would use a blacklist like `blacklist = EventCode="xxxx" Message=...`, however the event code comes in as `winlogbeat_event_id`. I did try this:

```
blacklist1 = winlogbeat_event_id = "4662"
```

This doesn't appear to work. Can someone help with this? Is there any log that shows events being whitelisted or blacklisted? Thank you!
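
As far as I know, `blacklist`/`EventCode` filtering in inputs.conf only applies to native `[WinEventLog://...]` inputs, not to a generic `[tcp://...]` input, and a UF cannot apply regex routing itself. A sketch of the usual alternative, dropping the events at the first full Splunk instance (indexer or heavy forwarder) that parses them — the sourcetype name is an assumption, and the regex should be adjusted to how `winlogbeat_event_id` actually appears in the raw data:

```
# props.conf
[graylog_winlogbeat]
TRANSFORMS-drop_4662 = drop_winlogbeat_4662

# transforms.conf
[drop_winlogbeat_4662]
REGEX = winlogbeat_event_id\D+4662\b
DEST_KEY = queue
FORMAT = nullQueue
```

Events routed to nullQueue are discarded before indexing; I'm not aware of a log that lists each dropped event, so comparing event counts before and after the change is the quickest sanity check.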

I have two macros; if their values are not matching (a!=b), then I have to schedule another search query. How is that possible?

I have two macros; if their values are not matching (a!=b), then I have to schedule another search query. How is that possible in Splunk? Example: macro `a` is 2 (a=2) and macro `b` is 3 (b=3). If a!=b, then we have to schedule the query below:

```
| makeresults | eval x=3+3 ,y=6+6 | table x,y
```
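
A sketch of one way to gate the second query on the macro comparison (assuming both macros expand to numeric literals; the names `a` and `b` are from the question):

```
| makeresults
| eval a=`a`, b=`b`
| where a!=b
| eval x=3+3, y=6+6
| table x y
```

When the macros are equal, the `where` clause drops the row and nothing reaches the downstream `eval`; scheduled as an alert, this would only produce results (and so only trigger) when a!=b.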

Admin Password Change

Is it possible to change the admin account password that we use to log in to the Splunk Cluster Master, Deployment Master, Search Heads & Indexers?
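
Yes; note that admin accounts are local to each instance unless centrally managed, so the change has to be made per instance. A sketch using the CLI (run from $SPLUNK_HOME/bin on each box; the passwords are placeholders):

```
splunk edit user admin -password 'newStrongPassword' -auth admin:currentPassword
```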

Finding the Splunk Instances via Back-End Command

How can I find out, by logging in to a server's back end (whether a Windows or Unix box), if it is an Indexer, Search Head, Cluster Master, Heavy Forwarder, or Deployment Master?
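
One back-end option (a sketch, assuming the default management port and admin credentials) is the server info REST endpoint, whose `server_roles` field lists roles such as cluster_master, cluster_slave, search_head, and deployment_server:

```
curl -k -u admin:pass \
    "https://localhost:8089/services/server/info?output_mode=json" \
    | grep -o '"server_roles":\[[^]]*\]'
```

Much of the same information is visible in $SPLUNK_HOME/etc/system/local/server.conf — for example, a `[clustering]` stanza with `mode = master` or `mode = slave`.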

Splunk License Usage

Recently, I upgraded my Splunk environment from version 6.5.3 to 7.1.2. Since the upgrade, the license has been breaching every day, so I started digging into what is consuming so much and into the source details. Nothing changed in the number of sources or the data: there is no increase in the number of events per file, and even the size of the events remains the same. I am running out of ideas on what to check next for this change in license consumption. I know it is a bit odd, but any suggestions on what to check to conclude on a root cause? Thanks in advance.
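
A sketch for breaking down daily consumption from the license master's own logs (in type=Usage rows, `b` is bytes, `st` sourcetype, `h` host, `idx` index), which should show whether a particular feed jumped right at the upgrade date:

```
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) AS bytes BY st
```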

How can I re-index license-usage.log

Hello, someone prior to me had set the license master to forward logs to the wrong hosts, so after I fixed it I have no historical data for license usage. What's the best way to fix this? Thanks for the assistance!
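
If the old log files still exist on the license master, one option (a sketch; it may duplicate any events that did reach an index, so check for overlap first) is to re-read them with a one-shot input, repeating for each rotated file:

```
splunk add oneshot $SPLUNK_HOME/var/log/splunk/license_usage.log \
    -sourcetype splunkd -index _internal
```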

How to round a number when displaying results in a chart?

I am trying to display the response times of services for the last 7 days in a chart, but I want to round the response time; for example, I only want 2 digits displayed after the decimal point. My query:

```
| chart avg(response_time) over services by Date | foreach * [eval response_time = round(response_time,2)]
```

But the above query doesn't work for me.
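
After `chart ... over services by Date`, the output columns are named after the Date values, so there is no `response_time` field left to round. A sketch using foreach's field token instead (the chart itself is from the question):

```
| chart avg(response_time) over services by Date
| foreach * [ eval <<FIELD>> = if(isnum('<<FIELD>>'), round('<<FIELD>>', 2), '<<FIELD>>') ]
```

`<<FIELD>>` expands to each column name, the single quotes make eval treat it as a field reference, and the `isnum` guard leaves the non-numeric `services` column untouched.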

How to resolve error with bucket in indexer cluster?

This is the error message I saw this morning. When I log in to my cluster master, I can see both indexers:

```
CLUSTER_ADD_PEER_FAILED_guid XXX-XXX-XXX server name=SplkIndx1 ip=x.x.x.x:8089_bucket already added as clustered, peer attempted to add again as standalone. Guid=XXX-XXX-XXX bid=linuxSecure~123~456ABC
```
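
Not a fix, but a sketch of a first diagnostic step on the cluster master, to see how the master currently views the peers and the affected bucket before re-adding anything (the bucket id is the `bid` from the error message; credentials are placeholders):

```
# On the cluster master
splunk show cluster-status

# Bucket-level detail for the bucket named in the error
curl -k -u admin:pass \
    "https://localhost:8089/services/cluster/master/buckets/linuxSecure~123~456ABC"
```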

What is the best practice for monitoring a file directly on the indexer machine(s)?

I need to monitor a file directly on the indexer. I know I can just define an inputs.conf on the indexer itself and read the file. Later on, if I upgrade to an indexer cluster, could this create problems? Would the data inputs from the file be duplicated over the different indexers when reading a file like this (as opposed to receiving data on port 9997 from a UF)? It feels kind of like a hack to push inputs configuration from the cluster master, but I guess the alternative would be to install a UF on the machine alongside the Splunk Enterprise instance for the indexer; then the input would be load balanced as well, though I think this solution would be a bit of an overkill. What is the best practice for doing this?
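
For reference, the single-instance version is just a monitor stanza in the indexer's own inputs.conf (paths and names below are placeholders). If the same stanza were later pushed from the cluster master, each peer would read only its own local copy of that path, so events are not duplicated across peers unless the identical file exists on several of them:

```
# inputs.conf on the indexer (e.g. $SPLUNK_HOME/etc/system/local/)
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp
```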

Best practice for field extractions

Hi, there is some debate in our group regarding best practices for field extractions. We have a feed that has well-defined key-value fields. We also have field extractions set up on the search head for a number of these fields. Is there really a need for the field extractions, since key-value pairs will get picked up automatically? Pros/cons? We use CIM/ES extensively.
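
For context, the automatic pickup is the search-time `KV_MODE` behavior, so explicit extractions for already well-formed pairs are usually redundant; where explicit configuration still earns its keep with CIM/ES is mapping the raw names onto CIM field names. A sketch (stanza and field names are placeholders):

```
# props.conf on the search head
[your_sourcetype]
KV_MODE = auto
FIELDALIAS-cim_src = src_ip AS src
```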