Channel: Questions in topic: "splunk-enterprise"

extracting fields from another field

Hi, we are receiving events in JSON format; the _raw event is shown below. I am trying to extract fields from a particular field through props and transforms, but it is not working.

_raw event:

    [{"command":"gitly-upload-pack tcp://prod-gitly-primary.domain.com:80 {\"repository\":{\"storage_name\":\"default\",\"relative_path\":\"infrastructure/app-config-iam-lint-rules.git\",\"git_object_directory\":\"\",\"git_alternate_object_directories\":[],\"gl_repository\":\"project-139\",\"gl_project_path\":\"infrastructure/app-config-iam-lint-rules\"},\"gl_repository\":\"project-139\",\"gl_project_path\":\"infrastructure/app-config-iam-lint-rules\",\"gl_id\":\"key-id\",\"gl_username\":\"uname\",\"git_config_options\":[],\"git_protocol\":null}","user":"user with id key-7260","pid":6055,"level":"info","msg":"executing git command","time":"2020-02-14T11:23:34+00:00","instance_id":"instanceid","instance_type":"m5.4xlarge","az":"us-east-1b","private_ip":"x.x.x.x","vpc_id":"vpc-id","ami_id":"ami-id","account_id":"12345","vpc":"infra-vpc","log_env":"prod","fluent_added_timestamp":"2020-02-14T11:23:36.397+0000","@timestamp":"2020-02-14T11:23:36.397+0000","SOURCE_REALTIME_TIMESTAMP":"1581679416397075","MESSAGE":"executing git command"}

Below is the value assigned to the command field, which I am trying to split into multiple fields:

    gitly-upload-pack tcp://prod-gitly-primary.domain.com:80 {"repository":{"storage_name":"default","relative_path":"infrastructure/app-config-iam-lint-rules.git","git_object_directory":"","git_alternate_object_directories":[],"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules"},"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules","gl_id":"key-id","gl_username":"uname","git_config_options":[],"git_protocol":null}

It is extracted as expected with the rex search command:

    searchquery | rex field=command "^(?<git_command>[^\s]+)\s(?<git_url>[^\s]+)\s(?<git_json>.*)" | spath input=git_json

I am trying to do the same through props and transforms, but it is not working:

    [sourcetype]
    REPORT-command = morefields_from_command

    [morefields_from_command]
    kv_mode = json
    SOURCE_KEY = command
    REGEX = (?<git_command>\S+)\s(?<git_url>\S+)\s(?<git_json>.*)

My requirement is:

    git_command = gitly-upload-pack
    git_url = tcp://prod-gitly-primary.domain.com:80
    git_json = {"repository":{"storage_name":"default","relative_path":"infrastructure/app-config-iam-lint-rules.git","git_object_directory":"","git_alternate_object_directories":[],"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules"},"gl_repository":"project-139","gl_project_path":"infrastructure/app-config-iam-lint-rules","gl_id":"key-id","gl_username":"uname","git_config_options":[],"git_protocol":null}

Once this is done, I then need to split git_json further, as below:

    storage_name = default
    relative_path = infrastructure/app-config-iam-lint-rules.git
    ..
    ..
    ..
    git_protocol = null
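For reference, a minimal sketch of what the corrected pair of files might look like; the sourcetype stanza name is a placeholder, KV_MODE belongs in props.conf rather than transforms.conf, and since a REPORT transform only applies a regex, the fields inside git_json (storage_name, relative_path, and so on) would still be pulled out at search time with spath:

    # props.conf -- sketch; "your:sourcetype" is a placeholder for the real sourcetype name
    [your:sourcetype]
    KV_MODE = json
    REPORT-command = morefields_from_command

    # transforms.conf -- search-time extraction driven by the already-extracted "command" field
    [morefields_from_command]
    SOURCE_KEY = command
    REGEX = ^(?<git_command>\S+)\s(?<git_url>\S+)\s(?<git_json>.*)

With that in place, a search such as `index=... sourcetype=your:sourcetype | spath input=git_json` would handle the nested JSON; REPORT-class extractions are search-time, so the props/transforms also need to be visible to the search head.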

Can access restrictions be put on a lookup automatically upon creation?

Can access restrictions be put on a lookup automatically upon creation? For example: User A creates a lookup; can that lookup be automatically restricted so that User B cannot search its contents? I know this can be done manually by setting the read permissions (select roles) on the lookup, but is there a way to automatically set restrictive permissions upon creation?
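One avenue that may come close, assuming new lookups are created in a specific app and you are comfortable editing that app's metadata: type-level defaults in the app's .meta file. The stanza and roles below are placeholders, and this governs defaults for objects without their own ACL rather than true per-user restrictions.

    # metadata/local.meta of the app where the lookups get created -- a sketch, not a confirmed solution
    # Type-level defaults apply to lookup files that have no explicit permissions of their own.
    [lookups]
    access = read : [ admin ], write : [ admin ]
    export = none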

getting sum of a multivalue field for each event

Hi, I have a query like the one below:

    index=linux sourcetype=iostat mount="*"

It lists total_ops for each mount of a host across multiple events. I need to get, per host, the sum of total_ops across all of its mounts, taken from the latest event for each mount. Please help.
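A minimal sketch, assuming total_ops is a numeric field and each mount writes its own events: take the latest value per host and mount, then sum them per host.

    index=linux sourcetype=iostat mount="*"
    | stats latest(total_ops) AS latest_total_ops BY host mount
    | stats sum(latest_total_ops) AS total_ops BY host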

Want to show the status of a job as-is until it changes status

Hi Team, I have the following situation. One of our Autosys jobs runs for 20 hours with its status recorded in the logs as RUNNING, but only one event is written with that status, i.e. at the moment it changes from STARTING to RUNNING. I would like the timechart to show the same status for the job across that whole 20-hour duration. I am using the following query but am not able to get there; can you please help?

    index=infra_apps sourcetype=ca:atsys:edemon:txt
    | rename hostname as host
    | fields Job host Autosysjob_time Status
    | lookup datalakenodeslist.csv host OUTPUT cluster
    | mvexpand cluster
    | search Status=STARTING AND cluster=* AND host="*" AND Job=*
    | dedup Job Autosysjob_time host
    | timechart span=5m count(Job) by cluster

Your help is much appreciated.
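A sketch of one common way to carry a sparse status forward across time buckets: chart the latest status per job, then fill the empty buckets with the previous value. Field names, the lookup, and the span are taken from the query above; the rest is an assumption rather than a tested fix.

    index=infra_apps sourcetype=ca:atsys:edemon:txt
    | rename hostname as host
    | lookup datalakenodeslist.csv host OUTPUT cluster
    | mvexpand cluster
    | timechart span=5m latest(Status) AS Status BY Job
    | filldown

filldown repeats the last non-null value into the empty buckets, which is what makes the RUNNING status persist for the full 20 hours on the chart.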

Help Getting Data Out of .xml report

I am trying to pull fields out of an .xml report file so that I can make sense of them and put the info into a dashboard. I want to pull out ruleID, ruleResult, and a result count, keeping them related to each other so I end up with (CVE#, Fail or Fixed, count#). I tried making new fields, but Splunk doesn't see that these fields have any relation to each other, and they just come up as individual values. Sample ruleID values from the report: CVE-2000-1985, CVE-2000-1820, CVE-2000-4568, CVE-2000-1156, CVE-2000-5641, CVE-2000-1985, CVE-2000-1156, CVE-2000-4568.
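One common pattern for keeping positionally related multivalue fields paired is mvzip followed by mvexpand. A sketch, assuming the whole report is a single event; the element and attribute paths (rule-result, @idref, result) are hypothetical, since the XML structure isn't shown.

    sourcetype=xml_report
    | spath path=rule-result{@idref} output=ruleID
    | spath path=rule-result.result output=ruleResult
    | eval pair=mvzip(ruleID, ruleResult)
    | mvexpand pair
    | eval ruleID=mvindex(split(pair, ","), 0), ruleResult=mvindex(split(pair, ","), 1)
    | stats count BY ruleID ruleResult

Because mvzip pairs the two multivalue fields by position before the expansion, each expanded row keeps the ruleID together with its own ruleResult, which is what makes the final count relational.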

Logs not picking up sourcetype from props.conf in apps/local folder on heavy forwarder

Hi, we want to parse the logs on the HF before they are forwarded to the indexers. Logs are forwarded from a universal forwarder to a heavy forwarder. I have set the sourcetype in inputs.conf on the UF and created a props.conf with a stanza for that same sourcetype (EXTRACT = regex). The logs come in with the sourcetype I set in inputs.conf, but they are not picking up the props.conf where the regex is, so the logs are not parsed. I tested the props.conf in a test environment and it parsed correctly, but that was by uploading a file. Am I missing anything here?
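For reference, a sketch of the pair of files described above; the monitor path, sourcetype name, and regex are placeholders. One detail that may be relevant: EXTRACT-* in props.conf is a search-time setting, so it takes effect where the search runs rather than while the heavy forwarder is parsing the data.

    # inputs.conf on the UF -- hypothetical monitor stanza and sourcetype name
    [monitor:///var/log/myapp/app.log]
    sourcetype = my:custom:sourcetype
    index = main

    # props.conf in etc/apps/<app>/local -- same sourcetype stanza, search-time extraction
    [my:custom:sourcetype]
    EXTRACT-myfields = ^(?<field_a>\S+)\s+(?<field_b>\S+)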

Streamstats Time Sum When Specific Values

Hi All, I'm stumped on the following search. I'm trying to track the amount of time a support ticket is assigned to a specific support team and status, over the lifecycle of the ticket. The streamstats below works great as long as the ticket doesn't get assigned to the same team and status twice (assigned out and then back in); in that case it also sums the time in between. I only want to sum the time spent in the team and status, not the time in between when the ticket is assigned elsewhere.

    | dedup ticket_id, _time, ticket_arvig_status
    | eval temp2=id+","+ticket_status
    | search (ticket_team="TIER 2" AND ticket_status="tier 2 needed")
    | streamstats range(_time) AS StatusDuration by ticket_id global=f window=2
    | stats sum(StatusDuration) AS TotalStatusDuration by ticket_id, ticket_status, ticket_team
    | stats avg(TotalStatusDuration) as averageage by ticket_id

Any help would be appreciated!
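A sketch of one way to attribute each interval to the state the ticket was actually in during that interval, so that time spent outside the team/status is dropped. Field names are taken from the search above; everything else is an assumption.

    ... base search over all status-change events ...
    | sort 0 ticket_id _time
    | streamstats current=f window=1 last(_time) AS prev_time last(ticket_status) AS prev_status last(ticket_team) AS prev_team by ticket_id
    | eval StatusDuration = _time - prev_time
    | where prev_team="TIER 2" AND prev_status="tier 2 needed"
    | stats sum(StatusDuration) AS TotalStatusDuration by ticket_id

Each event's gap to the previous event is credited to the previous event's team and status, so intervals where the ticket sat with another team never make it into the sum.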

TA-microsoft-sysmon on forwarders (UFs) - add an outputs.conf?

I'm a bit new to deploying forwarders on the endpoints I manage (I'm not new to Splunk). Many guides I see (including the install instructions for this Sysmon TA) state that you should deploy this TA onto your forwarders. To do this, the user needs to manually create an outputs.conf file (with the indexer IP/DNS) and place it in \TA-microsoft-sysmon\default\. **So why is there not a default/blank outputs.conf file located in \TA-microsoft-sysmon\default\ from the start?** (Or even a blank file with just a # comment line? I get that the devs don't know the IP/DNS of our indexers.) I'm not complaining about this; I'm asking in case I'm missing something, so that I can better understand, because it seems to me that a majority of users of this TA will be deploying it on forwarders as well as their indexer. So I'm wondering why there is not an outputs.conf "placeholder". Thanks!
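For reference, a minimal sketch of the kind of outputs.conf those guides have you create; the group name, host, and port are placeholders.

    # outputs.conf -- sketch; point the server value at your indexer's address and receiving port
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = indexer.example.com:9997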

How can I configure the UF to take the server's hostname from another path and not from the default?

I have two managed Linux servers running the universal forwarder, and both have the same host name. When you check the "Forwarder Management" menu, only one server appears at a time, which is why I want to differentiate them by hostname.
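A sketch of one way to give each forwarder its own identity without changing the OS hostname; the names below are placeholders, and whether your Forwarder Management view keys on clientName or serverName is worth verifying in your version.

    # $SPLUNK_HOME/etc/system/local/server.conf on the UF -- name of the Splunk instance itself
    [general]
    serverName = linux-app-01

    # $SPLUNK_HOME/etc/system/local/inputs.conf -- host value stamped onto forwarded events
    [default]
    host = linux-app-01

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf -- name reported to the deployment server
    [deployment-client]
    clientName = linux-app-01

A restart of the forwarder is needed after the change.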

Build table by char position in string

Hi, I've got this event:

    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="FPPPPPPFPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP"

Each position of the Pass value gives a pass or fail for one position (1..80, but it can also be only 1..45). For example, Pass="FPPF" means:

    Position_1 = Fail
    Position_2 = Pass
    Position_3 = Pass
    Position_4 = Fail

Now I want to build a table showing how many fails each position has across all events. How can I do this? One possibility could be to use mvexpand and build more events, for example turning this:

    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="FPPF"

into these events:

    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Fail" Position="1"
    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Pass" Position="2"
    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Pass" Position="3"
    2020/02/14/16:12:28:872 MachineNumber="K003991_HT" Pass="Fail" Position="4"

...but how is it possible to do this? Or is there another way to build my table? Thanks!
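A sketch of the per-character expansion described above, done with multivalue functions instead of rebuilding raw events; field names are taken from the post, the rest is an assumption.

    ... base search ...
    | rex field=Pass max_match=0 "(?<flag>[PF])"
    | eval pos=mvrange(1, mvcount(flag)+1)
    | eval pair=mvzip(pos, flag)
    | mvexpand pair
    | eval Position=mvindex(split(pair, ","), 0), Result=if(mvindex(split(pair, ","), 1)="F", "Fail", "Pass")
    | where Result="Fail"
    | stats count AS fails BY Position
    | sort 0 num(Position)

rex with max_match=0 turns the Pass string into one multivalue entry per character, mvrange numbers them, and mvzip/mvexpand produce one row per position, so the final stats counts fails per position regardless of whether the string is 45 or 80 characters long.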

SAI app/addon and AWS app/addon on same instance

Hi, I want to use the SAI app/add-on and the AWS app/add-on on the same single-instance deployment running Splunk Enterprise 8.0.1. Which versions are compatible with each other? Based on the documentation it looks like the current ones are not compatible. Please advise on a combination that would not mess up the deployment.

HEC not giving JSON output when using python

I am fairly new to Python and I am trying to use a Python script to get the health of my HEC in JSON format. When I use a curl command like the one below:

    curl -k -s -u 'username:password' -X GET https://myServername:8088/services/collector/health

I get this response:

    {"text":"HEC is healthy","code":17}

But when I use the same request in Python to get a JSON result like the one above, it gives me an error saying "No JSON object could be decoded", and when I comment out the json.loads() call in the script below, the output is an HTML page saying "The request your client sent was too large". It might be sending me an HTML response no matter what. Can you please suggest how to get a JSON response from port 8088 for the /services/collector/health endpoint? Python script below:

    #!/usr/bin/python
    import json
    import os
    import re
    import sys
    import urllib
    import httplib2
    import credentials
    import requests

    username = credentials.username
    baseurl = credentials.baseurl
    password = credentials.password
    hecBaseUrl = 'https://myServer:8088'

    myhttp = httplib2.Http(disable_ssl_certificate_validation=True)

    try:
        cmdurl = '/services/auth/login'
        serverResponse = myhttp.request(baseurl + cmdurl, 'POST', headers={},
            body=urllib.urlencode({'username': username, 'password': password, 'output_mode': 'json'}))[1]
        # print serverResponse
        parsed_json = json.loads(serverResponse)
        sessionKey = parsed_json['sessionKey']
        print "sessionKey is %s" % sessionKey

        hecUrl = '/services/collector/health'
        totalUrl = (hecBaseUrl + hecUrl)
        print totalUrl
        hecServerResponse = myhttp.request(hecBaseUrl + hecUrl, 'GET', headers={},
            body=urllib.urlencode({'output_mode': 'json'}))[1]
        parsed_json_hec = json.loads(hecServerResponse)
        print parsed_json_hec
        print hecServerResponse
    except Exception, err:
        sys.stderr.write('Error: %s\n' % str(err))
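For comparison, a minimal sketch using the requests library that is already imported in the script above. It assumes the health endpoint can be called without a session key, as the working curl example suggests, and it skips certificate verification the same way the original script does.

    import requests

    resp = requests.get(
        "https://myServer:8088/services/collector/health",  # same endpoint as the curl example
        params={"output_mode": "json"},
        verify=False,   # mirrors disable_ssl_certificate_validation in the script above
        timeout=10,
    )
    print(resp.status_code)
    print(resp.json())  # expected to look like {"text": "HEC is healthy", "code": 17}

One difference worth noting: this sends output_mode as a query-string parameter on the GET rather than as a request body, which is closer to what the curl command does.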

Field extraction for Log File Entries with Pipe delimiters

Hi, I have a log file I am monitoring. Log file entries have pipe-delimited fields, as below.

**LE Variation 1:**

    [default task-2] 2020-01-24 13:10:54,598 INFO sample.sample.sample.sample.sample.sample.StatLogger - ABCStat|XYZ|11111111111111111111|http://www.abc.com/XYZ/123/ABCD/submission|2020-01-24T13:10:52.414Z|2020-01-24T13:10:54.595Z|2181|0|3909|REQSTI003000004:Invalid SOAP message format,Invalid SOAP message format: abc-def.5.2.2.2.2: The value '10.1' of element 'ns1:WSDLVersionNum' does not match the {value constraint} value '10.3'.|

**LE Variation 2:**

    [default task-11] 2020-01-23 12:45:01,851 INFO sample.sample.sample.sample.sample.sample.StatLogger - ABCStat|XYZ|11111111111111111111|http://www.abc.com/XYZ/123/ABCD/submission|2020-01-24T13:10:52.414Z|2020-01-24T13:10:54.595Z|2181|0|3909|success|

Both variations exist in the log and I need both. The only difference between the two, for distinction, is that |success| marks a successful transaction and anything other than |success| is a failure. I need the fields extracted using regex or eval in a Splunk search, please. You can name them as samples and I will rename them at my end as needed. Thanks in advance.
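A sketch using sample field names, as requested; the names are placeholders, and the regex assumes exactly ten pipe-delimited values after "StatLogger - " with a trailing pipe, as in both variations above.

    ... base search ...
    | rex "StatLogger - (?<stat_type>[^|]*)\|(?<system>[^|]*)\|(?<submission_id>[^|]*)\|(?<endpoint_url>[^|]*)\|(?<start_time>[^|]*)\|(?<end_time>[^|]*)\|(?<val1>[^|]*)\|(?<val2>[^|]*)\|(?<val3>[^|]*)\|(?<status_message>[^|]*)\|"
    | eval outcome=if(status_message=="success", "success", "failure")

The final eval gives a single outcome field to filter or report on, which covers the "anything other than success is a failure" requirement.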

How to split data from old indexer to new indexers.

I have a setup right now where we have 1 indexer in our test environment and we are putting 2 new indexers in the production environment. I need to know if I move all the data from the old indexer and split it evenly between the new indexers, will I run into any errors on the two indexers?

DHCP Field Extractions

I installed the Microsoft Windows DHCP add-on for Splunk on my search heads and am successfully indexing DHCP events, but the data doesn't seem to be CIM compliant per the CIM Validator app. Here are my configs.

inputs.conf on the **forwarder**:

    [monitor://C:\dhcplogs]
    sourcetype = dhcp
    crcSalt = <SOURCE>
    alwaysOpenFile = 1
    disabled = false
    whitelist = DhcpSrvLog*
    index = dhcp

eventtypes.conf on the **search head**:

    [dhcp]
    search = index=dhcp sourcetype=dhcp

    [dhcp_start]
    search = index=dhcp sourcetype=dhcp (id=10 OR id=11 OR id=13)

    [dhcp_stop]
    search = index=dhcp sourcetype=dhcp (id=12 OR id=16 OR id=17)

props.conf on the **search head**:

    [dhcp]
    TRANSFORMS-dhcp_strip_headers = dhcp_strip_headers
    REPORT-dhcplog = REPORT-dhcplog
    LOOKUP-dhcp_id = dhcp_id id OUTPUTNEW level signature action
    LOOKUP-quarantine = quarantine_result qresult OUTPUTNEW quarantine_info
    FIELDALIAS-dhcp_cim = ip AS dest_ip, mac AS raw_mac, nt_host AS dest_nt_host
    EVAL-dest_mac = lower(case(match(raw_mac, "^\w{12}$"), rtrim(replace(raw_mac, "(\w{2})", "\1:"), ":"), 1==1, replace(raw_mac, "-|\.|\s", ":")))
    EVAL-dest = coalesce(nt_host, ip, lower(case(match(raw_mac, "^\w{12}$"), rtrim(replace(raw_mac, "^(\w{2})", "\1:"), ":"), 1==1, replace(raw_mac, "-|\.|\s", ":"))))

tags.conf on the **search head**:

    [eventtype=dhcp]
    dhcp = enabled
    network = enabled
    session = enabled
    windows = enabled

    [eventtype=dhcp_start]
    start = enabled

    [eventtype=dhcp_stop]
    stop = enabled

transforms.conf on the **search head**:

    [dhcp_id]
    batch_index_query = 0
    case_sensitive_match = 0
    filename = dhcp_ids.csv
    max_matches = 1

    [dhcp_strip_headers]
    REGEX = ^(?:ID|#)
    DEST_KEY = queue
    FORMAT = nullQueue

    [REPORT-dhcplog]
    DELIMS = ","
    FIELDS = "id","date","time","description","ip","nt_host","mac","user","transaction_id","qresult","probation_time","correlation_id","dhcid","vendorclass_hex","vendor_ascii","userclass_hex","userclass_ascii","relay_agent","dns_reg_error"

    [quarantine_result]
    batch_index_query = 0
    case_sensitive_match = 1
    filename = dhcp_quarantine.csv
    max_matches = 1

Thanks for any input.
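As a quick sanity check of what actually comes out at search time, a sketch that tables the aliased and eval'd CIM fields defined in the props above (the search itself is only an example):

    index=dhcp sourcetype=dhcp
    | head 100
    | table _time eventtype tag id signature action dest_ip raw_mac dest_mac dest_nt_host dest

If the eventtype, tag, or dest_* columns come back empty here, the issue is likely with these knowledge objects not being applied rather than with the add-on's data itself.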

Can one dynamically set "charting.data.count" in a splunkjs ChartView and have it re-render?

I am creating a JavaScript app outside of Splunk and trying to dynamically reset the number of points that get charted in a ChartView instance. I have tried:

    mychart.settings("charting.data.count", );
    mychart.render();

But it is not having any visible effect. Am I missing something, or can this setting not be dynamically updated?
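A sketch of the pattern suggested by the Splunk Web Framework docs, where a view's options live on a settings model rather than being set by calling settings() as a function; the value 100 is a placeholder, and whether the explicit render() is still needed may depend on the framework version.

    // "mychart" is the existing ChartView instance from the question above
    mychart.settings.set("charting.data.count", 100);  // update the option on the settings model
    mychart.render();                                  // force a redraw if it does not happen automatically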

licensing in a distributed deployment

We've recently migrated to a distributed deployment with a license server. A recent surge in events caused our license to be exceeded, and we received a reset license, which was installed on the license master. However, we still can't search on the SHC due to licensing errors. After installing a license on the master, what is needed to enable searching across the cluster?

How to find the durations between three transaction events?

Hi, how can I find the in-between durations across three transaction events? For example, duration1 between mod1 and mod2, and duration2 between mod2 and mod3. My current query takes a while because I'm appending two searches; how can I improve it?

Example data:

    user   type  time
    user1  mod1  10:00
    user1  mod2  11:00
    user1  mod3  12:00

Current code:

    base search ...
    | transaction user startswith=eval(status="mod1") endswith=eval(status="mod2")
    | rename duration as duration1
    | append
        [ base search ...
        | transaction user startswith=eval(status="mod2") endswith=eval(status="mod3")
        | rename duration as duration2 ]
    | stats values(duration1), values(duration2) by user
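A sketch of a single-pass alternative that avoids both append and transaction. It assumes the status field used in the transaction options above, one shared base search, and that only consecutive mod1 to mod2 and mod2 to mod3 steps matter.

    base search ...
    | sort 0 user _time
    | streamstats current=f window=1 last(_time) AS prev_time last(status) AS prev_status by user
    | eval duration1=if(prev_status="mod1" AND status="mod2", _time - prev_time, null())
    | eval duration2=if(prev_status="mod2" AND status="mod3", _time - prev_time, null())
    | stats values(duration1) AS duration1 values(duration2) AS duration2 by user

Because everything happens in one pass over a single base search, the events are only retrieved once, which is usually where the append version loses most of its time.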

how to troubleshoot connection issues from heavy forwarder to syslog receiver?

I have a heavy forwarder on which I set up outputs.conf as follows:

    [tcpout]
    defaultGroup = indexer_group,forwarders_syslog
    useACK = true

    [tcpout:indexer_group]
    server = indexer_ip_address:indexer_port
    clientCert = xxxxxxxx
    maxQueueSize = 20MB
    sslPassword = xxxxxxxxx

    [tcpout:forwarders_syslog]
    server = syslog_ip:syslog_port
    clientCert = xxxxxxx
    maxQueueSize = 20MB
    sslPassword = xxxxxxxx
    blockOnCloning = false
    dropClonedEventsOnQueueFull = 10
    useACK = false

The heavy forwarder is forwarding logs to indexer_group successfully, but I am seeing the following errors in splunkd.log when it tries to forward the logs to the syslog server:

    WARN TcpOutputProc - Cooked connection to ip=syslog_ip:syslog_port timed out
    ERROR TcpOutputFd - Connection to host=syslog_ip:syslog_port failed
    WARN TcpOutputFd - Connect to syslog_ip:syslog_port failed. Connection refused

What are the troubleshooting steps to identify the root cause? Is there any way to check on a unix server whether the heavy forwarder is able to send to the receiver on a specific port?
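A sketch of the basic checks that can be run from the shell to narrow down a refused or timed-out connection; the host and port are the placeholders used above. Separately, the "Cooked connection" wording in the warning refers to Splunk-to-Splunk framing, and outputs.conf has a sendCookedData setting that is commonly set to false for non-Splunk receivers such as a syslog server, which may be worth reviewing.

    # From the heavy forwarder host: test TCP reachability of the receiver
    nc -vz syslog_ip syslog_port          # or: telnet syslog_ip syslog_port

    # Watch the forwarder's own output-related log messages
    tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "tcpout|syslog_ip"

    # On the syslog server: check whether anything is listening on the expected port
    ss -ltnp | grep syslog_port           # or: netstat -ltnp | grep syslog_port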