Channel: Questions in topic: "splunk-enterprise"

How can I count failures in neighboring events matching a rex?

I have a question similar to https://answers.splunk.com/answers/2602 and https://answers.splunk.com/answers/448796. I would like to get a search match (for which I define a field) and then search the subsequent daemon log lines for another pattern. If the second pattern repeats x times, save this field as an Error; otherwise (if the pattern appears fewer than x times but more than 0), it's a Warning. If the next line contains neither an Error nor a Warning, it's a Pass. The daemon is [atftpd][1] and its logs of interest are:

```
Sep 25 10:58:07 caffeine atftpd[6596]: Serving kernels/vmlinuz to IP:1668
Sep 25 10:58:07 caffeine atftpd[6596]: Serving kernels/vmlinuz to IP:1669
Sep 25 10:58:23 caffeine atftpd[6596]: timeout: retrying...
Sep 25 10:58:28 caffeine atftpd[6596]: timeout: retrying...
Sep 25 10:58:33 caffeine atftpd[6596]: timeout: retrying...
Sep 25 10:58:38 caffeine atftpd[6596]: timeout: retrying...
Sep 25 10:58:43 caffeine atftpd[6596]: timeout: retrying...
Sep 25 11:08:07 caffeine atftpd[6596]: Serving kernels/vmlinuz to anotherIP:1211
```

There is a deterministic pattern to the `timeout: retrying...` entries (every 5 seconds) and also a configurable retry count (5). So if I see a `Serving...` line followed by exactly 5 `retrying...` lines, I know for sure it's a failure. My search so far saves the IPs and the errors in some fields, but the transaction facility in Splunk returns only the first hit of "timeout":

```
sourcetype=syslog AND atftpd AND caffeine
| rex field=_raw "Serving.* to (?<ip>[0-9]*.[0-9]*.[0-9]*.[0-9]*)"
| rex field=_raw ".* (?<error>timeout).*"
| transaction endswith="timeout: retrying..." maxcount=5
```

I had assumed that maxcount=5 gave the count of events matching the transaction search, not the total line count of the previous search.

[1]: https://linux.die.net/man/8/atftpd
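For reference, here is a minimal SPL sketch of the Error/Warning/Pass classification, assuming `transaction` can anchor on the `Serving` lines; the field names `ip`, `retries`, and `status` are illustrative, not from the original search:

```
sourcetype=syslog atftpd caffeine
| rex field=_raw "Serving .* to (?<ip>\d+\.\d+\.\d+\.\d+)"
| transaction startswith="Serving" maxevents=6
| eval retries=mvcount(split(_raw, "timeout: retrying"))-1
| eval status=case(retries>=5, "Error", retries>0, "Warning", true(), "Pass")
| table _time ip retries status
```

Each transaction starts at a `Serving` line and collects the events that follow it, so counting the `timeout: retrying` occurrences inside the transaction's concatenated `_raw` gives the retry count per request.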

Tripwire TA that integrates with Splunk Enterprise Security?

The last post I see on this subject is almost three years old. Does anyone know if there is a Tripwire TA that integrates with the Splunk Enterprise Security application? We follow the best practice of not installing additional apps onto our Splunk Enterprise Security cluster, so I'm not interested in whether there is an app that CAN be installed in parallel with Splunk ES. Rather, I'm looking for a TA that tags the Tripwire data correctly and integrates it with Splunk ES.

How should I go about using the geospatial lookup to add fields to my root event dataset?

Using Splunk 6.6, I tried for the first time to create a data model. My root event dataset consists of events that have latitude and longitude fields. I have a geospatial lookup with all the states of Brazil, and I want to use it to add a State field to my root event dataset. In the data model edit form, I clicked "Add Field" and saw the option "Lookup", which I thought would solve the problem. However, I did not find my geospatial lookup listed in the lookup options. Looking into the Splunk documentation, I found this statement:

> The Datasets listing page displays two categories of lookup datasets: lookup table files and lookup definitions. It lists lookup table files for .csv lookups and lookup definitions for .csv lookups and KV Store lookups. Other types of lookups, such as external lookups and geospatial lookups, are not listed as datasets.

So, my question is: how should I go about using the geospatial lookup to add fields to my root event dataset? Any ideas? Thank you in advance.
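One search-time workaround worth sketching: a geospatial lookup can be invoked with the `lookup` command to return a `featureId` for a latitude/longitude pair, so the state could be resolved in the data model's base search instead of as a lookup field. The lookup name `geo_br_states` and the index are assumptions:

```
index=my_index
| lookup geo_br_states longitude, latitude
| rename featureId AS State
```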

Use REST API to find and run an adaptive response action on a notable event

Hi, I am trying to reproduce "Create an adaptive response action for Enterprise Security" (http://docs.splunk.com/Documentation/AddonBuilder/2.0.0/UserGuide/CreateAlertActions#Create_an_adaptive_response_action_for_Enterprise_Security) using the REST API in Python, but I could not find any info. I've found info on updating notable events (https://www.splunk.com/blog/2015/04/13/how-to-edit-notable-events-in-es-programatically.html), but not on adding/attaching/running an adaptive response for a certain event (I guess via event_id). I'm trying to automate some Splunk interaction, and I would like to use Selenium for it. Thanks a lot for your help; it will be fully appreciated.
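Not confirmed as the ES-supported path, but one possible starting point: adaptive response actions are implemented as modular alert actions, and those can be triggered from a search with the `sendalert` command. In this sketch, the action name `myaction`, its parameter, and the event_id placeholder are all hypothetical:

```
index=notable event_id="<the_event_id>"
| sendalert myaction param.some_param="some value"
```

A search like this can itself be dispatched through the REST search/jobs endpoint, which would avoid driving the UI with Selenium.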

How can I create a bar chart comparing active unique users vs. total users by month?

How do I create a comparison bar chart of active unique users vs. total users by month on a Splunk search head? The two counts come from separate data sources.
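A minimal sketch of one way to line the two sources up by month, assuming `index=activity` holds activity events and `index=accounts` holds all users, each with a `user` field (all names are assumptions):

```
index=activity
| timechart span=1mon dc(user) AS active_users
| appendcols
    [ search index=accounts
      | timechart span=1mon dc(user) AS total_users ]
```

Rendered as a column chart, this gives one pair of bars per month; `appendcols` works here because both `timechart` calls produce one row per month over the same time range.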

How to set the earliest_time variable to month/day/year format in an HTML dashboard?

I have an HTML table, and the search for the table has different fields, for example:

```
var search1 = new SearchManager({
    "id": "search1",
    "cancelOnUnload": true,
    "latest_time": "$latest$",
    "status_buckets": 0,
    "earliest_time": "0",
    "search": " | inputlookup kvstore_lookup | eval KeyID = _key | table KeyID, CustID, CustName, CustStreet, CustCity, CustState, CustZip",
    "app": utils.getCurrentApp(),
    "auto_cancel": 90,
    "preview": true,
    "runWhenTimeIsUndefined": false
}, {tokens: true});
```

I am wondering if there is a way to set that "earliest_time" field to m/d/y:00:00:00. I later found that it works if I put the earliest and latest into the search string itself:

```
search1.settings.attributes.search = "...earliest=\"09/18/2017:00:00:00\" latest=now | table ...";
```

That works, but I don't want to do it that way; I would rather set the earliest_time variable to that format. When I try that, it says invalid earliest time format.
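For what it's worth, the search jobs API behind SearchManager accepts earliest_time as epoch seconds or (by default) as an ISO-8601 timestamp, rather than the SPL-style %m/%d/%Y:%H:%M:%S form. A sketch of the same setting (the date values are illustrative):

```
// ISO-8601 form accepted by the search jobs endpoint
search1.settings.set("earliest_time", "2017-09-18T00:00:00.000-05:00");
// ...or epoch seconds (value illustrative)
search1.settings.set("earliest_time", "1505710800");
```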

Help with drilldown and tokens on a dashboard

I have a dashboard with a series of different panels on it: some for user-specific information, process info, hardware, etc. The top of my dashboard looks like this: ![alt text][1] This is an example of a table that I have. The data comes from the standard Splunk_TA_nix `top.sh` script: ![alt text][2] What I'm trying to do is have the inputs load each token one at a time upon clicking. So let's say I click on the nessusd process: that process will load in the input above, leaving the other two blank (this part I have). Next, if I click on a host (mind, these are all different hosts; I simply anonymized the data), both the host AND the old process value would be passed. Then if I were to click a user, the user value AND the old process value AND the old host value would be passed. The part I'm having trouble with is retaining the old $click.value2$ values in the second and third clicks. Here's the current simple XML I'm using:

```
<panel>
  <table>
    <title>(CPU) Services by User (top)</title>
    <search>
      <query>index=nix sourcetype=top host=$host$ COMMAND=$process$ USER="$user$"
| stats avg(pctCPU) as CPU avg(pctMEM) as MEM by USER process_name host
| eval CPU=round(CPU,2) | eval MEM=round(MEM,2)
| sort - CPU | head 10
| eval CPU=(CPU.""."%") | eval MEM=(MEM.""."%")
| eval host="myHost"</query>
      <earliest>$hist.earliest$</earliest>
      <latest>$hist.latest$</latest>
    </search>
    <drilldown>
      <condition field="host">
        <link>/app/myApp/test/?form.host=$click.value2$</link>
      </condition>
      <condition field="process_name">
        <link>/app/myApp/test/?form.process=$click.value2$</link>
      </condition>
      <condition field="USER">
        <link>/app/myApp/test/?form.user=$click.value2$</link>
      </condition>
    </drilldown>
  </table>
</panel>
```
This makes the first click load every time, but the second click always loses the previous field value. [1]: /storage/temp/217644-inputs1.png [2]: /storage/temp/217645-process1.png
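One way to keep the earlier selections (a sketch; it assumes the target form tokens are form.host, form.process, and form.user, and that the dashboard's inputs set those tokens): pass the already-set tokens along in every drilldown link, so each click only overwrites its own field:

```
<drilldown>
  <condition field="host">
    <link>/app/myApp/test/?form.host=$click.value2$&amp;form.process=$form.process$&amp;form.user=$form.user$</link>
  </condition>
  <condition field="process_name">
    <link>/app/myApp/test/?form.process=$click.value2$&amp;form.host=$form.host$&amp;form.user=$form.user$</link>
  </condition>
  <condition field="USER">
    <link>/app/myApp/test/?form.user=$click.value2$&amp;form.host=$form.host$&amp;form.process=$form.process$</link>
  </condition>
</drilldown>
```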

How to audit security logs to find password compromises?

We audit the security logs looking for password compromises. A user will put the password in as the username, resulting in a 4625 event. The user will then log in within minutes on the same machine, showing a 4624. We then have both the user name and the password. We currently use the command below; it shows us the password compromise and the workstation name. I am trying to figure out how to add a line to show the 4624s within 120 seconds of a failed logon.

```
4625 | stats count by Account_Name, Workstation_Name | sort - Account_Name
```
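A sketch of one way to pair each 4625 with a 4624 on the same workstation within 120 seconds, using `transaction` (the sourcetype and field names assume the standard Windows security event extractions):

```
sourcetype="WinEventLog:Security" (EventCode=4625 OR EventCode=4624)
| transaction Workstation_Name maxspan=120s startswith="EventCode=4625" endswith="EventCode=4624"
| search EventCode=4625 EventCode=4624
| table _time Account_Name Workstation_Name EventCode
```

After `transaction`, EventCode is multivalued, so the trailing `search` keeps only transactions that contain both the failure and the success.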

REST modular input JSON custom handler for AWS Pricing Data

Having a bit of a struggle. AWS has a pricing API available at: [AWS JSON Pricing API URL][1] Because of how the JSON is formatted, it looks like a custom handler is needed. A snippet of the JSON is:

```
{
  "formatVersion" : "v1.0",
  "disclaimer" : "This pricing list is for informational purposes only. All prices are subject to the additional terms included in the pricing pages on http://aws.amazon.com. All Free Tier prices are also subject to the terms included at https://aws.amazon.com/free/",
  "offerCode" : "AmazonEC2",
  "version" : "20170921013650",
  "publicationDate" : "2017-09-21T01:36:50Z",
  "products" : {
    "76V3SF2FJC3ZR3GH" : {
      "sku" : "76V3SF2FJC3ZR3GH",
      "productFamily" : "Compute Instance",
      "attributes" : {
        "servicecode" : "AmazonEC2",
        "location" : "Asia Pacific (Mumbai)",
        "locationType" : "AWS Region",
        "instanceType" : "d2.4xlarge",
        "currentGeneration" : "Yes",
        "instanceFamily" : "Storage optimized",
        "vcpu" : "16",
        "physicalProcessor" : "Intel Xeon E5-2676v3 (Haswell)",
        "clockSpeed" : "2.4 GHz",
        "memory" : "122 GiB",
        "storage" : "12 x 2000 HDD",
        "networkPerformance" : "High",
        "processorArchitecture" : "64-bit",
        "tenancy" : "Host",
        "operatingSystem" : "Windows",
        "licenseModel" : "No License required",
        "usagetype" : "APS3-HostBoxUsage:d2.4xlarge",
        "operation" : "RunInstances:0002",
        "ecu" : "56",
        "enhancedNetworkingSupported" : "Yes",
        "normalizationSizeFactor" : "32",
        "preInstalledSw" : "NA",
        "processorFeatures" : "Intel AVX; Intel AVX2; Intel Turbo",
        "servicename" : "Amazon Elastic Compute Cloud"
      }
    },
    "G2N9F3PVUVK8ZTGP" : {
      "sku" : "G2N9F3PVUVK8ZTGP",
      "productFamily" : "Compute Instance",
      "attributes" : {
        "servicecode" : "AmazonEC2",
        "location" : "Asia Pacific (Seoul)",
        "locationType" : "AWS Region",
        "instanceType" : "i2.xlarge",
        "currentGeneration" : "No",
        "instanceFamily" : "Storage optimized",
        "vcpu" : "4",
        "physicalProcessor" : "Intel Xeon E5-2670 v2 (Ivy Bridge)",
        "clockSpeed" : "2.5 GHz",
        "memory" : "30.5 GiB",
        "storage" : "1 x 800 SSD",
        "networkPerformance" : "Moderate",
        "processorArchitecture" : "64-bit",
        "tenancy" : "Host",
        "operatingSystem" : "Windows",
        "licenseModel" : "No License required",
        "usagetype" : "APN2-HostBoxUsage:i2.xlarge",
        "operation" : "RunInstances:0102",
        "ecu" : "14",
        "enhancedNetworkingSupported" : "Yes",
        "normalizationSizeFactor" : "8",
        "preInstalledSw" : "SQL Ent",
        "processorFeatures" : "Intel AVX; Intel Turbo",
        "servicename" : "Amazon Elastic Compute Cloud"
      }
    ...
```

I added a handler to responsehandlers.py as:

```python
# handler for AWS pricing API call, split and print out all product stanzas
class AWSPriceHandler:

    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            for product in output["products"]:
                print_xml_stream(json.dumps(product))
        else:
            print_xml_stream(raw_response_output)
```

But that isn't working for me. It doesn't ingest any events at all when referring to that handler. Take the handler out, and the ingest is a single event that gets truncated at the line length limit. Any suggestions?

[1]: https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.json
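One likely culprit to check (a sketch of the fix, keeping the `json` and `print_xml_stream` names that responsehandlers.py already provides): iterating `output["products"]` directly yields only the SKU key strings, not the product objects. Iterating `.items()` emits each product stanza as its own event:

```python
# handler for AWS pricing API call; split and print out all product stanzas
class AWSPriceHandler:

    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            output = json.loads(raw_response_output)
            # "products" maps SKU -> product dict, so iterate the pairs,
            # not the dict itself (which yields only the SKU strings)
            for sku, product in output["products"].items():
                print_xml_stream(json.dumps(product))
        else:
            print_xml_stream(raw_response_output)
```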

New Splunk install -- why am I getting an error saying that my license has expired?

I've just created an account and installed the free version of Splunk Enterprise. However, when I try to log in, I get an error saying that my license has expired. How can that be possible? I mean, I've just created my account. What am I doing wrong? Thanks a lot. I just want a free version of Splunk to try it.

Help with search head cluster master error -- error accessing URI

We ran into the following error after creating two saved searches. We have 3 search heads and 2 indexers. From a search head's splunkd.log:

```
09-06-2017 10:48:42.891 -0400 ERROR SHCMaster - error accessing uri=https://**serverip**:8089/servicesNS/**userid**/**appname**/saved/searches/**searchname**/remove_suppression?output_mode=json, statusCode=502, description=Bad Request
```

I substituted our internal info with the starred placeholders. The searches didn't produce the alerts. I cloned them, and the cloned searches worked. What could have caused this? Thank you.

earliest_time not working in REST POST data, but working in search

I am sending a POST request to the Splunk REST 'services/search/jobs' endpoint. If I submit with the 'earliest_time' parameter as a relative string like -2d, it works fine. But if I use an absolute date-time string like "9/24/2017:10:00:00", it comes back with 0 results. However, if I don't pass the earliest_time parameter and instead embed the earliest in the query itself, like earliest="9/24/2017:10:00:00", it works fine. Is this a known bug, or am I doing something wrong?
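This appears consistent with the documented behavior that a non-epoch earliest_time must match the job's time_format, which defaults to ISO-8601 rather than the SPL %m/%d/%Y:%H:%M:%S form. A sketch of passing the two together with Python requests (host, credentials, and search are illustrative):

```python
import requests

resp = requests.post(
    "https://localhost:8089/services/search/jobs",
    auth=("admin", "changeme"),
    verify=False,
    data={
        "search": "search index=_internal | head 10",
        # declare the time format, then pass earliest_time in that format
        "time_format": "%m/%d/%Y:%H:%M:%S",
        "earliest_time": "09/24/2017:10:00:00",
        "output_mode": "json",
    },
)
print(resp.json())
```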

How can I put results of Windows updates per host on a map by location?

I have a query for Windows updates per host, but I need to put those on a map. Is it via `geostats`?

```
index=* host=* sourcetype="WinEventLog:System" eventtype=windows_system_update
| timechart sum(eval(eventtype="eventlog_Update_Successful")) as Installed sum(eval(eventtype="eventlog_Update_Failed")) as Failed
```
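geostats can map this once each host is given coordinates. A sketch assuming a lookup named `host_locations` that maps `host` to `lat`/`lon` fields (the lookup and its field names are assumptions):

```
index=* sourcetype="WinEventLog:System" eventtype=windows_system_update
| lookup host_locations host OUTPUT lat lon
| geostats latfield=lat longfield=lon count by eventtype
```

On a map visualization this plots a marker per location, split by eventtype, so successful and failed updates show as separate slices.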

How can we figure out the size of KVStores and Lookups?

In our enterprise, KV stores and lookup files can sometimes get really large, and we're looking for a way to monitor this. I don't see anything in _internal that would show me the size of each KV store. What I'd like to be able to do is run a query each day and then graph (or table) the results by KV store name and size. Anyone out there have an idea on how to accomplish this?
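For the KV store side, a sketch using the KV store introspection endpoint (run on the search head hosting the KV store; the shape of the JSON in the `data` field is an assumption to verify on your version):

```
| rest splunk_server=local /services/server/introspection/kvstore/collectionstats
| mvexpand data
| spath input=data
| eval size_mb=round(size/1024/1024, 2)
| table ns count size_mb
```

Here `ns` is the collection namespace and `size` its size in bytes, so a scheduled daily run of this could feed the trend table or chart.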

Help with inputs.conf to move Mongo and Apache to a new index?

Hi, this is for our inputs.conf. I need to move mongo, apache, and others to new indexes called common and mongo. Does the following look good? Can I do any more optimizations? Thanks for all the support.

```
[monitor:///var/log/mongo/...]
crcSalt = <SOURCE>
disabled = false
index = mongo

[monitor:///var/log/hpp/...]
crcSalt = <SOURCE>
disabled = false
index = common

[monitor:///var/log/apache2/...]
crcSalt = <SOURCE>
disabled = false
index = common

[monitor:///var/log/epp/...]
crcSalt = <SOURCE>
disabled = false
index = common

[monitor:///var/log/prd/deployment/...]
crcSalt = <SOURCE>
disabled = false
index = common

[monitor:///var/log/prd/...]
crcSalt = <SOURCE>
disabled = false
index = elastica
{% if 'gr' in salt['grains.get']('roles') %}blacklist = /var/log/prd/gr/png|\.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|audit\.log$
{% else %}blacklist = \.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|audit\.log$
{% endif %}

[monitor:///var/log/...]
crcSalt = <SOURCE>
disabled = false
index = main
blacklist = \.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|prd|apache2|mongo|hpp|epp|audit\.log$
```

How to view static pcap file on "Splunk for PCAP Files"?

Hi, I am trying to analyze a static PCAP file. I have pointed Splunk to the pcap file using "Data inputs » PCAP File Location". But when I view the Top Talker Overview, with "Select tcpdump file" set to "All" or "C:\Program Files\Splunk\etc\apps\SplunkForPCAP\bin\pcap2csv.bat", the search status is always "Search is waiting for input...". As an alternative, I have managed to convert the pcap file to CSV using Wireshark and upload the data to Splunk, but I would still like to use the app as a reference for what I can see in a pcap file. What else do I need to do to view the pcap file using the app?

Index a specific table (forum) of a webpage - allowing me to kick off reports (based on time frame)

Hello! Here is what I'm trying to do: index a particular section of a web page. This particular section is a forum that is updated constantly, and there is only 1 main column that I'm interested in, which is titled "Subject". How do I accomplish this without running into duplicate entries, which is what I'm getting when I do the following? Currently I run this in PowerShell:

```
$wc.downloadstring("https://website.com/forum123/") >C:\PS_Output\Output.txt
```

Then I index Output.txt and use Splunk to find a named variable, using regex to find the occurrences of a particular string (i.e. 4 consecutive capital letters). But each time Output.txt is overwritten (when I run $wc.downloadstring twice, seconds apart), I get a lot of duplicates. I believe I have 2 problems:

1) I need to clean up Output.txt and keep only the relevant events (no need for all the surrounding garbage HTML source). Perhaps I need to add some regex to the $wc.downloadstring call?

2) The tricky part is how quickly the webpage's table is flushed out with new posts. If I run this every minute, but all 50 posts flush with 50 new posts within 30 seconds, I lose about half the content that I need.

Has anyone out there ever tried grabbing content from an external site (not having admin access to the server, of course) and keeping historical data? Thanks!
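On the Splunk side, duplicates can at least be suppressed at search time. A minimal sketch, assuming the scraped output lands in an index called `forum_scrape` and the string of interest is 4+ consecutive capital letters (both assumptions):

```
index=forum_scrape
| rex "(?<subject>[A-Z]{4,})"
| dedup subject
| table _time subject
```

`dedup subject` keeps the first (most recent) event per extracted value, so re-indexing the same page repeatedly no longer inflates the results.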

XMLWinEventLog: How to add new field extractions and do proper line breaking?

An example of my raw text is attached. How do I do the field extraction and also proper line breaking for event logs like this? I've changed renderXml to true so as to reduce the resource intensity. ![alt text][1]

[1]: /storage/temp/217651-raw-data.png
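As a starting-point sketch for props.conf (the stanza name matches the usual sourcetype for renderXml inputs, and the settings are assumptions to verify against the actual data):

```
[XmlWinEventLog]
# break into one event per <Event> element
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<Event
# extract fields from the XML structure at search time
KV_MODE = xml
```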

How to tune the query to get faster results?

The query below returns the error distribution across 3 layers (Application, DataService, and Queue) for a two-month time range. Currently the query takes more than 5 minutes to return the result.

```
index=performance host="prod*" AND host="*web*" earliest=1500076800 latest=1504915200
| eval layer="Application"
| append [search index=performance host="prod*" MQ _raw="*ERROR*" earliest=1500076800 latest=1504915200 | eval layer="Queue"]
| append [search index=performance host="prod*" exception="*sql*" sqlserver OR db2 earliest=1500076800 latest=1504915200
    | append [search index=de riak sourcetype=kvs_console "\[error\]" host="prod*" earliest=1500076800 latest=1504915200]
    | append [search index=de host="*prod*" source="*memsql*" "ERROR" earliest=1500076800 latest=1504915200]
    | append [search index=de OR index=app sourcetype="solr_log" SEVERE OR ERROR earliest=1500076800 latest=1504915200]
    | eval layer="DataService"]
| stats count by layer
```

The query is added as a search panel to a dashboard. How can I tune this query so that it gives me faster results?
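Each `append [search ...]` rescans the same two-month range and runs under subsearch limits, so one common restructuring is a single pass that classifies events with `eval case(...)`. This is only a sketch; the simplified conditions below are assumptions that would need validating against each source:

```
(index=performance OR index=de OR index=app) earliest=1500076800 latest=1504915200
| eval layer=case(
      index=="performance" AND like(host, "%web%"), "Application",
      index=="performance" AND searchmatch("MQ ERROR"), "Queue",
      true(), "DataService")
| stats count by layer
```

Beyond that, feeding a dashboard panel over two months of data is a typical case for a summary index or report acceleration rather than an ad hoc search.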

Is it possible in Splunk to know who has disabled a savedsearch and when?

Hi! I would like to know: is there a way to find out **when** a saved search was disabled and **who** disabled it? I want to know the details because I have multiple users with admin privileges, and it's difficult to keep track of the changes made to the saved searches. Thank you.
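Disabling through Splunk Web or the REST API goes through a POST against the saved search's endpoint, which is recorded in the internal access logs along with the acting user. A sketch (the URI filter is an assumption, and it catches edits to saved searches generally, not only disables):

```
index=_internal sourcetype=splunkd_access method=POST uri="*/saved/searches/*"
| table _time user uri status clientip
```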

