Channel: Questions in topic: "splunk-enterprise"

Using a macro with dbxquery

I am trying to use a macro with dbxquery.

1. I have one eval-based macro, called macro1, that uses strftime and returns something like the following. That part works fine:

        Job='201611251010' OR job='201611251020'

2. I want to use macro1 inside another macro, macro2, for use with dbxquery:

        "dbxquery connection=mydb query=\"select * from mytable where field1 like 'foo%' AND (".`macro1`.") limit 1\""

   and then run it with: | `macro2`

Whenever I do, I get an error that macro2 is expected to return a string.
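For reference, one way this kind of nesting is often set up is to make macro2 a plain (non-eval) macro whose definition expands macro1 inline, rather than concatenating strings inside an eval macro. A minimal macros.conf sketch under that assumption (the connection, table, and strftime expression are placeholders, not a verified fix):

    # macros.conf -- sketch only; macro1's eval expression is illustrative
    [macro1]
    definition = "Job='" . strftime(relative_time(now(), "-10m@m"), "%Y%m%d%H%M") . "' OR job='" . strftime(now(), "%Y%m%d%H%M") . "'"
    iseval = 1

    # macro2 is a regular macro, so its definition is substituted as-is,
    # with `macro1` expanded before dbxquery runs
    [macro2]
    definition = dbxquery connection=mydb query="select * from mytable where field1 like 'foo%' AND (`macro1`) limit 1"
    iseval = 0

It would then be invoked as | `macro2` in the search bar.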

Host static content in Splunk web dir

Greetings all! I am seeking to host a simple .txt file containing IPs from the Splunk web directory. I have a Splunk search that yields IPs in a .txt file, and I would like to move that file to a place that is accessible via Splunk Web so other devices can access it. The file will contain IPs deemed "malicious" by the search, so I want firewalls to be able to reference it and update ACLs accordingly. I referenced the Answer below and tried placing the .txt file in /opt/splunk/etc/apps/app-here/static/, but I then could not access it by going to http://:8000/en-US/static/app/app-here/badip.txt

https://answers.splunk.com/answers/4290/create-a-custom-web-page-for-the-splunk-web-server-to-host.html

Since that past answer is from 2011, has the location or URL changed? Or is there a better place to do this now?
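For what it's worth, a sketch of the layout commonly used for this: static files are usually placed under the app's appserver/static directory rather than static/, and are then served under /static/app/<app name>. The app and file names below are the ones from the question; the exact URL prefix can vary by version and locale:

    # file location on the Splunk Web host
    $SPLUNK_HOME/etc/apps/app-here/appserver/static/badip.txt

    # roughly the URL it is served at (host name omitted, as in the question)
    http://<splunk-web-host>:8000/en-US/static/app/app-here/badip.txt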

How do indexers run searches without a savedsearches.conf? - Splunk App For CEF 2.0

I am trying to understand the data path for the latest CEF app release ( https://splunkbase.splunk.com/app/1847/ ). In the new app, on creation of a new output, you need to push out a generated TA to your indexers. This TA contains app/indexes/inputs/outputs and props.conf files, and that's it. This is where I am a little lost if I follow the documentation to the letter: http://docs.splunk.com/Documentation/CEFapp/2.0.0/DeployCEFapp/Overview

Ripping the app apart, it looks like this:

1. The searches are kicked off on the search head.
2. The results are transformed into CEF.
3. They are saved as a stash file into the spool directory.
4. The local search head's inputs.conf picks these up via a batch input.

How do the indexers send this data to the output destination? The app instructions do not explicitly say you need to set the outputs on the search head to the indexer (even though this is good standard practice for _internal logs). And if we assume the search head forwards the stash to the indexer, there are no props/transforms that would enable this app to actually recognise that data.

So in short, the old 1.0 app processing pipeline was:

search head -> dist search to indexers -> local CEF event conversion -> local stash parsing -> local output group -> destination CEF TCP output via tcp routing

The new 2.0 app processing pipeline seems to be:

search head -> dist search to indexers -> local CEF event conversion -> local stash parsing -> ...some process here... -> indexer stash ingestion -> output via tcp routing to 3rd-party destination

I don't understand how individual indexers are supposed to find this data to forward to the 3rd party, as the TA doesn't have a savedsearches.conf. Is the indexer-based inputs.conf a furphy, and does the app actually use the index-based props as the basis for forwarding instead, meaning searches are actually STILL run on the search head and not the indexer as the documentation states?

> "The indexers are responsible for performing the CEF mapping searches and forwarding the results" - http://docs.splunk.com/Documentation/CEFapp/2.0.0/DeployCEFapp/Howtheappworks

Does anyone know the CEF 2.0 processing order?

Distributed indexes.conf with server-specific CIFS shares?

What are the recommendations for setting up an indexes.conf that will be distributed (via deployment server) and also supports server-specific shares? Today all buckets are stored on a LUN, and I need to split off the colddb onto a CIFS share. Here is the current config that all the Windows indexers have/receive:

splunk-launch.conf:

    SPLUNK_DB=E:\Splunk

indexes.conf:

    [default]
    frozenTimePeriodInSecs = 126144000
    lastChanceIndex = default

    [volume:primary]
    path = $SPLUNK_DB
    maxVolumeDataSizeMB = 7500000

    [main]
    homePath = volume:primary/defaultdb/db
    coldPath = volume:primary/defaultdb/colddb
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb
    maxTotalDataSizeMB = 2000

After a bit of trial and error, I found that one cannot add additional variables to splunk-launch.conf, but one can use $COMPUTERNAME, which Splunk pulls from the OS environment variables. So this is what I've been trying on one indexer:

indexes.conf:

    [default]
    frozenTimePeriodInSecs = 126144000
    lastChanceIndex = default

    [volume:primary]
    path = $SPLUNK_DB
    maxVolumeDataSizeMB = 7500000

    [volume:cold]
    path = \\cifsdata.FQDN\SplunkColdData\$COMPUTERNAME

    [main]
    homePath = volume:primary/defaultdb/db
    coldPath = volume:cold/defaultdb/colddb
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb
    maxTotalDataSizeMB = 2000

It appears to work. It took a few hours to migrate the few TB of colddb over, but I've run into some new errors/oddities with this migrated indexer since then. Additionally, the splunk diag command doesn't resolve $COMPUTERNAME, so it complains about every path. Is there a better way to accomplish the same end goal?

File monitor is always missing the first line

I've got a file monitor set up for a headerless CSV file which I generate on a periodic basis. I've noticed that the monitor is always ignoring the first line of the file. I am not using CHECK_FOR_HEADER, and from what I can tell this is turned off by default. Anyone seen this before? Here is the config:

inputs.conf:

    [monitor://C:\ePOExport\Threat]
    disabled = 0
    index = unclassified
    sourcetype = epo:threat
    followTail = 0
    recursive = false
    crcSalt = <SOURCE>

props.conf:

    [epo:threat]
    MAX_TIMESTAMP_LOOKAHEAD = 30
    NO_BINARY_CHECK = 1
    SHOULD_LINEMERGE = false
    pulldown_type = 1
    REPORT-epo:threat = epo:threat:report

transforms.conf:

    # ------------------------------------
    # McAfee ePO Threat Events Fields
    # ------------------------------------
    [epo:threat:report]
    DELIMS = ","
    FIELDS = timestamp,signature,threat_type,signature_id,category,severity_id,event_description,detected_timestamp,file_name,detection_method,vendor_action,threat_handled,logon_user,user,dest_nt_domain,dest_dns,dest_nt_host,fqdn,dest_ip,dest_netmask,dest_mac,os,sp,os_version,os_build,timezone,src_dns,src_ip,src_mac,process,url,source_logon_user,is_laptop,product,product_version,engine_version,dat_version,vse_dat_version,vse_engine64_version,vse_engine_version,vse_hotfix,vse_product_version,vse_sp

How to send data from a UDP port to 2 indexers?

Hi Experts, I have a question. I am aware that I can receive data on a UDP port and send it to an indexer. My concern is what happens when we have 2 indexers. What should the setup be:

1) Send the data to both indexers, or
2) Send it to the indexer cluster, which will further distribute it across the indexers?

What I am thinking is that if I choose the 1st option, I will end up with duplicate data, since I have a replication factor of 2.

Thanks, VG
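For reference, a common alternative to sending the same UDP stream to both indexers is to receive it on a single forwarder and let automatic load balancing spread events across the two indexers, so each event is sent only once. A minimal sketch, with host names, port, and sourcetype as placeholders:

    # inputs.conf on the forwarder receiving the UDP feed
    [udp://514]
    sourcetype = syslog
    connection_host = ip

    # outputs.conf on the same forwarder -- autoLB sends each event to one indexer
    [tcpout:my_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997
    autoLB = true

Note that the replication factor only governs the copies the cluster makes internally; it is separate from how many times a forwarder sends an event.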

After upgrading deployment server to 6.5.1, "busyKeepAliveIdleTimeout...Consider raising timeout value" warning displays. Which configuration file do I edit to raise the timeout value?

Which conf file controls the message below? I noticed the following warning after upgrading my deployment server to 6.5.1, and I cannot find in the documentation which configuration file controls this timeout:

    peer=x.x.x.x idle for more than busyKeepAliveIdleTimeout=12 seconds, disconnecting. Consider raising the timeout value.

-Archie

Report Sender: Is this app compatible with HTTPS?

Hi All, I successfully used Report Sender with an HTTP URL, but not with an HTTPS URL after I activated SSL on Splunk Web. Does anyone have an idea? Regards

How to create a lookup table for sourcetypes that are indexed into Splunk?

Hi all, I have a number of different sourcetypes, and I want to create a lookup table containing all of them. I want all the sourcetypes that are indexed (and that will be indexed) in Splunk in a single lookup table. Can anyone please let me know how I can do this? Thanks,
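One common approach (a sketch, not the only option) is to pull the sourcetype list with the metadata command and write it to a lookup; the lookup file name is a placeholder:

    | metadata type=sourcetypes index=*
    | table sourcetype
    | outputlookup all_sourcetypes.csv

Scheduling a search like this (optionally with `outputlookup append=true` plus a dedup when reading it back) is one way to keep picking up sourcetypes that only start arriving later.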

Splunk DB Connect: After deploying the app to the search head cluster, why am I unable to access the app on a search head?

I have installed the Splunk DB Connect app (v2.3.1, then upgraded to 2.4.0) on our Search Head Cluster (SHC) deployer, and it works wonderfully. I set up identities, connections, inputs, and roles/permissions for them. But when I deploy the pre-configured app to the SHC, the roles/permissions do not seem to work. I get "Loading" in the right-hand panel when I access the Splunk DB Connect app on a search head, and when I go to view my inputs, I get:

    The specified database input foo_input has a missing connection foo_connection. Please choose another option from the menu on the left to continue

I can see that all of the configuration has been deployed to the SHC. The roles and permissions all look OK. The search head says the RPC server is running. What am I missing?

How to develop a timechart that will show multiple events and the time the events occurred?

Hi, I am trying to plot a multi-series timechart showing multiple events and the times the events occurred. For example: for date 01 Nov, Event 1 occurred at 10 AM, Event 2 occurred at 11 AM, etc. I have 5 events for a given date. Please guide me on how to plot all the details in a line/column graph. My data looks like this (first row is the header):

| Value_Date | REGION | AREA | SLA TIME | EVENT2 TIME | EVENT3 TIME | EVENT4 TIME | EVENT5 TIME | EVENT6 TIME | EVENT7 TIME |
|---|---|---|---|---|---|---|---|---|---|
| 11/2/2016 | EMEA | WMSB | 11/2/16 8:30 AM | 11/2/16 11:23 AM | 11/2/16 11:23 AM | 11/2/16 11:48 AM | 11/2/16 11:47 AM | 11/2/16 11:41 AM | 11/2/16 12:06 PM |
| 11/2/2016 | AMER | Credit | 11/2/16 8:00 AM | 11/2/16 6:15 AM | 11/2/16 6:18 AM | 11/2/16 7:16 AM | 11/2/16 6:40 AM | 11/2/16 6:25 AM | 11/2/16 7:06 AM |
| 11/2/2016 | EMEA | Credit | 11/2/16 4:00 AM | 11/1/16 10:13 PM | 11/1/16 10:16 PM | 11/1/16 10:53 PM | 11/1/16 10:53 PM | 11/1/16 10:23 PM | 11/1/16 10:27 PM |
| 11/2/2016 | Global | FXMM | 11/2/16 4:00 AM | 11/2/16 3:02 AM | 11/2/16 3:20 AM | 11/2/16 4:15 AM | 11/2/16 3:48 AM | 11/2/16 3:43 AM | 11/2/16 3:51 AM |
| 11/2/2016 | Global | FXMM | 11/2/16 4:00 AM | 11/2/16 12:29 PM | 11/2/16 12:31 PM | 11/2/16 12:48 PM | 11/2/16 12:51 PM | 11/2/16 2:18 AM | 11/2/16 1:11 PM |
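A rough SPL sketch of one way to chart this, assuming the column names above and a %m/%d/%y %I:%M %p time format; the modulo arithmetic turns each timestamp into an hour-of-day value (relative to UTC) so the series can share one y-axis, and the eval would be repeated for the remaining EVENTn TIME columns:

    | eval _time = strptime(Value_Date, "%m/%d/%Y")
    | eval sla_hour    = (strptime('SLA TIME',    "%m/%d/%y %I:%M %p") % 86400) / 3600
    | eval event2_hour = (strptime('EVENT2 TIME', "%m/%d/%y %I:%M %p") % 86400) / 3600
    | timechart span=1d avg(sla_hour) AS "SLA", avg(event2_hour) AS "Event 2"

Field names containing spaces have to be wrapped in single quotes inside eval, as shown.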

How to resolve dashboards from my development environment that are not being updated in my production envrironment's app?

Hi, I have deployed a new app in my production environment and I am using it. Now, in the development environment, I have changed some dashboards and I need to deploy them to the production environment. I have executed the following steps:

- Generate the .spl file in the development environment with `./splunk.exe package app `
- Run the command below to update the app in the production environment: `./splunk install app -update 1 -auth admin:password`

The new dashboards are added, but the dashboards already present in the app are not updated. Please let me know the best way to do this. Thanks, Aniello

Job inspector: How to identify whether search-time extraction is kicking in

We may be having performance issues, as newly saved search-time extractions are not working even after being successfully tested via the Field Extractor.

Sample event:

    "faQUF","2.3.7","False","2","4","9","1","N-281","PF","19800","India Standard Time","3.8.0.5","2016-11-03T07:19:17.000Z","2016-11-03T10:49:35.000Z","3.8.0.8","/x/api/v2/hosts/fUF","","None","Windows 7 Enterprise","Service Pack 1","64-bit","7x-5x-fx-0x-xx-xx","dcfb"

The following props.conf settings were set on the SH:

    [fireye:hx:asset_inventory]
    DATETIME_CONFIG =
    INDEXED_EXTRACTIONS = csv
    KV_MODE = none
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    category = Structured
    description = Comma-separated value format. Set header and other settings in "Delimited Settings"
    disabled = false
    pulldown_type = true
    EXTRACT-agentId,agentVersion,excluded_from_containment,stats_acqs,stats_alerting_conditions,stats_alerts,stats_exploit_alerts,hostname,domain,gmt_offset_seconds,timezone,src_ip,last_audit_timestamp,last_poll_timestamp,last_poll_ip,url,last_alert_id,last_alert_timstamp,os_product_name,os_patch_level,os_bitness,src_mac,md5 = \"(?P<agentId>[^\"]*)\",\"(?P<agentVersion>[^\"]*)\",\"(?P<excluded_from_containment>[^\"]*)\",\"(?P<stats_acqs>[^\"]*)\",\"(?P<stats_alerting_conditions>[^\"]*)\",\"(?P<stats_alerts>[^\"]*)\",\"(?P<stats_exploit_alerts>[^\"]*)\",\"(?P<hostname>[^\"]*)\",\"(?P<domain>[^\"]*)\",\"(?P<gmt_offset_seconds>[^\"]*)\",\"(?P<timezone>[^\"]*)\",\"(?P<src_ip>[^\"]*)\",\"(?P<last_audit_timestamp>[^\"]*)\",\"(?P<last_poll_timestamp>[^\"]*)\",\"(?P<last_poll_ip>[^\"]*)\",\"(?P<url>[^\"]*)\",\"(?P<last_alert_id>[^\"]*)\",\"(?P<last_alert_timstamp>[^\"]*)\",\"(?P<os_product_name>[^\"]*)\",\"(?P<os_patch_level>[^\"]*)\",\"(?P<os_bitness>[^\"]*)\",\"(?P<src_mac>[^\"]*)\",\"(?P<md5>[^\"]*)\"
    EXTRACT-agentId = ^"(?P<agentId>[^"]*)

NOTES:

- The search was run in Verbose mode.
- The extraction was tested first as belonging to its owner, and then shared globally.
- Both the single EXTRACT-agentId and the composite-fields extraction were tested separately; I kept the single one here to show that even such a simple extraction is not working.

Using the job inspector, I see a very quick key-value extraction (the 6 invocations may be the 6 default interesting fields Splunk extracts):

    Duration (seconds)   Component           Invocations
    0.01                 command.search.kv   6

I can only see the expected fields when I use the very same regex as a | rex command:

    sourcetype=fireye:hx:asset_inventory
    | rex field=_raw "\"(?P<agentId>[^\"]*)\",\"(?P<agentVersion>[^\"]*)\",\"(?P<excluded_from_containment>[^\"]*)\",\"(?P<stats_acqs>[^\"]*)\",\"(?P<stats_alerting_conditions>[^\"]*)\",\"(?P<stats_alerts>[^\"]*)\",\"(?P<stats_exploit_alerts>[^\"]*)\",\"(?P<hostname>[^\"]*)\",\"(?P<domain>[^\"]*)\",\"(?P<gmt_offset_seconds>[^\"]*)\",\"(?P<timezone>[^\"]*)\",\"(?P<src_ip>[^\"]*)\",\"(?P<last_audit_timestamp>[^\"]*)\",\"(?P<last_poll_timestamp>[^\"]*)\",\"(?P<last_poll_ip>[^\"]*)\",\"(?P<url>[^\"]*)\",\"(?P<last_alert_id>[^\"]*)\",\"(?P<last_alert_timstamp>[^\"]*)\",\"(?P<os_product_name>[^\"]*)\",\"(?P<os_patch_level>[^\"]*)\",\"(?P<os_bitness>[^\"]*)\",\"(?P<src_mac>[^\"]*)\",\"(?P<md5>[^\"]*)\""

and, as expected, the rex command kicks in:

    Duration (seconds)   Component     Invocations
    1.40                 command.rex   5,501

Can anyone point me to why this is broken?

How to install the latest Splunk Universal Forwarder for Windows XP?

Hi, I have been trying to install a Splunk Universal Forwarder using "splunkforwarder-6.1.11-277527-x86-release.msi" on Windows XP. The install fails at the end of the install process and rolls back the installation. Our clients have both Windows XP and Windows 7 machines, so I would prefer not to maintain 2 different forwarder versions. Can Splunk fix the issue in their MSI package so it can be installed on Windows XP (Embedded)? FYI: the only version I found that works on Windows XP is splunkforwarder-5.0.5-179365-x86-release.msi. I tried to upgrade to the latest version after this install, but it fails. I would appreciate any help. Thanks.

How to extract base_url and guid values into two separate fields from our current sample URL field?

Hi, I have log files which collect a URL as: cs_uri_stem="/dsa/api/playercommands/a6ada68b-7a72-4f38-b752-d99f7efd4cb8", with the GUID (`a6ada68b-7a72-4f38-b752-d99f7efd4cb8`) different for every event. I want to list all the distinct base URLs, like /dsa/api/playercommands/. I cannot simply split on `/` because there can be more or fewer than 4 path segments. I have a regex pattern to detect the GUID, but that only detects it; I need to remove it. I would also like to do the opposite, which is to keep only the GUID so I can group per device. So ideally, cs_uri_stem would become 2 fields: `base_url` and `guid`.
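A minimal sketch of the kind of rex that would split the field, assuming the GUID is always the last path segment and follows the standard 8-4-4-4-12 hex form:

    ... | rex field=cs_uri_stem "^(?<base_url>.*/)(?<guid>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})$"

Events whose cs_uri_stem does not end in a GUID are simply left untouched by the rex.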

Machine Learning Toolkit: Has anyone used this app with data exfiltration?

Hello, I'm not sure if anyone has used the Machine Learning Toolkit for data exfiltration (data exfil) detection? I would like to identify outliers in my email traffic. I have the message size within my data, so I was hoping to use it to establish a baseline and alert on the outliers. Any thoughts on doing this with Splunk and/or the Machine Learning Toolkit?
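For context, a plain-SPL sketch of the baseline-and-outlier idea being described (the MLTK outlier-detection assistants wrap something similar); the index, sourcetype, and the field names message_size and sender are placeholders:

    index=email sourcetype=smtp_logs
    | bin _time span=1h
    | stats sum(message_size) AS bytes_sent BY _time, sender
    | eventstats avg(bytes_sent) AS avg_bytes stdev(bytes_sent) AS stdev_bytes BY sender
    | where bytes_sent > avg_bytes + 3 * stdev_bytes

The threshold of three standard deviations is arbitrary here; the toolkit's assistants let you tune that kind of cutoff interactively.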

Alert Manager: How to retrieve an incident_id and a field from within that incident id from a search or api query

I am looking to perform a REST lookup of an Alert Manager incident ID and retrieve the fields that are included in the incident from the original alert. I can see these in the "Details" section of the alert when expanded, showing as "Key" and "Value". I assume these are in the KV store somewhere, but I cannot seem to find them. I can see the incident_id and the actions performed against it in the "alerts" index, but I do not see any of the fields that are put into the incident from the initial search/alert. The fields I want are available in the initial index, and the incident actions and notes are in the "alerts" index. Is there any way to search and correlate the two? Thanks
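As a general pattern (I have not verified Alert Manager's actual collection names), app-owned KV store collections can usually be read with the rest command against the storage/collections/data endpoint and then correlated on a shared key such as incident_id. A sketch, with the app and collection names as assumptions:

    | rest /servicesNS/nobody/alert_manager/storage/collections/data/incidents
    | table incident_id *

If the app also defines a KV-store-backed lookup, `| inputlookup` against that lookup definition is an alternative way to read the same records.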

How do I do Field (column) selection for post process search in a dashboard panel?

In a dashboard, I'm trying to drive several charts off a single query and use a post-process search to select the fields that I want. The timechart has a "by" clause, and I wanted to select fields (columns in this case) for each chart based on the prefix, which is the by field followed by a "-". Assuming the by field is an "a" or a "b", I end up with fields like _time, a-avgcpu, a-maxcpu, b-avgcpu, b-maxcpu. I wanted to chart them separately, so I tried `| fields _time,a-*` and `| fields _time,b-*`. For all the charts, they just come up empty. I tried the equivalent search outside the dashboard app, and it only displays the _time field. Does using the by clause change something fundamental that I've missed to do with result names? Is there another way to do this? (I see from another question that it appears to be possible if there is no by clause, although they are also not using a wildcard field selection in that question either.)

How to configure Splunk to forward logs from internal indexes to the indexer cluster, and other logs to a syslog server?

On my intermediary heavy forwarder, I am trying to route the logs from the internal indexes `_internal`, `_audit`, `_introspection`, etc. (all the indexes that start with `_`) to the indexer cluster, and all the other remaining logs to a syslog server. I have added the config below, but it doesn't seem to work. The problem is that the _audit logs are sent both to the syslog server and to the indexer cluster. The other logs are working fine.

props.conf:

    [default]
    TRANSFORMS-routing = syslogRouting,indexerRouting

transforms.conf:

    [syslogRouting]
    REGEX = (.)
    DEST_KEY = _TCP_ROUTING
    FORMAT = syslogServer

    [indexerRouting]
    SOURCE_KEY = _MetaData:Index
    REGEX = _.*
    DEST_KEY = _TCP_ROUTING
    FORMAT = IndexerGroup

outputs.conf:

    [tcpout:syslogServer]
    server = syslogHost:17699
    sendCookedData = false

    [tcpout:IndexerGroup]
    server = 1.1.1.1:9997,2.2.2.2:9997,3.3.3.3:9997,4.4.4.4:9997,5.5.5.5:9997
    autoLB = true
    autoLBFrequency = 5
    forceTimebasedAutoLB = true

Can someone tell me what's wrong? Regards, C

Why is the Map View and Table View dashboard template not working in Splunk 6.5?

We recently updated our Splunk to version 6.5 (from version 6.4.2), and our Splunk MapView and TableView (Django) dashboards no longer work. To investigate the issue, we tried a simple example from http://dev.splunk.com/view/webframework-codeexamples/SP-CAAAEVP and found that this example behaves exactly the same way as our app (also not working). When loading the page, we see a 404 error (page not found) on the following URLs:

    http://localhost:8000/en-us/splunkd/__raw/servicesNS/admin/mysplunkapp/data/ui/views/mypage?output_mode=json&_=1480384381498
    http://localhost:8000/en-us/static/@undefined/build/splunkjs/6.6.js
    http://localhost:8000/en-us/static/@undefined/build/splunkjs/4.4.js

(Notice the "@undefined" in the last 2 links.) Would someone please help us fix this issue? We have been stuck on it for a few days now. Thank you very much in advance.