Channel: Questions in topic: "splunk-enterprise"

Pcap from Palo Alto add-on

Hi, as requested by Splunk partner support, I'm bringing this question to the forum. A customer wants to analyse pcap files from the threat function on Palo Alto Panorama. Is it possible to onboard pcaps with the current Palo Alto add-on (https://splunkbase.splunk.com/app/2757/), and is there documentation on how to do so?

Automatic lookup, matching range field?

Hi, I would like to enrich netflow data (i.e. dst ip, dst port) with a "service name" using an automatic lookup. My lookup looks like the following example:

```
IP          PORT_RANGE  SERVICENAME
x.x.x.x/32  1024,1048   ServiceA
y.y.y.y/30  80,80       ServiceB
z.z.z.z/31  8000,8999   ServiceC
```

Or the lookup could have two PORT fields instead:

```
IP          PORT_MIN  PORT_MAX  SERVICENAME
x.x.x.x/32  1024      1048      ServiceA
y.y.y.y/30  80        80        ServiceB
z.z.z.z/31  8000      8999      ServiceC
```

Matching the IP is easy with match_type CIDR, but how do I match the port range? I don't mind which of the two layouts above a solution targets, and a completely different third approach would also be welcome. Looking forward to some bright answers. Thanks, //Torben
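One approach, sketched below: keep the CIDR match on the IP but do the port-range check explicitly after the lookup rather than inside it, using the PORT_MIN/PORT_MAX layout. This assumes a lookup named `service_lookup` defined in transforms.conf with `match_type = CIDR(IP)`, and netflow fields `dest_ip`/`dest_port` (all of these names are hypothetical):

```
| lookup service_lookup IP AS dest_ip OUTPUT PORT_MIN PORT_MAX SERVICENAME
| eval SERVICENAME = if(dest_port >= PORT_MIN AND dest_port <= PORT_MAX, SERVICENAME, null())
| fields - PORT_MIN PORT_MAX
```

An automatic lookup alone cannot express the range condition, so the filtering eval has to live in the search (or in a macro wrapped around it). If one IP can map to several port ranges, the OUTPUT fields come back multivalue and need extra handling before the comparison.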

I want to create an alert on avg disk read/write latency

Below is an example of the events and the values available. I want to calculate the average value with a 30-second span, and if the value stays above 0.30 continuously for more than 5 minutes, the alert should trigger. The data arrives every 10 seconds, and every event carries a _time value. Sample events:

```
value                 disk_type     counter             drive
0.008749994761904096  PhysicalDisk  Avg. Disk sec/Read  G
0.008377771786948093  PhysicalDisk  Avg. Disk sec/Read  G:
```
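A sketch of one way to express this, using the field names from the sample above; the index name `perfmon` is a placeholder. Averaging into 30-second buckets gives 10 buckets per 5 minutes, so the latency has been continuously above 0.30 for 5 minutes exactly when the minimum of the last 10 buckets exceeds 0.30:

```
index=perfmon counter="Avg. Disk sec/Read" disk_type=PhysicalDisk
| bin _time span=30s
| stats avg(value) AS avg_latency BY _time drive
| streamstats global=false window=10 min(avg_latency) AS min_latency_5min BY drive
| where min_latency_5min > 0.30
```

Scheduled over, say, the last 10 minutes every 5 minutes with a trigger condition of "number of results greater than 0", this should fire only when a drive stays above the threshold for a full 5-minute stretch.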

How to see the search history by user, but only the searches typed by hand

Sorry for the inconvenience, but I'm looking for a query that shows only the searches typed by users; when I check the audit index, it also shows me the scheduled queries. Your attention is appreciated. Regards.
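A starting point, sketched below: ad hoc searches can usually be separated from scheduled ones in the audit data because scheduled runs carry a search_id beginning with "scheduler". That naming is an assumption about your environment, so verify it against a few known searches first:

```
index=_audit action=search info=granted search=* NOT "search_id='scheduler"
| table _time user search
```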

Discrepancy between datamodel, summaries, & raw search

We are running Splunk Enterprise 6.5.4, ES 4.7.1, and Splunk_SA_CIM 4.8.0. I'm getting a discrepancy between three searches over the exact same 15-minute period (any given 15-minute period):

```
| tstats count FROM datamodel=Web WHERE Web.action=blocked BY Web.category                  (test case: 49 results)
| tstats `summariesonly` count FROM datamodel=Web WHERE Web.action=blocked BY Web.category  (test case: 44 results)
index=XXXX_proxy action=blocked | stats count by category                                   (test case: 49 results)
```

The Web datamodel is accelerated, and the earliest time set in CIM setup is 2 months. The disparity is not consistent: sometimes the result count is equal for all three searches, sometimes the two datamodel searches are equal and the raw search differs, etc. This makes us question the validity of our data models, as it seems all three result sets should be the same. How should I troubleshoot this?
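One way to narrow this down, sketched below: a tstats without `summariesonly` falls back to raw events for time ranges the acceleration summary has not built yet, so inconsistent gaps often point at summary lag rather than bad data. Comparing per-minute counts with and without summariesonly shows exactly which buckets are missing from the summary (a diagnostic, not a fix):

```
| tstats summariesonly=true count FROM datamodel=Web WHERE Web.action=blocked BY _time span=1m
| eval set="summary_only"
| append
    [| tstats summariesonly=false count FROM datamodel=Web WHERE Web.action=blocked BY _time span=1m
     | eval set="summary_plus_raw"]
| chart sum(count) OVER _time BY set
```

If the summary_only line consistently lags at the most recent minutes, the discrepancy is acceleration latency; checking the Web datamodel's acceleration status under Settings > Data models would confirm it.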

Why are users from an LDAP Authenticated group not showing up?

We have created a group through our Active Directory team that contains ~6000 users. We have mapped this group through LDAP authentication on a single Splunk instance, as we would normally do with any other AD group. However, users that belong to this newly created group are unable to log in. If I check the settings for this user group, the "LDAP Users" field is entirely blank. This occurs only for this particular group; all others have their LDAP Users field populated appropriately. We have checked in AD and all the users that should be in the group are correctly listed, so why are they not showing up in Splunk?
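When Splunk cannot enumerate a group's members, splunkd.log usually records LDAP warnings or errors during login attempts or strategy reloads, so that is a good first place to look (a sketch; exact component names vary by version):

```
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) LDAP
| table _time component log_level _raw
```

Separately, with ~6000 members it may be worth checking whether Active Directory's ranged retrieval is in play: for groups beyond roughly 1500 members, AD returns the member attribute in ranges (member;range=0-1499), and a client that does not follow the ranges sees an effectively empty member list.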

Is there an alternative to using > in a search string?

**My basic question is as follows**: Is there a text alternative for specifying greater or less than, rather than using the symbol? This is why I ask: I have a search that queries failed login attempts greater than 10 across all servers in the index. It works a treat! I've added that search to a Splunk dashboard, and it populates beautifully and serves us well. However, unlike every other section in the dashboard, clicking an entry returns a permission error: *You don't have permission to access /en-US/app/search/search on this server.* If I edit the search string to remove `search count>10`, the links are clickable and go straight to the search app. I tested on a second dashboard search with the same results. I don't know if this is an issue with Splunk or, more likely, our SSO blocking **>** as the URL is passed to the search application. Rather than explore allowing > in the URLs, I'd prefer to just specify an alternate term, if such a term exists. PS - this is my first post. I did look for an answer to this, and apologize if it exists and I just didn't find it!
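There is no plain-text keyword for greater-than in SPL, but the comparison can be rewritten so that no angle bracket appears anywhere in the query. A symbol-free sketch using eval's max() inside where, assuming count is always an integer: max(count, 10.5) equals count only when count is at least 10.5, i.e. greater than 10:

```
... | where max(count, 10.5) = count
```

It is admittedly a workaround rather than an idiom; if the SSO layer is mangling > in the drilldown URL, URL-encoding or an SSO-side exception would be the cleaner long-term fix.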

View full source of the log file

I need to view/export the source of a log file. The requirement is to export all lines of the log file within a date/time range. Can you help?
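A sketch of one approach: constrain a search to the source and time range, keep the events in order, and export the raw text. Index, source path, and times below are placeholders:

```
index=my_index source="/var/log/myapp.log" earliest="08/01/2018:00:00:00" latest="08/02/2018:00:00:00"
| sort 0 _time
| table _raw
```

From the UI, the Export button on the results saves this to a file; from the CLI, something like `splunk search '<the search above>' -output rawdata -maxout 0 > export.log` should produce a plain-text copy (flags as I recall them; verify against your version's CLI help).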

Using chained eval or separate eval statements, any performance gains?

Is there any performance benefit to using one eval with several chained statements versus separate eval statements (which may be split to improve SPL readability in extremely large SPL)?

```
| eval A = "OM" | eval B = " NOM" | eval C = " NOM" | eval D = " NOM" | eval E = " NOM"
```

or

```
| eval A = "OM", B = " NOM", C = " NOM", D = " NOM", E = " NOM"
```
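Rather than guessing, both variants can be measured on identical synthetic data and compared via the Job Inspector's command timings. A sketch (the row count is arbitrary):

```
| makeresults count=100000
| eval A = "OM" | eval B = " NOM" | eval C = " NOM" | eval D = " NOM" | eval E = " NOM"
```

Run it once as above and once with the single comma-separated eval, then open Job > Inspect Job on each and compare the command.eval durations. Typically the difference is small relative to other pipeline costs, but the inspector gives a definitive answer for your workload.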

Search heads failing because of huge knowledge bundles

Currently half of my search heads are shut down (auto shutdown due to issues within Splunk) and the remaining ones are not able to query the indexers. The problem is caused by a large knowledge bundle: when I checked the .bundle files on the SHs, each is a huge (~340 MB) file containing what looks like a large amount of Python code. I have maxBundleSize set to less than 2048 (2048 being the default). I have a blacklist in distsearch.conf as below:

```
[replicationBlacklist]
= (.../bin/*)
= (.../install/*)
= (.../appserver/*)
= (.../default/data/ui/*)
= (.../default.old.*)
```

My question is: is there any way to check which files/apps are included in the bundle that is causing issues, and whether those items are required or can be excluded?
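Knowledge bundles are tar archives, so their contents can be listed directly to find the offenders. A sketch (run on a search head; the bundle file name is a placeholder):

```
# List the largest members of the bundle (size is the 3rd column of tar -tv output)
cd $SPLUNK_HOME/var/run
tar -tvf <searchhead>-<epoch>.bundle | sort -k3 -rn | head -25
```

The paths in the listing show which app each large file belongs to; oversized lookup CSVs and bundled binaries are common culprits and are usually safe candidates for the replicationBlacklist.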

Export indexed data from a Splunk index to Kafka in real time

Hi, I have a use case where I need to export data indexed in Splunk to Kafka in real time. So far, based on the documentation, I can see that it is possible to forward the raw events to a port:

1. Configure a forwarder to send a cloned stream of incoming data to a TCP port.
2. Listen on the TCP port with a program and load the data into Kafka.

Is there any other provision that would enable streaming Splunk-indexed events in real time to an external component like Kafka or a port? Kindly comment.
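For step 1 of the list above, a minimal outputs.conf sketch for cloning the stream to a second, raw TCP destination alongside the normal indexers; all group names, hosts, and ports are hypothetical. sendCookedData = false makes the forwarder send plain raw data, so a simple TCP listener feeding a Kafka producer can consume it:

```
# outputs.conf on the forwarder (hypothetical names and hosts)
[tcpout]
defaultGroup = primary_indexers, kafka_bridge

[tcpout:primary_indexers]
server = indexer1.example.com:9997

[tcpout:kafka_bridge]
server = kafka-bridge.example.com:5140
sendCookedData = false
```

Note that this clones data as it flows through the forwarder; for data that is already indexed, the usual alternative is a scheduled search whose results are pulled via the REST API and published to Kafka by an external process.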

How can I define a custom sourcetype for logs that I write to _internal?

My custom script writes its log to /opt/splunk/var/log/splunk/script.log. I want the log to be indexed in _internal, but I have to define a custom sourcetype so the log line-breaks properly. Please let me know how to define a sourcetype for this _internal data.
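Files under $SPLUNK_HOME/var/log/splunk are already monitored into _internal by a default input, so one low-risk approach is to leave the input alone and assign a sourcetype with line-breaking rules through a source:: stanza in props.conf. A sketch; the sourcetype name and settings are placeholders to adapt to the script's actual output:

```
# props.conf on the indexing instance
[source::/opt/splunk/var/log/splunk/script.log]
sourcetype = my_script_log

[my_script_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Since input-time settings are involved, a splunkd restart is needed after adding these, and they only affect newly indexed data.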

Universal forwarder - multiple inputs.conf stanzas on the same folder

Hi, I'm attempting to configure my universal forwarder to read log files from a single directory with multiple subdirectories. We use logrotate, so the files will be renamed with (1) up to (4) before starting again. I'm also trying to route them into the right index based on the file name. For example, the top-level directory is /srv/logs, which has multiple subdirectories, i.e.:

- application
- fileservice
- proxyserver

Each of these subdirectories contains multiple files from each environment (dev, int, prod, etc.). Here is an example file name: application-prod.prod.log. I'm using the following inputs.conf, which seems to work(ish). I've changed the monitor paths to ensure they are treated as separate, and I'm trying to blacklist anything I don't want to appear in each index:

```
[monitor:///srv/./logs]
blacklist = ppd.*\.log$|prod.*\.log$
sourcetype = service_log
index = nonprod
crcSalt = <SOURCE>

[monitor:///srv/logs]
blacklist = devint.*\.log$|int.*\.log$|ft.*\.log$|infradev.*\.log$|nonprod.*\.log$
sourcetype = service_log
index = prod
crcSalt = <SOURCE>
```

So in prod I only want files that contain .prod and ppd; in nonprod I want devint, int, ft, infradev and nonprod. So I'm wondering:

- Are there better or more performant ways to configure these inputs?
- Is there any way I can check that the data in my indexes is correct (no prod data in nonprod, etc.)? (See the verification sketch below.)
- If there are subdirectories, should I be using recursive = true?
- The documentation says not to use crcSalt = <SOURCE> with log rotation; however, I see a number of initCrc errors. Should I be setting initCrcLength = 2000 or similar?

Sorry this is a long one; thanks for any help.
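For the verification bullet above, a quick sketch: count events per index and source over a recent window and eyeball for misrouted files (index names from the post):

```
(index=prod OR index=nonprod)
| stats count BY index source
| sort index source
```

Any prod-named source appearing under index=nonprod (or vice versa) flags a blacklist gap.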

HEC Curl Command Not Working?

Hello all! I have a weird problem occurring that I would like some feedback on. I'm currently running a Splunk Enterprise instance on my local machine, and sending data via the HTTP Event Collector with curl is giving me some unexpected behavior. If I'm doing something wrong, please let me know!

**Command #1:**

```
curl -k "https://localhost:8088/services/collector/raw?source=fakelog" -H "Authorization: Splunk fb920744-d924-413b-9c60-4593f152c3d5" -d '127.0.0.1 - admin [28/Sep/2016:09:05:26.875 -0700] "GET /servicesNS/admin/launcher/data/ui/views?count=-1 HTTP/1.0" 200 126721 - - - 6ms'
```

It gives me {"text":"Success","code":0}; however, when I search in the main index, the log does not show up.

**Command #2:**

```
curl -k "https://localhost:8088/services/collector/raw?source=fakelog" -H "Authorization: Splunk fb920744-d924-413b-9c60-4593f152c3d5" -d '1, 2, 3... Hello, World!'
```

It gives me {"text":"Success","code":0} and the event shows up in the main index.

**Command #3:**

```
curl -k "https://localhost:8088/services/collector/raw?source=fakelog" -H "Authorization: Splunk fb920744-d924-413b-9c60-4593f152c3d5" -d '127.0.0.1 - admin "GET /servicesNS/admin/launcher/data/ui/views?count=-1 HTTP/1.0" 200 126721 - - - 6ms'
```

The only difference from Command #1 is that I removed the timestamp. When I check Splunk, this log did come through on the main index. Does anyone know what could be happening here? Thanks!
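A likely explanation worth checking: the /raw endpoint runs timestamp extraction, so Command #1's event is probably indexed at its embedded timestamp (28/Sep/2016), which falls outside a default "last 24 hours" search window; Command #3 has no timestamp and gets the current time. An all-time search on the source would confirm this (earliest=1 means epoch second 1, effectively all time):

```
index=main source=fakelog earliest=1
| table _time _raw
```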

Inputs not working

Hi, I have the following input set up and it won't work. I cannot figure out what is wrong with it. Any ideas? Thanks, JG

```
[monitor:///C:\Program Files (x86)\Syslogd\Logs\SyslogCatchAll-192.15.0.2-2018-08-08.txt]
whitelist = *192.15.0.2*.txt|
host_regex=-(.*)-\d\d\d\d-\d\d-\d\d.txt
sourcetype = meraki
index = Meraki
# ignoreOlderThan = 30d
disabled = false
```
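A few things stand out in the stanza above, so here is a corrected sketch to compare against (assuming the goal is to watch the whole Logs directory for that device's files): whitelist takes a regex, not a glob, and the trailing pipe makes it match everything; and index names must be lowercase, so index = Meraki cannot refer to a real index named meraki:

```
# inputs.conf sketch; assumes an index named "meraki" exists
[monitor://C:\Program Files (x86)\Syslogd\Logs]
whitelist = SyslogCatchAll-192\.15\.0\.2-\d{4}-\d{2}-\d{2}\.txt$
host_regex = -(\d+\.\d+\.\d+\.\d+)-\d{4}-\d{2}-\d{2}\.txt
sourcetype = meraki
index = meraki
disabled = false
```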

Error sending logs - ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.

Hello everyone. Currently the following error occurs with a group of universal forwarders that should send their logs to a Splunk indexer over TCP port 9997.

**Stage**

- Splunk Universal Forwarder versions: 6.3.12 and 6.2.14
- Forwarder OS: Windows Server 2008 x64 and Windows Server 2003 x64
- Splunk Enterprise version: 7.1.0 (role: heavy forwarder)
- Receiver OS: Debian 9.5

**Troubleshooting 1**

Since I do not see logs indexed from this source, troubleshooting started from this forwarder-side message:

```
WARN TcpOutputFd - Connect to 10.3.7.127:9997 failed. No connection could be made because the target machine actively refused it.
```

Connectivity tests were carried out: a telnet from the universal forwarder to the indexer on port 9997 opens, a tcpdump on the indexer shows the packets arriving, and the socket state confirms established connections:

```
tcp ESTAB 0 0 10.3.7.127:8089 10.3.5.145:52522 users:(("splunkd",pid=32523,fd=127))
tcp ESTAB 0 0 10.3.7.127:9997 10.3.5.145:65480 users:(("splunkd",pid=32523,fd=12
```

As connections between the origin and the destination are established, a pure connectivity problem is ruled out.

**Troubleshooting 2**

We then reviewed the splunkd.log files of the universal forwarder:

```
08-16-2018 11:23:43.268 -0500 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
08-16-2018 11:23:43.273 -0500 INFO TcpOutputProc - Connection to 10.3.7.127:9997 closed. Read error. An existing connection was forcibly closed by the remote host.
08-16-2018 11:23:45.102 -0500 WARN TcpOutputFd - Connect to 10.3.7.127:9997 failed. No connection could be made because the target machine actively refused it.
08-16-2018 11:23:45.105 -0500 ERROR TcpOutputFd - Connection to host=10.3.7.127:9997 failed
08-16-2018 11:23:46.105 -0500 WARN TcpOutputFd - Connect to 10.3.7.127:9997 failed. No connection could be made because the target machine actively refused it.
08-16-2018 11:23:46.105 -0500 ERROR TcpOutputFd - Connection to host=10.3.7.127:9997 failed
08-16-2018 11:23:46.105 -0500 WARN TcpOutputProc - Applying quarantine to ip=10.3.7.127 port=9997 _numberOfFailures=2
08-16-2018 11:24:25.726 -0500 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
08-16-2018 11:24:25.726 -0500 INFO HttpPubSubConnection - Could not obtain connection, will retry after=50.990 seconds.
```

The main error is:

```
ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.
```

and after it we see that the connection was rejected:

```
08-16-2018 11:23:45.105 -0500 ERROR TcpOutputFd - Connection to host=10.3.7.127:9997 failed
08-16-2018 11:23:46.105 -0500 WARN TcpOutputFd - Connect to 10.3.7.127:9997 failed. No connection could be made because the target machine actively refused it.
```

The splunkd service has been restarted on both the universal forwarder and the heavy forwarder, and the same error persists. I suspect the issue is the universal forwarder version being much lower than the heavy forwarder's, but I cannot find technical documentation on compatibility between them. What solutions could apply, given that Windows Server 2008 and 2003 are only supported by universal forwarders up to version 6.3.2? Thanks, I remain attentive to any contribution.
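The receiving side usually logs why it drops or refuses forwarder connections, so a useful next step is to check the heavy forwarder's own _internal data for TcpInputProc messages around the failure times (a sketch; the forwarder IP is taken from the output above):

```
index=_internal sourcetype=splunkd component=TcpInputProc "10.3.5.145"
| table _time log_level _raw
```

Messages there about SSL mismatches, full parsing queues, or the receiver restarting would each point to a different fix than a version mismatch.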

Trying to execute Showcase Examples in Splunk MLTK - coming up with error.

Whenever I try to run fit against my data, I receive the following:

```
Error in 'fit' command: Error while initializing algorithm "LogisticRegression": Failed to load algorithm "algos.LogisticRegression"
```

Something very similar happens if I try Preprocessing. I'm unsure why they won't load, as I can see the files are there. I've tried multiple options but get the same type of error. I have installed the Splunk MLTK and Python for Scientific Computing add-ons. Splunk is version 7.1.2, MLTK is 3.4.0, and Python for Scientific Computing is 1.3. Thanks for your help!
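The MLTK writes its own log (mlspl.log, as I recall), and the Python traceback behind a "Failed to load algorithm" message usually names the real import failure. A diagnostic sketch:

```
index=_internal source=*mlspl.log* (ERROR OR Traceback)
| table _time _raw
```

It is also worth double-checking the MLTK/PSC compatibility matrix on Splunkbase and that the Python for Scientific Computing add-on matches your OS and bitness; a mismatched PSC build is a common cause of algorithms failing to initialize.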

Alert on low average when compared with other events

Hi, please help.

- Step 1: Calculate the combined average of an event (event name: mytest here) across source files a, b, c.
- Step 2: Calculate the average of the mytest event from each source file a, b, c individually.
- Step 3: Check whether there is a 50% change when comparing each individual average with the combined average.
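A sketch of the three steps in one search; the index and the numeric field (value here) are placeholders for whatever is being averaged. eventstats computes the combined average across all three sources, stats the per-source averages, and the final eval/where flags sources deviating by 50% or more:

```
index=my_index event=mytest (source=a OR source=b OR source=c)
| eventstats avg(value) AS combined_avg
| stats avg(value) AS source_avg max(combined_avg) AS combined_avg BY source
| eval pct_change = round(abs(source_avg - combined_avg) / combined_avg * 100, 1)
| where pct_change >= 50
```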

Troubleshoot page loading slowness

Hi, is there a way to troubleshoot page-loading slowness? I've been debugging SSO/SiteMinder/LDAP issues, but I don't see any specific problems. However, local-account responsiveness is significantly better than SSO/LDAP logins. I have one app with a page that I'd like to use for testing, but I'm not sure how to see what's going on, hopefully by enabling some additional logging. Any suggestions?
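One place to start is Splunk's own web access log, which records how long each request took; comparing response times for the test page under a local account versus an SSO login can isolate where the delay sits. A sketch (the spent field, response time in milliseconds, is assumed to be auto-extracted for this sourcetype):

```
index=_internal sourcetype=splunk_web_access user=*
| stats avg(spent) AS avg_ms max(spent) AS max_ms count BY uri_path user
| sort - avg_ms
```

If splunkweb's own timings look flat while the browser feels slow, the delay is more likely upstream in the SiteMinder/SSO layer than inside Splunk.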

Adding evaluated token breaks searchWhenChanged="false"?

I have a dashboard where the input fields are set to `searchWhenChanged="false"`. This was working as expected until I added an evaluated token from the value of one of these fields. Now, when I change the value of the input field associated with the evaluated token, the results table search automatically runs. Any suggestions on how to stop this? Here is the token evaluation:

```
var offset = new DropdownInput({
    "id": "offset",
    "choices": [
        {"label": "GMT", "value": "0"},
        {"label": "PDT", "value": "25200"}
    ],
    "searchWhenChanged": false,
    "initialValue": "0",
    "selectFirstChoice": false,
    "showClearButton": true,
    "default": "0",
    "value": "$form.offset$",
    "el": $('#offset')
}, {tokens: true}).render();

offset.on('change', function(newValue) {
    FormUtils.handleValueChange(offset);
});

offset.on("valueChange", function(e) {
    if (e.value !== undefined) {
        EventHandler.evalToken("timezone", "$label$", e.data);
    }
});
```