Channel: Questions in topic: "splunk-enterprise"

Can you help me figure out how to use UDP to transmit data from one heavy forwarder to another heavy forwarder?

I'm trying to set up a test environment that will later be used in production. It will take data from another Splunk heavy forwarder (HF) and send it to our HF, and the data must be transmitted over UDP. I have played around with creating outputs.conf/inputs.conf, props.conf, and transforms.conf, but the data keeps getting indexed on the first HF and never reaches the second HF. Using netcat and tcpdump I have verified that UDP traffic does reach the other machine; I was using UDP port 1514 for testing. Can anyone assist? I can try to post the .conf files, but I think they are so messed up by now that I'm not sure it would be helpful.
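A minimal sketch of what the two sides often look like for this, assuming the sending HF uses Splunk's syslog output processor (the built-in way to send events over UDP) and that the group name `udp_out`, sourcetype `my_sourcetype`, host, and port are all placeholders to adjust:

    # outputs.conf on the sending HF
    [syslog:udp_out]
    server = receiving-hf.example.com:1514
    type = udp

    # optional: stop the sending HF from also indexing the data locally
    [indexAndForward]
    index = false

    # props.conf on the sending HF: route this sourcetype to the syslog output group
    [my_sourcetype]
    TRANSFORMS-route_to_udp = send_to_udp

    # transforms.conf on the sending HF
    [send_to_udp]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = udp_out

    # inputs.conf on the receiving HF: listen on the UDP port
    [udp://1514]
    sourcetype = my_sourcetype
    index = main

Note that the syslog output sends plain syslog-formatted text rather than Splunk's cooked data, so the receiving HF treats it like any other UDP syslog source.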

Can you help me with input lookup, tstats, and visualization?

Hello, I have a lookup table of all the source types. I'm trying to use stats or tstats to show all the source types, and for any source type with no data coming in, I want to show 0. I'm having trouble getting this to work with tstats or timechart; it only works with chart right now. Is there a way to solve this? Please help, thank you! This is what I have now:

    index=* | chart count by Sourcetype
    | append [| inputlookup "Sourcetype.csv" | eval count=0 ]

I would like to use timechart or tstats because I'm trying to use the Trellis visualization.
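A minimal sketch of a tstats-based variant, assuming the column in Sourcetype.csv is named Sourcetype; the appended lookup rows back-fill a 0 for source types that returned no events:

    | tstats count where index=* by sourcetype
    | append
        [| inputlookup Sourcetype.csv
         | rename Sourcetype as sourcetype
         | eval count=0]
    | stats max(count) as count by sourcetype

For a timechart suited to trellis, the same idea should work with `by _time span=1h, sourcetype` in the tstats clause and a `timechart sum(count) by sourcetype` at the end, though the appended zero rows then need a _time value of their own.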

Splunk Cisco Networks App not displaying results

I have installed the Cisco Networks app version 2.5.6 and the additional Cisco add-on in Splunk, and it's failing to show any results. I am receiving syslog from the Cisco switches and can see it when I search, but when I click the separate "Cisco Networks" tab on the left side of Splunk and go into the app, there is nothing to show. All of the panels show this error:

    Error in 'TsidxStats': WHERE clause is not an exact query

I don't really know what this means or how I am supposed to fix this issue. Any help would be appreciated. Thank you.
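Not an answer, but a sanity check that is often useful with this app: its dashboards run tstats over the data, so it's worth confirming the events actually carry the sourcetype and index the add-on expects. The sourcetype below is an assumption (cisco:ios is what the Cisco Networks add-on is commonly documented to use); check it against your add-on's documentation:

    index=* sourcetype=cisco:ios
    | stats count by index, sourcetype, host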

Is sourcetype alias a thing?

As my program isn't great at planning for the future, or at doing anything involving industry standards, we are indexing our Liferay Tomcat logs in Splunk but had not used the typical "access_combined" sourcetype: we just called it "liferay" and extracted all the fields using more of an IIS theme (so 'cs_uri_stem' instead of 'uri', etc.). We built several rudimentary web stats dashboards for the various sites we host in Liferay.

However, in a recent effort to get the Splunk App for Web Analytics working, I used the sourcetype rename feature to rename our "liferay" sourcetype to "access_combined" and re-extracted all of the fields using the more common standard field names the app expects. So now the Splunk App for Web Analytics works great, but all of my previously built custom web stats dashboards are broken, because the old sourcetype (and its associated field extractions) is no longer recognized.

Is there a way to have a single sourcetype respond to two different names, like a field alias? Or do I have to go do a bunch of find-and-replace work and change all my old dashboards?
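There may not be a true sourcetype alias, but a minimal sketch of a workaround for the field-name half of the problem, assuming the new extractions produce standard names like `uri` and `clientip` (placeholders; use whatever your new extractions are actually called) while the old dashboards expect the IIS-style names:

    # props.conf (search time), under the renamed sourcetype
    [access_combined]
    # expose the new standard fields under the legacy names the old dashboards reference
    FIELDALIAS-legacy_web_fields = uri AS cs_uri_stem clientip AS c_ip

Dashboards that also filter on `sourcetype=liferay` would still need updating (or a shared eventtype/macro for the sourcetype clause), since the aliases only cover field names.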

Can queued searches from users/roles be prioritized?

If searches are queuing, can searches from particular roles/users be prioritized over others to run next, regardless of when the searches were started? For example: UserA runs a search at 12:00:00 PM that is queued, and UserB runs a search at 12:00:01 PM that is also queued. Is there a way to get UserB's job to run before UserA's job, based on the user or on the group the user is in?
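Not a per-role answer, but if the searches in question are scheduled ones, there is a per-search knob that may help; a minimal sketch, assuming "UserB important search" is the saved search to favor (schedule_priority does not apply to ad-hoc searches):

    # savedsearches.conf
    [UserB important search]
    # let the scheduler run this search ahead of other queued scheduled searches
    schedule_priority = higher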

Deployment server in a distributed environment?

Hi all, I have the following environment:

1. Universal forwarders
2. An indexer cluster with 3 indexers and one master node
3. A search head cluster with 3 SHs, with the master node above also acting as the deployer

The question is: is it possible to set this master node up as a deployment server too?
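For what it's worth, a deployment server is essentially just a splunkd management port that deployment clients phone home to, so a minimal sketch of the client side, assuming the forwarders can reach the master node's management port (the hostname below is a placeholder):

    # deploymentclient.conf on each universal forwarder
    [deployment-client]

    [target-broker:deploymentServer]
    targetUri = master-node.example.com:8089

Whether one host should carry the cluster master, SHC deployer, and deployment server roles at once is more a sizing question than a configuration one.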

Splunk Stream support for Splunk Enterprise 7.2

When do you suppose Splunk Stream will be fully supported on 7.2, i.e. listed on Splunkbase as compatible with it? It hasn't been updated in close to a year.

How do you fix a Python path issue?

I have splunk_app_db_connect installed and it works correctly until I install TA-Proofpoint-TAP. When the DB Connect UI is started, it generates this error:

    Traceback (most recent call last):
      File "/splunk/cold/apps/splunk/bin/rest_handler.py", line 79, in
        print splunk.rest.dispatch(**params)
      File "/splunk/cold/apps/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 149, in dispatch
        module = __import__('splunk.rest.external.%s' % parts[0], None, None, parts[0])
      File "/splunk/cold/apps/splunk/etc/apps/splunk_app_db_connect/bin/dbxproxy.py", line 7, in
        from dbx_settings import Settings
      File "/splunk/cold/apps/splunk/etc/apps/splunk_app_db_connect/bin/dbx_settings.py", line 10, in
        import splunklib.client as client
      File "/splunk/cold/apps/splunk/etc/apps/TA-Proofpoint-TAP/bin/splunklib/__init__.py", line 18, in
        from splunklib.six.moves import map
    ImportError: No module named six.moves

If the TA isn't there, then the error isn't generated. From reading the message, it appears that splunklib is being picked up from the wrong directory. I've tried putting a print in dbx_settings.py to see what the path is before it reaches the import, but I've never found the output. Any ideas on how I can capture the path information so I can see why Python is picking up the wrong splunklib.client? TIA, Joe
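On the "how do I capture the path" part: print output from REST handlers is easy to lose, so writing sys.path to a file of your own is usually more reliable. A minimal sketch, assuming you temporarily paste it near the top of dbx_settings.py (before the splunklib import) and remove it afterwards; the log path is arbitrary:

    # temporary debug: dump the interpreter's module search path to a file we control
    import sys

    with open("/tmp/dbx_path_debug.log", "a") as f:
        f.write("sys.path as seen by dbx_settings.py:\n")
        for p in sys.path:
            f.write("  %s\n" % p)

Whichever directory appears first in that list and contains a splunklib package is the one Python will import from.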

Email alert action not sending in 7.2.4 (dev/test license)

I just did a fresh install of 7.2.4 and installed my dev/test license. I am trying to test email alert functionality, which worked on this system when a previous version was installed. The search fires and appears to trigger the alert action, but it looks like sendemail is failing. This is the message in python.log:

    2019-02-08 15:45:01,734 -0500 ERROR sendemail:1397 - [HTTP 404] https://127.0.0.1:8089/servicesNS/admin/search/saved/searches/Splunk%20Web%20Login?output_mode=json

I am not sure whether this is a bug that support needs to look into, or whether it is due to using a dev/test license under this version; I did not have this issue with a dev/test license under older versions. I did, however, set up this instance with an admin username other than "admin", so I am not sure if that is related.
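One way to separate the mail configuration from the alert plumbing is to call sendemail directly from the search bar; a minimal sketch, with the recipient address a placeholder:

    | makeresults
    | eval test_message="sendemail smoke test from the 7.2.4 dev/test instance"
    | sendemail to="you@example.com" subject="sendemail smoke test" message="If this arrives, the mail settings themselves are fine." sendresults=true

If that also fails, the error it logs should land in the same python.log quoted above.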

How to reference a SQL Server database with an XML field containing additional variant fields

I'm a Splunk newbie, so feel free to challenge any of my assumptions. I'm tasked with integrating our proprietary product's event/alert database. I believe the correct approach (in a simple case) is to install DB Connect and a (universal/heavy?) forwarder on the database server host, and Splunk Enterprise as an indexer/search head on a "query/reporting" host.

The difficulty I'm encountering is that at least one table has a column that contains XML; this XML describes a variable list of additional fields based on the event/alert type (similar to a Windows event log), and some of these additional fields include text with commas, which breaks the CSV processing. These fields should be searchable and selectable on the search head, but I'm not sure what the best approach to processing them is.

I started to look into custom search commands to transform the SQL Server record into an appropriate form. A CSV representation seems to be a problem, not just because of delimiter characters in the field text, but because all records must be processed before the header row can be emitted, in order to determine the set of additional fields. One option is to convert the data into a "key=value;" representation. Can I define a custom sourcetype to handle the data? I expect the answer is probably a combination of these approaches.

By the way, I installed Splunk Enterprise and DB Connect, but even with a reduced set of records I violated the daily limits on the demo license. Advice on avoiding this would be helpful.
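One thing that may simplify the design: Splunk can parse XML inside an event at search time, so the variable fields don't necessarily have to be flattened into CSV or key=value before indexing. A minimal sketch, assuming the XML column arrives via DB Connect in a field named `event_xml` (a placeholder), and that the index and sourcetype names are likewise placeholders:

    index=product_events sourcetype=product:alerts
    | spath input=event_xml
    | table _time, alert_type, *

If instead each event's _raw is the XML document itself, setting KV_MODE = xml in props.conf for that sourcetype has much the same effect without an explicit spath.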

What to do about NGFW logs via syslog

I've looked at a few apps for Cisco Firepower and it's still not clear to me what I need. We have the NGFWs, which are managed via FTD. We get eStreamer events, which are parsed via eNcore; that all seems to be working well. But the actual firewall events (permits/denies/etc.) are being sent via syslog to a syslog forwarder. Those events are being ingested, but are not being parsed correctly for searching, so rather than the fields we need, each event is basically one big raw blob. Does anyone know what we'd need for the extractions here? I've looked at this app and it seems to match, but it's not really clear to me. So any help is appreciated. Thanks!
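For what it's worth, the field extractions in the various Firepower/ASA add-ons are keyed off the sourcetype, so a common first step is making sure the syslog input assigns the sourcetype that whichever TA you pick expects, instead of a generic syslog sourcetype. A minimal sketch, assuming the firewall events arrive on a dedicated UDP port on the syslog forwarder, and with `cisco:ftd` standing in for whatever sourcetype your chosen add-on actually documents:

    # inputs.conf on the syslog forwarder (dedicated port for the firewall events)
    [udp://10514]
    sourcetype = cisco:ftd
    index = network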

[MACRO SOLUTION] mvexpand multiple multi-value fields

There are already several Splunk Answers posts about mvexpand-ing multiple multi-value fields:

https://answers.splunk.com/answers/25653/mvexpand-multiple-multi-value-fields.html
https://answers.splunk.com/answers/123887/how-to-expand-multiple-multivalue-fields.html

Some of them also helped improve the Splunk Docs (Example 3):

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Mvexpand#Examples

Now, how do I deliver a better solution [*to feature in the next installment of Smart AnSwerS? ;)*] Here is a macro-based solution, which can scale horizontally to any number of fields. Please note that, like the other solutions, it works only with mv fields of the same cardinality (i.e. mv fields having the same mvcount).

Macro name: `my_mvexpand(2)`
Arguments: `first_mv_field, other_mv_fields`

**macros.conf**

    [my_mvexpand(2)]
    args = first_mv_field,other_mv_fields
    definition = | fields - _raw \
    | eval fields_value=$first_mv_field$, \
      fields_list="$first_mv_field$".",".replace("$other_mv_fields$"," ",",") \
    | foreach $other_mv_fields$ \
      [ eval fields_value=mvzip(fields_value,'<<FIELD>>') ] \
    | mvexpand fields_value \
    | eval fields_value=split(fields_value,","), fields_list=split(fields_list,",") \
    | eval _raw=mvzip(fields_list,fields_value,"_X==") \
    | extract pairdelim="\n" kvdelim="==" \
    | fields - _raw,fields_list,fields_value \
    | rename *_X as *

Usage: the `my_mvexpand` macro takes two arguments. The first argument is the multi-value field you would like to expand. The second argument takes the list of other multi-value fields (comma- or space-separated) that you would like to zip and expand along with the field in the first argument.

### Syntax:

    `my_mvexpand("mv_field_1","mv_field_2,mv_field_3")`                // comma-separated second argument
    `my_mvexpand("mv_field_1","mv_field_2 mv_field_3 mv_field_4")`     // space-separated second argument

### Example 1:

    | makeresults
    | eval f1=split("a1,a2,a3",",")
    | eval f2=split("b1,b2,b3",",")
    | eval f3=split("c1,c2,c3",",")
    `my_mvexpand(f1,"f2 f3")`

### Example 2:

    | makeresults
    | eval x="another_single_value_field"
    | eval f1=split("a1,a2,a3",",")
    | eval f2=split("b1,b2,b3",",")
    | eval f3=split("c1,c2,c3",",")
    | eval f4=split("d1,d2,d3",",")
    `my_mvexpand("f1","f2,f3,f4")`

Feel free to use and enhance :)

Using Splunk to search and analyse the same logs again after one hour

I have a requirement to search a log file and then analyse the result of that search against the same log file one hour later. For example: search for a payment keyword with an ID at 12:00 PM in log X, then search for the same payment ID at 1:00 PM in log X to check whether an acknowledgment has been received or not. If anyone has done something similar, please share.
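A minimal sketch of one common way to do this in a single scheduled search rather than two manual ones; the index, source, keywords, and the `payment_id` field are placeholders for whatever the log actually contains:

    index=app_logs source="X" ("payment" OR "acknowledgment") earliest=-2h
    | eval kind=if(searchmatch("acknowledgment"), "ack", "payment")
    | stats min(_time) as payment_time, values(kind) as kinds by payment_id
    | where isnull(mvfind(kinds, "ack")) AND payment_time < relative_time(now(), "-1h")

Scheduled hourly, this returns payment IDs that are more than an hour old and still have no acknowledgment event.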

mvexpand multiple multi-value fields [MACRO BASED SOLUTION]

There are already several Splunk Answers posts about mvexpand-ing multiple multi-value fields:

https://answers.splunk.com/answers/25653/mvexpand-multiple-multi-value-fields.html
https://answers.splunk.com/answers/123887/how-to-expand-multiple-multivalue-fields.html

Some of them also helped improve the Splunk Docs (Example 3):

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Mvexpand#Examples

Now, how can it auto-scale horizontally for any number of fields? Here is a macro-based solution for this question.

Macro name: `my_mvexpand(2)`

Usage: the `my_mvexpand(2)` macro takes two arguments. The first argument is the multi-value field you would like to expand. The second argument takes the list of other multi-value fields (comma- or space-separated) that you would like to zip and expand along with the field in the first argument.

### Syntax:

    `my_mvexpand("mv_field_1","mv_field_2,mv_field_3")`                // comma-separated second argument
    `my_mvexpand("mv_field_1","mv_field_2 mv_field_3 mv_field_4")`     // space-separated second argument

### Example 1:

    | makeresults
    | eval f1=split("a1,a2,a3",",")
    | eval f2=split("b1,b2,b3",",")
    | eval f3=split("c1,c2,c3",",")
    `my_mvexpand(f1,"f2 f3")`

### Example 2:

    | makeresults
    | eval x="another_single_value_field"
    | eval f1=split("a1,a2,a3",",")
    | eval f2=split("b1,b2,b3",",")
    | eval f3=split("c1,c2,c3",",")
    | eval f4=split("d1,d2,d3",",")
    `my_mvexpand("f1","f2,f3,f4")`

Please note that, like the other solutions already posted on Splunk Answers, this macro-based solution works only with mv fields of the same cardinality (i.e. mv fields having the same mvcount).

Feel free to use and enhance :)

Getting server metrics from Splunk Infrastructure Servers

We are just beginning to use ITSI and I would like to create some KPIs for our Splunk servers: CPU, memory, and disk space. This data seems to already be in the Splunk internal indexes, I just don't know where... Does anyone? Any help is much appreciated.
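A minimal sketch of where CPU and memory for full Splunk Enterprise instances usually live, the _introspection index that splunkd populates on each server; the exact field names are worth confirming against a few raw events on your own instance:

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide
    | eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
    | timechart span=5m avg(cpu_pct) as avg_cpu_pct, avg(data.mem_used) as avg_mem_used by host

Disk information is reported separately; on recent versions it appears in the same index under sourcetype=splunk_disk_objects component=Partitions, but verify that on your instance before building a KPI on it.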

Support ticket raised outside of business hours

I have a support ticket system where people can submit their support tickets. The system runs 24 hours, but the workers only work **from 8am to 8pm**, **Monday to Friday**. I have a create_time field, which is when the ticket is created. So if a ticket is created on Monday 9pm, the create_time should be Monday 8am. If the ticket is created on Saturday, it should start on Monday instead.

Secondly, I have an SLA where Level 1 is 4 hours and Level 2 is 8 hours. The SLA refers to how long the support ticket may take to be solved. So if a Level 1 (4 hours) support ticket is raised on Monday at 7pm, the workers can only spend 1 hour on it before they leave work at 8pm, then continue working on it from Tuesday 8am, which means the deadline should be Tuesday 11am. How do I do that?

This is my current search, which is already able to skip weekends:

    index="test" sourcetype="incident_all_v3"
    | eval check = strptime(strftime(_time , "%d/%m/%Y") , "%d/%m/%Y")
    | eventstats max(check) as checktime
    | where checktime = check
    | dedup 1 ticket_id sortby -_time
    | join ticket_id type=left
        [ search index="test" sourcetype="incident_assigned"
        | eval check = strptime(strftime(_time , "%d/%m/%Y") , "%d/%m/%Y")
        | eventstats max(check) as checktime
        | where checktime = check
        | eval move_datetime = strptime(move_datetime, "%Y-%m-%d %H:%M:%S")
        | dedup 1 ticket_id sortby -move_datetime
        | eval move_datetime = strftime(move_datetime, "%Y-%m-%d %H:%M:%S")
        | fields ticket_id move_datetime]
    | eval realtime = if(isnotnull(move_datetime), move_datetime, create_time)
    | eval create_time_epoch = strptime(realtime, "%Y-%m-%d %H:%M:%S")
    | lookup app_name.csv queue_name output vendor, app_name
    | search vendor = "Company" AND ticket_type = "Incident" AND app_name = "*"
    | eval diff_seconds = now() - create_time_epoch
    | eval diff_days = diff_seconds / 86400
    | eval status = if (ticket_state="Closed" OR ticket_state="Completed" OR ticket_state="For Verification" OR ticket_state="Verified", "resolved" , "unresolved")
    | where status = "unresolved" AND ticket_type = "Incident"
    | eval SEVERITY = case ( SLA == "SLA Level 1", "1", SLA == "SLA Level 2", "2", SLA == "SLA Level 3", "3", SLA == "SLA Level 4", "4")
    | eval SEVERITY = "Sev ".SEVERITY
    | lookup sev_target.csv SEVERITY output TARGET
    | eval SLA_DEADLINE = case(SEVERITY = "Sev 4", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 3", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 2", create_time_epoch + (TARGET*3600), SEVERITY = "Sev 1", create_time_epoch + (TARGET*3600))
    | eval day_of_week= strftime(create_time_epoch, "%A")
    | eval sum= case( (day_of_week=="Tuesday" OR day_of_week== "Sunday"), 86400, 1=1, 172800)
    | eval SLA_DEADLINE = if(SEVERITY = "Sev 4", SLA_DEADLINE + sum , SLA_DEADLINE)
    | eval SLA_DEADLINE = if(SEVERITY = "Sev 3", SLA_DEADLINE + sum , SLA_DEADLINE)
    | eval SLA_DEADLINE = if(SEVERITY = "Sev 2", SLA_DEADLINE + sum , SLA_DEADLINE)
    | eval SLA_DEADLINE = if(SEVERITY = "Sev 1", SLA_DEADLINE + sum , SLA_DEADLINE)
    | eval SLA_DEADLINE = strftime(SLA_DEADLINE,"%Y-%m-%d %H:%M:%S")
    | table *
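Not a full answer, but a minimal sketch of the business-hours adjustment itself, assuming `create_time_epoch` already holds the raw creation time in epoch seconds; it pushes after-8pm times to the next day's 8am, pre-8am times to 8am the same day, and weekend times to Monday 8am, and could replace the fixed `sum` offsets in the search above:

    | eval hour = tonumber(strftime(create_time_epoch, "%H"))
    | eval work_start = case(hour >= 20, relative_time(create_time_epoch, "+1d@d") + 8*3600,
                             hour < 8,   relative_time(create_time_epoch, "@d") + 8*3600,
                             1=1,        create_time_epoch)
    | eval dow = strftime(work_start, "%w")
    | eval work_start = case(dow == "6", relative_time(work_start, "+2d@d") + 8*3600,
                             dow == "0", relative_time(work_start, "+1d@d") + 8*3600,
                             1=1,        work_start)

This only normalizes the starting point; spreading the 4- or 8-hour SLA target across working hours (the Monday 7pm to Tuesday 11am case) still needs the deadline arithmetic to pause at 8pm and resume at 8am.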

Viewing parameters, e.g. from limits.conf, using search

Hello, is it possible to view configuration files/parameters, e.g. limits.conf, using search? I do not have access to the OS, but I would still like to look into the parameters so I can advise my Splunk admin on changes. Kind regards, Kamil
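A minimal sketch of one way to do this from the search bar, assuming your role is allowed to run the rest command against the local instance; `conf-limits` can be swapped for any other configuration file name:

    | rest /services/configs/conf-limits splunk_server=local
    | table title, eai:acl.app, *

Each returned row corresponds to a stanza, with the settings as splunkd currently sees them.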

Lookup command returning incorrect null values

I encountered some very weird behaviour. I kind of found a way around it, but I want to make sure I didn't misunderstand anything, and I want to isolate/define the issue as well as possible. Maybe this is already known to some of you.

I have a lookup which gives inconsistent results. It seems like if I feed a lot into it via | lookup, I don't always get output even if the entry exists. This is inconsistent: one search might return a result, the next might not. My search is something like this (very simplified):

    index=myindex sourcetype=mysourcetype someparameters=myparameters
        [| inputlookup listofnumbers.csv | fields number]
    | dedup number
    | lookup numberToText number output text as text1
    | search number < 1000
    | lookup numberToText number output text as text2
    | table number, text1, text2

The first lookup has to look up about 10,000 values. Sometimes they get a text1, sometimes they don't, even if they are in the numberToText lookup. The second lookup, which deals with a smaller amount, always seems to give the correct output.

Has anyone ever experienced this? I know that subsearches at the top can only return 10k results to the outer search, but I am not aware of any restriction of the lookup command itself. The lookup is a definition which points to a CSV; it makes no difference whether the CSV is addressed directly.
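Purely as something to rule out: there is a limits.conf threshold controlling how large a CSV lookup can be before Splunk stops holding it in memory and switches to an on-disk index of the file, which sometimes becomes relevant once a lookup grows large. A minimal sketch of where it lives (the value shown is a placeholder, not a recommendation):

    # limits.conf on the search head
    [lookup]
    # CSV lookups larger than this many bytes are indexed on disk rather than kept in memory
    max_memtable_bytes = 10000000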

Explanation of concurrency in limits.conf needed

Hello, my alert sporadically gets skipped with the following log entry:

    02-09-2019 08:48:53.968 +0100 INFO SavedSplunker - savedsearch_id="nobody;mlbso;Anomaly Detection", search_type="scheduled", user="d046266", app="mlbso", savedsearch_name="Anomaly Detection", priority=default, status=skipped, reason="The maximum number of concurrent running jobs for this historical scheduled search on this instance has been reached", concurrency_category="historical_scheduled", concurrency_context="saved-search_instance-wide", concurrency_limit=1, scheduled_time=1549698360, window_time=0

I am wondering how it can be that the concurrency limit for this alert is only 1, given the following parameters I have:

    number_of_cpus = 8
    max_searches_per_cpu = 20
    base_max_searches = 10
    max_rt_search_multiplier = 1
    max_searches_perc = 77

Could you please help with this? Kind regards, Kamil
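For context, concurrency_context="saved-search_instance-wide" with concurrency_limit=1 appears to describe the per-saved-search limit (how many copies of this one scheduled search may run at the same time) rather than the instance-wide scheduler limits derived from the parameters above. A minimal sketch of the setting that usually governs that, assuming the alert genuinely runs longer than its schedule interval and overlapping runs are acceptable:

    # savedsearches.conf in the app that owns the alert
    [Anomaly Detection]
    # allow more than one instance of this scheduled search to run concurrently (default is 1)
    max_concurrent = 2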

Get entity list of just Splunk infrastructure servers

I would like to know the query I can use to get JUST the Splunk infrastructure servers, and not the UFs. I want to use this in ITSI for entities. Thanks!
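A minimal sketch of one approach, assuming the full Splunk Enterprise instances (indexers, cluster master, and so on) are search peers of the search head where this runs; universal forwarders never show up here because they are not search peers:

    | rest /services/server/info splunk_server=*
    | table splunk_server, host, version, server_roles, os_name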