Channel: Questions in topic: "splunk-enterprise"

Deploying a Heavy Forwarder on a Cloud Server, what is needed?

Hello everyone! I'm working closely with my server team, and we are going to deploy a Heavy Forwarder on a cloud server. We're doing this so that we can manage our own tokens. We also have a Splunk department that is allowing me to be the knowledge object admin for our index. That said, I'm asking this question to verify that all my information is correct and that I'm not doing something I shouldn't, or adding something that's not relevant to my deployment. The Heavy Forwarder will **NOT** index anything. Nothing will be stored, and we do not wish to pre-cook any extractions before events arrive at the indexer. The Heavy Forwarder will not have a large volume of traffic (< 1 GB/day). Only a few applications are interested in using our Heavy Forwarder, and those send only roughly 200 KB/day. We deployed a test Heavy Forwarder using Ubuntu on a VirtualBox VM and were able to set it up successfully as a forwarder, so we wish to do the same thing again, but in the cloud. Here are the ports that I listed for the server team:

- **8000**: Web interface
- **8065**: Python application server
- **443** (instead of 8088): REST API event collection (HTTP Event Collector)
- **8089**: Splunk management port
- **8191**: KV store port (MongoDB)

We're also going to deploy a minimal server (2-core CPU + 4 GB RAM). Is there anything I need to be aware of before going forward with this deployment? We are also going to be using SSL and HTTPS. Should we also leave some additional ports open for Universal Forwarder / TCP & UDP access? Thanks!
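For reference, a minimal sketch of the two config files a forward-only Heavy Forwarder like this typically needs (the hostnames, token GUID, and index name below are placeholders, not values from this deployment):

    # outputs.conf -- send everything to the indexers; indexAndForward
    # defaults to false, so nothing is indexed or stored locally
    [tcpout]
    defaultGroup = primary_indexers
    indexAndForward = false

    [tcpout:primary_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

    # inputs.conf -- HEC on 443 plus an optional listener for Universal Forwarders
    [http]
    disabled = 0
    port = 443
    enableSSL = 1

    [http://my_app_token]
    token = 11111111-2222-3333-4444-555555555555
    index = my_index

    [splunktcp://9997]
    disabled = 0

If Universal Forwarders will ever send to this box, the `splunktcp` port (9997 here) is the extra one to leave open.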

Custom DBConnect to get McAfee EPO inventory info

Wanted to share this with the community: we use the query below to collect a static inventory of systems currently in McAfee EPO, as well as information on their product installations. We set this up as a batch job that collects full information every 30 minutes or so. Very useful for product coverage graphs, as well as a custom section we added to give us a list of devices where on-access scan has been disabled by policy or user. Running queries against this for product coverage, DAT coverage, etc. is much quicker than trying to aggregate data on devices over a long period of time, especially if you have traveling devices that may not check in for a couple of days at a time. We place this into the `mcafee` index with `sourcetype=mcafee:epo:inventory`.

    SELECT
        a.NodeName AS dest_nt_host,
        b.product, b.oas_status,
        b.vse_dat_version, b.vse_engine64_version, b.vse_engine_version,
        b.vse_hotfix, b.vse_product_version, b.vse_sp,
        b.enstp_dat_version, b.enstp_engine64_version, b.enstp_engine_version,
        b.enstp_hotfix, b.enstp_product_version,
        b.ma_product_version, b.enspf_product_version,
        b.ensfw_product_version, b.enswc_product_version
    FROM (
        SELECT DISTINCT [EPOLeafNode].[NodeName] FROM [EPOLeafNode]
    ) a
    LEFT JOIN (
        SELECT
            [EPOLeafNode].[NodeName] AS [dest_nt_host],
            CASE
                WHEN [EPOProductProperties].[ProductCode] LIKE 'VIRUSCAN%' THEN 'VirusScan Enterprise'
                WHEN [EPOProductProperties].[ProductCode] LIKE 'ENDP_AM%' THEN 'McAfee Endpoint Security'
                ELSE NULL
            END AS [product],
            CASE
                WHEN vseOASEnabled.value LIKE '1' THEN 'Enabled'
                WHEN [AM_CustomProps].bOASEnabled LIKE '1' THEN 'Enabled'
                WHEN vseOASEnabled.value LIKE '0' THEN 'Disabled'
                WHEN [AM_CustomProps].bOASEnabled LIKE '0' THEN 'Disabled'
                ELSE 'Unknown'
            END AS [oas_status],
            [EPOProdPropsView_VIRUSCAN].[datver] AS [vse_dat_version],
            [EPOProdPropsView_VIRUSCAN].[enginever64] AS [vse_engine64_version],
            [EPOProdPropsView_VIRUSCAN].[enginever] AS [vse_engine_version],
            [EPOProdPropsView_VIRUSCAN].[hotfix] AS [vse_hotfix],
            [EPOProdPropsView_VIRUSCAN].[productversion] AS [vse_product_version],
            [EPOProdPropsView_VIRUSCAN].[servicepack] AS [vse_sp],
            [EPOProdPropsView_THREATPREVENTION].[verDAT32Major] AS [enstp_dat_version],
            [EPOProdPropsView_THREATPREVENTION].[verEngine64Major] AS [enstp_engine64_version],
            [EPOProdPropsView_THREATPREVENTION].[verEngine32Major] AS [enstp_engine_version],
            [EPOProdPropsView_THREATPREVENTION].[verHotfix] AS [enstp_hotfix],
            [EPOProdPropsView_THREATPREVENTION].[productversion] AS [enstp_product_version],
            [EPOProdPropsView_EPOAGENT].[productversion] AS [ma_product_version],
            [EPOProdPropsView_ENDPOINTSECURITYPLATFORM].[productversion] AS [enspf_product_version],
            [EPOProdPropsView_FIREWALL].[productversion] AS [ensfw_product_version],
            [EPOProdPropsView_WEBCONTROL].[productversion] AS [enswc_product_version]
        FROM [EPOLeafNode]
        INNER JOIN [EPOProductProperties]
            ON [EPOLeafNode].[AutoID] = [EPOProductProperties].[ParentID]
        LEFT JOIN [AM_CustomProps]
            ON [EPOLeafNode].[AutoID] = [AM_CustomProps].[LeafNodeID]
        LEFT JOIN [dbo].EPOProductSettings AS vseOASEnabled
            ON (EPOProductProperties.AutoID = vseOASEnabled.ParentID
                AND vseOASEnabled.SectionName = N'On-Access General'
                AND vseOASEnabled.SettingName = N'bEnabled')
        LEFT JOIN [EPOProdPropsView_THREATPREVENTION]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_THREATPREVENTION].[LeafNodeID]
        LEFT JOIN [EPOProdPropsView_EPOAGENT]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_EPOAGENT].[LeafNodeID]
        LEFT JOIN [EPOProdPropsView_ENDPOINTSECURITYPLATFORM]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_ENDPOINTSECURITYPLATFORM].[LeafNodeID]
        LEFT JOIN [EPOProdPropsView_FIREWALL]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_FIREWALL].[LeafNodeID]
        LEFT JOIN [EPOProdPropsView_WEBCONTROL]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_WEBCONTROL].[LeafNodeID]
        LEFT JOIN [EPOProdPropsView_VIRUSCAN]
            ON [EPOLeafNode].[AutoID] = [EPOProdPropsView_VIRUSCAN].[LeafNodeID]
        WHERE EPOProductProperties.ProductCode LIKE 'VIRUSCAN%'
           OR EPOProductProperties.ProductCode LIKE 'ENDP_AM%'
    ) b ON a.NodeName = b.dest_nt_host
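For anyone wiring this up, a rough sketch of what the DB Connect batch input might look like (the stanza name, connection name, and interval below are placeholders; check the db_inputs.conf spec for your DB Connect version):

    # db_inputs.conf (DB Connect v3) -- hypothetical batch input for the query above
    [mcafee_epo_inventory]
    connection = epo_db_connection
    mode = batch
    interval = 1800
    index = mcafee
    sourcetype = mcafee:epo:inventory
    query = SELECT a.NodeName AS dest_nt_host, ... (full query above)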

How do I assign dropdown links in a table with events from two sourcetypes where one of them is an inputlookup and the other one is a regular index search?

For example, the table looks like this:

    time    description    vendor1
    time    description    vendor2
    time    description    vendor1

When I click vendor1, it's a regular index-based search. But for vendor2, it should go to the search based on an `inputlookup`. Please help. Thanks
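To frame the question, a minimal Simple XML sketch of a per-value conditional drilldown (the field name `vendor`, the lookup name, and both searches are made-up placeholders):

    <table>
      <search><query>... your table search ...</query></search>
      <drilldown>
        <!-- hypothetical: vendor1 rows open a normal index search -->
        <condition match="'row.vendor' == &quot;vendor1&quot;">
          <link target="_blank">search?q=index%3Dmain%20description%3D$row.description|u$</link>
        </condition>
        <!-- hypothetical: vendor2 rows open an inputlookup-based search -->
        <condition match="'row.vendor' == &quot;vendor2&quot;">
          <link target="_blank">search?q=%7C%20inputlookup%20vendor2_lookup%20%7C%20search%20description%3D$row.description|u$</link>
        </condition>
      </drilldown>
    </table>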

How can I create a visual depiction of when a device is on or off over a period of time?

I have two separate events that log a turn-on and a turn-off. I want to create a timechart showing when the device is on and off over a period of time. I only get a single event each time the state changes. How can I carry the state forward over time until a new state is recorded?
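A sketch of the usual approach, assuming the two events can be matched with `searchmatch` (the search strings and span are placeholders): map each event to a numeric state, bucket with `timechart`, then carry the last known state forward with `filldown`.

    <your base search>
    | eval state=case(searchmatch("device turned on"), 1, searchmatch("device turned off"), 0)
    | timechart span=5m latest(state) AS state
    | filldown state

Plotted as an area or column chart, `state` then reads as 1 while the device is on and 0 while it is off.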

How to use regex and format strings for an XML sample without using KV_MODE=XML?

Hi, I want to use REGEX and FORMAT strings for the XML sample below without using `KV_MODE=xml`. I have been trying different regexes to get hold of the parsing fields but keep failing. Please find the sample log for your reference (note: the XML tags did not survive posting):

    -80.03107887624853,25.351308629611Interdiction6Assured2013-11-0304:40:00Infiltrators: Savanna Carrera, Gregoria Farías, Julina Abeyta, Mariquita Alonso, Urbano Briseño, Victoro Montano 3Raft-80.33045250710296,24.93574264936793Interdiction9Pompano2013-05-0404:22:000-80.30497342463124,24.07890526980327Rustic-79.94720757796837,24.82172611548247Interdiction12Barracuda2013-01-0105:22:00Infiltrators: Cristian Caballero, Vicenta Olivares, Leonides Cintrón, Ascencion Betancourt, Alanzo Arenas, Primeiro Sánchez, Serena Monroy, Madina Mojica, Consolacion Cordero, Faqueza Serrano, Grazia Quesada, Ivette Partida 0Rustic

**props.conf**

    [dreamcrusher]
    LINE_BREAKER = (\)
    TIME_PREFIX =
    TIME_FORMAT = %Y-%m-%d<\/ActionDate>[\r\n]\t+%H:%M:%S
    SHOULD_LINEMERGE = false
    MAX_DAYS_AGO = 2500
    SEDCMD-aremoveheader = s/\<\?xml.*\s*\\s*//g
    SEDCMD-bremovefooter = s/\<\/dataroot\>//g
    REPORT-f = dream_attack
    KV_MODE = none

**transforms.conf**

    [dream_attack]
    REGEX = (?m)^[^<]+.(.*?)\>([\S\s]*?)\<(?=[^\s])
    FORMAT = $1::$2

Please suggest why I am failing. Thanks
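For comparison, a commonly used search-time transform for flat `<tag>value</tag>` pairs looks something like this (a sketch, not tested against this data, and it assumes the elements are simple and non-nested):

    [dream_attack]
    # match <ElementName>text</ElementName> and emit ElementName::text;
    # a REPORT transform applies the regex repeatedly, so every pair is extracted
    REGEX = <([A-Za-z][\w.-]*)>([^<]*)</\1>
    FORMAT = $1::$2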

Can you skip the first x rows returned in a search?

Hi, if I have a query which returns 100 rows, I'd like to show only rows 11-100 (and if it returns 200, only rows 11-200). I have looked for an `offset` command similar to `head` or `tail`, but I can't see one. Do you know how I could go about this? Thanks
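One common pattern, sketched here: number the rows with `streamstats` and filter on the row number.

    <your search>
    | streamstats count AS row_number
    | where row_number > 10

This keeps rows 11 onward regardless of how many rows the search returns.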

How to "fill" missing hours from a search where there are no results with a value of 0 in a chart?

I have a simple search where we are searching the logs for a specific event. We want to chart the count of how many times that event is found each hour, irrespective of the day. We are looking to see which hours are the busiest. Meaning, if the event happened at 5:00 Monday, 5:00 Tuesday, and 6:00 Friday, I expect it to chart a count of 2 for the 5:00 hour and a count of 1 for the 6:00 hour. This query does work and counts what we need:

    <search_string_here>
    | eval hour = strftime(_time,"%H")
    | chart count by hour

The issue, though, is that if there are gaps in the hours, they are not in the chart. So the above example will have a chart with bars only for the 5 and 6 hours. We want to see all hours (0-23) on the chart, and if there was no data for an hour, obviously the count should be 0. I can't figure out how to "fill" in the missing hours. Any suggestions?
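One way to sketch this: append a zero-count row for every hour of the day, then sum, so hours with no events still appear with a count of 0.

    <search_string_here>
    | eval hour = strftime(_time,"%H")
    | chart count by hour
    | append
        [| makeresults count=24
         | streamstats count
         | eval hour = printf("%02d", count - 1), count = 0
         | table hour count]
    | stats sum(count) AS count BY hour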

7.1 Dashboards not converting timepicker to timezone

I'm having two problems with Splunk dashboards after I upgraded to 7.1.2. These only seem to occur when searching a date range or date-time range on dashboards; running a custom search returns correctly, and relative time also works fine.

1. Dashboards are using the searching computer's timezone as a base.
2. Dashboards aren't converting the shared timepicker based on the account's timezone.

I made 2 accounts: account A in my computer's local time (PST, -7 hrs since daylight savings) and account B in my Splunk server's time (GMT).

1. I make a time-range search since today (date range, since today) on my local computer. Account A returns from midnight (as expected) while account B returns from 7:00 AM (PST as base time). In the URL, the epoch time for both searches is the same: midnight PST epoch.
2. I make a time-range search since today (date range, since today) on my Splunk server. Account A returns from 5 PM the previous day (GMT as base time) while account B returns from midnight (as expected). In the URL, the epoch time for both searches is now midnight GMT epoch.

I've been looking into this for several days and I'm led to believe it's a bug with Splunk, as I have another Splunk host (unrelated to this instance, different data) which is still on 6.3 and the dashboard time ranges work correctly as expected. Help would be appreciated.

Dashboard Drill-down not working correctly with conditions

Hey all, I am trying to make a conditional drilldown for a table. The problem is it only ever picks up the hostname condition by itself; the severity condition acts like it is not even there. For example, the hostname, when clicked, will open a new tab, but when a severity is clicked it just runs the auto search and completely bypasses the condition that is set. Is there something wrong with my XML? I am a bit of a novice at this... (the XML tags did not survive posting; what remains of the drilldown is below)

    $click.name2$
    $click.value$
    search?q=index=NIM sourcetype=message severity!=clear severity!=severity hostname=$selected_hostname$ severity=$selected_severity$&earliest=$TIME.earliest$&latest=$TIME.latest$
    $click.value$
    search?q=index=NIM sourcetype=message severity!=clear severity!=severity hostname=$selected_hostname$&earliest=$TIME.earliest$&latest=$TIME.latest$

Thanks!
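For reference, a field-based conditional drilldown usually takes a shape like this sketch (column names and searches assumed from the description, time tokens kept as `$TIME.*$` per the original):

    <drilldown>
      <!-- fires only when a cell in the "hostname" column is clicked -->
      <condition field="hostname">
        <link target="_blank">search?q=index%3DNIM%20sourcetype%3Dmessage%20severity!%3Dclear%20hostname%3D$click.value2|u$&amp;earliest=$TIME.earliest$&amp;latest=$TIME.latest$</link>
      </condition>
      <!-- fires only when a cell in the "severity" column is clicked -->
      <condition field="severity">
        <link target="_blank">search?q=index%3DNIM%20sourcetype%3Dmessage%20severity%3D$click.value2|u$&amp;earliest=$TIME.earliest$&amp;latest=$TIME.latest$</link>
      </condition>
    </drilldown>

With explicit `<condition field="...">` elements and no catch-all condition, clicks on any other column do nothing, which avoids the default auto search.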

Splunkd service won't start on Windows Server (handler/weak reference error)

Has anyone encountered this error before? Our Splunk instance is completely down.

    08-10-2018 12:45:50.153 -0700 INFO loader - win-service: Starting as a Windows service: will run various system checks first...
    08-10-2018 12:45:50.169 -0700 INFO loader - win-service: Splunk starting as a local administrator
    08-10-2018 12:45:50.169 -0700 INFO loader - Automatic migration of modular inputs
    08-10-2018 12:45:52.794 -0700 ERROR loader - win-service: Error running pre-flight-checks (_pclose returned 1).
    08-10-2018 12:45:52.794 -0700 ERROR loader - win-service: Here is the output from running pre-flight-checks:
    08-10-2018 12:45:52.794 -0700 ERROR loader - Checking critical directories... Done
    08-10-2018 12:45:52.794 -0700 ERROR loader - Checking indexes...
    08-10-2018 12:45:52.794 -0700 ERROR loader - Validated: _audit _internal _introspection _telemetry _thefishbucket add_on_builder_index history iis_logs main perfmon registry summary windows wineventlog
    08-10-2018 12:45:52.794 -0700 ERROR loader - Done
    08-10-2018 12:45:52.794 -0700 ERROR loader - Traceback (most recent call last):
    08-10-2018 12:45:52.794 -0700 ERROR loader - File "C:\Program Files\Splunk\Python-2.7\Lib\site-packages\splunk\clilib\cli.py", line 11, in
    08-10-2018 12:45:52.794 -0700 ERROR loader - import logging as logger
    08-10-2018 12:45:52.794 -0700 ERROR loader - File "C:\Program Files\Splunk\Python-2.7\Lib\logging\__init__.py", line 618, in
    08-10-2018 12:45:52.794 -0700 ERROR loader - _handlers = weakref.WeakValueDictionary() #map of handler names to handlers
    08-10-2018 12:45:52.794 -0700 ERROR loader - AttributeError: 'module' object has no attribute 'WeakValueDictionary'
    08-10-2018 12:45:52.794 -0700 ERROR loader - <<<<< EOF (pre-flight-checks)

Run a script in a search head cluster (SHC)

Hi all, we have some scripts that fill lookups via the Splunk lookup REST API ([link text][1]). We also have a search head cluster (SHC). It would be great to use the SHC's capability to run our scripts on one of the alive nodes. The best candidate for this seems to be inputs.conf: we can not only run the script, but also collect its STDOUT and STDERR into an index (docker style), for example:

    [script://$SPLUNK_HOME/etc/apps/myapp/bin/lookup_fill.py]
    interval = 50 23 * * *
    sourcetype = lookup_fill
    index = index_for_scripts_output

But when we use inputs.conf, our script starts on all SHC nodes. Can you advise a way to run the script from inputs.conf on a single node, or perhaps a better way to:

1. Run a custom script on one of the SHC nodes (in the best case, the least loaded one), and
2. Collect STDOUT and STDERR from the script into an index?

Thank you.

[1]: http://dev.splunk.com/view/webframework-developapps/SP-CAAAEZG

Splunk Developer License Question

Greetings Splunk Community & Mods, I have a question about the Splunk Dev License. A little more than a year and a half ago, I requested and was granted a dev license using my personal email and Splunk account, but that has since expired. I recently requested a new Splunk dev license and received an email explaining that my request was denied due to an inability to verify my eligibility. Has something changed? Thanks all!

Apply command on a large field

Hi everyone, I am trying to apply logistic regression to predict phishing based on a baseline of phishing email data. The issue I am facing is that the `apply` command execution inside Splunk is not consistent: it was working fine, but now the job is stuck at Finalizing. When I inspected the job, it had these 2 errors:

1. From the Python csv module: `Error: field larger than field limit`
2. From the apply command: `Error in 'apply' command: Failed to load model`

I have tried clearing the cache and recreating the fit and apply model; nothing works. Not sure how to resolve this issue. Can someone please help me with this?
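For context, the MLTK workflow in question follows this general shape (the model name, lookup, and feature fields here are illustrative placeholders, not the actual ones from this setup). Training against the baseline:

    | inputlookup phishing_baseline.csv
    | fit LogisticRegression is_phishing from subject_length num_links sender_domain_age into phishing_model

And later, scoring new events with the saved model:

    index=email sourcetype=mail_logs
    | apply phishing_model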

How can I configure Splunk Enterprise so it can see the forwarder?

Hey, please help! I did all the steps of the universal forwarder configuration, but I still can't forward data into Splunk Enterprise. How can I configure Splunk Enterprise so it can see the forwarder? ![alt text][1] ![alt text][2] [1]: /storage/temp/254675-ki.png [2]: /storage/temp/254674-capture.png
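As a rough sketch of the two halves that have to line up (the hostname, port, and credentials below are examples): the Splunk Enterprise side must be listening, and the forwarder must point at it.

    # on the Splunk Enterprise instance: open a receiving port
    splunk enable listen 9997 -auth admin:changeme

    # on the universal forwarder: point it at that receiver
    splunk add forward-server splunk-server.example.com:9997
    splunk list forward-server   # shows active vs. configured-but-inactive receivers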

Calculate average response time per application

Hi, I am a bit new to Splunk and the query language. My logs contain "application name", "request timestamp", and "response timestamp". Using these, I need to get the average response time for all my applications. Please guide. Thanks in advance.
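A sketch of the usual pattern, assuming field names `application_name`, `request_time`, and `response_time` and a timestamp format like `2018-08-10 12:45:50.153` (adjust both to match the actual data): convert both timestamps to epoch seconds with `strptime`, subtract, then average per application.

    <your search>
    | eval duration_secs = strptime(response_time, "%Y-%m-%d %H:%M:%S.%3N")
                         - strptime(request_time,  "%Y-%m-%d %H:%M:%S.%3N")
    | stats avg(duration_secs) AS avg_response_secs BY application_name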

Splunk searching nested json

Hello, I use automatic translation because I am not good at English, sorry. I took the NVD CVE list (JSON feed) into Splunk. When I search with

    index="testIndex" product_name="openssl" "version_data"="1.6.0"

events are returned, even though there is no "1.6.0" among the versions of openssl. I want to link product with version, but it does not work as expected. I can't get spath or mvexpand to extract the nested arrays properly. Someone help me.

    {
      "cve": {
        "CVE_data_meta": { "ID": "CVE-2013-0169", "ASSIGNER": "cve@mitre.org" },
        "affects": {
          "vendor": {
            "vendor_data": [
              {
                "vendor_name": "openssl",
                "product": {
                  "product_data": [
                    {
                      "product_name": "openssl",
                      "version": {
                        "version_data": [
                          { "version_value": "*" },
                          { "version_value": "0.9.8" },
                          { "version_value": "0.9.8a" },
                          { "version_value": "0.9.8b" },
                          { "version_value": "0.9.8c" },
                          { "version_value": "0.9.8d" },
                          { "version_value": "0.9.8f" },
                          { "version_value": "0.9.8g" }
                        ]
                      }
                    }
                  ]
                }
              },
              {
                "vendor_name": "oracle",
                "product": {
                  "product_data": [
                    {
                      "product_name": "openjdk",
                      "version": {
                        "version_data": [
                          { "version_value": "-" },
                          { "version_value": "1.6.0" },
                          { "version_value": "1.7.0" }
                        ]
                      }
                    }
                  ]
                }
              },
              {
                "vendor_name": "polarssl",
                "product": {
                  "product_data": [
                    {
                      "product_name": "polarssl",
                      "version": {
                        "version_data": [
                          { "version_value": "0.10.0" },
                          { "version_value": "0.10.1" },
                          { "version_value": "0.11.0" }
                        ]
                      }
                    }
                  ]
                }
              }
            ]
          }
        }
      },
      "publishedDate": "2013-02-08T19:55Z",
      "lastModifiedDate": "2018-08-09T01:29Z"
    }
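A sketch of one way to pair each product with only its own versions (path names taken from the JSON above; `testIndex` as in the question):

    index="testIndex"
    | spath path=cve.affects.vendor.vendor_data{} output=vendor
    | mvexpand vendor
    | spath input=vendor path=product.product_data{} output=product
    | mvexpand product
    | spath input=product path=product_name
    | spath input=product path=version.version_data{}.version_value output=version_value
    | search product_name="openssl" version_value="1.6.0"

Because each row then carries only one product's versions, the final `search` should return nothing for this event: 1.6.0 belongs to the openjdk row, not the openssl row.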

Do I create indexes on the search head or on each indexer in a non-clustered environment?

Hi, we have 2 indexers in our environment, 2 forwarders, and 1 search head. If we create indexes on the search head using the GUI, will the configuration for these be reflected on the indexers? Please advise detailed steps to create indexes in a simple Splunk environment with 1 search head, 2 forwarders, and 2 indexers. Regards, smdasim
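For reference, a minimal indexes.conf stanza of the kind that would go on each indexer (the index name is a placeholder); creating an index through the search head GUI only writes this file locally on that search head, so on standalone indexers it has to be placed on each one:

    # indexes.conf -- deploy to $SPLUNK_HOME/etc/system/local (or an app) on BOTH indexers
    [my_new_index]
    homePath   = $SPLUNK_DB/my_new_index/db
    coldPath   = $SPLUNK_DB/my_new_index/colddb
    thawedPath = $SPLUNK_DB/my_new_index/thaweddb

A restart of splunkd on each indexer picks up the new stanza.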

User flow design on Sankey visualization in Splunk

Hello Splunkers, I would like to show the user flow on a Sankey visualization. For that, I have index, sourcetype, interaction_id, activity_id, screen_id, flow_name, and component fields. The component field has 2 values: one is APIGATEWAY and the other is ES. APIGATEWAY has some flow names and ES also has some flow names, and there is a relation between those two sets of flow name values. **I need to get the relation between the flow names of APIGATEWAY and ES as two separate nodes or fields on the Sankey visualization.** (Note: a flow name / API name in APIGATEWAY calls one or more other flow names / API names.) I have this query:

    index=abc sourcetype=123 screen_id=xyz interaction_id=def
    | stats count by activity_id screen_id component flow_name
    | search flow_name!=""

Note: activity_id is different for all flow_names. ![alt text][1] Please suggest how to show the relation between APIGATEWAY and ES flow names on a Sankey visualization.

[1]: /storage/temp/254678-image.png
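The Sankey visualization wants rows of source, target, and count. A sketch of one way to pair APIGATEWAY flows with the ES flows seen in the same interaction (this assumes interaction_id is the right correlation key, which may not hold for your data):

    index=abc sourcetype=123 screen_id=xyz flow_name!=""
    | stats values(eval(if(component=="APIGATEWAY", flow_name, null()))) AS source
            values(eval(if(component=="ES", flow_name, null()))) AS target
            BY interaction_id
    | mvexpand source
    | mvexpand target
    | stats count BY source target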

Timestamps are not getting extracted properly

Timestamps are not getting extracted properly. We have one type of timestamp, but the clock there is different: hours run from 0-24, the day of the month from 1-31 (not zero-padded), and likewise the month from 1-12. See the timestamp examples below:

    [8/10/18 0:20:37:469 EDT]
    [8/9/18 11:59:59:796 EDT]
    [8/9/18 13:16:38:194 EDT]
    [8/12/18 1:49:08:943 EDT]
    [8/11/18 22:59:45:370 EDT]

I tried to use this props.conf, but it didn't work:

    [sourcetypename]
    BREAK_ONLY_BEFORE = \[\d+\/\d+\/\d+\s\d+[:]\d+[:]\d+[:]\d+\s\w{3}\]
    TIME_FORMAT = %m/%e/%y %k:%M:%S:%3N

After this I tried to extract using datetime.xml; that works to some extent but not fully. Using that, I am getting a delay in the indexed event timestamp. Please help...
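A sketch of a props.conf that is often enough for this shape of timestamp (the TZ and lookahead values are assumptions to verify against the data): the key additions are a `TIME_PREFIX` so the parser starts right at the opening bracket, and a lookahead bound so it does not wander past the timestamp.

    [sourcetypename]
    BREAK_ONLY_BEFORE = \[\d{1,2}/\d{1,2}/\d{2}\s\d{1,2}:\d{2}:\d{2}:\d{3}\s\w{3}\]
    TIME_PREFIX = \[
    TIME_FORMAT = %m/%e/%y %k:%M:%S:%3N
    MAX_TIMESTAMP_LOOKAHEAD = 26
    TZ = US/Eastern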

Change hostname

I am trying to change the host name; the name comes from the log files:

    Sep 20 11:13:18 10.50.3.100 Sep 20 11:13:15 ac.dc1.buttercomom.com ASM:

The host name is always just before "ASM:". I tried to change it through transforms.conf, but the host name is not changing. Below are my config files.

**transforms.conf**

    [host_name]
    SOURCE_KEY = _raw
    REGEX = \s(\w+.\w+.\w+.\w+) ASM:$
    FORMAT = host::$1
    DEST_KEY = MetaData:Host

**props.conf**

    [f5xxx]
    DATETIME_CONFIG =
    NO_BINARY_CHECK = true
    TIME_PREFIX = x0x.xx.x.xx
    category = Custom
    pulldown_type = true
    TRANSFORMS-register = host_name

How can I change the host name? Secondly, if there is a problem with my regex, how can I identify that? Any clue from the log file?
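Two sketches that may help. First, a regex can be tested interactively at search time with `rex` before baking it into transforms.conf (the field name `new_host` is arbitrary):

    index=<your_index> sourcetype=f5xxx
    | rex "\s(?<new_host>[\w.-]+)\s+ASM:"
    | table _raw new_host

If `new_host` comes out empty, the regex is the problem. Second, a tightened version of the transform; the original's `$` anchor will fail if anything follows "ASM:" in the event, and index-time TRANSFORMS only take effect for newly indexed data after a restart:

    [host_name]
    SOURCE_KEY = _raw
    REGEX = \s([\w.-]+)\s+ASM:
    FORMAT = host::$1
    DEST_KEY = MetaData:Host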

