Channel: Questions in topic: "splunk-enterprise"

Compatibility between Splunk Enterprise and Cisco Security Suite?

I have Splunk Enterprise v6.5.0 and am planning to upgrade to 6.5.6 or 6.6.3 due to some issues, so I wanted to know whether the Splunk App for Cisco Security Suite v3.1.2 will be compatible with 6.5.6 or 6.6.3+. Any developers or users who have tested this, please share your views or results.

Is it possible to anonymize/mask the data being sent from AIX servers to Splunk Enterprise 6.6.1?

We have installed and configured Splunk Universal Forwarder 6.6.1 on an AIX server. It is working fine and I am able to see the logs in Splunk Enterprise 6.6.1. However, the universal forwarder is not anonymizing/masking the data before forwarding it to the indexer when we use regex/sed. We have even tried masking the data at the indexer level, but no luck.
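
One thing worth checking first: the universal forwarder does not parse event data, so SEDCMD/TRANSFORMS masking only takes effect on the first full Splunk instance in the path, i.e. the indexer or a heavy forwarder. A minimal props.conf sketch of indexer-side masking, assuming a hypothetical sourcetype aix_syslog and a hypothetical ssn=123-45-6789 pattern to mask:

    [aix_syslog]
    # SEDCMD runs at parse time, so this stanza must live on the indexer
    # (or a heavy forwarder), not on the universal forwarder.
    SEDCMD-mask_ssn = s/ssn=\d{3}-\d{2}-\d{4}/ssn=XXX-XX-XXXX/g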

DBX query runs in real time when called inside a saved search

I am using DB queries inside our saved searches, and every time we call a saved search from a dashboard, it runs the DB query again. Ideally the query should run at its scheduled time and the dashboard should just return that data, which is not happening. Thanks, Shashank
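
A distinction that may explain this: in SPL, | savedsearch re-executes the search each time, while | loadjob reuses the artifacts of the most recent scheduled run. A minimal sketch, assuming a hypothetical scheduled search my_dbx_report owned by admin in the search app:

    | loadjob savedsearch="admin:search:my_dbx_report"

This only works if the search is actually scheduled, since loadjob needs a prior run to read from.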

Routing to index based on Regex extraction

Hi all, I want to know if it is possible to route data to different indexes dynamically, based on the value captured by a regex. Example data:

    Department:Sec Team, Value=3, Date=12/12/2009
    Department:Sales, Value=1, Date=12/03/2010
    Department:Other, Value=23, Date=03/02/2011

I know you can hard-code the routing in transforms.conf like this:

    [route1]
    REGEX = Department:Sec Team
    DEST_KEY = _MetaData:Index
    FORMAT = index_sec

    [route2]
    REGEX = Department:Sales
    DEST_KEY = _MetaData:Index
    FORMAT = index_sales

    [route3]
    REGEX = Department:Other
    DEST_KEY = _MetaData:Index
    FORMAT = index_other

However, this becomes very messy as more and more departments are created. Is it possible to do something like this instead?

    [route]
    REGEX = Department:
    DEST_KEY = _MetaData:Index
    FORMAT = index_

I am using Splunk Enterprise 6.4.2.
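
One single-stanza approach worth trying: FORMAT in transforms.conf supports $n references to REGEX capture groups, so the index name can be built from the matched department. A sketch, assuming department names map cleanly onto index names (values with spaces or mixed case, like "Sec Team", would still need extra handling) and that every target index already exists:

    [route_by_department]
    REGEX = Department:(\w+)
    DEST_KEY = _MetaData:Index
    FORMAT = index_$1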

Timechart yes, chart no? Confused

Hello, I am using the following search:

    index="ips_snaplogic" "postsales" lvl="ERROR"
    | spath
    | rex mode=sed "s/.*{/{/"
    | spath output=msg path=Detail.error.message.message
    | timechart count BY msg

When I use timechart, I get a visual. When I use chart, no results. Any idea why? Thanks

How to search an unstructured log for all values in a lookup file?

Hi, I'd like to search our log for multiple possible errors from our lookup file: ![alt text][1] The goal is to return only the records containing, in any field, one of the strings in the Error column, and to show the corresponding value from the Source column. Is there a way, such as | inputlookup errors.csv | foreach ... | search ...? Many thanks in advance, Luc [1]: /storage/temp/217576-table.png
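
A common pattern for this: a subsearch whose output field is renamed to "search" turns the lookup values into OR'd raw search terms. A sketch, assuming errors.csv has Error and Source columns and a hypothetical index name:

    index=app_logs
        [| inputlookup errors.csv | fields Error | rename Error AS search]

Mapping each matching event back to its Source value still requires the matched string as a field; one option is to extract it (for example with rex against the known error strings) and then finish with | lookup errors.csv Error OUTPUT Source.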

Splunk + Netflow (Riverbed)

Hi, can someone direct me to the app I need to install to get data coming from my NetFlow (Flow Export) appliance into Splunk Enterprise? I have installed a forwarder and set the deployment/receiver server address to the address where Splunk Enterprise is installed. I have followed the Splunk Stream guide and installed that app. Is this the right way? Many thanks

Best way to index a MySQL entry in Splunk as soon as it appears in MySQL

Hi Splunk community, I have a MySQL table that program A writes entries to; program B then deletes them after processing. I want to index MySQL entries in Splunk as soon as they appear in MySQL. I am currently using DB Connect but have not experimented with it fully. What is the best way to index MySQL entries in Splunk for my scenario? Appreciate your thoughts on this. Many thanks in advance.
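
Since rows are deleted after processing, a DB Connect rising-column input is the usual fit: it checkpoints the highest value seen, so already-indexed rows are not re-read even once program B removes them, and "as soon as" is bounded by the polling interval. A sketch of the rising-column query, where the table and column names are assumptions; DB Connect substitutes the last checkpointed value for the ? placeholder:

    SELECT id, payload, created_at
    FROM events_queue
    WHERE id > ?
    ORDER BY id ASC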

Splunk dashboard refresh every 24 hours required (00:00 to 24:00 MST)

Hi All, I have a Splunk search that I run daily for the past day by selecting a date range between 09/18/2017 00:00:00 and 09/18/2017 24:00:00, i.e. one complete day. It returns tabular statistics summarizing the total, failed, and passed records for that day. I now want to automate this to run every day, get the summary for the previous day, and display it on a dashboard. After getting the summary table for the selected date range, I save it as a dashboard panel powered by an inline search. Then I open the dashboard, click Edit > Edit Search (mirror icon) > Select Time Range > 'Use Time Picker', set Auto Refresh Delay to a custom value of 24h, and save. Please let me know whether this will refresh the panel exactly every 24 hours and advance the date range, so the next refresh covers 09/19/2017 00:00:00 to 09/19/2017 24:00:00. I also need the refresh to happen on 09/20/2017 at 2:00 AM. Thank you.
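
A note on how a rolling "previous day" window is usually built: an auto-refresh delay only re-runs the same search with its saved time range, so the window is normally advanced with relative time modifiers, often in a scheduled saved search. A sketch, where the index, field, and status values are assumptions:

    index=app_summary earliest=-1d@d latest=@d
    | stats count AS total,
            count(eval(status="PASS")) AS passed,
            count(eval(status="FAIL")) AS failed

Scheduling that saved search with cron 0 2 * * * would run it at 2:00 AM each day and always cover the previous calendar day.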

Need to eval a fixed date range instead of relative time from a custom time field

I am currently using this method to apply relative time frames to a custom date field, which only gives me the last 3 months:

    | eval NewTime=strptime(ProjCreatedDate,"%Y-%m-%d %H:%M:%S")
    | eval _time=NewTime
    | where _time>=relative_time(now(),"-3mon") AND _time<=now()
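
If the goal is a fixed calendar range rather than a relative window, strptime can build both boundaries from literal dates. A sketch; the example dates are assumptions:

    | eval NewTime=strptime(ProjCreatedDate,"%Y-%m-%d %H:%M:%S")
    | where NewTime>=strptime("2017-01-01 00:00:00","%Y-%m-%d %H:%M:%S")
          AND NewTime<strptime("2017-07-01 00:00:00","%Y-%m-%d %H:%M:%S")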

Can we run multiple queries sequentially using a single query?

I have a scenario where I need to: 1) append results to a .csv file; 2) once the csv file is updated, eliminate duplicate results from it; and 3) perform a lookup against the csv file. I am running three queries for this, scheduled every day so the operations run in sequence. Can the three queries be merged into one (net query = query1 + query2 + query3), so that the first operation is followed by the second and then the third, i.e. query 3 executes after query 2, and query 2 after query 1? This would help because if the scheduler skips query 2, I may not get the right results; if the queries are combined, I will always get correct results.
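
Since SPL commands execute left to right, the first two steps can usually be collapsed into one pipeline that appends to the lookup, dedups, and writes it back. A sketch with hypothetical index, field, and file names:

    index=app_logs sourcetype=tickets
    | table ticket_id status _time
    | inputlookup append=true tickets.csv
    | dedup ticket_id
    | outputlookup tickets.csv

The lookup step (query 3) could then either be chained onto the end of this pipeline or stay a separate search that simply runs after it.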

DBX query drilldown from Events

I am looking for a way to create a custom drilldown menu option in the Events tab for a specific field value. The example is shown below: when the user clicks on the Execution_ID field value, I would like to add a menu option "View Execution Error" that runs a dbxquery, passing in the Execution_ID value. Is this possible, and if so, can you send me instructions on how? ![alt text][1] [1]: /storage/temp/217582-cuserskbcallpicturessplunk-drilldown-menu.jpg
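
Field-level menu entries in the event viewer come from workflow actions. A sketch of a link-type workflow action in workflow_actions.conf, assuming a hypothetical dashboard execution_error in the search app that runs the dbxquery from a form token:

    [view_execution_error]
    label = View Execution Error
    type = link
    display_location = field_menu
    fields = Execution_ID
    link.method = get
    link.uri = /app/search/execution_error?form.exec_id=$Execution_ID$

The target dashboard would then contain something like | dbxquery query="SELECT ... WHERE execution_id='$exec_id$'"; the app name, dashboard, and SQL here are assumptions.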

Can I make a search-time field extraction from a piece of the file path/source?

I need to create a field in Splunk that is a portion of the file path. Do I need to do that at index time, or can I do it at search time? I know the regex; I just don't know how to turn a portion of source into a field on the event. Thanks in advance!
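
This can be done at search time, since source is itself available as a search-time field. Two hedged sketches, assuming a made-up path layout of /var/log/<app>/<file>: inline with rex, or persisted as an extraction in props.conf on the search head:

    ... | rex field=source "/var/log/(?<app_name>[^/]+)/"

    # props.conf (search head), same assumed layout
    [my_sourcetype]
    EXTRACT-app_name = /var/log/(?<app_name>[^/]+)/ in source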

Exporting results from the search screen results in "414 Request-URI Too Long"

When attempting to export results from search, the .csv that is created has no data, only this HTML error in it:

    414 Request-URI Too Long
    The URL your client requested was too long.

Disable PAN App Wildfire Reports

Hello, the PAN App is running jobs every couple of seconds reaching out for a WildFire report, but we don't have a WildFire subscription. How can I disable these reports? Thanks,

How to replicate buckets into 2 indexes?

Hi all, I'd like to achieve the following: I have data ingested in one index and I want to replicate it into another index. Since manually moving buckets is a safe operation, I tried to copy one bucket into another index. When I restart Splunk, that bucket becomes "inflight" and, obviously, I can't see its data under that index. I've also tried changing the bucket ID, but nothing happened. Any ideas? Thanks in advance

Diag File failure

Hello All, I'm receiving the following error when I try to create a diag file:

    ./splunk diag
    Collecting components: app:splunk_app_db_connect, conf_replication_summary, consensus, dispatch, etc, file_validate, index_files, index_listing, kvstore, log, pool, searchpeers
    Skipping components: rest
    Selected diag name of: diag-pel501.mascocs.com-2017-09-19_12-51-02
    Starting splunk diag...
    Logged search filtering is enabled.
    Skipping REST endpoint gathering...
    Determining diag-launching user...
    Getting version info...
    Getting system version info...
    Getting file integrity info...
    Getting network interface config info...
    Getting splunk processes info...
    Getting netstat output...
    Getting info about memory, ulimits, cpu (on windows this takes a while)...
    Getting etc/auth filenames...
    Getting Sinkhole filenames...
    Getting search peer bundles listings...
    Getting conf replication summary listings...
    Getting KV Store listings...
    Getting index listings...
    Copying Splunk configuration files...
    filtered out file '/opt/splunk/etc/apps/SA-VMW-HierarchyInventory/lookups/TimeVirtualMachinesOnDatastore.csv' limit: 10485760 size: 11582385
    filtered out file '/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv' limit: 10485760 size: 1261901947
    Exception occurred while generating diag, we are deeply sorry.
    Traceback (most recent call last):
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 4573, in main
        create_diag(options, log_buffer)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 4096, in create_diag
        copy_etc(options)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 1939, in copy_etc
        add_dir_to_diag(etc_dir, "etc", ignore=etc_filt)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 693, in add_dir_to_diag
        storage.add_dir(dir_path, diag_path, ignore=ignore)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 488, in add_dir
        collect_tree(dir_path, diag_path, adder, ignore=ignore)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
        collect_tree(srcname, dstname, actor, ignore)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
        collect_tree(srcname, dstname, actor, ignore)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
        collect_tree(srcname, dstname, actor, ignore)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3072, in collect_tree
        actor(srcname, dstname)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 649, in add_file_to_diag
        add_fake_special_file_to_diag(file_path, diag_path, special_type)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 625, in add_fake_special_file_to_diag
        storage.add_fake_file(file_path, name)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 506, in add_fake_file
        tinfo.size = 0
    AttributeError: 'NoneType' object has no attribute 'size'
    Diag failure, writing out logged messages to '/tmp/diag-fail-TUZmYc.txt', please send output + this file to either an existing or new case ; http://www.splunk.com/support
    We will now try to clean out any temporary files...

Has anyone seen this error before?

Downloading CSV files from an external server every hour

**Here is my use-case**: Every hour, I need to download a .csv file from my server using a REST API, then index these .csv files with Splunk. **My approach:** I wrote a Splunk modular input app using the Splunk SDK that downloads the CSV files into a user-specified folder on the Splunk file system, and Splunk then monitors that entire folder/directory. Could you validate this approach? I'm also looking for ways to optimize it.
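
For the monitoring half of this approach, a sketch of the inputs.conf stanza, with a hypothetical drop folder. crcSalt = <SOURCE> is worth considering if the hourly files start with identical header rows, since Splunk otherwise skips files whose initial content CRC it has already seen:

    [monitor:///opt/splunk_data/csv_drop]
    sourcetype = csv
    index = main
    crcSalt = <SOURCE>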

Average time between two jobs.

Hi, here is my search query:

    index=* sourcetype="WMI:WinEventLog:Application" SourceName="Investran RS Word Processing Service" Message=*
    | table Message, SourceName, _time
    | dedup _time
    | sort -_time

and this brings up: ![alt text][1] [1]: /storage/temp/217585-search.png What I am trying to do, if possible, is calculate the average time between stop/start, and if that average is greater than, say, 10 minutes, only bring back those results/messages. Thanks,
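
A common way to measure the gap between consecutive events is streamstats. A sketch that keeps only events arriving more than 10 minutes after the previous one; to average instead, end with | stats avg(gap_minutes):

    index=* sourcetype="WMI:WinEventLog:Application" SourceName="Investran RS Word Processing Service" Message=*
    | sort 0 _time
    | streamstats current=f window=1 last(_time) AS prev_time
    | eval gap_minutes=round((_time-prev_time)/60,1)
    | where gap_minutes>10
    | table _time Message gap_minutes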

Need help with regex in props.conf

Hi all, here is how my raw logs look. I need help with props.conf so that I can index by the second time field (timestamp=...) instead of the first one.

    Sep 19 12:45:19 129.106.x.x fdbsyslog: timestamp=2017.09.19 - 12:25:16.056 devname=123 device_id=123 type=alert

Thanks in advance
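
A sketch of the timestamp settings in props.conf, assuming a hypothetical sourcetype name fdb_syslog; TIME_PREFIX skips past the syslog header so extraction starts at the timestamp= field:

    [fdb_syslog]
    TIME_PREFIX = timestamp=
    TIME_FORMAT = %Y.%m.%d - %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 30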