How to edit props.conf to cope with two different time values in a log file
Hi All,
I have created an index and sourcetype for two logs files.
I have set up my props.conf to extract the date/time and break each event onto its own line; however, one of my logs has a colon after the timestamp and those events are not being separated correctly.
See below.
19/09/2017 13:34:51.438
2017-09-19 13:34:51.438683 [ptp1:pps--phc1(ens1f0/ens1f1)], last: 0, mean: 0, min: 2147483647, max: -2147483647, bad-period: 0,
overflows: 0
19/09/2017 13:34:51.437
2017-09-19 13:34:51.437853: warning: ptp ptp1: failed to receive Announce within 12.000 seconds
2017-09-19 13:34:51.437898: debug: ptp ptp1: state PTP_LISTENING
2017-09-19 13:34:51.437911: debug: netRefreshIGMP
19/09/2017 13:34:50.823
2017-09-19 13:34:50.823439 [phc0(ens1f0/ens1f1)->system], offset: -8.875, freq-adj: -42949.984, in-sync: 1
my props.conf file
[ptp_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^\d{4}\-\d{2}\-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{6}\s
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_PREFIX = ^
If I put a colon into regex it will miss the other log file.
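One tweak I am considering, as a hedged sketch rather than a definitive fix: make the colon optional in the break regex so both formats match under a single sourcetype. The TIME_FORMAT line is an assumption based on the samples above:

[ptp_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\.\d{6}:?\s
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N

The :? tolerates the colon the second log emits after the microseconds while still matching the first log's plain space.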
Is the only way to do this two sourcetypes?
Thanks,
↧
DBX query runs in real time when called inside a saved search.
I am using database queries inside our saved searches, and every time we call a saved search in a dashboard, the DB query runs again. Ideally, the query should run only at its scheduled time and the dashboard should just display those results, but that is not happening.
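A hedged sketch of what I expected instead: if the saved search is scheduled, a panel can load the most recent scheduled result rather than re-running the search (and the DB query). The owner:app:name string below is a placeholder:

| loadjob savedsearch="admin:search:my_dbx_saved_search"

Referencing the saved search as a report in the dashboard, rather than an inline | savedsearch call, should likewise reuse the last scheduled run.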
Thanks,
Shashank
↧
↧
Lenel OnGuard Add-on for Splunk: Failed to initialize pool: Login failed for user
Any guidance on connecting the add-on?
We have local SQL account credentials on the server.
Opening the app just shows Unix-related server metrics?
↧
When I use timechart, I get a visual. When I use chart, no results. Any idea why?
Hello,
I am using the following search:
index="ips_snaplogic""postsales" lvl="ERROR"| spath| rex mode=sed "s/.*{/{/"
| spath output=msg path=Detail.error.message.message
| timechart count BY msg
This is the JSON I am trying to drill into to grab the error message that I want to split the chart by.
//XXX/projects/Sales_PostSales_processPostSaleOrder_VIP_CCT:{
"Service":"Enterprise Sales",
"Date":"09/19/2017 08:44:41.466",
"Environment":"XXX",
"Debug":"Error",
"Source":"PostSalesIntegration",
"Description":"Error::processPostSaleOrder_VIP_CCT. Error occurred while trying to process the message. Failed to execute HTTP request",
"Message_Unique_Id":null,
"Message_qualifier":null,
"JMSMessageID":null,
"Detail":{
"error":{
"message":"Failed to execute HTTP request",
"reason":"Read timed out",
"resolution":"Please check the Snap properties."
}
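A hedged guess from the sample above: Detail.error.message looks like a plain string, so path=Detail.error.message.message would leave msg null on every event. timechart still buckets those events by _time (hence the visual), while chart count BY msg discards rows where the BY field is null, giving no results. A sketch of the adjusted search:

index="ips_snaplogic" "postsales" lvl="ERROR" | spath | rex mode=sed "s/.*{/{/"
| spath output=msg path=Detail.error.message
| chart count BY msg

If msg can still be empty on some events, adding | fillnull value="(no msg)" msg before the chart keeps those events visible.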
Thanks
↧
Why doesn't the Splunk Add-on for Symantec DLP use the Data Loss Prevention CIM model?
The app seems to only use the tag "alert", whereas the model uses "dlp" and "incident" (http://docs.splunk.com/Documentation/CIM/latest/User/DataLossPrevention).
Obviously I can add the tags myself, but the add-on also seems to be missing other items needed to conform to the model. Any plans to update it? It hasn't been updated in quite a while.
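As a stopgap, a hedged local override; the eventtype name and its search are assumptions to adapt to the add-on's actual sourcetype:

eventtypes.conf:
[symantec_dlp_incident]
search = sourcetype="symantec:dlp:incident"

tags.conf:
[eventtype=symantec_dlp_incident]
dlp = enabled
incident = enabled

That covers the tags; field aliases and calculated fields would still be needed for full model compliance.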
↧
↧
Palo Alto Networks App for Splunk: How do you disable Wildfire reports?
Hello,
The PAN App is running jobs every couple of seconds reaching out for a WildFire report, but we don't have a WildFire subscription.
How can I disable these reports?
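A hedged sketch, assuming the retrieval runs as a saved search in the app; the app folder and stanza name below are placeholders, so check the app's default/savedsearches.conf for the real names, then disable with a local override:

$SPLUNK_HOME/etc/apps/SplunkforPaloAltoNetworks/local/savedsearches.conf:
[WildFire Report Retrieval]
disabled = 1

If the retrieval is instead a scripted or modular input, the equivalent is disabled = 1 under that input's stanza in local/inputs.conf.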
Thanks,
↧
Error message when creating a diag file
Hello All,
I'm receiving the following error when I try to create a diag file:
./splunk diag
Collecting components: app:splunk_app_db_connect, conf_replication_summary, consensus, dispatch, etc, file_validate, index_files, index_listing, kvstore, log, pool, searchpeers
Skipping components: rest
Selected diag name of: diag-pel501.mascocs.com-2017-09-19_12-51-02
Starting splunk diag...
Logged search filtering is enabled.
Skipping REST endpoint gathering...
Determining diag-launching user...
Getting version info...
Getting system version info...
Getting file integrity info...
Getting network interface config info...
Getting splunk processes info...
Getting netstat output...
Getting info about memory, ulimits, cpu (on windows this takes a while)...
Getting etc/auth filenames...
Getting Sinkhole filenames...
Getting search peer bundles listings...
Getting conf replication summary listings...
Getting KV Store listings...
Getting index listings...
Copying Splunk configuration files...
filtered out file '/opt/splunk/etc/apps/SA-VMW-HierarchyInventory/lookups/TimeVirtualMachinesOnDatastore.csv' limit: 10485760 size: 11582385
filtered out file '/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv' limit: 10485760 size: 1261901947
Exception occurred while generating diag, we are deeply sorry.
Traceback (most recent call last):
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 4573, in main
create_diag(options, log_buffer)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 4096, in create_diag
copy_etc(options)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 1939, in copy_etc
add_dir_to_diag(etc_dir, "etc", ignore=etc_filt)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 693, in add_dir_to_diag
storage.add_dir(dir_path, diag_path, ignore=ignore)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 488, in add_dir
collect_tree(dir_path, diag_path, adder, ignore=ignore)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
collect_tree(srcname, dstname, actor, ignore)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
collect_tree(srcname, dstname, actor, ignore)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3069, in collect_tree
collect_tree(srcname, dstname, actor, ignore)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 3072, in collect_tree
actor(srcname, dstname)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 649, in add_file_to_diag
add_fake_special_file_to_diag(file_path, diag_path, special_type)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 625, in add_fake_special_file_to_diag
storage.add_fake_file(file_path, name)
File "/opt/splunk/lib/python2.7/site-packages/splunk/clilib/info_gather.py", line 506, in add_fake_file
tinfo.size = 0
AttributeError: 'NoneType' object has no attribute 'size'
Diag failure, writing out logged messages to '/tmp/diag-fail-TUZmYc.txt', please send output + this file to either an existing or new case ; http://www.splunk.com/support
We will now try to clean out any temporary files...
Has anyone seen this error before?
↧
Indexing a CSV file from a server using REST API and Splunk SDK
**Here is my use-case**:
Every hour, I need to download a .csv file from my server using a REST API. Using Splunk, I need to index these .csv files.
**My Approach:**
I wrote a Splunk modular input app using the Splunk SDK that downloads the CSV files to a user-specified folder on the Splunk file system, and Splunk then monitors that entire folder/directory.
Could you validate this approach? I am also looking for ways to optimize it.
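For reference, a minimal sketch of the monitoring half, with hypothetical path, index, and sourcetype:

[monitor:///opt/splunk_downloads/csv]
index = main
sourcetype = hourly_csv
disabled = 0

As an optimization, the modular input could also skip the intermediate files entirely and stream rows as events through the SDK's EventWriter, which avoids double handling and leaves checkpointing to the script.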
↧
rex field extraction does not work once moved to a saved field extraction
I am parsing data from a trap def as follows:
======================== Trap attributes =========================
Timestamp: 'September 19, 2017 6:56:50 AM CDT'
Agent: '10.10.54.xxx'
Enterprise OID: '.1.3.6.1.4.1.xxxxx'
Generic Type: '6'
Specific Type: '2'
Varbinds: [oid]->[varbind]
'.1.3.6.1.2.1.1.1.0' --> 'dynaTrace Trap'
'.1.3.6.1.4.1.31094.1.1' --> 'Application Process Unavailable (unexpected)'
'.1.3.6.1.4.1.31094.1.2' --> 'Agent 'OpenPlatform-PRO-service-kyc-validation@ip-10-13-12-248' connection lost'
'.1.3.6.1.4.1.31094.1.3' --> 'Connection to a previously connected Application Process/Agent has been lost and agent has not been able to disconnect..'
'.1.3.6.1.4.1.31094.1.4' --> 'Error'
'.1.3.6.1.4.1.31094.1.5' --> 'b7250936-8068-41e3-892a-e0bec55xxxxx'
'.1.3.6.1.4.1.31094.1.6' --> 'albdynaserxxx'
'.1.3.6.1.4.1.31094.1.7' --> 'Monitoring'
'.1.3.6.1.4.1.31094.1.8' --> '2017091906xxxx'
'.1.3.6.1.4.1.31094.1.9' --> '2017091906xxxx'
'.1.3.6.1.4.1.31094.1.10' --> '6s'
'.1.3.6.1.4.1.31094.1.11' --> '-'
'.1.3.6.1.4.1.31094.1.12' --> '-'
'.1.3.6.1.4.1.31094.1.13.1' --> 'Immediate'
'.1.3.6.1.4.1.31094.1.13.2' --> '0'
'.1.3.6.1.4.1.31094.1.13.3' --> '0'
'.1.3.6.1.4.1.31094.1.13.4' --> '60000'
'.1.3.6.1.6.3.18.1.3.0' --> '10.10.54.182'
My search and rex is defined like:
index=* sourcetype=InCharge-Traps OID=".1.3.6.1.4.1.31094" source!="D:\\InCharge\\SAM\\smarts\\local\\logs\\TRAP-INCHARGE-OI_en_US_UTF-8.log" | **rex "'.1.3.6.1.4.1.31094.1.2' --> '.*['\(](?P<Agentname>.*)(' |\))"**
which produces my field Agentname=**OpenPlatform-PRO-service-kyc-validation@ip-10-13-12-248** as it should.
Now I move it to the Field Extractor, writing my own regular expression, and enter **'.1.3.6.1.4.1.31094.1.2' --> '.*['\(](?P<Agentname>.*)(' |\))** as my regex. This is where it all falls apart.
The preview looks right and shows the correct Agentnames but when I save it and look at the new extracted field, the data is all incorrect.
My props.conf looks like this:
EXTRACT-Agentname = **'.1.3.6.1.4.1.31094.1.2' --> '.*[\'\(](?P<Agentname>.*)(\' |\))**
What in the wild world of sports am I doing wrong?
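One hedged thing to try, assuming the .1.2 varbind always has the form 'Agent '<name>' ...' as in the sample above: escape the literal dots in the OID (unescaped, each . matches any character) and anchor the capture on the Agent prefix instead of the greedy .*, which can wander across the event:

EXTRACT-Agentname = '\.1\.3\.6\.1\.4\.1\.31094\.1\.2' --> 'Agent '(?<Agentname>[^']+)'

This is a sketch against the sample data, not a definitive fix.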
Thanks for the help in advance,
Rcp
↧
↧
Renaming table column names
Hello,
When creating tables, I have noticed that if I start renaming fields for display clarity, for example "src_ip" to "Source IP", I can't drill down to the original log (the search runs but finds nothing). Is there an easy way to get both a nice-looking column header and drillability?
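A hedged Simple XML sketch of one workaround: keep the rename for display only and define the drilldown yourself so it searches on the original field. The index name is an assumption; $click.value2$ is the value of the clicked cell, which survives the rename since only the header changes:

<drilldown>
  <link target="_blank">search?q=search index=main src_ip=$click.value2$&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>

The default drilldown fails because it builds its search with the display name ("Source IP"), which doesn't exist in the raw events.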
Thanks!
↧
How can I get data coming from my NetFlow (Flow Export) appliance into Splunk Enterprise?
Hi,
Can someone direct me to the app I need to install to get data from my NetFlow (Flow Export) appliance into Splunk Enterprise?
I have installed a forwarder and set the deployment/receiver server address to the server where Splunk Enterprise is installed.
I have followed the Splunk Stream guide and installed that app. Is this the right way?
Many thanks
↧
Splunk dashboard refresh every 24 hours required (00:00 to 24:00 MST Hours)
Hi All,
I have a Splunk search query which I run daily for the previous day by selecting a date range between 09/18/2017 00:00:00 and 09/18/2017 24:00:00, i.e. one complete day.
I get some tabular statistics summarizing the total, failed, and passed records for that day.
Now I want to automate this to run every day, get the summary results for the previous day, and display them on a dashboard.
After getting the summary view in tabular format for the selected date range, I save it as a dashboard panel (Panel powered by Inline Search).
Then I go to View Dashboard.
I click Edit > Edit Search (mirror icon) > select Time Range as 'Use Time Picker', set Auto Refresh Delay to Custom, enter 24h, and save.
Please let me know whether this will refresh the dashboard panel exactly 24 hours later, advancing the date range by one day, i.e. the next refresh should give me data for 09/19/2017 00:00:00 to 09/19/2017 24:00:00, and I need that refresh to happen on 09/20/2017 at 2:00 AM.
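A hedged alternative that avoids relying on the panel's auto-refresh: schedule the search itself and let the panel display the scheduled report. A sketch in savedsearches.conf, with a hypothetical name and the summary search elided:

[daily_summary]
search = ... your summary search ...
cron_schedule = 0 2 * * *
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d

This runs at 02:00 each day over the previous calendar day (00:00 to 24:00); note the cron fires in the Splunk server's time zone, so adjust if the server is not on MST. A panel powered by the scheduled report then shows the latest run without re-querying.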
Thank you.
↧
Can these three searches be combined and run sequentially?
I have a scenario where I need to:
1) append results to a .csv file,
2) once the csv file is updated, eliminate duplicate results from it, and
3) perform a lookup against the csv file.
I am running three queries for this, scheduled each day so the operations run in sequence.
Can we merge the three queries into one so that the first operation is followed by the second, and then the third?
e.g. net query = query1 + query2 + query3, so that query 3 executes after query 2, and query 2 after query 1.
This would help because if the scheduler skips query 2, I may not get the right results; if the queries are combined, I will always get correct results.
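A hedged sketch of collapsing steps 1 and 2 into a single scheduled search; my_results.csv and the dedup key field id are hypothetical names:

your query1 here
| inputlookup append=true my_results.csv
| dedup id
| outputlookup my_results.csv

This appends the existing lookup rows to the new results, removes duplicates, and writes the file back in one pass; step 3 can then reference it with | lookup my_results.csv id OUTPUT ... either as its own search or as a further stage of the same pipeline.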
↧
↧
How do I move network devices from one sourcetype to another sourcetype in the same index?
Hi All, currently I have a request from the network team: they want to point the 03r & 04r site devices from index=net sourcetype=cisco:network:router to index=net sourcetype=cisco:network:switch.
I can see 35 devices currently pointing to index=net sourcetype=cisco:network:router which need to be pointed to index=net sourcetype=cisco:network:switch.
Device names to be moved from index=net sourcetype=cisco:network:router to index=net sourcetype=cisco:network:switch:
xxxxxx03r
uxxxxx03r
xxxxxx03r
uxxxxx03r-vlan200
uxxxxx04r
uxxxxx04r
uxxxxx04r
cxxxxxx04r
inputs.conf details:
[monitor:///opt/syslogs/network/.../router.log*]
index=net
sourcetype=cisco:network:router
host_segment=4
[monitor:///opt/syslogs/network/.../switch.log*]
index=net
sourcetype=cisco:network:switch
host_segment=4
Kindly guide me on how to reconfigure these network devices to point to index=net sourcetype=cisco:network:switch instead of index=net sourcetype=cisco:network:router.
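One hedged Splunk-side option, assuming the events keep arriving through the router.log monitor: rewrite the sourcetype at parse time for just those hosts, on whichever instance parses the data (indexer or heavy forwarder). The stanza names are my own, and the host regex is a guess based on the list above:

props.conf:
[cisco:network:router]
TRANSFORMS-switch_hosts = force_cisco_switch_sourcetype

transforms.conf:
[force_cisco_switch_sourcetype]
SOURCE_KEY = MetaData:Host
REGEX = (?:03r|04r)(?:-vlan\d+)?$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:network:switch

This only affects newly indexed events; data already indexed keeps its old sourcetype. If the syslog server can instead write these hosts to switch.log, the existing inputs.conf would pick them up with no Splunk changes.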
thanks in advance.
↧
How do I create a custom drill down menu option from the event tab on a specific field value?
I am looking for a way to create a custom drilldown menu option from the Events tab on a specific field value; an example is shown below. When the user clicks on the Execution_ID field value, I would like to add a menu option "View Execution Error" that runs a dbxquery, passing in the Execution_ID value. Is this possible, and if so, can you send me instructions on how?
![alt text][1]
[1]: /storage/temp/217582-cuserskbcallpicturessplunk-drilldown-menu.jpg
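This sounds like a workflow action; a hedged sketch in workflow_actions.conf, where the stanza name, connection, table, and column are all hypothetical:

[view_execution_error]
label = View Execution Error
display_location = field_menu
fields = Execution_ID
type = search
search.search_string = | dbxquery connection="my_connection" query="SELECT * FROM execution_errors WHERE execution_id='$Execution_ID$'"
search.app = search
search.earliest = -7d
search.latest = now

With display_location = field_menu the option appears in the field value's event menu, and $Execution_ID$ is substituted with the clicked value.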
↧
Help with rex on raw data
Hi,
I have data like the sample below, and I want to display middleName and lastName from it.
Please help me write a rex for this raw data:
\"middleName\":\"L\",\"lastName\":\"CRIB\"
↧
Determine missing sources via a search?
All,
I have a list of PCI hosts. I want to take that list and create a report/alert that displays hosts which are not reporting /var/log/secure. Any idea how I might do this from a search?
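A hedged sketch, assuming the host list lives in a lookup called pci_hosts.csv with a host column: let a subsearch list the hosts that did report the file, and keep the rest:

| inputlookup pci_hosts.csv
| search NOT [| tstats count where source="/var/log/secure" earliest=-24h by host | fields host]
| table host

The -24h window is arbitrary; match it to your reporting interval.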
↧
↧
Dynamic Table Issue
Hi All. I have a simple dashboard where the data in the statistics table changes every time you change the dropdown input.
The problem is it only works the first time it's loaded, i.e. with whatever is set as the default. When I change the dropdown entry, instead of displaying the new table, it displays a random table with random fields. Can someone please help?
Below is the code (the dashboard XML tags were stripped on pasting; the dropdown choices and the four panel searches are shown):

Dropdown choices: No Data, Connection Refused, Missing Sequence, Remote Disconnect (default: Connection Refused)

No Data:
drqs EXCHANGE sourcetype = ntwkserv "NO DATA" _raw!=*disconnect* | rex field=_raw "(?<PARSER>\D\FIFW\s\w+\s\w+\s.+\DGO\D\D)" max_match=0 | table _time, PARSER, FE_MACHINE, ERROR, Ticket | sort 0 -_time

Connection Refused:
drqs EXCHANGE sourcetype = ntwkserv TCPReceiver *refused* | rex field=_raw "(?<PARSER>\D\FIFW\s\w+\s\w+\s.+\DGO\D\D)" max_match=0 | table _time, PARSER, MACHINE, ISSUE, iP, pORT, Ticket | sort 0 -_time

Missing Sequence:
drqs EXCHANGE sourcetype = ntwkserv missing _raw!=*refresh* | rex field=_raw "(?<PARSER>\D\FIFW\s\w+\s\w+\s.+\DGO\D\D)" max_match=0 | table _time, PARSER, MACHINE, error, Ticket | sort 0 -_time

Remote Disconnect:
drqs EXCHANGE sourcetype = ntwkserv TCPReceiver *Remote* | rex field=_raw "(?<PARSER>\D\FIFW\s\w+\s\w+\s.+\DGO\D\D)" max_match=0 | table _time, PARSER, MACHINE, ISSUE, iP, pORT, Ticket | sort 0 -_time
↧
How to check if load is equally distributed across hosts and create an alert?
Hi,
We generally raise tickets in Prod through Splunk by putting a search query behind a report/alert, and now we have a requirement to alert if the load is not equally distributed between the hosts. With the top command I see the result as a percentage, but I wasn't able to use it in a where clause to calculate the deviation.
Say we have 4 hosts sharing an app; ideally distribution should be almost equal, but in the unwanted scenario where load on one Prod host is lower or higher than the others, I should get an alert.
Example log search: index=data loggerName="xyzzy" threadName="thread1" appName="dataSync"
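A hedged sketch using counts instead of top's percentages; the 25% threshold is arbitrary:

index=data loggerName="xyzzy" appName="dataSync"
| stats count by host
| eventstats avg(count) as avg_count
| eval deviation_pct = round(100 * abs(count - avg_count) / avg_count, 2)
| where deviation_pct > 25

Saved as an alert that triggers when the result count is greater than 0, this fires whenever any host's share drifts more than 25% from the average. Note it won't catch a host that stops logging entirely (that host would simply be absent from the stats), so a separate missing-host check may also be needed.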
↧
How can I merge two "inbound" values appearing under the same field?
Hello
I have pre-parsed information coming into my Splunk instance for CISCO:ASA. I'm wondering why the field "direction" has its "inbound" value showing up as both "inbound" and "Inbound". How can I combine the two? And do I want to combine the two? It seems like I should...
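A hedged way to merge them: normalize the case at search time, either inline or as a calculated field (the props.conf stanza below assumes the sourcetype is cisco:asa):

Inline:
... | eval direction=lower(direction)

Or permanently, in props.conf on the search head:
[cisco:asa]
EVAL-direction = lower(direction)

Combining them is usually the right call, since the CIM Network Traffic model expects lowercase direction values.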
Thanks
Tim
↧