
Breaking up syslog sourcetype

Good afternoon. I am trying to divide my network devices up so that I have a different sourcetype for each vendor, and then ultimately ship them off to different indexes as well. These devices are all things like routers and switches, so I need to use their built-in syslog services. Unfortunately, I'm not understanding the documentation properly and it is not working. I'm focusing on Nokia gear for the time being; here is a sanitized example log entry from a Nokia device:

```
Jan 5 13:27:51 123.123.123.123 TMNX: 803766 Base BGP-WARNING-bgpBackwardTransition-2002 [Peer 1: 123.123.123.123]: VR 1: Group mpBGP-IPv4: Peer 123.123.123.123: moved from higher state OPENSENT to lower state IDLE due to event TCP SOCKET ERROR
```

Here's the stanza from my transforms.conf:

```
[nokia]
REGEX = TMNX
FORMAT = sourcetype::nokia
DEST_KEY = MetaData:Sourcetype
```

And here's from props.conf:

```
[source::udp:514]
TRANSFORMS-nokia = nokia
```

I am getting data in, but it's all just showing up under the sourcetype of syslog. Thanks in advance for your help.
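A common cause is that the transform never fires because the props.conf stanza doesn't match how the data actually arrives, or because the configuration lives on an instance that isn't doing the parsing. Index-time sourcetype rewrites only run on the first full Splunk instance in the pipeline (indexer or heavy forwarder, not a universal forwarder), and changes there need a restart. A sketch that keys the transform off the incoming sourcetype instead of the source, assuming the data currently lands as sourcetype `syslog`:

```
# props.conf -- a sketch; attach the transform to the incoming
# sourcetype rather than the source
[syslog]
TRANSFORMS-nokia = nokia

# transforms.conf -- unchanged, matching the TMNX marker in the raw event
[nokia]
REGEX = TMNX
FORMAT = sourcetype::nokia
DEST_KEY = MetaData:Sourcetype
```

If you keep the `[source::udp:514]` form instead, verify with `| stats count by source` that your events really show `udp:514` as the source, since a mismatch there silently disables the transform.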

Is there an SPL command using REST to list the macros contained within a macro?


Hello, I see that we can use SPL to get a list of arguments, "args", of a macro using the "rest" command.

| rest /services/configs/conf-macros

It would be great to be able to list all the dependencies of a macro.

In particular, is there a way to use the "rest" command to get a list of **macros depended upon** by another macro?

For instance, is it possible to get the following output? (See the third column "macros_called_by_macro".)

| args | title | macros_called_by_macro | author | definition |
| --- | --- | --- | --- | --- |
| | macro_01 | macro_02, macro_03 | | |
| | macro_02 | macro_04 | | |
| | macro_03 | | | |

Thanks so much!!!
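There is no built-in REST attribute that lists a macro's dependencies, but since macro calls appear between backticks inside a macro's `definition`, you can approximate the third column by parsing that field. A sketch: the `\x60` hex escape stands for the backtick character so the search string itself is not macro-expanded, and the column name is an assumption, not an official attribute:

```
| rest /services/configs/conf-macros
| rex field=definition max_match=0 "\x60(?<macros_called_by_macro>[^\x60]+)\x60"
| table args title macros_called_by_macro author definition
```

Note this is a one-level listing; to get the full transitive dependency tree you would have to re-run the extraction against each discovered macro's definition.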

Rename Field From Input File And Perform Search

Hello! I am attempting to find events based on names in a CSV file (I am building a search to identify security group name changes). However, I appear to be missing something, since I do not get any results. Here is the search I am trying, but it returns nothing:

```
(EventCode=4781) [inputlookup Groups.csv | rename Security_ID AS Old_Account_Name]
```

Here is what I have and have tried. I have a Groups.csv file that contains the groups I would like to search against:

```
Security_ID
*\Group1
*\Group2
*\Group3
```

I have tested renaming the header, and this correctly shows the contents of my CSV file with the renamed header, as expected:

```
| inputlookup Groups.csv | rename Security_ID AS Old_Account_Name
```

I am also able to successfully get results when I do this:

```
(EventCode=4781) (Old_Account_Name="*\Group1")
```

However, I am not able to perform the original search, which is to find events that contain any of the groups in the CSV file. I appear to be missing something; can someone please help correct my search query? Thanks!
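One way to debug this is to run the subsearch on its own with `format` appended, which shows the exact search string the subsearch expands to; escaping of the backslashes in values like `*\Group1` is a frequent culprit. A sketch, using your lookup as-is:

```
| inputlookup Groups.csv
| rename Security_ID AS Old_Account_Name
| format
```

If the expanded string shows doubled or missing backslashes compared to the literal `(Old_Account_Name="*\Group1")` that you verified works by hand, adjust the lookup values (or add an `eval` with `replace()`) until the expansion matches that working form.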

We are thinking of moving to Azure Kubernetes Service (AKS), is there any Splunk API plugin for Fluentd to push data onto Splunk?

We are thinking of moving to Azure Kubernetes Service (AKS). Is there any Splunk API plugin for Fluentd that pushes data into Splunk? We don't want to run a native Splunk process for this, as we do today.
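There are Fluentd output plugins that post to Splunk's HTTP Event Collector (HEC), which avoids running a forwarder in the cluster. A minimal sketch of a Fluentd match block, assuming the `fluent-plugin-splunk-hec` plugin is installed; the host, token, and index values are placeholders:

```
# fluent.conf -- a sketch assuming fluent-plugin-splunk-hec is installed
# (gem install fluent-plugin-splunk-hec); all values below are placeholders
<match kubernetes.**>
  @type splunk_hec
  hec_host splunk.example.com
  hec_port 8088
  hec_token 00000000-0000-0000-0000-000000000000
  index aks_logs
</match>
```

On the Splunk side you would enable an HEC token under Settings > Data Inputs > HTTP Event Collector and point the plugin at it.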

Windows: Unknown User Name or Bad Password

Hi. How can I distinguish authentication failure events where the reason is «Unknown User Name» from those where it is «Bad Password»? Here is my current search:

```
(index="wineventlog" OR source=*WinEventLog*) Failure_Reason=* * ("Audit Failure") AND (ComputerName="*") AND * Message != "*privilege*" Account_Name != "*$*"
| eval user=mvindex(Account_Name,1)
| stats count by _time, ComputerName, user, Source_Network_Address, Keywords
| rex mode=sed field=Keywords "s/Audit\s//"
| rename ComputerName as host, user as account, Source_Network_Address as src, Keywords as action
| fields _time host account src action
| sort -_time
```

(Screenshots were attached but are no longer available.)
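For Windows logon-failure events the `Sub_Status` field carries the distinction: 0xC0000064 means the user name does not exist, while 0xC000006A means the user name is correct but the password is wrong. A sketch of an eval you could add before the stats (the `Sub_Status` field name assumes the standard Windows add-on extractions):

```
| eval failure_type=case(Sub_Status=="0xC0000064", "Unknown user name",
                         Sub_Status=="0xC000006A", "Bad password",
                         true(), "Other: ".Sub_Status)
| stats count by _time, ComputerName, user, failure_type
```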

dbConnect 3.1.1 and Splunk Enterprise 7.0.1 - SQL Explorer - Error in 'dbxquery' command

I am running Splunk 7.0.0 with DB Connect 3.1.1 for access to a MySQL database. A few days ago I was able to retrieve data from the database with the SQL Explorer, but after coming back the following day, the SQL Explorer now returns the following error: "Error in 'dbxquery' command: Invalid message received from external search command during setup, see search.log." The search log shows:

```
01-06-2018 18:45:07.464 INFO ChunkedExternProcessor - Running process: /opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/command.sh -Dlogback.configurationFile\=../config/command_logback.xml -DDBX_COMMAND_LOG_LEVEL\=DEBUG -cp ../jars/command.jar com.splunk.dbx.command.DbxQueryCommand
01-06-2018 18:45:07.466 ERROR ChunkedExternProcessor - Failure starting process
01-06-2018 18:45:07.466 ERROR ChunkedExternProcessor - Error in 'dbxquery' command: Invalid message received from external search command during setup, see search.log.
```

I have searched through this forum, and the suggested solution is just to downgrade DB Connect to a previous version. That doesn't seem like a "solution" to me. The DB Connect page says it is compatible with 7.0, and I have noted another user running 7.0 with 3.1.1 on 6 out of 7 servers, so it clearly seems possible to make this work. Does anyone have any ideas at all? Is this product perhaps not ready for prime time?
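Since `ChunkedExternProcessor` reports "Failure starting process", one thing worth checking is the Java runtime that `command.sh` launches: DB Connect 3.x requires a Java 8 JRE/JDK, and a JAVA_HOME that changed or disappeared overnight (for example after a system update) would produce exactly this symptom. A hedged sanity check from the Splunk user's shell:

```
# verify the JVM that DB Connect will pick up -- it must be Java 8
echo $JAVA_HOME
$JAVA_HOME/bin/java -version

# confirm the launcher script is still executable by the splunk user
ls -l /opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/command.sh
```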

Splunk Add-on for OSSEC: OSSEC & Splunk Integration?

Hi. I'm trying these: [Splunk Add-on for OSSEC][1] and [Reporting and Management for OSSEC][2]. Some logs are not parsing properly, and the parsed log structure itself has a lot of duplicated information across fields. These logs do not give me great results for monitoring; **to be honest, about 80% of the time I can get more useful information from the raw data than from the add-on's processed output**. It seems to me that I need to reconfigure the OSSEC conf somehow (but I have not found any information on this, and the official Splunk docs say little about it). **My question**: if you can, point me to more information about OSSEC and Splunk integration, such as blogs, other implementations, and tricks for better monitoring with OSSEC. Thanks!

[1]: https://splunkbase.splunk.com/app/2808/
[2]: https://splunkbase.splunk.com/app/300/

Lookup: Replace / Create new field

Hi. For example: when I run a search and see the field Sub_Status with a value like 0xC0000064, I want a new field that explains what the code means. (Screenshot was attached but is no longer available.)
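One way to do this is a CSV lookup that maps status codes to descriptions. A sketch, assuming a hypothetical lookup file named `sub_status_codes.csv` uploaded via Settings > Lookups (the two rows shown are the real NTSTATUS meanings for these codes):

```
Sub_Status,description
0xC0000064,User name does not exist
0xC000006A,User name is correct but the password is wrong
```

Then in the search:

```
... | lookup sub_status_codes.csv Sub_Status OUTPUT description
```

You can extend the CSV with the other Sub_Status codes you encounter, or define it as an automatic lookup on the sourcetype so the `description` field appears without the explicit `lookup` command.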

How to display respective entries from two different logs based on a common extracted field value?

Hi all, I have two different log sources and want to display the corresponding entries from each source based on a field value extracted from the first log. For example:

**Log 1**:

```
Jan 6 15:33:13 xxxxx : trans(2735890423)[response][247.116.54.12] gtid(2735890423): |Test|service|247.116.54.12|2f4ad7ae-a4f9-324d-8d1a-8d98b414c496|2735890423||||/rest/services|documentId
```

Note: the field that needs to be extracted from this log is the value of gtid(2735890423), which is extracted as tranid.

**Log 2**:

```
Jan 6 15:33:13 xxxxx : trans(2316097519)[response] gtid(2735890423): |Test|service|transaction type|response||2f4ad7ae-a4f9-324d-8d1a-8d98b414c496|2735890423:2316097519|2018-01-06T15:33:13-08:00|5|86|86|success|200 OK
```

The requirement is to get the value of tranid from log 1 and search the other source for the corresponding entries. This has to be done dynamically: the query has to take the entries found in log 1 and return the matching entries from both logs. For example, as of now we are using `index="log1" /rest/services`, which returns entries from log 1; then we manually select the tranid from the log and use another search query to get the result from log 2. I want to write a single query that serves the same purpose. Thanks.
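A common single-query pattern is to extract the gtid in a subsearch over log 1 and feed the values back as raw search terms. Renaming the extracted field to `search` makes the subsearch return bare terms rather than `field=value` pairs, which matters here because `tranid` is a search-time extraction, not an indexed field. A sketch (the index names and the rex are assumptions based on your samples):

```
(index="log1" OR index="log2")
    [ search index="log1" "/rest/services"
      | rex "gtid\((?<tranid>\d+)\)"
      | dedup tranid
      | rename tranid AS search
      | fields search
      | format ]
| rex "gtid\((?<tranid>\d+)\)"
| sort tranid -_time
```

The trailing `rex` re-extracts the id on the combined result set so you can group or sort the matching entries from both logs by tranid.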

Performance impacts of Spectre/Meltdown mitigation

Does anyone have figures on the performance impact of the CVE-2017-5754, CVE-2017-5753, and CVE-2017-5715 (Spectre/Meltdown) patches on Splunk?

JSON event breaks not working - sometimes

I have a log file of properly formatted JSON events, but event breaking is not working reliably: sometimes Splunk separates the JSON into separate events, sometimes it does not, and there doesn't seem to be any rhyme or reason to it. I tried the solution here: https://answers.splunk.com/answers/80741/event-break-json.html but it did not work. I am unable to restart Splunk at this time; however, my understanding is that I shouldn't need to. (Please correct me if I'm wrong.) Here's my props.conf entry:

```
[s-web]
KV_MODE = json
LINE_BREAKER = "(^){"
NO_BINARY_CHECK = 1
TRUNCATE = 0
SHOULD_LINEMERGE = false
```

Here's a sample event:

```
{"pid":17156,"hostname":"sub.hostname.com","name":"s-undefined","level":30,"time":1515143225539,"remoteAddr":"::ffff:99.99.99.99","remoteAddrs":[],"method":"GET","url":"/","sessionId":"abcd2b32-00e8-4e0b-97f6-23abcdef3233e","v":1}
```

Am I missing something here? Thank you in advance for your assistance!
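Two things stand out. First, LINE_BREAKER takes a bare regex, not a quoted string, and its first capture group must match the text *between* events, so `"(^){"` will rarely match as intended. Second, LINE_BREAKER is an index-time setting: it must live on the instance that parses the data (indexer or heavy forwarder), it only affects newly indexed events, and changing it generally does require a restart of that instance. A sketch of a commonly used stanza for newline-delimited JSON:

```
# props.conf -- a sketch; the capture group consumes the newline(s)
# between events, and the literal { stays at the start of the next event
[s-web]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{
KV_MODE = json
NO_BINARY_CHECK = 1
TRUNCATE = 0
```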

Can we use Start/End times from a query to get a duration, and use it in another search query to get an average of a field in that duration?

I am able to get the start and end times of a load test execution from one search query (the end time comes from the Timestamp field of the log data, and subtracting the duration field gives the start time). Now I want to use this start time, end time, and the duration between them in another search query with a different sourcetype, so that it fetches all the data falling within that window (between the start and end times) from another app's logs, in order to calculate the average/count of a field. Please help me construct the required search queries (using a subsearch, join, etc.).
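One pattern that avoids a join: have a subsearch compute epoch values named `earliest` and `latest`; when a subsearch returns fields with those names, the outer search uses them as its time bounds. A sketch, where the index and sourcetype names, the Timestamp format string, and `response_time` are assumptions for illustration:

```
index=app2_index sourcetype=app2_sourcetype
    [ search index=loadtest_index sourcetype=loadtest_sourcetype
      | head 1
      | eval latest=strptime(Timestamp, "%Y-%m-%d %H:%M:%S")
      | eval earliest=latest-duration
      | fields earliest latest
      | format ]
| stats avg(response_time) AS avg_response_time, count
```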

What are the basic and important cases to monitor for Windows and Linux?

Hi :sheepy: Do you know of any good blogs or cheat sheets on monitoring for Windows and Linux, like [this][1]? Something along the lines of _«I'm too lazy to work out what is critical; I want an article where a guru explains, step by step, exactly what should be monitored and why»_. [1]: https://static1.squarespace.com/static/552092d5e4b0661088167e5c/t/5a3187b4419202f0fb8b2dd1/1513195444728/Windows+Splunk+Logging+Cheat+Sheet+v2.2.pdf+

Creating a comparison report

Hi, I'm trying to create a report where I extract data from two different sources. The data extracted from both sources shares the same item number value. The structure is something like this:

| ITEM | src1 Field 1 | src1 Field 2 | src2 Field 1 | src2 Field 2 |
| --- | --- | --- | --- | --- |
| 11111 | 0 | 0 | 0 | 0 |
| 12121 | 8 | 8 | 8 | 8 |
| 13222 | 7 | 7 | 7 | 7 |

Essentially, what I want to do is extract data from both sources for the relevant fields for a specific ITEM. Can someone suggest what I can do to achieve this? EDIT: Apologies, I haven't been able to separate the values for each field. Basically, each src field has only one integer value.
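A common way to pivot two sources into one row per ITEM is `stats` with eval-conditioned aggregations. A sketch; the source names and field names below are placeholders for your actual ones:

```
(source="src1" OR source="src2")
| stats first(eval(if(source=="src1", Field1, null()))) AS "src1 Field 1",
        first(eval(if(source=="src1", Field2, null()))) AS "src1 Field 2",
        first(eval(if(source=="src2", Field1, null()))) AS "src2 Field 1",
        first(eval(if(source=="src2", Field2, null()))) AS "src2 Field 2"
    by ITEM
```

Because `stats ... by ITEM` groups both sources on the shared item number, each row ends up with the src1 and src2 values side by side, matching the table above.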

Add an independent trendline in Splunk

I have a chart of durations, and I want to add a line over the chart whose value is avg(duration). I used the query below, and it works perfectly:

```
index=cloudfoundry sourcetype=cl**** "cf_foundation=px**" "cf_org_name=Co***" "cf_space_name=de***" "cf_app_name=splunk-log****" "||splunk-logger||"
| dedup processLogId
| sort -splunkId
| search endDate!=null AND status='COMPLETED'
| eval start_epoch=strptime(startDate,"%Y-%m-%d %H:%M:%S.%1N")
| eval _time=start_epoch
| eval end_epoch=strptime(endDate,"%Y-%m-%d %H:%M:%S.%1N")
| eval duration=round((end_epoch-start_epoch)/60)
| chart values(duration) as duration by processLogId
| eventstats avg(duration) as avg_duration
```

But my requirement has now changed: the chart should be based on the last 30 days (this may vary), while the trendline should be based only on the last 7 days. Kindly help me do this.
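One option is to compute the 7-day average before the `chart` command, while `_time` is still available, using an eval-conditioned aggregation in `eventstats`, and then carry it through the chart as a second series. A sketch of the tail of the pipeline; the `-7d@d` snap is an assumption about what "last 7 days" means:

```
| eval duration=round((end_epoch-start_epoch)/60)
| eventstats avg(eval(if(_time >= relative_time(now(), "-7d@d"), duration, null()))) AS avg_duration_7d
| chart values(duration) AS duration, values(avg_duration_7d) AS avg_duration_7d by processLogId
```

With the search time range set to the last 30 days, the chart still covers all 30 days, while `avg_duration_7d` holds the 7-day average; set that series to a line overlay in the chart formatting options.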

Anybody seen search/indexer performance degradation after installing Meltdown patches on Linux?

Hello, has anybody seen any indexer/search performance degradation after installing the Meltdown patches on Linux? Is anybody willing to share some before-and-after performance stats?

Compare result count

Hi all, I would like to compare today's result count with the count on the same date last month. Kindly let me know the best way to achieve this. Regards, BK
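One approach is `appendcols` with two explicitly bounded searches; the time modifiers below compare today-so-far against the same span one month ago (the index name is a placeholder):

```
index=your_index earliest=@d latest=now
| stats count AS today_count
| appendcols
    [ search index=your_index earliest=-1mon@d latest=-1mon
      | stats count AS last_month_count ]
| eval difference = today_count - last_month_count
```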

What metrics to monitor for Meltdown and Spectre

I realize that these are both hardware vulnerabilities, but I wanted to know: out of the data we are able to collect with Splunk, what specific metrics would be the best to monitor because they directly correlate with Meltdown and Spectre behavior? From what I have been reading in the white papers, it's extremely difficult, perhaps impossible, to determine whether this attack has been executed in your environment. Any insight or recommendations are greatly appreciated. -iKF

Splunkforwarder playing too "nice"

I have some scripted inputs running on a few servers that occasionally have very high system loads. The problem is that I have holes in my scripted intervals during exactly the times I need the data most. The forwarder doesn't die; it just seems to stop sending due to limited system resources. I'd like it NOT to do that, and instead fight for cycles so I can get a better glimpse into what is happening from my scripted input. Any ideas on how to accomplish this?
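If the box is CPU-starved, one blunt option is to raise the scheduling priority of the forwarder process at the OS level, since Splunk itself does not expose a niceness setting. A hedged sketch for Linux; the -10 value is an arbitrary example and this requires root:

```
# give splunkd more CPU priority so the scheduler favors it under load
renice -n -10 -p $(pgrep -x splunkd)
```

Separately, it is worth confirming that the forwarder's default 256 KBps thruput cap (`maxKBps` under `[thruput]` in limits.conf) is not the real bottleneck when a backlog builds up during the load spikes.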