Hello guys,
Recently I was asked an interview question: which service or mechanism is used to get data from forwarders to the indexer?
Kindly help me with this.
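For context, the only setup I have seen uses Splunk's own forwarding protocol (S2S) over TCP: a tcpout stanza in outputs.conf on the forwarder, and a splunktcp listener in inputs.conf on the indexer. 9997 is just the conventional port, and the hostname below is a placeholder:
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = indexer01.example.com:9997
# inputs.conf on the indexer
[splunktcp://9997]
disabled = false
Is S2S/splunktcp the answer they were looking for, or is there some other service involved?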
↧
Which mechanism is used in indexers?
↧
Tableau System Logs
Hello everybody,
I am wondering if anybody has already done some Tableau system monitoring with the logs Tableau provides?
I was a little bit surprised not to find an app or some inputs.conf recommendations.
As far as I can see, it should be the Tomcat and Apache logs, with some Redis.
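Something like the following inputs.conf sketch is what I have in mind; the paths are only guesses based on a default Tableau Server on Linux install, and the sourcetype names are made up:
# inputs.conf - monitor Tableau Server component logs (paths are assumptions)
[monitor:///var/opt/tableau/tableau_server/data/tabsvc/logs/tomcat/*.log]
sourcetype = tableau:tomcat
index = tableau
[monitor:///var/opt/tableau/tableau_server/data/tabsvc/logs/httpd/*.log]
sourcetype = tableau:apache
index = tableau
Has anyone validated paths and sourcetypes like these, or built matching props.conf timestamp and line-breaking rules?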
Any link / tip helps.
Thank you!
↧
↧
What is the difference between the results of this add-on and the Splunk Add-on for Windows, which supports Microsoft DHCP?
I'm trying to identify how this add-on improves parsing over the Splunk version.
↧
Dbquery is slow
Hi,
I have a dbxquery search whose execution is very slow. We migrated Splunk from 6.6.11 to 7.2.3.
I can see that the process "dispatch.evaluate.dbxquery" consumes half of the running time. Is it possible to optimize this step?
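For reference, the search looks roughly like this (the connection name and SQL are placeholders, not our real ones):
| dbxquery connection="my_connection" query="SELECT * FROM claims WHERE updated_at > CURRENT_DATE - 1"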
↧
Tstats - how to add a "not" condition before 'count' function?
Hello,
We use an ES ‘Excessive Failed Logins’ correlation search:
| tstats summariesonly=true allow_old_summaries=true values(Authentication.tag) as "tag",dc(Authentication.user) as "user_count",dc(Authentication.dest) as "dest_count",count from datamodel=Authentication.Authentication where nodename=Authentication.Failed_Authentication by "Authentication.app","Authentication.src" | rename "Authentication.app" as "app","Authentication.src" as "src" | where 'count'>=6
But we would like to add an additional condition to the search, where the *signature_id* field in the **Failed Authentication** data model is not equal to 4771.
We tried adding something like `|where signature_id!=4771` or `|search NOT signature_id=4771` at the end of the search, but of course it didn't work, because the count happens before it.
Do you have an idea how we can implement that condition?
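Conceptually, what we are after is filtering inside the tstats where clause, before the count is computed; something like this, assuming signature_id is mapped into the data model as Authentication.signature_id:
| tstats summariesonly=true allow_old_summaries=true values(Authentication.tag) as "tag",dc(Authentication.user) as "user_count",dc(Authentication.dest) as "dest_count",count from datamodel=Authentication.Authentication where nodename=Authentication.Failed_Authentication Authentication.signature_id!=4771 by "Authentication.app","Authentication.src" | rename "Authentication.app" as "app","Authentication.src" as "src" | where 'count'>=6
But we are not sure the field is available in the data model, or whether this is the right approach.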
Thanks for the help.
Alex.
↧
↧
Blocked Queue on Splunk HF
Hi, I can see blocked=true in metrics.log on a Splunk heavy forwarder. The blocked queues are: typingqueue, aggqueue, parsingqueue, indexqueue, splunktcpin. Does anyone have an idea about this issue?
Note: this queue blockage happens intermittently on individual heavy forwarders.
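For reference, I'm spotting the blockage with a search along these lines against the internal metrics:
index=_internal source=*metrics.log* group=queue blocked=true | timechart count by name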
↧
Split JSON into multiple events and sourcetypes
Let's say I have the following JSON data onboarded.
{
  "slaves": [{
    "id": "1234",
    "hostname": "12556"
  },
  {
    "id": "1245",
    "hostname": "1266"
  }],
  "masters": [{
    "id": "2234",
    "hostname": "22556"
  },
  {
    "id": "2245",
    "hostname": "2266"
  }]
}
The result I want is that each slave becomes an event with sourcetype indexnamex:slave, and each master becomes an event with sourcetype indexnamex:master.
So in indexnamex:slave I want 2 events
indexnamex:slave **Event1**
> {"id": "1234","hostname": "12556" }
indexnamex:slave **Event2**
> { "id": "1245", "hostname": "1266" }
And in indexnamex:master also two events
indexnamex:master **Event 1**
> { "id": "2234", "hostname": "22556" }
indexnamex:master **Event 2**
> { "id": "2245", "hostname": "2266" }
I cannot split on e.g. `hostname x }`, as it is the same for slaves and masters.
Is it possible to do the splitting in multiple steps?
E.g., first split on `"slaves":` and `"masters":`,
and after that, do a split on what is left?
If not, are there any other options?
Note: my real data is more complex than this example; it is about 10k lines.
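For what it's worth, at search time I can get the split itself with spath and mvexpand, something like this (field names are from the sample above), but it doesn't give me the per-sourcetype events at index time:
| spath output=slave path=slaves{} | mvexpand slave | table slave
and the same with path=masters{} for the masters.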
↧
Migration - will Splunk re-index a directory?
Hello. We are migrating to a new Splunk server.
In our current environment, Splunk receives syslog by crawling /logs/////. The /logs directory is an NFS mount.
Our current plan is to migrate our buckets over to our new Splunk server, including the buckets with syslog (which is in the main index). Then we will NFS mount /logs on the new Splunk server. Then I became concerned... assuming the buckets have been migrated over, will Splunk re-index everything in /logs? Or will it just continue from where it left off? Our syslog events go to the main index, so I can't simply not migrate its index.
I would really prefer not to re-index all of /logs if I can avoid it. Will this be the case in our environment?
Thank you!
↧
Feature Request: AppInspect to check for absolute paths
All one has to do is search Splunk Answers as such:
https://answers.splunk.com/search.html?f=&redirect=search%2Fsearch&sort=relevance&q=ImportError%3A+No+module+named&type=question
To see that a lot of apps are finding conflicting Python libraries/modules. A cause of this is that most apps where I have encountered the issue do not use absolute imports, as suggested by the Splunk SDK documentation and shown in the example __init__.py here (line 17):
https://github.com/splunk/splunk-sdk-python/blob/master/splunklib/__init__.py#L17
A step that would mitigate this would be for the AppInspect program to look for this:
from __future__ import absolute_import
And if it is missing, reject the app with a statement that the app needs to be built to use absolute rather than relative imports. Otherwise, we'll all continue to have the problem of apps stumbling into another app's directory looking for a Python library and throwing an error. Other steps should also be taken, but this is a good start and can be readily automated.
↧
↧
Search Wineventlog to find latest login by users and then search for any > 14 days ago
Background: as part of our account management auditing, I'd like to schedule a weekly report that shows me user accounts that haven't logged in for over 14 days. I currently have this search:
index=wineventlog EventCode=4624 user="*-c"
| fields user EventCode index src_dns
| table _time user host src_dns
| stats max(_time) as last by src_dns user
| stats max(last) as "Last Login" last(src_dns) as "Source Workstation" by user
| convert ctime("Last Login")
| sort "Last Login"
| rename user as User
This search displays users by their latest login, but how can I filter it further to show those whose latest login was over 14 days ago?
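I'm guessing something like this, inserted right after the second stats (while "Last Login" is still an epoch value, before the convert turns it into a string), but I'm not sure it's right:
| where 'Last Login' < relative_time(now(), "-14d")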
Thanks!
↧
Multiple AND conditions in Eval w/ if statement
Hello there, from someone in the healthcare IT industry. I'm working with multiple conditions and I want to make sure my syntax is correct here.
| eval goodClaimStat = if((catCode != "E0") and (catCode != "E1") and (catCode != "E2") and (catCode != "E3") and (catCode != "E4") and (catCode != "D0"), "true", "false")
| eval goodEligOrAuth = if(tripleA != "42" and tripleA != "80", "true", "false")
| eval successTransaction = if((transactionState == "Success" and goodClaimStat == "true" and goodEligOrAuth == "true"), 1, 0)
catCode, tripleA, and transactionState are all fields that I extracted from events using rex. I verified in search that I'm extracting them correctly, so the next part of my query is this decision logic, and I'm assuming that this is where my problem lies. transactionState has to be "Success", and catCode and tripleA cannot equal the values above, respectively.
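One thing I'd like to confirm: if catCode or tripleA is null in an event, each != test evaluates to null, so the if() falls through to the "false" branch; could that be biting me? I also considered collapsing the catCode checks into a single match(), which I believe is equivalent for non-null catCode values:
| eval goodClaimStat = if(match(catCode, "^(E[0-4]|D0)$"), "false", "true")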
↧
Splunk Add-On for AWS - Should I use 1 or many SQS for the various inputs?
I am currently using SQS ingestion for all the inputs within the app. I am noticing some duplication, with sources being indexed into 2 different indexes. Should I be using a different SQS queue for each input? As of now, we are using 1 SQS queue across all the inputs within the Splunk Add-on for AWS.
Thanks
↧
Splunk Scheduled Report Filtering and Dashboard Panel
Hi,
I have a scheduled report in Splunk that runs nightly. It is accelerated for 7 days and also searches back 7 days.
This report provides comprehensive information about all my assets.
The report has about 10 million statistical records for our assets, as we need.
When I reference the report from my dashboard panels, they error out with "error fetching data". It seems to be the size of the data set, because the panels are fine with a smaller data set; yet when I open the report normally under Reports, it loads in less than 5 seconds.
I add the report to the dashboard as a table, but is it possible to add dropdown filter menus to narrow down that huge report table (or the report by itself)? Or, how can I get the dashboard panels to load faster when digging through this large report?
Report contents example:
Host, Barcode, Company, BusinessUnit, Location, ContactPerson
I want filters for Company, BusinessUnit, Location, ContactPerson so I can list Host, Barcode information associated with the selection from this huge data.
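What I picture is each panel reading the already-computed results and filtering them with dropdown tokens, roughly like this (the savedsearch reference and token names are placeholders):
| loadjob savedsearch="myuser:search:Nightly Asset Report" | search Company="$company$" BusinessUnit="$bu$" Location="$location$" ContactPerson="$contact$" | table Host Barcode Company BusinessUnit Location ContactPerson
Is that a sane approach, or is there a better way to keep the panels from fetching the full 10 million rows?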
Thanks in advance.
↧
↧
Create a Legend based upon another table
Hello gurus. I have a panel with a stats count chart where the y-axis is a numeric value. What we would like is a legend that shows the description for each y-axis number. I know that a lookup is involved, but I am not sure how to get it into the legend. It cannot be a static table, because there are thousands of message descriptions and we only want to see the descriptions for the detail in the chart.
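So far, the direction I have is enriching the results with the lookup before charting, so the legend shows the description instead of the number; the lookup name and field names below are made up:
index=myindex | lookup message_descriptions code AS message_code OUTPUT description | timechart count by description
Is that the right way to drive the legend, or is there a way to relabel the legend entries directly? Thanks in advance.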
↧
UDP Heavy Forwarder to Heavy Forwarder.
Trying to set up a test environment to be used in production. We will be taking data from another Splunk HF and sending it to our HF.
We must use UDP to transmit the data.
I have played around with creating the outputs.conf/inputs.conf, props.conf, and transforms. But it keeps looking like the data is being indexed on the first HF and never reaches the second HF.
I have verified with netcat that UDP reaches the other machine, watching with tcpdump.
I was using UDP:1514 for testing purposes.
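What I'm aiming for is roughly the following; IPs and ports are from my test setup, and I believe the syslog output processor plus a _SYSLOG_ROUTING transform is the only way an HF sends raw UDP (regular tcpout is Splunk-to-Splunk only):
# outputs.conf on the sending HF
[syslog:udp_test]
server = 10.0.0.2:1514
type = udp
# transforms.conf on the sending HF - route events to the syslog group
[route_to_udp_test]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = udp_test
# props.conf on the sending HF (applied to everything here, just for the test)
[default]
TRANSFORMS-udp = route_to_udp_test
# inputs.conf on the receiving HF
[udp://1514]
sourcetype = syslog
Does that look structurally right?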
If anyone can assist: I can try to add the .conf files, but I think they are all messed up now, so I'm not sure whether posting them would be helpful.
↧
Help with inputlookup, tstats, and visualization
Hello,
I have a lookup table of all the source types. I'm trying to use stats or tstats to show all the source types, and for any with no data coming in I want to show 0. I'm having trouble using tstats or timechart; it only works with chart right now. Is there a way to solve this problem? Please help, thank you!
This is what I have now:
index=* | chart count by Sourcetype | append [| inputlookup "Sourcetype.csv" | eval count=0]
*** I would like to have timechart or tstats because I'm trying to use Trellis visualization***
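For reference, this is the direction I was trying with tstats, assuming the lookup's column is named Sourcetype like in the search above; the idea is to append zero-count rows for every known sourcetype and then keep the larger count per sourcetype:
| tstats count where index=* by sourcetype | rename sourcetype AS Sourcetype | append [| inputlookup "Sourcetype.csv" | eval count=0] | stats max(count) as count by Sourcetype
But I haven't figured out how to keep a _time axis for timechart/Trellis with this.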
↧