Channel: Questions in topic: "splunk-enterprise"

What is the difference between the ‘Box add-on’ and the ‘Box app’?

There seem to be two downloadable installers for integrating with Box: the ‘Splunk Add-on for Box’ and the ‘Box App for Splunk’. What is the difference between the two? The add-on's published date is much more recent. Thank you!

Is there a simple way to version control (in GitHub) a Splunk add-on project?

Hi, I've been working on an add-on that I created using Splunk Add-on Builder. I would like to save the source code (data sources, Python, and shell) in GitHub so I can manage the versioning. Is there a simple way to do it? Thanks
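Not part of the question, but a minimal sketch of one way to do it, assuming the generated add-on lives in the standard apps directory under a hypothetical name TA-myaddon: put the generated app folder itself under git and ignore the state Splunk writes at run time.

    cd $SPLUNK_HOME/etc/apps/TA-myaddon

    # Keep runtime/user state out of version control.
    cat > .gitignore <<'EOF'
    local/
    metadata/local.meta
    *.pyc
    EOF

    git init
    git add .
    git commit -m "Initial import of TA-myaddon"
    # Hypothetical remote; create the GitHub repo first.
    git remote add origin git@github.com:example/TA-myaddon.git
    git push -u origin master

One caveat: Add-on Builder appears to keep its own project metadata outside this folder, so the repository captures the generated app but not the builder project state itself.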

In this Splunk app, how do I display event monitoring data?

I have done all the required configurations. I am trying to display the records from my Salesforce developer org.

Deleting an add-on created by Add-on Builder

Hi, I removed an add-on manually by deleting the folder $SPLUNK_HOME/etc/apps/TA-myProject, but now I'm having an issue when I try to import the same project. Here is the error message:

    The 'TA-myProject' add-on project could not be imported because an add-on with this name already exists.

Could you please help? Many thanks in advance.
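Not from the post, but a plausible explanation is that Add-on Builder tracks its projects outside $SPLUNK_HOME/etc/apps/TA-myProject, so deleting that folder alone leaves the project registered. A sketch for locating the leftover state before removing anything, assuming the builder app's directory is splunk_app_addon-builder and using placeholder credentials:

    # Look for leftover references to the deleted project.
    grep -r "TA-myProject" "$SPLUNK_HOME/etc/apps/splunk_app_addon-builder/local" 2>/dev/null

    # Project metadata may also live in the KV store; list the app's
    # collections via REST and inspect before deleting anything.
    curl -k -u admin:changeme \
      "https://localhost:8089/servicesNS/nobody/splunk_app_addon-builder/storage/collections/config"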

How to send an alert if a value in events stays at 0 for several consecutive minutes

I want to set up an alert based on the table below.

    _time            A    B    C
    11-09-18 9:05   10    8    8
    11-09-18 9:06    8    4    4
    11-09-18 9:07    5    9    0
    11-09-18 9:08    0    7    0
    11-09-18 9:09    0    5    0
    11-09-18 9:10    0    1    0
    11-09-18 9:11    0    0    0
    11-09-18 9:12    5    0    9
    11-09-18 9:13    7    0    4
    11-09-18 9:14    9    0    5

I want to set up an alert if any of A, B, or C is 0 for 5 consecutive minutes. The alert should specify which of A, B, or C was 0 for 5 continuous minutes.
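Not an answer from the thread, but a sketch of one way to detect the run of zeros, assuming the base search returns one row per minute with numeric columns A, B, and C that are never negative (so a five-row sum of 0 implies five zeros in a row):

    <base search producing _time, A, B, C>
    | untable _time series value
    | sort 0 series _time
    | streamstats window=5 sum(value) AS window_sum count AS window_n BY series
    | where window_n=5 AND window_sum=0
    | stats values(series) AS zero_for_5_minutes

untable turns the columns into one row per (_time, series) pair, and the streamstats window slides over each series independently, so the final result names exactly which of A, B, or C flatlined.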

Saved searches: how to find the run time of each scheduled search

I'm working with a similar issue as: https://answers.splunk.com/answers/512103/how-to-get-a-list-of-schedules-searches-reports-al.html The addendum to that is I want to find the run time of each of the searches. I'm thinking perhaps too many searches are running at the same time, which is causing Splunk inter-connectivity issues. It would be really nice to have each scheduled job's schedule and the amount of time it took to run the last time (or the last several times).
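Not part of the question, but the scheduler's internal logs may already have this; a sketch (field names as I recall them from scheduler.log, so verify against your events, and it requires access to the _internal index):

    index=_internal sourcetype=scheduler
    | stats count AS runs avg(run_time) AS avg_runtime_sec max(run_time) AS max_runtime_sec latest(_time) AS last_run BY savedsearch_name app
    | sort - avg_runtime_sec
    | convert ctime(last_run)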

Is there an easy way to pair two events with the same sourcetype that have the same values in different fields?

I am looking for an elegant solution to the following problem: I want to summarize data from two different events which have the same sourcetype/index/etc., but which have identical values in two different fields.

Event A:

    sourcetype=foo ComputerName=homepc FileName=example.exe PID=3333 PPID=2222

Event B:

    sourcetype=foo ComputerName=homepc FileName=parent.exe PID=2222 PPID=1111

I want to group data from both events into one summarized line as follows:

    ComputerName   FileName      PID    ParentFileName   PPID
    homepc         example.exe   3333   parent.exe       2222

I have attempted to accomplish this via join and it does seem to work, but I am aware this is not an ideal solution:

    index=_internal sourcetype=foo
    | table ComputerName FileName PID PPID
    | rename FileName as Child_FileName, PID as Child_PID, PPID as Parent_PID
    | join Parent_PID ComputerName
        [ search index=_internal sourcetype=foo
          | table ComputerName FileName PID
          | rename FileName as Parent_FileName, PID as Parent_PID ]

If the sourcetypes in the two searches were different, I know I could easily accomplish this via a string of evals and stats. Thanks for any suggestions!
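Not from the thread, but one join-free sketch, assuming PIDs are not reused within the search window: fan each event out once as a potential parent (keyed by its PID) and once as a child (keyed by its PPID), then let stats reunite the two roles.

    index=_internal sourcetype=foo
    | eval keys=mvappend("child,".PPID, "parent,".PID)
    | mvexpand keys
    | eval role=mvindex(split(keys, ","), 0), link_pid=mvindex(split(keys, ","), 1)
    | eval ChildFile=if(role="child", FileName, null()), ChildPID=if(role="child", PID, null()), ParentFile=if(role="parent", FileName, null())
    | stats values(ChildFile) AS FileName values(ChildPID) AS PID values(ParentFile) AS ParentFileName BY ComputerName link_pid
    | where isnotnull(FileName) AND isnotnull(ParentFileName)
    | rename link_pid AS PPID
    | table ComputerName FileName PID ParentFileName PPID

Each original event appears twice, once per role; grouping by ComputerName and the shared PID value joins child rows to their parent without a subsearch. If a parent has several children, values() collects them all on one line.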

Help with changing IP address on one of the Search Heads in Cluster

Due to infrastructure issues, I need to change the IP address of one of my search heads (there are two in the cluster, one in each location). What changes would need to occur to make this work? From a config standpoint, everything points to the deployment server and cluster master for their configurations. Would anything have to change in terms of configuration? I can't seem to find anything that points specifically to the IP address of the search head.
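If nothing in the configs references the address directly, the change may be transparent; a quick sanity check (a sketch, with 192.0.2.10 standing in for the search head's old IP) is to grep the configuration tree on each instance for it:

    # Hypothetical old address; substitute the real one.
    grep -R "192.0.2.10" "$SPLUNK_HOME/etc" --include="*.conf"

Also remember DNS, load balancer, and firewall entries outside Splunk that may carry the old address.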

How do I use a sparkline within tstats to visualize data feed over the last 24 hours?

I want to use a `tstats` command to get a count of various indexes over the last 24 hours. I also want to include the latest event time of each index (so I know logs are still coming in) and add a sparkline to see the trend. I'm having trouble, as the sparkline is grouping everything into one rather than splitting by index. I referenced [this post][1], but am stuck.

    | tstats count where (index="email" OR index="b" OR index="ids" OR index="web") BY index _time span=10m
    | stats sparkline(sum(count), 10m) AS Volume

[screenshot]

Basically, I'm trying to make a tstats version of this:

[screenshot]

    index="a" OR index="b" OR index="c" OR index="d" OR index="e" OR index="f" OR index="g"
    | stats sparkline count latest(_time) AS Latest BY index
    | convert ctime(Latest)

[1]: https://answers.splunk.com/answers/500896/using-a-sparkline-with-tstats.html
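For what it's worth, the single merged sparkline in the first search looks like a missing BY clause on the stats command; a sketch of the adjusted search (untested):

    | tstats count where (index="email" OR index="b" OR index="ids" OR index="web") BY index _time span=10m
    | stats sparkline(sum(count), 10m) AS Volume sum(count) AS Total max(_time) AS Latest BY index
    | convert ctime(Latest)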

Formatting the app logo and text

Hi, I have created an app and am looking to format the app name and logo.

**Requirement:**
- App logo: change the size of the app logo
- App name: hide the app name, or change its color

For example (this is just for reference):

[screenshot]
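Not from the post, but the usual knobs, as best I recall them: the app icon is a static file inside the app (appserver/static/appIcon.png at 36x36, plus appIcon_2x.png at 72x72 for high-DPI), and the app bar color can be set on the nav element. A sketch of default/data/ui/nav/default.xml:

    <nav search_view="search" color="#65A637">
      <view name="search" default="true" />
    </nav>

Resizing the rendered logo or hiding the app name text is not exposed as a setting, so that part typically needs custom CSS in the app.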

Splunk_TA_nix

Hello All, On quite a few of our Splunk servers we are running Splunk as a non-root user, and we deploy Splunk_TA_nix 6.0.0 to all our Linux clients. Quite a few of the scripts that run as part of the TA-nix add-on require root privileges to execute properly. How do I get around this? Thanks, Ed
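One common workaround (an assumption on my part, not something from the post) is to grant the Splunk user passwordless sudo for just the scripts that need elevation, then adjust those scripts to invoke their privileged commands via sudo. A sketch, assuming the forwarder runs as user splunk under /opt/splunkforwarder:

    # /etc/sudoers.d/splunk_ta_nix  (edit with: visudo -f /etc/sudoers.d/splunk_ta_nix)
    # Hypothetical entries; list only the specific scripts that need root.
    splunk ALL=(root) NOPASSWD: /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/lsof.sh
    splunk ALL=(root) NOPASSWD: /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/netstat.sh

Keeping the list narrow preserves most of the benefit of running Splunk as a non-root user.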

Transforms index-time field extraction producing unexpected results

The field extraction works for nearly all events, except for events where the line count is over 450. The returned value of the extraction for such events is about 27 lines, or 2500+ characters, long. The extracted field ends with the following pattern (regex for security): \w+?\s\|\s\d{9} and the pattern that follows the extracted field is \=(\w+?\.){5}\w+. I am aware that I should probably do this extraction at search time, but I have been overruled on that matter. Here are some relevant configurations:

props.conf:

    BREAK_ONLY_BEFORE = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} \[
    ADD_EXTRA_TIME_FIELDS = True
    ANNOTATE_PUNCT = true
    AUTO_KV_JSON = true
    BREAK_ONLY_BEFORE_DATE = true
    DEPTH_LIMIT = 1000
    FIELD_HEADER_REGEX = \[*
    LEARN_MODEL = true
    LEARN_SOURCETYPE = true
    LINE_BREAKER_LOOKBEHIND = 100
    MATCH_LIMIT = 100000
    MAX_DAYS_AGO = 2000
    MAX_DAYS_HENCE = 2
    MAX_DIFF_SECS_AGO = 3600
    MAX_DIFF_SECS_HENCE = 604800
    MAX_EVENTS = 500
    MAX_TIMESTAMP_LOOKAHEAD = 128
    NO_BINARY_CHECK = true
    SEGMENTATION = indexing
    SEGMENTATION-all = full
    SEGMENTATION-inner = inner
    SEGMENTATION-outer = outer
    SEGMENTATION-raw = none
    SEGMENTATION-standard = standard
    SHOULD_LINEMERGE = true
    TRANSFORMS-sesh_vars = sesh_vars
    ### VARIOUS TRANSFORMS FIELD EXTRACTIONS HERE
    TRUNCATE = 50000
    detect_trailing_nulls = false
    disabled = false
    maxDist = 100
    category = Custom

transforms.conf:

    [sesh_vars]
    REGEX = (?m)Session\s+(?(.+\s*)+?)(?=Additional|$)
    WRITE_META = true
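One setting worth checking (an educated guess, not something established in the post): index-time transforms only scan a limited number of characters into each event, controlled by LOOKAHEAD in transforms.conf, which defaults to 4096. A match that consistently cuts off after roughly 2500+ characters on very long events is at least consistent with hitting a scan limit. A sketch of the change:

    [sesh_vars]
    REGEX = (?m)Session\s+(?(.+\s*)+?)(?=Additional|$)
    WRITE_META = true
    # LOOKAHEAD defaults to 4096 characters; illustrative value, sized
    # to cover the longest 450+ line events in full.
    LOOKAHEAD = 65536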

Help with PrintService log ingestion

I'm trying to ingest Windows PrintService logs into our distributed environment. I've got a dedicated index, and have built an app with an inputs.conf containing the following:

    [WinEventLog://Microsoft-Windows-PrintService/Admin]
    index = winprintlog
    disabled = 0
    start_from = oldest

The app is distributed via server class from our deployment server. I've confirmed the print servers have the app and the config file. I've restarted Splunk on the deployment server and manually restarted several forwarder services, but none of the servers are sending log data. I don't know that there have been any new events since I deployed (they seem rare enough), but this same config was used in a custom Windows PowerShell log app and it pulled all of the historical logs as well, of which there are plenty for the PrintService. What am I missing?
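Not from the post, but one thing worth verifying on a print server is that the channel exists under exactly that name and is enabled; some PrintService channels (notably Operational) ship disabled on Windows. A quick PowerShell check:

    # List PrintService channels, whether each is enabled, and the record count;
    # the inputs.conf stanza name must match LogName exactly.
    Get-WinEvent -ListLog "Microsoft-Windows-PrintService/*" |
        Format-Table LogName, IsEnabled, RecordCount -AutoSize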

User permissions: Why are ldapfilter, ldapgroups, and ldapfetch not working with a non-default stanza?

Hi forum, I'm trying to set up SA-ldapsearch for multiple clients. The whole idea is that a client is not allowed to use lookups of another client while using the same search head. I tried to copy SA-ldapsearch and rename the app in app.conf to give application-level permissions. Every ldap app has its own ldap.conf, which looks like this:

    [bwtest.loc]
    alternatedomain = BWTEST
    basedn = DC=bwtest,DC=loc
    binddn = svc-splunk@bwtest.loc
    port = 389
    server = 192.168.208.10
    ssl = 0

The ldapsearch command is working fine, but ldapfilter, ldapgroup, and ldapfetch are not:

    2018-09-11 15:42:57,500, Level=ERROR, Pid=19384, File=configuration.py, Line=407, Missing required value for alternatedomain in ldap/BWTEST.
    2018-09-11 15:52:11,294, Level=ERROR, Pid=19892, File=configuration.py, Line=407, Missing required value for alternatedomain in ldap/bwtest.loc.

It looks like configuration.py is not finding the alternatedomain in the bwtest.loc stanza. If I configure the settings in the default stanza, it works for me; unfortunately, this does not work for multiple concurrent installations. Any hints? Has anyone installed multiple instances of SA-ldapsearch on a single search head? Regards, Andreas
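Not from the thread, but a useful first check is how Splunk's layered configuration actually resolves ldap.conf for each renamed copy; btool prints the merged view plus the file each value came from. A sketch, with SA-ldapsearch-clientA as a hypothetical name for one of the copies:

    # Confirm alternatedomain shows up under [bwtest.loc] (and from which file),
    # not only under [default].
    $SPLUNK_HOME/bin/splunk btool ldap list --app=SA-ldapsearch-clientA --debug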

With too much data, is it advisable to extract data from Hive tables rather than Splunk indexes?

With too much data, is it advisable to start extracting data from Hive tables rather than Splunk indexes? Does anybody have any examples or documentation around this?