Hello,
I converted my panels to HTML a few years ago, and they have worked just fine all these years.
Since around the 12th of May 2019, my HTML panels have stopped retrieving data; now all I get is “Search is waiting for input...”, and no data arrives.
I’m using Chrome Version 74.0.3729.131.
Please note that when using Chrome version 73.0.3683.103 the HTML panels work just fine (even today).
I’m using Splunk version 6.3.3.
Please note that regular panels do not have a problem.
Does anyone know why?
Please help
Thanks
↧
HTML panel stopped retrieving data
↧
Base Search returning different results than normal search
As the title suggests, I'm having issues with a base search that I'm trying to create. The base search uses tokens to pull info from a data model, and the actual search uses stats to get a count of vendor products. The issue I'm having is that the search runs normally without the base search, but when it is split up using the base search there is information missing. Clicking the magnifying glass (in the table with the missing info) opens a new search that recombines the base and post-process searches, and that search comes up with the correct info. I'm baffled as to why this is happening. I've done research about this issue and all that I've found is this question - https://answers.splunk.com/answers/608175/splunk-dashboard-base-search-gives-result-which-is.html
As far as I know it shouldn't be an issue with limits.conf because the search is returning less than 50 results.
Base Search:
$control_token_visualizations$
|from datamodel:"Malware.Malware_Attacks"
|search $env_tok$ dest="*$hostname_tok$*"$avtype_tok$ vendor_product="$vendor_tok$" sourcetype!=carbonblack:defense:json $time_tok.earliest$ $time_tok.latest$
Continued search:
Top Destinations
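For reference, a minimal Simple XML sketch of the base/post-process layout I'm describing; the search id and the exact stats clause are placeholders, not my real dashboard:
<form>
  <!-- base search: shared by every panel that references id="base_malware" -->
  <search id="base_malware">
    <query>
      | from datamodel:"Malware.Malware_Attacks"
      | search $env_tok$ dest="*$hostname_tok$*" $avtype_tok$ vendor_product="$vendor_tok$" sourcetype!=carbonblack:defense:json
    </query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <row>
    <panel>
      <title>Top Destinations</title>
      <table>
        <!-- post-process search: runs against the events already returned by the base search -->
        <search base="base_malware">
          <query>stats count by vendor_product</query>
        </search>
      </table>
    </panel>
  </row>
</form>
One thing I'm still checking: if a base search returns raw events rather than transformed results, the post-process side can be silently truncated once the base search exceeds the post-process event limit, so missing rows don't necessarily mean a limits.conf result cap was hit.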
↧
↧
Finding the earliestTime and latestTime of hot/warm/cold buckets
I'm unclear if this is the correct way to go about finding the earliest/latest event time in a bucket.
| dbinspect index=wineventlog state=warm
| search tsidxState="full"
| eval sizeOnDiskGB=round(sizeOnDiskMB / 1024, 2)
| stats min(startEpoch) as earliestTime, max(endEpoch) as latestTime, count(path) as numberOfBuckets, sum(sizeOnDiskGB) as totalSizeOnDiskGB by splunk_server
| eval earliestTime=strftime(earliestTime,"%Y/%m/%d %H:%M:%S")
| eval latestTime=strftime(latestTime,"%Y/%m/%d %H:%M:%S")
For this example, I'm specifically looking at finding the earliestTime in warm buckets. I set the time picker and found a date that may be what I'm looking for, but I'm not sure if this is how I should go about finding this info.
↧
High CPU usage and kvstore errors with Splunk Stream
I'm having issues with Splunk Stream consuming all of my deployment server's CPU. My deployment server is constantly at 99% CPU usage. I have 165 deployment clients, and only 150 of those clients are Stream forwarders. The phone home interval is set to 600 and the ping interval is set to 900. I'm running Splunk Enterprise 7.2.5.1, Stream 7.1.3, and the UFs are 7.2.5.1.
Approximately every two seconds I see these logs in splunk_app_stream.log:
2019-05-15 14:44:41,138 INFO stream_kvstore_utils:178 - search_head_shc_member:: server_roles [u'license_master', u'cluster_search_head', u'deployment_server', u'search_head', u'search_peer', u'kv_store']
2019-05-15 14:44:41,138 INFO stream_kvstore_utils:177 - is_kv_store_ready, kv store status :: ready
2019-05-15 14:44:41,138 INFO stream_kvstore_utils:176 - splunk fatal error: False kv store fatal error: False
Running ps -aux, I see these processes, each consuming 5 to 25% of my CPU:
splunk 19594 20.0 0.1 154652 29656 ? S 17:17 0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
splunk 19613 21.0 0.1 154652 29880 ? S 17:17 0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
splunk 19615 20.0 0.1 154652 29808 ? S 17:17 0:00 /opt/splunk/bin/python /opt/splunk/bin/runScript.py rest_validate_streamfwdauth.ValidateStreamfwdAuth
Anyone else run into the same issues?
↧
Unable to login to local-account for appdynamics using splunk plugin
We're getting the following error message when trying to log in using the AppDynamics add-on:
05-15-2019 11:43:53.288 -0400 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_AppDynamics/bin/appdynamics_api.py" ERROR403 Client Error: Forbidden for url: https://xxxx-prod.saas.appdynamics.com/controller/rest/applications?output=JSON&time-range-type=BEFORE_NOW&duration-in-mins=5
I've filled out the configuration form with the collector URL set to https://account-name.saas.appdynamics.com; the AppD UserID is the user ID we've set up as a local account, and the same goes for the password. The AppD account name is set correctly as well.
As an aside, I'm able to log in to the AppD web portal using the same credentials I'm trying with the add-on.
Any help would be appreciated. Thank you.
↧
↧
Newly added Splunk alert action doesn't show in Alert
I have just added 2 new alert actions in Splunk. I verified that the permissions on the alert actions are set to read for everyone, and the app containing the alert actions is shared globally. Still, I am unable to see the new alert actions in an alert that is already configured.
The alert actions are being distributed via deployment server to two search heads.
What am I missing?
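In case it frames the question better, this is the kind of metadata I assumed would expose the actions everywhere (the stanza names are placeholders for my actual alert actions, and I'm not sure this is the missing piece):
# metadata/local.meta in the app that ships the alert actions
[alert_actions/my_action_one]
export = system
access = read : [ * ], write : [ admin ]

[alert_actions/my_action_two]
export = system
access = read : [ * ], write : [ admin ]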
↧
Appending a static lookup
Hi all,
I'm looking for a way to append the contents of a CSV table to any search I make as an additional column. For example I have the following search:
index=X sourcetype=Y name=Z | dedup host-ip, plugin_name, plugin_family, severity, "ports{}.port", "ports{}.protocol" | chart count by host-ip
This gives me a table like this:
host-ip count
1.1.1.1 3
2.2.2.2 4
3.3.3.3 7
I thought that adding `|inputlookup append=t testing.csv` would add an additional column with the results from the lookup table, but instead it adds them under the results from the search. Is there a command or parameter in the lookup command that can help me with this case?
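To illustrate the shape I'm after, this is roughly what I expected to end up with; testing.csv and its columns are just my example lookup, and I'm not sure appendcols is the right command for this:
index=X sourcetype=Y name=Z
| dedup host-ip, plugin_name, plugin_family, severity, "ports{}.port", "ports{}.protocol"
| chart count by host-ip
| appendcols [| inputlookup testing.csv]
As I understand it, appendcols pastes the subsearch rows alongside the existing rows by row order; if the CSV actually needs to be matched on host-ip, the lookup command with a matching field would be the other shape to try.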
↧
Field Aliases and Extractions -- overlap or order of operations causing issue
So I have an event: <164>2019-05-14T22:04:15.161Z hostname Hostd: Rejected password for user myuser from 192.168.1.10
The user field is not extracted automatically, so I created (via the web UI) an extraction:
[source::VMware:esxlog:source::tcp:1514]
EXTRACT-username-esxi-extraction = (?=[^f]*(?:for user|f.*for user))^(?:[^ \n]* ){7}(?P<username>\w+)
This extraction works great when I do:
mysearch | rex "(?=[^f]*(?:for user|f.*for user))^(?:[^ \n]* ){7}(?P<username>\w+)"
(See sample: https://www.regextester.com/?fam=109334)
Unfortunately, if I just run the search without the REX (the props.conf extraction should handle it fine), I get nothing.
Messing around with it, I found that if I changed `(?P<username>\w+)` to something like `(?P<xxxx>\w+)`, it works!
So, I thought maybe there was some overlap, but I don't know how/why that would be an issue. I don't know what to look for in the btool readout -- it looks fine.
So then I thought, OK! I'll alias "xxxx" over to "username". Hacky, but I'm so tired of this stupid extraction by now that I don't even care.
And that leads me to this clever alias:
[VMware:esxlog:Hostd]
FIELDALIAS-normalize_username_esxi_hostd = Username as username user AS username xxxx AS username
But this does nothing! The other two still seem to work (Username and user), but "xxxx" is a dud.
I checked the order of things here:
https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Searchtimeoperationssequence
I see no reason my alias would not work on an extracted field.
Any suggestions or some glaring error I am missing?
↧
Host transforms not working
Hello All,
I have the following props and transforms:
**Props.conf**
[host::splunk-sh1]
TRANSFORMS-vdisyslogs = set_host
**Transforms.conf**
[set_host]
REGEX = [ies|wv|inn].*.mentorg.com
DEST_KEY = MetaData:Host
FORMAT = host::$1
But the host value is set to the literal string $1 and not to the matched ies|wv|inn...mentorg.com hostname. The pattern works when I run the following search:
index="remoteaccess" sourcetype="vdi:syslogs"
| rex field=_raw "(?[ies|wv|inn].*.mentorg.com)"
What do I have wrong and why is it wrong?
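For comparison, this is the shape I'd expect if FORMAT = host::$1 needs an explicit capturing group in REGEX to refer to (the regex below is an untested sketch, not my working config):
[set_host]
# $1 refers to the first capturing group in REGEX, so the hostname is captured explicitly
REGEX = ((?:ies|wv|inn)\S+\.mentorg\.com)
DEST_KEY = MetaData:Host
FORMAT = host::$1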
Thanks
ed
↧
↧
Using Heavy Forwarder to Send Subset of Data to 3rd Party and Not Index
Having issues with routing data to a 3rd party and then dropping the events from being indexed. The Windows events are being sent to the 3rd party but are also being indexed. I currently have a case open with support but wanted to ask the question here to see if anyone has dealt with this issue before.
-bash-4.2$ more props.conf
[source::WinEventLog:Security]
TRANSFORMS-pta = pta_syslog_filter
[WinEventLog:Security]
TRANSFORMS-eventcodes = badevents
-bash-4.2$ more transforms.conf
#Send eventcode 4624 to 3rd party
[pta_syslog_filter]
REGEX = .*EventCode=4624.*
DEST_KEY =_SYSLOG_ROUTING
FORMAT = pta_syslog
# Windows events to drop. If I add 4624 below, the events are not sent to the 3rd party.
[badevents]
REGEX=(?m)EventCode=(4634|560|562|5156|4689|4648|4662|4769|5061|5058)
DEST_KEY=queue
FORMAT=nullQueue
-bash-4.2$ more outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[syslog:pta_syslog]
server = 3rdPartyHostIP:11514
sendCookedData = false
type=udp
timestampformat = %s
# Splunk indexers
[tcpout:default-autolb-group]
server = indexer1:9997, indexer2:9997
autoLB = true
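One variation I've been considering (not verified to fix the 4624 case) is listing both transforms in a single props.conf setting so their order is explicit, since transforms named in one comma-separated TRANSFORMS- list are applied left to right:
[WinEventLog:Security]
# route matching events to syslog first, then apply the drop list
TRANSFORMS-route_then_drop = pta_syslog_filter, badevents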
↧
How to format exceptions with log4j to stdout
When we had Splunk processing log files, the exceptions looked fine. But now that we are processing just the stdout from Docker, the exception stack traces were appearing as separate log messages. I fixed that, in a way, by removing the line breaks in the message itself with a pattern in log4j. Now I want the stack trace to appear as separate lines when I view the exception. Is there some character or string I can substitute so that each method in the stack trace appears as a separate line, but not as a separate message?
Here is my current log4j pattern:
%d{ISO8601} %-5p %C{1} - [%x] %m %throwable{separator(\n)}%n
In Splunk it appears with a literal "\n" instead of a line break.
What is the correct Pattern I should use?
↧
HTML dashboard stopped retrieving data
Hello,
I converted my XML dashboards to HTML a few years ago, and they have worked just fine all these years.
Since around the 12th of May 2019, my HTML panels have stopped retrieving data; now all I get is “Search is waiting for input...”, and no data arrives.
I’m using Chrome Version 74.0.3729.131.
Please note that when using Chrome version 73.0.3683.103 the HTML panels work just fine (even today).
I’m using Splunk version 6.3.3.
Please note that regular (XML) dashboards do not have a problem.
Does anyone know why?
Please help
Thanks
↧
Pushing self signed certificates to universal forwarders
Is there a Splunk recommended solution to pushing self signed SSL certificates to thousands of universal forwarders?
We tried bundling the certificates into an app and pushing it out to the universal forwarders. However, I believe the existing configurations in $SPLUNK_HOME/etc/system/local will override the configurations set within the app.
Is there a way around this or is there a better alternative solution?
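For context, this is roughly the app we tried to push; the app name, paths, and server list are placeholders, and the SSL attribute names are what I recall from the 7.x outputs.conf docs, so they're worth double-checking:
# deployed app: $SPLUNK_HOME/etc/apps/org_all_forwarder_ssl/local/outputs.conf
[tcpout:primary_indexers]
server = indexer1:9997, indexer2:9997
sslRootCAPath = $SPLUNK_HOME/etc/apps/org_all_forwarder_ssl/auth/ca.pem
sslCertPath = $SPLUNK_HOME/etc/apps/org_all_forwarder_ssl/auth/client.pem
sslPassword = <redacted>
The concern is precedence: any [tcpout] settings already present under $SPLUNK_HOME/etc/system/local win over the deployed app, so they would have to be removed there for the app's copy to take effect.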
↧
↧
Splunk Docker Failing when specifying volume mounts
I've successfully run a Splunk instance using the Splunk-provided run command. I then made a compatible Docker Compose version of the same command, and it runs fine. The issue comes when I want to persist the volume mounts. The Splunk image creates two volumes:
/opt/splunk/etc
/opt/splunk/var
So I added volume mounts to my compose file:
volumes:
- /local/path/for/persistence:/opt/splunk/var
- /local/path/for/persistence:/opt/splunk/etc
Now the container fails with output:
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["/opt/splunk/bin/splunk", "start", "--accept-license", "--answer-yes", "--no-prompt"], "delta": "0:00:03.109600", "end": "2019-05-15 19:46:49.719364", "msg": "non-zero return code", "rc": 10, "start": "2019-05-15 19:46:46.609764", "stderr": "homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.\nValidating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue", "stderr_lines": ["homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.", "Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue"], "stdout": "\nSplunk> Finding your faults, just like mom.\n\nChecking prerequisites...\n\tChecking http port [8000]: open\n\tChecking mgmt port [8089]: open\n\tChecking appserver port [127.0.0.1:8065]: open\n\tChecking kvstore port [8191]: open\n\tChecking configuration... Done.\nNew certs have been generated in '/opt/splunk/etc/auth'.\n\tChecking critical directories...\tDone\n\tChecking indexes...\n\t\tCreating: /opt/splunk/var/run/splunk/appserver/i18n\n\t\tCreating: /opt/splunk/var/run/splunk/appserver/modules/static/css\n\t\tCreating: /opt/splunk/var/run/splunk/upload\n\t\tCreating: /opt/splunk/var/spool/splunk\n\t\tCreating: /opt/splunk/var/spool/dirmoncache\n\t\tCreating: /opt/splunk/var/lib/splunk/authDb\n\t\tCreating: /opt/splunk/var/lib/splunk/hashDb", "stdout_lines": ["", "Splunk> Finding your faults, just like mom.", "", "Checking prerequisites...", "\tChecking http port [8000]: open", "\tChecking mgmt port [8089]: open", "\tChecking appserver port [127.0.0.1:8065]: open", "\tChecking kvstore port [8191]: open", "\tChecking configuration... Done.", "New certs have been generated in '/opt/splunk/etc/auth'.", "\tChecking critical directories...\tDone", "\tChecking indexes...", "\t\tCreating: /opt/splunk/var/run/splunk/appserver/i18n", "\t\tCreating: /opt/splunk/var/run/splunk/appserver/modules/static/css", "\t\tCreating: /opt/splunk/var/run/splunk/upload", "\t\tCreating: /opt/splunk/var/spool/splunk", "\t\tCreating: /opt/splunk/var/spool/dirmoncache", "\t\tCreating: /opt/splunk/var/lib/splunk/authDb", "\t\tCreating: /opt/splunk/var/lib/splunk/hashDb"]}
I cannot figure out why this will not work. Everything works until I persist the volumes. If I can't persist the data, then running Splunk is useless.
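For reference, the variant I'm trying next, with the two mounts pointed at separate host directories; the paths, password, and image tag are placeholders:
version: "3.6"
services:
  splunk:
    image: splunk/splunk:latest
    environment:
      - SPLUNK_START_ARGS=--accept-license
      - SPLUNK_PASSWORD=ChangeMe123!
    ports:
      - "8000:8000"
    volumes:
      # separate, pre-created host directories, writable by the container's splunk user
      - ./splunk-etc:/opt/splunk/etc
      - ./splunk-var:/opt/splunk/var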
↧
Need value by time
Hello, I have a command which gives a value, e.g. "172". The value basically changes when LDAP users are added or removed, and I need to get the value by time. There are no logs generated over time or anything like that; it's just a total number that we can see.
So the question is: is there any way we can get this value by time, so that if we run a dashboard we get these fields and a count by week or something similar?
| ciscoaxlquery "
SELECT count(primarynodeid) from enduser"
It basically gives these fields:
Host | User | count | port
ccm.corp.exp.com | sideview-cdr | 172 | 8443
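What I had in mind is a scheduled search that snapshots the number into a summary index so it can be charted over time; the index name and the weekly span below are placeholders, and the search would need to run on a schedule (e.g. daily):
| ciscoaxlquery "SELECT count(primarynodeid) from enduser"
| eval _time=now()
| collect index=ldap_summary
and then a dashboard panel over the snapshots, for example:
index=ldap_summary
| timechart span=1w max(count) as ldap_users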
↧
How can I group results without duplicates?
Hi
This is my command to find the number of times an authentication has been rejected.
But I would like to be able to eliminate duplicated results. For example, I only have 2 hosts, but as I have 24 IPs, the "host" value appears 25 times.
index=cisco_asa eventtype=cisco_authentication vendor_action="authentication Rejected"
| stats count by IP host server
| sort -count
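If the goal is one row per host rather than one per IP, I think the shape is something like this, with dc() counting the distinct IPs (field names taken from my search above):
index=cisco_asa eventtype=cisco_authentication vendor_action="authentication Rejected"
| stats count as rejections, dc(IP) as distinct_ips by host, server
| sort -rejections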
Thank you
↧
Set multiple tokens on a single Table view
Hello
I am trying to see how we can set two tokens from the table below. Whatever I try, I only seem to be able to get one value at a time. Is it possible to allow users to click on two values in a table and, once they click on Submit, pass those tokens over to downstream searches?
Group1
A
B
C
D
Setting the drilldown as
$click.value$ $click.value1$
Want to write the HTML as
Your Picks: $click.value$, $click.value1$ -- I tried this and it always shows a single value in both.
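For reference, the drilldown block I'm working from looks roughly like this; the token names and the query are placeholders, and $click.value$ / $click.value2$ are the predefined tokens as I understand them (first-column value of the clicked row, and the clicked cell's value):
<table>
  <search>
    <query>| inputlookup my_groups.csv | table Group1</query>
  </search>
  <drilldown>
    <set token="pick1">$click.value$</set>
    <set token="pick2">$click.value2$</set>
  </drilldown>
</table>
This still only captures one click at a time, though; accumulating values from two separate clicks before Submit would seem to need something like an <eval> drilldown element that appends each clicked value onto a running token, which is the part I haven't gotten working.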
↧
↧
High end Google maps viz
Hi Team,
I have been using maps within Splunk for a good amount of time. By default, the map always shows a white background for any country/place that is not highlighted. Is there any way we can change the background color of the map before working on it? Something like this:
https://csaladenes.wordpress.com/2018/02/18/motorsports-asian-pull-and-release/
OR
https://danielmiessler.com/blog/visualizing-interesting-log-events-using-splunks-google-maps-application/
(Note that I am good up to the country-level view; I don't need a state/city view.)
Thanks in advance
↧
Restoring KV Store Collection: Socket Error or Timeout
Hi everyone!
I'm trying to restore a KV store collection with about 100K records. I am consistently encountering the following error around the 5 minute mark when the restoration process is running:
05-14-2019 10:54:16.517 -0400 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to send "update" command with database "s_cpz-coU94dWXKCN+jKROv8LHlXVG8O_testingcollectionBfrJQ4CnAkHyN4DnrvYrGQiU": Failed to read 4 bytes: socket error or timeout
05-14-2019 10:54:16.741 -0400 WARN KVStoreAdminHandler - No data found to restore matching specified parameters archiveName="backedup_collections_100k.tar.gz", appName="cpz-great-app", collectionName="testingcollection"
05-14-2019 10:54:16.741 -0400 ERROR KVStoreAdminHandler - [ "{ \"ErrorMessage\" : \"Failed to send \\\"update\\\" command with database \\\"s_cpz-coU94dWXKCN+jKROv8LHlXVG8O_testingcollectionBfrJQ4CnAkHyN4DnrvYrGQiU\\\": Failed to read 4 bytes: socket error or timeout\" }" ]\n
Based on discussions within this thread: https://answers.splunk.com/answers/682228/attempting-to-restore-a-kvstore-collection-has-any-1.html
I have also set the following within **limits.conf**:
[kvstore]
max_documents_per_batch_save = 1000000
max_rows_per_query = 1000000
The fact that it consistently errors out around the 5-minute mark leads me to wonder whether there's a timeout value within one of the .conf settings.
Any thoughts on this would be greatly appreciated!
↧
Adding Can delete Role to User LDAP
Hi, I have admin privileges in Splunk, but when I try to delete some data it says insufficient privileges. We are managing roles through LDAP. When I go to user access control, click on the user, and try to add the can_delete role, I am unable to do it. Do I need to add the role through the LDAP group that I belong to, and assign the role there?
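If it matters, this is the kind of mapping I was thinking of on the LDAP side, in authentication.conf; the strategy name and group DNs below are placeholders:
[roleMap_My_LDAP_Strategy]
admin = CN=Splunk Admins,OU=Groups,DC=example,DC=com
can_delete = CN=Splunk Deleters,OU=Groups,DC=example,DC=com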
↧