I want to create a system where my users can manage an app self-service. I am having trouble finding a way to allow a particular role to edit permissions for objects that they do not own.
Example:
I have one app - "APP - XYZ"
I have two roles:
- User Splunk XYZ - gives read-only access to app XYZ
- Editor Splunk XYZ - gives read and write access to app XYZ, and has the embed_report capability.
User 1 - Has User Splunk XYZ
Can make dashboards in APP - XYZ but cannot make them public
User 2 - Has both Editor and User roles.
User 2 can make dashboards in APP - XYZ and change permissions of their own dashboards
User 2 **cannot** change permissions of User 1's dashboards inside APP - XYZ
Is there a way to allow the Editor role the permissions to change all objects within APP - XYZ?
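For reference, per-object permissions live in the app's metadata files, and an app-level stanza can grant a role write access to every object of a type. A minimal sketch, assuming internal role names without spaces (Splunk role names can't contain spaces, so "Editor Splunk XYZ" would really be something like editor_splunk_xyz), placed in $SPLUNK_HOME/etc/apps/APP-XYZ/metadata/local.meta:
# grants the editor role write on every view in the app, owned or not
[views]
access = read : [ user_splunk_xyz, editor_splunk_xyz ], write : [ editor_splunk_xyz ]
A blanket [views] stanza like this applies to all dashboards in the app, including ones the editor role does not own.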
↧
How can I allow users to change permissions of all objects within an app?
↧
Can the field be displayed dynamically in Splunk?
I want to show the data from the last few months.
For example, in the combo box, when choosing the last one month, there is only one field (column) in the table.
When choosing the last two months, there are two fields in the same table.
When choosing the last three months, there are three fields in the same table, and so on.
![alt text][1]
Can the field be displayed dynamically in Splunk?
[1]: /storage/temp/255744-动态选择字段.png
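For reference, a sketch of the kind of search that yields one column per month, assuming the dropdown sets a dashboard token $months$ and using hypothetical index and field names:
index=myindex earliest=-$months$mon@mon latest=@mon
| eval month=strftime(_time, "%Y-%m")
| chart count over category by month
With $months$ = 1 the table has one month column; with $months$ = 3 it has three, and so on.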
↧
Palo Alto stopped logging traffic to Splunk
I am having the same issue as: https://answers.splunk.com/answers/507167/why-are-my-palo-alto-firewall-logs-not-forwarding.html . Palo Alto stopped logging traffic to Splunk after we applied an OS patch (RHEL 7.5) to the Splunk server - in this case a search head - and rebooted it. The splunkd.log didn't reveal anything beyond the fact that messages stopped after the Splunk server reboot.
No changes were made to the PA firewall appliance, and no configuration changes were made on the Splunk server prior to the patch/reboot. Everything that was working before the patch and reboot is still working, other than the PA logs. A systemctl status splunk shows that all services are enabled and active, and displays what you would expect.
There isn't much information on the forums regarding this specific topic; any help would be greatly appreciated.
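For reference, a quick way to confirm when the input last received data (a sketch; pan_logs is the Palo Alto app's usual index - adjust to match the environment):
| tstats latest(_time) as last_event where index=pan_logs by host, sourcetype
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")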
↧
How to Blacklist on a Universal Forwarder with a TCP input?
I have a UF running on a Linux device, with a TCP input. The input comes from a Graylog forwarder, and all the Windows events arrive with a winlogbeat_ prefix.
I want to blacklist Windows events by event code; normally I use a blacklist = EventCode="xxxx" Message=... entry,
however the event code comes in as winlogbeat_event_id.
I did try this:
blacklist1= winlogbeat_event_id = "4662"
This doesn't appear to work.
Can someone help with this?
Is there any log that shows events being whitelisted or blacklisted?
Thank You!
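For reference, the EventCode-style blacklist only applies to native WinEventLog inputs, so a TCP input needs a parsing-time filter instead. The usual approach is a nullQueue transform; note these settings take effect on an indexer or heavy forwarder, not on a universal forwarder. A sketch, assuming the TCP input's sourcetype is graylog_tcp (a placeholder name):
# props.conf
[graylog_tcp]
TRANSFORMS-drop4662 = drop_winlogbeat_4662
# transforms.conf - route matching events to the null queue (discard)
[drop_winlogbeat_4662]
REGEX = winlogbeat_event_id\D+4662
DEST_KEY = queue
FORMAT = nullQueue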
↧
How can I re-index license-usage.log?
Hello
Someone before me had set the license master to forward logs to the wrong hosts, so by the time I fixed it I had no historical data for license usage.
What's the best way to fix this?
Thanks for the assistance!
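For reference, one possible approach is a one-shot import of the rolled license usage files from the license master; _internal data does not count against the license. A sketch, assuming default install paths:
$SPLUNK_HOME/bin/splunk add oneshot $SPLUNK_HOME/var/log/splunk/license_usage.log.1 -sourcetype splunkd -index _internal
Repeat for each rolled file (license_usage.log.2 and so on) that hasn't already been indexed.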
↧
How to round a number when displaying results in a chart?
I am trying to display the response times of services for the last 7 days in a chart, but I want to round the response time - for example, I only want 2 digits displayed after the decimal.
My query:
| chart avg(response_time) over services by Date
| foreach * [eval response_time = round(response_time,2)]
But the above query doesn't work for me.
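For reference, after chart the columns are named after the Date values, so a field called response_time no longer exists; foreach exposes each matched column through the <<FIELD>> token. A sketch of a working variant (the isnum guard leaves the non-numeric services column alone):
| chart avg(response_time) over services by Date
| foreach * [eval <<FIELD>> = if(isnum('<<FIELD>>'), round('<<FIELD>>', 2), '<<FIELD>>')]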
↧
How to import old log files to Splunk
I have a remote server with rolling logs going back one week. I wanted to monitor those logs, so I installed a UF and set up inputs.conf. The newly created logs show up in Splunk search, but I am not able to search the week-old files. Below is my inputs.conf. Is there a way to import those old logs with the same sourcetype, same index, and same host? Thank you
Splunk: 6.6.3
[monitor://D:\xxx*.log]
disabled = false
sourcetype = AAA
ignoreOlderThan = 7d
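For reference, ignoreOlderThan = 7d makes the monitor input skip any file whose modification time is more than seven days old, which would explain the missing files. One option is a one-shot import of each old file (a sketch; the file name and index are placeholders):
splunk add oneshot "D:\xxx\old_file.log" -sourcetype AAA -index main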
↧
Is it possible to change the admin account password that we use to log in to the Splunk Cluster Master, Deployment Master, Search Head & Indexers?
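For reference, a sketch of the CLI form, which typically needs to be run on each instance since local accounts are stored per instance (the passwords shown are placeholders):
$SPLUNK_HOME/bin/splunk edit user admin -password 'newPassword' -auth admin:currentPassword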
↧
Why has the license been breaching every day since the upgrade from 6.5.3 to 7.1.2?
Recently, I upgraded my Splunk environment from version 6.5.3 to 7.1.2. Since the upgrade, the license has been breaching every day.
So I started digging into what is consuming the most and where it comes from. Nothing changed in the number of sources or the data: there is no increase in the number of events per file, and even the size of each event remains the same. I am running out of ideas on what to check next for this change in license consumption.
I know it is a bit odd, but any suggestions on what to check to pin down the root cause? Thanks in advance.
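For reference, a daily volume breakdown by sourcetype from the license master can show what changed (a sketch):
index=_internal source=*license_usage.log* type="Usage"
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) by st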
↧
What are some of the best practices for field extractions?
Hi,
There is some debate in our group regarding best practices for field extractions. We have a feed that has well-defined key-value fields. We also have field extractions set up on the SH for a number of these fields. Is there really a need for the field extractions, since key-value pairs will get picked up automatically? Pros/cons? We use CIM/ES extensively.
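For context, both behaviors are search-time settings in props.conf: automatic key-value extraction is governed by KV_MODE, while explicit extractions are EXTRACT- keys. A sketch with a hypothetical sourcetype and field:
[my:feed]
# automatic key=value extraction at search time (the default behavior the question relies on)
KV_MODE = auto
# an explicit search-time extraction for one field
EXTRACT-src_user = user=(?<src_user>\S+)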
↧
Hardcoded Time Bucketing
Hi guys,
I was recently given a new data index whose events carry hardcoded timestamps rather than relying on _time. The events are also re-indexed every night rather than being ingested when they occur, which makes this more complex. For example, an event that happened Aug 14th will carry a hardcoded epoch of Aug 14th, yet its Splunk _time is yesterday evening. Using this data I have been able to create a time chart, but I am having trouble with months that have no events: those months are skipped (see the picture below) because there is no data for them. How can I create buckets based on the hardcoded dates, or otherwise fill in these nonexistent months?
![alt text][1]
[1]: /storage/temp/254746-screen-shot-2018-08-20-at-112713-am.png
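For reference, if the embedded epoch lives in a field (event_epoch here is a hypothetical name), overwriting _time before timechart makes the buckets follow the hardcoded dates, and timechart emits every month in the search range with count=0 for the empty ones:
index=myindex
| eval _time = event_epoch
| timechart span=1mon count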
↧
How can I visualize "table _raw" in the same format as the raw events on the default Splunk search screen?
When I search for my events with index=myindex, I get my data in the proper format.
But when I try to print it out in a table, using "index=myindex | table _raw", the formatting changes and I get the data in a different format.
How can I get the output of "table _raw" to display the same way events do on the default search page?
Can it be done at the query, HTML, or CSS level?
Thanks in advance for your help.
↧
What causes this splunkd Search Head Assertion in Splunk 7.1.1?
Hello,
splunkd: /home/build/build-src/nightlight/src/framework/SearchResultsMem.cpp:839: SearchResultsMem::iterator SearchResultsMem::erase(SearchResultsMem::iterator, SearchResultsMem::iterator): Assertion `it != end()' failed.
Trying to track down RAM issues but at the same time would like more specific information on what Splunk is trying to do.
Please advise...
Mark
↧
Tracking software install/removal
For Windows, I've been trying to track installs/removals. MSI was a breeze; I'm now attempting anything that isn't MSI. I'm tracking changes in the following paths:
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
- HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall
Two issues arose:
1. Uninstalled items just delete the whole key. I'd need to do a back-reference to determine what that was.
2. Programs that upgrade tend to do another CreateKey. It's difficult to differentiate between Installs and Upgrades.
Here's an example of my search for detecting installs.
index="winregmon" process_image!=*msiexec* registry_type="SetValue" *displayname*
| join type=left max=0 host data [
search index="winregmon" process_image!=*msiexec* (registry_type="CreateKey" OR registry_type="DeleteKey") latest=-16m
| dedup host
| rename registry_type as last_registry_type
| rename data AS deleted_data]
| dedup host data
| eval Date=strftime(_time, "%m-%d-%Y")
| eval Time=strftime(_time, "%H:%M:%S")
| table host data Date Time last_registry_type
In my various modifications of this search, either I detect installs + upgrades (I just want installs) or I miss data altogether. I'm aware the search above isn't right; it's just for reference. My idea (see the sketch after this list):
- Find the most recent registry change, per host
- Back-reference to the last Key modification event, Create/Delete
- If Create, it's an upgrade. If Delete, it's an install.
- Only show Installs (DeleteKey being the last event, for that host)
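One join-free way to express the last two bullets, sketched with stats (field names follow the search above):
index="winregmon" process_image!=*msiexec* (registry_type="CreateKey" OR registry_type="DeleteKey")
| stats latest(_time) as when, latest(registry_type) as last_key_event by host
| where last_key_event="DeleteKey"
| eval Date=strftime(when, "%m-%d-%Y"), Time=strftime(when, "%H:%M:%S")
| table host Date Time last_key_event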
↧
FieldFormat Data Values
Hi,
My data set is an integer that I want to show as the integer plus "%" in the data labels. When I use the fieldformat command, the data does not show up on a column chart. Is there any way to add a percent sign to a column chart using fieldformat or an eval?
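For reference, the fieldformat form that renders an integer with a percent sign while keeping the underlying value numeric looks like this (a sketch, assuming the field is named count):
| fieldformat count = tostring(count) . "%"
An eval that appends "%" instead produces a string, which column charts cannot plot.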
↧
Adding a date to a string Message
I am trying to create an error message based on a time frame: from 15 minutes ago to now. So the error message would say,
"Client Missed file between 15:15:00 - 15:30:00"
The times are calculated at the time of the search, and the search below fails with "Error in 'eval' command: Typechecking failed. '+' only takes two strings or two numbers."
| eval 15MinEarly=strftime(relative_time(now(), "-15m"), "%m/%d/%Y %H:%M:%S")
| eval Now=strftime(now(), "%m/%d/%Y %H:%M:%S")
| eval ErrorMessage = "Client Missed file between: " + 15MinEarly + " - " Now
How do you convert the two times to string so I can concatenate them into the error message?
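For reference, strftime already returns a string; the two likely culprits are the missing operator before Now and the field name starting with a digit (eval reads the leading 15 as a number). A sketch with both fixed, using the dot concatenation operator:
| eval FifteenMinEarly=strftime(relative_time(now(), "-15m"), "%m/%d/%Y %H:%M:%S")
| eval Now=strftime(now(), "%m/%d/%Y %H:%M:%S")
| eval ErrorMessage = "Client Missed file between: " . FifteenMinEarly . " - " . Now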
↧
One-Table Combining Different Search Results in Real-Time
My end goal is to show events in one table coming from multiple searches in real time. They all have the same fields. `appendcols` usually works, but not in real time.
My ideas were:
- Each of the real-time searches appends its results to the same CSV; a different search displays that CSV in real time.
- Create a dashboard with a panel for each search and somehow dynamically combine them, or at least make them look combined.
There's possibly a much simpler answer for this that I'm missing. Any help appreciated!
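For reference, multisearch runs several streaming searches in parallel and interleaves their results into one set; since every branch must be streaming, it should also work in a real-time window (a sketch with hypothetical indexes and fields):
| multisearch
    [ search index=idx_a sourcetype=st_a ]
    [ search index=idx_b sourcetype=st_b ]
| table _time host status message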
↧
SA-Eventgen not generating any data.
Hi,
Installed SA-Eventgen and configured it with two different samples (one a CSV and the other a txt file with raw data), but it is not generating any data. In the app's UI under the "Eventgen Logs" tab I can see that the eventgen process has begun for both samples. Here are some screenshots and the eventgen.conf file.
Logs:
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess All timers started, joining queue until it's empty.
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Start '1' generatorWorkers for sample 'Threats.sophos'
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Creating timer object for sample 'Threats.sophos' in app 'Sample_Data'
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Start '1' generatorWorkers for sample 'isilon_auth.csv'
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Creating timer object for sample 'isilon_auth.csv' in app 'Sample_Data'
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen ERROR MainProcess No module named jinja2
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/SA-Eventgen/lib/splunk_eventgen/eventgen_core.py", line 437, in _initializePlugins
    module = imp.load_module(base, mod_name, mod_path, mod_desc)
  File "/opt/splunk/etc/apps/SA-Eventgen/lib/splunk_eventgen/lib/plugins/generator/jinja.py", line 9, in <module>
    from jinja2 import nodes
ImportError: No module named jinja2
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen WARNING MainProcess Could not load plugin: /opt/splunk/etc/apps/SA-Eventgen/lib/splunk_eventgen/lib/plugins/generator/jinja.py, skipping
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkUser' in stanza 'Threats.sophos' may not be a valid setting
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkPass' in stanza 'Threats.sophos' may not be a valid setting
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkHost' in stanza 'Threats.sophos' may not be a valid setting
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkUser' in stanza 'isilon_auth.csv' may not be a valid setting
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkPass' in stanza 'isilon_auth.csv' may not be a valid setting
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Key 'splunkHost' in stanza 'isilon_auth.csv' may not be a valid setting
2018-08-20 16:18:00.560 Splunk _internal 2018-08-20 16:18:00,560 INFO [Eventgen] Finished setup pools
2018-08-20 16:18:00.549 Splunk _internal 2018-08-20 16:18:00,549 INFO [Eventgen] Finished reload
2018-08-20 16:18:00.541 Splunk _internal 2018-08-20 16:18:00,541 INFO [Eventgen] Finished parse
2018-08-20 16:18:00.541 Splunk _internal 2018-08-20 16:18:00,541 INFO [Eventgen] Finished config parsing
2018-08-20 16:18:00.487 Splunk _internal 2018-08-20 16:18:00,487 INFO [Eventgen] Config made Splunk Embedded
2018-08-20 16:18:00.487 Splunk _internal 2018-08-20 16:18:00,487 INFO [Eventgen] Config object generated
2018-08-20 16:18:00.486 Splunk _internal 2018-08-20 16:18:00,486 INFO [Eventgen] Eventgen object generated
2018-08-20 16:18:00.478 Splunk _internal 2018-08-20 16:18:00,478 INFO [Eventgen] Prepared Config
2018-08-20 16:18:00.478 Splunk _internal 2018-08-20 16:18:00,478 INFO [Eventgen] Input Config is: {'configuration': "{u'modinput_eventgen://default': {'name': u'modinput_eventgen://default', u'host': u'Splunk', u'disabled': u'0', u'VERBOSE': u'0', u'index': u'default'}}", 'checkpoint_dir': '/opt/splunk/var/lib/splunk/modinputs/modinput_eventgen', 'session_key': 'wv2kjziDCSHghZyvYGnSF519l41gzBCmd_euQyENd1P3eVfgMcOM^Lz8SMrmD63iRq_mWKt8NAX430ARnDQgfQGxvBpzyDlAX3PG^7sXEz9BB_E8U6ppQQC', 'server_uri': 'https://127.0.0.1:8089', 'server_host': 'Splunk'}
2018-08-20 16:18:00.478 Splunk _internal 2018-08-20 16:18:00,478 INFO [Eventgen] Initialized streaming
2018-08-20 16:18:00.476 Splunk _internal 2018-08-20 16:18:00,476 DEBUG [Eventgen] Setting up SA-Eventgen Modular Input
2018-08-20 16:18:00.475 Splunk _internal 2018-08-20 16:18:00,475 DEBUG [Eventgen] Initialized ModularInput Logger
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Retrieving eventgen configurations from /configs/eventgen
2018-08-20 16:18:00.000 Splunk _internal 2018-08-20 16:18:00 eventgen INFO MainProcess Logging Setup Complete.
Two samples.
![alt text][1]
/opt/splunk/etc/apps/Sample_Data/local/eventgen.conf
[isilon_auth.csv]
mode = replay
timeMultiple = 1
backfill = -15m
sampletype = csv
outputMode = splunkstream
index = main
sourcetype = isilon:data
source = syslog
host = localhost
splunkMethod = http
splunkHost = localhost
splunkUser = admin
splunkPass = password
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %Y-%m-%d %H:%M:%S
[Threats.sophos]
mode = replay
timeMultiple = 1
backfill = -15m
sampletype = raw
outputMode = splunkstream
index = Sophos
sourcetype = sophos:threats
source = eventgen
host = localhost
splunkMethod = http
splunkHost = localhost
splunkUser = admin
splunkPass = password
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.0.replacementType = replaytimestamp
token.0.replacement = %Y-%m-%d %H:%M:%S
The app even populates the performance dashboard for one of the inputs, but there is no actual data to search.
![alt text][2]
Thanks,
~ Abhi
[1]: /storage/temp/255746-eventgen-samples.png
[2]: /storage/temp/255747-eventgen-performance.png
↧
Not receiving readable logs from Brocade Switches
We have added Brocade switches to a heavy forwarder via tcp:6514. We are able to receive the logs, but not in a readable format.
\x00a\x00\x00]"e8H,W\xCC\xA7az\xB9\xFF\xFB\xFE\x9E\x8C
\xC5\xCBhQ\x8E\xD1a{\x00\x00\x00=\x005\x00<\x00/\x00
\x00\xFF\x00\x00(\x00#\x00\x00\x00
\x00 \x00
inputs.conf
[tcp://6514]
connection_host = dns
index = san
sourcetype = BROCADE_SWITCH
Settings on the Brocade switch:
-secure -port 6514 to the syslogadmin --setip cmd
Switch type
2 model type 6520's
4 model type 5480
2 model- bvlfcsw100/200
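Given the -secure flag on the switch side, the stream on 6514 is presumably TLS-encrypted syslog, which a plain [tcp://6514] input would show as exactly this kind of byte salad. A TLS-enabled input would look something like this (a sketch; the certificate path and password are placeholders):
[tcp-ssl://6514]
connection_host = dns
index = san
sourcetype = BROCADE_SWITCH

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = changeme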
↧
Official Splunk Feature Backlog
Throughout Answers there are several mentions of feature requests, but all accounts of submitting feature suggestions involve opening a support ticket through the support portal, presumably limiting feature requests to those with support contracts - which I personally think is a really good approach.
What I haven't yet been able to find is an official backlog of feature enhancements that are being considered by Splunk from their customer base.
Is this information available? I'm thinking about the roadmap / upcoming features we can expect, which could go a long way toward reducing some of the noise on the Answers forum about new features.
Thoughts?
Thanks!
↧