I have three columns in stats.
1. Date
2. Number
3. String Value (ABCDEF)
Once I create a chart over "Date", how can I also show the corresponding "String Value" in the graph?
Since a line/area graph only considers numerical values, it doesn't show the third field's values.
I am expecting to have Date on the X-axis and Number on the Y-axis. Can anyone help?
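One common workaround, sketched here assuming your fields are literally named Date, Number, and StringValue, is to use the string field as the chart's split-by field, so its values appear as series names in the legend:
your base search
| chart max(Number) AS Number OVER Date BY StringValue
Each distinct string value then becomes its own series, so the legend carries the third field while the Y-axis stays numeric.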
↧
Can we plot string values in a graph?
↧
Why can't I see the logs in the search head when I select the time window?
I can't see the recent logs on the search head for PAN devices when I select any time window except All time. However, if I select the time window "All time", I can see the recent logs. What might be causing this issue?
`index="pan_fw" | search pan2` with a time window of, say, the last 60 minutes returns no output, but if I select a time window of All time, it shows the recent logs along with the old ones.
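This symptom usually points at timestamp extraction: the events may be indexed with a _time far in the past (or future), so a recent window misses them while All time catches everything. A diagnostic sketch, assuming the same index and search term, compares event time with index time:
index="pan_fw" pan2 earliest=0
| eval index_lag_sec = _indextime - _time
| stats min(index_lag_sec) max(index_lag_sec) count
A large positive or negative lag suggests the timestamps are being parsed incorrectly (often a TIME_FORMAT or timezone issue in props.conf for the PAN sourcetype).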
↧
How do I import a PowerShell script that inputs CSV data?
I currently have multiple PowerShell scripts that take data from local log files and transform them in a certain way into a CSV that is written locally to a directory. I want to be able to call such a script and have its output indexed as events in Splunk, while avoiding lookups. I was able to do something similar with a Python script by simply printing events as CSV rows, but I can't seem to find the equivalent method for PowerShell.
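Assuming the scripts run on a Windows forwarder, one hedged option is Splunk's native PowerShell modular input, which runs a script on a schedule and indexes whatever the script writes to its output stream. A minimal inputs.conf sketch (the stanza name, script path, sourcetype, and index are all hypothetical):
[powershell://CsvTransform]
script = . "$SplunkHome\etc\apps\my_app\bin\transform.ps1"
schedule = */5 * * * *
sourcetype = my:csv:events
index = main
You would then have the script emit the CSV rows (e.g. via Write-Output) instead of, or in addition to, writing the file to disk, exactly as your Python script prints events.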
Thanks.
↧
Missing Amazon Web Services (AWS) Topology Search, vague documentation
The Splunk App for AWS 5.1.1 Topology article (I don't have the karma to link it):
docs.splunk.com/Documentation/AWS/5.1.1/User/Topology
mentions "In order to see your data, ask your admin to configure AWS Config, CloudTrail, CloudWatch, VPC Flow Logs, Billing, Amazon Inspector, and Config Rules inputs. When your inputs become active, the app automatically enables the Config: Topology Data Generator saved search, which supplies additional data specifically for this dashboard."
I don't see any such saved search in my Splunk App for AWS, even though I have added AWS Config, CloudTrail, and Config Rules inputs. Shouldn't it appear? Or must every other input be configured as well? To clarify, I'm not talking about the saved search automatically enabling itself, but about it showing up under Settings > Searches, reports, and alerts at all.
I'm also curious whether the "aws:config:notification" and "aws:config" sourcetypes are treated interchangeably in the documentation. I have a lot of "aws:config:notification" source types and no pure "aws:config" types. The troubleshooting page for Topology (linked above) says to search for "sourcetype=aws:config" to ensure data is reaching Splunk; I'm unsure whether "aws:config:notification" events satisfy that check.
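One way to see which config-related sourcetypes you are actually receiving, sketched here without assuming any particular index placement:
index=* sourcetype=aws:config* earliest=-24h
| stats count by index, sourcetype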
↧
LDAP connection error
I get the error below when I configure my LDAP settings in the Active Directory add-on:
"distinguishedName: undefined"
↧
How to detect a Golden Ticket attack using Splunk
I'd like to attempt Golden Ticket detection using Splunk.
If you have ideas for detecting it from the Windows Security log using Splunk, please share them.
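One frequently cited starting point, offered here only as a sketch (the field names assume the Splunk Add-on for Microsoft Windows, and the heuristic is far from conclusive), is to look for Kerberos service tickets (EventCode 4769) requested with the legacy RC4 encryption type 0x17, which forged tickets often use in otherwise AES-only environments:
index=wineventlog sourcetype=WinEventLog:Security EventCode=4769 Ticket_Encryption_Type=0x17
| stats count by Account_Name, Client_Address, Service_Name
Another commonly suggested angle is looking for logons (EventCode 4624) from accounts that show no corresponding TGT request (EventCode 4768) in the same period.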
↧
Log Event Queueing on Splunk Forwarders
Hi Splunkies,
Is there a way to set up log event queuing and chunking of queued events on the forwarder side?
Our problem is that our forwarders flood our indexer with events when it comes back online after an outage due to maintenance or other reasons, and some of those events are not indexed and get lost. The forwarders are configured to use acknowledgement and SSL to encrypt the traffic between forwarders and indexers. The use of SSL and acknowledgement is required by the organization's data management and security policies.
Utilization on the indexer is quite low. CPU is always <10%, even right after bringing it back online after maintenance.
Any suggestions or ideas, like a configuration to send queued events in chunks of, say, 10 MB, and how to do that?
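I am not aware of a per-chunk size setting, but two hedged knobs usually tame the post-outage flood: enlarging the forwarder's output queue so events survive the outage, and throttling the forwarder's send rate so the backlog drains gradually. A sketch (the values are illustrative, not recommendations):
# outputs.conf on the forwarder
[tcpout]
useACK = true
maxQueueSize = 100MB
# limits.conf on the forwarder
[thruput]
maxKBps = 256
With useACK enabled, events should not be lost as long as the queue is large enough to hold the backlog; maxKBps then caps how fast each forwarder replays that backlog once the indexer is back online.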
René
↧
How to define a drilldown
Using lat & lon on the map I can see the corresponding location. For each location, it should be possible to link to a different dashboard, like:
Rome -> Rome Dashboard
London -> London Dashboard
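In Simple XML, a <drilldown> with <condition> elements can route each clicked location to its own dashboard. This is only a sketch: the dashboard paths are hypothetical, and the exact click token depends on how your map search exposes the city name (with a location field driving the markers, $click.value$ is a reasonable first guess):
<drilldown>
  <condition match="'click.value' == &quot;Rome&quot;">
    <link target="_blank">/app/my_app/rome_dashboard</link>
  </condition>
  <condition match="'click.value' == &quot;London&quot;">
    <link target="_blank">/app/my_app/london_dashboard</link>
  </condition>
</drilldown>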
↧
Forward Windows events to 3rd party system
Hi,
I am trying to forward Windows events from Splunk to a 3rd-party syslog system. I checked the docs and also several answers here.
I have a search head, an indexer, and UF agents on the source Windows servers (Splunk version 7.1.3).
The UFs forward all the events to the indexer with no problem. The indexer forwards all(?)/most of the required events to the 3rd-party system, but it also forwards some other syslog messages (received from VMware vCenter) that it should not.
What am I doing wrong?
The outputs.conf on the IX:
[syslog]
[syslog:external]
server=192.168.10.134:514
priority=NO_PRI
The transforms.conf on the IX:
[send_to_syslog]
REGEX = .
DEST_KEY=_SYSLOG_ROUTING
FORMAT=external
I am using the Windows TA v4.8.4. I tried to find out how to configure it to forward all the System/Application/Security events and nothing else.
So I added the following line in several places in props.conf:
TRANSFORMS-external = send_to_syslog
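Because REGEX = . matches every event, the routing fires for every sourcetype whose props.conf stanza references the transform, so adding it "in several places" is likely what drags the vCenter syslog along. A hedged sketch that scopes the routing to only the Windows event sourcetypes (stanza names per the Windows TA):
[WinEventLog:System]
TRANSFORMS-external = send_to_syslog
[WinEventLog:Application]
TRANSFORMS-external = send_to_syslog
[WinEventLog:Security]
TRANSFORMS-external = send_to_syslog
Removing TRANSFORMS-external from any broader stanzas (e.g. [default] or a [source::...] catch-all) should stop the unwanted vCenter messages.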
Regards,
István
↧
Data model accelerates within 5 seconds, but we cannot fetch data from it
When we start the acceleration of the data model, it completes successfully. But when we run the query below, we cannot fetch the data:
| tstats summariesonly=t count from datamodel="datamodel_name"
It returns a count of 0.
But when we run the following query, we are able to fetch the data:
| tstats summariesonly=false count from datamodel="datamodel_name"
It returns a count of 1034.
So please let me know if I am doing something wrong.
NOTE:
We have checked the acceleration period, and it contains the data.
Splunk version: 6.5.3
I am facing this issue on one specific instance only.
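One way to check whether the acceleration summaries themselves contain anything is the summarization REST endpoint; a hedged sketch (the exact field names vary by version, so treat these as illustrative):
| rest /services/admin/summarization by_tstats=t splunk_server=local
| table summary.id summary.complete summary.size summary.earliest_time summary.latest_time
If the summary reports complete but its size is essentially zero, the acceleration "finishes" without summarizing anything, which commonly happens when the data model's constraint search matches no indexed data for the backfill range on that instance.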
↧
Downloading and uploading configuration bundle using REST API
Friends, I'm playing with the Splunk REST API. I have a Splunk deployment server and one client (running a universal forwarder). I created a deployment app, and below is the output of a REST call about that deployment app.
My question: notice there is a bundle path, "/opt/splunk/var/run/tmp/testing/_server_app_testing-1537306672.bundle".
Can we download the bundle using the REST API, and can I manipulate the bundle using a REST call?
(Abbreviated Atom response from https://10.x.xxx.xxx:8089/services/deployment/server/applications, flattened here: entry _server_app_testing; bundle path /opt/splunk/var/run/tmp/testing/_server_app_testing-1537306672.bundle; checksum 3488100521637376924; serverclass testing; timestamp Tue Sep 18 14:37:52 2018.)
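I am not aware of a documented REST endpoint that serves the .bundle file itself; the bundle is a tar archive that the deployment server builds on disk and that clients fetch over the deployment protocol. You can, however, script around the metadata. A sketch (host and credentials are placeholders):
# list the deployment apps as JSON and read off the bundle path
curl -k -u admin:changeme "https://10.x.xxx.xxx:8089/services/deployment/server/applications?output_mode=json"
Since the bundle is just a tar of the app directory, the usual route for manipulation is to edit the app under $SPLUNK_HOME/etc/deployment-apps (via the file system or the configuration REST endpoints) and let the deployment server rebuild the bundle.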
↧
How to add a role to a user with Splunk Python SDK
I need to add a specific role to a user using the Splunk SDK.
I can list the users and find the roles owned by the user I want to add a role to, but I can't work out how to access and update that user object. I've tried a number of variations on "services.user.name", "service.user.content", etc., but can't get anything with "service.user.xxxxxx" to work. The calls to service.roles.xxxxxx and service.users.xxxxxx both work as advertised. Does anyone have examples of how to use the splunklib.client.User class? Any help pointing me in the right direction is greatly appreciated.
newrolename = "new_role"
newrole = service.roles.create(newrolename)
kwargs = {"sort_key": "realname", "sort_dir": "asc"}
users = service.users.list(count=-1, **kwargs)
for user in users:
    username = user.name
    logger.info(username)
    logger.info("username=" + username + ", current_user=" + current_user)
    if user.name == current_user:
        logger.info("username==current_user")
        # build the full role list: the new role plus the user's existing roles
        user_roles = [newrolename]
        for role in user.role_entities:
            user_roles.append(role.name)
        logger.info(user_roles)
        # Entity.update() POSTs the changed attributes back to splunkd;
        # passing the combined list replaces the user's role assignment
        user.update(roles=user_roles)
        user.refresh()
↧
Validate Simple XML
I'm trying to validate a dashboard after some scripted changes, to avoid corrupting the XML.
I've tried different tools using the simplexml.rnc/simplexml.rng files in $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/schema, but they all fail.
Any ideas what is wrong?
E.g.
<dashboard><label>test_dash</label><row>search index=hid100001195 sshd -24h@h now </row></dashboard>
$ xmlstarlet val -e -r simplexml.rng ../dash.xml
../dash.xml:17.1: Extra element panel in interleave
../dash.xml:17.1: Element row failed to validate content
../dash.xml:17.1: Extra element row in interleave
../dash.xml:17.1: Element dashboard failed to validate content
../dash.xml - invalid
$ xmllint ../dash.xml --noout --relaxng all.rng
Relax-NG validity error : Extra element panel in interleave
../dash.xml:4: element panel: Relax-NG validity error : Element row failed to validate content
Relax-NG validity error : Extra element row in interleave
../dash.xml:3: element row: Relax-NG validity error : Element dashboard failed to validate content
../dash.xml fails to validate
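For what it's worth, the example dashboard fails the schema because a <row> must contain one or more <panel> elements wrapping a visualization, with the search expressed as child elements rather than free text. A minimal version that should validate (a sketch reusing your index and search terms):
<dashboard>
  <label>test_dash</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=hid100001195 sshd</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>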
↧
Problem with Custom Viz with over 50k search results
Hello,
I have a problem with a custom viz. It receives a large number of results (over 50k), which I retrieve using the example from the blog post (https://www.splunk.com/blog/2016/04/11/show-me-your-viz.html), section 3b.
The initial call works fine, too.
However, there is a problem when it is installed in a dashboard with filters. If the first result set is above the chunk size, the offset is increased with updateDataParams. If I then narrow the search with the filters so that the result set falls below the previous chunk size, I get the message "No results found", even though there are still results. The custom viz never reaches the updateView method anymore, probably because the current offset is now beyond the end of the current result set.
So how can I set the offset back to 0?
↧
Splunk DB connect V3 not picking up inputs from V2
Hi, we have upgraded our DB Connect app from 2.4 to 3.1. Everything seems fine, but under the Data Lab tab we are unable to find any of the inputs we had in our DB Connect v2 app. Connections and identities have migrated.
Can anyone help? How can we get the inputs we had in v2 into v3?
Thanks and regards
↧
Unable to create scheduled reports after switching to free version
I read the article here: http://docs.splunk.com/Documentation/Splunk/7.1.3/Admin/MoreaboutSplunkFree
which says: "Any alerts you have defined will no longer trigger. You will no longer receive alerts from Splunk software. You can still schedule searches to run for dashboards and summary indexing purposes."
But after the switch, reports are no longer generated, and I can no longer find where to edit scheduled reports in the GUI.
Is the documentation wrong, or am I looking in the wrong place?
↧
Hide Splunk App/menu bar for an App home page
Hello,
I want to hide the Splunk app/menu bar on an app's home page.
What I want is: when I open a particular app, the app home page should not show the Splunk app bar.
Can anyone help me with this? Can it be done on a per-app basis?
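One commonly cited option (a sketch; app and dashboard names are placeholders) is the chrome-hiding URL parameters that Simple XML dashboards accept, baked into whatever links open the app's home dashboard:
https://<splunk_host>:8000/en-US/app/my_app/home?hideAppBar=true&hideSplunkBar=true
This works per URL rather than per app; for an app-wide effect, the usual alternative is custom CSS in the app's appserver/static directory, referenced from each dashboard's root node.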
Thanks.
↧
How to resolve X509Verify default certificate warnings?
I see the warnings below in the splunkd.log files on all my Splunk instances.
Could you please advise on how to resolve them? Or can we ignore them?
WARN X509Verify - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see:
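The warning disappears once splunkd presents a certificate other than the default one. A server.conf sketch (paths and password are placeholders; per the warning text, a self-CA-signed certificate is acceptable):
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <private_key_password>
Until you replace the certificate, the warning is only flagging the MITM risk described in the message; splunkd keeps running.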
↧
Capturing multiple values and creating a table from the results - mvexpand?
Good afternoon guys & gals,
On paper this is a simple one, but it's absolutely escaping me. We have been asked to extract the most recent 3 entries for 2 different types of quote, along with the data values that follow. The data looks like this:
date=19-09-2018 startTime=00-00 endTime=01-00
BI_FEED=Direct_DataFeed_20180918204501 QUOTE_TRANSACTIONS=53412 PROCESSING_TIME_SEC=987.504327 PROCESSING_STATE=complete
BI_FEED=Direct_DataFeed_20180918213001 QUOTE_TRANSACTIONS=50096 PROCESSING_TIME_SEC=920.179029 PROCESSING_STATE=complete
BI_FEED=CQ_DataFeed_201809190016 QUOTE_TRANSACTIONS=24 PROCESSING_TIME_SEC=54.824542 PROCESSING_STATE=complete
BI_FEED=Direct_DataFeed_20180918204345 QUOTE_TRANSACTIONS=52312 PROCESSING_TIME_SEC=978.504327 PROCESSING_STATE=complete
BI_FEED=CQ_DataFeed_201809190031 QUOTE_TRANSACTIONS=28 PROCESSING_TIME_SEC=65.140814 PROCESSING_STATE=complete
BI_FEED=CQ_DataFeed_201809190045 QUOTE_TRANSACTIONS=196 PROCESSING_TIME_SEC=235.348442 PROCESSING_STATE=complete
BI_FEED=CQ_DataFeed_201809190043 QUOTE_TRANSACTIONS=324 PROCESSING_TIME_SEC=355.376033 PROCESSING_STATE=complete
BI_FEED=CQ_DataFeed_201809190049 QUOTE_TRANSACTIONS=188 PROCESSING_TIME_SEC=198.883841 PROCESSING_STATE=complete
So they would like the 3 most recent Direct quotes and the most recent CQ quotes. Then they would like to table the quote ID, the transaction count, and the processing time. So far I have been testing with simply getting the data for the Direct feed, but keeping the values coupled is killing me. My thought process is as follows:
base search here earliest=-15m@m
| rex max_match=3 field=_raw "BI_FEED=Direct_DataFeed_(?<dir>[0-9]*)\sQUOTE_TRANSACTIONS=(?<qt>[0-9]*)\sPROCESSING_TIME_SEC=(?<pts>[0-9.]*)"
| mvexpand dir
| table dir, qt, pts
This produces, as expected, a separate entry per dir value, but it then puts all 3 entries into the qt and pts values. Obviously this is because I need to separate them out and keep them paired per quote string, but I'm struggling with that! If anyone is able to assist, it would be much appreciated. Remember that I need to do this for both the Direct and CQ values in a single table.
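One hedged way to keep the triplets paired is to zip the multivalue fields together before expanding, then split them back apart and keep the three most recent per type. A sketch (field names follow your rex; the "|" delimiter is an arbitrary choice):
base search here earliest=-15m@m
| rex max_match=0 field=_raw "BI_FEED=(?<feed>\S+)\sQUOTE_TRANSACTIONS=(?<qt>\d+)\sPROCESSING_TIME_SEC=(?<pts>[\d.]+)"
| eval zipped=mvzip(mvzip(feed, qt, "|"), pts, "|")
| mvexpand zipped
| eval feed=mvindex(split(zipped,"|"),0), qt=mvindex(split(zipped,"|"),1), pts=mvindex(split(zipped,"|"),2)
| eval type=if(like(feed,"Direct%"),"Direct","CQ")
| sort 0 - feed
| dedup 3 type
| table type, feed, qt, pts
Because mvzip pairs values positionally, each expanded row keeps its own qt and pts, and dedup 3 type then retains the three newest rows for each quote type.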
Thanks in advance!
Oh, and we're running Splunk 6.6.4.
↧
Set a token using the first value from a sort
The first panel in my dashboard shows the number of unique users for each software package feature version. The search uses dc() to count the unique users, lists them by feature version, and then sorts them:
| chart dc(USER_NAME) as "Unique User" BY FEATURE_VERSION
| sort +"Feature Version"
I have a second panel on the dashboard which displays the usernames of the individuals once the user clicks on a specific version in panel one. If the user doesn't click on a feature version in panel one, the second panel shows "No results found".
What I'd like to do is set a token to the first value produced by the sort. So if that first value is 111, then the token would default to 111.
I'm not sure how to set a token based on that first sorted value. Any help would be much appreciated. Thank you.
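In Simple XML you can read the first row of a search's results with $result.fieldname$ inside a <done> handler, and since your search ends with a sort, the first row holds the first sorted value. A sketch against your panel's search:
<search>
  <query>
    ... | chart dc(USER_NAME) as "Unique User" BY FEATURE_VERSION | sort + FEATURE_VERSION
  </query>
  <done>
    <set token="selected_version">$result.FEATURE_VERSION$</set>
  </done>
</search>
The second panel can then use $selected_version$ as its default, and your existing drilldown can overwrite the same token on click.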
↧