There's something I'm just not getting today...
I've got a chart command that generates results from a series of searches, evals, and other processes. The net result is a nice little chart with results that look like this:
Location 2019 2020 Delta
Main 980 1268 29.39 %
The 2019 and 2020 are indeed years. My issue is that Delta is calculated based on those 2 columns as
eval Delta=(('2020'-'2019')/'2019'*100)
This is fine for this year, but of course it means we'd have to edit this dashboard again next year.
How do I reference the relative column positions rather than the column names, or otherwise glean the column names from the dynamic data, in order to crunch the Delta value automagically?
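A sketch of one way to avoid hard-coded years: derive the two year strings at runtime, use foreach to copy whichever matching year columns exist into fixed names, and compute Delta from those (curr and prev are illustrative names, not anything from the original search):
... your chart search ...
| eval this_year=strftime(now(), "%Y")
| eval last_year=tostring(tonumber(this_year) - 1)
| foreach 2* [ eval curr=if("<<FIELD>>" == this_year, '<<FIELD>>', curr), prev=if("<<FIELD>>" == last_year, '<<FIELD>>', prev) ]
| eval Delta=round((curr - prev) / prev * 100, 2)
Inside the foreach template, "<<FIELD>>" expands to the column name as a string and '<<FIELD>>' to its value, so this picks up whatever year columns the data happens to contain.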
↧
Relative reference to columns
↧
Modify x-axis labels to show bin centers
I am trying to create a histogram plot, but I want to make the x-axis labels more readable. How do I go about doing this?
Here is what I am doing:
`my search | bin field span=0.5 | chart count by field`
Here is an example of the x-axis when I create the chart. Is there a way to force the x-axis to show a single value for each bin (at the bin center)? Or even better, can I force the x-axis to place integer labels at their respective positions relative to my bins?
![alt text][1]
[1]: /storage/temp/283617-screen-shot-2020-02-11-at-14658-pm.png
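One sketch that sidesteps bin's range labels entirely, assuming field is numeric: compute the bin center yourself with the same 0.5 span (floor(field / 0.5) * 0.5 is the bin's lower edge, and adding half the span lands on its center), then chart by that, so each bin gets a single numeric label:
my search
| eval center = floor(field / 0.5) * 0.5 + 0.25
| chart count by center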
↧
↧
How to make a table show more than 100 rows?
Hello, all.
In Splunk Enterprise 8.0.1, I ran `index=_internal | table _raw` and visualized it with a table.
I'd like the table to show more than 100 rows, but I can't find how.
Is there a way to remove this limitation and visualize more than 100 rows?
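If this is a dashboard table, a minimal Simple XML sketch, with the caveat that I'm not certain the count option (rows per page) accepts values above 100 in 8.0.1:
<table>
  <search>
    <query>index=_internal | table _raw</query>
  </search>
  <option name="count">200</option>
</table>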
↧
Deployment Sizing on AWS
We are deploying Enterprise Security for various clients on AWS, and are in the planning phase. I am attempting to create reference documentation that would contain the minimum instance type and number of instances per deployment, with a more granular breakdown in terms of capacity.
We also want to provide the following in all deployments:
- HA/DR (somewhat) - So the deployment would consist of a multi-site indexer cluster as well as a search head cluster
- Monitoring Console and Deployment Server where necessary, but reduce the need for extra instances by grouping roles where possible (I chose License Manager + Deployer, and Cluster Master + Deployment Server + Monitoring Console)
- Search load of up to around 8-16 users
- Use of SmartStore for indexer storage
- Use the smallest instances possible
- Mainly used for ES
- Hopefully utilize placement groups, Kubernetes, and other cloud services in the future when supported by Splunk (I believe this is soon)
I am also aware that:
- Each deployment/client will be different even if they have the same ingestion rate
- Splunk recommendations have pretty big gaps, e.g. 2-300 GB/day is 1 SH and 1 indexer, whereas I am trying to break it down a bit more, like 25-50, 50-100, 100-300, 300-600, etc.
- Instance types and prices change; again, this is just for reference
Has anyone done something similar?
↧
RBAC without using indexes
Is it possible to do RBAC without indexes? I have at least 5 indexes, but I can't use indexes for RBAC because all users need access to all 5 indexes; the requirement is that each user should only see their own data. If I ensure that the data is tagged with each user's location, would it be possible to use these tags to allow users at a specific location to see their data, and only their data, across the 5 indexes?
I like index-based RBAC because it guarantees users will not see any data even if they write their own searches: they simply don't have access to the indexes they weren't assigned. Unfortunately that doesn't work here because the data is already indexed and we can't re-index, so we have to rely on another attribute or tag to filter the data. Please let me know if you can suggest anything.
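If the location ends up as a searchable field, one hedged sketch of the non-index approach is a search filter per role in authorize.conf (the role name and the location field here are illustrative, and srchFilter only supports simple terms):
[role_location_nyc]
importRoles = user
srchFilter = location=nyc
Unlike index-based restrictions this depends on the field being reliably present on every event, but like them it is applied to every search the role runs, including ad-hoc ones.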
↧
↧
Time Chart and DBXquery
I am new to Splunk. I have a DB connection from which I am fetching a table. I want to create a dashboard with time on the x-axis and the count of table rows per hour on the y-axis.
I tried the timechart function, but I am unable to achieve my goal; I get data back only without timechart. This is my query:
| dbxquery query="SELECT * FROM \"CASE\"" | timechart count by Id
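timechart needs a _time field, and dbxquery results don't carry one by default, which is why the data disappears once timechart runs. A sketch, assuming the CASE table has a timestamp column (CREATED_AT and its format string are placeholders; substitute your real column):
| dbxquery query="SELECT * FROM \"CASE\""
| eval _time = strptime(CREATED_AT, "%Y-%m-%d %H:%M:%S")
| timechart span=1h count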
↧
SplunkWeb Broken UI
Hi,
We have been experiencing a broken UI on 3 of our nodes (DS, SHDep, & IDXCM; 2 screenshots below), while the rest seem fine. The web UI is not rendering objects as normal, like the dropdowns, apps, etc.
This is not an issue with serverclass.conf (as the message on the UI suggests), because running functionality in the background / CLI seems fine. Attempts to fix this include `splunk restart splunkweb` and `splunk restart`; **the former fixes the issue, but the problem comes back after 15-30 minutes** of not being used. We've also cleared the browser cache, to no avail.
The version we're using is 6.5.2. Have you experienced the same? If so, how do we permanently fix this?
Thanks in advance.
![alt text][1]
![alt text][2]
[1]: /storage/temp/283619-ds-broken-ui.png
[2]: /storage/temp/283618-deployer-broken-ui.png
↧
Splunk inputs and whitelists --- how to?
I've combed through inputs.conf and the various questions on Answers, but can't seem to find a definitive example of how to employ a whitelist, or modify my monitor stanza, to match specific folders and their sub-directories per my use case.
**Example:**
match on /mnt/data/apple/desired_folder/*/*
match on /mnt/data/apple/dir_1/*/*
match on /mnt/data/apple/folder_two/*/*
DONT match /mnt/data/apple/junk/*/*
DONT match on too many others to list
Each directory in the whitelist has one more sub-directory, then the log files themselves, of which I want everything in the folder. Do I have to write 3 monitor stanzas for this?
**failed attempts - no logs get pulled in**
[monitor:///mnt/data/apple/(dir_1|folder_two|index_this)/*/*]
and
[monitor:///mnt/data/apple/*/*/*]
whitelist = (dir_1|folder_two|index_this)
For now I've resorted to 3 monitor stanzas, but I thought there was a cleaner way to do this in Splunk that I've completely forgotten/missed.
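In case it's the missing piece: whitelist in inputs.conf is a regex matched against the full file path, so one stanza can cover all three folders if the wildcards come out of the stanza path and go into the whitelist instead. A sketch, using the folder names from the attempts above:
[monitor:///mnt/data/apple]
whitelist = /mnt/data/apple/(dir_1|folder_two|index_this)/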
↧
Indexer not indexing data
My Cisco indexer just stopped indexing new data. Splunk is receiving data from the syslog server, but it's just not getting indexed, so nothing is showing in the Cisco Networks app/add-on. I do have inputs/outputs on my syslog servers through a UF that monitors the folder with the logs, which is not the problem, since I can see current and old logs in the SH. The output points to my HF, which forwards the data to the indexer. I'm running 8.0.1 with 1 server each for SH, IDX, DP, and HF.
I know it's not indexing because my indexer hasn't received data for at least a day and there are no errors in the logs.
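As a first check, a sketch against the indexer's own internal metrics to see whether anything is arriving at the indexing pipeline, per index, over the last day:
index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1h sum(kb) by series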
↧
↧
Sizing on Smartstore (S3) for local storage
The smartstore documentation says the following:
"The amount of local storage available on each indexer for cached data must be in proportion to the expected working set. For best results, provision enough local storage to accommodate the equivalent of 30 days' worth of indexed data."
**Is this the same as hot bucket data, or is it on top of the hot data?**
e.g. assuming the following factors:
Intake = 100GB/day
Compression ratio = 0.50
Hot Retention = 14 days
Using this formula found in another forum post:
Global Cache sizing = Daily Ingest Rate x Compression Ratio x (RF x Hot Days + (Cached Days - Hot Days))
Cache sizing per indexer = Global Cache sizing / No.of indexers
Cached Days = Splunk recommends 30 days for Splunk Enterprise and 90 days for Enterprise Security
Hot days = Number of days before hot buckets roll over to warm buckets. Ideally this will be between 1 and 7, but configure it based on how hot buckets roll in your environment.
100 x 0.50 x (2 x 14 + (30 - 14)) = 50 x 44 = 2200 GB?
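If that holds, the per-indexer figure is just that divided by the node count; with a hypothetical 4-indexer cluster, for example, 2200 GB / 4 = 550 GB of local cache per indexer.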
↧
Create an alert (Splunk query) for different nodes that triggers if the status of a node goes down and doesn't come back up within 1 hour
Hi Guys,
I am just creating a rule for a switch for multiple nodes: if the status of the switch goes down and doesn't come back up within an hour, an alert has to be triggered. But as you can see in the logs, the status can come back up within a fraction of a second, so I just want to apply a threshold of 1 hour. Kindly help me with forming the Splunk query.
2019-12-02T17:25:38.448Z x.x.x.x <45>12376292: 12377249: *Dec 2 18:14:15.138: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up
2019-12-02T17:25:38.448Z x.x.x.x <45>12376291: 12377248: *Dec 2 18:14:15.101: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to down
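A sketch of one way to express the 1-hour threshold (the index name is a placeholder; the rex fields are derived from the sample events above): keep only the latest UPDOWN event per host and interface, and alert when its state is down and it is older than an hour:
index=your_network_index "%LINEPROTO-5-UPDOWN"
| rex "Interface (?<interface>\S+), changed state to (?<state>up|down)"
| stats latest(state) as last_state, latest(_time) as last_change by host, interface
| where last_state=="down" AND now() - last_change > 3600
Scheduled every 15 minutes or so over a few hours of data, this only fires once an interface has stayed down past the threshold, so the sub-second down/up flaps never match.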
Thanks in advance
↧
Custom alert action Python script
Hi all. I am struggling with where I should check.
I want to create Splunk users automatically, so I made this script:
test.py
import requests

def test():
    data = {'name': 'username', 'password': 'password', 'roles': 'user'}
    # POST to splunkd's REST endpoint to create the user;
    # verify=False assumes the default self-signed certificate on port 8089
    response = requests.post('https://mng_uri:8089/services/authentication/users',
                             data=data, auth=('admin', 'passme'), verify=False)
    print(response.status_code)

if __name__ == "__main__":
    test()
I can execute this script with `python test.py` in my /home directory, and it creates the user.
So I made a custom alert action, created an alert, and selected this custom action, but no user gets created.
I have no idea why, because there are no errors in the internal log (splunkd.log).
Where should I check?
↧
CSV Lookup for search query
I have a search query like this
index=ppt sm.to{}="<12-12-518@dt.com>" OR sm.to{}="<050920@cp.com>" |table sm.to{} sm.stat
and I want to use a CSV lookup instead, because I have more email addresses to use, and I want the result to show these two fields.
My CSV contains this:
sm.to{}
050920@cp.com
12-12-518@dt.com
774211@PP.com
859@dat.com
20909@PP.com
07548@pp.com
Can anyone help me with a lookup search query? Thanks.
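A sketch using a subsearch, assuming the CSV is uploaded as a lookup named sm_to_list.csv (a placeholder name) with the sm.to{} header shown above; the eval re-adds the angle brackets that the indexed values carry:
index=ppt
    [| inputlookup sm_to_list.csv
    | eval 'sm.to{}' = "<" . 'sm.to{}' . ">" ]
| table sm.to{} sm.stat
The subsearch expands into an OR of sm.to{}="<...>" terms, one per row of the CSV, so adding addresses only means editing the lookup.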
↧
↧
Is the Microsoft Office 365 Reporting Add-on for Splunk affected by Microsoft stopping support for and retiring Basic Authentication for Exchange Online?
Hello,
I found a blog post about Microsoft retiring Basic Authentication for Exchange Online on October 13, 2020.
https://developer.microsoft.com/en-us/office/blogs/end-of-support-for-basic-authentication-access-to-exchange-online-apis-for-office-365-customers/
If this app uses Basic Authentication, its requests will fail after that date. I think this app does use Basic Authentication. Is there any way this app can use an authentication method other than Basic Authentication?
---
Splunk version: 7.3.1
App version: 1.1.0
Thanks!
↧
How to calculate the value of each row for every column and fetch the result into a table?
I want to calculate the row values of every column by error message.
I did:
| stats count(host) values(host) values(functionality) count(functionality) values(loan_num) by error_message
I'm just getting the host count, as 90.
If I run the query separately, like `| stats count(hostcount) by hostvalues`, it shows all the values in their respective columns. Say the hosts are like hosta-20, hostb-30, hostc-40;
I want to fetch those individual details in the same columns above, by error_message.
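If the goal is each host paired with its own count per error_message (like hosta-20), one sketch is to count at the host level first and then roll up; functionality and loan_num would follow the same pattern:
| stats count by error_message, host
| eval host_count = host . "-" . count
| stats values(host_count) as hosts, sum(count) as total_events by error_message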
↧
Why would Splunk NOT obey "dispatch.ttl" and delete results/artifacts early?
We have a not-at-all overloaded ES search head with a separate volume for dispatch, with plenty of room, that never gives us 500MB warnings. We also have a few weekly-scheduled searches which bring back 100-ish rows of results with dozens-ish fields, with the default value of "2p" for "dispatch.ttl", but the results are always gone after 2 days. We are on 7.3.latest.
We have tried setting it to 2 weeks' worth of seconds and that did not work. What could be causing this? What logs should I look at, and for what?
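For the logs part, as a generic starting point only (not a known fix): dispatch reaping is done by splunkd, so searching its log for reaper/ttl activity around the time the artifacts vanish may show what expired them:
index=_internal sourcetype=splunkd (dispatch AND (reap* OR ttl OR expire*))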
↧
Splunk_TA_paloalto not parsing the logs
Splunk_TA_paloalto is not parsing the logs :
inputs.conf :
[monitor:///data/splunkapp/syslog/MSSLCPRY01/paloalto_fw/*/*.log]
sourcetype = pan:log
index = it
host_segment = 6
disabled = false
Is it mandatory to keep the index pan_log?
Palo Alto logs are being sent to the syslog server/HF, and the TA is installed on the syslog/HF.
Can someone please help with what's going wrong here?
↧
↧
Add a percentage row into a chart?
Hello there!
I want to add a percentage row into a chart table.
string:
index=smsc tag=MPRO_PRODUCTION DATA="*8000000400000000*" OR "*8000000400000058*" | dedup DATA | chart count by SHORT_ID, command_status_code | search NOT ESME_RTHROTTLED=0 | sort - ESME_RTHROTTLED | head 15
And the chart table:
![alt text][1]
The red result is what I need to add. The value in it should be calculated like the blue-marked one:
the ESME_RTHROTTLED value divided by ESME_RTHROTTLED and ESME_ROK together.
Can someone help me?
[1]: /storage/temp/283621-screen2.png
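A sketch of that calculation, to be appended anywhere after the chart command (pct_throttled is an illustrative name; the column names are taken from the table above):
... | chart count by SHORT_ID, command_status_code
| eval pct_throttled = round(ESME_RTHROTTLED / (ESME_RTHROTTLED + ESME_ROK) * 100, 2)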
↧
How to follow events on a field with different values
Hi guys,
I am new to Splunk. I have multiple events that look like this:
- 2020-02-07 07:21:20 action_time="2020-01-02 07:21:20.39", id_client="1234", ticket="1",
- 2020-02-07 07:21:20 action_time="2020-01-02 07:22:20.39", id_client="4567", ticket="2"
- 2020-02-07 07:21:20 action_time="2020-01-02 07:23:20.39", id_client="1234", ticket="2"
- ...
I would like to see a transaction like this:
In all events, find the first event where id_client="1234" and ticket="1". If it matches, find the next event with the same id_client, but with ticket="2".
So, for the same client, find the first ticket=1 followed by ticket=2 (no other actions in between).
I tried: ...| transaction action startwith='1' endwith='2' but it does not work.
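For what it's worth, the option names are startswith/endswith (not startwith/endwith), transaction takes the grouping field separately, and the eval() form lets you match on ticket; a sketch:
your search
| transaction id_client startswith=eval(ticket=="1") endswith=eval(ticket=="2") maxevents=2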
How can we do this in Splunk?
Thank you in advance.
↧
AWS instance-wise billing report in Splunk
I have a requirement to create a dashboard which gives an instance-level billing breakup for a particular service, e.g. under the EC2 service, instance A is incurring $xyz in cost.
Please share some ideas on how we can achieve this using the AWS add-on.
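A sketch against the Splunk Add-on for AWS billing data, assuming the detailed billing report with resources and tags is being ingested (ResourceId and UnBlendedCost are column names from that report; the newer Cost and Usage Report names its fields differently):
sourcetype=aws:billing ProductName="Amazon Elastic Compute Cloud"
| stats sum(UnBlendedCost) as cost by ResourceId
| sort - cost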
↧