Is anyone running the Splunk Add-on for Microsoft Cloud Services with Splunk Free?
We had it installed and working, then around the time our Enterprise trial expired, the add-on stopped collecting data. In splunkd.log I see:
05-06-2019 17:22:16.955 +0000 ERROR AdminManagerExternal - External handler failed with code '1' and output: 'REST ERROR[403]: Unauthorized client for the requested action - capability=ta_mscs_system_configuration'. See splunkd.log for stderr output.
I can't find this dependency listed anywhere, but the timing seems right, and I don't want to spend a huge amount of time troubleshooting something that can't be made to work.
Thanks for any help.
↧
Splunk Add-on for Microsoft Cloud Services - Compatible with Free?
↧
color lines in dashboard by timestamp
Hello,
Is it possible to color lines in a dashboard table by timestamp, so that new events are highlighted?
For example, I have a table with many columns, one of which is SerialNumber.
I run the dashboard once a day and want to color the SerialNumber values that are new for that day (if any exist).
(I may later need to color more columns, but I assume the approach will be the same.)
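Not a definitive answer, just a sketch of one possible direction, assuming a lookup file (here called seen_serials.csv) is maintained between daily runs: flag serials that are not yet in the lookup, then apply the table's built-in color-by-value formatting to the flag column.

```
... your base search ...
| lookup seen_serials.csv SerialNumber OUTPUT SerialNumber AS already_seen
| eval Status=if(isnull(already_seen), "NEW", "existing")
| fields - already_seen
```

After the dashboard run, today's serials could be appended back with `| outputlookup append=true seen_serials.csv` so tomorrow's run treats them as already seen.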
↧
↧
Check my work on my indexes.conf file for metrics?
All,
Can you check my work here? I'm provisioning a metrics index where I hope to retain the data and keep it active for 3 years. Is anything missing?
I'm only expecting 30-40 MB of metrics data per day for the foreseeable future.
[metrics]
datatype = metric
maxHotBuckets = 20
maxWarmDBCount = 2000
repFactor = 0
thawedPath = $SPLUNK_DB/metrics/thaweddb
frozenTimePeriodInSecs = 94608000
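For reference, 3 years x 365 days x 86,400 seconds = 94,608,000 seconds, so the frozenTimePeriodInSecs value above does match a 3-year retention target. A sketch of one setting sometimes added alongside it (the size value below is an assumption to adjust for your environment):

```
[metrics]
datatype = metric
frozenTimePeriodInSecs = 94608000
# Make sure the size cap never trumps the time-based retention:
# ~40 MB/day * ~1095 days is roughly 45 GB; round up generously.
maxTotalDataSizeMB = 100000
```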
thanks
-Daniel
↧
Export button not working
I have a dashboard panel that produces a tabular/statistical output, and I want to export the results using the small Export button at the bottom right.
When the panel is populated by a direct query, the Export button is available. But when I use the same query as a base search with post-processing to produce the tabular/statistical results, the Export button is disabled. Why? Is there a fix, workaround, or reason?
I am on Splunk Enterprise 6.6.6 and 7.x.
Thanks.
↧
"As" command modifier not working
New to Splunk. I'm trying to use the "as" modifier to change the name of a column, but it isn't being highlighted and the column name doesn't change.
Here is my SPL:
sourcetype="access_combined_wcookie" status=200 file="success.do"
| table JSESSIONID as UserSession
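For what it's worth, the `table` command does not accept an `as` clause; renaming is done by the separate `rename` command. A sketch of the adjusted search:

```
sourcetype="access_combined_wcookie" status=200 file="success.do"
| table JSESSIONID
| rename JSESSIONID AS UserSession
```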
↧
↧
bootstrap script to install the Splunk on ec2
Hi,
I am trying to install Splunk on EC2 using a bootstrap (user-data) script, i.e., Splunk should be installed as soon as the EC2 instance is created.
Does anyone have a script or a process for installing Splunk this way?
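Not an official script, but a minimal user-data sketch under the assumption of an RPM-based AMI (e.g. Amazon Linux); the version, build hash, and URL below are placeholders to replace with the current link from splunk.com/download:

```shell
#!/bin/bash
# EC2 user-data: runs as root on first boot.
# Placeholder version/build -- substitute the real download link.
SPLUNK_RPM="splunk-7.2.6-c0bf0f679ce9-linux-2.6-x86_64.rpm"
SPLUNK_URL="https://download.splunk.com/products/splunk/releases/7.2.6/linux/${SPLUNK_RPM}"

wget -q -O "/tmp/${SPLUNK_RPM}" "${SPLUNK_URL}"
rpm -ivh "/tmp/${SPLUNK_RPM}"

# Accept the license and seed an initial admin password non-interactively.
/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd 'ChangeMePlease1'
# Restart Splunk automatically on reboot.
/opt/splunk/bin/splunk enable boot-start
```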
Thanks
↧
How can I redirect mcollect to a different set of indexers?
All,
I have a |mcollect job that runs every night. I'd like the results to go to a different set of indexers rather than the default ones configured on my search heads.
How do I specify the metric sourcetype in props.conf and transforms.conf to redirect it?
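One possible direction (a sketch, not verified in your environment): give the mcollect output a dedicated sourcetype, then route that sourcetype on the search head to a separate tcpout group via props/transforms. The group, server, and sourcetype names below are made up:

```
# outputs.conf (search head)
[tcpout:metrics_idx]
server = metrics-idx1:9997,metrics-idx2:9997

# props.conf (search head)
[my_metrics_st]
TRANSFORMS-route_metrics = send_to_metrics_idx

# transforms.conf (search head)
[send_to_metrics_idx]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = metrics_idx
```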
↧
Why am I Getting Duplicated Data Using HEC Ingestion Method
I'm getting duplicated data when using a Lambda function to send events from CloudWatch to Splunk through the HTTP Event Collector. I didn't enable indexer acknowledgement. Has anyone had this issue before?
Regards,
Simeng
↧
how to find sum of the latest values of the fields by a certain field ?
I have two sourcetypes from Nexpose vulnerability data: one with asset details and one with vulnerability details. Both share a common field called "asset_id". The fields are:
sourcetype Asset contains fields ==> asset_id, vulnerabilities, riskscore, malware_kits, exploits
sourcetype Vul contains fields ==> asset_id, solution_summary
I want to show "No. of Assets", Vulnerabilities, Riskscore, malware_kits, and exploits by solution_summary, so I have to join the two sourcetypes and sum the fields from the Asset sourcetype. Those fields change over time, though, so I want to sum only the latest value of vulnerabilities, riskscore, malware_kits, and exploits per asset, and I can't get that to work. Also, I'm currently using coalesce on the asset_id field to merge the two sourcetypes; is that the proper way to do it?
Below is my current search, which sums every value of the fields instead of just the latest:
index=rapid7 sourcetype="rapid7:nexpose:vuln" OR sourcetype="rapid7:nexpose:asset"
| eval Asset=coalesce(asset_id,asset_id)
| stats values(*) as * by Asset
| stats dc(asset_id) as Assets, sum(vulnerabilities) as Vulnerabilities, sum(exploits) as Exploits, sum(malware_kits) as "Malware Kits", sum(riskscore) as Risk by solution_summary
| sort - Risk
| eval Risk=round(Risk,2)
| rename solution_summary as Remediation
In the query above, sum(vulnerabilities) adds every vulnerabilities value, but I only want the latest value of that field within whatever time range I search (last 30 days or otherwise), and the same goes for exploits, malware_kits, and riskscore. Please help resolve the issue.
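A sketch of one way to keep only the most recent value per asset before summing (assuming the Asset-sourcetype events carry a usable _time):

```
index=rapid7 sourcetype="rapid7:nexpose:vuln" OR sourcetype="rapid7:nexpose:asset"
| stats latest(vulnerabilities) as vulnerabilities latest(exploits) as exploits
        latest(malware_kits) as malware_kits latest(riskscore) as riskscore
        values(solution_summary) as solution_summary by asset_id
| stats dc(asset_id) as Assets sum(vulnerabilities) as Vulnerabilities
        sum(exploits) as Exploits sum(malware_kits) as "Malware Kits"
        sum(riskscore) as Risk by solution_summary
| eval Risk=round(Risk,2)
| sort - Risk
| rename solution_summary as Remediation
```

The first stats collapses each asset to its latest numbers, so the second stats sums one value per asset rather than every historical value.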
Thanks
PG
↧
↧
CPU Usage Prediction for the next 15 days of a month
Hi, I am trying to create a dashboard that shows the average % Processor Time (Value), but the query I'm using only gives me 3 to 4 hours of predicted values, and I need at least 15 days. Could you please help me with the query?
The query I am using:
index=main earliest=-5d sourcetype="Perfmon:CPU Load" counter="% Processor Time"
|timechart span=5min avg(Value) as "CPU Processor Time" |predict "CPU Processor Time" AS "Predicted value"
algorithm=LLP5 upper95=high lower95=low holdback=30 future_timespan=70
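As I understand it, future_timespan is measured in multiples of the timechart span, so 70 x 5 minutes is only about 6 hours. A sketch that widens the span and extends the horizon (15 days x 24 h = 360 one-hour spans; the larger 60-day training window is an assumption, since a longer forecast generally needs more history):

```
index=main earliest=-60d sourcetype="Perfmon:CPU Load" counter="% Processor Time"
| timechart span=1h avg(Value) as "CPU Processor Time"
| predict "CPU Processor Time" as "Predicted value" algorithm=LLP5 upper95=high lower95=low holdback=30 future_timespan=360
```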
↧
Replace every 2nd space with a carriage return
I have a field with dates on a single line (there could be many dates), e.g.:
2019-04-11 23:15:58.547 2019-05-02 10:11:22.833 2019-05-03 10:21:27.0
I need help replacing every 2nd space with a newline so that each date appears on a separate line when exported; right now they all appear on a single line.
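A sketch of one approach, assuming the field is called dates and every timestamp looks like the example (date, one internal space, then the time): makemv with a tokenizer turns the field into a multivalue field, one value per full timestamp, so each date renders and exports on its own line.

```
... | makemv tokenizer="(\d{4}-\d{2}-\d{2}\s+\S+)" dates
```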
↧
splunk with Docker in windows
Hello,
Is it possible to run Splunk in a Docker container on Windows?
If yes, can someone link me to the installation guide?
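For what it's worth, Splunk publishes an official splunk/splunk image on Docker Hub, and it runs under Docker Desktop for Windows when Docker is in Linux-container mode. A minimal sketch (the password is a placeholder):

```shell
docker pull splunk/splunk:latest
docker run -d -p 8000:8000 \
    -e "SPLUNK_START_ARGS=--accept-license" \
    -e "SPLUNK_PASSWORD=ChangeMePlease1" \
    --name splunk splunk/splunk:latest
# Then browse to http://localhost:8000
```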
thanks
↧
Restrict Search Terms
We have some external users whom we want to be able to see certain dashboards we have created. However, we do not want them to be able to run searches on the search head.
For example, a dashboard panel has a query like host=baloney.pipe source=B_Circuit feeding a column-chart visualization.
The users should be able to see the dashboard, but if they search host=baloney.pipe source=B_Circuit on the search head themselves, they shouldn't get any results (access to view the dashboard only; no access to search the index/host through the search head).
Would using the 'Restrict Search Terms' option while creating a role help us achieve this?
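For reference, the 'Restrict Search Terms' field in the role UI maps to srchFilter in authorize.conf; a sketch with a made-up role name (one caveat worth checking: the filter applies to every search the role dispatches, which as far as I know includes the searches behind dashboards, so it constrains what the dashboards can show too):

```
# authorize.conf
[role_external_viewer]
importRoles = user
# Only events matching this filter are ever returned to this role:
srchFilter = host=baloney.pipe source=B_Circuit
```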
↧
↧
incremental part count per hour
Hi! In my current project I have to create an area chart showing the number of parts per hour, and I was able to display that. But I also want to display a target part count for the day and for each hour. In my use case the hourly target is 10 parts, so the final target over 24 hours is 240 parts.
Here is my search so far: `|savedsearch rename1
|fields Date_Time Username Green Yellow Red
|rex field=Date_Time "(?P<Date>\d{4}\/\d{2}\/\d{2})\s(?P<Time>\d{2}\:\d{2}\:\d{2})"
|sort 0 _time Username Green Yellow Red
|streamstats window=1 current=f list(_time) as prevTime list(Green) as RUN
|bucket Time span=1h |stats list(RUN) as Count1 by Time
| appendcols [|savedsearch rename2
|fields Date_Time Username Green Yellow Red
|rex field=Date_Time "(?P<Date>\d{4}\/\d{2}\/\d{2})\s(?P<Time>\d{2}\:\d{2}\:\d{2})"
|sort 0 _time Username Green Yellow Red
|streamstats window=1 current=f list(_time) as prevTime list(Green) as RUN2
|bucket Time span=1h |stats list(RUN2) as Count2 by Time]
|eval Part_Count = Count1 + Count2
|eval Target = round(24hours*10)
|eval Current = round(currenttime * 10)`
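For the last two eval lines, a sketch of computing the targets with relative_time (assuming the 10-parts-per-hour target; Current_Target counts whole hours elapsed since midnight):

```
| eval hours_elapsed = floor((now() - relative_time(now(), "@d")) / 3600)
| eval Current_Target = hours_elapsed * 10
| eval Daily_Target = 24 * 10
```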
↧
Add a new field to event and collect it after
An index receives events which are reviewed by an internal team. Some events need a new status. I tried adding a new field with the __eval__ command and writing the event back to the index as a new entry (in order to keep the history) with the __collect__ command:
index=source | ... | eval new_status="a new status" | collect index=source
but the new field is not kept and saved. Is there a workaround for this?
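As far as I know, collect writes only _raw back to the index, so a search-time field created by eval is lost unless it is folded into _raw first. A sketch of that workaround:

```
index=source | ...
| eval new_status="a new status"
| eval _raw=_raw." new_status=\"".new_status."\""
| collect index=source
```

The appended key=value pair is then extracted from _raw at search time like any other field.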
↧
nslookup TXT queries with Splunk
I am trying to see if it's possible to run nslookup -q=TXT domain 8.8.8.8 so I can compare the output to an existing lookup CSV file.
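One direction is a scripted/external lookup (Splunk external lookups are typically Python) that shells out to nslookup and parses the TXT records from its output. A sketch of the two pieces; the output format assumed by the parser is the common `name  text = "..."` shape nslookup prints, which may differ on some platforms:

```python
import re
import subprocess

def query_txt(domain, server="8.8.8.8"):
    """Run nslookup -q=TXT against the given server and return raw output."""
    result = subprocess.run(
        ["nslookup", "-q=TXT", domain, server],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout

def parse_txt_records(nslookup_output):
    """Extract the quoted TXT strings from nslookup output text."""
    return re.findall(r'text = "([^"]*)"', nslookup_output)
```

The parsed list could then be written as CSV columns and compared against the existing lookup file with a Splunk lookup or inputlookup join.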
↧
Fortinet Fortigate log direct ingest into Splunk
Hi Guys,
Can I just check whether it is possible to ingest Fortinet FortiGate logs directly into my Splunk environment?
Meaning without using the forwarder + syslog server method, i.e. like the following Fortinet guide for a standalone environment:
https://www.fortinet.com/content/dam/fortinet/assets/alliances/Fortinet-Splunk-Deployment-Guide.pdf
My current environment is as follows:
1 x search head / master node server
2 x clustered indexer servers
If the direct ingest method is possible in my environment, how should I configure it to ensure both of my indexers have a replicated copy of the data ingested from Fortinet?
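A sketch of the direct-syslog approach (the port, index, and sourcetype names are assumptions): a network input listens for the FortiGate syslog stream. One point worth knowing: data arriving on a plain TCP/UDP input lands on whichever single indexer receives it, and indexer clustering then replicates it to the peers according to the index's replication factor, so both indexers end up with copies.

```
# inputs.conf on the receiving indexer (or a heavy forwarder in front)
[udp://514]
sourcetype = fortigate
index = network
connection_host = ip
```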
Thanks in advance!
↧
↧
XML search form - Allow wildcard only for specific dropdown input
The search form below prevents the user from entering wildcard inputs in the text field:
- if the user enters any wildcard or a blank value in the text field, it shows an error message.
Now I want to change this form to allow wildcard searches only when the dropdown input value is "audit".
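A sketch of the dropdown side in Simple XML (token and choice names invented, since the original form isn't shown): a change handler sets a flag token only when the dropdown equals audit, which the text-field validation logic can then consult.

```
<input type="dropdown" token="log_type">
  <choice value="audit">audit</choice>
  <choice value="system">system</choice>
  <change>
    <condition value="audit">
      <set token="allow_wildcard">true</set>
    </condition>
    <condition>
      <unset token="allow_wildcard"></unset>
    </condition>
  </change>
</input>
```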
↧
How to detected a Deviation of 20% vs weekly average?
Hi team!
I need to do the following: for EventCode 4624 and 4634 with Logon Type 10, generate an event if an above-normal access volume is detected, i.e. a deviation of 20% versus the weekly average.
This is my search right now:
index=* (EventCode=4624 OR EventCode=4634) eventtype=wineventlog_security
| stats values(host), values(EventCodeDescription), values(Changes), values(Account_Domain), values(action) by _time
| rename values(host) as Host, values(EventCodeDescription) as Description, values(Changes) as Changes, values(Account_Domain) as "Account Domain", values(action) as Action, _time as Date
| convert timeformat="%m/%d/%Y %H:%M:%S" ctime(Date)
But I don't know how to detect a 20% deviation versus the weekly average. How can I do that?
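A sketch of the deviation check itself (separate from the formatting search above; the Logon_Type field name is an assumption from the Windows add-on): run it over the last 7 days, count logons per day, compare each day to the average over the window, and keep the days more than 20% above it.

```
index=* (EventCode=4624 OR EventCode=4634) Logon_Type=10 eventtype=wineventlog_security
| bin _time span=1d
| stats count by _time
| eventstats avg(count) as weekly_avg
| where count > weekly_avg * 1.2
```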
Thank you a lot.
↧
Facing Issues To Run A Report On User Access
Hi Experts,
I have admin permissions to log into Splunk, and whenever I run a report it takes barely 2 seconds or less. I shared that report with a user with read-only access, but whenever the user tries to run it, it shows the message "**waiting for queued job to start Manage Jobs**".
Could anyone help me with this? What should I do, or how can I troubleshoot the issue?
If you need more info in this matter, please let me know.
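One common cause of that message is the user's role hitting its concurrent-search quota, which is much lower for the stock user role than for admin. The relevant settings live in authorize.conf (the values below are illustrative, not recommendations):

```
# authorize.conf
[role_user]
# Max concurrent historical searches per user with this role:
srchJobsQuota = 8
# Max concurrent real-time searches:
rtSrchJobsQuota = 8
```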
Thanks,
@saibal6
↧