Can I add this circular menu (JavaScript and CSS) to an XML dashboard without saving the dashboard as HTML? If so, how?
https://jsfiddle.net/yandongCoder/kL4j7xor/10/
https://www.npmjs.com/package/circular-menu
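For what it's worth, Simple XML can load custom JavaScript and CSS directly, so converting the dashboard to HTML shouldn't be necessary. A minimal sketch, assuming the menu's files are named circular_menu.js and circular_menu.css (placeholder names) and placed in the app's appserver/static directory:

&lt;dashboard script="circular_menu.js" stylesheet="circular_menu.css"&gt;
  &lt;label&gt;My Dashboard&lt;/label&gt;
  &lt;!-- rows and panels as usual; the script runs on page load and can
       build the circular menu and attach it to elements on the page --&gt;
&lt;/dashboard&gt;

After adding the files, restart Splunk (or bump the static assets) so they are picked up.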
↧
Problems with eStreamer app.
We are ingesting eStreamer logs through an eStreamer app (version 222) that was not developed by Splunk.
The fields packet_sec and packet_usec seem to have swapped values. Also, the timestamp does not include subseconds, which are present in either the event_usec or packet_usec field.
Please help.
Thanks,
Thiru
↧
↧
Search for Windows Services that use a domain admin account
I need help creating a search for Windows services that run under a domain admin account.
thanks
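A possible starting point, assuming service inventory is collected with the Windows TA's WinHostMon input; the field names (type, Name, UserName, StartMode) and the MYDOMAIN prefix are assumptions to check against your actual events:

index=windows sourcetype=WinHostMon type=Service
| search UserName="MYDOMAIN\\*"
| dedup host Name
| table host Name DisplayName UserName StartMode

To flag domain admin accounts specifically rather than any domain account, swap the wildcard for a lookup containing the members of the Domain Admins group.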
↧
Using Anaconda with Splunk to run a Python ingestion script
I have to use the library PySNMP to retrieve MiBs and ingest them in Splunk with Python.
I have been given clearance on the production server to install Anaconda and create an environment with PySNMP in it in order to do this. This makes sense, because I was told not to add or remove libraries from Splunk's bundled Python install.
How can I use the Anaconda environment's Python to run the scripts I set up in the Splunk UI for data ingestion? If it were a standalone Python install, I could just point to the Python version I want at the top of the script, but since it's inside an environment now, I'm not sure what to do.
Thanks
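One approach, sketched with hypothetical paths, environment name, and script name: point a scripted input at a small shell wrapper that calls the environment's interpreter directly. Scripted inputs run in a minimal shell where `conda activate` isn't available, but invoking the env's python binary is equivalent for most purposes.

#!/bin/sh
# poll_snmp.sh -- run the ingestion script under the Anaconda environment's
# Python instead of Splunk's bundled interpreter.
exec /opt/anaconda3/envs/pysnmp/bin/python "$SPLUNK_HOME/etc/apps/myapp/bin/poll_snmp.py" "$@"

And the matching inputs.conf stanza:

[script://$SPLUNK_HOME/etc/apps/myapp/bin/poll_snmp.sh]
interval = 300
sourcetype = snmp:polled
index = main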
↧
Capabilities needed for a service account to enable Maintenance Mode and issue offline command
I have been researching the docs here: https://docs.splunk.com/Documentation/Splunk/6.6.3/Security/Rolesandcapabilities
There isn't a good mapping in the docs for what I am trying to accomplish. We are trying to automate some routine maintenance and also enable our hardware teams to replace disks as they fail without requiring scheduling with our team, outside of approvals and verifications. Can someone help me identify the **minimum** capabilities needed for the (LDAP) service account I have created to perform the following operations (the corresponding CLI/REST calls are sketched after the list):
Enable Maintenance Mode
Disable Maintenance Mode
Rebalance Primaries
Rebalance _raw data
Splunk offline
I have not listed splunk stop/start because those commands do not require authentication from the CLI.
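For reference, a sketch of the calls such an account would be issuing (host names are placeholders, and the rebalance_primaries endpoint is from memory, so verify it against your version):

# On the cluster master:
splunk enable maintenance-mode
splunk disable maintenance-mode
splunk rebalance cluster-data -action start
# Rebalancing primaries is exposed over REST on the master:
curl -k -u svc_account https://cluster-master:8089/services/cluster/master/control/control/rebalance_primaries -X POST
# On the peer being serviced:
splunk offline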
↧
↧
Timechart Not working after eval _time
Hello,
I am using timechart in my query. I want to build the timechart from a timestamp contained in the file rather than `_time` (the Splunk ingestion time). When I replace `_time` with the required time field, it doesn't work, even though the format is the same for both.
Data Columns:
![alt text][1]
When I replace GC_TIMESTAMP with _time, it doesn't work.
![alt text][2]
Query:
| tstats summariesonly=true values(GC.before_gc) as before_gc values(GC.max_gc) as max_gc values(GC.after_gc) as after_gc
    FROM datamodel=GC_ORG
    WHERE (nodename=GC host=TALANX_PostGoLive GC.service_name=mxmlc_gc.2017-11-06_17-20-53.log)
    GROUPBY GC.GC_TIMESTAMP, _time, GC.relative_time span=1s
| rename GC.GC_TIMESTAMP as GC_TIMESTAMP
| rename GC.relative_time as relative_time
| rename GC_TIMESTAMP as _time
| timechart fixedrange=false bins=2000 max(before_gc) as total_mem_before_gc, max(after_gc) as total_mem_after_gc, max(max_gc) as max_memory
Am I missing something?
Thanks
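For what it's worth, the usual cause of this symptom: `_time` must contain epoch seconds, and `| rename GC_TIMESTAMP as _time` just drops a string into it, which timechart cannot bucket. Converting with strptime in place of the rename typically fixes it; the format string below is a guess to be matched against the actual GC_TIMESTAMP values:

| eval _time=strptime(GC_TIMESTAMP, "%Y-%m-%d %H:%M:%S")
| timechart fixedrange=false bins=2000 max(before_gc) as total_mem_before_gc, max(after_gc) as total_mem_after_gc, max(max_gc) as max_memory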
[1]: /storage/temp/219700-data-model-output.jpg
[2]: /storage/temp/219701-eval-time-not-working.jpg
↧
Splunk query help?
Hi ,
We have two CSV files, each containing 500 hosts, and we need to figure out which of those hosts are reporting to Splunk and which are not. I created a lookup and can see that some hosts are not reporting. Now I need to combine the two lists and also check which hosts are not reporting to the deployment server; the reason is that we need to install agents on any host from the two CSV files that does not have one. So I am looking for a search that shows these columns: host, IP, age, last time reporting to Splunk, agent version, and whether the host reports to the deployment server. I have two queries; please help me build a search that checks which hosts report to Splunk and to the deployment server.
| metadata type=hosts index=*
| lookup samplehostsrecentlist.csv host output PCI host os IP
| search PCI=Y
| eval age=(now()-recentTime)
| search age>1
| convert ctime(*Time)
| append [| inputlookup samplehostsrecentlist.csv ]
| dedup host
| fields host IP PCI os lastTime age
| sort lastTime
| convert timeformat="%Y-%m-%d %k:%M:%S" ctime(current_time) as current_time ctime(last_login_time) as last_login_time rmunit(age) as numSecs
| eval stringSecs=tostring(numSecs,"duration")
| eval stringSecs=case(stringSecs="00:00:00", "0+0:0:0", 0=0, stringSecs)
| eval stringSecs=replace(stringSecs,"(\d+)\:(\d+)\:(\d+)","\1h \2min \3s")
| fields - age current_time numSecs
| rename stringSecs as age
| sort - age
-----------------------
index=_internal source=*metrics.log* fwdType=uf
| stats values(version) as Version values(os) as OS values(fwdType) as ForwarderType values(build) as Build by hostname
| join type=outer hostname [|inputlookup sample1hostsrecentlist.csv | eval hostname=host | table hostname PCI]
| join type=outer hostname [|inputlookup sample2hostsrecentlist.csv | eval hostname=host | table hostname sox]
| where PCI="y" OR sox="y" | rename hostname as Host
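For the deployment server side, one option is to join the combined host list against the DS client list over REST. This is a sketch only: it must run on (or target) the deployment server itself, the CSV and column names are taken from the question, and the REST field names (hostname, lastPhoneHomeTime) should be verified on your version:

| inputlookup sample1hostsrecentlist.csv
| inputlookup append=true sample2hostsrecentlist.csv
| dedup host
| join type=left host
    [| metadata type=hosts index=*
     | fields host recentTime ]
| join type=left host
    [| rest /services/deployment/server/clients splunk_server=local
     | rename hostname as host
     | fields host lastPhoneHomeTime ]
| eval reporting_to_splunk=if(isnotnull(recentTime), "yes", "no")
| eval reporting_to_ds=if(isnotnull(lastPhoneHomeTime), "yes", "no")
| table host IP reporting_to_splunk reporting_to_ds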
↧
Single value with trend for duration?
I have been searching about this for the last couple of days. I don't think Splunk has this feature, but I just want to make sure I'm right. So I have this search:
index="monthlycdr" | eval "Call Duration"=replace('Call Duration',"\"","") | convert dur2sec("Call Duration") as "Call Duration" | timechart span="1mon" avg("Call Duration") as "TotalCD"
Which gives me this result:
![alt text][1]
But when I convert my result to the 00:00:00 format, it doesn't show the trend. Here is the new search:
index="monthlycdr" | eval "Call Duration"=replace('Call Duration',"\"","") | convert dur2sec("Call Duration") as "Call Duration" | timechart avg("Call Duration") as "TotalCD"
| eval "TotalCD"=tostring($TotalCD$,"duration") | eval TotalCD=replace(TotalCD,"(\d+):(\d+):(\d+).(\d+)","\1:\2:\3")
Which gives me this result:
![alt text][2]
I want the second search to show a trend just like the first search, but I believe I can't do this because of the string conversion. Am I right that Splunk won't be able to do this, at least for now?
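As far as I know, the single value trend is computed from a numeric series, so converting the result to a duration string does disable it. One workaround, sketched here, is to keep the value numeric and only change its scale, for example showing average minutes instead of seconds and using the panel's unit/caption settings for the label:

index="monthlycdr"
| eval "Call Duration"=replace('Call Duration',"\"","")
| convert dur2sec("Call Duration") as "Call Duration"
| timechart avg("Call Duration") as TotalCD
| eval TotalCD=round(TotalCD/60, 1)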
[1]: /storage/temp/219702-4.png
[2]: /storage/temp/219703-5.png
↧
Splunk DB Connect 3.1.1 -- Error in 'dbxquery' command with SQL Server 2008 R2
Hello,
I am currently putting in place a proof of concept of Splunk Enterprise. I'm trying to access a Microsoft SQL Server 2008 R2 instance with the DB Connect add-on. The connection seems to work, because I can see my database list and select one. But when I run a simple SELECT on a table to extract data, here is the error I get:
Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 255.
Thank you for your help.
↧
↧
What is the most efficient way of filtering on two timestamps?
Hello all,
I keep facing a common theme and I wanted some input. We all know that the first filter should be on the time range, which filters on each event's `_time` field. If we would like to filter on a second timestamp, indexed as a String, through a second dashboard input then what are the most efficient ways of doing so?
What I've found is that dealing with a second timestamp requires painful logic to cover both presets and custom inputs coming from the dashboard's time picker. For example, assuming I'm filtering on a field called `TS_Start_Date`, the code that works is:
| where (if("$tok_start_date.earliest$"!="0" AND "$tok_start_date.earliest$"!="",strptime(TS_Start_Date,"%d/%m/%Y %H:%M")>=if(replace("$tok_start_date.earliest$","\d","")!="",relative_time(now(),if("$tok_start_date.earliest$"="now","-0m","$tok_start_date.earliest$")),"$tok_start_date.earliest$"),0=0) AND if("$tok_start_date.latest$"!="0" AND "$tok_start_date.latest$"!="",strptime(TS_Start_Date,"%d/%m/%Y %H:%M")<if(replace("$tok_start_date.latest$","\d","")!="",relative_time(now(),if("$tok_start_date.latest$"="now","-0m","$tok_start_date.latest$")),"$tok_start_date.latest$"),0=0))
If I were to filter only on that field and not on the event `_time` field, I would first need to retrieve all the data, which is very inefficient. Are there any more efficient ways of approaching this problem?
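One way to keep the same behaviour while making it more maintainable is to parse the field once and resolve each token to a plain epoch value up front. A sketch, reusing the question's token names and `%d/%m/%Y %H:%M` format, not tested against every picker preset:

| eval ts=strptime(TS_Start_Date, "%d/%m/%Y %H:%M")
| eval e_tok="$tok_start_date.earliest$", l_tok="$tok_start_date.latest$"
| eval e=case(e_tok=="" OR e_tok=="0", 0,
              e_tok=="now", now(),
              match(e_tok, "^\d+(\.\d+)?$"), tonumber(e_tok),
              1=1, relative_time(now(), e_tok))
| eval l=case(l_tok=="" OR l_tok=="0", now(),
              l_tok=="now", now(),
              match(l_tok, "^\d+(\.\d+)?$"), tonumber(l_tok),
              1=1, relative_time(now(), l_tok))
| where ts>=e AND ts<l
| fields - ts e l e_tok l_tok

On the efficiency question itself: since the second timestamp is just a string in the index, the comparison can only happen after events are retrieved. If this pattern comes up often, consider making that timestamp the event's `_time` at ingest, or adding it as an indexed field, so the filter can be pushed down the way the time range picker's is.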
Any inputs would be appreciated because I've seen this problem a lot and don't know how best to address it...
Best regards,
Andrew
↧
How do you resolve splunkd.log error messages after switching authentication from LDAP to SAML?
Hey guys,
After changing our authentication system from LDAP to SAML we get a lot of messages like this in splunkd.log:
11-07-2017 18:35:00.904 +0100 WARN UserManagerPro - AQR not supported and user=system information not found in cache
All I could find out by myself is that "AQR" appears to stand for "Attribute Query Request", a SAML mechanism Splunk uses to pull user information from the IdP, and the warning suggests our IdP does not support it.
Can anybody help here?
Greetings
Dennis
↧
Are there any apps or TAs available for Adobe Experience Manager?
If any are available, great; if not, any suggestions, please.
Do I need to create my own TA? If so, please let me know how.
↧
Multiple base searches in one search
I have three base searches in my dashboard.... ... ...
I need to show the results of each of these queries in a single table, so I thought I could use multiple base searches, something like this...
Is there a way the above can be achieved?
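As far as I know, a post-process `<search>` can reference only one `base`, so the usual workaround is a single base search that appends the queries, which panels then post-process. A generic sketch with placeholder ids, indexes, and fields:

&lt;dashboard&gt;
  &lt;search id="base_all"&gt;
    &lt;query&gt;
      index=idx1 | fields a b
      | append [ search index=idx2 | fields a b ]
      | append [ search index=idx3 | fields a b ]
    &lt;/query&gt;
    &lt;earliest&gt;-24h@h&lt;/earliest&gt;
    &lt;latest&gt;now&lt;/latest&gt;
  &lt;/search&gt;
  &lt;row&gt;
    &lt;panel&gt;
      &lt;table&gt;
        &lt;search base="base_all"&gt;
          &lt;query&gt;stats count by a&lt;/query&gt;
        &lt;/search&gt;
      &lt;/table&gt;
    &lt;/panel&gt;
  &lt;/row&gt;
&lt;/dashboard&gt;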
Thanks!!
↧
↧
How to configure Log4Net in .NET to send logs to the Splunk HTTP Event Collector?
We are trying to configure our app to send log messages to our Splunk server using the HEC service. HEC is already configured and receiving messages: if we send messages from our Unix machine using curl, we can see them:
**curl -k "http://ourSplunSrv:8088/services/collector" -H "Authorization: Splunk xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -d '{"event": "TESTING", "sourcetype": "http:LAB-APP"}'**
So, the server is running and listening fine.
**But how to configure the Log4Net to do the same?**
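One way to do it, since log4net has no built-in HEC appender that I know of, is a small custom appender. A minimal sketch in C# -- synchronous and unbatched, so fine for a lab but not for production volume; the Url and Token values come from your HEC setup:

using System.Net.Http;
using System.Text;
using log4net.Appender;
using log4net.Core;

// Posts each logging event to the HTTP Event Collector as a JSON payload.
public class HecAppender : AppenderSkeleton
{
    private static readonly HttpClient Client = new HttpClient();

    public string Url { get; set; }    // e.g. http://ourSplunSrv:8088/services/collector
    public string Token { get; set; }  // the HEC token

    protected override void Append(LoggingEvent loggingEvent)
    {
        // RenderLoggingEvent applies the configured layout to the event.
        string message = RenderLoggingEvent(loggingEvent);
        string payload = "{\"event\": " + Quote(message) + ", \"sourcetype\": \"http:LAB-APP\"}";

        var request = new HttpRequestMessage(HttpMethod.Post, Url)
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };
        request.Headers.TryAddWithoutValidation("Authorization", "Splunk " + Token);
        Client.SendAsync(request).Wait();
    }

    // Naive JSON string escaping, enough for a sketch.
    private static string Quote(string s)
    {
        return "\"" + s.Replace("\\", "\\\\").Replace("\"", "\\\"") + "\"";
    }
}

Register it in the log4net config with an appender element such as `<appender name="hec" type="YourNamespace.HecAppender, YourAssembly">` plus `<param name="Url" .../>` and `<param name="Token" .../>` entries (names here are hypothetical).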
↧
Incomplete lookup results
Hi everyone!
We've been randomly facing a rather annoying and critical issue while working with lookups:
sometimes only a few entries get lookup fields when there should be many more. Rewriting the lookup file helps in most cases.
But the behaviour is not stable, which means you cannot trust the results, especially when the lookup is used in scheduled searches.
I can't say for sure what conditions cause this behaviour: it often happens with large CSV files (over 1 million lines) that are rewritten on a daily basis, and it happens in all Splunk versions.
This time I managed to save the search logs for the same query both when the lookup worked incorrectly and when it worked fine (after rewriting it).
Comparing these two files shows they are mostly identical, except for several lines.
The "good" log file contains an entry that is missing from the "bad" log:
INFO CMBucketId CMIndexId: New indexName=main inserted, mapping to id=1
Also, in the good log:
INFO DispatchThread SrchOptMetrics optimize_toJson=1
While in the bad log:
INFO DispatchThread SrchOptMetrics optimize_toJson=2
An Excel file with the comparison is attached. [1]
Hope for your help, guys!
[1]: /storage/temp/218710-goodvsbad.zip
↧
Splunk Add-on for Microsoft Active Directory filling up local disk with .tmp files
Hello,
We have this add-on installed (version 1.0.0), and .tmp files are being created on some Active Directory servers but never removed. The content of the .tmp files is as follows:
constants
SplunkCheckpointPath:C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell
SplunkHome:C:\Program Files\SplunkUniversalForwarder
SplunkServerHost:W00P0023
SplunkServerName:W00P0023
SplunkServerUri:https://127.0.0.1:8089
SplunkSessionKey:hdTSdGTrlp7bfoXn_V4qQePQvVwJYoNKkLz_3S32SUkvfA18XnkEXgxWlBenUk2rfSbss_GIJC5yk7xx2oeP6wDpMFZZRKV7hBLM7SSTcJAfLOWKbfVYU4C
stanzas
stanza:AD-Health
event_group:-1,1
index:msad
script:& "$SplunkHome\etc\apps\Splunk_TA_microsoft_ad\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-health.ps1"
source:Powershell
sourcetype:MSAD:NT6:Health
stanza:Replication-Stats
event_group:-1,2
index:msad
script:& "$SplunkHome\etc\apps\Splunk_TA_microsoft_ad\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-repl-stats.ps1"
source:Powershell
sourcetype:MSAD:NT6:Replication
stanza:Siteinfo
event_group:-1,3
index:msad
script:& "$SplunkHome\etc\apps\Splunk_TA_microsoft_ad\bin\Invoke-MonitoredScript.ps1" -Command ".\powershell\2012r2-siteinfo.ps1"
source:Powershell
sourcetype:MSAD:NT6:SiteInfo
Can someone please help investigate what is causing this? I suspect that the logging is not configured correctly here.
↧
How to resolve "cannot find .srl file" error
Trying to bring in Infoblox events. We are using the data manager on the Infoblox side. I created a cert using steps 1-4 of this article:
http://docs.splunk.com/Documentation/Splunk/6.3.3/Security/Howtoself-signcertificates
I am on 6.5.1.
When I tried to import the .pem file from Infoblox, I got this error:
bash: trvapps/splunkforwarder/etc/auth/mycerts/splunkforwarder.srl:: No such file or directory
140070396331848:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/trvapps/splunkforwarder/etc/auth/mycerts/splunkforwarder.srl','r')
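In case it helps others: this error typically means openssl was asked to read a serial-number (.srl) file that has not been created yet. Re-running the CA signing step with -CAcreateserial, which writes the .srl on first use, usually resolves it. File names below are placeholders in the style of the linked doc:

openssl x509 -req -in myServerCertificate.csr \
    -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial \
    -out myServerCertificate.pem -days 1095 -sha256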
↧
↧
How to configure CA for Infoblox
On my forwarder I used the procedure below to create .pem, .csr, and .key files. I passed the .pem to the Infoblox admin, and they in turn passed me back a .pem file.
Splunk version 6.5.1
Link to the Splunk steps:
http://docs.splunk.com/Documentation/Splunk/6.3.3/Security/Howtoself-signcertificates
I get this error when I try to import it:
/trvapps/splunkforwarder/etc/auth/mycerts/splunkforwarder.srl: No such file or directory
140070396331848:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/trvapps/splunkforwarder/etc/auth/mycerts/splunkforwarder.srl','r')
140070396331848:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
↧
What are some use cases where it is best to use accelerated datamodels?
Hi there.
The only way to access an accelerated data model's summaries is the tstats command, which is tricky to use in general, and especially for extracting single events.
So why even use accelerated data models if only one command can take advantage of them?
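For context, the sweet spot for acceleration is aggregation over large volumes and long time ranges (security and operations dashboards, scheduled summary reports), not single-event retrieval. A typical sketch, assuming an accelerated CIM Authentication data model is installed:

| tstats summariesonly=true count from datamodel=Authentication
    where Authentication.action="failure"
    by Authentication.src, _time span=1h

Because this reads pre-computed summary columns instead of raw events, it can be orders of magnitude faster than the equivalent raw search; if you need the raw events themselves, acceleration buys you little.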
↧