Hi,
We are planning a Splunk to UCMDB integration and I have some questions about it.
We are currently on Splunk Enterprise 6.6.5. Which version of UCMDB is supported for this connection, where Splunk hits UCMDB's REST API and fetches the needed data into Splunk?
What does the configuration process look like, step by step?
Thanks in advance for any helpful guidance!
↧
Splunk to UCMDB
↧
Opinions on using a single SAN mountpoint for both hot and cold buckets?
We're getting ready to deploy new Linux indexers with VMAX storage and I'm thinking of just sending all of my buckets to a single VMAX filesystem. Any opinions?
↧
↧
Event breaking at index time for a custom sourcetype
Hi all,
this linebreak/eventbreak problem is driving me crazy... I searched all day for a solution but nothing helped.
We have a universal forwarder monitoring a logfile (line by line):
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigCommandBaseORE_LOG 2018-09-06 15:49:27,559 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigCommandBase] [regionContextHeirarchies: [zvs-prod, ALOA-DEFAULT, dummy]]
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigGetCommandORE_LOG 2018-09-06 15:49:27,559 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigGetCommand] [Parameter name=SMTP_USERNAME@MCF@zvs-prod@CURRENT, Returned values=]
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigCommandBaseORE_LOG 2018-09-06 15:49:27,559 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigCommandBase] [regionContextHeirarchies: [zvs-prod, ALOA-DEFAULT, dummy]]
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigGetCommandORE_LOG 2018-09-06 15:49:27,559 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigGetCommand] [Parameter name=OOB_DISABLE_HEALTH_CHECKS@MCF@zvs-prod@CURRENT, Returned values=true]
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigCommandBaseORE_LOG 2018-09-06 15:49:27,560 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigCommandBase] [regionContextHeirarchies: [zvs-prod, ALOA-DEFAULT, dummy]]
Sep 6 15:49:27 lxXXXXXXX.de user:%ENVIRcom.rsa.csd.config.GenConfigGetCommandORE_LOG 2018-09-06 15:49:27,560 +0200 DEBUG [http-bio-443-exec-178] [1ee3:1e895a3a561:430e6423-||1536235160000] [2ee3:1e895a3a561:430e6423-_TRX] [com.rsa.csd.config.GenConfigGetCommand] [Parameter name=KBA_MODE@MCF@zvs-prod@CURRENT, Returned values=testing]
On my index cluster I have an app defining the index-time parsing as follows:
[rsa]
TIME_PREFIX = %\S+\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = DST
LINE_BREAKER=\]([\n\r]+)\w+\s+\d+\s+
EVENT_BREAKER=\]([\n\r]+)\w+\s+\d+\s+
SHOULD_LINEMERGE=false
BREAK_ONLY_BEFORE_DATE = false
TRUNCATE = 20000
MAX_DAYS_AGO = 14
Problem: some events are multiline:
Sep 6 15:49:44 lXXXXXde ...+.net+clr+2.0.50727;+.net+clr+3.0.30729;+.net+clr+3.5.30729;+rv:11.0)+like+gecko|5.0+(Windows+NT+10.0;+WOW64;+Trident/7.0;+.NET4.0C;+.NET4.0E;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.30729;+.NET+CLR+3.5.30729;+rv:11.0)+like+Gecko|Win32&EXTERNAL_TIMEOUT=&TIMEZONE=1&pm_fpsdx=96&BROWSER_MAJOR_VERSION=NaN&JAVA=1&COOKIE=1&pm_fpol=true&SOFTWARE=&pm_fpsdy=96&version=3.4.0.0_2&pm_fpup=&pm_fpslx=96&pm_fpsly=96&pm_fpsfse=true&BROWSER_TYPE=Mozilla&pm_fpacn=Mozilla&INTERNAL_TIMEOUT=&pm_fpsbd=0&SUPPRESSED=false&LANGUAGE=lang%3Dde-DE|syslang%3Dde-DE|userlang%3Dde-DE&pm_fpsui=&pm_fpasw=flash|ieatgpc&pm_fposp=&OS=Windows&pm_fpsaw=1920&DISPLAY=24|1920|1200|1160&LANGUAGE_BROWSER=de-DE&pm_fpspd=24&ACCEPT_LANGUAGE=de-DE&LANGUAGE_USER=de-DE
ipHist=V=2&BT/0000004900000164d0f8bdde00000165af0e4de6440511f243106cae426666dbff02BD/0000004900000164d0f8bdde0000016...
Sep 6 15:49:44 lXXXXXde ...5af0e4de6440511f243106cae426666dbff8c38e246BU/0000004900000164d0f8bdde00000165af0e4de6440511f243106cae426666dbffd98d4a6dBL/0000004900000164d0f8bdde00000165af0e4de6440511f243106cae426666dbffdeDEBS/00000045000001657c01be9400000165af0e4de6440511f2430ff7e5426671507f1deeaef9GS/0000000100000165af0e4de600000165af0e4de63f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165af0296e700000165af0296e73f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165aefedb3600000165aefedb363f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165aeea741a00000165aeea741a3f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165aee4cd4600000165aee...
Sep 6 15:49:44 lXXXXXde ...4cd463f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae90a1e900000165ae90a1e93f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae8e154100000165ae8e15413f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae8a69aa00000165ae8a69aa3f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae7d045a00000165ae7d045a3f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae7673d600000165ae7673d63f8000003f8000003f8000007fdb41295d0100005800000000000fbb730000000100000001027a115d004d608cGS/0000000100000165ae6703f400000165ae6703f43f8000003f8000003f8000007fdb41295d01...
So I need SHOULD_LINEMERGE=true plus a line-break rule, but the timestamp is inside the messages...
Now, how do I deal with that?
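One possible approach (an untested sketch, reusing the stanza name above): since every new event starts with a syslog header like `Sep 6 15:49:27`, the line breaker can key on that header instead of the closing bracket, so continuation lines that lack the header stay glued to the previous event, and SHOULD_LINEMERGE can stay false:

    [rsa]
    # Break only where a new syslog header ("MMM d HH:MM:SS") begins.
    # Only the captured newlines are consumed; the lookahead keeps the
    # header as the start of the next event. Continuation lines without
    # a header remain part of the previous event.
    LINE_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s)
    EVENT_BREAKER = ([\r\n]+)(?=[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s)
    SHOULD_LINEMERGE = false

The existing TIME_PREFIX/TIME_FORMAT settings would still pick up the internal `2018-09-06 15:49:27,559 +0200` timestamp.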
↧
Enable higher permissions only in a dashboard
Hi,
There's a bit of background so this makes sense: I work in a call centre and have a higher permission level than the people I want to have access to this dashboard I've made. The dashboard shows stats for whoever is logged into Splunk and viewing it.
The problem is that the people I want to view it don't have the same permissions; if they did, they'd be able to see their colleagues' stats, and that would cause friction if someone was performing better than another.
What I want to know is whether there is a way to have the searches bypass role permissions just to pull the required data, rather than relying on the viewer's access level. The data isn't sensitive to the individual user, as they'd only be able to see their own stats, but changing their permissions isn't feasible.
Thanks,
↧
Regarding my SH clustering
Hello, I have 2 SHs, 2 indexers, 1 HF, and 1 deployment server in my environment.
The deployment server is acting as cluster master for the SHs and indexers, and SH1 is acting as captain.
But I now have a problem: the replication status of SH1 (the captain) shows Initial, while the SH member's replication status shows Successful. I have tried a lot but haven't had any success. Can anyone help me with this? Thanks in advance.
Splunk version 7.1.1
↧
↧
Splunk Sales Rep 1 certificate download error
Hi,
I am getting errors when downloading the Splunk Sales Rep 1 certificate.
I recently completed the exam, but the course certificate does not appear in my profile under accreditations.
This means I can't take the Sales Rep 2 exam and so on.
Can someone please look into this and help sort it out, as I am keen to progress quickly?
Regards,
Abid
↧
Using Heavy Forwarder
We are going to use syslog-ng and a heavy forwarder for the SecretServer. Could it be that we only need to change props.conf in the SecretServer app to a [SecretServer] stanza rather than the default [syslog] stanza?
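For what it's worth, a minimal sketch of how that could fit together (the file path, sourcetype, and stanza name are assumptions, not taken from the app):

    # inputs.conf on the heavy forwarder -- monitor the file syslog-ng writes
    # (path is a placeholder) and assign the custom sourcetype
    [monitor:///var/log/syslog-ng/secretserver.log]
    sourcetype = SecretServer

With the input tagged this way, a `[SecretServer]` stanza in the app's props.conf would apply instead of the default `[syslog]` one.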
↧
Time value difference in duration: getting value as 0d
Hi all,
I am able to get the time difference in epoch seconds and convert it to a string with the following:
eval LeadDays = (Answer_Time - Bookingdate) | eval LeadDays = tostring(LeadDays, "duration")
Bookingdate Answer_Time LeadDays
1535635518.000000 1535708751.000000 20:20:33.000000
1535636031.000000 1536059535.000000 2+21:38:24.000000
The problem is in the first row: is there a way to display it as 0+20:20:33.000000 instead of 20:20:33.000000?
I tried string concatenation but it didn't work.
Also, is there a way to convert 2+21:38:24 to days only, i.e. 2 + 21/24 + 38/1440 ≈ 2.90 days?
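Both parts can be done with eval (a sketch reusing the field names from the search above): pad the duration string when the difference is under one day, and divide the raw seconds by 86400 for fractional days:

    | eval LeadSecs = Answer_Time - Bookingdate
    | eval LeadDays = tostring(LeadSecs, "duration")
    ``` prefix "0+" only when the gap is under a day (86400 seconds) ```
    | eval LeadDays = if(LeadSecs < 86400, "0+".LeadDays, LeadDays)
    ``` fractional days, e.g. 73233 seconds -> 0.85 ```
    | eval LeadDaysNum = round(LeadSecs / 86400, 2)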
↧
↧
Solaris TA for Solaris 11 SPARC having an error with one of the scripts, hardware.sh
Hi guys, I have installed a universal forwarder on a Solaris 11 server and am looking to populate the default dashboards of the Solaris app.
There are a few errors I can see in splunkd.log, and I would appreciate it if anyone can help resolve them:
09-06-2018 16:13:54.253 +0300 INFO ExecProcessor - New scheduled exec process: /tmp/splunkforwarder/splunkforwarder/etc/apps/Splunk_TA_solaris11/bin/hardware.sh
09-06-2018 16:14:30.029 +0300 ERROR ExecProcessor - message from "/tmp/splunkforwarder/splunkforwarder/etc/apps/Splunk_TA_solaris11/bin/hardware.sh" awk: record `HARD_DRIVES c2d0:SUN...' too long
09-06-2018 16:53:55.323 +0300 INFO ExecProcessor - New scheduled exec process: /tmp/splunkforwarder/splunkforwarder/etc/apps/Splunk_TA_solaris11/bin/hardware.sh
09-06-2018 16:54:23.281 +0300 ERROR ExecProcessor - message from "/tmp/splunkforwarder/splunkforwarder/etc/apps/Splunk_TA_solaris11/bin/hardware.sh" awk: record `HARD_DRIVES c2d0:SUN...' too long
Please let me know if you need any other details in this regard.
Thanks
Gaurav
↧
HEC and Indexer Clustering
The setup we have is as follows:
Master x 1
Indexers x 3
Search head x 1
I am trying to enable HEC on the indexers through inputs.conf and outputs.conf as described [here][1].
I have created the local directory, added inputs.conf and outputs.conf, and restarted Splunk, but there is still nothing listening on port 8088.
I did read this in the documentation:
> Using HTTP Event Collector in a distributed deployment is incompatible with indexer clustering. Specifically, cluster peers are not supported as deployment clients.
https://docs.splunk.com/Documentation/Splunk/7.1.2/Data/UsetheHTTPEventCollector
Does this mean that I cannot enable HEC even though I am not using the deployment server to push this app out? We are manually editing the .conf files on each indexer.
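For reference, one reading of that note is that it only rules out pushing the HEC app via the deployment server, not running HEC on cluster peers at all. A minimal inputs.conf sketch for a manually managed HEC input (the token value, index, and app directory are placeholders):

    # inputs.conf, e.g. in $SPLUNK_HOME/etc/apps/hec_inputs/local/ on each indexer
    [http]
    disabled = 0
    port = 8088

    # token name and value are placeholders -- generate a real GUID
    [http://my_hec_token]
    token = 11111111-2222-3333-4444-555555555555
    index = main
    disabled = 0

If nothing listens on 8088 after a restart, checking splunkd.log for `HttpInputDataHandler` or port-binding errors may show whether the `[http]` stanza was picked up.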
[1]: http://dev.splunk.com/view/event-collector/SP-CAAAE6Q
↧
Splunk Add-on for Microsoft Cloud Services - Metrics supported?
I looked through the documentation page for the add-on and didn't see anything stating that Azure metrics are supported. Can I ingest Azure metrics into Splunk via this add-on?
↧
↧
Does KV store cleanup delete license ?
My Splunk license had expired, so I got a new license and installed it.
After adding the license, I was getting a "KV Store initialization failed" error. I checked the KV store status and it showed Failed, so I did a KV store cleanup. But after restarting, I am again getting a license expired error, as below:
Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times.
My question is: did the KV store cleanup delete my license? Under Settings -> Licensing I can still see the license, and it says it is valid.
Secondly, what is the solution to this problem now?
Please let me know, as I am completely stuck.
Thanks for your support in advance.
↧
How do I plot 60 days worth of data on the line chart?
I have data coming in from our NetApp storage controllers that shows aggregate space free every day. I need to plot each day's values on a line chart showing the last 60 days as dots, so management can see whether storage is trending up or down. I have this search so far, but it is plotting the sum of everything over 60 days instead of each day's values:
index=netapp_vault_utilization sourcetype=dbx3_netapp_vault_aggregate earliest=-60d@d latest=@h
| timechart span=60d@d sum(SpaceFree) AS aggregate_space_free_60d by name
| append [search index=netapp_vault_utilization sourcetype=dbx3_netapp_vault_aggregate earliest=-48h@h latest=24h@h
| timechart span=60d@d sum(SpaceFree) AS aggregate_space_free_60d by name]
![alt text][1]
[1]: /storage/temp/254870-2018-09-06-10-35-24.png
It shows the correct amount for the current day, but the larger numbers above are a sum going back to the oldest event. How do I fix this? Am I even on the right path?
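A possible simplification (a sketch using the index, sourcetype, and field names from the search above): `span=60d` collapses the whole window into a single bucket and sums everything in it, while `span=1d` produces one data point per day, which is what a 60-day trend line needs:

    index=netapp_vault_utilization sourcetype=dbx3_netapp_vault_aggregate earliest=-60d@d latest=@h
    | timechart span=1d sum(SpaceFree) AS aggregate_space_free by name

If the controllers report more than once per day, `sum(SpaceFree)` may need to become `latest(SpaceFree)` or `avg(SpaceFree)` so each dot reflects a daily reading rather than a daily total.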
↧
Why does BREAK_ONLY_BEFORE work only for some events?
I have applied the configuration below on the heavy forwarders, but it works only for a few events; many events are not getting broken by the regex in BREAK_ONLY_BEFORE.
pulldown_type = 1
SEDCMD-backslash=s/\\//g
TRUNCATE = 0
BREAK_ONLY_BEFORE = {\”name\”
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = json
KV_MODE = json
category = Structured
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
Sample logs as below.
{\"name\":\"\",\"\":,\"severity\":\"info\",\"time\":,\"host\":\"\",\"hostname\":\"\",\"\":\"\",\"\":\"UNKNOWN CORRELATION\",\"userId\":\"UNKNOWN USER\",\"moduleName\":\"\",\"\":\"a\",\"client\":\"AgentDesktop\",\"type\":\"application\",\"msg\":\"\",\"\":\"\"}{\"name\":\"\",\"level\":30,\"\":\"info\",\"time\":,\"host\":\"\",\"hostname\":\"\",\"\":\"\",\"clientCorrelationId\":\"\",\"userId\":\"UNKNOWN
The same stanza on the heavy forwarder works for some events but not for others. Can someone tell me what could be wrong?
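One thing worth checking (a sketch, reusing the attributes above): with `SHOULD_LINEMERGE = false`, `BREAK_ONLY_BEFORE` is ignored and only `LINE_BREAKER` controls event boundaries, so splitting the back-to-back `}{\"name\"` objects in the sample would need something like:

    SHOULD_LINEMERGE = false
    # Break between a closing } and the next {\"name\" -- the empty
    # capture group () means nothing is removed at the boundary.
    LINE_BREAKER = \}()\{\\"name\\"

Note that if the SEDCMD strips the backslashes before line breaking doesn't apply here: SEDCMD runs after event breaking, so the pattern should match the raw escaped quotes as shown in the sample.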
↧
Help with Split-Shift View on Dashboard
Hopefully I can explain this clearly. I am going to post pictures below, so please take a look at them, as they are necessary to understand my question.
The way I have my dashboard set up is that the Split-Shift view can only be selected if the today or yesterday button is selected. If this hour, this week, or this year are selected, it will deselect Split-Shift view.
I would like to make it so that if no timespan is selected, but the Split-Shift button is selected, it automatically selects the timespan of today. Does anyone know what XML or settings I need to make this happen? Thank you in advance!
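Since the dashboard source isn't shown, here is only a rough Simple XML sketch (the token and input names `split_shift` and `timespan` are assumptions): a `<change>` handler on the Split-Shift input can default the timespan token to today when none is set:

    <input type="checkbox" token="split_shift" searchWhenChanged="true">
      <label>Split-Shift View</label>
      <choice value="on">Split-Shift</choice>
      <change>
        <condition value="on">
          <!-- if no timespan is selected yet, fall back to "today" -->
          <eval token="timespan">if(isnull($timespan$), "today", $timespan$)</eval>
        </condition>
      </change>
    </input>

The exact condition values would need to match however the existing time buttons set their tokens.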
![alt text][1]
![alt text][2]
[1]: /storage/temp/255934-sa3.jpg
[2]: /storage/temp/255935-sa2.jpg
↧
↧
Need help with a search to exclude logs with extensions
Hi, in my data I have API calls with several extensions (.html, .com, .php, and many more). I am trying to exclude the logs that have these extensions. I tried the below:
index=abc NOT (api_call=".html." OR api_call=".php")
But I don't want to use NOT, since more extensions will appear in the future. Can anyone help?
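A sketch that avoids enumerating extensions (assuming the extension, when present, appears at the end of `api_call`): instead of a NOT list, drop any value ending in a dot followed by letters or digits with a single regex:

    index=abc
    | regex api_call!="\.[A-Za-z0-9]+$"

If extensions can appear mid-string rather than only at the end, the `$` anchor would need to be adjusted to match the actual URL shape.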
↧
Passing some subsearch result fields to the result
I'm trying to figure out if the following can be done with subsearch or requires a join.
I'm running a search that boils down to:
index=indexA sourcetype=outer
[search index=indexB sourcetype=inner innerinput=abc | fields inneroutput1 inneroutput2 inneroutput3]
| table _time host outeroutput1 outeroutput2 **inneroutput3**
My subsearch results provide the keys necessary for the main search, but I'd like one extra field passed through to the final table without being used to filter the outer search. Am I missing something, or do I have to run a join just for that extra field?
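A subsearch only returns a filter for the outer search, so the extra field has to be re-attached somehow. One sketch (assuming `inneroutput1` is the key shared between the two sourcetypes) runs the inner search twice, once as a filter and once to join the extra field back on:

    index=indexA sourcetype=outer
        [search index=indexB sourcetype=inner innerinput=abc | fields inneroutput1 inneroutput2]
    | join type=left inneroutput1
        [search index=indexB sourcetype=inner innerinput=abc | fields inneroutput1 inneroutput3]
    | table _time host outeroutput1 outeroutput2 inneroutput3

Subsearches are capped by default (10,000 results / 60 seconds), so for large inner result sets a `stats values(...) by key` over both indexes may scale better than join.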
↧
How to list a count ONLY if the value within a query is above a certain threshold?
Hello. Today I have several panels in a dashboard that provide daily, weekly, and monthly counts of certain problem areas. For one of the scripts, though, I would like to report a count only if the number is higher than 10. Below is one of the queries we currently have in place:
Also, below are examples of the output in Splunk today, used to gather those counts:
STATUS | wrapper | main | 2018/09/06 16:47:38.283 | Pinging the JVM took 11 seconds to respond.
STATUS | wrapper | main | 2018/09/06 16:47:38.283 | Pinging the JVM took 2 seconds to respond.
STATUS | wrapper | main | 2018/09/06 16:47:10.731 | Pinging the JVM took 11 seconds to respond.
STATUS | wrapper | main | 2018/09/06 16:47:10.731 | Pinging the JVM took 2 seconds to respond.
We know that a "kick out" takes place if the ping takes longer than 10 seconds. Is there a way to rewrite the query to count only when the ping takes longer than 10 seconds, or would we have to rewrite the regular expression to read something like --> Pinging the JVM took [0-9][1-9] seconds to respond.
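Rather than fighting the regex (note that `[0-9][1-9]` would match 11-19, 21-29, etc., but miss 20, 30, 40...), a sketch that extracts the number and filters on it (the base search is assumed, since the original query isn't shown):

    ... "Pinging the JVM took"
    | rex "Pinging the JVM took (?<ping_seconds>\d+) seconds"
    | where ping_seconds > 10
    | stats count

This keeps the threshold as a plain comparison, so changing it later (say, to 15 seconds) doesn't require touching the extraction.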
Any help would be greatly appreciated!
-Cameron
↧