We have a dashboard panel table that contains links to dashboard "snapshots" like this:
http://...?**form.field1.earliest=1505343600&form.field1.latest=1505354400**
On the dashboard we have a panel with external hyperlinks that also need dynamic timestamps. To create those timestamps, we have the following code in the form section of our dashboard:
    <input type="time" token="field1">
      <default>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
      </default>
      <change>
        <eval token="stime">strftime('earliest', "%m/%d/%y %H:%M:%S")</eval>
        <eval token="etime">strftime('latest', "%m/%d/%y %H:%M:%S")</eval>
        <eval token="etime_NR">'latest'+3600</eval>
        <eval token="duration_NR">$latest$-$earliest$+7200</eval>
      </change>
    </input>
This code works fine if I come into the dashboard directly and use the date picker to select earliest/latest. If I come to the dashboard via a link like the one above, the variables stime/etime/etime_NR/duration_NR are not set. Ideas?
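For reference, a minimal sketch of one way to cover the URL case, assuming a Splunk version whose SimpleXML supports the `<init>` block (6.5+): evaluate the derived tokens once at dashboard load, so they exist even when earliest/latest arrive as URL parameters rather than through a picker change. Note that URL-supplied values are epoch seconds while picker defaults can be relative time strings, so the strftime calls below assume the epoch case:

    <form>
      <init>
        <!-- runs at load time, including when form.field1.* arrive via the URL -->
        <eval token="stime">strftime($field1.earliest$, "%m/%d/%y %H:%M:%S")</eval>
        <eval token="etime">strftime($field1.latest$, "%m/%d/%y %H:%M:%S")</eval>
        <eval token="etime_NR">$field1.latest$+3600</eval>
        <eval token="duration_NR">$field1.latest$-$field1.earliest$+7200</eval>
      </init>
      ...
    </form>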
↧
Handling dashboard parameters passed in from link
↧
How to improve index replication speed?
Dear Splunkers,
I am performing a migration of a multisite indexer cluster with 2 sites: RF=2, SF=2, with one copy of raw data and tsidx data at each site. 40 indexers in total, 20 per site.
Approach is as follows:
1. Bring up 40 new indexers, 20 in each site
2. Put each of the 40 old indexers into detention
3. Configure forwarders to forward data only to the new indexers
4. Do an indexer data rebalance
5. Offline the old indexers one by one, alternating between sites, with enforce-counts enabled (the indexers still need to serve search heads as usual)
I am currently at step 5; the problem is that offlining each indexer takes a couple of hours. I am aware that many factors, not least hardware and the amount of data (~900 TB in total), play a significant role here. Nevertheless, I would like to know whether further improvements can be made through Splunk configuration changes.
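If it helps the discussion, a minimal sketch of the cluster-master settings most often mentioned in this context; the values are illustrative assumptions, not recommendations, so check them against the server.conf spec for your version. They throttle how aggressively the master schedules fixups, so raising them trades peer load for speed:

    # server.conf on the cluster master -- illustrative values only
    [clustering]
    # concurrent bucket-fixup (make-searchable) tasks allowed per peer
    max_peer_build_load = 5
    # concurrent replications a peer may take part in as a target
    max_peer_rep_load = 10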
Appreciate your thoughts,
Thanks,
↧
How to add indexers to license pools via CLI
Hi,
I need to add some indexers to an existing license pool via the CLI. The docs don't really give clear examples of how to do this... has anyone tried it?
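For what it's worth, a hedged sketch of the surface I'd try first; the pool name and GUIDs are placeholders, and the exact flag names are worth confirming with `./splunk help licenser-pools` on your version:

    # on the license master: list existing pools and their members
    ./splunk list licenser-pools
    # append indexers (by GUID) to an existing pool -- verify the flag
    # name against the CLI help before relying on it
    ./splunk edit licenser-pools my_pool -append_slaves <guid1>,<guid2>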
↧
Splunk DB Connect -- Do I need to change the configuration? A column from an Oracle table isn't making it into Splunk
Some of the columns from an Oracle table (via DB Connect) are not getting ingested into Splunk after integration.
Could you please let me know why?
Note: the affected column holds very long values. Is that one of the reasons it is not indexed properly by Splunk? If so, what configuration has to be changed, and where, so that the column shows up in Splunk?
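On the very-long-values note, one thing worth checking, sketched under the assumption the column is being cut at parse time: the per-event TRUNCATE limit in props.conf, which defaults to 10000 bytes. The sourcetype name below is hypothetical:

    # props.conf on the instance that parses the DB Connect events
    [my_dbconnect_sourcetype]
    # raise the per-event byte limit (0 disables truncation entirely)
    TRUNCATE = 100000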
↧
Can I remove remote-bundle files? They take up a lot of disk space.
In SPLUNK_HOME/var/run/splunk/cluster/remote-bundle there are the files below. Which of them can be removed? They take up so much disk space.
    03f58995749637f6d88a5333918cf6f3-1496941618.bundle
    03f58995749637f6d88a5333918cf6f3-1496941618
    94264d9cbc7714b2ef84e425cc72d777-1501624862.bundle
    94264d9cbc7714b2ef84e425cc72d777-1501624862
    d5ef4a7482aafe56e7221d61c705e2f1-1505917123.bundle
    d5ef4a7482aafe56e7221d61c705e2f1-1505917123
    d5ef4a7482aafe56e7221d61c705e2f1-1505917123.bundle_active
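As a reference point, a small sketch for confirming which bundle is live before deleting anything; the `_active` marker file flags the bundle currently in use, and the master can report the active checksum:

    # on the cluster master: show active/latest bundle checksums
    ./splunk show cluster-bundle-status
    # on the peer: directories/tarballs whose checksum matches neither the
    # active bundle nor the *_active marker are the cleanup candidates
    ls $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/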
↧
Can I set an alert that turns my dashboard red when triggered?
I would like to trigger an alert and show the dashboard status as RED when the duration > 0.0205035.
These are the steps I have in mind:
1. Create a single-value dashboard for the full GC count of the service
2. Based on the duration condition specified above, the single-value panel has to show RED (see the sketch after the sample data)
3. From the single-value panel, navigate to the trend chart
**Sample data:**
    28820.220: [Full GC (System.gc()) 8832K->8624K(37888K), 0.0261704 secs]
    29372.500: [GC (Allocation Failure) 23984K->8816K(37888K), 0.0013546 secs]
    29932.500: [GC (Allocation Failure) 24176K->8808K(37888K), 0.0017082 secs]
    30492.500: [GC (Allocation Failure) 24168K->8960K(37888K), 0.0017122 secs]
    31047.500: [GC (Allocation Failure) 24320K->8944K(37888K), 0.0020634 secs]
    31602.500: [GC (Allocation Failure) 24304K->8992K(37888K), 0.0017542 secs]
    32157.500: [GC (Allocation Failure) 24352K->8968K(37888K), 0.0018971 secs]
    32420.247: [Full GC (System.gc()) 16160K->8944K(37888K), 0.0012816 secs]
    32420.248: [Full GC (System.gc()) 8944K->8624K(37888K), 0.0205035 secs]
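A minimal sketch of one way to wire this up, assuming the duration can be rex'd out of the raw GC lines; the index name is a placeholder and the threshold is the one from the question:

    index=gc_logs "Full GC"
    | rex "(?<duration>[\d.]+) secs\]"
    | stats max(duration) AS duration

Then, on the single-value panel, the color ranges can be set in the panel XML (green up to the threshold, red above):

    <option name="rangeValues">[0.0205034]</option>
    <option name="rangeColors">["0x65a637","0xd93f3c"]</option>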
↧
Is there a difference between guided and manual mode? Is there a difference between real-time and continuous?
Guided and Manual Mode?
Real Time and Continuous?
Is one more efficient than the other?
Thank you.
Frank
↧
Can I edit the server.conf to add indexers to license pools via CLI?
Hi,
I need to add some indexers to an existing license pool via the CLI. The docs don't really give clear examples of how to do this... has anyone tried it? Can I just edit server.conf on the license manager directly?
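On the direct-edit question, a sketch of what a pool stanza looks like in server.conf on the license master; the pool name and GUIDs are illustrative, and a restart is needed for a direct file edit to take effect:

    # server.conf on the license master
    [lmpool:my_pool]
    description = indexer pool
    quota = MAX
    # an asterisk means any slave may draw from the pool;
    # otherwise list member GUIDs comma-separated
    slaves = <indexer1-guid>,<indexer2-guid>
    stack_id = enterprise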
↧
How can I receive an alert if standalone Splunk instance is down?
As the question says, I want to know whether there is a way (or ways) to get an alert when a standalone Splunk environment goes down.
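Since a down instance cannot alert on itself, the usual suggestion is an external probe. A minimal sketch, assuming cron, curl, and a mail command are available on another host and the management port is the default 8089:

    # cron on another host: mail if splunkd's management port stops answering
    */5 * * * * curl -sk --max-time 10 https://splunkhost:8089/services/server/info >/dev/null || echo "splunkd not answering on splunkhost" | mail -s "Splunk down" you@example.com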
↧
IIS filter transform not processed when forwarded from a universal forwarder, but works with manual file input?
I've found many entries on the subject of filtering IIS logs, with people saying X has worked. However, I'm not able to get it fully working. If I copy an IIS log that should be filtered to the server and import it manually, it works (as far as I can tell; I only went as far as preview), but if I use a UF on a Windows Server 2003 box (so an older UF version) sending to the Splunk server on Windows 2012 (6.6.3), it doesn't get filtered. Any help here?
Props.conf:

    [iis]
    TRANSFORMS-ignoredpages = iis_ignoredpages

Transforms.conf:

    [iis_ignoredpages]
    #SOURCE_KEY = field:cs_uri_stem
    REGEX = (Page1|Page2)
    DEST_KEY = queue
    FORMAT = nullQueue
Page1 and Page2 are only part of the cs-uri-stem (that's its name in the IIS logs; Splunk seems to turn it into cs_uri_stem); the actual values look like companyname.product.page1/service.asmx or companyname.product/page2.asmx.
I've tried placing the props and transforms files in the system/local directory of both the UF and the Splunk receiver, restarted both, and it continued to process the unwanted pages.
I understand that it looks like the UF itself can't filter these lines, but that it processes them just enough to get past props and transforms on the Splunk machine. **I assume there's a way I can make the Universal Forwarder send the logs RAW so the Splunk box will go "OH, W3C, process normally," but how do I do that?**
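One hedged sketch of that "send it raw" idea: the shipped `[iis]` sourcetype carries structured-data (indexed-extraction) settings that the UF applies itself, so overriding the sourcetype name at the input sidesteps them, and both the w3c parsing and the nullQueue filtering can then live on the indexer. The names and monitor path below are illustrative, and it's worth verifying on a test input that the transform still fires alongside structured parsing:

    # inputs.conf on the UF -- a name with no built-in structured settings
    [monitor://D:\logs\iis]
    sourcetype = iis_raw

    # props.conf on the indexer
    [iis_raw]
    INDEXED_EXTRACTIONS = w3c
    TRANSFORMS-ignoredpages = iis_ignoredpages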
---- Less relevant ----
Filtering out these pages is absolutely critical, as they represent hundreds of thousands of internal calls that would spam the Splunk logs and overwhelm the 500 MB/day limit I need to stay under for the proof of concept.
↧
Why are the transforms on indexer props being broken by the extractions on my forwarder's props?
Whenever I enable this extraction stanza on my universal forwarder, my TRANSFORMS extraction stops working on my indexer:

    [web_app_logs]
    NO_BINARY_CHECK = 1
    INDEXED_EXTRACTIONS = TSV
    PREAMBLE_REGEX = ^#.*
    FIELD_DELIMITER = \t
The indexer props.conf with the TRANSFORMS line that stops working (I added the input-time settings as redundancy during testing):

    [web_app_logs]
    TRANSFORMS-AutoSourceType = AutoSourceType
    NO_BINARY_CHECK = 1
    INDEXED_EXTRACTIONS = TSV
    PREAMBLE_REGEX = ^#.*
    FIELD_DELIMITER = \t
    SHOULD_LINEMERGE = False
    MAX_TIMESTAMP_LOOKAHEAD = 50
    TZ = UTC
    TIME_FORMAT = %s.%6Q
    TRUNCATE = 250000
The forwarder's props extraction stanza should be fine according to [this][1], and it does indeed work, parsing my TSV files correctly. The specific attributes for structured-data field extraction are documented [here][2]. For context, the TRANSFORMS stanza assigns events to new sourcetypes depending on a string found within them.
What am I missing? Why is my forwarder's props.conf interfering with the parts of my indexer's props.conf that come after the input-time settings? Does one override the other? I tried putting my TRANSFORMS into the forwarder's props.conf, but that doesn't work either (as expected, since it's not a heavy forwarder).
[1]: https://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F
[2]: http://docs.splunk.com/Documentation/Splunk/latest/Data/Extractfieldsfromfileswithstructureddata
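For background, the mechanics as I understand them: with `INDEXED_EXTRACTIONS` set on the UF, the forwarder parses those events itself, and data parsed on a forwarder skips the indexer's parsing pipeline, so the indexer-side `TRANSFORMS-AutoSourceType` never gets a chance to run. A sketch of the conventional arrangement, which keeps parsing on one side only; the indexer stanza from the question can then stay as it is:

    # props.conf on the UF -- structured-data settings removed so the
    # indexer parses these events (and can therefore apply TRANSFORMS)
    [web_app_logs]
    NO_BINARY_CHECK = 1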
↧
Correlation search error -- "there was an error saving the correlation search"
Hi
I am trying to change the scheduling on a correlation search to Continuous, and I am being prompted for a "Fields to Group by" value in order to save the search.
I have entered a couple of different field names, but to no avail, as I keep getting the following message: "There was an error saving the correlation search."
Any suggestions?
Thank you
Frank
↧
Is this normal? CPU is at 100% on search head and heavy forwarder with data inputs from Splunk Add-on for AppDynamics.
We are using the Splunk Add-on for AppDynamics to pull individual API KPIs from our shared AppDynamics instance into Splunk.
We have 78 inputs being pulled in. They are running on an interval of 5 minutes and a duration of 5.
The server we are pulling the inputs in on is a VM with 4 processors, and the inputs are pegging it at 100% usage.
All of the data is coming in, but is this usual behavior? Or does something need to be changed so it doesn't max out the server?
↧
Is it possible to copy a glass table to another Splunk instance?
Hi,
We have a glass table that I'd like to move to another Splunk instance. Unlike dashboards, I do not see any "edit source" option for glass tables, and the edit drop-down only allows cloning locally.
Is there any way to find the source for a glass table directly on the server, and can it be deployed on another instance?
OS - CentOS 6.9
Splunk Version - 6.6.2
ES Version - 4.5.2
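A hedged sketch of where to look: if, as I recall, ES glass tables are KV-store-backed rather than XML-backed, one route is exporting the relevant collection over REST and posting it to the other instance. The app and collection names below are assumptions; confirm them in the ES app's collections.conf before trying this:

    # export the collection as JSON (app/collection names are placeholders)
    curl -k -u admin:pass \
      "https://oldhost:8089/servicesNS/nobody/<es_app>/storage/collections/data/<glasstable_collection>" \
      -o glasstables.json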
Thanks,
~ Abhi
↧
Detecting Endpoint Change in a Specific Event
Looking for assistance with creating an email alert for when an endpoint changes in the logs.
We want to avoid multiple emails going out every 15 minutes and only send the alert when the switch happens.
The alert would run every 15 minutes. I think the best way to do this is to come up with a search that only returns the specific event in question: if we find two different endpoints (field values) within the 15-minute window, then we know a switch has occurred.
From here I am looking for assistance on how to write the query to detect which endpoint we started with and which we switched to. I think we can do something like the following to get timestamps for the endpointA and endpointB events, see which one is greater, and then use a conditional to determine the source and destination endpoints.
    ... | eval time_a = case(expression to determine if it's endpointA, _time)
        | eval time_b = case(expression to determine if it's endpointB, _time)
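Extending that idea, a minimal SPL sketch assuming an already-extracted `endpoint` field; it returns a row only when the window saw more than one distinct value, reporting the chronologically first and last:

    your_base_search
    | stats dc(endpoint) AS distinct_endpoints,
            earliest(endpoint) AS switched_from,
            latest(endpoint) AS switched_to
    | where distinct_endpoints > 1

With the alert set to trigger on result count > 0, the email goes out only in the window where the switch actually happened.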
Any help would be greatly appreciated.
↧
Splunk 7.0 and OSX High Sierra APFS
Splunk 7.0 doesn't start on the new macOS High Sierra with the APFS (Encrypted) filesystem. Is APFS not supported?
↧
Why do we see the SSL23_GET_CLIENT_HELLO, unknown protocol error messages?
We see the following messages continuously on our four indexers -
    09-28-2017 09:26:36.888 -0500 ERROR TcpInputProc - Error encountered for connection from src=:50230. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
    09-28-2017 09:26:36.888 -0500 ERROR TcpInputProc - Error encountered for connection from src=:50232. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
    09-28-2017 09:26:36.888 -0500 ERROR TcpInputProc - Error encountered for connection from src=:50234. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol
The data does flow through the SSL connection. What can be the cause of this error?
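For context, `SSL23_GET_CLIENT_HELLO: unknown protocol` generally means a client spoke plain TCP (or HTTP) to a port splunkd expected TLS on. A sketch of the kind of mismatch that produces it, with illustrative port and paths; attribute names vary a little across versions, so check inputs.conf.spec:

    # inputs.conf on the indexer -- this port requires TLS
    [splunktcp-ssl:9997]
    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/server.pem

    # outputs.conf on a misconfigured forwarder -- plain splunktcp
    # aimed at the TLS port triggers exactly this error on the indexer
    [tcpout:indexers]
    server = indexer.example.com:9997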
↧
Why can't an authorized user login via LDAP?
I have successfully configured LDAP to my organization's Active Directory and have several strategies configured; we have a massive disorganized domain, so I need to create multiple strategies to keep the returned results within the search time/size limits.
I have one strategy that works just fine for the OU it points to. However, all other strategies (each pointing to different OUs) fail when users attempt to log in, with the following errors:

    AuthenticationManagerLDAP - Could not find user="somebody01" with strategy="Strategy 1"
    AuthenticationManagerLDAP - Could not find user="somebody01" with strategy="Strategy 2"
    AuthenticationManagerLDAP - Could not find user="somebody01" with strategy="Strategy 3"
    AuthenticationManagerLDAP - Could not find user="somebody01" with strategy="Strategy 4"
    AuthenticationManagerLDAP - Could not find user="somebody01" with strategy="Strategy 5"

The user "somebody01" is discoverable via "Strategy 2" and, in fact, enumerates when I browse to Settings > Access controls > Authentication method > LDAP strategies > (Strategy 2) Map groups > "theRelevantGroup-GG".

I have tested using Domain Local vs. Domain Global groups, rearranged the connection order (no connection errors, so this was a shot in the dark), and adjusted my DN strings (though I am confident these are all correct, i.e. no errors upon strategy save and, as noted above, user enumeration in the web GUI group mapping), and the results are the same. I have searched for days and cannot find a comparable post, but please link one if my Google/DuckDuckGo/Splunk Answers fu was not good enough. Cheers.
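For comparison, a sketch of the authentication.conf shape involved; one commonly overlooked detail is that `userBaseDN` accepts multiple base DNs separated by semicolons, which can reduce the number of strategies needed to stay under the search limits. All DNs below are placeholders:

    # authentication.conf
    [authentication]
    authType = LDAP
    authSettings = strategy1,strategy2

    [strategy2]
    host = dc.example.com
    port = 389
    bindDN = CN=svc-splunk,OU=Service,DC=example,DC=com
    # multiple base DNs may be separated with semicolons
    userBaseDN = OU=TeamA,DC=example,DC=com;OU=TeamB,DC=example,DC=com
    userNameAttribute = sAMAccountName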
↧
Substring lookup to enhance DB query results?
Hello,
I am VERY new to Splunk. I have built some basic dashboards using DB queries, because the data is not (yet) being put directly into the Splunk database. With that said, I would like to enhance my current dashboard with some additional data defined in a CSV file. To be more specific my dashboard contains phone numbers. My CSV file contains the location data of North American Numbering Plan area codes and prefixes (NPA-NXX). I would like to lookup the location of the caller, based on the NPA-NXX, and include that in my dashboard.
Given my limited knowledge/skill set with Splunk, I have a few questions:
1) Is this even possible in Splunk?
2) Does Splunk support data/format manipulation within the search string, such as using RegEx, or can you define a substring to look for?
3) Are there any existing tutorials around these areas that could help guide me to a solution?
Any help would be greatly appreciated!!
**EXAMPLE**
[Query Results]

    Phone Number    Call Count
    +12345678901    12

[CSV Entry]

    NPA-NXX    Location
    234-567    Anytown, USA

**Desired Output**

    Phone Number    Location        Call Count
    +12345678901    Anytown, USA    12
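To the three questions above: yes, this is doable entirely in the search string, with eval string functions plus the lookup command. A minimal sketch, assuming the CSV is uploaded as a lookup file named `npanxx.csv` with columns `npa_nxx` and `location`, and that numbers always arrive in +1NPANXXXXXX form (field names are placeholders):

    your_base_search
    | eval npa_nxx = substr(phone_number, 3, 3)."-".substr(phone_number, 6, 3)
    | lookup npanxx.csv npa_nxx OUTPUT location
    | table phone_number, location, call_count

substr here is 1-indexed, so positions 3-5 pick out the NPA ("234") and positions 6-8 the NXX ("567") of "+12345678901".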
↧