Sample query:
index=foo "string of data"="age needed" earliest=-5d
| stats dedup_splitvals=t values(_time) AS _time by dept, "age_needed"
| sort dept
| fields - _span
| eval conv_age=strptime(Date,"%Y-%m-%dT%H:%M:%S.%Q")
| eval age=((now()-conv_age)/86400)
| eval day=strftime(_time,"%Y-%m-%d")
| stats avg(age) as super_Age by dept, day
| eval super_Age=round(super_Age,2)
| xyseries dept day super_Age
This is a more detailed explanation of what I am looking for:
Okay, so the column headers are the dates in my xyseries. I have a filter in my base search that limits the search to the past 5 days.
xyseries is displaying the 5 days with the earliest day first (on the left) and the current day as the last result on the right.
**Don't want** (with the most recent being last):
Dept     1/26     1/27     1/28     1/29     1/30
dept1    value    value    value    value    value
dept2    value    value    value    value    value
I need the order in which the dates are displayed to show me the most recent day first (i.e., the current day).
**This is what I want** (with the oldest result being last):
Dept     1/30     1/29     1/28     1/27     1/26
dept1    value    value    value    value    value
dept2    value    value    value    value    value
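One hedged way to flip the column order, written against the fields in the query above: xyseries creates its columns in the order it first sees the values, so sorting the rows newest-first just before the pivot should put the current day on the left (sort 0 lifts the default 10,000-row limit so no days are dropped):
| stats avg(age) as super_Age by dept, day
| eval super_Age=round(super_Age,2)
| sort 0 - day
| xyseries dept day super_Age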
↧
How do I get an Xyseries to display dates in descending order?
↧
During the forwarder install, is it possible to set up deploymentclient.conf parameters via the command line?
Hello,
Is it possible to set up deploymentclient.conf parameters via the command line?
I have used the DEPLOYMENT_SERVER parameter during forwarder installation via the command line. It adds the target-broker stanza, but I am looking for a command-line option to set parameters like the ones below:
[deployment-client]
disabled = false
phoneHomeIntervalInSecs = 1800
handshakeRetryIntervalInSecs = 12
Does anybody know how to do it?
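A sketch of one way to do it on a *nix forwarder, assuming paths under /opt/splunkforwarder and an illustrative app name my_deployment_client: `splunk set deploy-poll` is the documented CLI for setting the target URI, and as far as I know the remaining parameters have no dedicated CLI flag, so they can be written from the shell:
# documented CLI for the deployment server target
/opt/splunkforwarder/bin/splunk set deploy-poll ds.example.com:8089 -auth admin:changeme
# no dedicated CLI flag exists for these, so write the stanza directly
mkdir -p /opt/splunkforwarder/etc/apps/my_deployment_client/local
cat >> /opt/splunkforwarder/etc/apps/my_deployment_client/local/deploymentclient.conf <<'EOF'
[deployment-client]
disabled = false
phoneHomeIntervalInSecs = 1800
handshakeRetryIntervalInSecs = 12
EOF
/opt/splunkforwarder/bin/splunk restart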
↧
Can you help us with the install errors we're receiving when trying to upgrade the universal forwarder on Solaris 10?
We are having problems with upgrading Splunk forwarders on Solaris Sparc 10 hosts for vulnerability remediation. We were using 6.3.x and needed to update to a 6.5.x or later Splunk version. The errors are:
# ./splunk start
ld.so.1: splunkd: fatal: relocation error: file /opt/splunkforwarder/bin/splunkd: symbol pthread_condattr_setclock: referenced symbol not found
Killed
There's nothing logged other than the first_install.log, which has the Splunk version and platform information: `PLATFORM=SunOS-sparcv9`
Where should we start to troubleshoot?
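One place to start: the relocation error means the runtime linker can't find pthread_condattr_setclock in any library splunkd loads, so it's worth confirming what the host's libc actually exports (paths are Solaris 10 defaults; on sparcv9 the 64-bit libc is under /lib/64):
# which libraries splunkd links against
ldd /opt/splunkforwarder/bin/splunkd
# does the 64-bit C library export the missing symbol?
nm /lib/64/libc.so.1 | grep pthread_condattr_setclock
If the symbol is absent, the OS likely needs patching before a newer forwarder build can run on that host.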
↧
↧
Install instructions for eventgen missing link for SPL file?
The installation instructions for Eventgen say:
To use Eventgen as a Splunk app, you need an SPL file. This SPL file can be downloaded directly from Splunkbase. But the link seems to be broken; it just points back to the Eventgen download page.
Please clarify; it seems like I am missing something here.
Thanks.
↧
Can you help me with my Amazon Web Services ELB with search head cluster Issues?
I have a Splunk 7.1.2 cluster, using a search head cluster with an AWS load balancer. It works fine. The server.conf says:
[settings]
httpport = 443
enableSplunkWebSSL = true
privKeyPath = /path/to/mycert.key
caCertPath = /path/to/mycert.pem
Now I'm deploying a brand new cluster on version 7.2.3 with the same server.conf, but the load balancer doesn't recognize the instances as healthy. In splunkd.log, for every check from the load balancer (a GET on *https://splunkhostIP/en-US/account/login?return_to=%2Fen-US%2F*), I receive these two messages:
01-30-2019 21:27:18.107 +0000 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client hello C', alert_description='handshake failure'.
01-30-2019 21:27:18.107 +0000 WARN HttpListener - Socket error from 172.16.77.204:3955 while idling: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher
The IP in the message is the load balancer's internal IP, calling the instance for a health check.
The old search head cluster instances don't show me these same warning messages.
The old cluster has the exact same setup except for the Splunk version.
The certificate file is the same for both, and they behave exactly alike: called in the browser by hostname, both work, since the certificate is DigiCert-signed.
Called by IP, both complain about the certificate, but when I accept the "unsafe" warning they behave the same.
I've seen other issues with the same warning messages, but they don't match mine.
I really appreciate any help.
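One hedged thing to try: the "no shared cipher" line means the health check's ClientHello offered nothing the new splunkd web port would accept, and 7.2 may ship stricter TLS defaults than 7.1, so explicitly setting the versions and ciphers for Splunk Web is worth a test. A sketch for web.conf (the cipher list is illustrative; loosen it only as far as your security policy allows):
[settings]
enableSplunkWebSSL = true
sslVersions = tls1.1, tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:AES256-SHA256
Running `openssl s_client -connect splunkhostIP:443` from the load balancer's subnet should reproduce what the health check sees.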
↧
Why is the Indexer ignoring my timezone settings?
Hi,
I've got a problem that's driving me crazy. There is a source we're reading via a universal forwarder that is the output of syslog from a whole bunch of servers. This means that some of the lines represent servers in different timezones, depending on the host. Yeah, I know, not so great, but it's not within our control or influence.
I have been creating [host::<hostname>] stanzas in a props.conf on our indexer cluster master and setting the TZ per host, such as "TZ = America/New_York". If I go to one of the indexers and run
splunk btool props list --debug
I can see the host entries I made.
However, the events are still being indexed as if they are in the local time of the indexer. The sourcetype here is 'syslog', but I know that "host::" should override the sourcetype stanza in props.conf. I hunted around for a matching "source::" stanza that I might not know about, and I can't find one anywhere.
I'm not sure where to go from here, but any help would be appreciated. I hope I'm missing something obvious...
Thanks
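For reference, a minimal sketch of the stanzas described (hostnames are illustrative; note that tz database names use underscores). With universal forwarders, these must live on the indexers, pushed from the master via the configuration bundle, and TZ only affects events indexed after the change:
[host::nyc-app01]
TZ = America/New_York

[host::lon-app01]
TZ = Europe/London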
↧
↧
How can I use a token in a dashboard to show results for a specific time window?
Hello everyone,
I have 3 different dashboards. One of them shows me all the events in 24 hours. Another shows me the same events but only for the last hour, I mean from -1h to now, and the last one shows the same but from -15m to now.
The problem comes when I use the time picker to choose a different day: then all the dashboards show me the same events for the whole 24 hours.
What I am trying to do is set earliest=-1h and latest=now with tokens for each dashboard, relative to the picked day.
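One hedged way to keep a single time picker and still get per-dashboard windows: let every panel search the picked range, then trim to the trailing window in SPL. addinfo exposes the search's own latest time as info_max_time, so the cut-off follows whatever day is picked (the index name is illustrative):
index=my_index
| addinfo
| where _time >= relative_time(info_max_time, "-1h")
| timechart count
For the 15-minute dashboard, swap "-1h" for "-15m"; the 24-hour dashboard can keep the picked range as-is.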
↧
How do you add search results to an existing lookup?
I have a table that has 30 columns and some rows:
table 1
column1 column2 ---------- column30
ww xx -------------------------- aa
The expected table will look like this:
column1 column2 ---------- column30
ww xx -------------------------- aa
---------
-----
-----
etc...
So my question is: how do I add more rows to it without deleting the old lookup contents?
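If the goal is to append the new rows to the lookup rather than replace it, outputlookup's append flag does exactly that (the lookup name mytable.csv is illustrative):
... your search producing column1 through column30
| table column*
| outputlookup append=true mytable.csv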
↧
Can you help me with an Issue with a column chart that happens when scale is set to log?
Hello,
In Splunk 7.1.6, a column chart restricts the Y-axis scale to 1 when using a log scale (linear works fine).
I am not setting a max value of 1 for the Y-axis, but it is still capped at 1 even though the values are greater than 1.
See the attachments for a better understanding. In Splunk 7.0.3 it works as expected.
![alt text][1]
![alt text][2]
[1]: /storage/temp/263784-703-vs-716.png
[2]: /storage/temp/263785-splunk-716-settings.png
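While comparing the versions, the axis can also be pinned explicitly in Simple XML as a possible workaround; a sketch with illustrative bounds:
<option name="charting.chart">column</option>
<option name="charting.axisY.scale">log</option>
<option name="charting.axisY.minimumNumber">1</option>
<option name="charting.axisY.maximumNumber">10000</option>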
↧
How do you customize inputs for Splunk App for Web Analytics?
Due to extensive lack of foresight, I am working in an environment with a Splunk instance that is ingesting Tomcat logs (supporting a Liferay instance) that are not in the standard index/sourcetype (i.e., not access_combined) with non-standard field extractions. Basically the field extractions line up more with IIS than with Apache access logs.
Has anyone successfully managed to implement the Splunk App for Web Analytics in a similar scenario? After digging through the .conf files, I would think it would just require adjusting all of the sourcetypes and field references to use my environment's settings, but in some cases I am not entirely able to tell which are standard fields and which are fields created by the app.
So, has anyone had any success trying this?
↧
↧
Problem with a token eval: I am trying to subtract one hour from field1.latest
What am I doing wrong? I am trying to subtract one hour from field1.latest:
index=int_gcg_mex_accimarket_151486
| stats count by _time $$newtime$$ $$field1.latest$$
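If field1.latest carries an epoch timestamp, a form-level eval token is one hedged way to derive a value one hour earlier (token names follow the question; relative_time is the documented eval function). Note that a time input's .latest token can also be a relative string such as "now", which would need converting first:
<change>
  <eval token="newtime">relative_time($field1.latest$, "-1h")</eval>
</change>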
↧
Data Ingestion to Cluster
Hi All,
I need help with data ingestion to an indexer cluster.
I was trying to ingest data into an indexer cluster built on AWS Linux.
Cluster config: 1 master, 1 search head, 2 indexers, 1 universal forwarder.
I first ingested a single file into the main index, but I am unable to ingest into a newly created index.
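For a clustered setup, the new index has to be defined on the peers before anything can be ingested into it; a minimal sketch of the two pieces usually involved (index name, paths, and monitor target are illustrative). On the master, put indexes.conf under etc/master-apps/_cluster/local and run `splunk apply cluster-bundle`:
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
Then point the forwarder's inputs.conf at it:
[monitor:///var/log/myapp.log]
index = my_new_index
sourcetype = myapp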
↧
Is "What to Search" on the top app page running a real-time search as the logged-in user?
Hello Splunkers,
Does anyone know whether the logged-in user runs some real-time searches just by opening the following screen page?
When I checked the CPU usage on the Splunk server in the monitoring console, the cause was my user, who has the "user" role, running some real-time searches.
And the search kept running as a real-time search for 1 hour.
During that time, my user did not run any ad hoc or scheduled searches.
![alt text][1]
I am wondering whether the "What to Search" panel might be the cause of these real-time searches.
Additionally, does anyone know how to disable them?
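Before disabling anything, the running jobs can be listed to confirm where the real-time searches come from; a short sketch using the REST search command on the search head:
| rest /services/search/jobs splunk_server=local
| search isRealTimeSearch=1
| table sid author title runDuration
If the panel (or the role) turns out to be the culprit, removing the rtsearch capability from the role (Settings > Access controls > Roles) should prevent users with that role from issuing real-time searches at all.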
Any advice or opinions are appreciated.
Regards,
[1]: /storage/temp/264698-default-page.png
↧
Are there any effects if the ownership of savedsearches is "nobody"?
I believe that if the ownership is `nobody`, it runs as `splunk-system-user`, and `splunk-system-user` inherits the `admin` role, so it runs as `admin`.
Of course, if the savedsearches contain knowledge objects (macros, eventtypes, lookup tables, etc.) that have private permissions belonging to another user, they will fail.
But in other cases, is my understanding correct that there is no particular impact?
↧
↧
How to add my csv file with headers
I have to add an input file to Splunk which is in csv format.
Example:
Server,OS,Month,Total_size,avg_size,max_size
prod_host,Linux,January,682.59,309.99,362.87
prod_host,Linux,January,682.59,309.99,362.87
I am trying to add the file through Add Data -> Upload. After selecting my input file, on the "Set Source Type" page, I select the source type as Structured -> csv. On the right-hand side of the page, it shows the headers as field names and the corresponding values under each field name. But after I finish all the steps and start searching with the respective source and sourcetype, my events contain only the values with comma separation:
prod_host,Linux,January,682.59,309.99,362.87,316.96
But what I need is:
Server=prod_host,OS=Linux,Month=January,Total_size=682.59,avg_size=309.99,max_size=362.87
Could anyone please help me with this?
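When an uploaded CSV lands as raw comma-separated events, the structured-extraction settings usually weren't applied where the file is parsed; a minimal props.conf sketch of the kind involved (the sourcetype name is illustrative). Note the fields won't be rewritten into the raw event as Server=..., but they should appear as extracted fields at search time:
[my_csv_report]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,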
↧
Collecting logs from CheckPoint OPSEC LEA into Splunk Enterprise installed on Windows OS
Hi
How can I collect CheckPoint OPSEC LEA logs into Splunk Enterprise installed on a Windows OS?
I ask because this guide (https://docs.splunk.com/Documentation/AddOns/released/OPSEC-LEA/Hardwareandsoftwarerequirements) says the add-on is only supported on Linux OS.
Thank you
↧
Dendrogram chart not showing results
Hi,
I am getting the error "No search set".
My modified XML:
I created the test dashboard in the same custom viz app, but it is still not picking up dendrogram.js and dendrogram.css.
Please help me figure out what I missed.
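"No search set" usually means the viz panel has no <search> element bound to it; a minimal sketch of a panel referencing a packaged custom viz, with illustrative app ID, viz name, and query:
<dashboard>
  <row>
    <panel>
      <viz type="custom_viz_app.dendrogram">
        <search>
          <query>index=_internal | stats count by sourcetype, source</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </viz>
    </panel>
  </row>
</dashboard>
Referenced this way, Splunk serves the app's packaged dendrogram.js and dendrogram.css itself; copying them into the dashboard's app by hand shouldn't be necessary.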
Thanks,
↧