Questions in topic: "splunk-enterprise"

How do I match Licence Hash to Licence GUID?

Hello, I can find my Enterprise licence GUID in https://splunkcommunities.force.com, but when I look in my Splunk instance at Settings -> Licensing -> All licence details, all I can see is a licence hash. Does anyone know how I can relate the two, so that I can finally work out which Splunk instance is using which licence? This is a situation I have inherited from previous dev teams, so I don't have the full history of what was done when, with which licence. We don't have a complex setup: no forwarders or slaves, etc. Any help would be much appreciated.
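One place to look (a sketch, not a verified answer; the exact fields returned can vary by Splunk version) is the licenser REST endpoint, which lists each installed licence with its attributes and may let you line the hash up against the GUID:

    | rest /services/licenser/licenses
    | table title label guid expiration_time quota status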

How can I concatenate tables from the same search at different time periods?

Hi, I am trying to compare the top sales of the latest week to the top sales of the previous week. I am trying to get a table that looks like:

    Product | Rank | Rank previous week
    Table   | 1    | 2
    Chair   | 2    | 1

So far I have:

    index=sales earliest=-2w latest=-w
    | stats count as Sales by Product
    | sort -Sales
    | table Product
    | streamstats count as Rank

I succeeded in getting either the latest rank or the previous rank (see the query above), but I don't see how I can combine them. Do you know how I could do this? Thanks!
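One way to get both ranks in a single search (a sketch, assuming whole weeks snapped with @w and that Sales is the event count per Product) is to tag each event with its week, rank within each week, and pivot with xyseries:

    index=sales earliest=-2w@w latest=@w
    | eval Week=if(_time >= relative_time(now(), "-1w@w"), "Rank", "Rank previous week")
    | stats count as Sales by Week Product
    | sort Week -Sales
    | streamstats count as Rank by Week
    | xyseries Product Week Rank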

Wildcards working for inputlookup but not lookup?

Been targeting the same lookup definition, and my `lookup` just refuses to recognize wildcards in my lookup table. My `inputlookup` works like so and properly accounts for the wildcards:

    search NOT [| inputlookup bad_columns | table SCAN_TYPE TABLE_NAME SINGLE_COLUMN]

My `lookup` is below and just doesn't work:

    foreach Column* [ lookup bad_columns SCAN_TYPE AS SCAN_TYPE TABLE_NAME AS TABLE_NAME SINGLE_COLUMN AS <<FIELD>> OUTPUT SINGLE_COLUMN as match | various other evals... ]

I'm not sure if the `<<FIELD>>` rename is allowed, or if match_type can vary between these two commands. I do not have access to transforms.conf, FYI.
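For reference, wildcard matching for the `lookup` command is controlled by the match_type setting on the lookup definition (in transforms.conf, or under the lookup definition's advanced options in the UI); a minimal sketch, assuming the table lives in bad_columns.csv:

    [bad_columns]
    filename = bad_columns.csv
    match_type = WILDCARD(SCAN_TYPE), WILDCARD(TABLE_NAME), WILDCARD(SINGLE_COLUMN)

Note that `inputlookup` in a subsearch does not use match_type at all: it returns the rows verbatim, and any `*` characters are then interpreted as wildcards by the outer search, which is why that variant appears to honour them.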

Field extract

Hi everyone, I'm looking to have this result: ![alt text][1]

For that I have 2 lines in my file:

- **Question**: Service + IdTransaction
- **Response**: Status + IdTransaction

So far I can extract the different service names and the different codes, but I don't know how to match them with each other and increment the result.

    | rex "(?<Service>CONSULT|FIN_GB|FIN_RESERVE|FIN_VENDEUR|AUTHENTIF)"
    | rex field=_raw "Tlv Dico : (?<new>.{22}.{27})?"
    | rex field=new "2004(?<Status>.{5})?"
    | stats count(TransactionId) by Service, Status

  [1]: /storage/temp/252113-output.png
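Since the Service arrives on the Question line and the Status on the Response line, a common way to pair them (a sketch, assuming both lines carry the same transaction id in a TransactionId field) is to collapse each transaction to one row first, then count the resulting pairs:

    | rex "(?<Service>CONSULT|FIN_GB|FIN_RESERVE|FIN_VENDEUR|AUTHENTIF)"
    | rex field=new "2004(?<Status>.{5})?"
    | stats values(Service) as Service values(Status) as Status by TransactionId
    | stats count by Service, Status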

Renaming fields after transforms.conf regex

We use a transforms.conf regex to extract the field values. However, the field names in the data input are not in a human-readable format. Each value is predictable, though, and we have a reference CSV that would allow us to correlate the data, e.g.:

    uadhshuasdfiuh = Server1
    xcoijcxvboijcxvb = Server2

These fields are created on the fly and there are hundreds of them. My question is: how would I automatically rename these fields to be more usable in the Splunk UI?
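One search-time pattern (a sketch, assuming the results are already tabular, e.g. the output of stats or timechart, and that the reference CSV is loaded as a lookup named field_names with columns raw_name and friendly_name — both names made up here) is to unpivot the fields, rename them through the lookup, and pivot back:

    ... | untable _time field value
    | lookup field_names raw_name AS field OUTPUT friendly_name
    | eval field=coalesce(friendly_name, field)
    | fields - friendly_name
    | xyseries _time field value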

How to configure LDAP authentication using SmartCard?

Hi! So I managed to configure LDAP authentication for the search head, but what if I want to make a user connect through SmartCard? Can I do that?

How can I use the value of the previous record's field as a new field value?

I have a tabled data set like the following (one record per line; the columns are ID, Assessment Name, Workflow Name, Phase Name, Process Name, Step Name, Step Owner, Status, Step Start Date, Projected Start Date, Step Date Completed, Projected Completion Date, Step Due Date, Days Past Due, SLA, Step Order):

    KgHaubhnZWgvTSiWc Electrical Contractor Services - SIG Lite 2018 SIG ASSESSMENT ASSESS SIG REVIEW SIG Finalized Bob Smith Not Started PlaceHolder 2018-02-26 16:11:04.139000 114 5 4
    KgHaubhnZWgvTSiWc Electrical Contractor Services - SIG Lite 2018 SIG ASSESSMENT ASSESS SIG REVIEW Preliminary Findings Call Bob Smith Not Started PlaceHolder 2018-02-19 16:11:04.139000 114 5 3
    KgHaubhnZWgvTSiWc Electrical Contractor Services - SIG Lite 2018 SIG ASSESSMENT ASSESS SIG REVIEW SIG Reviewed by Assessor Bob Smith Completed 2018-02-05 14:54:48.132000 2018-02-05 14:54:48.132000 2018-02-05 14:54:48.132000 2018-02-10 14:54:48.132000 2018-02-12 16:11:04.139000 114 5 2
    KgHaubhnZWgvTSiWc Electrical Contractor Services - SIG Lite 2018 SIG ASSESSMENT ASSESS SIG REVIEW SIG Received from Vendor Bob Smith Completed 1/3/2018 00:00:00.000000 1/3/2018 00:00:00.000000 2018-02-05 14:54:33.923000 2018-02-05 14:54:33.923000 2018-02-05 16:11:04.139000 109 10 1
    KgHaubhnZWgvTSiWc Electrical Contractor Services - SIG Lite 2018 SIG ASSESSMENT ASSESS SIG REVIEW SIG Sent to Vendor Bob Smith Completed 1/3/2018 0:00 1/3/2018 0:00 1/3/2018 0:00 1/3/2018 0:00 2018-01-22 16:11:04.139000 116 3 0

What I am trying to do is: where Status == "Not Started", use the "Step Date Completed" of the previous record as the "Projected Start Date" of the current record; the step order is defined by the field "Step Order". Currently "Projected Start Date" is created by:

    base search ...
    | eval Status=case('Step Start Date' == "" AND 'Step Date Completed' == "", "Not Started",
        'Step Start Date' != "" AND 'Step Date Completed' == "", "Started",
        'Step Start Date' != "" AND 'Step Date Completed' != "", "Completed")
    | eval "Projected Start Date"=if(Status == "Not Started", "PlaceHolder", 'Step Start Date')

I just don't know how to make "PlaceHolder" do the above. Can anyone help? Thanks a ton!
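A common pattern for carrying a value forward from the previous row (a sketch, assuming the results are sorted so that each step's predecessor by Step Order is the row immediately above it) is streamstats with current=f:

    base search ...
    | sort ID, "Step Order"
    | streamstats current=f window=1 last("Step Date Completed") as PrevStepCompleted by ID
    | eval "Projected Start Date"=if(Status == "Not Started", PrevStepCompleted, 'Step Start Date')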

Plotting max system load against actual load

I have the following simple query:

    index=os sourcetype=vmstat tag=dcv-na
    | eval MaxLoad = 28
    | timechart max(loadAvg1mi) as LoadAvg, max(MaxLoad) as MaxLoad by host

This works well enough, but when multiple hosts are involved it gets busy, because the eval produces one plot per host. I'd like just one line across the chart showing the max value for all hosts, similar to how the licensing reports work.
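One approach (a sketch) is to add the constant after the timechart rather than before it, so it becomes a single column in the results instead of one per host:

    index=os sourcetype=vmstat tag=dcv-na
    | timechart max(loadAvg1mi) by host
    | eval MaxLoad = 28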

Splunk_TA_nix cannot open scripts

Hey everyone, I installed Splunk_TA_nix on my Ubuntu 16.04.2 server. After enabling some scripts and not seeing any data being monitored, I checked splunkd.log and I see the following error:

> 07-03-2018 16:13:04.110 +0100 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/cpu.sh" /bin/sh: 0: Can't open

For some reason the UF cannot open the .sh script files. As shown below, splunk is the owner of those files and they have execute permissions:

> -rwxrwxr-x 1 splunk splunk 3447 Jul 3 15:21 bandwidth.sh*
> -rwxrwxr-x 1 splunk splunk 3997 Jul 3 15:21 common.sh*

Does anyone know what is wrong here?

Splunk occasionally skips watching newly created files

Hello all, I have forwarders where some of the log files are rotated hourly and some are created less often (maybe once in two weeks) depending on log flow. I observe that Splunk occasionally skips watching newly created log files and does not index them. I can confirm there is no configuration, permission, or network issue, as this only happens occasionally; most log files are read by Splunk as soon as they are created after log rotation. I only have ignoreOlderThan set to 14 days in inputs.conf, and I can confirm this is not the issue, as even hourly rotated log files are sometimes not read. There is no error or info message about watching the newly created file in splunkd.log. Restarting the forwarder makes Splunk watch these skipped files, although the old data is somehow not indexed. I would like to know the reason behind this behavior. Thanks in advance. Regards, Ankith

Overlaying a previous year's data on a chart when the x-axes do not match

I am displaying some data by month for 2018/2019 (i.e. 01-2018, 02-2018) on a bar chart. Search query:

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2) OR (sourcetype=sourcetype3)
    | chart sum(eval(if(sourcetype="sourcetype1",ICOS,NULL))) as Actuals
        sum(eval(if(sourcetype="sourcetype2",ICOS,NULL))) as Forecast
        sum(eval(if(sourcetype="sourcetype3",ICOS,NULL))) as Budget
        over "Month"

However, I also want to be able to overlay 2017 data over the same period without changing the above x-axis of Month. The 2017 data will come from sourcetype1. Any ideas how I could do that?
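One way (a sketch, assuming the search time range also covers 2017 and that the Month label can be rebuilt from _time) is to remap each 2017 event onto the matching 2018 x-axis label and chart it as its own series:

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2) OR (sourcetype=sourcetype3)
    | eval Year=strftime(_time, "%Y")
    | eval Month=if(Year == "2017", strftime(_time, "%m")."-2018", Month)
    | chart sum(eval(if(sourcetype="sourcetype1" AND Year != "2017",ICOS,null()))) as Actuals
        sum(eval(if(sourcetype="sourcetype2",ICOS,null()))) as Forecast
        sum(eval(if(sourcetype="sourcetype3",ICOS,null()))) as Budget
        sum(eval(if(sourcetype="sourcetype1" AND Year == "2017",ICOS,null()))) as "Actuals 2017"
        over Month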

Overlaying a previous year's data on a chart

I am displaying some data by Month for 2018/2019 (i.e. 01-2018, 02-2018) on a bar chart. Search query:

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2) OR (sourcetype=sourcetype3)
    | chart sum(eval(if(sourcetype="sourcetype1",ICOS,NULL))) as Actuals
        sum(eval(if(sourcetype="sourcetype2",ICOS,NULL))) as Forecast
        sum(eval(if(sourcetype="sourcetype3",ICOS,NULL))) as Budget
        over "Month"

However I also want to be able to overlay 2017 data so that 2017-01 is shown above 2018-01 without adding to the x-axis. Any ideas how I could do that?
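A simple way to stack years on the same x positions (a sketch, assuming the month can be derived from _time) is to chart one series per year over the month number, so 2017-01 and 2018-01 share the same category:

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2) OR (sourcetype=sourcetype3)
    | eval MonthNum=strftime(_time, "%m"), Year=strftime(_time, "%Y")
    | chart sum(ICOS) over MonthNum by Year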

Report for product purchases in time buckets of 0-14 and 14-30 days

I have Splunk data similar to the below, where the product was purchased on different dates: ![alt text][1]

  [1]: /storage/temp/252117-capture.png
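A typical way to report purchases in age buckets (a sketch; it assumes the purchase date is the event's _time and the product name is in a field called product, since the screenshot defines the actual fields) is:

    ... | eval age_days=floor((now() - _time) / 86400)
    | eval age_bucket=case(age_days <= 14, "0-14 days", age_days <= 30, "15-30 days", true(), "30+ days")
    | chart count over product by age_bucket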

Use Heavy Forwarder to send Windows events to syslog server

Hello, I am looking to send Windows logs to a syslog destination via a heavy forwarder, using the following setup:

- Windows events are collected from the end devices by a universal forwarder.
- The universal forwarder sends the events to an intermediate heavy forwarder.
- The intermediate heavy forwarder applies some transformations to the Windows events (most importantly, removing the tab characters) and then sends the flow to a syslog server and to the Splunk indexers.

Now, the problem is that because of the transformations, the events indexed in Splunk no longer have the standard Windows format, and none of the field extraction rules in the Windows TA apply. Here are the transformations performed on the HFs:

**props.conf**

    [WindowsUpdateLog]
    TRANSFORMS-8 = UDPsyslogRouting

    [source::WinEventLog*]
    TRANSFORMS-2 = data_sourcetype_prepend
    TRANSFORMS-3 = windows_hostname_extract
    # Replace CR/LF with tab
    SEDCMD-tabreplace-multiline = s/(?m-s)[\r\n]+/ /g
    SEDCMD-multitab-remove = s/ +/ /g
    TRANSFORMS-8 = UDPsyslogRouting

**transforms.conf**

    ..........truncated
    [UDPsyslogRouting]
    REGEX = .
    FORMAT = syslogGroup
    DEST_KEY = _SYSLOG_ROUTING

What I want to achieve is sending the unaltered feed to the Splunk indexers, while keeping the transformation for the syslog feed only. Any idea is welcome! Thank you!
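One pattern worth exploring (a sketch, not a verified configuration; the sourcetype name win_syslog_clone is made up here) is CLONE_SOURCETYPE: clone each event into a second sourcetype on the HF, and hang the SEDCMDs and the syslog routing off the clone only, so the original event reaches the indexers unaltered:

    # transforms.conf
    [clone_for_syslog]
    REGEX = .
    CLONE_SOURCETYPE = win_syslog_clone

    # props.conf
    [source::WinEventLog*]
    TRANSFORMS-1 = clone_for_syslog

    [win_syslog_clone]
    SEDCMD-tabreplace-multiline = s/(?m-s)[\r\n]+/ /g
    SEDCMD-multitab-remove = s/ +/ /g
    TRANSFORMS-8 = UDPsyslogRouting

You would still need to keep the cloned copy out of the indexed feed (for example by routing the clone only to the syslog output), otherwise every event would be indexed twice.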

Need to understand the difference between an add-on and an app

Hello team, my understanding is that an add-on is used to collect data from the system it is installed on, while an app is used to visualize that data.

For the Unix add-on and app: we install the add-on on each UF we need data from, and the app on the search head for the visualizations. Do we also need to install the add-on or the app on the indexers, whether or not they collect data? Some sources say to install only the add-on everywhere; what is the reason for that?

For the MySQL add-on and app: the add-on is supposed to go on the HF and the app on the search head, yet we end up configuring the add-on on both the HF and the search head, which I cannot understand. Some sources also say to install the MySQL add-on on the indexer and the search head; what is the reason for that?

Help with percentages for calculating a trend

Hello, in the query below I try to calculate a trend between two reports, but I want the following behaviour:

- If the value in one report is the same as in the other, the result currently comes out as 100%, but I want 0% in this case.
- If the value of the first report is greater than the value of the second report, I want to divide value 2 by value 1.
- If the value of the second report is greater than the value of the first report, I want to divide value 1 by value 2.

Could you help me please?

    index="windows-wmi" sourcetype="wmi:DiskRAMLoad" Name="mfetp.exe"
    | head 10
    | stats avg(ReadOperationCount) as mfetp_ReadOperation_AVG, avg(ReadTransferCount) as mfetp_ReadTransfer_AVG, avg(WriteOperationCount) as mfetp_WriteOperation_AVG, avg(WriteTransferCount) as mfetp_WriteTransfer_AVG
    | appendcols [ search index="windows-wmi" sourcetype="wmi:DiskRAMLoad" Name="mfetp.exe"
        | head 10
        | stats avg(ReadOperationCount) as mfetp_ReadOperation_AVG2, avg(ReadTransferCount) as mfetp_ReadTransfer_AVG2, avg(WriteOperationCount) as mfetp_WriteOperation_AVG2, avg(WriteTransferCount) as mfetp_WriteTransfer_AVG2 BY host ]
    | eval percReadOperation_AVG=round((mfetp_ReadOperation_AVG/mfetp_ReadOperation_AVG2)*100,2), percReadTransfer_AVG=round((mfetp_ReadTransfer_AVG/mfetp_ReadTransfer_AVG2)*100,2), percWriteOperation_AVG=round((mfetp_WriteOperation_AVG/mfetp_WriteOperation_AVG2)*100,2), percWriteTransfer_AVG=round((mfetp_WriteTransfer_AVG/mfetp_WriteTransfer_AVG2)*100,2)
    | table percReadOperation_AVG percReadTransfer_AVG percWriteOperation_AVG percWriteTransfer_AVG
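Those three rules map naturally onto a case() expression (a sketch, shown for the ReadOperation pair only; the other three pairs follow the same pattern):

    | eval percReadOperation_AVG=case(
        mfetp_ReadOperation_AVG == mfetp_ReadOperation_AVG2, 0,
        mfetp_ReadOperation_AVG > mfetp_ReadOperation_AVG2, round((mfetp_ReadOperation_AVG2 / mfetp_ReadOperation_AVG) * 100, 2),
        true(), round((mfetp_ReadOperation_AVG / mfetp_ReadOperation_AVG2) * 100, 2))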

Forward search output to Syslog server

Hi all, I wonder if we could forward a search result output to a syslog server? Something like:

    earliest=-1h sourcetype=mysourcetype |

Or else, maybe I could define that whenever the Splunk server receives data of a specific sourcetype, it ALSO forwards it to a syslog server. The issue is that I must keep the data on Splunk, and I can't put a syslog server in front of the Splunk server. Thanks in advance!
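For the second option, index-time syslog routing is configured with an outputs.conf syslog group plus a routing transform (a sketch, assuming a syslog server at 192.0.2.10:514; the stanza names are made up, and this must run on a component that parses the data, e.g. an indexer or heavy forwarder):

    # outputs.conf
    [syslog:my_syslog_group]
    server = 192.0.2.10:514

    # transforms.conf
    [route_to_syslog]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = my_syslog_group

    # props.conf
    [mysourcetype]
    TRANSFORMS-syslog = route_to_syslog

The event is still indexed as usual; the transform only adds the syslog destination.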

How to change stats/chart tabular format output to bar chart?

Hi, I want to plot values on the x-axis with their count on the y-axis as a bar chart. Both

    | stats count by val

and

    | chart count by val

display the data in tabular format:

    val  count
    a    3
    b    5

How do I display it in bar chart format? Thanks.
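In the search UI this is done from the Visualization tab rather than in SPL; in a dashboard panel, the equivalent is the charting option (a minimal Simple XML sketch):

    <chart>
      <search>
        <query>... | stats count by val</query>
      </search>
      <option name="charting.chart">bar</option>
    </chart>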

Splunk App for VMware 3.4.2

Hi, I have upgraded my environment to 7.1.1 and also renewed my licenses. Everything works fine except the Splunk App for VMware: I always receive a licensing error. When I look at my Splunk licenses, they all show as valid. In the app I see the following message:

*License problems detected. See details below. Last updated Fri Jun 22 2018 at 00:05:09 GMT-0400 (EDT)*

I have tried to refresh, but the last-updated date is always Fri Jun 22. Has anyone already faced this problem? Thanks

How do you create a dashboard that runs only once daily, so that opening it does not re-run the searches?

The dashboard has around thirty panels, and each query has at least three changing parameters, so it is hard to use post-processing to optimize the dashboard. Is there any way to have the dashboard run once a day, like a report, to improve performance?
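One standard approach (a sketch; the report name is hypothetical) is to save each panel's query as a report scheduled to run once daily, and have the panel reference that report, so opening the dashboard loads the most recent scheduled result instead of dispatching a new search:

    <dashboard>
      <row>
        <panel>
          <chart>
            <!-- loads the last result of a report scheduled daily -->
            <search ref="My Daily Panel Report"></search>
          </chart>
        </panel>
      </row>
    </dashboard>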