Channel: Questions in topic: "splunk-enterprise"

Cluster Indexers

Hi, I am quite new to Splunk, so pardon my question. If I have two indexers set up in a cluster, is it possible for me to shut down one of the indexers (to save cost, since it runs on AWS) and still have the latest data replicated to the shut-down indexer from the active one? Then, in the event the primary active indexer goes down, I could bring up the other indexer to take over its role. Is this scenario possible, or do I definitely have to keep both indexer instances online to get the latest ingested data? Thanks
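For context, the replication this question depends on is configured on the cluster master; a minimal sketch (the factor values here are illustrative, not a recommendation):

```
# server.conf on the cluster master -- illustrative values
[clustering]
mode = master
replication_factor = 2
search_factor = 2
```

Note that replication happens between running peers, so a stopped peer does not receive new bucket copies while it is down.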

How to get stats from different events?

How do I get, for events with different event names but the same reference ID, the start time from one event, the end time from another event, and the average total time over a span of time? eventName 505 (startTime) → eventName 507 with PROCESSED status (endTime), then the total average time. The two searches are:

index=caudit eventName=505 | search "EventStreamData.args.verificationId"="8387be8f" "EventStreamData.requestContext.eventStartTime"=*

index=caudit eventName=507 | search "EventStreamData.args.verificationId"="8387be8f" "EventStreamData.response.verificationStatus"=PROCESSED "EventStreamData.requestContext.eventEndTime"=*

The result should look like: start time 12:00:00, end time 12:00:30, average time .000000xxx
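A common way to pair such start/end events is a single search over both event names with stats (a sketch; the index, event names, and field paths are taken from the question, and the exact quoting of the dotted field names may need adjustment):

```
index=caudit (eventName=505 OR (eventName=507 "EventStreamData.response.verificationStatus"=PROCESSED))
| stats min(_time) as startTime max(_time) as endTime by EventStreamData.args.verificationId
| eval duration = endTime - startTime
| eventstats avg(duration) as avg_total_time
| eval startTime = strftime(startTime, "%H:%M:%S"), endTime = strftime(endTime, "%H:%M:%S")
```

This computes one duration per verificationId and then the average across all of them in the chosen time range.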

Error : The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.

I am getting an error from a Heavy Forwarder. The error is: "The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
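To see which queue is actually filling up, the forwarder's own metrics are usually the first stop (a sketch; replace the host value with the heavy forwarder's name):

```
index=_internal source=*metrics.log group=queue host=<your_hf>
| timechart span=1m max(current_size) by name
```

Adding blocked=true to the search narrows it to the intervals where a queue was actually full, which helps tell a downstream indexing bottleneck from a local parsing one.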

Formatting the Y-axis to a custom time format

Hi, I want to plot a column chart of time vs. day: day on the X-axis and time on the Y-axis. I am using the following, but it is not working:

| inputlookup time.csv | eval DT=strftime(strptime(Time,"%H:%M:%S"), "%I:%M:%S %p") | chart values(DT) over Day by Time

The column visualization shows no data because the value is not in a numeric format, which the Y-axis requires. Is there any way to plot the Y-axis in our format while Splunk internally treats it as a numeric value (e.g., number of seconds)? Any help is appreciated.
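One workaround, sketched under the assumption that Time holds values like "07:20:54", is to chart seconds since midnight, which is numeric:

```
| inputlookup time.csv
| eval secs = strptime(Time, "%H:%M:%S") - strptime("00:00:00", "%H:%M:%S")
| chart values(secs) over Day
```

The Y-axis ticks will then display as plain seconds; the human-readable %I:%M:%S %p form usually has to go into a tooltip field or a companion table column rather than the axis labels.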

How to apply curl to each search result without the map command?

Hi, I am using the curl command from the Web Tools Add-on. How can I run a curl query against each search result without using the map command? With index=test, the result field "NUM" has the values 1, 2, 3, and I want the equivalent of:

| curl http://1.1.1.1/q?q=1
| curl http://1.1.1.1/q?q=2
| curl http://1.1.1.1/q?q=3

I tried a macro for this case, but it was not applied per result. macro(1): curl http://1.1.1.1/q?q=$NUM$ with the query index=test | `macro(NUM)` gives:

| curl http://1.1.1.1/q?q=NUM
| curl http://1.1.1.1/q?q=NUM
| curl http://1.1.1.1/q?q=NUM

How do I change a search head cluster to a single search head instance?

I have a single-site cluster with the following architecture: search head cluster: 4 search heads + 1 deployer; indexer cluster: 5 peer nodes + 1 master node. https://docs.splunk.com/Documentation/Splunk/7.2.3/DistSearch/Removeaclustermember Do I just need to remove 3 cluster members and 2 peer nodes? I don't want to use search head clustering anymore; I want to change the architecture to a single search head: search head: 1 (no deployer); indexer cluster: 3 peer nodes + 1 master node. What should I do? In addition, can I convert it into 1 search head + 1 indexer and make sure that no data is lost?
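The removal doc linked above boils down to one CLI step per member (a sketch; the URI is a placeholder):

```
# Run on the search head cluster member being removed:
splunk remove shcluster-member

# Or from any other member, pointing at the one to remove:
splunk remove shcluster-member -mgmt_uri https://sh4.example.com:8089
```

The remaining member still carries the SHC configuration afterwards, so converting it to a true standalone search head involves further cleanup described in that doc set.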

Splunk Enterprise to Splunk Cloud

Hi Team, we recently purchased Splunk Cloud for our organisation. Currently our whole setup is in our on-prem environment (Splunk Enterprise), and we want to migrate those instances from Splunk Enterprise to Splunk Cloud. Universal forwarders are already installed on all the client machines and currently report to the on-prem environment. What would be the recommended process to migrate those server logs into Splunk Cloud? We would also like to know how to migrate all apps, dashboards, event types, field extractions, and so on.
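On the forwarder side, Splunk Cloud normally supplies a preconfigured forwarder credentials app; its effect on outputs.conf looks roughly like this (a sketch; the hostname is a placeholder, and the real app also carries the required certificates):

```
# outputs.conf -- illustrative only; use the app provided by Splunk Cloud
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.<stack>.splunkcloud.com:9997
```

Repointing the existing universal forwarders is then a deployment of that app rather than a reinstall.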

XML tag extraction

Hi All, in Splunk Enterprise, is there any way to extract the XML tags themselves rather than the XML field values? For example, in the XML below I wanted to extract the highlighted part (the sample XML and its highlighting did not survive posting). Could anyone please help me with this? Thanks, Nerellu
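If the goal is the tag names themselves, a multivalue rex over the raw event is one option (a sketch; it assumes the tags appear literally in _raw):

```
... | rex field=_raw max_match=0 "<(?<tag_name>[A-Za-z][\w:-]*)[\s>]"
| mvexpand tag_name
| stats count by tag_name
```

The leading [A-Za-z] keeps closing tags (which start with </) out of the match.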

How to manage existing alerts?

Q1: How do I manage existing alerts? Q2: If I want to create an alert for Windows login failures, how do I do it? (The search criteria are difficult to work out.)
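For Q2, Windows logon failures are Security event 4625, so a starting-point alert search could look like this (a sketch; the index and sourcetype names depend on how your Windows data is onboarded):

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4625
| stats count by Account_Name, host
```

Saved as an alert, this can trigger when the count exceeds a threshold. For Q1, existing alerts are listed and edited under Settings > Searches, reports, and alerts.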

Field Extraction using REGEX

Hi All, I am getting events in the format below:

28/01/2019 07:20:54.000
USERNAME FROM LATEST
Test1 10.0.0.1 Jan 25 15:42:07 2018
admin 10.0.1.31 Jan 15 14:11:26 2019
osadmin 10.0.10.12 Jan 23 16:38:12 2019
awa 10.13.5.21 Oct 1 14:15:16 2018

I am trying to extract USERNAME, FROM, and LATEST as fields using the field extraction method. I tried a regex for the username like ^(?P\w+\s+) (the capture group name was stripped when posting), but when I run the field extraction it only gives me the result "USERNAME". Please help me extract USERNAME, FROM, and LATEST from the event via field extraction. Thanks, Rohit
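A pattern along these lines matches each data row while skipping the timestamp and header lines. The sketch below verifies it outside Splunk with Python, assuming the rows are separate lines within one event; in Splunk the same pattern would go into rex with max_match=0, and the exact escaping may need adjustment:

```python
import re

# One multi-line event, reconstructed from the question
event = """28/01/2019 07:20:54.000
USERNAME FROM LATEST
Test1 10.0.0.1 Jan 25 15:42:07 2018
admin 10.0.1.31 Jan 15 14:11:26 2019
osadmin 10.0.10.12 Jan 23 16:38:12 2019
awa 10.13.5.21 Oct 1 14:15:16 2018"""

# Each data row: a username, an IPv4 address, then a month/day/time/year stamp
pattern = re.compile(
    r"^(?P<USERNAME>\w+)\s+"
    r"(?P<FROM>\d{1,3}(?:\.\d{1,3}){3})\s+"
    r"(?P<LATEST>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4})$",
    re.MULTILINE,
)

# The header line fails the IP requirement, the timestamp line fails the
# overall shape, so only the four data rows match
rows = [m.groupdict() for m in pattern.finditer(event)]
for row in rows:
    print(row["USERNAME"], row["FROM"], row["LATEST"])
```

Requiring an IP in the second group is what keeps the literal header "USERNAME FROM LATEST" from matching, which is the symptom described in the question.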

Adding different Filters to each panel in a dashboard

Hi, I have a dashboard with panels built from different data sources. When I click a panel, a detailed pop-up appears, and its query (a saved search) is sent from a JavaScript file. I want to use different filters for each pop-up panel, so the user selects from the respective dropdowns and the results update accordingly. If I add the filters (tokens) in the XML, they appear on all pop-ups, which makes them irrelevant. I would really appreciate any lead on this. Thank you.

Need to identify all technical accounts that are not automatically locked after 5 consecutive failed login attempts

I need to identify all technical accounts that are not automatically locked after 5 consecutive failed login attempts. Please help with the query. Thanks, Sahil
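A rough starting point, sketched under the assumption that this is Windows Security data (4625 = failed logon, 4740 = account lockout), and with the caveat that it counts total rather than strictly consecutive failures:

```
index=wineventlog sourcetype=WinEventLog:Security (EventCode=4625 OR EventCode=4740)
| stats count(eval(EventCode=4625)) as failures, count(eval(EventCode=4740)) as lockouts by Account_Name
| where failures >= 5 AND lockouts = 0
```

Restricting Account_Name to the naming convention of the technical accounts would narrow this further.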

Install Splunk forwarder on VMware vCenter Photon OS?

Hi Splunkers, when I install the Splunk forwarder on VMware vCenter's Photon OS and then run /opt/splunk/bin/splunk enable boot-start -user splunk, the following error appears: service splunk does not support chkconfig. Please, we need your support.
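Photon OS is systemd-based and has no chkconfig, which is what that message is about. On Splunk versions that support it (7.2.2 and later), boot-start can target systemd instead (a sketch; the user name is taken from the question, and the resulting unit name depends on the product, e.g. Splunkd for Enterprise or SplunkForwarder for the universal forwarder):

```
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
systemctl start Splunkd
```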

Monitoring the same logs with two different sourcetypes

Hello, we are monitoring GC logs, and the logs can be in two different formats (conventional GC and G1). The requirement is that if a log is in conventional GC format it goes to a GC sourcetype, and if it is G1, to a G1 sourcetype. One approach is to ingest these logs twice by setting up two different forwarders, but we are looking for something better. GC logs are complex, so redirecting them by identifying the type (using props and transforms) would be difficult. Thanks
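For reference, the props/transforms pattern for rewriting the sourcetype of matching events looks like this (a sketch; the regex is a placeholder for a real G1 signature such as "G1 Evacuation Pause", and the sourcetype names are assumptions):

```
# props.conf
[gc_logs]
TRANSFORMS-set_g1 = set_g1_sourcetype

# transforms.conf
[set_g1_sourcetype]
REGEX = G1 Evacuation Pause
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::gc_g1
```

The difficulty the question raises then reduces to finding one reliable line-level signature per format rather than parsing the whole log.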

Cluster map works fine in Verbose mode but not in Fast mode

Query: index="test" | table FIELD1,FIELD2,Latitude,Longitude,Timestamp | geostats latfield=Latitude longfield=Longitude count by FIELD1

Result in Verbose mode: the cluster map renders as expected (screenshot: /storage/temp/263760-verbose.jpg). Result in Fast mode: "No result found".

geostats binspanlat/long restrictions

Hello, I have a customer with a geostats query that fails due to the parameters he uses. I am not yet sure exactly what he wants to achieve, but there seems to be a limitation in Splunk. It boils down to:

| makeresults | eval Lat=40.27859 | eval Lon=10.26304 | geostats latfield=Lat longfield=Lon binspanlat=0.0002 binspanlong=0.0002

This results in the following error and no result: "Quad tree Exception: Invalid Quad tree. The search job has failed due to an error. You may be able to view the job in the Job Inspector." What are the minimum working values for the binspan parameters, or is this an issue with the Splunk version we use (6.5.2)? Thanks, Kai.

About the Universal Forwarder

Thank you for your help. Please tell me about the Universal Forwarder. Currently, I have the Universal Forwarder installed on the server whose logs I want to send, and Splunk Enterprise installed on the server where I want to manage the logs. This combination previously sent logs successfully, and I could confirm them in Splunk Enterprise. I stopped forwarding once, and when I tried to send logs again, it no longer works. In the Universal Forwarder installer, I selected Local System; to send the logs accumulating in a specific directory, I specified the path to that directory as the path to monitor; I left the Deployment Server field blank; and for the Receiving Indexer I entered the IP address of the Splunk Enterprise server and port 9997, then completed the installation. On Splunk Enterprise, I configured data receiving to listen on 9997. I have no record of it, but I believe I used the same settings when it worked before. Looking at the Universal Forwarder's logs, the connection appears to be timing out. The difference from before is that the license used by Splunk Enterprise has changed: previously I used the 60-day Splunk Enterprise trial license, but since the trial period has ended, I am now using the Free license. Can the Universal Forwarder not be used with the Free version?

Windows performance data collection using custom app

Hello all, I am new to Splunk and am trying to collect Windows performance data using a custom app rather than the Windows app. I have created an inputs.conf file with the following:

## CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
instances = *
interval = 10
mode = single
object = Processor
useEnglishOnly = true
index = cust1_infra_windows

This is the data present in the default inputs.conf, but instead of collecting to the perfmon index, I want to collect to a custom index. I deployed the app to the universal forwarder but do not see any data in the index (most probably I am missing some configuration that the Windows app provides). Any suggestions? Thanks in advance, Sapan

Need help with timechart drilldown to a dependent dashboard

Hi Guys, I have built a dashboard panel with a timechart command. The search is as follows (the regex capture group names were stripped when posting):

index=XXX source=XXX | rex "info\s:\s\+{4}\s(?\w+)\s\+{4}\sJob run_ingest_(?\w+)-" | where Datafeed_name!="" | dedup Datafeed_name feed_status | eval Datafeed_name = substr(Datafeed_name, 1, len(Datafeed_name)-5) | rex field=Datafeed_name "^(?\w{2,5})_(?\w+)$$" | timechart count(data_feed_name) as datafeed_count by feed_status

Search result:

_time        COMPLETED  FAILED  STARTED
2019-01-21   4          5       9
2019-01-22   0          4       0
2019-01-23   3          9       12
2019-01-24   0          0       0

Now I need your help with drilling down from the dashboard to the list of jobs that failed/completed/started when the user clicks any particular value. Could you please help me with this?
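For reference, a Simple XML drilldown on a timechart can capture the clicked series and time into tokens that a detail panel or target dashboard consumes ($click.name2$ and $click.value$ are the standard chart click tokens; the sel_* token names are assumptions):

```
<chart>
  <search><query>...the timechart search above...</query></search>
  <drilldown>
    <!-- $click.name2$ = clicked series (feed_status), $click.value$ = clicked _time -->
    <set token="sel_status">$click.name2$</set>
    <set token="sel_time">$click.value$</set>
  </drilldown>
</chart>
```

A detail panel can then filter its job list with feed_status=$sel_status$ over the clicked time bucket.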

Trying to correlate events in different sourcetypes without a correlation field

Hi, I am trying to correlate two different sourcetypes (haproxy and apache). I would like to find the accesses on haproxy for the errors I have on apache. The two queries I want to correlate:

Query 1, on apache: index=* host=hostB sourcetype=apache_error "interesting error" earliest=@d-3d latest=now

Query 2, on haproxy: index=* host=hostA sourcetype=haproxy "interesting access"

So I am looking for the accesses on haproxy at the moment the interesting error happened on apache. I tried something like this, without success:

index=* host=hostA sourcetype=haproxy "interesting access" | search [search index=* host=hostB sourcetype=apache_error "interesting error" | eval earliest=relative_time(_time, "@m") | eval latest=relative_time(_time, "@m")+1 | return field1 field2 field3 ] | table _time host _raw field1 field2 field3

I cannot find any way to correlate those sourcetypes without a shared correlation field. Could you help me with that?
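One time-window pattern worth noting here (a sketch; it uses a one-minute window, and return emits only the first subsearch row by default, so it suits a single error occurrence):

```
index=* host=hostA sourcetype=haproxy "interesting access"
    [ search index=* host=hostB sourcetype=apache_error "interesting error"
      | eval earliest = relative_time(_time, "@m"), latest = relative_time(_time, "@m") + 60
      | return earliest latest ]
| table _time host _raw
```

Fields literally named earliest and latest returned from a subsearch become time-range terms of the outer search, which gives a purely time-based correlation when no shared field exists.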
