Channel: Questions in topic: "splunk-enterprise"

How do I identify whether a Linux application server is up or down?

How can I identify whether a Linux application server is up or down? I don't have admin access, so I cannot search `index=_internal`.

Why isn't the Node app I wrote using the JavaScript SDK filtering on extracted fields when searching?

Hello, I am using the JS SDK for Splunk and have written a Node app. When I run a search, I get results back, but I would like to remove duplicates by using `dedup` on extracted fields. When I do this through the SDK it does not work, yet the same search string works fine in the GUI and returns unique events. When I use `head` it works, but when I use `dedup` I get no results. Splunk version: 6.5.2. Search string: `search index=aaa filter1 filter2 | dedup extractedField1`
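One possible cause (an assumption, not confirmed by the question): searches submitted through the REST API can run at a lower `adhoc_search_level` than the UI's verbose mode, so search-time extracted fields may not be materialized for `dedup` to match on. A sketch of forcing verbose extraction via the JS SDK, where the params object is passed through to the standard `search/jobs` endpoint (this snippet assumes an authenticated `service` object and a live Splunk instance):

```
// Sketch: ask the search job to run with full field extraction.
// "adhoc_search_level" is a standard search/jobs parameter; whether it
// explains this particular case is an assumption.
service.search(
    "search index=aaa filter1 filter2 | dedup extractedField1",
    { exec_mode: "blocking", adhoc_search_level: "verbose" },
    function (err, job) {
        // fetch and page through job results as usual
    }
);
```

If the extracted field then appears in the results, the missing field extraction was the problem rather than `dedup` itself.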

How do I present run time values only when they are greater than the 30-day average?

Hello - we are looking to present the daily run time values of events in a search, but only display the daily values that are greater than the calculated 30-day run time average. I've tried `eventstats` with a `where` command, but `where` doesn't seem to play nicely with the `values` function. I tried using `first` instead of `values`, but that seems to skew the daily results. Any suggestions? Perhaps a subsearch?

    our_search | eventstats values(duration_minutes) as run_time by firm_name | eventstats avg(duration_minutes) as avg_time by firm_name | where run_time>avg_time | timechart span=1d values(run_time) by firm_name
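The likely snag is that `eventstats values(duration_minutes)` produces a multivalue `run_time`, and `where run_time > avg_time` does not compare multivalue fields elementwise. One restructuring (a sketch, assuming `duration_minutes` is a per-event field and one daily total per firm is what's wanted) is to reduce to a single daily value first, then compare:

```
our_search
| bin _time span=1d
| stats sum(duration_minutes) as run_time by _time, firm_name
| eventstats avg(run_time) as avg_time by firm_name
| where run_time > avg_time
| timechart span=1d values(run_time) by firm_name
```

Because each row now holds a single `run_time`, the `where` comparison behaves as expected.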

How can I join multiple source types with a common field and then search them?

When I try to join three sourcetypes on CommonField, I don't get all the fields to populate in a table. Example:

- sourcetype1: CommonField, Field1, Field2, Field3
- sourcetype2: CommonField, FieldX, FieldY, FieldZ
- sourcetype3: CommonField, FieldA, FieldB, FieldC

Query:

    source=data* | transaction CommonField keepevicted=true | table Field1, FieldX, FieldY, FieldA, FieldC

It does not populate all fields in the table. How can I join the three sourcetypes on CommonField so that, once joined, I can search as if each joined event has all of those fields? Thanks in advance!
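`transaction` merges events but can be memory-limited and evict groups; a `stats`-based merge is often more predictable for this pattern. A sketch, keeping the field names from the example above:

```
source=data*
| stats values(Field1) as Field1, values(FieldX) as FieldX, values(FieldY) as FieldY,
        values(FieldA) as FieldA, values(FieldC) as FieldC by CommonField
```

After the `stats`, each row is keyed by CommonField and carries all the listed fields, so it can be filtered with a normal `search` or `where` as if every event had every field.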


How do I copy the dashboards from the search app to a new distributed search system?

We have created a new Splunk 6.6.3 cluster environment with 3 search heads and 6 indexers. I've been asked to copy the saved searches, dashboards, etc. from the old system to the new one. Unfortunately, it seems all of the dashboards were created under the default search application. How do I move the contents of `\etc\apps\search\local` to the new clustered system?
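One common approach (a sketch with illustrative paths, not the only supported method) is to copy the search app's `local` content into a new app on the SHC deployer, which then pushes it to all cluster members. The temp directories below only simulate the layout; on real systems `$SPLUNK_HOME` would be something like `/opt/splunk`:

```shell
# Stand-ins for the old server's and the deployer's $SPLUNK_HOME
SPLUNK_HOME=$(mktemp -d)
DEPLOYER=$(mktemp -d)

# Simulate an existing dashboard in the default search app
mkdir -p "$SPLUNK_HOME/etc/apps/search/local/data/ui/views"
echo '<dashboard/>' > "$SPLUNK_HOME/etc/apps/search/local/data/ui/views/my_dash.xml"

# Copy the local content into a new app under the deployer's shcluster/apps;
# the deployer distributes everything in etc/shcluster/apps to SHC members
mkdir -p "$DEPLOYER/etc/shcluster/apps/migrated_dashboards"
cp -r "$SPLUNK_HOME/etc/apps/search/local" \
      "$DEPLOYER/etc/shcluster/apps/migrated_dashboards/"

ls "$DEPLOYER/etc/shcluster/apps/migrated_dashboards/local/data/ui/views"
# on the real deployer, the push step would be:
#   splunk apply shcluster-bundle -target https://<member>:8089
```

Note that dashboard permissions and saved-search ownership live in `local.meta`, so copying `metadata/` alongside `local/` may also be needed.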

Splunk App for Infrastructure: Linux box is "inactive" after reboot

I have installed the Splunk App for Infrastructure (ver 1.1.1) and have 3 test Linux boxes working perfectly. However, one Linux box was rebooted, and the app now says that the server is "inactive". I have restarted the splunkd daemon on the rebooted system and it still shows "inactive". Do I have to remove the Linux box from the app, remove the UF and configs from the box, and then add the server back as I did initially?


How can I get AIX 6.1 data into Splunk 6.6.4?

Hello, I'm having trouble getting Splunk forwarders on AIX 6.1 systems to report to Splunk. Facts:

- System: AIX 6.1
- Forwarder: Splunk forwarder 6.5.9 for AIX (splunkforwarder-6.5.9-eb980bc2467e-AIX-powerpc.tgz)
- Splunk environment: 6.6.4

What is the best way to debug this? There is no network issue; telnet works. We are monitoring AIX 7.1 with the 6.6.4 forwarder with no problems. Thanks!

Are there any best practices for upgrading a Splunk server to RHEL 7.5?

We are planning to upgrade the VM servers running our Splunk distributed deployment to RHEL 7.5. Is there any documentation, or are there best practices, covering the steps involved? Thanks!

Why does my search result show only one Search Head, while my Search Head Clustering Member report shows multiple?

When I run the search below, only one search head (SH) shows in the results. But I do know that there are 18 SHs out there, and they do show up on the SH Clustering page with the role of Member. Does the search result mean that only one of the 18 is actually doing any work?

    | rest /services/server/info | search server_roles=shc_member
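One possible explanation (an assumption about this environment): `| rest` is distributed to the local search head and its search peers (the indexers), not to the other SHC members, so only the SH running the search answers `/services/server/info` with the `shc_member` role. It does not mean the other members are idle. To enumerate the cluster members themselves, the SHC endpoint can be queried instead, e.g.:

```
| rest /services/shcluster/member/members
| table label, status
```

This runs against the cluster state rather than per-server info, so all members should be listed regardless of search distribution.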

Disconnected from Splunk web server

I have an Amazon AWS instance on which I have installed Splunk, and in Splunk I have installed the "Splunk Add-on for AWS". But when I try to open this add-on, I get the error below: "Disconnected from splunk server." ![alt text][1] I have attached an image; please check it as well.

  [1]: /storage/temp/255794-capture.png

Splunk stopped indexing

I've tried browsing previous topics but couldn't find anything that worked for my particular situation. I have a very simple test setup: a Universal Forwarder, a Debian 9 machine running the free edition of Splunk Enterprise, and another non-Splunk box. My goal was to simulate log forwarding from the workstation running the Universal Forwarder to the Splunk box and on to my non-Splunk box. I was indexing up until 3 hours ago while troubleshooting why logs weren't being forwarded to my non-Splunk server. Eventually I got the data forwarded successfully to the non-Splunk server, but then I noticed indexing stopped on the Splunk server. No errors.

My Splunk server's outputs.conf:

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = 10.X.1.99:514
    sendCookedData = false
    indexAndForward = true

    [tcpout-server://10.X.1.99:514]

My Splunk server's inputs.conf (listening on 9997):

    [default]
    host = splunk

My Universal Forwarder's outputs.conf:

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = 10.X.1.181:9997
    autoLB = true

My Universal Forwarder's inputs.conf (SOC workstation):

    [default]
    host = SOC-6

Monitored files:

    $SPLUNK_HOME/etc/splunk.version
    /var/log/auth.log
    /var/log/syslog

It's supposed to be a very basic setup. As I said, I'm receiving logs on the non-Splunk box, which was the main goal, but I can't leave it partial with the indexer not indexing. If you need further information, feel free to ask. Thanks
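One thing that stands out in the posted configs (an observation, not a confirmed diagnosis): the Splunk server's inputs.conf shows only a `[default]` stanza, so nothing actually tells splunkd to listen for forwarder traffic on 9997. A minimal sketch of the missing receiving stanza:

```
# inputs.conf on the Splunk Enterprise (indexer) box
[splunktcp://9997]
disabled = 0
```

With `indexAndForward = true` already set in the `[tcpout]` group, data received on this input should then be indexed locally as well as forwarded on to 10.X.1.99:514.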

Has anybody gotten this running on Windows servers?

Yes, I read the documentation, but "not supported" is often different from "doesn't work". Has anybody gotten this working on Windows before I spend too much time on it? Are there any tweaks needed to make it work?


How do I find information that is missing between query 2 and query 1?

I am trying to find the stores that are missing from query 2 in the script below. However, it returns either no results or all results, depending on the search. For the purposes of my search, I know the correct result is one store. Can you please assist me with my evaluations to get what I'm seeking? I've been trying this for days now.

    host=s*0004 Type=Information EventCodeDescription="A new process has been created" New_Process_Name="D:\\PublixPOS\\Bin\\PxPosEdwIF.exe" | dedup host | eval StoreCallEDW=substr(ComputerName,2,4) | search [ search index=mainframe host=MVSB* MFSOURCETYPE=SMF080 *CFT* DEFINE_RESOURCE="SUCCESSFUL_DEFINITION" | spath RESOURCE_NAME | search RESOURCE_NAME="EDWABP.V15.TLOG.DATA.*" | eval StoreonMainframe=substr(RESOURCE_NAME,29,4)] | table nodiff StoreEDWFile StoreonMainframe
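One way to frame "which stores appear in one result set but not the other" (a sketch; the field and index names are taken from the question, and the `substr` offsets are assumed correct) is to normalize both sides to a common `Store` field, tag each side, and keep stores that only one side produced:

```
host=s*0004 Type=Information EventCodeDescription="A new process has been created"
    New_Process_Name="D:\\PublixPOS\\Bin\\PxPosEdwIF.exe"
| eval Store=substr(ComputerName,2,4)
| stats count by Store | eval in_edw=1
| append
    [ search index=mainframe host=MVSB* MFSOURCETYPE=SMF080 *CFT*
        DEFINE_RESOURCE="SUCCESSFUL_DEFINITION"
      | spath RESOURCE_NAME
      | search RESOURCE_NAME="EDWABP.V15.TLOG.DATA.*"
      | eval Store=substr(RESOURCE_NAME,29,4)
      | stats count by Store | eval in_mainframe=1 ]
| stats values(in_edw) as in_edw, values(in_mainframe) as in_mainframe by Store
| where isnull(in_edw) OR isnull(in_mainframe)
```

Unlike a subsearch used as a filter, this keeps both populations visible, so the mismatch direction (missing from EDW vs. missing from the mainframe) is explicit in the output.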

Embedded JSON column in Excel loses data when imported to Splunk

I am new to Splunk. I have an Excel file with a column that contains embedded JSON. When I import the CSV, I lose some of the data.

    {"CreationTime":"2018-05-12C413:09:34", "Id":"Y97H080-09D", "Action":"FileAccessed"}
    {"CreationTime":"2017-03-12C412:10:24", "Id":"D4562T20-09D", "Action":"FileCreated"}
    {"CreationTime":"2018-08-12C405:18:01", "Id":"9302T20-09D", "Action":"FileDeleted"}

For example, the Action column in Splunk has no data after importing the CSV. Can anyone please help me out? Thank you.
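Two things worth checking (assumptions based on the description): the CSV parser may be splitting on the commas inside the JSON unless that column is quoted in the source file, and even an intact JSON string will not become fields on its own. If the raw JSON does arrive intact in a column (say `json_col`, a hypothetical name for the embedded-JSON column), `spath` can extract it at search time:

```
sourcetype=my_csv
| spath input=json_col
| table CreationTime, Id, Action
```

On the ingestion side, `INDEXED_EXTRACTIONS = csv` in props.conf combined with proper quoting of the JSON column in the exported file usually keeps the embedded commas from breaking the column layout.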

How can I compare sum(bytes) in two time periods using a subsearch?

Hi, I'm new to Splunk. I'm trying to compare sum(bytes) for the last hour with the same hour one week earlier, broken out by a certain field, and calculate the percentage change between them. I have tried the following, but the sum(bytes) it gives doesn't match the actual value.

    index=xxx earliest=-60m latest=now | stats sum(bytes) as current by abc | appendcols [search index=xxx earliest=-1h@h-1w latest=@h-1w | stats sum(bytes) as before by abc] | eval diff=current-before | eval percentagediff=round(abs(diff/before)*100,0)

The problem is that the current and before values it returns are far from the actual values at those times. Could you please give me some ideas or suggestions about where this could be going wrong? Thank you.
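Two likely culprits (assumptions from reading the search): the current window `earliest=-60m latest=now` is not snapped to the hour while the week-ago window is, so the two spans don't cover comparable periods; and `appendcols` pairs rows by position, not by `abc`, so any ordering difference between the two result sets misaligns `current` and `before`. A sketch that snaps both windows and groups by the field instead of relying on row order:

```
index=xxx (earliest=-1h@h latest=@h) OR (earliest=-1h@h-1w latest=@h-1w)
| eval period=if(_time >= relative_time(now(), "-1h@h"), "current", "before")
| chart sum(bytes) over abc by period
| eval diff = current - before
| eval percentagediff = round(abs(diff/before)*100, 0)
```

Because `chart ... over abc by period` keys both sums to the same `abc` row, there is no positional alignment to go wrong.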

Is there a Splunk app that supports pulling data from a Citrix XenApp 7 database? If not, is there a workaround?

Is there a Splunk app that supports pulling data from a Citrix XenApp 7 database? If not, is there a workaround?

Is there an easy way to delete namespace data in a clustered environment?

Is it possible to delete the contents of a namespace in a clustered environment from the search pipeline or a settings menu somewhere? Or do they need to be deleted by hand on each indexer?

