Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Simple XML "set token" is removed suddenly. It happens frequently, although I did not change it.

Hi, I have an issue with a Simple XML setting. I use "set token" to store the latest time value as "thistime". Not only "thistime" but also the values of "now_time" and "workerlist" were removed. Does anyone have the same case, or know the cause of it? Below is the XML that keeps changing. Regards,

Before (default):

    testt title($tok_this_time$) Japanese comment
    index="test" sourcetype=test #########Abbreviated search##########
    | stats count values(maxtime) as thistime values(maxtime2) as now_time values(list) as wlist
    | eval workerlist=mvjoin(wlist," OR ")
    $result.thistime$ $result.now_time$ $result.workerlist$
    -3mon@mon now
    #########Abbreviated path##########

After:

    testt title($tok_this_time$) Japanese comment
    index="test" sourcetype=test #########Abbreviated search##########
    | stats count values(maxtime) as thistime values(maxtime2) as now_time values(list) as wlist
    | eval workerlist=mvjoin(wlist," OR ")
    -3mon@mon now
    #########Abbreviated path##########

How to check a JMX server connection on a Linux machine

Hi, I have the Splunk Add-on for Java Management Extensions installed on a Linux machine. I have to monitor a remote JMX server which is also on Linux. For Windows, the suggestion is to check the connection through jconsole; how can I check the connection between these two Linux servers? Could anyone please help me with this?
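Without jconsole, a quick first check from the add-on host is whether the JMX remote port is even reachable over TCP. A minimal Python sketch (the hostname and port below are placeholders for your environment):

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical JMX target -- replace with your server and jmxremote port:
# print(check_port("jmx-server.example.com", 9010))
```

If the connection fails, check firewalls and the `com.sun.management.jmxremote.port` setting on the JMX server before debugging the add-on itself.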

Can a UF be restarted via the REST API?

Can a universal forwarder (UF) be restarted via the REST API? What else can be done to a UF via the REST API?
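The splunkd management endpoint exists on a universal forwarder as well, so a restart can typically be triggered with a POST to /services/server/control/restart on the UF's management port (8089 by default). A sketch that only builds the request (the hostname is a placeholder; send it with an opener that supplies basic-auth credentials and, for self-signed certs, a relaxed SSL context):

```python
import urllib.request

def build_restart_request(host: str, port: int = 8089) -> urllib.request.Request:
    # POST to the management port's restart endpoint; credentials are
    # supplied separately when the request is actually sent.
    url = f"https://{host}:{port}/services/server/control/restart"
    return urllib.request.Request(url, data=b"", method="POST")
```

The curl equivalent would be along the lines of `curl -k -u admin:pass -X POST https://uf-host:8089/services/server/control/restart`.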

Remote Login Issues

Need help, y'all. I have a single instance of Splunk living on a VirtualBox Fedora server. I have a host-to-guest network built, included an interface on the guest for that network, and can communicate with the server from the host (my MacBook) fine. Fedora Cockpit (from a Firefox browser) functions perfectly, and ssh is up as well. The firewall is turned off on the host. My goal is to have Splunk running as a headless VM that I can use for lab work and learning, specifically a Linux environment. I have checked all my .conf files in Linux and have confirmed that it is bound to the right address, remote access is set to 'always', and both splunkd processes are listening on the appropriate sockets, e.g. 192.168.56.103:8000 and 192.168.56.103:8089. As of now I still cannot connect to the web UI using http://192.168.56.103:8000. I am running out of ideas or things to debug. Any thoughts would be appreciated, Lee

How can I combine 2 searches consisting of inputlookup and outputlookup?

How can I combine queries to populate a lookup table? I have a lookup table with the following values:

    item
    1
    2
    3

I'm using the Splunk Web Framework to allow a user to insert an item. If the user enters 3, then the existing item 3 is renumbered to 4 and a new item 3 is inserted. The field input_item represents the value entered by the user. I'm using the query below to first renumber item 3 to 4, and then insert item 3 via an appended search:

    | inputlookup item.csv
    | eval input_item = 3
    | eval itemnumber = if(itemnumber >= input_item, itemnumber + 1, itemnumber)
    | fields - input_item
    | outputlookup item.csv
    | append
        [| inputlookup item.csv
         | stats count as testcount
         | eval input_item = 3
         | eval itemnumber = input_item
         | fields - testcount
         | outputlookup item.csv append=true]

Unfortunately, the new item is created with a value of 4 instead of 3. Is there a way to combine these two queries, or do I need to create 2 separate queries via 2 separate searches in the search manager? Thanks in advance, Peter
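For reference, here is the intended renumber-then-insert logic written out in plain Python over an in-memory copy of the item numbers (a sketch of the semantics, not a Splunk API call):

```python
def insert_item(items, new_item):
    # Renumber every existing item >= new_item up by one, then append
    # the new item itself -- a single pass over one copy of the lookup.
    shifted = [i + 1 if i >= new_item else i for i in items]
    shifted.append(new_item)
    return sorted(shifted)

# insert_item([1, 2, 3], 3) yields [1, 2, 3, 4]
```

In SPL, a likely culprit is that the appended subsearch and the outer search both read and write item.csv, and subsearch execution order relative to the outer outputlookup is not guaranteed, so the two writes can step on each other; doing the shift and the insert in a single pass avoids that race.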

Linux OS OutofMemory Error in Indexer

Hi, we have a clustered indexer setup with 5 indexers on separate ESX servers, each with 12TB HDD and 128GB RAM. The cluster replication factor (RF) is 2 and the search factor (SF) is 1. We have one job scheduler, a search head, and forwarder nodes. Our daily log volume is close to 1TB/day. For the past 10 days we have been facing an OS OutOfMemory error on two of the indexers, and eventually splunkd gets killed. We got the following messages in /var/log/messages on the indexer:

    Dec 20 09:11:20 Indexer04 kernel: Out of memory: Kill process 1411 (splunkd) score 4 or sacrifice child
    Dec 20 09:11:20 Indexer04 kernel: Killed process 1415, UID 500, (splunkd) total-vm:72228kB, anon-rss:1816kB, file-rss:28kB

When I execute the top command on each indexer, the memory usage is about 110 to 120GB, which is 90+% of RAM; only 5 to 10GB is free. Is it normal for the splunk process to use more than 100GB of RAM for this log volume (1TB/day)? Is anybody else using such a huge volume of data with this kind of configuration? Any help would be greatly appreciated. Thanks, Bala

Convert Dashboard button not working

Hi all, I find this weird in Splunk. Before, when I needed to convert an XML dashboard to HTML, I just went through the normal process from here: http://dev.splunk.com/view/SP-CAAAEM2 But now, when I click the Convert Dashboard button, it doesn't do anything. Thanks!

Is there any documentation for importing Splunk ML packages and classes (base.BaseAlgo) into an IDE like PyCharm?

I am trying to learn the Splunk ML Toolkit to write my own algorithm. The Splunk docs say how to register and create an algorithm as an app. I am using the PyCharm IDE to write the algorithm in Python, but the Python interpreter is giving the error "ImportError: cannot import name BaseAlgo". Is there any documentation available which shows how to import the Splunk ML package from "C:\Program Files\Splunk\etc\apps\Splunk_ML_Toolkit\bin" into my Python IDE? Thanks, Praveen
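One common workaround (a sketch, assuming the default MLTK install path from the question) is to put the toolkit's bin directory on sys.path in the IDE's interpreter before importing:

```python
import os
import sys

# Paths from the question -- adjust to your Splunk installation.
SPLUNK_HOME = r"C:\Program Files\Splunk"
MLTK_BIN = os.path.join(SPLUNK_HOME, "etc", "apps", "Splunk_ML_Toolkit", "bin")

if MLTK_BIN not in sys.path:
    sys.path.insert(0, MLTK_BIN)

# After this, `from base import BaseAlgo` can resolve in the interpreter,
# provided the directory exists and its own dependencies are importable.
```

PyCharm also lets you mark that folder as a source root under project settings, which achieves the same thing without code.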

Universal forwarder not sending my Windows event logs

Well! I have configured my Splunk server to accept logs on 9997 from remote hosts, and I have configured my universal forwarder to forward logs to port 9997 on my Splunk server. My output.conf file is:

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = 10.0.71.250:9997

    [tcpout-server://10.0.71.250:9997]

and my input.conf is:

    [default]
    host = splunk1-PC

    [script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
    disabled = 0

    [WinEventLog:Application]
    disable = false

    [WinEventLog:Security]
    disable = false

    [WinEventLog:System]
    disable = false

Running netstat -n on the Splunk server and on the Windows system (universal forwarder), I can see the connection in both directions:

    Local Address        Foreign Address      State
    10.0.70.70:51137     10.0.71.250:9997     ESTABLISHED

Apache logs are coming in from the Windows system (universal forwarder), but Windows events are not. I am unable to find the exact problem. Kindly help!!

How do permissions work when using an add-on?

We have alerts for AWS and they call the Resilient incident creation add-on. This was all working, but some of the alerts were stored in splunk_app_aws and some under saved searches. I took all the AWS alerts and moved them to the savedsearches.conf under splunk_app_aws. Some of the alerts were still set to 'global', and when they fired the Resilient component didn't work anymore. They were changed to 'app' and now it works. I looked to see if there was a setting in the conf file, but there isn't. Is this because 'splunk_app_aws' is an app and 'search' is not? Thanks!

Splunk installation

Hello, we need help configuring the Splunk forwarder in a Linux environment. We have around 70 Linux appliances from which we need to divert the syslog messages to Splunk. I have installed the forwarder agent per the installation document, but logs are not being received at the Splunk end; can you please help us with this matter? I have the details of the Splunk UDP port which would receive the logs. Regards, Ragesh
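For reference, a minimal pair of UF config fragments for shipping local syslog files (hostnames, ports, and paths below are placeholders; note that a UDP syslog input would normally be configured on the indexer, not on the forwarder):

```
# outputs.conf on each forwarder -- point at the indexer's receiving port
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997

# inputs.conf on each forwarder -- monitor the local syslog files
[monitor:///var/log/messages]
sourcetype = syslog
```

After editing, restart the forwarder and confirm the TCP connection to the indexer with netstat; if nothing arrives, check that the indexer actually has a receiving input on 9997 enabled.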

Upgrade to Splunk 7 with this add on?

Is this add-on supported with Splunk 7? The download page only shows Splunk versions 6.6, 6.5, 6.4, and 6.3. We are using this app on 6.6.1 and would like to upgrade. Many thanks

GET data from the Spotify REST API... for beginners

Hi! I'm trying to get song data from Spotify. They offer a REST API that returns JSON. I have everything (a song ID, a token), but I don't get the data in; nothing happens. How do I check where the process is stuck? This is what my REST app looks like (screenshots: /storage/temp/225580-splunk-pic.jpg, /storage/temp/225581-splunk-picii.jpg). I'm sorry for having to ask for this, but is there a possibility of getting some kind of step-by-step guidance? My questions are: Are the entries I typed in enough information? How can I check whether I at least get a response? The Splunk instance is based on a Linux VM. And at the end of it: will it really put the data into my index=spotify with my sourcetype? Thank you!
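One way to isolate where things are stuck is to take Splunk out of the picture and issue the same request yourself. A minimal Python sketch that builds the track request (the endpoint is Spotify's /v1/tracks/{id}; the track ID and token below are placeholders):

```python
import urllib.request

def build_track_request(track_id: str, token: str) -> urllib.request.Request:
    # Spotify Web API track endpoint; the OAuth token goes in the
    # Authorization header as a Bearer credential.
    url = f"https://api.spotify.com/v1/tracks/{track_id}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

# req = build_track_request("your-track-id", "your-access-token")
# print(urllib.request.urlopen(req).read())  # JSON on success
```

If sending this request with your real token returns JSON, the API side works and the problem lies in the Splunk input configuration; if not, the token or song ID is the issue (Spotify access tokens also expire, which is a common silent failure).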

Using eval to create date in epoch time

I need to create a field `today` that is equal to the epoch timestamp in milliseconds for midnight yesterday. I've been successful using eval for this, but Splunk is appending ".000" to the end of the field value, and I can't for the life of me figure out why, or how to remove the .000 so that the value can be passed to a dbxquery formatted in milliseconds. I've tried `rex mode=sed field=today "s/.000//"`, and then attempted to convert the value to a string first before sending it to rex/sed. The .000 persists.

    ...| eval today=(relative_time(now(),"-1d@d")*1000) | top today

search result: `today=1513832400000.000`
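The .000 appears because relative_time() returns a floating-point number of seconds, so the product stays a float. In SPL, wrapping the expression in floor(), e.g. `eval today=floor(relative_time(now(),"-1d@d")*1000)`, should yield an integer (a suggested fix, not from the original thread). The same computation in Python, for comparison:

```python
from datetime import datetime, timedelta

def midnight_yesterday_millis(now: datetime) -> int:
    # Snap to today's midnight, step back one day, then convert the
    # result to epoch milliseconds as an integer (no trailing .000).
    midnight_today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    midnight_yesterday = midnight_today - timedelta(days=1)
    return int(midnight_yesterday.timestamp() * 1000)
```

The key step is the integer conversion at the end; doing the equivalent truncation inside eval keeps the value numeric, which a sed-style string edit on a float-rendered field does not reliably achieve.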

Unable to read large input file from Universal Forwarder

We have a Linux server receiving our syslog traffic, and on that machine a universal forwarder reads all of the syslog files and sends them off to our Splunk indexers. The syslog server has 300+ different devices sending to it, and a few of the files get to be very large. There is a separate file for each device, and it rolls over to a new file at midnight. This is where the issue occurs. The universal forwarder hits this error on some of the files:

    WARN TailReader - Enqueuing a very large file

It says that for each of the large files. Some of the files do seem to get read eventually, but the data is behind at that point, and others are not read at all. What can I do on the universal forwarder to avoid these files being read in batch mode (which is how the ones that do eventually get read are handled) and instead just tail the files as they grow? And how do I ensure that all of the files are getting picked up? Thanks.
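If the files must stay on the tailing code path, one knob worth checking (an assumption that your UF version supports it) is the batch-mode size threshold in limits.conf on the forwarder; files larger than this value are handed to the batch reader instead of being tailed:

```
# limits.conf on the universal forwarder (a sketch; the 100MB value
# below is arbitrary -- size it above your largest per-device file)
[inputproc]
min_batch_size_bytes = 104857600
```

Raising the threshold trades tailing latency for more open file handles, so also confirm the forwarder's ulimit allows tailing all 300+ files at once.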

Make Windows Firewall Rules Dashboard

Hello guys and ladies ^__^ I need some ideas for creating a Windows Firewall rules dashboard. Right now it looks like this (screenshot: /storage/temp/225583-oqivrawuqq-dmowl6bpzoa.png). What should I modify, or what else would be rational to add, to make it better?

Splunk Add-on for Apache Web Server

I have Splunk 7.1 / RHEL 6.5 / a test environment (new to Splunk). I see you have a Splunk Add-on for Apache Web Server, but do you still need a forwarder to forward the Apache logs? Rgds, Dee

tstats results different from eventcount

Can anyone provide an explanation of why these two searches produce different results? I am trying to set up an alert for the case when an index does not have logs in the last couple of hours (the time condition on tstats is removed for the example below):

    | tstats latest(_time) as latest where index=* by index

returns 51 of my indexes, while:

    | eventcount summarize=false index=* | dedup index | fields index

would return all 87 of my indexes. Thanks!

How to visualize the track of a file using a Splunk visualization tool

Hi, I have been working on data that contains tracking information. I want to see all the information for a file name on a single line with timestamps. In simple terms: if we check a FedEx order, it displays the package status, where it has been, where it is now, and its current state. I want to visualize my data the same way as FedEx tracking. Can I do this using a Splunk visualization tool? If so, please advise. My data doesn't have geolocation information, but it has some process keywords like process, send, transfer, and I want to work based on these keywords. Example events:

    12/22/2017 processed
    12/23/2017 send
    12/23/2017 transferred

Desired output:

    processed -----> send ----------> transferred
    12/22/2017       12/23/2017       12/23/2017

NOTE: I am a new learner. Thanks, Chandana

curl not returning all clients

I am using curl to return a list of clients reporting to my deployment server:

    curl -k -u admin:changeme https://deploymentserver:8089/services/deployment/server/clients

I get back a list of 30, but under forwarder management I see 900+. Is there a parameter I should be using?
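Splunk REST endpoints page their results, returning 30 entries by default; passing count=0 asks for all of them. A small Python sketch that builds the unpaged request (the hostname is a placeholder):

```python
import urllib.parse
import urllib.request

def build_clients_request(host: str, port: int = 8089) -> urllib.request.Request:
    # REST endpoints page results (30 by default); count=0 requests all entries.
    query = urllib.parse.urlencode({"count": 0})
    url = f"https://{host}:{port}/services/deployment/server/clients?{query}"
    return urllib.request.Request(url)
```

The curl equivalent is simply appending `?count=0` to the URL in the command above.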

