Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Splunk DB Connect 2: Why am I getting error "[name of my indexer] Script for lookup table 'mylookup' returned error code 1. Results may be incorrect"?

Hey, I am receiving the following error when attempting to run my database lookup using DB Connect 2:

    [name of my indexer] Script for lookup table 'mylookup' returned error code 1. Results may be incorrect

I am running the following search:

    index="index i am searching" | lookup mylookup userOId AS UserOId OUTPUTNEW firstName lastName

The lookup is supposed to correlate JSON events, using the userOid, with the firstName and lastName located in a MSSQL database. Any suggestions on where I should look for more information on this error? I don't see anything in the Splunk logs.

Falkonry Monitors and Predicts the Operating Conditions of Things: Is this app only for IoT, or can it be used for machine computing also?

I see how Falkonry is used to monitor the condition of physical things, like pumps, motors, etc. But can it also be used to monitor the condition of digital computing resources like disk drives, processors, memory etc?

How to parse a time duration of the format "4s", "9.1ms", etc.

The default duration output from Go (golang) is a single float followed by one or two characters identifying the unit, e.g.:

    56.920404ms
    4.61µs
    45.1s

I can't seem to find a built-in way to convert these. How can it be done?
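The conversion logic itself is straightforward, so a minimal sketch may help. The function name and unit table below are illustrative, and this handles single-unit values only, not compound Go durations like `1h2m3s`:

```python
import re

# Multipliers from each Go duration unit to seconds.
_UNITS = {
    "ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
    "s": 1.0, "m": 60.0, "h": 3600.0,
}

def parse_go_duration(text):
    """Convert a single-unit Go duration string, e.g. '56.920404ms', to seconds."""
    match = re.fullmatch(r"([0-9.]+)(ns|us|µs|ms|s|m|h)", text.strip())
    if not match:
        raise ValueError("unrecognized duration: %r" % text)
    value, unit = match.groups()
    return float(value) * _UNITS[unit]
```

In Splunk, something like this could back a scripted or external lookup so the normalized seconds value is available as a field at search time.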

How to calculate the daily change of a field value

Hi Splunkers, I need to calculate the daily value change of a field and report on the daily difference. The field is just an event counter that gets incremented every time the event is triggered. Events come in at no particular frequency. Example data:

    20151231-235955 foo bar 211
    20160101-000304 foo bar 212
    20160101-000402 foo baz 213
    20160101-020543 foo bar 213
    .....
    20160101-235812 foo baz 278
    20160102-000919 foo bar 278

Now I need to create a table (timechart, whatever) of the daily changes of the counter. For Jan 01, 2016 it should read 67 (211 as the last count of the previous day, 278 as the last count of the actual day). I am not sure how to do this. I already tried using `streamstats` and counting the changes of the field, but I don't know how to reset the streamstats value each day. Can someone point me in the right direction? Thanks!
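The underlying arithmetic is "take the last count of each day, then diff consecutive days" (in SPL this roughly maps to `bin _time span=1d`, `stats last(...)`, and `delta`). A Python sketch of that logic, ignoring the bar/baz split for brevity and using illustrative names:

```python
from collections import OrderedDict

def daily_changes(events):
    """events: list of (timestamp 'YYYYMMDD-HHMMSS', count), sorted by time.
    Returns {day: change}, where change is the day's last count minus the
    previous day's last count."""
    last_per_day = OrderedDict()
    for ts, count in events:
        last_per_day[ts[:8]] = count  # later events overwrite earlier ones
    changes = {}
    prev = None
    for day, count in last_per_day.items():
        if prev is not None:
            changes[day] = count - prev
        prev = count
    return changes
```

With the example data above, the change for 20160101 comes out to 67, matching the expected result in the question.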

File Integrity Monitoring: How to search Read records where an individual accessed a document, not a folder?

I am trying to build a file monitoring report that picks up all operations such as Read, Created, Wrote, etc. However, I only want to see Read records where the individual accessed a document; I do not care about Reads accessing a folder. Keep in mind that I also want to see all other operation types. I'm thinking of a search where the Read operation is within parentheses, looking specifically in the directory field for a file extension. Here is my search so far:

    host=10.0.0.3 "D:\\Data\\public\\human" | transaction user, _time | table user, operation, directory, _time
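The core test here is "does the directory field end in a file extension?" A regex sketch of that distinction (the pattern and names are illustrative, and extension-less files would slip through):

```python
import re

# Treat a path as a document if its last component has a short extension,
# e.g. "report.docx"; bare folder paths like "D:\Data\public" do not match.
_HAS_EXTENSION = re.compile(r"\.[A-Za-z0-9]{1,5}$")

def is_document(path):
    return bool(_HAS_EXTENSION.search(path))
```

In SPL the same idea might be expressed with a wildcard clause such as `(operation!="Read" OR directory="*.*")`, though that is a sketch and would need testing against your actual field values.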

REST API: Create Search, Dispatch, Get Status, and Results. How can I run this flow in succession?

Hi all, I am using the Splunk REST API (mainly the search and savedsearch endpoints) to get data out of Splunk. Currently I am trying to do the following:

1. Create a saved search
2. Dispatch that search to get an SID
3. Check the status of the job with the given SID
4. Get the results of the job for that SID

Right now, steps 1, 2, and 4 work fine, and I can run steps 1 and 2 in succession without issues. Step 3 is the problem: I can run it right after 1 and 2, but it seems like I need to poll repeatedly to get the status of the job. Is there a better way to handle this? Step 4 I can run in isolation after I have the SID, but I cannot run 1, 2, 3, 4 in succession. Any suggestions on fixing step 3? I need to check the status and only continue when it is "DONE", but I can't figure out a good way to keep checking.
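Step 3 is inherently a polling loop: the job endpoint (`/services/search/jobs/<sid>`) exposes a `dispatchState` field that moves through states until `DONE` or `FAILED`. A generic sketch of the loop with exponential backoff follows; the HTTP call itself is stubbed out as a `fetch_state` callable you would implement with your own client, so only the control flow here is concrete:

```python
import time

def wait_until_done(fetch_state, timeout=300.0, initial_delay=1.0, max_delay=30.0):
    """Poll fetch_state() until it returns 'DONE', with exponential backoff.

    fetch_state: callable returning the job's current dispatchState, e.g. by
    GETting /services/search/jobs/<sid> and reading the dispatchState field.
    Raises RuntimeError on FAILED, TimeoutError if the deadline passes.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        state = fetch_state()
        if state == "DONE":
            return
        if state == "FAILED":
            raise RuntimeError("search job failed")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off to avoid hammering splunkd
    raise TimeoutError("search job did not finish within the timeout")
```

Once this returns, step 4 (fetching results for the SID) can run safely, so 1 through 4 chain without racing the scheduler.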

How to find the IP address of the AWS(f5) data coming through port 9997 to a heavy forwarder?

Port 9997 is enabled and data is hitting the heavy forwarder. How can I validate the specific data and find the IP address it is coming from?

Can someone clarify how the map command is supposed to work or if I have made a mistake in my search?

Hello, I am currently trying to search across two different sourcetypes using the map command:

    sourcetype=source1 "alert"
    | rename blahblahblah AS Machine
    | where isnotnull(Machine)
    | eval earliest=_time-86400
    | eval latest=_time+86400
    | map search="search sourcetype=source2 Computer=$Machine$ earliest=$earliest$ latest=$latest$" maxsearches=100
    | table Computer status

The idea is that source1 contains certain events revolving around certain computers. I want to cross-reference this with source2 to find the status of each Computer that shows up in the results of the search from source1, around the time the event occurs. However, I am only getting results for the first computer that appears. E.g., instead of getting:

    | Computer | status |
    |----------|--------|
    | ComputA  | 1.0    |
    | ComputB  | 3.0    |
    | ComputC  | 1.0    |

I am just getting:

    | Computer | status |
    |----------|--------|
    | ComputA  | 1.0    |

Is this the way the map command is supposed to work and I just misunderstood, or have I made a mistake somewhere else? Thanks ahead of time!

How can I geo map out email activity from index=msexchange?

Newbie here with Splunk searching and regex... I've been tasked with geo-mapping email activity across the company based on user locations, along with the top communicators. They already have data in Splunk (index=msexchange). If anyone has done this or knows how I can map this data out, that would be great! Additional possibly interesting fields: sender, recipients, original_client_ip, recipient_count. Thanks for any help!

What is F5 data and how do we identify this on a heavy forwarder?

My head is going to blow up. What is F5 data, how do I identify it on a Splunk heavy forwarder, and how do I make sure the heavy forwarder is configured correctly?

Posting to a receiver using REST API giving "insufficient permission to access this resource" error

We are investigating how to create a Splunk log entry over the REST API via JavaScript. I'm posting the following event via the REST API:

    curl -k -u user:password "https://tspl001:8089/services/receivers/simple?source=www&sourcetype=junk&index=angularjs_test" -d "2015-01-23 12:45:03 CST Hello there"

Here is the response:

    insufficient permission to access this resource

I was told that my user has write privileges and that I'm using the correct sourcetype and index values. I cannot find any reference to what the "www" source is.

Can an element of a role in authorize.conf be scoped to an app?

Can an element of a role in authorize.conf be scoped to a particular app? I have a local app where I would like to give "admin_all_objects" to all power users, but restrict that capability to only the one app.

How to search how much bandwidth a forwarder is using?

I'm trying to find how much bandwidth a forwarder is using and how many hosts are sending through the forwarder. I want to show it in a timechart that has each host's total bandwidth and then another line that has the overall total. I'm not sure where to start, since most documents show using the _internal index. Any input will help, thanks!

Unable to login through Java SDK (400 BAD REQUEST)

Hi there, I am working on a Java application for my company that is going to use the Splunk Java SDK to run some scheduled searches and then perform some other operations with the data that it receives back. However, I am having some issues with successfully logging into Splunk through the Java SDK. Every time I try, I get back a 400 error (BAD REQUEST) saying that the request I have made to the server is not valid. Here is the exact error that I get in the console:

    [Fatal Error] :1:3: The markup in the document preceding the root element must be well-formed.
    com.splunk.HttpException: HTTP 400
        at com.splunk.HttpException.create(HttpException.java:59)
        at com.splunk.HttpService.send(HttpService.java:355)
        at com.splunk.Service.send(Service.java:1211)
        at com.splunk.HttpService.post(HttpService.java:212)
        at com.splunk.Service.login(Service.java:1044)
        at com.splunk.Service.login(Service.java:1024)
        at stc.classes.LoginController.connectAndLogIntoSplunk(LoginController.java:24)
        at stc.classes.Main.main(Main.java:11)

I have tried to follow a number of different tutorials, but none seem to help. I have double-checked what I can identify as the common trouble spots:

1. I have verified that I am connecting to the management port and not the web port
2. I have double-checked the Splunk server name on the Server Settings > General Settings page in Splunk Web
3. I have triple-checked my credentials to ensure they are not the cause of the issue

As far as I can tell I am not missing anything important, yet I am obviously still doing something incorrectly. Any help that could be offered would be greatly appreciated.
Here is my main class:

    import com.splunk.Service;

    public class Main {
        public static void main(String[] args) {
            LoginController loginController = new LoginController();
            Service splunkService = loginController.connectAndLogIntoSplunk();
            System.out.println("Session token: " + splunkService.getToken());
        }
    }

Here is my login controller class:

    import java.util.HashMap;
    import java.util.Map;

    import com.splunk.Service;

    public class LoginController {
        public static Service connectAndLogIntoSplunk() {
            // Connection and login arguments
            Map<String, Object> connectionArgs = new HashMap<>();
            connectionArgs.put("host", "hostname.for.the.splunkd.server");
            connectionArgs.put("username", "user.name");
            connectionArgs.put("password", "p@55w02d");
            connectionArgs.put("port", 8089);
            connectionArgs.put("scheme", "https");

            // Attempt connection
            Service ss = new Service(connectionArgs);
            try {
                ss.login(); // THIS IS THE LINE THAT IS CAUSING THE ERROR
            } catch (Exception e) {
                e.printStackTrace();
            }

            // Return the splunk service
            return ss;
        }
    }

How is the Distributed Management Console Physical Memory Usage(%) value calculated?

We are currently running a distributed Splunk 6.2.3 infrastructure with multiple indexers. According to the Distributed Management Console Resource Usage view, each indexer shows "Physical Memory Usage (%)" as being > 96.5%. I ran `free -htlw` on one of the indexers and received the following results:

                total   used   free  shared  buffers  cache  available
    Mem:         188G   2.1G   5.6G     24M     215M   180G       186G
    Low:         188G   183G   5.6G
    High:          0B     0B     0B
    Swap:        4.0G   3.6M   4.0G
    Total:       192G   2.1G   9.6G

Based on this output, I question the accuracy of what the Distributed Management Console is reporting. If someone could clarify exactly how these values differ, it would be greatly appreciated. Thank you.
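One plausible explanation (hedged, since the question is exactly how the DMC computes it) is whether reclaimable buffers/cache count as "used". Working through the numbers from the `free` output above makes the two readings concrete:

```python
# Figures taken from the 'free -htlw' output above, in GiB.
total = 188.0
used = 2.1        # memory actually used by processes
buffers = 0.215   # 215M
cache = 180.0     # page cache, reclaimable by the kernel on demand

free_mem = total - used - buffers - cache   # ~5.7 GiB, matches the 'free' column

# A monitor that counts buffers/cache as "used" reports roughly:
pct_including_cache = (total - free_mem) / total * 100

# Counting only process memory gives a very different picture:
pct_process_only = used / total * 100
```

The first figure lands near 97%, consistent with the > 96.5% the DMC shows, while the second is close to 1%, consistent with the intuition from `free` that the box is mostly idle.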

How do I select different sourcetypes for multiple logs coming from multiple servers using rsyslog.conf (no universal forwarders)?

How do I select different sourcetypes for multiple logs coming from multiple servers (no universal forwarders; the logs arrive via rsyslog)? When I set up the input port, it only lets me choose a single sourcetype.

I set my receiver to also forward data to itself by mistake. How do I remove the forwarder instance without uninstalling the receiver?

I'm running 6.3.2. When I did the initial setup for my receiver, I misunderstood the directions and mistakenly set the receiver to also be a forwarder to itself. So I now have an extra forwarder listed on my Deployment page, and I would like to remove it without uninstalling and reinstalling the receiver. It doesn't appear to be harming anything; it's just an eyesore to look at my deployment and have one server always listed as missing.

Website Input: How far off is support for forms based authentication?

Hi, I'm sure I remember reading that forms-based authentication is in the pipeline. Am I correct, and if so, when should it be ready? Thanks, Richard.

How to write a search to track the time when service assignment changes between multiple hosts?

We have a system where, when a service (a unique service name referenced by service=service_N, where N = 1 to 20) dies, it gets reassigned to another host. To explain further: we have service=service1 running on host=hostname1 initially. After some time, for some reason, service1 dies on hostname1, but a new service comes up on another host with the same name. So after a time T, service=service1 is running on host=hostname2. I am able to get the changing state of the service name from the event logs in Splunk using:

    service=service1 | stats values(host) by service

which gives me:

    service1 | hostname1 hostname2

1. How do I capture the time when the service name assignment changed?
2. What is the best way to graph this data when service=service*?

Thanks
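For question 1, the essential operation is "per service, emit the timestamp whenever the host differs from the previous event's host" (in SPL, `streamstats` with a `by service` clause comparing the current host to the previous one is the usual shape of this). A Python sketch of the logic over hypothetical (time, service, host) tuples:

```python
def host_transitions(events):
    """events: iterable of (time, service, host), sorted by time.
    Returns a list of (time, service, old_host, new_host), one entry per
    reassignment of a service to a different host."""
    current = {}      # service -> host it was last seen on
    transitions = []
    for t, service, host in events:
        prev = current.get(service)
        if prev is not None and prev != host:
            transitions.append((t, service, prev, host))
        current[service] = host
    return transitions
```

For question 2, once each reassignment is an event with a timestamp, a timechart of transitions split by service (or a table of service, time, old host, new host) follows naturally.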

How to schedule a search to run every morning at 6:00AM?

Hi, we have a search that retrieves data for the last 24 hours and sends a CSV to an email distribution list. I am wondering if we can set up a schedule to have this search run every morning at 6:00 AM. I have tried to set up a schedule via Splunk Web, but it is not running in the morning at all. Thank you.
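For reference, a scheduled search is driven by a cron expression, either via the "Run on Cron Schedule" option in Splunk Web or directly in `savedsearches.conf`. A sketch of the relevant stanza (the stanza name, search, and address are placeholders; `cron_schedule` is typically interpreted relative to the Splunk server's time zone, so a search that never fires at 6:00 AM is often a time zone or scheduler-load issue):

```
[my_daily_report]
# Placeholder search; replace with the real 24-hour report.
search = index=main earliest=-24h
# Fire at 06:00 every day.
cron_schedule = 0 6 * * *
enableSched = 1
action.email = 1
action.email.to = dist-list@example.com
action.email.sendcsv = 1
```

If the stanza looks right, the scheduler's view in the `_internal` index can show whether the run was skipped or deferred.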