Channel: Questions in topic: "splunk-enterprise"

Comparison between last week's results and this week's results?

Hi, I have a query as follows:

    index="summary" search_name="ABC" | dedup hostname | table hostname

Now I want to see the hostnames that are in last week's results but not in this week's, and vice versa. What earliest and latest times should be specified in the subsearch and the main search? What would the query look like?
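One way to frame it (a sketch; `@w` snaps to the start of the week, "last week" means the previous full week, and "this week" means week-to-date): tag each hostname with the period it appears in, then keep hostnames seen in only one of the two periods.

    index="summary" search_name="ABC" earliest=-7d@w latest=@w
    | dedup hostname
    | eval period="last_week"
    | append
        [ search index="summary" search_name="ABC" earliest=@w
          | dedup hostname
          | eval period="this_week" ]
    | stats values(period) AS periods BY hostname
    | where mvcount(periods)=1

The `periods` column then tells you which side each leftover hostname came from.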

An App or Add-on installs as root user by default

I have a single-instance Splunk Enterprise 7.1.2 on Linux. I used a non-root user "splunk" and group "splunk" to install Splunk. At install time I made sure to run `chown -R splunk:splunk /opt/splunk` and verified that all files and directories are owned by splunk:splunk. I am noticing that whenever I install a new app or add-on, its owner is root:root by default, and I have to manually run that chown command every time after installing an app or add-on and restarting Splunk. I have looked at this thread: [https://answers.splunk.com/answers/481355/why-are-apps-installing-as-root-user-when-dir-is-n.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev][1]. As per it, is this because we use `sudo $SPLUNK_HOME/bin/splunk restart` to restart Splunk after each app install, which causes Splunk to restart as the root user? What is the alternative? Is anybody else running Splunk on Linux facing the same issue? Thanks, Neeraj [1]: https://answers.splunk.com/answers/481355/why-are-apps-installing-as-root-user-when-dir-is-n.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
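If the restarts are indeed happening as root, a minimal sketch of keeping everything under the non-root account (assuming the /opt/splunk path from the question):

    # Restart as the splunk user instead of as root:
    sudo -u splunk /opt/splunk/bin/splunk restart

    # Optionally register boot-start under that user, so the init
    # script also starts splunkd as splunk:
    sudo /opt/splunk/bin/splunk enable boot-start -user splunk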

If we enable DMC on a search head in a production environment, is there any impact on searches?

If we enable the DMC on a search head in a production environment, is there any impact on searches? I have a lot of historical searches running on it. Is this a best practice?

How to set up forwarder details in inputs.conf?

I have an existing Universal Forwarder set up for our prod Splunk Enterprise instance. Now I am trying to set up a dev Splunk instance, and I would like to receive data from the same forwarder that already feeds the prod instance. I have set the receiving port on my new Splunk instance (let's say host IP 10.99.1.123) to 9997, and added 10.99.1.123:9997 to the tcpout servers list in the universal forwarder's outputs.conf. But I cannot find how to specify the forwarder details in the inputs.conf of my newly created Splunk instance. Please let me know whether the above process is correct and how to set up inputs.conf to receive data from the Universal Forwarder.
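For reference, a sketch of the receiving side: inputs.conf on the receiving instance only opens the listening port; it does not list individual forwarders (forwarders identify themselves through their own outputs.conf, as already configured above).

    # inputs.conf on the dev instance (10.99.1.123)
    [splunktcp://9997]
    disabled = 0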

IO error with Splunk DB Connect

Hi guys, I need help with a Splunk DB Connect error. I configured DB Connect with the following parameters: ![alt text][1] The firewall rule is enabled, telnet to port 1521 works, ping works, and I have configured the drivers and JAVA_HOME correctly (no errors show). But when I try to establish the connection, the following error appears: ![alt text][2] I have no idea what is going on. Could anyone clarify this for me? [1]: /storage/temp/251217-config-splunk-db-connect.png [2]: /storage/temp/251219-db-connect-error.png

How to display weekly data starting on a Monday using timecharts?

I'm plotting some data on a timechart spanning a couple of months, using weeks as the data points. How can I make the chart treat a week as Monday-Sunday instead of Thursday-Wednesday? Thanks, Sam
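One possible approach (a sketch): 7-day buckets are aligned to the Unix epoch by default, and the epoch began on a Thursday, which is where the Thursday-Wednesday weeks come from. The `bin` command accepts an `aligntime` option that can snap the buckets to Monday (`@w1`), after which you chart by the bucketed `_time`:

    ... your base search ...
    | bin _time span=7d aligntime=@w1
    | stats count BY _time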

Configure alert based on other timezones

Hi, I have data coming in with event timestamps in the CST time zone, but I have a requirement to schedule an alert based on London time every day. The difference between London and CST is 5 or 6 hours depending on the time of year, so I cannot use a constant cron schedule obtained by converting London time to CST. Is there any approach to handle this scenario?

Related Fields

I have the following events:

    { "file_name": "java.exe",
      "process_id": "0fb9dcff-c345-4d76-ae53-af46cd34524a",
      "command_line": "something",
      "parent_process_id": "c3df993f-7802-430a-9ef5-e018910aed4b" },
    { "file_name": "other.exe",
      "process_id": "1451fd51-bbce-4c27-999a-ee514e09529f",
      "command_line": "some^thing",
      "parent_process_id": "0fb9dcff-c345-4d76-ae53-af46cd34524a" },
    { "file_name": "cmd.exe",
      "process_id": "23a192cf-5f2d-4f42-a753-595b702a280b",
      "command_line": "some^thing",
      "parent_process_id": "0fb9dcff-c345-4d76-ae53-af46cd34524a" },
    { "file_name": "blah.exe",
      "process_id": "16ffed00-1175-4554-b4a3-0ab45e8d691f",
      "command_line": "",
      "parent_process_id": "39a6cb9d-4dd7-4c44-9ffd-d8ee9561a1a3" }

I'm trying to pull the events without a subsearch, looking for a process with file_name=cmd.exe whose parent process has file_name=java.exe. In the events above, java.exe has two child processes (other.exe and cmd.exe), and there is also a completely unrelated process called blah.exe. I'd like to return just cmd.exe, but only if its parent_process_id matches the process_id of another event with file_name=java.exe.
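A subsearch-free sketch (`index=procs` is a placeholder for wherever these events live): key each event by its own process_id when it is java.exe and by its parent_process_id otherwise, flag every group that contains a java.exe, and keep the flagged cmd.exe events. (Caveat: because java.exe events are keyed by their own id, this won't catch a java.exe that is itself a child of java.exe.)

    index=procs
    | eval link_id=if(file_name="java.exe", process_id, parent_process_id)
    | eventstats values(eval(if(file_name="java.exe", "java", null()))) AS parent_name BY link_id
    | where file_name="cmd.exe" AND parent_name="java"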

How to aggregate events in an index by week and by month

I have tried using the bin command:

    index=test | bin span=1w _time | chart count AS total_count BY _time, action

but this gives me the event count for every 7 days over a 30-day span. Please help me understand how to aggregate the events in the index by week and by month.
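For comparison, a sketch using timechart, which takes both weekly and monthly spans directly (`1w` and `1mon`):

    index=test | timechart span=1w count AS total_count BY action

    index=test | timechart span=1mon count AS total_count BY action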

Autosuggest

I'm creating a dashboard with text inputs. Is there a way to get Splunk to show a dropdown with autosuggestions as the user types in the text box, similar to how it works in the Search and Reporting app?
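As far as I know there is no built-in type-ahead for a plain text input, but a search-populated dropdown filters its choices as the user types. A Simple XML sketch (the token, field, and populating search are placeholders):

    <input type="dropdown" token="host_tok">
      <label>Host</label>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>index=_internal | stats count by host | fields host</query>
        <earliest>-24h</earliest>
        <latest>now</latest>
      </search>
    </input>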

Manipulating | stats or | chart results mathematically

Hey everyone, I've got a search:

    search = *
    | eval _time=_time - (6*60*60)
    | bucket _time span=1d
        # Takes the current time and rolls it back six hours.
        # We operate on a 6am-6am reporting schedule.
    | eval MaterialType = case(match(lotNumber,"regex") OR lotNumber = "WasteLots","Waste",match(field1,"regex"),"Production")
        # Designates each event as a waste event (using the Lot #)
        # or a production event (using the value in field1)
    | where isnotnull(MaterialType)
    | eval time = strftime(_time,"%m/%d/%y")
    | chart sum(netWeightQty) by time, MaterialType
    | eval _time=_time + (6*60*60)

Now this | chart generates the following: ![Big money big money][1] [1]: /storage/temp/252239-capture07182018092200.png __How can I get a value, for each date, of Waste% = 100 * Waste / (Production + Waste)?__ Thanks!
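A sketch of one way to get there: after the chart, each row carries one column per MaterialType value ("Production" and "Waste"), so the percentage is a row-level eval (fillnull guards dates where one of the columns is absent):

    ... | chart sum(netWeightQty) by time, MaterialType
    | fillnull value=0 Production Waste
    | eval "Waste%" = round(100 * Waste / (Production + Waste), 1)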

Math functions return only the 17 most significant digits

Hello everyone, every math operation or function seems to round numbers to the 17 most significant digits. To showcase the problem, I made a test lookup table that looks like this:

    id, long_numbers
    1,  12345678901234567894
    2,  12345678901234567814
    3,  12345678901234567826

If I run this:

    | inputlookup test_long_numbers.csv
    | eval should_be_the_same=(long_numbers*1)

the results in the new column are automatically rounded like this:

    id  long_numbers          should_be_the_same
    1   12345678901234567894  12345678901234567000
    2   12345678901234567814  12345678901234567000
    3   12345678901234567826  12345678901234567000

The min(), max(), and avg() functions used with stats all show the same behaviour. Does someone know why it acts like this, and whether there is a way to prevent it? Thanks, Jérôme-A. Sauvé
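For context, eval arithmetic appears to use 64-bit IEEE 754 doubles, which carry only about 15-17 significant decimal digits, so any arithmetic coerces the lookup's string values and loses the low digits. A sketch of one workaround for fixed-width numeric IDs, which stay exact as long as no arithmetic touches them (`str()` forces a lexicographic sort):

    | inputlookup test_long_numbers.csv
    | sort 0 str(long_numbers)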

Create new field based off of sort order

I've just created a simple search that sorts people's scores (anywhere from 0 to 10000). I want to show that the person with the highest score is ranked 1 (first). In short, I want to create a new field called "rank" that is automatically generated from their scores.
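A sketch (the field name `score` is an assumption): sort descending, then number the rows in order.

    ... | sort 0 - score
    | streamstats count AS rank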

Not able to send alert: exit code 1

![alt text][1] [1]: /storage/temp/251215-2018-07-18-09-58-35-internal-errors-and-messages-s.png I am getting the errors shown in the image. Am I missing anything in the setup? Splunk Enterprise version 6.3.2. Thanks

Splunk web URL not coming up after configuring universal forwarder

I installed Splunk 7.1.1 on a Linux machine and started it with an id/password, and the web UI was coming up. Then I installed the Splunk universal forwarder on another Linux machine to collect logs from it, but now the Splunk web URL does not come up even though Splunk is running. When I stop the Splunk forwarder, I see the error below in splunkd.log:

    ERROR TcpInputProc - Message rejected. Received unexpected message of size=1195725856 bytes from src=10.46.238.52:54385 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

Please suggest.
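A possible hint (an observation, not a confirmed diagnosis): size=1195725856 is 0x47455420, the ASCII bytes "GET ", which usually means an HTTP client such as a browser is connecting to the splunktcp receiving port (e.g. 9997) instead of Splunk Web. It may be worth confirming which port Splunk Web is actually listening on:

    # On the indexer, check the Splunk Web port (typically 8000):
    /opt/splunk/bin/splunk show web-port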

I have enabled DMC in distributed mode; is there a way I can come back to standalone mode?

I have enabled the DMC in distributed mode; is there a way to come back to standalone mode? What I did was select distributed mode and apply the changes, so clicking Overview now shows distributed mode. Later I learned that this is not the best way to run the DMC on a production search head, so I clicked standalone mode again and applied the changes. Is that the right way to change from distributed mode back to standalone mode? I can see in the Overview which mode is currently active. Can someone help me with this? Thanks in advance.

Cannot disable searches or acceleration

Hello, I am only using the Security Suite app for ASA data. I want to disable the other modules to lighten the load on my search scheduler. When I try to disable a search (or turn acceleration off) I get: **Value of argument 'display.visualizations.chartHeight' must be an integer** Any help is appreciated. Thanks
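A possible workaround (a sketch; the stanza name is a placeholder for the actual saved search): disabling the search in configuration sidesteps the UI form validation that raises the chartHeight error.

    # local/savedsearches.conf inside the app
    [Name Of The Scheduled Search]
    disabled = 1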

Removing empty bins in timechart

Hello, I have been unable to eliminate empty buckets with the timechart command since moving to Splunk 7.0. For example, with the query below I see a gap for Tuesday and a continuous line from the Monday value to the Wednesday value. I'd like the chart (in this example) not to show Tuesday at all, and just go from Monday to Wednesday. This used to work in older versions, so is a modification needed to make it work in Splunk 7.0+? Thanks for any assistance.

    index="_audit"
    | timechart cont=false count(date_wday) by date_wday
    | eval date_wday=lower(strftime(_time,"%A"))
    | where (date_wday!="tuesday")
    | fields - date_wday
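For what it's worth, a sketch of an alternative shape: filter the events before the timechart, then drop rows whose series are all empty (`addtotals` sums the series columns into a per-row Total):

    index="_audit" date_wday!="tuesday"
    | timechart cont=false count by date_wday
    | addtotals
    | where Total > 0
    | fields - Total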

How to compose _time at index time from two JSON fields?

I am developing a Python add-on and I am trying to set `_time` at index time from two JSON fields, `lastTstamp` and `lastDate`. As it stands, the extraction produces a different, wrong timestamp. JSON input:

    {
      lastTstamp: 15:32:02Z
      lastDate: 2015-10-23
      id: a4ec1ba0-ab74-11e6-a19f-0a7e67dda05f
      status: new
    }

Event output: `_time: 2015-11-18T05:55:58.000+00:00`

So far I have tried two approaches.

1st approach: using `helper.new_event` + `ew.write_event(event)`:

    utc_dt = datetime.strptime(data_json['lastDate'] + 'T' + data_json['lastTstamp'],
                               '%Y-%m-%dT%H:%M:%SZ')
    event = helper.new_event(time=time.mktime(utc_dt.timetuple()),
                             source=helper.get_input_type(),
                             index=helper.get_output_index(),
                             sourcetype=helper.get_sourcetype(),
                             data=json.dumps(data_json))
    ew.write_event(event)

2nd approach: editing `props.conf` and `transforms.conf`:

    # transforms.conf
    [alert_time]
    REGEX = 'lastDate': u'(\d{4}-\d{2}-\d{2}).*lastTstamp': u'(\d{2}:\d{2}:\d{2})
    FORMAT = $1T$2.000+00:00
    DEST_KEY = _time

    # props.conf
    [json_alert]
    KV_MODE = json
    SHOULD_LINEMERGE = 0
    category = Splunk App Add-on Builder
    pulldown_type = 1
    TRANSFORMS-datetime = alert_time

In some cases a time zone difference would be expected, but as shown above, there is a huge gap between the input and output timestamps.
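One observation about the first approach (a hedged sketch, not a confirmed fix): `time.mktime()` interprets the struct_time in the server's local time zone, while the parsed string is UTC (note the trailing Z), which alone can shift the result by hours. `calendar.timegm()` performs the UTC conversion instead:

    import calendar
    import json
    from datetime import datetime

    # data_json, helper, and ew come from the add-on context above.
    # Parse the two JSON fields as one UTC timestamp.
    utc_dt = datetime.strptime(data_json['lastDate'] + 'T' + data_json['lastTstamp'],
                               '%Y-%m-%dT%H:%M:%SZ')

    # timegm() treats the struct_time as UTC; mktime() treats it as local time.
    epoch = calendar.timegm(utc_dt.timetuple())

    event = helper.new_event(time=epoch,
                             source=helper.get_input_type(),
                             index=helper.get_output_index(),
                             sourcetype=helper.get_sourcetype(),
                             data=json.dumps(data_json))
    ew.write_event(event)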