Channel: Questions in topic: "splunk-enterprise"

SPLUNK Alerts and Alert Manager

I downloaded the "Alert Manager" app and have been able to successfully configure alerts for my searches. Strangely enough, I have come across a weird issue that has left me scratching my head, as I can't determine what is causing it. It comes down to a few alerts I am trying to set up:

* (A) Windows EventID 1102 ==> a security log is cleared
* (B) Windows EventID 4720 ==> a local user account is created
* (C) Windows EventID 4732 ==> a local user account is added to a local group (such as Administrators)

I can run these searches in Splunk (using the standard Windows TAs) and find the events without a problem. My searches are as follows, each over a relative time range of "Last 24 hours":

* For (A) ==> index=wineventlog sourcetype=wineventlog:security EventCode=1102
* For (B) ==> index=wineventlog sourcetype=wineventlog:security EventCode=4720
* For (C) ==> index=wineventlog sourcetype=wineventlog:security EventCode=4732

For (A) I saved the search as an alert and configured it as follows:

* Enabled = Yes
* Permissions = Shared in App
* Alert-Type = real-time
* Trigger Condition = Per-Result
* Action = Alert Manager with a "title", impact=High, Urgency=High, Owner=unassigned

When an event occurs for alert (A) I immediately see the alert in the Alert Manager tool. However, if I configure alerts (B) and (C) with the same parameters, nothing shows up: the events exist in the index, but Alert Manager does not fire. I am not sure why this is occurring. Any insights?
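For reference, the three searches differ only in the EventCode, so the underlying data can be sanity-checked in one query; a minimal sketch, not part of the original alert configuration:

    index=wineventlog sourcetype=wineventlog:security (EventCode=1102 OR EventCode=4720 OR EventCode=4732) earliest=-24h
    | stats count by EventCode

If 4720 and 4732 show up here but never reach Alert Manager, the problem is more likely in the saved alert configuration than in the data itself.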

Searching two indexes to compare and show the difference

index="proxy_logs" category="none" | top category, protocol, url, cs_Referer limit=1000 | eval results = if(match(upper(cs_Referer),upper(url)), "hit", "miss") | where results="miss" | table category, protocol, url, cs_Referer, results Above is working thanks to a couple of posts on here. No I want to compare the "url" field in index1 against another index2 that also has the "url" field and show the output of index1 that does not match index2. First search looks for items that don't match in the first index. I then want to search the search the second index and output only items that do not match the first index.

Discard event after 10 lines

Hi, I have many events of around 500 lines each. Only the first 10 lines are important. How can I truncate, discard, or ignore the remaining lines before indexing? When I use MAX_EVENTS in props.conf, Splunk breaks the event after 10 lines and creates a new event. I tried using BREAK_ONLY_BEFORE and LINE_BREAKER, but nothing seems to work. Please suggest a props.conf entry that indexes only the first 10 lines of each event.
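One approach sometimes suggested for this is SEDCMD in props.conf on the indexer or heavy forwarder, which rewrites _raw before it is written to the index; a rough, untested sketch, with a hypothetical sourcetype name and assuming the (?s) PCRE modifier is honoured:

    [my:500line:sourcetype]
    # keep the first 10 lines of each event, drop everything after them (sketch only)
    SEDCMD-keep_first_10_lines = s/^((?:[^\n]*\n){10})(?s).*$/\1/

Unlike MAX_EVENTS, the intent here is that the trailing lines are simply never indexed rather than split into a second event.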

Unable to collect Azure metrics into Splunk using the Azure Monitor Add-on for Splunk

I configured the input for collecting Azure metrics, but when I try to query for the metrics I'm not getting any results. I am getting the following error message:

    ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-Azure_Monitor/bin/azure_monitor_metrics.py" Error caught in get_metrics_for_subscription, type: , value: string indices must be integers, not str, locale = get_resources_for_rgs

Has anyone else experienced this and knows what is wrong? Thanks
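For context, "string indices must be integers" is a standard Python TypeError that appears when code indexes a string with a string key, typically because it expected a dict (for example, parsed JSON) but received a plain string; a minimal illustration, not taken from the add-on's actual code:

    # expected: a parsed JSON object (a dict)
    resource = {"name": "my-vm"}
    print(resource["name"])    # works, prints "my-vm"

    # what usually triggers the error: the value is still a raw string
    resource = '{"name": "my-vm"}'
    print(resource["name"])    # TypeError: string indices must be integers

In other words, the script is probably treating an API response (or an error body) as parsed JSON when it is actually a string.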

How to change ownership of a lookup file

We have several lookup files for users who have left, and we would like to transfer the ownership to a new production user that we have created for the purpose. Any idea how to do this? For other knowledge objects this can be done in **settings > All Configurations > Reassign Knowledge Objects**, but lookups do not seem to be included in this list. Splunk 6.6.5 ES: 4.7.4
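For what it's worth, other knowledge objects can also be reassigned through the REST ACL endpoint, and lookup table files expose a similar endpoint; a hedged sketch, assuming that endpoint accepts the same acl POST as other objects (unverified on 6.6.5), with placeholder names throughout:

    # reassign a lookup table file to a new owner via the ACL endpoint (sketch)
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/search/data/lookup-table-files/my_lookup.csv/acl \
        -d owner=prod_svc_user -d sharing=app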

Displaying a chart total in a new column

Hi, I'm pretty new to Splunk and have been playing around with it.

    index=sse_cae_summary_idx new_sourcetype=sse_altair_log_summary_stype
    | search FEATURE_NAME="HWHyperMesh*" FEATURE_VERSION="9.0"
    | eval DurationHour=DURATION/3600
    | chart dc(USER_NAME) as "Unique Users" by USER_NAME

The above simply gives me each unique user that is using version 9.0 of HyperMesh. The chart has two columns, USER_NAME and "Unique Users", and the "Unique Users" column contains a 1 for each user. Ideally, I'd rather have a total column that just shows the number of unique users returned by the search. Could someone please help me out? Thank you.
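Since the chart splits by USER_NAME, each row can only ever count a single user; a distinct count across all users would normally come from stats instead. A minimal sketch based on the search above:

    index=sse_cae_summary_idx new_sourcetype=sse_altair_log_summary_stype
    | search FEATURE_NAME="HWHyperMesh*" FEATURE_VERSION="9.0"
    | stats dc(USER_NAME) as "Total Unique Users"

Alternatively, to keep the per-user rows and append a grand-total row, addcoltotals can be added after the existing chart command, e.g. | addcoltotals labelfield=USER_NAME label=Total.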

Saved search doesn't appear in dashboard panel

I have created a scheduled report and I am calling it in a dashboard panel as a base search. I have applied some additional filters to it and displayed the result in the panel, but it says "No results found", although when I click "Open in Search" it works fine, and the saved results are also fine. Could you please advise what the issue could be?
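For clarity, the setup is roughly the following pattern: the scheduled report is referenced as a base search and a post-process search is layered on top of it (the report name, fields, and filter below are placeholders, not my actual dashboard):

    <dashboard>
      <search id="base" ref="My Scheduled Report"></search>
      <row>
        <panel>
          <table>
            <search base="base">
              <query>search status=failed | stats count by host</query>
            </search>
          </table>
        </panel>
      </row>
    </dashboard>

One common cause of "No results found" with this pattern is the post-process search referencing fields that the report does not return, since post-processing only sees the fields the base search outputs.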

Regex Extraction of the VALUE

Hello friends, I have the following issue. I have two types of logs, A and B. Both come from the same index, same sourcetype, and same source (at the client's request), BUT they differ in two aspects:

1) one contains the **value** aaa and the other bbb
2) log A has the structure FIELDNAME=VALUE, while log B has the structure FIELDNAME = VALUE\

Since they belong to the same sourcetype, I have no idea how to remove this \ after the value. Please help.
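A sketch of one way to strip the trailing backslash at search time with a sed-style rex, assuming the extracted field is literally called FIELDNAME (substitute the real field name):

    ... | rex field=FIELDNAME mode=sed "s/\\\\$//"

The sed expression removes a single backslash at the end of the field value (the number of backslashes may need adjusting depending on how the escaping layers stack up); the same idea could also be applied at index time with a SEDCMD in props.conf if the raw events themselves should be cleaned.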

Splunk Supporting Add-on for Active Directory - Decrypted password from stanza=credential:SA-ldapsearch:####: is not utf8, skipping

When I updated "Splunk Supporting Add-on for Active Directory" to 2.1.6. I am getting below error messages in splunkd.log file for 10 out of 14 connectios. Decrypted password from stanza=credential:SA-ldapsearch:####: is not utf8, skipping. Any idea?

What is the capability for editing permissions of a report?

I have created a report and I want certain roles to be able to change only the permissions of the report; they should not be able to edit the report itself, only read it. How do I achieve this, and which capability controls it? I do not want to use the admin role or the admin_all_objects capability. Thanks.

Building splunkforwarder as an OS RPM

Hi, we are using RedHat 6 and RedHat 7 machines. Per a user request, we are trying to package the splunkforwarder as an OS RPM. The splunkforwarder distribution appears to consist of pre-built binaries and libraries, so what we did was simply move those files to the usual system paths (/usr/bin, /usr/lib, etc.). We couldn't find any src.rpm or sample spec file on the internet either. After installing our RPM, we hit a runtime issue with the "splunk" binary: **SPLUNK_HOME is missing**. We are really new to this tool and have no idea how to build it as an RPM. Can anyone help me figure out the correct solution for this case? Regards, Abinaya
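As a point of reference, the splunk binary expects the whole package to stay together under a single SPLUNK_HOME directory rather than being split across /usr/bin and /usr/lib; a minimal sketch of what the error is asking for (the path is the conventional default, not something from our build):

    # keep the forwarder's directory layout intact and point SPLUNK_HOME at it
    export SPLUNK_HOME=/opt/splunkforwarder
    $SPLUNK_HOME/bin/splunk start --accept-license

An RPM spec that installs the whole tree under /opt/splunkforwarder, instead of relocating individual files into /usr, would avoid the missing SPLUNK_HOME error.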

Splunk Index Full?

I am wondering how I can check if an index is full? Going along with this question, is there a way for me to see how much data each index is able to hold? Thanks for your help.
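A rough sketch of one way to compare each index's current size against its configured maximum, using the REST endpoint (the field names are those exposed by /services/data/indexes):

    | rest /services/data/indexes
    | eval pct_full = round(currentDBSizeMB / maxTotalDataSizeMB * 100, 1)
    | table title currentDBSizeMB maxTotalDataSizeMB pct_full

maxTotalDataSizeMB is the per-index size cap from indexes.conf; an index starts rolling or freezing its oldest buckets as it approaches that limit, so "full" usually means old data is being aged out rather than new data being rejected.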

"service streamfwd status" - Is There Documentation Listing What the Results of This Command Mean?

Hello, I'm looking for documentation on what the results of the "service streamfwd status" command mean. If such a doc also covered the other parameters besides "status" (i.e. stop/start/restart), that would be even better. I'm also looking for documentation on the statements written to streamfwd.log; if there is documentation for all of the .log files, that would be great as well. If documentation is not available but someone is able to explain the output below (from "service streamfwd status"), that will do for now.

    ● streamfwd.service - SYSV: Splunk Stream Forwarder 7.1.2
       Loaded: loaded (/etc/rc.d/init.d/streamfwd; bad; vendor preset: disabled)
       Active: active (running) since Thu 2018-09-13 15:10:01 EDT; 3 days ago
         Docs: man:systemd-sysv-generator(8)
      Process: 31736 ExecStart=/etc/rc.d/init.d/streamfwd start (code=exited, status=0/SUCCESS)
       CGroup: /system.slice/streamfwd.service
               └─31744 /opt/streamfwd/bin/streamfwd -D

    Sep 13 15:10:01 stream1 systemd[1]: Starting SYSV: Splunk Stream Forwarder 7.1.2...
    Sep 13 15:10:01 stream1 runuser[31741]: pam_unix(runuser:session): session opened for user streamfwd by (uid=0)
    Sep 13 15:10:01 stream1 runuser[31741]: pam_unix(runuser:session): session closed for user streamfwd
    Sep 13 15:10:01 stream1 streamfwd[31736]: Starting /opt/streamfwd/bin/streamfwd: [ OK ]
    Sep 13 15:10:01 stream1 systemd[1]: Started SYSV: Splunk Stream Forwarder 7.1.2.

I'm particularly concerned with these pieces of the output: "bad", "vendor preset: disabled", "code=exited", and "session closed for user streamfwd". Thanks and God bless, Genesius

Splunk Machine Learning Toolkit install additional Python packages

I am researching how to implement a LightGBM model using scikit-learn in Python. Is it possible to install the LightGBM Python package so that it can be used through the ML-SPL API?

Palo Alto app and Palo Alto add-on not transforming GlobalProtect user

Hi, we have noticed that within the Palo Alto app --> Activity - GlobalProtect, the user is always "unknown". In transforms.conf:

    [extract_globalprotect_user]
    SOURCE_KEY = description
    REGEX = User name: (?[^,]+)

    [extract_globalprotect_ip]
    SOURCE_KEY = description
    REGEX = Private IP: (?[^,]+)

the user should be extracted out of the description field. Within props.conf, in the traffic section, there is:

    EVAL-user = coalesce(src_user,dest_user,"unknown")

Has anyone else run into this issue and resolved it?
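For comparison, a field-extracting transform normally needs a named capture group (or a FORMAT setting) to produce a field at all, so the stanzas would usually look something like the sketch below; the group names are placeholders, not necessarily what the add-on ships:

    [extract_globalprotect_user]
    SOURCE_KEY = description
    REGEX = User name: (?<user>[^,]+)

    [extract_globalprotect_ip]
    SOURCE_KEY = description
    REGEX = Private IP: (?<ip>[^,]+)

If the deployed transforms really have no capture group name, no field gets created and the coalesce(...) in props.conf always falls through to "unknown".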

How to display links only when the h1 tag is clicked

I have a dashboard with a panel that has a heading (h1 tag) and two links under it. I want to display those two links only when I click on the h1 tag, which means that by default the links should not be shown. The sample code I use is shown below (only the heading and link text):

    Click Here
    link to dashboard 1
    link to dashboard 2
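One way to get this behavior without custom JavaScript is an HTML panel using a details/summary element; this is a hedged sketch that assumes the html panel does not strip those tags, and the link targets are placeholders:

    <panel>
      <html>
        <details>
          <summary><h1>Click Here</h1></summary>
          <ul>
            <li><a href="/app/search/dashboard_1">link to dashboard 1</a></li>
            <li><a href="/app/search/dashboard_2">link to dashboard 2</a></li>
          </ul>
        </details>
      </html>
    </panel>

Clicking the heading toggles the list open and closed; if the html panel sanitizes details/summary away, the same effect would need a small JavaScript click handler on the h1 instead.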

Can Search Head Clustering (SHC) be configured to force the reaping of expired artifacts?

I'm new to managing a SHC, and I'm observing that expired job artifacts are not being reaped, even long after the job expiration, as long as the user has the search results open in a browser tab. The jobs persist in the job manager with an expiration value of "Expired". From my own observation, in a non-clustered search head configuration the expired job results disappear from the browser window on schedule. Is it expected in SHC environments that open browser tabs can prevent the reaping of job artifacts indefinitely, or is there some non-default configuration that controls this behavior? The problem this causes is that, with old expired artifacts being retained, the user soon exceeds their jobs disk limit. I understand this can be prevented by users closing out old tabs, but I mainly wanted to see whether the SHC can be configured to force the reaping of expired artifacts.

Splunk Add-on for Amazon Web Services: Alert isn't published to SNS due to empty message content

Hi, I am using the Splunk Add-on and App for Amazon Web Services (AWS). I enabled one of the default alerts and added an AWS Simple Notification Service (SNS) alert as a trigger action, but I am not receiving any SNS alerts. When I check the logs, I see "SNSPublisherError: Alert isn't published to SNS due to empty message content". The mandatory fields for SNS alerts are Account, Region, Topic Name, and Message ($result.message$), and all of them are correct. Can someone point me in the right direction as to what I might be missing?
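Since the Message field is set to $result.message$, the token only has content if the triggering search actually returns a field called message; a quick hedged check and workaround sketch (field names other than message are placeholders):

    ... your alert search ...
    | eval message=coalesce(message, "AWS alert fired for " . source)
    | table message

If the default alert's search does not produce a message field, $result.message$ expands to an empty string, which would match the "empty message content" error.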