Channel: Questions in topic: "splunk-enterprise"

IOSTAT Error

The entire Splunk cluster runs on Windows. I was testing the roll of buckets from hot to cold, where the cold bucket partition is shared among all indexers in the cluster, and I found this error: "RU - Failed to get volume name for \\musnas05\Splunk_Cold2\, iostats will not be collected". Can anyone help troubleshoot this?

How retention works

I need to understand how retention works with respect to _time and indexed time, given frozenTimePeriodInDays = 30.

Case 1: suppose my events carry no date, like this:

Identity "32020", Sys "123", location "USA", Region "Asia", Type "Balance"

If I run the DB query on 30-1-2019 at 3:30 AM: as per my understanding, since the event has no date, Splunk uses the index time, so the event would appear in Splunk as:

2019-01-30 04:00:14, Identity "32020", Sys "123", location "USA", Region "Asia", Type "Balance"

Since the retention period is set to one month, this event, which was generated today, will get deleted or archived after one month (around the end of February).

Case 2: suppose the event does contain a date:

2018-01-30 04:00:14, Identity "32020", Sys "123", location "USA", Region "Asia", Type "Balance"

If I run the DB query on 30-1-2019 at 3:30 AM, then (kindly correct me if I am wrong) the data will not come into Splunk, because Splunk will check the event date against today's date, see that it is more than one month old, and not index the data.
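
For reference, the retention setting in question lives in indexes.conf. A minimal sketch, assuming a hypothetical index name my_index:

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# A bucket becomes eligible to freeze once its newest event is older than 30 days
frozenTimePeriodInDays = 30

Note that freezing operates on whole buckets, keyed off the latest event timestamp in each bucket, so individual old events can persist somewhat beyond the 30-day mark.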

Filter Events before Indexing

I get events from a universal forwarder. If "alertd[123456]: ABC:" appears in an event, I would like to index it; all other events can be ignored. Do you have a solution?

2019-01-23T14:22:45+01:00 host kernel: [123456.789101] ll header: yf:ff:ff:ef:ff:ff:00:00:00:00:88:05:01:00
2019-01-23T14:22:49+01:00 host alertd[456789]: get_db_c(): ......
2019-01-23T14:22:50+01:00 host alertd[123456]: CEF:0|abcdef|host|....
2019-01-23T14:22:59+01:00 host alertd[456789]: abc_send(): ......

I have tried the following configuration on the indexer, but it didn't work:

props.conf

[source::C:\Users\test\testsource.log]
TRANSFORMS-set = setnull,setparsing

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = (alertd\[\d{1,6}\]\:\s\w{3}\:)
DEST_KEY = queue
FORMAT = indexQueue

Thanks for your solutions.

How to extract month and year from _time

_time is in the format below:

2019-01-30 07:10:51.191
2019-01-30 07:10:51.190
2019-01-30 07:10:51.189

I need output in the format below:

January 2019

Any help would be highly appreciated.
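
A minimal sketch using the strftime eval function; the output field name monthYear is an assumption:

... your base search ...
| eval monthYear = strftime(_time, "%B %Y")
| table _time monthYear

%B renders the full month name and %Y the four-digit year, so 2019-01-30 07:10:51.191 becomes "January 2019".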

Capabilities for a role to trigger an email via a Splunk alert

I have a role on the search head whose users are not able to send an email to a specific user or group. What capabilities does a role require so that it can save an alert that triggers an email to users and to a distribution list? (Splunk 7.1.5)
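
A minimal authorize.conf sketch, assuming a hypothetical role name alert_emailers; schedule_search is the capability needed to save scheduled searches (which alerts are), and the exact set of additional capabilities can vary by version:

[role_alert_emailers]
importRoles = user
# Needed to save searches on a schedule, i.e., alerts
schedule_search = enabled

If emails still fail with this in place, the email alert action configuration (the SMTP settings in alert_actions.conf) is worth checking as well, since that failure mode can look similar to a missing capability.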

Check Point firewall and DB Connect

I have Check Point firewall logs on my Splunk instance, and now I want to create alerts for them. I want an alert when someone tries to connect to network components such as routers, switches, etc. from a non-permitted segment. The problem is that components are added to the firewall all the time, and I need the list of components to stay updated, so a static list/lookup/xls is out of the question because it is a one-time thing. I need a dynamic solution that keeps Splunk updated on the changes that happen on the firewall. I know about the DB Connect option, but the Check Point firewall doesn't use a SQL database of any kind, and I saw that DB Connect requires one. Is my information wrong? Is there another way of making DB Connect work with a Check Point firewall, or is there another solution to my problem other than DB Connect?
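
One possible approach without DB Connect is to maintain the component list as a lookup that a scheduled search refreshes from the firewall logs themselves. A minimal sketch, where the index, sourcetype, and field names are assumptions that depend on how the Check Point data is parsed:

index=checkpoint sourcetype=cp_log
| stats latest(_time) as last_seen by dest_ip
| outputlookup network_components.csv

A second scheduled search can then compare incoming connections against | inputlookup network_components.csv and alert when the source lies outside the permitted segment. Because the lookup is rebuilt on a schedule, newly added components show up without manual list maintenance.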

Indexing salt on ID value

Hello, I'm looking for a way to not index an event if its ID is already in the index. The log has this format:

UniqueID;data;data2;etc..
UniqueID2;data3;data4;etc..

but two different log files may contain the same event. Is there a way to use the unique ID as a salt?
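
Splunk does not deduplicate individual events at index time (crcSalt in inputs.conf only influences whether a whole file is re-read), so duplicates are usually dropped at search time instead. A minimal sketch, assuming the first semicolon-delimited field has been extracted as unique_id:

index=my_index
| dedup unique_id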

SQL Server databases on Windows

I have received logs from a SQL Server database on Windows, at the database level only. Splunk received failed-login logs like the following:

Login failed for user 'DZIT\\trendmicrosrv'. Reason: Failed to open the explicitly specified database 'dsm'. [CLIENT: 10.0.20.135]

That is good. But when you select action=success, the following logs appear:

2019-01-30 11:55:29.803, event_time="2019-01-30 11:55:29.8035450", sequence_number="1", action_id="LGIS", succeeded="1", is_column_permission="0", session_id="89", server_principal_id="276", database_principal_id="0", target_server_principal_id="0", target_database_principal_id="0", object_id="0", class_type="LX", session_server_principal_name="DZIT\EPM_SP_Farm", server_principal_name="DZIT\EPM_SP_Farm", server_instance_name="TSTEPMSQL1", statement="-- network protocol: TCP/IP set quoted_identifier on set arithabort off set numeric_roundabort off set ansi_warnings on set ansi_padding on set ansi_nulls on set concat_null_yields_null on set cursor_close_on_commit off set implicit_transactions off set language us_english set dateformat mdy set datefirst 7 set transaction isolation level read committed ", additional_information="10x280000200x0001f4380x00000000800010.0.20.1180", file_name="D:\SQLAudit\MSSQL_Server_Audit_E248BC47-025B-474D-A5DE-BA9B35F9688A_0_131933227882110000.sqlaudit", audit_file_offset="1807360", user_defined_event_id="0", audit_schema_version="1", transaction_id="0"

These logs are not clear; why do they appear this way? I need them to be clear, such as "Login successful", etc. I know this case is for the SQL team rather than the Splunk team, but I need your support on it. Thank you.
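
These are raw SQL Server audit records, so one option is to derive a readable outcome field at search time. A minimal sketch, where the index and sourcetype names are assumptions and the key=value pairs above are assumed auto-extracted:

index=mssql sourcetype=mssql:audit
| eval outcome = if(succeeded=="1", "Login succeeded", "Login failed")
| table _time server_principal_name server_instance_name outcome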

Input settings for Microsoft Office 365 Reporting Add-on for Splunk

Hi, we are looking to define our Continuously Monitor inputs and were wondering what settings people have used for their production deployments. I understand it can depend on the volume of message-tracking logs being generated. Do we know how much data the TA can handle in terms of throughput? We were thinking of the following:

Interval – every 900 seconds
Query window size – 15 mins
Delay throttle – 15 mins

Thanks

Remove path from source to show only the file name for a file monitor input

Is there a way, at input time, to omit the path of a monitored file so that source contains only the file name?

Path monitored: /opt/csv/*

The files in that location:

filenameA.csv
filenameB.csv
filenameC.csv
filenameD.csv

But the source is always prepended with the path:

/opt/csv/filenameA.csv
/opt/csv/filenameB.csv

Can this be removed at input time? Grazie
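
This can be done with an index-time transform that rewrites the source metadata key. A minimal sketch for the path above (the stanza and transform names are arbitrary):

props.conf

[source::/opt/csv/*]
TRANSFORMS-strip_path = strip_source_path

transforms.conf

[strip_source_path]
SOURCE_KEY = MetaData:Source
REGEX = ([^/]+)$
DEST_KEY = MetaData:Source
FORMAT = source::$1

The regex keeps everything after the last slash, and the FORMAT writes it back as the new source value. This happens at parsing time, so the configuration belongs on the indexers (or a heavy forwarder), not on a universal forwarder.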

Unchecking a checkbox does not work if it is on by default in 7.1.x

Hello, with a checkbox input that is checked by default, I am unable to deselect the value. I have seen this behavior after upgrading to 7.1.x; in an earlier version (7.0.3) I was able to select/unselect the checkbox. I am not sure if some functionality changed or if I am missing something. My checkbox is defined with a single choice "ON", which is also the default. Thanks
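
For reference, a minimal Simple XML sketch of such a checkbox; the token name chk is an assumption, since the original code sample did not survive posting:

<input type="checkbox" token="chk">
  <label>My checkbox</label>
  <choice value="ON">ON</choice>
  <default>ON</default>
</input>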

Run searches on app first install but not on upgrade

I would like to create an app which, when installed, will do the following:

- Run a number of searches against an already existing index during first install, to output data to a summary index or a csv/lookup
- Create a number of REST modular inputs and run each one once when the app is first installed
- Set up a number of scheduled searches to run at a defined period

Please can someone advise how I can trigger a search to run during an app's first install, but not on an upgrade? Thanks, Dan
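
One possible pattern is a scripted input that runs once at each startup and uses a marker file so it only does real work on first install. A minimal inputs.conf sketch, where the script name and marker location are assumptions:

[script://./bin/first_install.sh]
# An interval of -1 runs the script once per Splunk start, not on a schedule
interval = -1
disabled = 0

The script would check for something like $SPLUNK_HOME/etc/apps/myapp/local/.installed, run its searches (for example via the REST API), and then create that marker so later restarts and upgrades become no-ops. This is a sketch of one workaround; Splunk has no built-in first-install-only hook that I know of.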

Calculate % based upon the selection made in a filter

We have a dashboard panel which shows the overall AV compliance % for Windows servers. The code is as below:

index=dbconnect sourcetype=dbconnect:sql:SCCM_AVCompliance_AllServers
| table Name DC OU ResourceID SignatureUpTo1DayOld AntivirusSignatureAge AntivirusSignatureUpdateDateTime AntivirusSignatureVersion
| rename Name as host
| join host [| inputlookup elixpediadashboardservers.csv | search (host="*") Environment="*" | search "Operating System"=WINDOWS]
| append [| inputlookup elixpediadashboardservers.csv | search (host="*") Environment="*" | search "Operating System"=WINDOWS]
| dedup host
| fillnull value=2 AntivirusSignatureAge
| eval Compliance=if(AntivirusSignatureAge==0 OR AntivirusSignatureAge==1, "COMPLIANT", "NONCOMPLIANT")
| stats count(eval(Compliance=="COMPLIANT")) as compliant, count(eval(Compliance=="NONCOMPLIANT")) as noncompliant, count as total
| eval AVUpdateCompliance=round((compliant/total)*100,2)
| table AVUpdateCompliance

Now the customer requirement is to add a filter on top of this panel that shows the last 4 months, like:

January 2019
December 2018
November 2018
October 2018

This filter has already been created. My question is: how do I pass the month as a token into my query, so that if the user selects November 2018 from the dropdown, the panel shows the AV compliance % only for that month? Any help would be highly appreciated. Thanks
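
One common pattern is for each dropdown choice to supply time boundaries rather than a display name. A minimal sketch, where the token names month_earliest and month_latest are assumptions that must match whatever the existing dropdown sets:

index=dbconnect sourcetype=dbconnect:sql:SCCM_AVCompliance_AllServers earliest=$month_earliest$ latest=$month_latest$
| ...

Alternatively, if the dropdown token carries the literal label (e.g., "November 2018"), the events can be filtered after the fact:

... | where strftime(_time, "%B %Y") = "$month_token$"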

Reloading the index every time

Hello experts, we have an issue. We use DB Connect to connect to an Oracle database and pull data from a table; the configured schedule is 5 minutes, and it is configured as a batch input. Our requirement is that whenever we get new data from the database, the old data is deleted or replaced with the new data. How can we achieve this? Kindly suggest.
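
Indexed data in Splunk is immutable, so a common workaround is to keep the "current" snapshot in a lookup that a scheduled search overwrites on each run. A minimal sketch, where the index, source, and lookup names are assumptions:

index=oracle_batch source=my_table_input earliest=-5m
| table *
| outputlookup my_table_current.csv

Dashboards then read from | inputlookup my_table_current.csv, which always reflects only the latest batch; alternatively, searches over the raw index can | dedup on the table's key fields to keep only the newest copy of each row.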

Add custom eval function or macro to custom app search

Hi, I am currently struggling with a problem. I am implementing custom views within a custom app that has one input field, as text; the field can contain a URL. When submitting the form, I trigger 3 different searches in dashboards. The problem: some searches only need the hostname, while others need the complete URL. I did some research and arrived at a solution that I consider a dirty/bad one: I added some JavaScript and a second token, hooked into the submit button click, extracted the hostname from the given URL, and set the new token with that value. There are some timing problems as well.

There are several functions available, like md5() or len(), so I was wondering if it is possible to add a custom function. Something like:

index=* sourcetype=whatever TERM(extract_host($url$))

where extract_host calls a Python function that takes the token as input and returns a new string that replaces the function call in the search before the search is executed. Or something like:

eval host=extract_host($url$) | index=* sourcetype=whatever TERM(host)

I could not find a way to solve this problem other than the very bad JavaScript solution. Any ideas? Thanks in advance.
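
For the hostname-extraction part specifically, the built-in replace() eval function may already be enough, since the token is substituted into the search string before it runs. A minimal sketch:

index=* sourcetype=whatever
| eval host_from_url = replace("$url$", "^https?://([^/:]+).*", "\1")

For a genuinely reusable function, the standard extension points are a search macro wrapping an eval expression, or a custom search command written in Python with the Splunk SDK; eval itself cannot call arbitrary user Python.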

Null value issue

Hi guys, our search query is like this (the rex capture-group names below were mangled when posting; the first two are reconstructed from the table command, and the third group's name did not survive):

LogName=Application SourceName=Script
| rex "Days Remaining: (?<DaysRemaining>.*)days"
| rex ": Origin=(?<CertificateName>.+?)\,"
| rex "(?.+?)\;"
| table CertificateName, DaysRemaining

The output gives us a table with two columns, CertificateName and DaysRemaining, where CertificateName holds the names of the certificates and DaysRemaining holds the days left until certificate expiry. But sometimes the DaysRemaining column has no number for a few of the CertificateNames and remains blank, as shown in the attached screenshot. Is there any way:

1. We can remove the rows which have no values (blank rows) using the above query?
2. We can insert a text string like "Not Available" wherever we have these null values using the above query?

Please advise.
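
Minimal sketches for both variants, assuming the DaysRemaining extraction above:

To drop the blank rows:

... | where isnotnull(DaysRemaining) AND DaysRemaining!=""

To show placeholder text instead:

... | fillnull value="Not Available" DaysRemaining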

Unable to get events from Bamboo add-on, getting many errors

ERROR:bamboo:Failed on request:
Traceback (most recent call last):
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/bamboo.py", line 180, in get_bamboo_plans
    resp = requests.get(translated_url, **args)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/requests/api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/requests/api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/requests/adapters.py", line 447, in send
    raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)

ERROR:root:Get error when collecting events.
Traceback (most recent call last):
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/libs/modinput_wrapper/base_modinput.py", line 173, in stream_events
    self.collect_events(inputs, ew)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/bamboo.py", line 266, in collect_events
    self.sync(ew)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/bamboo.py", line 115, in sync
    plans_data = self.get_bamboo_plans(data)
  File "/users/splunk/az/splunk/etc/apps/ta-bamboo/bin/bamboo.py", line 196, in get_bamboo_plans
    raise e
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676)

Connecting to an Oracle database and running queries

Hi, I would like to connect to an Oracle database, run certain queries every morning, and output the results in a dashboard. Is that possible in Splunk? Thanks, Sweta
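
Yes, this is the typical use case for the Splunk DB Connect app: configure an Oracle connection, then either schedule a DB input or query ad hoc with the dbxquery command. A minimal sketch, where the connection name and table are assumptions:

| dbxquery connection="oracle_prod" query="SELECT * FROM my_table"

A dashboard panel can run such a search on a morning schedule and render the results directly.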

Basic search doesn't return consistent data

I'm doing a simple query in Splunk to retrieve some data:

index=my_index | table source,host

I've also set a specific timestamp using the "date & time range" tab; the query returns around 19K events/lines. The issue is that the query misses some data (around 300 events/lines in total), data that appears when I lower the time range or when I am more specific in the filtering of my query, such as:

index=my_index host=a_specific_host | table source,host

Then the previously missing data is shown. One thing to note is that the missing data isn't random; it is always the same. Do you have any idea what could cause the issue? I'm running Splunk Enterprise version 7.2.3. Regards

Regroup Splunk events with nearly identical _time

Hello all, every 10 seconds I send a bunch of events to Splunk. I need to count how many events I receive every 10 seconds, but I can't get the real number, because Splunk doesn't group events together if their times differ even slightly. A very simple example:

10:00:10.052 Hello Splunk!
10:00:10.052 Hello Splunk!
10:00:10.054 Hello Splunk!
10:00:10.054 Hello Splunk!
10:00:20.052 Hello Splunk!
10:00:20.052 Hello Splunk!
10:00:20.055 Hello Splunk!

Splunk groups those events into 4 groups (events at 10.052, 10.054, 20.052, 20.055) instead of 2 groups (events at 10:00:10 and 10:00:20, for example). For such an example, I would like to get something like:

10:00:10.00 -> 4 Hello Splunk
10:00:20.00 -> 3 Hello Splunk

Is there a workaround for this? Thank you.
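
The bin (also spelled bucket) command does exactly this kind of grouping; a minimal sketch, where the index name is an assumption:

index=my_index "Hello Splunk"
| bin _time span=10s
| stats count by _time

span=10s floors each timestamp to its 10-second boundary, so the events above would collapse into two rows with counts 4 and 3.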