Channel: Questions in topic: "splunk-enterprise"

"Q" Encoding

Hello, I have a field that contains values in "Q" encoding (I just discovered this encoding type: RFC 1522). It seems to be used a lot in email logs. I wasn't able to find any command that helps convert it to a readable string. Any help, idea, or suggestion for converting these values (with Splunk commands or in the field extraction process) would be appreciated.
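
For what it's worth, a possible workaround sketch rather than a built-in decoder (the field name `encoded_field` is illustrative): Q encoding is essentially quoted-printable, so you can strip the `=?charset?Q?...?=` wrapper, map `_` to spaces, rewrite the `=XX` hex escapes as `%XX`, and let eval's `urldecode` handle the hex decoding:

    ... | eval q=replace(encoded_field, "=\?[^?]+\?[Qq]\?(.*)\?=", "\1")
        | eval q=replace(q, "_", " ")
        | eval q=replace(q, "=([0-9A-Fa-f]{2})", "%\1")
        | eval decoded=urldecode(q)

This only covers single Q-encoded words; multi-charset values and B-encoded segments would need extra handling.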

95th percentile of the transaction duration

Hello, I'd like to display the 95th percentile of the transaction duration. Any hint how I can do this? This is my current search: host=server1 | rename CorrelationId AS CDI | transaction CDI | table CDI duration Best, Manuel
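
For reference, a minimal sketch using the `perc<X>` stats function on the `duration` field that `transaction` produces:

    host=server1
    | rename CorrelationId AS CDI
    | transaction CDI
    | stats perc95(duration) AS duration_p95

`perc95(duration)` returns an approximate 95th percentile; `exactperc95(duration)` trades memory for an exact value.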

Extract data for weekends.

Hi, I need to calculate the amount of time for which a server was used on weekends. For that I am using the following search string:

    index="history" source="DetailedLogs"
    | transaction source
    | convert ctime(_time) AS day_of_week timeformat="%A"
    | convert ctime(_time) AS day timeformat="%e"
    | eval weekendno = case(
        day>=1 AND day<=7  AND (day_of_week=="Saturday" OR day_of_week=="Sunday"), "Weekend 1",
        day>7  AND day<=14 AND (day_of_week=="Saturday" OR day_of_week=="Sunday"), "Weekend 2",
        day>14 AND day<=21 AND (day_of_week=="Saturday" OR day_of_week=="Sunday"), "Weekend 3",
        day>21            AND (day_of_week=="Saturday" OR day_of_week=="Sunday"), "Weekend 4")
    | chart count(duration) by weekendno

1. After executing the search string, I am getting results for usage of the server on weekdays as well, so clearly something is wrong with it. Can someone point out the mistake, or is there a better approach to achieve the same result?
2. I also want to count the total duration of usage, for which I should use the sum() function, but it is giving me an error.
3. What is the default unit of the "duration" field? Is it seconds or hours?

Screenshots attached. One of the events fetched has a timestamp "08-29-2017:11:41:42", which is a Tuesday. Thanks ![alt text][1] ![alt text][2] [1]: /storage/temp/209705-splunk-issue.png [2]: /storage/temp/209708-splunk-issue2.png
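
A sketch of one possible restructuring (the `ceiling(day/7)` bucketing is an assumption; it treats days 29-31 as a fifth weekend, unlike the original case()): filtering to Saturday/Sunday with `where` before bucketing keeps weekdays out, `sum(duration)` totals the usage, and `duration` from `transaction` is expressed in seconds:

    index="history" source="DetailedLogs"
    | transaction source
    | eval day_of_week=strftime(_time, "%A"), day=tonumber(strftime(_time, "%d"))
    | where day_of_week="Saturday" OR day_of_week="Sunday"
    | eval weekendno="Weekend ".ceiling(day/7)
    | stats sum(duration) AS total_seconds BY weekendno

One possible cause of the weekday leakage in the original: `%e` produces a space-padded day-of-month string, so comparisons like `day>7` may not behave numerically without `tonumber`.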

Datamodel_summary folder disk space

Hello guys, I noticed that the datamodel_summary folder is filling up my disk space. How can I limit this? It is growing day by day by a large number of GB. Thanks in advance
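
If accelerated data models are the source of the growth, one common lever is capping the summary window in datamodels.conf, since Splunk only maintains summaries back to `acceleration.earliest_time` (the stanza name below is hypothetical; a sketch, not a full sizing guide):

    # datamodels.conf -- keep only 7 days of summary data
    [My_Data_Model]
    acceleration = 1
    acceleration.earliest_time = -7d

The `datamodel_summary` directories under each index shrink as summary data ages out of that window.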

Splunk Index Cluster change coldPath

Hi, I was thinking about moving the cold buckets to a separate disk. I found some questions and docs about it, but nothing specific to an index cluster, so please help me understand the steps. Would something like this work?

1. Put the master in maintenance mode (is this necessary?)
2. For each peer, one at a time: take it down with ./splunk offline
   2.1 Change coldPath in indexes.conf to the new path
   2.2 Copy the old cold data to the new path
   2.3 Start the peer up again
3. Repeat for each peer
4. Start up the master again

Please provide additional steps if needed. Since the master is down, I cannot distribute the indexes.conf changes via the bundle, so the configuration differs; is there a way around that? Would be great if someone could answer this. Thank you. Regards, David
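
For reference, a minimal sketch of the per-peer edit in step 2.1 (index name and target path are hypothetical; in a cluster this stanza normally lives in the master-apps bundle so that all peers stay identical):

    # indexes.conf
    [main]
    homePath   = $SPLUNK_DB/main/db
    coldPath   = /mnt/cold_volume/main/colddb
    thawedPath = $SPLUNK_DB/main/thaweddb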

One more question about bash_history

Hello, all! Has anyone set up tracking of the bash_history file for all users in /home/*/.bash_history? I experimented with fschange, but the splunkforwarder doesn't send the data to the server, even though the splunk user has read access to the .bash_history files. Can anybody help me with this question? Thanks!
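
A sketch of the plain `monitor` alternative, which is generally preferred over `fschange` for shipping file contents (sourcetype and index names are hypothetical):

    # inputs.conf on the forwarder
    [monitor:///home/*/.bash_history]
    sourcetype = bash_history
    index = os
    disabled = 0

Note that `fschange` by default reports change events about files rather than forwarding their contents the way `monitor` does.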

Lowest single value from multiple fields

Dear experts! I have a sourcetype that contains fields like this:

    domain_field1=5 domain_field2=5 domain_field3=4 domain_field4=3

I want to display the lowest number available. To make it more complicated, the number of fields can differ, but they will always be prefixed with "domain_". So in the example above, the value for the search would be "3". Is this possible?
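
One possible sketch using `foreach` with a wildcard, folding every `domain_*` field into a running minimum without knowing the field names in advance:

    ... | foreach domain_* [ eval lowest=min(coalesce(lowest, '<<FIELD>>'), '<<FIELD>>') ]

On the example event this leaves `lowest=3`; `coalesce` seeds the first comparison while `lowest` does not exist yet.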

How to run a bulk search

Is there an alternate way to run a search query with a 5K filter condition (I am trying to filter responses based on the session ID)? Currently the search hangs all the time. Please advise. Thanks, AP
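
A common sketch for large ID lists (lookup file and field names are hypothetical): load the 5K session IDs into a lookup and let a subsearch expand them, rather than hand-writing the OR chain:

    index=app sourcetype=access
        [ | inputlookup session_ids.csv | fields sessionId ]

The subsearch is rewritten into `(sessionId="..." OR sessionId="..." ...)` automatically; keep the default subsearch limit in mind (10,000 results, limits.conf [subsearch] maxout).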

Using kvstore to hold event counts from the past

I am building up Splunk content for our product. I am building a dashboard to count events, of which there are many. I want to use the KV store to hold this info and then have the app use a lookup to get the data. I have played a bit with the KV store and understand how to do this, but I need advice on setup. We have multiple search heads; how do I store the data at the index layer so the other [isolated] search heads can access it without the query running locally? It seems that I can enable replication? What config files do I need to set up? It seems I need collections.conf and transforms.conf; is this correct? I assume I can store a field as time/date? Any help/advice is welcome!
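
A minimal sketch of the two files (collection and field names are hypothetical): `replicate = true` ships the collection to the indexers in the knowledge bundle, and the `time` field type covers storing a date/time value:

    # collections.conf
    [event_counts]
    field.count = number
    field.last_seen = time
    replicate = true

    # transforms.conf
    [event_counts_lookup]
    external_type = kvstore
    collection = event_counts
    fields_list = _key, count, last_seen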

Regex for Dell Quest TPAM logs ?

Has anyone done any work with Dell/Quest TPAM logs? I don't have enough experience with regex to know where to start. As an example, for UserName: sometimes the value is one word, sometimes two words, so having Splunk build the regex does not work very well. I'm trying to learn: what kind of regex will take the next word, or two words, until it sees another word with a trailing colon, which is the next key pair, like Operation: or ObjectType: in the log examples below?

Aug 22 03:41:41 TPAMHOST1 PAR[72]: Source: TPAMCONSOLE UserName: **Automation Engine** Operation: Timed Out ObjectType: Authenticated Session Target: id12345 Role: N/A Failed? 0 OtherInfo: Inactive for 40 minutes

Aug 22 03:29:19 TPAMHOST1 PAR[61]: Source: TPAMCONSOLE UserName: **id12345** Operation: Logout ObjectType: Authentication Target: id12345 Role: N/A Failed? 0 OtherInfo: Inactive for 14 seconds. From address 10.10.10.10
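
A sketch of one way to cut the value out with a lazy match plus a lookahead, so the capture stops just before the next `Key:` token (it matches the two sample lines above, but treat it as a starting point):

    ... | rex field=_raw "UserName:\s+(?<UserName>.+?)(?=\s+\w+:)"

Against the samples this yields `Automation Engine` and `id12345`, and the same lookahead pattern works for the other key pairs.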

Email destination list in Alert

Hello, could you please tell me if it is possible to provide an email distribution list from a lookup table to a Splunk alert that sends email? In other words, can I use search results (from a lookup table) to provide the list of email addresses for an alert? Thanks in advance, Cyril
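
One sketch of a workaround (lookup and field names are hypothetical): have the alert search itself pull the recipients from the lookup and flatten them into a single field, then put the token `$result.email$` in the alert's "To" box, which resolves from the first result row:

    ... your alert search ...
    | lookup alert_recipients.csv alert_name OUTPUT email
    | eval email=mvjoin(email, ",")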

How to filter results based on different users?

I am looking to filter results based on the user. The problem is that some of the data doesn't have a user value. Currently, I am using the condition below:

    User = $user_token$ OR NOT User = *

Condition 1, to extract all the results ($user_token$ = *), works fine:

    User = * OR NOT User = *

(The "OR NOT User = *" part is there to also pick up data that has no user value.)

Condition 2, to extract results for a specific user ($user_token$ = XYZ):

    User = XYZ OR NOT User = *

In condition 2, along with user XYZ, it also extracts the data that doesn't have a user value. I am not sure how to modify the condition so that both cases work together.

**My Search Query:**

    | tstats summariesonly=true max(All_TPS_Logs.duration) AS All_TPS_Logs.duration values(All_TPS_Logs.user) AS user
      FROM datamodel=MLC_TPS_DEBUG4
      WHERE (nodename=All_TPS_Logs host=LCH_UPGR36-T32_LRBCrash-2017-08-08_09_44_32-archive
        (All_TPS_Logs.user=MUREXBO OR NOT All_TPS_Logs.user=*))
        All_TPS_Logs.name=***
      GROUPBY _time, All_TPS_Logs.fullyQualifiedMethod span=1s
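
One sketch that keeps a single token working for both cases: fill in the missing user after the `tstats` call and filter afterwards, so a literal `*` also matches the placeholder (the "unknown" placeholder is an illustrative choice):

    | tstats summariesonly=true max(All_TPS_Logs.duration) AS duration values(All_TPS_Logs.user) AS user
      FROM datamodel=MLC_TPS_DEBUG4
      WHERE nodename=All_TPS_Logs
      GROUPBY _time, All_TPS_Logs.fullyQualifiedMethod span=1s
    | eval user=coalesce(user, "unknown")
    | search user=$user_token$

With `$user_token$ = *` every row survives, including the user-less ones; with `$user_token$ = XYZ` only that user remains.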

Splunk DB Connect throws a second, parse-only query in Oracle DB

Hello everyone! Dear support, I have a small problem with Splunk DB Connect 2.4.0.

Situation: We are pulling Oracle information via an SQL query run against the database. The base table of the query has an incremental unique identification field, SEQUENCE_NO, used as the position marker. We completed the DB Input and verified that the data gets pulled correctly.

Problem: At the Oracle level, analyzing the output of the V$SQLAREA system view, we noticed that two (2) SQLs appear (on every polling round) instead of one, with identical datetimes and different content in the SQL_TEXT field.

The first SQL (call it Q1) is a correct implementation of SEQUENCE_NO limiting our original SQL as declared in the DB Input. This is the query that gets executed and brings back the rows. This SQL is correct!

The second SQL (call it Q2), first, should not be there at all; second, it is a full table scan that does not consider the last SEQUENCE_NO value; and third (fortunately), it does not get executed, only parsed/loaded: the V$SQLAREA fields FETCHES, EXECUTIONS, PX_SERVERS_EXECUTIONS, END_OF_FETCH_COUNT, and USERS_EXECUTING are all 0. Please see below.

Original SQL, as entered in DB Connect / Operations / DB Inputs:

    select a.sequence_no, a.user_id, a.system_start_time, a.system_end_time,
           a.log_type, a.exit_flag, a.terminal_id, a.branch_code, c.branch_name,
           a.function_id, b.description, b.main_menu, b.sub_menu_1, b.sub_menu_2
    from fccubt.smtb_sms_log a, fccubt.smtb_function_description b, fccubt.sttm_branch c
    where a.function_id = b.function_id(+)
      and a.branch_code = c.branch_code(+)
      and 'ENG' = b.lang_code(+)

Correct SQL Q1:

    SELECT * FROM (select a.sequence_no, a.user_id, a.system_start_time, a.system_end_time,
           a.log_type, a.exit_flag, a.terminal_id, a.branch_code, c.branch_name,
           a.function_id, b.description, b.main_menu, b.sub_menu_1, b.sub_menu_2
    from fccubt.smtb_sms_log a, fccubt.smtb_function_description b, fccubt.sttm_branch c
    where a.function_id = b.function_id(+)
      and a.branch_code = c.branch_code(+)
      and :"SYS_B_0" = b.lang_code(+)) t
    WHERE "SEQUENCE_NO" > :1 ORDER BY "SEQUENCE_NO" ASC

As seen, the query is limited to the last SEQUENCE_NO; this is correct and does work: it executes and brings data.

Wrong SQL Q2:

    SELECT * FROM (select a.sequence_no, a.user_id, a.system_start_time, a.system_end_time,
           a.log_type, a.exit_flag, a.terminal_id, a.branch_code, c.branch_name,
           a.function_id, b.description, b.main_menu, b.sub_menu_1, b.sub_menu_2
    from fccubt.smtb_sms_log a, fccubt.smtb_function_description b, fccubt.sttm_branch c
    where a.function_id = b.function_id(+)
      and a.branch_code = c.branch_code(+)
      and :"SYS_B_0" = b.lang_code(+)) t
    ORDER BY "SEQUENCE_NO" ASC

This is a full table scan; fortunately, it does not execute.

See also the entry in DB Connect's inputs.conf:

    [mi_input://FCCUBT_SMTB_SMS_LOG]
    connection = ORA_LIVE
    description = FCCUBT SMTB_SMS_LOG Application Log
    enable_query_wrapping = 1
    index = db_oracle
    input_timestamp_column_fullname = (003) NULL.SYSTEM_START_TIME.DATE
    input_timestamp_column_name = SYSTEM_START_TIME
    interval = 120
    max_rows = 10000
    mode = tail
    output_timestamp_format = yyyy-MM-dd HH:mm:ss
    query = select a.sequence_no, a.user_id, a.system_start_time, a.system_end_time,\
            a.log_type, a.exit_flag, a.terminal_id, a.branch_code, c.branch_name,\
            a.function_id, b.description, b.main_menu, b.sub_menu_1, b.sub_menu_2\
            from fccubt.smtb_sms_log a, fccubt.smtb_function_description b, fccubt.sttm_branch c\
            where a.function_id = b.function_id(+)\
            and a.branch_code = c.branch_code(+)\
            and 'ENG' = b.lang_code(+)
    source = dbx
    sourcetype = oracle:ub:fccubt_log
    tail_rising_column_checkpoint_value = 40296826
    tail_rising_column_fullname = (001) NULL.SEQUENCE_NO.NUMBER
    tail_rising_column_name = SEQUENCE_NO
    ui_query_catalog = NULL
    ui_query_mode = advanced
    ui_query_schema = OMEGACA
    ui_query_table = SYS_UNF_TRAIL

Question: how do we keep the extra, wrong SQL from reaching Oracle at all? As I said before, the wrong SQL does not get executed, but who knows if it might change its mind :-). Furthermore, no Oracle DBA wants to see a full table scan, even parse-only. I am at your disposal for further info. Best regards, Altin

Splunk unable to send email.

I have created an alert in Splunk which, when triggered, sends an email to a specified mail ID. But sadly, the mail is not getting sent. I checked the python.log file and found this: **(501, 'Syntactically invalid HELO argument(s)') while sending mail to** Has anyone encountered this before? What could be the reason for this issue? Any fixes? Thanks
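
For context, an assumption rather than a confirmed diagnosis: the HELO/EHLO argument is normally derived from the sending host's own hostname, and many SMTP servers reject names containing spaces, underscores, or other invalid characters. A quick sanity check on the Splunk server:

    # a syntactically valid name contains only letters, digits, dots, and hyphens
    hostname --fqdn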

Which port should be opened to connect a server in Azure (private network) to a Splunk indexer server?

I want to connect a server which is in Azure (a private network) to the Splunk indexer server. Which port should be opened in order to establish the connection?
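
For reference, a sketch of the conventional setup (9997 is the customary receiving port, not a requirement; management traffic uses 8089 by default). The Azure network would need this port opened toward the indexer:

    # inputs.conf on the indexer -- listen for forwarded data
    [splunktcp://9997]
    disabled = 0

    # outputs.conf on the Azure server's forwarder (hypothetical indexer address)
    [tcpout:primary_indexers]
    server = indexer.example.com:9997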

Auto Scheduling DBConnect output

I am trying to use DB Connect to copy data from Splunk to a database. These are the steps that I followed:

1. Create an identity
2. Set up a connection to the database
3. Create a DB Connect output from Data Lab > Outputs

I created the output following all the steps, and I also tried to schedule the output to automate transferring data from Splunk to the database, setting up a cron scheduling job. After the output is saved, the scheduler does not run, even though I set the properties to run that output every 10 minutes. What can be done to get the auto scheduler to run on Splunk and transfer Splunk data to the SQL Server database?
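
For what it's worth, the cron expression for "every 10 minutes" looks like this (where exactly it is entered depends on the DB Connect version's output settings):

    */10 * * * *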

Should I save scheduled processes as reports or alerts?

I have some scheduled queries whose only purpose is to maintain a lookup table (or maybe a summary index, after I figure out how to do those). Splunk only allows me to save these scheduled searches as either an alert or a report. Is there any advantage to choosing one over the other if I don't need reporting or alerting on the search?
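
For illustration, a minimal sketch of what such a search looks like on disk either way (savedsearches.conf; name, search, and schedule are hypothetical). A scheduled report is essentially a scheduled search with no alert actions attached:

    # savedsearches.conf
    [Maintain user lookup]
    search = index=app sourcetype=auth | stats count BY user | outputlookup user_counts.csv
    cron_schedule = 15 * * * *
    enableSched = 1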

Combine two searches using a lookup table

    index="client_index" AND Event_Type=6152
    | eval new=substr(audit_filename, 16, 14)
    | eval ip=mvindex(split(new, "_"), 0)
    | eval mvip=split(ip, ".")
    | eval site_ip_range=mvindex(mvip,0).".".mvindex(mvip,1).".".mvindex(mvip,2)
    | stats count BY site_ip_range
    | sort 0 site_ip_range
    | lookup siteusage.csv site_ip_range OUTPUT Site_Name Site_Number
    | eval status=1
    | append [ | inputlookup siteusage.csv | table site_ip_range | eval status=0 ]
    | stats max(status) AS status BY site_ip_range
    | where status=0

The first part of this query pulls all active IPs and shows me how many times they have logged in (Event_Type 6152); it then runs them against the lookup file siteusage.csv to give me the details associated with each IP. However, the results exclude "zero" usage. The second query provides "zero" usage, but excludes/overwrites the first query and ONLY provides the zero-usage report, with no details from siteusage.csv. My goal is to get a consolidated report, from zero usage to XXX logins, with the details from the siteusage.csv lookup table. I can't figure out how to combine them to do that.
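
A sketch of one way to consolidate the two halves (the aggregation choices are assumptions): carry `count` through, append zero-count rows that already include the lookup details, and collapse per site so the real counts win over the zeros:

    index="client_index" AND Event_Type=6152
    | eval site_ip_range=... (the substr/split/mvindex steps above)
    | stats count BY site_ip_range
    | lookup siteusage.csv site_ip_range OUTPUT Site_Name Site_Number
    | append [ | inputlookup siteusage.csv | table site_ip_range Site_Name Site_Number | eval count=0 ]
    | stats max(count) AS logins values(Site_Name) AS Site_Name values(Site_Number) AS Site_Number BY site_ip_range

Sites present only in the lookup end up with `logins=0`, and every row keeps its siteusage.csv details.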

Are lookup backup files supposed to replicate across servers?

Lookup File Editor with search head clustering: backup files not replicating. As the title says, I have a cluster with 3 servers. When user 1 saves a lookup, the lookup (CSV file) is replicated, but the backup file is not. If user 2 wants to restore user 1's backup, he can't (if he is on a different server than user 1). Is this normal behavior? I'm using Lookup Editor v2.5.0 and Splunk 6.6.1.