Channel: Questions in topic: "splunk-enterprise"

How to extract only successful and failed logins using regex?

Hello All, I have a file with the following data:

    --------------server1 2018-07-----SQL2008--
    Number of Success Logins:
    SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 13303433
    FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 258857
    Log_chat - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 214180
    FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 184989
    NT AUTHORITY\SYSTEM - WINDOWS AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 12684
    FOR0001\Login112 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 1166
    1SSA - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xx - server01.citytown01.alls.com - 841
    Log_chat - SQL SERVER AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 271
    FOR0001\Login114 - WINDOWS AUTHENTICATION - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 46
    SQLLSS01 - SQL SERVER AUTHENTICATION - xx.xxx.x.xxx - xxxxxxx.xxx.xxxx.com - 37
    SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - SQL SERVER AUTHENTICATION - ::1 - server01.citytown01.alls.com - 1
    Number of Failed Logins:
    Log_chat - - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 73
    FOR0001\Login118 - - xx.xxx.xxx.xxx - xxxxxxx.xxx.xxxx.com - 10
    Log_chat - - xx.xxx.xxx.xxx - server01.citytown01.alls.com - 3
    SOFTPOINTPERFOMANCEEXPERTLICENCEUSER - - xx.xxx.xxx.xx - server01.citytown01.alls.com - 1
    ------------------------------------------

I need to extract the Success Logins and then the Failed Logins. I tried

    rex "^\s+(?<Success_Login>\S+)" | eval New=Success_Login | stats count by New

but it extracts only the first login.
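
A sketch of one way to get all of them, assuming the report keeps one login per line as shown above (the section headings and `Success_Login` come from the sample; the other field names are assumptions): split the event into its two sections first, then pull every login from each section with `max_match=0`.

    your_base_search
    | rex "(?s)Number of Success Logins:(?<success_block>.+?)Number of Failed Logins:(?<failed_block>.+)"
    | rex field=success_block max_match=0 "(?m)^\s*(?<Success_Login>\S+)\s+-"
    | rex field=failed_block max_match=0 "(?m)^\s*(?<Failed_Login>\S+)\s+-"
    | eval success_count=mvcount(Success_Login), failed_count=mvcount(Failed_Login)

Without `max_match=0`, `rex` stops after the first match, which is why only the first login came back.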

Why is DBConnect for Sybase giving the following error "Connect error: no protocol"?

Hello Everyone, I am setting up database monitoring using DB Connect. It worked well for MSSQL, Oracle, and DB2, but Sybase refuses to cooperate. I am getting the following error:

    Connect error: no protocol: :myip:myport/databasename

In the drivers section it was recognised correctly as version 7.0. Any ideas on the issue? I am able to telnet to the host:port, so there are no network issues.
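
For reference, the error string starts with `:myip:...`, which suggests the JDBC URL is being assembled without its scheme. A jConnect (Sybase) JDBC URL normally takes this shape (host, port, and database are placeholders):

    jdbc:sybase:Tds:<host>:<port>/<database>

So it is worth checking the JDBC URL format configured for the Sybase connection type in DB Connect.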

How to rewrite this query to get percentage at each range?

    index=sample
    | eval Latency=case(walltime<500, "0-0.5s", walltime>=500 AND walltime<1000, "0.5s-1s", walltime>=1000 AND walltime<3000, "1s-3s", walltime>=3000 AND walltime<6000, "3s-6s", walltime>=6000 AND walltime<10000, "6s-10s", walltime>=10000 AND walltime<30000, "10s-30s", walltime>=30000, ">=30s")
    | eval Date=strftime(_time,"%d/%m/%Y")
    | chart count as RequestCount over Date by Latency

The above query gives me results in the format below:

    Date       | 0-0.5s | 0.5s-1s | 1s-3s | 3s-6s | 6s-10s | 10s-30s
    08/08/2018 | 12350  | 20095   | 5530  | 563   | 170    | 120
    09/08/2018 | 15350  | 10455   | 3430  | 1263  | 1010   | 10

I would like to represent these counts as percentages. How do I do the calculation? Please let me know.
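
One way to turn the counts into row-wise percentages, as a sketch appended after the chart command (it assumes the column names produced above): add a per-row total with `addtotals`, then divide each latency column by it with `foreach`.

    | chart count as RequestCount over Date by Latency
    | addtotals fieldname=Total
    | foreach "0-0.5s" "0.5s-1s" "1s-3s" "3s-6s" "6s-10s" "10s-30s" ">=30s" [ eval "<<FIELD>>" = round('<<FIELD>>' / Total * 100, 2) ]
    | fields - Total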

Splunk Add-on Builder - How to create an input that shows a list of indexes?

Hello, I have a requirement in a new app being built using Add-on Builder: create an input parameter called "choose index". This parameter should show the list of available indexes, from which a user selects one. I don't see this option among the Splunk Add-on Builder helper functions. However, the Splunk docs do show such an option in a picture just above the section called **Pass values from data input parameters**: https://docs.splunk.com/Documentation/AddonBuilder/2.2.0/UserGuide/ConfigureDataCollectionAdvanced Another picture from an add-on which has this feature: ![alt text][1] [1]: /storage/temp/255661-2018-08-09-182101.png Can anyone help me with this, please?
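
For what it's worth, the index list such a dropdown is populated from is exposed over REST, so it can be inspected from the search bar (assuming permission to the endpoint):

    | rest /services/data/indexes | fields title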

Can we forward a specific table of a DB to Splunk?

Is it possible to forward a specific table of a DB to Splunk? I understand that we can pull in the complete DB and create a dashboard to see the data we wish to, but I am more interested in understanding whether we can feed just one table to the forwarder. Many thanks in advance.
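
If the data comes in through Splunk DB Connect rather than a forwarder, an input is defined by a SQL statement, so limiting collection to a single table is straightforward. A sketch using the `dbxquery` search command (the connection and table names are placeholders):

    | dbxquery connection=my_connection query="SELECT * FROM my_table"

A scheduled DB Connect input works the same way, with its SELECT bound to that one table.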

How to build a summary index that uses eval statements to configure timechart results?

I am trying to build a summary index to pull a week-over-week comparison of specific applications. The query below works normally, but for efficiency reasons I would like to back it with a summary index. I am having trouble getting the results I want for the comparison: my results with `sitimechart` use the date and time the data was ingested into the summary index, which prevents my comparison method from working. The search over the summary index places events in a "NULL" column and does not follow the eval statements.

    index=1 host=1234 sourcetype=sourcetype application=app earliest=-2w@w latest=@w
    | eval marker = if(_time < relative_time(now(), "-1w@w"), "last week", "this week")
    | eval _time = if(marker=="last week", _time + 7*24*60*60, _time)
    | timechart count by marker cont=FALSE

See attached for the stats table. ![alt text][1] [1]: /storage/temp/255669-results.png
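
One workaround, sketched under the assumption that plain `stats` plus `collect` is acceptable in place of `sitimechart` (the summary index and source names are placeholders): have the scheduled search store hourly counts keyed to the original event time,

    index=1 host=1234 sourcetype=sourcetype application=app
    | bin _time span=1h
    | stats count by _time application
    | collect index=my_summary source=app_weekly

and apply the marker/shift evals at report time over the summary data:

    index=my_summary source=app_weekly earliest=-2w@w latest=@w
    | eval marker = if(_time < relative_time(now(), "-1w@w"), "last week", "this week")
    | eval _time = if(marker=="last week", _time + 7*24*60*60, _time)
    | timechart sum(count) by marker cont=FALSE

Because the evals run at report time against the preserved `_time`, the summary's index time never enters the comparison.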

Active Directory - Failed Login Events - SPL - Which is most efficient and why?

Community, new to Splunk, first post, your patience is appreciated. Also, thank you in advance. This post is focused on efficiency, effectiveness, accuracy, and understanding rather than "how to." I have three queries, each created by a different entity, and I am seeking to understand the difference in results and what's happening under the hood of the queries themselves.

Splunk-provided consultant's report:

    tag=authentication action=failure user!="*$*"
    | table _time src dest user
    | rename src as "Source Machine" dest as "Destination Machine"
    | stats count as Failure_Count by user
    | where count > 50
    | sort - count

Comment: 4-hour window, run time: 51 minutes; does not appear to complete fully. It is as if the search continues looking for the newest events and adds those to the results.

In-house created report:

    sourcetype="WinEventLog:Security" EventCode=4625 user!="*$*"
    | table _time src dest user
    | rename src as "Source Machine" dest as "Destination Machine"
    | stats count as Failure_Count by user
    | sort - Failure_Count
    | where Failure_Count > 50

Comment: 24-hour window, run time: less than 30 seconds. However, this yields thousands of EventCode 4625 events per identified user, yet the results do not match the number of user account lockouts. Research shows that 90%+ of these events are against a server rather than Active Directory; hence no account lockouts, I'd wager.

Dashboard panel from the Splunk App for Windows Infrastructure:

    eventtype=msad-failed-user-logons (host="*")
    | fields signature,src_ip,src_host,src_nt_host,src_nt_domain,user,Logon_Type,host
    | join max=0 host [ | search eventtype=msad-dc-health (ForestName="*") (Site="*") (DomainDNSName="*") | dedup host | table host]
    | `ip-to-host`
    | stats count by user,src_nt_domain
    | sort -count
    | rename user as "Username", src_nt_domain as "Domain"

Comment: 24-hour window, run time: 3:39, and it yielded roughly 200 events, which appear to match the user account lockout numbers.

The Windows Infrastructure version uses an eventtype. Where did this come from? It is clear that there is different methodology, and accuracy, in each of these queries. What I do not understand is what exactly the Windows Infrastructure version is doing. There is a lot of documentation, examples, webinars, and comments, even within this forum, which indicate to use the index and sourcetype to narrow the search criteria.
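
On the eventtype question: `msad-failed-user-logons` is defined in an eventtypes.conf shipped with the Splunk App for Windows Infrastructure (or its supporting add-ons), and it expands into an underlying search at search time. One way to see what it expands to, as a sketch (assumes REST access and that the eventtype is shared with you):

    | rest /services/saved/eventtypes | search title="msad-failed-user-logons" | table title search

Comparing that expansion against the two hand-written searches is usually the quickest way to explain the differing results.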

Skip messages starting with an integer in Splunk

I am creating a query to get a count by message type, but I want to skip some messages that are not valid. Some of the messages start like "-100" or "Data ...". I want to skip them while counting the messages. To get the count I am using the query below:

    eventtype=logs | stats count as Total by message | rename message AS "Type"

The message field has data like this:

    Data nprops 5 1
    Data props 0
    -102
    1432 sql error

I want to skip all messages which start with a positive or negative number, as well as those which start with "Data".
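
A sketch of the filter with the `regex` command ahead of the stats; the pattern assumes "starts with a number" means an optional minus sign followed by a digit:

    eventtype=logs
    | regex message!="^(-?\d|Data)"
    | stats count as Total by message
    | rename message AS "Type"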

Coalesce in transforms

Hello, I am working with some Apache logs that _can_ go through one or more proxies; when a request goes through a proxy, an X-Forwarded-For header is added. The problem is that the Apache logs show the client IP as the last address the request came from. The logs do, however, append the X-Forwarded-For entries to the end of the log entry if they exist. What I need to do is get the clientip field updated via transforms to the correct address so that the web analytics app gets the correct data. The following search shows an example of the goal:

    index=weblogs | rex field=other "^(?<first_forward>[0-9\.]+)" | eval clientip=coalesce(first_forward, clientip)

The _other_ field is already extracted and contains a comma-separated list of the X-Forwarded-For entries. I see two options for solving this, unless there is some magic way to do evals in transforms/props:

1) I could create a regex to extract the values in transforms, but I'm not sure how to coalesce them in transforms/props.
2) Create a macro that does the job, but then I would need to update every search in the app, which would make updating the app painful.

Any ideas?
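
There is in fact a way to do the eval in props: calculated fields. A sketch in props.conf, assuming the sourcetype is `apache:access` (substitute your own) and that `other` is extracted at search time as described:

    [apache:access]
    EXTRACT-first_forward = ^(?<first_forward>[0-9.]+) in other
    EVAL-clientip = coalesce(first_forward, clientip)

`EVAL-` definitions are applied after field extractions, so `first_forward` is available, and the calculated `clientip` replaces the extracted one for every search in the app, with no macro needed.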

Compare Fields from Different Indexes and display only the duplicates.

Hi, I have two searches:

    index=windows EventCode=1234 Logon_Type=8 | table host | dedup host

and

    index=iis host=* | table host | dedup host

How do I combine these queries to display only the hosts which have that particular EventCode and Logon_Type and also appear in the IIS index? Thanks in advance.
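
One common pattern for this, as a sketch: search both indexes at once and keep only the hosts seen in both.

    (index=windows EventCode=1234 Logon_Type=8) OR (index=iis)
    | stats dc(index) AS index_count by host
    | where index_count=2
    | fields host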

Working to set up the Network Toolkit on Windows. Any installation or configuration guides?

I created the inputs.conf for ping but get an error about the format when Splunk starts. I am using the format:

    [ping://192.168.0.62]
    hosts = 192.168.0.62
    interval = 30s
    runs = 1

It fails on the hosts line.
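
Since the complaint is specifically about the `hosts` line, it is worth checking that parameter name against the app's README/inputs.conf.spec; splunkd rejects modular-input parameters the spec does not declare (whether this app's ping input calls the parameter `hosts` or something else is an assumption to verify). Core Splunk can run that validation for you:

    $SPLUNK_HOME/bin/splunk btool check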

Why are some of the values not visible when I use table?

    index=** sourcetype=**** location=00000
    | bin _time span=1d
    | rex "\[Id=(?<IDValue>[^\,]*?),[\s ].*?,[\s ]percentage=(?<percentageValue>[^\,]*?),[\s ].*?,[\s ]location=(?<locationValue>[^\,]*?)," max_match=0
    | fields *
    | stats avg(percentageValue) AS avgpred, stdevp(percentageValue) AS lstdev, var(percentageValue) AS varpf by locationValue, IDValue, _time
    | table _time locationValue, IDValue, percentageValue

But I am not able to see percentageValue and locationValue in the table.
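
A likely cause, sketched from the query alone: `stats` passes through only its own output fields, so after the stats line there is no `percentageValue` left to table; only `locationValue`, `IDValue`, `_time`, and the three aggregates survive. Referencing the aggregated names in the final command should fill the table:

    | table _time locationValue IDValue avgpred lstdev varpf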

Regex - Filtering out unwanted events doesn't work

Raw Cisco WSA squid event:

    1533849492.277 0 192.168.1.11 TCP_DENIED/307 0 GET http://detectportal.firefox.com/success.txt - NONE/- - OTHER-NONE-AuthenticatedUsers-NONE-NONE-NONE-NONE <-,-,-,"-",-,-,-,-,"-",-,-,-,"-",-,-,"-","-",-,-,-,-,"-","-","-","-","-","-",0.00,0,-,"-","-",-,"-",-,-,"-","-",-,-,"-"> -

**props.conf**

    [cisco:wsa:squid]
    TRANSFORMS-null = tcpdenied307-firefox

**transforms.conf**

    [tcpdenied307-firefox]
    REGEX = .+(TCP_DENIED).+(307).+(detectportal\.firefox\.com).+
    DEST_KEY = queue
    FORMAT = nullQueue

Any ideas why my REGEX doesn't work?
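
The pattern itself looks like it should match the sample event, so two deployment details are worth checking (assumptions, since the topology isn't described): the props/transforms pair must live on the first parsing tier (indexer or heavy forwarder), not on a universal forwarder or search head, and the stanza only fires if events still carry the `cisco:wsa:squid` sourcetype when parsed. A leaner equivalent regex, as a sketch:

    [tcpdenied307-firefox]
    REGEX = TCP_DENIED/307\s.+detectportal\.firefox\.com
    DEST_KEY = queue
    FORMAT = nullQueue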

Re-use host field in Timechart for count aggregation

I am attempting to create a dynamic timecharted trellis dashboard panel that only shows an aggregation by host based on which host values are present in the main search. As an example, the search below drives two trellis panels, split by sourcetype, using statically assigned hostnames:

    index=* sourcetype=* host=host1 OR host=host2
    | timechart span=1s count(eval(host == "host1")) as "host1" count(eval(host == "host2")) as "host2" count by sourcetype

What I would like is for the number of trellis panels (aggregated by host) to shrink or grow based on the number of hosts listed in the primary search. Programmatically this would be something like a for loop over the host aggregation to create multiple panels, depending on the number of host values present, i.e.:

    index=* sourcetype=* host=host1 OR host=host2
    | timechart span=1s count(eval(host == "<host>")) as "<host>" count by sourcetype

with the expanded search evaluating to something like the below, assuming 3 hosts:

    index=* sourcetype=* host=host1 OR host=host2 OR host=host3
    | timechart span=1s count(eval(host == "host1")) as "host1" count(eval(host == "host2")) as "host2" count(eval(host == "host3")) as "host3" count by sourcetype

Any help would be appreciated! Thanks.
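
One way to get panels that track whatever hosts the search returns, without enumerating them, as a sketch: fold host into the split field itself instead of writing per-host eval columns.

    index=* sourcetype=* (host=host1 OR host=host2)
    | eval series = host . ":" . sourcetype
    | timechart span=1s count by series

With the trellis split on the aggregation, a panel appears per series value, so adding a host to the base search grows the panel set automatically.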

Configuration stanza precedence vs Configuration file location precedence?

For props.conf, which has the highest precedence? In the documentation they say that `[source::<source>]` settings override both `[host::<host>]` and `[<sourcetype>]` settings.

1) props.conf in etc/system/local:

    [sourcetype1]
    TIME_FORMAT=.....
    ...

2) props.conf in etc/apps/app1/local:

    [source::....]
    TIME_FORMAT=.....
    ...

We know system/local has higher priority than app1/local, but `[source::<source>]` has higher priority than `[<sourcetype>]`. Could you please let me know which TIME_FORMAT setting will be applied: the one from system/local or from app1/local? Is configuration file location precedence higher, or stanza type precedence?
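
Whatever the rules say on paper, btool shows which setting actually wins on a given instance, and `--debug` prints the file each surviving value came from (sourcetype name as in the example above):

    $SPLUNK_HOME/bin/splunk btool props list sourcetype1 --debug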

Splunk Sourcetype wildcard entries

Hi, I have an input with sourcetype `eventlog`. In props.conf, if I use the sourcetype as below to define settings, it works:

    [eventlog]
    ...

But if I use wildcards as below, my input is not parsed according to the configurations defined under the stanza:

    [eventlog*]
    ...

May I know whether there is a reason?
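
For what it's worth, props.conf matches sourcetype stanza names literally; wildcard patterns are only honored in `[source::...]` and `[host::...]` stanzas. So a pattern-based workaround keys off the source instead, sketched here with a hypothetical path pattern:

    [source::...eventlog...]
    ...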

Best way to monitor file transfers across multiple servers without indexing file contents

Hi Splunk community, I need to monitor file transfers from server to server across different directories. I do not need to know the file content, only the time the file appears on each server as well as the file size. Eventually, we want to show whether the number of files in the source directory and the destination directory tallies, and whether there is a bottleneck in the file transfer process. Also, file transfers occur at any time during the day, not at regular intervals. I'd appreciate your advice on my use case. Many thanks in advance.
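
One low-overhead approach, as a sketch (the script name, directories, and index are hypothetical): run a scripted input on each server that periodically emits one event per file with its name, size, and modification time, so only metadata is indexed, never contents.

    # inputs.conf on each monitored server
    [script://./bin/list_files.sh]
    interval = 60
    sourcetype = file_inventory
    index = file_transfers
    disabled = 0

A `stats` comparison by file name across hosts then shows whether source and destination counts tally and where transfers lag.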