Channel: Questions in topic: "splunk-enterprise"

How do people actually use the data ingested by the Btool Scripted Inputs for Splunk app?

The admins have set up the app to ingest the btool output, but with each property being a separate event, I don't see a way to make use of it: I can't tell which stanza a particular line belongs to. Or is the app misconfigured?

How to configure a search for metadata

I have a number of Jenkins jobs for which I would like to create a dashboard with search inputs (pull-downs, form fills). The searching would be on the metadata held within each job. For example, one piece of metadata is a field the Jenkins user filled out called "squad name". If I just search for one of the squad names I know is in there, SquadNameJimDoodle, I get the following result:

```
build_number: 544
build_url: job/Release_Candidate/job/docker-dist-load-test-deploy/job/test/job/jmeter-docker-test
event_tag: build_report
job_name: job/job/Release_Candidate/job/docker-dist-load-test-deploy/job/test/job/jmeter-docker-test/
job_result: SUCCESS
metadata: { [-]
  FUNCTIONAL_AREA: Digital
  JMX_FILE: Sample-Test-Plan/sendMessageTest.jmx
  REMOTE_BRANCH: EEOTS-5691-Update-PEPT-Template-with-Functional-Domain-Field
  REQUIRED_LGS: 1
  SQUAD_NAME: SquadNameJimDoodle
  STACK_NAME: Jimmystack
  TEST_REPO_BRANCH: Branch
  TEST_REPO_URL: https://test_repo
}
page_num: 1
testsuite: { [+] }
user: me
```

As you can see, the metadata field SQUAD_NAME is where the value SquadNameJimDoodle is held. The other fields I need to search on are also in this "metadata" area. I can't figure out how to build the query to search on them. Any help appreciated. Jim
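Since the metadata is nested JSON, Splunk typically exposes those values as dotted field names. A minimal sketch, assuming the events are JSON-extracted; the index and sourcetype names are placeholders:

```
index=jenkins sourcetype=build_report metadata.SQUAD_NAME="SquadNameJimDoodle"
| table build_number job_name job_result metadata.SQUAD_NAME metadata.FUNCTIONAL_AREA
```

If the dotted fields are not auto-extracted, `spath` can pull them out explicitly:

```
index=jenkins sourcetype=build_report
| spath path=metadata.SQUAD_NAME output=squad_name
| search squad_name="SquadNameJimDoodle"
```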

Universal Forwarder on Windows Server 2019 Core in another domain

Is Windows Server 2019 Core supported for the universal forwarder? I need to install the universal forwarder in another domain to get security logs from a domain controller. What domain account would I need to set up?

How to set the data retention in Splunk?

Where and how can I set the data retention in Splunk? I have seen there are many places it can be set, like the telemetry and main indexes, etc., so it's really not clear...
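For reference, retention is normally set per index in indexes.conf via `frozenTimePeriodInSecs`. A minimal sketch for the main index, assuming a 90-day retention target:

```
# indexes.conf on the indexer(s)
[main]
# 90 days * 86400 seconds; events older than this roll to frozen
# (frozen data is deleted unless a coldToFrozenDir is configured)
frozenTimePeriodInSecs = 7776000
```

Size-based retention (`maxTotalDataSizeMB`) applies in parallel; whichever limit an index hits first triggers the roll to frozen.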

dnslookup very slow, odd results.

(Splunk 7.2.3) I have a single Windows domain. Inside that domain I have 2 subnets, 192.168.1.x and 192.168.2.x, with 19 hosts spread across them. All devices report their `host` as an IP address, not a hostname. I ran the search (using the built-in dnslookup):

```
index=x | dedup host | lookup dnslookup clientip AS host OUTPUT clienthost AS hostname | table host forwarder hostname
```

And I have some issues with the results:

Issue 1) When I inspect the search job, the `command.lookup` portion takes 217 seconds. This is searching the entire index of ~200 logs across the past week. The same search with no lookup takes about 3 seconds to display the results. I cannot find any logs relating to this delay, at least not in the /var/log/splunk directory. No timeouts or anything.

Issue 2) Of my 19 hosts in the result table, only 9 actually have a `hostname` field. Closer inspection reveals that the missing hosts are all on the same subnet, 192.168.2.x. For some reason, only one subnet is being resolved. I have 2 search heads, one at each location, so I ran the same search from the other search head (192.168.2.x). Opposite results: all the 192.168.1.x hosts are missing the lookup data. When I run a basic nslookup command from a workstation, the results and response time are identical for either subnet. So I assume the script is doing something else, but I am not entirely sure which script is running this nslookup. Is my search taking long because of timeouts for the failing subnet? I changed my search to specify only one subnet's worth of devices:

```
index=x subnet=1 | dedup host | lookup dnslookup clientip AS host OUTPUT clienthost AS hostname | table host forwarder hostname
```

This returns all 9 subnet-1 host IPs with corresponding hostnames, but the search still took 71 seconds. So timeouts may have had a small part to play, but they are definitely not the full culprit.
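For context, the stock `dnslookup` is an external (scripted) lookup, normally backed by `external_lookup.py` in `$SPLUNK_HOME/etc/system/bin` on the search head, so each search head resolves names with its own OS resolver settings. A minimal sketch to test resolution of a single address per search head (the IP is a placeholder):

```
| makeresults
| eval host="192.168.2.10"
| lookup dnslookup clientip AS host OUTPUT clienthost AS hostname
```

Running this on each search head should quickly show whether the 217 seconds comes from per-host resolver timeouts on that machine rather than from the search itself.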

Time Picker in the Dashboard not working as expected.

We have a dashboard. When we select a time period, say 11/13/19 (9 am to 11 am), the results display for 11/13/19 (8 am to 10 am), and they consist of all zeros between 8 and 9 am while there are values from 9 to 10 am. I think the zeros are displaying because it's not the correct time range; I don't think it's a timezone issue. How can we fix the time picker for the dashboard so that it only displays the results for the selected time period (9 am to 11 am) with no zeros? Here are the token evals used in the source code:

```
if(isstr(earliest), relative_time(now(), earliest), earliest)
if(isstr(latest), relative_time(now(), latest), latest)
relative_time(earliestTime, "-7d")
relative_time(latestTime, "-7d")
relative_time(earliestTime, "-14d")
relative_time(latestTime, "-14d")
relative_time(earliestTime, "-21d")
relative_time(latestTime, "-21d")
relative_time(earliestTime, "-28d")
relative_time(latestTime, "-28d")
```
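For reference, derived tokens like these are normally consumed in each panel's search block. A minimal Simple XML sketch with a placeholder query, assuming the evals above populate `earliestTime`/`latestTime`:

```
<search>
  <query>index=foo sourcetype=bar | timechart span=1h count</query>
  <earliest>$earliestTime$</earliest>
  <latest>$latestTime$</latest>
</search>
```

If a panel instead hard-codes a snapped range (for example an `@h`-snapped earliest, or one of the `-7d`/`-14d` offsets above applied to the wrong token), the displayed window can start an hour before the picker's selection, which would match the 8 am start and the leading zeros described.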

Alternative to subsearch to search more than a million entries

Hi, I have a subsearch that gives me the required results but is dead slow in doing so. I have more than a million log entries to search, which is why I am looking for an optimized solution. I have gone through answers to similar questions but have not been able to achieve what I need.

I have a log with transactions against an Entry_ID, which always has a main entry and may or may not have a subEntry. I want to find the count by version number for all the mainEntry logs that have a subEntry. Sample query that I used:

```
index=index_a [search index=index_a ENTRY_FIELD="subEntry" | fields Entry_ID] Entry_FIELD="mainEntry" | stats count by version
```

Sample data (index=index_a):

```
1) Entry_ID=abcd  Entry_FIELD="mainEntry" version=1
   Entry_ID=abcd  ENTRY_FIELD="subEntry"
2) Entry_ID=1234  Entry_FIELD="mainEntry" version=1
3) Entry_ID=xyz   Entry_FIELD="mainEntry" version=2
4) Entry_ID=lmnop Entry_FIELD="mainEntry" version=1
   Entry_ID=lmnop ENTRY_FIELD="subEntry"
5) Entry_ID=ab123 Entry_FIELD="mainEntry" version=3
   Entry_ID=ab123 ENTRY_FIELD="subEntry"
```

Please help in optimizing this.
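One common subsearch-free pattern is to pull both entry types in a single pass and group by Entry_ID with `stats`. A sketch, assuming `Entry_FIELD` and `ENTRY_FIELD` are the same extracted field in inconsistent case (the `coalesce` covers both spellings):

```
index=index_a (Entry_FIELD="mainEntry" OR ENTRY_FIELD="subEntry")
| eval entry_type=coalesce(Entry_FIELD, ENTRY_FIELD)
| stats values(entry_type) AS entry_types, values(version) AS version BY Entry_ID
| search entry_types="mainEntry" AND entry_types="subEntry"
| stats count BY version
```

This scans the index once and avoids the subsearch result-count and runtime limits, which silently truncate output at the scale described.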

How to differentiate fields with the same name but different values

I have log messages that have the same field names, and I am trying to create a table for the dashboard. My messages are:

```
{ Message: "App Started"  Timestamp: 2019-11-13 23:15:16.436156 },
{ Message: "App Stopped"  Timestamp: 2019-11-13 23:15:18.536156 }
```

I need to create a table with Message, start time, and stop time. Since both messages have the same field name, Timestamp, how can I eval and differentiate them? I tried using if(Message="App Stopped") but it always gets me the same value for both fields.
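A sketch of the usual pattern: copy Timestamp into separate fields per event depending on Message, then aggregate (the index name is a placeholder; add a BY clause, e.g. an app or session ID, if several apps are mixed together):

```
index=app_logs
| eval start_time=if(Message="App Started", Timestamp, null())
| eval stop_time=if(Message="App Stopped", Timestamp, null())
| stats values(start_time) AS "Start time", values(stop_time) AS "Stop time"
```

The key point is that the `if()` runs per event, before `stats`, so each event contributes its Timestamp to only one of the two new fields.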

Show all panels' output in a single panel in a dashboard

Hi Splunkers, I have 6 panels in my dashboard. All the panels have different underlying queries, but the output fields in each panel's stats table are the same, and the results in all the panels look like the sample table below. I want to combine all the results into a single panel/table at the end, so that I display just one panel containing the results from all the other panels (see the sketch after the table). Thank you.

```
user  action  time    object  group    difference  modifier
zbc   xyz     10-Sep  hddh    dj-dhdh  6           jhyy
dhdh  cnnc    10-Sep  fhfhf   jjj-ggg  8           gg
```
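One way is a single search in the final panel that unions the six queries. A sketch, assuming each panel's query has been saved as a macro (the `panelN_query` names are placeholders for the real queries):

```
| union
    [search `panel1_query`]
    [search `panel2_query`]
    [search `panel3_query`]
    [search `panel4_query`]
    [search `panel5_query`]
    [search `panel6_query`]
| table user action time object group difference modifier
```

`append` after the first search achieves the same result if `union` is not available in your version.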

Export to CSV not showing up in Splunk dashboard

Hi guys, I have a dashboard with panels. I'm trying to export the dashboard results to a CSV file, but I only see Export to PDF, which is not very useful. Can someone help me enable an export-to-CSV option, or export all the panel results into a single CSV file? I want the CSV to be downloaded to my local machine. Regards, Kranthi M

Assistance with Windows Firewall Logs

Hello, I'm fairly new to Splunk and am trying to collect local Windows Firewall logs so they can be automatically indexed by Splunk. The Universal Forwarder is installed, and I validated that event logs are being indexed. After some research, I found the Technology Add-On for Windows Firewall. The instructions in the add-on were not clear, but I followed them to the best of my ability, extracting the contents of the add-on to C:\program files\splunkuniversalforwarder\etc\apps\TA-winfw-master (then several subdirectories under that). I also modified the inputs.conf file under etc\system\local, and it currently shows as this:

```
[default]
host = myserver

[monitor://C:\Windows\system32\LogFiles\Firewall\pfirewall.log]
disabled = false
sourcetype = winfw
```

The Windows Firewall is configured properly, and I validated that logs are showing up in pfirewall.log. I stopped/restarted the universal forwarder service, yet I am still not getting the firewall logs, even after generating new traffic. I search for sourcetype=winfw and get no results. I suspect I'm missing something rather simple, but I can't seem to figure it out. Thank you in advance...
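A quick way to confirm the forwarder actually merged the monitor stanza is btool, run on the forwarder itself (path per the post above):

```
"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug | findstr /i pfirewall
```

If the stanza shows up there, the next place to look is `index=_internal` on the indexer for TailReader/TailingProcessor messages about pfirewall.log, which usually say whether the file was found and read.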

How to end a rex search with multiple characters or a string such as } }?


Sample data:


{ "active" : "Y“, “locationID" : 75942068, "existsFlag" : true, "manuallyUnarchived" : false, "pendingReminder" : false, "headerOperationType" : "TRN“, “headerCreationDateString" : "2019111307255700“, “headerCreationDateEpoch" : "1573651557“, “jobs" : [ { "jobNumber" : "RWERQ70“, “jobVendorNum" : "ACME“, “jobAcknowledgementDateString" : "2019-11-08:10:42“, “jobAcknowledgementDateEpoch" : "1573231320“, “jobPodDateString" : "2019-11-13:05:44“, “jobPodDateEpoch" : "1573645440“, “jobShipDateString" : "2019-11-08:11:20“, “jobShipDateEpoch" : "1573233600“, “jobStatusCode" : "DELIVERED“, “jobPartNumbers" : [ { "skuMfgNbr" : "AS3452“, “quantity" : 1 } ], "partShippedDescription" : "SHP142SVC" } ], "comments" : [ { "commentType" : "PRB“, “commentDateEpoch" : "1573192800000“, “arrivalWindowStart" : 1573477200000, "arrivalWindowEnd" : 1573858740000, "avsUsed" : "N“, “laborStatusCode" : "ETA Provided“, “partStatusCode" : "Delivered“, “owner" : { "businessUnit" : 0, "certifiedFlag" : false, "techId" : 0 }, "environment" : "None“, “subEnvironment" : "Other“, “shortComment" : "TechDirection : Other“, “dispatchCreationDateEpoch" : "1573230503“, “serviceAttributes" : { "ServiceType" : "FixerUpper“, “OutofHours" : "N“, “OutofWarranty" : "N“, “ServiceHours" : "10x1“, “ADOverrideRequest" : "N" } }, "address" : { "address" : "1 Main St“, “address1" : "1 Main St“, “city" : "Nowhere“, “country" : "US“, “postalCode" : "12345" }

I need a field containing all the text from "activity" all the way to `} },` (the two curly brackets separated by a space and followed by a comma, located right before the "address" field). I could do this if there were a single terminator character ( } ), as in the example below, but that would only give me half of the data needed. I need a rex that gives me all the data between "activity" and the `} }` (the two curly brackets indicate the end of the main field).

This works: `| rex field=_raw "\"activity\"(?<ACTIVITY_FIELDS>[^\}]+)"`

This is what I need, but it does not work: `| rex field=_raw "\"activity\"(?<ACTIVITY_FIELDS>[^\}\s\}]+)"`
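A character class like `[^\}\s\}]` can only exclude single characters, so it cannot express a multi-character terminator. A lazy quantifier that stops at the first literal `} },` is the usual fix; a sketch (the `(?s)` lets `.` cross newlines in case the raw event is multi-line):

```
| rex field=_raw "(?s)\"activity\"(?<ACTIVITY_FIELDS>.+?)\}\s\},"
```

The `.+?` matches as little as possible, so the capture ends at the first `} },` rather than the last.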

Thanks for any assistance provided.

Splunk Practice Environment

I'd like to set up a practice Splunk environment so that I can practice various install methods of Splunk (clustering, distributed, standalone all-in-one, etc.). I have chosen Linux as my OS build for all of my EC2 instances on AWS, but I am unsure whether it would just be easier to set up a 4- or 5-instance environment (monitoring console, forwarder, 2 indexers, search head) in VirtualBox. Going that route, I know I would need a crap-ton of memory and CPU on each virtual image in order to meet the Splunk minimum specs. I just need some good recommendations on the best environment for setting up a solid Splunk learning environment that I can practice in. Thanks for your help.

How can I make a single report from two CSV files?

```
| inputlookup SF_Week41.csv
| fields OpenedDate,ReOpenCount,LastModifiedDate,ResolvedDate,Age(Hours),CaseAge,ClosedDate,CustomerpendingTime,LastCurrentOwnerUpdateDateTime,CaseLastModifiedDate,CaseNumber,Status,OwnerL4,LastResolvedDateassubmit,CumulativeTime(L4),AssignedDateTime(L4),OwnerQueueL4,OwnerLevel,CaseOwner,IssueDefectType,IssueType,IssueSubType,AccountName
| eval OpenedWeek=strftime(strptime(OpenedDate,"%m/%d/%Y"),"%V")
| eval LastModifiedWeek=strftime(strptime(LastModifiedDate,"%m/%d/%Y"),"%V")
| eval ClosedWeek=strftime(strptime(ClosedDate,"%m/%d/%Y"),"%V")
| eval ResolvedWeek=strftime(strptime(ResolvedDate,"%m/%d/%Y"),"%V")
| eval Morethan30Days = if(CaseAge>30,"Yes","No")
| fields Morethan30Days CaseNumber Status OpenedDate OpenedWeek ResolvedDate ResolvedWeek LastModifiedDate LastModifiedWeek ClosedDate ClosedWeek CaseAge
| where Morethan30Days="Yes" AND (Status="Customer Pending" OR Status="In Progress")
| stats count by Status
| addcoltotals count labelfield="Status" label="Total"
| rename count as Week41
```

Output is:

```
Status            Week41
Customer Pending  38
In Progress       66
Total             104
```

The second query is identical but reads SF.csv and ends with `| rename count as Week46`:

```
| inputlookup SF.csv
| fields OpenedDate,ReOpenCount,LastModifiedDate,ResolvedDate,Age(Hours),CaseAge,ClosedDate,CustomerpendingTime,LastCurrentOwnerUpdateDateTime,CaseLastModifiedDate,CaseNumber,Status,OwnerL4,LastResolvedDateassubmit,CumulativeTime(L4),AssignedDateTime(L4),OwnerQueueL4,OwnerLevel,CaseOwner,IssueDefectType,IssueType,IssueSubType,AccountName
| eval OpenedWeek=strftime(strptime(OpenedDate,"%m/%d/%Y"),"%V")
| eval LastModifiedWeek=strftime(strptime(LastModifiedDate,"%m/%d/%Y"),"%V")
| eval ClosedWeek=strftime(strptime(ClosedDate,"%m/%d/%Y"),"%V")
| eval ResolvedWeek=strftime(strptime(ResolvedDate,"%m/%d/%Y"),"%V")
| eval Morethan30Days = if(CaseAge>30,"Yes","No")
| fields Morethan30Days CaseNumber Status OpenedDate OpenedWeek ResolvedDate ResolvedWeek LastModifiedDate LastModifiedWeek ClosedDate ClosedWeek CaseAge
| where Morethan30Days="Yes" AND (Status="Customer Pending" OR Status="In Progress")
| stats count by Status
| addcoltotals count labelfield="Status" label="Total"
| rename count as Week46
```

Output is:

```
Status            Week46
Customer Pending  38
In Progress       62
Total             100
```

Is it possible to get both of them in one report like below, using append or some other command (see the sketch after the expected output)?

Expected output:

```
Status            Week46  Week41
Customer Pending  38      38
In Progress       62      66
Total             100     104
```
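A sketch of one way to stitch the two together with append plus a final stats (the `...` stands for the shared eval/where pipeline shown above, applied unchanged to each lookup):

```
| inputlookup SF.csv
| ...
| stats count AS Week46 BY Status
| append
    [| inputlookup SF_Week41.csv
     | ...
     | stats count AS Week41 BY Status]
| stats values(Week46) AS Week46, values(Week41) AS Week41 BY Status
| addcoltotals Week46 Week41 labelfield=Status label=Total
```

Running `addcoltotals` once at the end, instead of inside each branch, keeps the Total row from being double-counted after the merge.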

UF not forwarding logs from Windows server

I am not receiving any logs from a Windows device. In the internal logs I can see the error below:

```
ERROR ExecProcessor - message from ""c:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - Did not bind to the closest Domain Controller, a further domain controller has been bound.
```

Requesting your kind help...

Simplest method of writing syslog messages

What is the simplest method of writing syslog messages? What technology do I have to use to receive syslog messages on the UF server and write them into a file? Ideally a free product that has almost all the features required for Splunk, like filtering, etc.
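The usual free choices are rsyslog or syslog-ng running alongside the UF, with the UF monitoring the output files. A minimal rsyslog sketch, assuming UDP port 514 and a per-sender output directory (both are placeholders):

```
# e.g. /etc/rsyslog.d/30-remote.conf (file name is illustrative)
module(load="imudp")             # load the UDP syslog listener
input(type="imudp" port="514")   # receive syslog on UDP 514

# write each sending host's messages to its own file for the UF to monitor
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
*.* action(type="omfile" dynaFile="PerHostFile")
```

Both tools support filtering; in rsyslog this is done with property-based filters or RainerScript `if` conditions before the `action()`.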

Compare the login IP with the last login or the previous 7 days to find abnormal logins

Hello everyone. I have an alert requirement. An administrator has logged in to a device, and I want to compare his current IP address with the one from the last login or from the previous 7 days; if it is different, then alert. However, there are multiple administrator accounts, and the fixed IP address used by each administrator may also be different. For example, `admin` often uses IP `2.2.2.2` to log in to the device, and `admin2` often uses IP `3.3.3.3`. On November 14, 2019, these two administrators used a different IP to log in than usual. I consider this abnormal behavior, whether they log in successfully or fail.

```
_time                account  src_ip   status
2019/11/14 14:30:00  admin2   4.4.4.4  Failed
2019/11/14 14:00:00  admin    1.1.1.1  success
2019/11/14 09:00:00  admin    2.2.2.2  success
2019/11/13 09:00:00  admin2   3.3.3.3  success
2019/11/13 08:00:00  admin    2.2.2.2  success
2019/11/12 11:00:00  admin    2.2.2.2  success
2019/11/11 10:00:00  admin    2.2.2.2  success
2019/11/10 00:00:00  admin    2.2.2.2  success
2019/11/09 09:00:00  admin2   3.3.3.3  Failed
2019/11/08 09:00:00  admin2   3.3.3.3  success
```

![alt text][1]

How should I write this SPL and configure the alert? I want to check the login log every 5 minutes and compare the login IP with that of the previous 7 days or the last login. All help will be appreciated.

[1]: /storage/temp/275144-pic.png
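A sketch of one approach for a 5-minute schedule over a 7-day window (the index name is a placeholder; field names follow the sample above): flag account/IP pairs first seen in the last 5 minutes for accounts that already have older history.

```
index=auth_logs earliest=-7d
| eventstats min(_time) AS account_first_seen BY account
| stats min(_time) AS ip_first_seen, max(account_first_seen) AS account_first_seen, latest(status) AS status BY account, src_ip
| where ip_first_seen >= relative_time(now(), "-5m") AND account_first_seen < relative_time(now(), "-5m")
```

Scheduled every 5 minutes with the trigger condition "number of results > 0", this fires once per new account/IP pair; since `status` is not filtered, both successful and failed logins count, matching the requirement.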

ldapsearch not returning list of all AD groups and users

I'm trying to create a lookup of the domain, AD group, and user using the `ldapsearch` command from the Active Directory add-on. The query below is scheduled as a report and generates the lookup. If I manually verify the data, some groups, and all users from those groups, are missing from the lookup.

```
| ldapsearch domain="test_domain" search="(&(objectClass=group))" attrs="sAMAccountName,member,groupType,sAMAccountType"
| search groupType=SECURITY_ENABLED
| spath
| rename sAMAccountName as sAMAccountName1
| mvexpand member
| ldapfetch domain="test_domain" dn="member" attrs="sAMAccountName,distinguishedName"
```

If I include the group names in the query, it generates the required lookup for the specified groups only:

```
| ldapsearch domain="test_domain" search="(&(objectClass=group)(|(cn=grp_prefix1*)(cn=grp_prefix2*)))" attrs="sAMAccountName,member,groupType,sAMAccountType"
| search groupType=SECURITY_ENABLED
| spath
| rename sAMAccountName as sAMAccountName1
| mvexpand member
| ldapfetch domain="test_domain" dn="member" attrs="sAMAccountName,distinguishedName"
```

I'm not able to figure out why the first query is not returning results for particular groups. I have also checked that the groups are not being ignored or skipped in the lookup due to some limit or alphabetical ordering. Let me know if any other details are required.

How to search an index with a condition from another index

Hi, I have 2 different indexes.

Index1:

```
_time                Fehlermeldungtext
2019-07-01 22:01:30  Streckenüberwachung Auslauf!
2019-09-09 04:28:56  Streckenüberwachung Auslauf!
2019-08-26 05:40:59  Streckenüberwachung Auslauf!
2019-08-25 11:18:30  Streckenüberwachung Auslauf!
2019-08-25 11:16:52  Streckenüberwachung Auslauf!
2019-08-25 11:12:30  Streckenüberwachung Auslauf!
2019-08-24 18:37:55  Streckenüberwachung Auslauf!
2019-08-24 18:37:15  Streckenüberwachung Auslauf!
2019-08-24 18:36:36  Streckenüberwachung Auslauf!
2019-08-24 18:35:57  Streckenüberwachung Auslauf!
2019-08-24 17:03:51  Streckenüberwachung Auslauf!
```

Index2:

```
Datum_Einlauf        Datum_Auslauf
2019-07-01 21:59:37  2019-07-01 22:03:09
2019-07-01 21:58:25  2019-07-01 22:02:02
2019-07-01 21:56:22  2019-07-01 21:59:55
2019-07-01 21:54:37  2019-07-01 21:58:14
2019-07-01 21:54:04  2019-07-01 21:57:42
2019-07-01 21:52:36  2019-07-01 21:56:12
2019-07-01 21:52:15  2019-07-01 21:55:50
2019-07-01 21:50:14  2019-07-01 21:53:45
2019-07-01 21:49:53  2019-07-01 21:53:27
2019-07-01 21:45:19  2019-07-01 21:48:52
2019-07-01 21:44:35  2019-07-01 21:48:12
2019-07-01 21:44:01  2019-07-01 21:47:31
2019-07-01 21:41:45  2019-07-01 21:45:22
2019-07-01 21:41:11  2019-07-01 21:44:49
```

I want to find the events in Index2 where Datum_Einlauf < _time (from Index1) AND Datum_Auslauf > _time (from Index1). For example, for the first row from Index1 (2019-07-01 22:01:30, Streckenüberwachung Auslauf!), 2 events from Index2 should appear:

```
2019-07-01 21:59:37  2019-07-01 22:03:09
2019-07-01 21:58:25  2019-07-01 22:02:02
```

Can anybody help me, please?
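For this kind of interval condition, one sketch uses `map` to run an Index2 search per Index1 event (index names per the question; the timestamp format string assumes the fields look exactly like the sample, and `maxsearches` caps how many Index1 events are processed):

```
index=index1 Fehlermeldungtext="Streckenüberwachung Auslauf!"
| fields _time
| map maxsearches=100 search="search index=index2
    | eval einlauf=strptime(Datum_Einlauf, \"%Y-%m-%d %H:%M:%S\"), auslauf=strptime(Datum_Auslauf, \"%Y-%m-%d %H:%M:%S\")
    | where einlauf < $_time$ AND auslauf > $_time$"
```

`map` reruns the inner search once per outer event (substituting each event's epoch `_time` for `$_time$`), so this only scales to a modest number of Index1 rows.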

Splunk field extractions: key-value pairs with comma-separated data

Hi, I am receiving data through a UF from a script running on an HP-UX server. The format of the data is as follows:

```
group=NAME1 group_id=ID1 group_mem=MEMBER1,MEMBER2,MEMBER3,MEMBER4
```

There are no specific field extractions in place. When the data gets into Splunk, the automatic field extractions give me fields like this:

```
group = NAME1
group_id = ID1
group_mem = MEMBER1
```

The items MEMBER2-4, although appearing in the raw record, are not being extracted into the field. I am also not clear on where the auto extractions take place: on the UF, on the indexer, or at search time. Can anyone point me in the right direction? Thanks.
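For reference, automatic key-value extraction happens at search time (on the search head), not on the UF or the indexer. A sketch that re-extracts the full list and splits it into a multivalue field (index and sourcetype names are placeholders):

```
index=hpux sourcetype=group_script
| rex field=_raw "group_mem=(?<members>\S+)"
| makemv delim="," members
| table group group_id members
```

The `makemv delim=","` turns the comma-separated string into one multivalue field containing all four members.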