Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Splunk - Adjusting source file timestamp

Given: I have two log files (file_1, file_2), each from a different server (server_1, server_2). The servers are not properly synchronized via ntpd (for example, server_1 is 13 seconds ahead of server_2). I do not have the ability to adjust or correct the server times; I am a Splunk user, not the Splunk administrator. Problem: after ingesting the two log files, the events are off by 13 seconds (obviously). Question: can I adjust _time for all events in source=file_2 by 13 seconds so the events line up correctly in search results, graphs, etc.?
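A search-time sketch, assuming the 13-second offset is constant and that file_2's clock runs behind (flip the sign if it is the other way around):

```spl
source=file_1 OR source=file_2
| eval _time=if(source=="file_2", _time+13, _time)
| sort 0 _time
```

This only shifts the events for the duration of the search; the indexed _time is unchanged, so the eval has to be repeated (or saved in a macro) in every search that correlates the two files.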

Splunk App Packaging: How to package app with multiple add-ons to a single .spl file?

I am trying to package the app with "splunk package app abc", but another add-on that I installed from Splunkbase (xyz) isn't getting packaged along with it. Is there any way to package an app together with multiple Splunkbase add-ons into a single .spl?

How can I install multiple instances of the universal forwarder?

My team is the IS Security group for the company. We are migrating to Splunk from McAfee Nitro, and currently we only need to look at Windows security event logs. Our business folks use their own deployment of Splunk, and we don't want to piggyback on or share that deployment, as it would then need to be managed by our operations team, and upgrades we might want to do would be slowed down by that business area. For these reasons we want our own deployment. I have tested running multiple instances on a device by installing, poking the registry to change the service name, zipping up the contents, exporting the registry key, and then playing it back on another device. While this would work with our old software deployment strategy, it will not work with our new one, which is Puppet. I can install via Puppet using the MSI, and while I can deploy to a folder of my choosing, the service is still installed as splunkuniversalforwarder. I am looking for suggestions on how I can implement this.

Need to Pull the Full Contents of each config file as a single log entry

Hi team, we got a request to monitor a config file, and the raw data looks like the sample mentioned below. While indexing, Splunk is treating each and every line in the config file as a separate event, splitting the file up instead of keeping it as a single event. We want the data to be a single event rather than segregated into multiple events, so kindly help with this request. Our main aim is to pull the full contents of each config file as a single log entry.
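A common pattern for whole-file-as-one-event ingestion, sketched for a hypothetical sourcetype name "myconfig" (deploy in props.conf on the indexers or heavy forwarder, not the universal forwarder); the never-matching pattern and the limits are assumptions to adjust:

```ini
[myconfig]
# merge lines and never break on dates or on any line,
# so the whole file becomes one event
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^ThisPatternShouldNeverMatch$
# raise the caps so larger files aren't split or truncated
MAX_EVENTS = 100000
TRUNCATE = 0
```

Files larger than the caps will still be split, so size the limits to the largest config file you expect.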

How to change "No results found" in a dashboard to a custom message

Per some research, it appears that there is a Simple XML solution for this using the job property job.resultCount. What I am not sure of is how to add your own custom message.
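One pattern along those lines, sketched in Simple XML (the search, message text, and token names are placeholders): set a token from job.resultCount when the search finishes, and show an HTML block only when that token exists:

```xml
<panel>
  <html depends="$no_results$">
    <p>Custom message: no matching data for this time range.</p>
  </html>
  <table depends="$has_results$">
    <search>
      <query>your search here</query>
      <done>
        <condition match="'job.resultCount' == 0">
          <set token="no_results">true</set>
          <unset token="has_results"></unset>
        </condition>
        <condition>
          <set token="has_results">true</set>
          <unset token="no_results"></unset>
        </condition>
      </done>
    </search>
  </table>
</panel>
```

The depends attributes hide whichever element's token is unset, so the dashboard swaps between the table and the custom message.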

Data is sent to the main index only

In the system/local directory, below is the configuration:

[monitor:\\{Log Location}]
sourcetype = test
index = chilqa
disabled = false

But it is surprising that data is still sent to the main index. Is there any other location that is making the data go to the main index? Thanks, Vikram.
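For comparison, a conventional monitor stanza looks like the sketch below (the path is a placeholder). Things worth checking: the stanza uses monitor:// with forward slashes after the scheme, the index attribute is spelled exactly, and the chilqa index actually exists on the indexer. A stanza that doesn't match the file can mean the file is picked up by a different inputs.conf stanza with no index setting, which defaults to main:

```ini
[monitor://D:\logs\myapp]
sourcetype = test
index = chilqa
disabled = false
```

Running "splunk btool inputs list --debug" shows every inputs.conf stanza and which file it came from, which helps find a competing stanza.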

Duration of all events without time overlap in total?

Hello everyone, I'm a beginner on Splunk and asking for your help. I've got something like this in my transaction:

Event 1: 9:00:00 Start and 11:00:00 Stop
Event 2: 10:00:00 Start and 11:30:00 Stop
Event 3: 13:00:00 Start and 14:00:00 Stop
Event 4: 13:20:00 Start and 13:40:00 Stop

I want to determine how long my events, combined, were in the Start state during the day; that is, we need AT LEAST ONE transaction in the Start state for the duration to grow. In this case: 9:00 until 11:30 and 13:00 until 14:00 = 3 hours and 30 minutes in total. So I would like to get 3.5 hours as a result when I have something like what I just showed. I hope this is not too confusing. Looking forward to your answers. Thanks.
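One way to compute the union of overlapping intervals is to expand each event into the minutes it covers and count the distinct minutes; the field names here (Start and Stop as epoch times) are assumptions:

```spl
base search
| eval minutes=mvrange(floor(Start/60), ceil(Stop/60))
| mvexpand minutes
| dedup minutes
| stats count AS covered_minutes
| eval hours=round(covered_minutes/60, 2)
```

For the four example events this covers 9:00-11:30 and 13:00-14:00, i.e. 210 distinct minutes = 3.5 hours. The trade-offs are the 1-minute resolution and mvexpand's output limits on long ranges; the concurrency command is another avenue worth exploring.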

How to find out the total events by count and size from Splunk search

How can I get a report of total events (for licensing) by count and size (GB) from a Splunk search over the past 7 days? And how can I get the total space used by hot and cold buckets across all indexers? Thanks.
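A sketch of both searches, assuming access to the _internal index on the license master and permission to run dbinspect (field and index names may need adjusting for your environment):

```spl
index=_internal source=*license_usage.log* type="Usage" earliest=-7d@d
| stats count AS events sum(b) AS bytes BY idx
| eval GB=round(bytes/1024/1024/1024, 2)
```

```spl
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS sizeMB BY index, state
```

The first reports licensed volume per index over the past 7 days; the second sums bucket sizes on disk by index and bucket state (hot, warm, cold).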

Connecting events that don't have a common field

Hi guys, more of a generic question: how do you make sense of events that are not necessarily linked by a common field? For example, one of our applications produces logs that generate many events/lines such as:

[08/Sep/2017:09:20:20 +0200] Logon request from 10.10.10.3
[08/Sep/2017:09:20:21 +0200] Object 662737354 deleted
[08/Sep/2017:09:20:21 +0200] User X77262 trying to connect
...
[08/Sep/2017:09:20:22 +0200] Logon Denied: Bad password

Lines 1, 3, and 4 represent a logon request, but I cannot "transact" them as there is no common field. Or can I? In a perfect world, session IDs would be introduced in the logs, or the log entries would be more complete, but changing the code is a massive undertaking. How do you deal with scenarios such as this one? Thanks.
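Without a shared field, one fallback is to group purely by time proximity, e.g. a transaction keyed on nothing but small gaps between events (the search terms and maxpause value here are illustrative):

```spl
source=app_log ("Logon request" OR "trying to connect" OR "Logon Denied" OR "Logon OK")
| transaction maxpause=2s maxevents=10
```

This is fragile under load, since interleaved sessions from different clients will be merged into one transaction, so it is best treated as a stopgap until session IDs can be added to the logs.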

How can I authenticate to the REST API, pass the query, and close the session (in one fell swoop)?

How do I send several requests in one input? I must first authenticate to the REST API, then pass the query, and at the end close the session. Regards
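A sketch with curl against an assumed local management port (8089) and placeholder credentials; it needs a live Splunk instance. Session keys expire on their own server-side, so the cleanest explicit cleanup step is deleting the finished search job:

```shell
# 1) authenticate and capture the session key
SESSION_KEY=$(curl -sk https://localhost:8089/services/auth/login \
  -d username=admin -d password=changeme \
  | sed -n 's|.*<sessionKey>\(.*\)</sessionKey>.*|\1|p')

# 2) create a search job and capture its sid
SID=$(curl -sk -H "Authorization: Splunk $SESSION_KEY" \
  https://localhost:8089/services/search/jobs \
  -d search="search index=_internal | head 5" \
  | sed -n 's|.*<sid>\(.*\)</sid>.*|\1|p')

# 3) fetch the results once the job is done (polling loop omitted)
curl -sk -H "Authorization: Splunk $SESSION_KEY" \
  "https://localhost:8089/services/search/jobs/$SID/results?output_mode=json"

# 4) clean up the job
curl -sk -X DELETE -H "Authorization: Splunk $SESSION_KEY" \
  "https://localhost:8089/services/search/jobs/$SID"
```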

Index-time field extraction issue

Hello all, I'm a bit stuck with my issue. I have this Splunk infrastructure: sources ==> UF ==> indexer cluster (3 + master) ==> search head cluster. I'm trying to extract fields at index time in order to transform them in the future. My props.conf and transforms.conf are deployed to the indexers through the master. A log line looks like:

date="2017-09-08",time="08:08:00",s-ip="8.8.8.8",time-taken="8",c-ip="9.9.9.9",c-port="45687",s-action="TCP_DENIED",cs-user="foobar"

transforms.conf:

[fieldtestextract]
WRITE_META = true
REGEX=cs-user="([^"]+)
FORMAT=csuser::$1

props.conf:

[web:access:file]
TRANSFORMS-csuser = fieldtestextract
TZ = utc
SEDCMD-username = s/,cs-user=\"[^\"]+\",/,cs-user="xxxx",/g

The SEDCMD is working like a charm, but the transform won't work. fields.conf on the search heads:

[csuser]
INDEXED = true
INDEXED_VALUE = true

I don't see my field on the search head, and obviously I'm not able to execute queries against it. Could you help me figure out what's wrong with my configuration? Many thanks in advance.
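Two things worth testing with the configuration above. First, the SEDCMD masks the very substring the transform extracts, and depending on index-time processing order the indexed value may end up being the masked one, so try disabling the SEDCMD temporarily. Second, verify the indexed field directly rather than through the field picker:

```spl
| tstats count WHERE index=yourindex BY csuser
```

(yourindex is a placeholder.) If tstats shows values, the indexer-side extraction works and only the search-head fields.conf visibility is at issue; if not, the transform itself is the problem.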

How can I filter a transaction that contains multiple matches - and force a numeric sort?

I have used the 'transaction' command to isolate transactions that are made up of roughly 45 events each. I have a regex that identifies a TaskName and the TotalMilliseconds for each event, producing 45 matches per transaction. Questions:

1. When I try to filter the transaction (TotalMilliseconds>500, for example), the criteria are applied only against the first match. How can I ensure ALL matches are considered when filtering?
2. How can I insert a 'tab' character when formatting a concatenated field value? TotalMilliseconds."\t".TaskName and its derivatives do not work.
3. How can I filter the results to show only matches where TotalMilliseconds<500 (for example)? Any attempt I've made so far has only applied the filter to the FIRST match in my list of 45 values.
4. Is there any way to force a numeric sort on a string field?

Thanks for looking!

Appendix. Query:

base search
|rex "rex to find ClientID"
|rex "long rex that finds TaskName and TotalMilliseconds"
|transaction field1 field2 maxspan=5m unifyends=true startswith="beginning" endswith="ending"
|search [[[or, 'where']]] TotalMilliseconds>500 <-- neither meets my needs

Results:

ClientID TimeAndTaskName
abc123   1127 (UseCaseA) 12 (UseCaseB) 21 (UseCaseY)

Goal (filtering TotalMilliseconds>20):

ClientID TimeAndTaskName
abc123   21 UseCaseY
         1127 UseCaseA
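A sketch addressing questions 1-4 together: zip the two multivalue fields so the pairs stay aligned, expand to one row per pair, then filter and sort numerically. urldecode("%09") is one way to get a literal tab into an eval string:

```spl
base search
| rex "rex to find ClientID"
| rex "long rex that finds TaskName and TotalMilliseconds"
| transaction field1 field2 maxspan=5m unifyends=true
| eval pair=mvzip(TotalMilliseconds, TaskName, urldecode("%09"))
| mvexpand pair
| eval TotalMilliseconds=tonumber(mvindex(split(pair, urldecode("%09")), 0))
| where TotalMilliseconds > 20
| sort 0 - num(TotalMilliseconds)
```

After mvexpand, each row carries a single TaskName/TotalMilliseconds pair, so where and sort see every match rather than only the first, and tonumber plus sort's num() modifier forces the numeric ordering.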

Powershell Issue

I wrote the PowerShell script below; it works when I manually run it as either my domain admin account or under the local system context. However, when deployed via Splunk, dns.exe on the domain controllers essentially spikes the CPU to 100% (dns.exe and powershell.exe) until I remove the app and restart Splunk. It does, however, report in every 300 seconds as configured, with the information I am looking for. Any idea why this script wouldn't play well with Splunk?

import-module activedirectory
$ipaddress = $(ipconfig | where {$_ -match 'IPv4.+\s(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' } | out-null; $Matches[1])
$ErrorActionPreference = "SilentlyContinue"
$domain = ([System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain()).Name
$dnstest = Test-DnsServer -IPAddress $ipaddress -ZoneName $domain | select Result
if ($dnstest -like '*Success*') {
    $result = "1"
    write-host ((get-date), "`nDNS Status:", "$result")
}
else {
    $result = "0"
    write-host ((get-date), "`nDNS Status:", "$result")
}

Splunk Add-On for Microsoft IIS Default Settings

This application includes several FIELDALIAS commands in props.conf for the sourcetypes it defines. One of these is "FIELDALIAS-s_computername = s_computername as host", which reassigns the host value at search time from the value of s_computername in the event. We don't log the host name in all of our IIS events, so Splunk pulled the port (80 or 443) into this field, resulting in the majority of our events showing the port as the host. My question is: is it standard practice to send IIS logs through a syslog server? This setting seems like it would only be helpful under that scenario: if IIS logs are sent through a syslog server, I would need to have IIS include the hostname so I could pull it from there; otherwise all events would have the syslog server as the host. If it is not standard practice, and I don't think it is, why is this a default setting in the app?

Why isn't my discard working?

I'm trying to discard entries from one of my data sources and it isn't working. Why? All of the following are set on the indexer, not the universal forwarder. I've triple-checked my work.

inputs.conf:

[WinNetMon://inbound]
direction = inbound;outbound
disabled = 0
index = windows
packetType = accept;connect

props.conf:

[WinNetMon://inbound]
TRANSFORMS-null1 = null1

transforms.conf:

[null1]
SOURCE_KEY = LocalAddress
REGEX = ::1
DEST_KEY = queue
FORMAT = nullQueue

The events I'm getting have `source=inbound` and `sourcetype=WinNetMon`.
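Two likely culprits, sketched with the values from the question. First, props.conf stanzas key off sourcetype, source, or host, not the inputs.conf stanza name, so [WinNetMon://inbound] never matches; since the events carry sourcetype=WinNetMon, the props stanza should be:

```ini
[WinNetMon]
TRANSFORMS-null1 = null1
```

Second, an index-time transform runs before search-time field extraction, so SOURCE_KEY = LocalAddress has nothing to read; matching against the raw event is the usual workaround (the regex below is a guess at how the address appears in the raw event and should be checked against real data):

```ini
[null1]
REGEX = LocalAddress\s*[=:]\s*::1
DEST_KEY = queue
FORMAT = nullQueue
```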

Updating app to convert base xx to Decimal

Can you update your app to convert TO base 10? I have some base36 data. Thanks!
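For what it's worth, eval's tonumber function already accepts a base argument (2 through 36), so a search-time conversion may not need an app at all; the field name here is a placeholder:

```spl
| eval decimal=tonumber(base36_field, 36)
```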

Replace a null value after search appending

Hello all, I have a search query as below:

index="alpha_all_aal_event" type=twaReport
| search callId=0 userId=a2ebd4aa-f91a-4088-8667-60143707c368
| fields *
| rename eventTime.$date as eventTime
| eval eventTime=(eventTime/1000)
| append [search index="alpha_all_careport_event" userId=a2ebd4aa-f91a-4088-8667-60143707c368 | fields * | rename eventTime.$date as eventTime | eval eventTime=(eventTime/1000) | streamstats min(eventTime) as limit]
| table eventTime eventData.preLimiterSplEstimate eventData.postLimiterSplEstimate eventData.twaThreshold limit

And the data is shown below: ![alt text][1] The limit column has just a single value, min(eventTime), from one of the search queries, and it's null everywhere else. I want to replace the null values of limit with the single value that already exists in limit. Can someone please help me with how to do this? Because this is an appended search, I am not getting the expected results. [1]: /storage/temp/209786-screen-shot-2017-09-08-at-45524-pm.png
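One way to copy that lone value onto every row is eventstats, added just before the table command (min() is arbitrary here, since limit has only one non-null value):

```spl
... | eventstats min(limit) AS limit
| table eventTime eventData.preLimiterSplEstimate eventData.postLimiterSplEstimate eventData.twaThreshold limit
```

Unlike filldown, eventstats does not depend on row order, which matters here because the appended rows carrying limit may come after the rows that need it.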

How can I find AD accounts that haven't been used for a specified time period?

It's all in the title: I'm looking for a query that can tell me which non-disabled Active Directory accounts have not been used in 12 or more weeks. Thanks, all.
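If the Splunk Supporting Add-on for Active Directory (SA-ldapsearch) is available, a sketch along these lines might work; the LDAP filter excludes disabled accounts via the userAccountControl bit, but the lastLogonTimestamp format returned by the add-on varies, so the strptime pattern below is an assumption to verify against real output:

```spl
| ldapsearch search="(&(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))" attrs="sAMAccountName,lastLogonTimestamp"
| eval lastLogon=strptime(lastLogonTimestamp, "%Y-%m-%dT%H:%M:%S%z")
| where isnull(lastLogon) OR lastLogon < relative_time(now(), "-12w")
| table sAMAccountName, lastLogonTimestamp
```

Without the add-on, an alternative is correlating Windows 4624 logon events from the indexed security logs, though that only covers accounts logging on to monitored hosts.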