How do I write a rex command that extracts up to a particular delimiter (such as a comma) or, if there is no delimiter, to the end of the string?
I thought of something like `rex field=TEXT "(?<result>.+)(\,|$)"`, but it did not work.
For example:
- If TEXT is `12A-,4XYZ`, the result should be `12A-` (up to the `,`)
- If TEXT is `567+4ABC`, the result should be `567+4ABC` (the entire string)
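Would a pattern along these lines do it? `| rex field=TEXT "^(?<result>[^,]+)"` (a sketch; `result` is an arbitrary capture-group name). The `[^,]+` should stop at the first comma or, if there is no comma, run to the end of the string.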
I have a stats command in my correlation search SPL which has an argument *dedup_splitvals=t*. I'm not sure what this argument does. Could anyone please help?
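It appears in a generated tstats call shaped roughly like this (a sketch; the data model and field names are placeholders, not my actual search):
| tstats summariesonly=true dedup_splitvals=t count from datamodel=MyDataModel by MyDataModel.src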
Dear all,
I'm a little confused about how the mail server for email alerts from Splunk was configured in my environment.
On my search head, under /etc/system/local, I can see the configuration below defined in alert_actions.conf:
[email]
reportServerEnabled = 1
reportServerURL = https://splunklicensemstr.xxxxx.com:8089
from = NoReply
We are currently running into an issue with email alerts from Splunk: the emails are not being received. I'm seeing the search results below on my search head:
07-02-2018 19:38:03.043 -0500 INFO StreamedSearch - Streamed search connection terminated: search_id=remote_len-sh01_scheduler__admin__search__RMD59b8cc0ae9c2e8774_at_1530578280_37144, server=len-sh01, active_searches=3, elapsedTime=0.221, search='litsearch (index=_internal ("mailhost-in.xxxxx.com" OR index="_internal") "mailhost-app.xxxxx.com") | fields keepcolorder=t "_bkt" "_cd" "_si" "host" "index" "linecount" "prestats_reserved_*" "psrsvd_*" "source" "sourcetype" "splunk_server"', savedsearch_name="Email_Test_Jun13"
Can anyone help with configuring my email server for Splunk?
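From what I have read, the SMTP host is normally set with the mailserver key in the [email] stanza, something like the sketch below (mailhost.example.com is a placeholder), but I am not sure how that interacts with the reportServer settings above:
[email]
mailserver = mailhost.example.com:25
from = NoReply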
Thanks,
Ramu Chittiprolu
I am on 6.6.3 and we want to upgrade to 7.0.4, but I am having difficulty finding the URL for the installer. Can someone point me in the right direction?
Thanks!
Hi all, I'm using the Missile Map to visualize several IP locations, but the result has a weird place: it shows a bunch of IP addresses near Africa, yet I'm pretty sure nothing in my data should be near Africa. When I test with `..|iplocation FromIPAddr | geostats count by Country`, nothing shows up near Africa. But now it looks like this:
![alt text][1]
Now I have two possible guesses:
1. The place is not exactly a country, so it was not included when I used the command above.
2. It's a bridge IP. (But I'm sure no bridge IP would be included in the raw data.)
So how do I identify it? Thanks!
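One thought: a point plotted at (0,0) in the Gulf of Guinea, just off Africa, is typically what appears when iplocation cannot resolve an address. Would a check along these lines confirm it (a sketch, assuming FromIPAddr is the relevant field)?
..| iplocation FromIPAddr | where isnull(lat) OR (lat=0 AND lon=0) | stats count by FromIPAddr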
[1]: /storage/temp/251104-weird.png
When I upload data, the message `('Cannot connect to proxy.', error('Tunnel connection failed: 403 Proxy Error',))` is displayed.
To resolve this problem, I referred to the following Answer.
https://answers.splunk.com/answers/210955/data-uploader-why-am-i-receiving-a-cannot-connect.html
But there is no `no_proxy` setting in `splunk-launch.conf`; the setting is in `server.conf`.
Why set it in `splunk-launch.conf`?
In addition, it is described that `no_proxy` in `server.conf` defaults to `localhost`, so why should I set it again in `splunk-launch.conf`?
If someone knows about it, please tell me.
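My current understanding, for what it's worth: `splunk-launch.conf` sets environment variables that splunkd reads at startup, so a proxy setup there would look something like this sketch (host and port are placeholders):
HTTP_PROXY = http://proxy.example.com:8080
HTTPS_PROXY = http://proxy.example.com:8080
NO_PROXY = localhost,127.0.0.1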
Here is my query:
index="test" (source="*28q*" OR source="*29q*")
| bucket _time span=1d as day
| rex field=_raw "(?P<clientip>\S+) - (?P<LanID>\S+) \[\d+\/\w+\/\d+:\d+:\d+:\d+ -\d+]\s\"(?P<method>\w+)\s+(?P<uri>\S+)\s\S+\s(?P<HTTP_status>\d+)\s+(?<bytes>\d+)"
| search LanID!="-"
| stats latest(_time) AS Last_Active_Time, earliest(_time) AS First_Active_Time by LanID, day
| convert ctime(Last_Active_Time)
| convert ctime(First_Active_Time)
| lookup Markdowns-EndUserTracker LanID OUTPUTNEW "User Name", Role
| rename LanID as Users, HTTP_status as "HTTP_code", source as "LogPath"
| table Users, "User Name", Role, Last_Active_Time, First_Active_Time, source, HTTP_code
When I run this query, the source field is empty. When I print the source field directly (without the lookup) I get results. When I add the lookup, why is data from the previous search no longer available?
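While writing this up, I wonder whether `stats ... by LanID, day` is the culprit: stats keeps only the aggregated fields and the by-fields, so source would be dropped before the lookup ever runs. Would carrying it through like this sketch be the right fix?
| stats latest(_time) AS Last_Active_Time, earliest(_time) AS First_Active_Time, values(source) AS source by LanID, day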
I monitor a folder on one server with the SplunkUniversalForwarder installed. The configuration in **inputs.conf** is as below:
[monitor://E:\A530\archive\Powershell_User]
sourcetype = Powershell_User
crcSalt = <SOURCE>
disabled = false
followTail = 0
Then I search on the search head with this Splunk query:
source="E:\\A530\\archive\\Powershell_User\\20180614_get_aduser_all_for_Splunk.csv"
The CSV file has 7,168 rows in total, but the search returns only 5,139 rows.
The CSV is well formatted.
How can I get all the rows with a Splunk query?
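One thing I am wondering about (a guess, not a confirmed cause): could default line merging be combining CSV rows that lack their own timestamps, which would make the event count lower than the row count? Would a props.conf stanza like this sketch on the forwarder force one event per row (INDEXED_EXTRACTIONS = csv is my assumption about the right handling)?
[Powershell_User]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv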
Thanks
Why is there a difference in the number of events scanned between these two queries?
Using the query below, I get a statistics count of 25 and a number of events (the Events label below the search query) of 214.
| tstats values(XXXX.product_name) as "Product Name" from datamodel=XXXX where (XXXX.threat_name="*") by XXXX.threat_name
But using
| tstats values(XXXX.product_name) as "Product Name" from datamodel=XXXX where (XXXX.threat_name!="") by XXXX.threat_name
I get the same statistics count of 25, but the number of events (the Events label below the search query) is 5,468.
Dear all,
I have two search heads in my environment, and my end users currently have two different HTTP URLs for Splunk login.
However, I would like to point my two different search head URLs to a common URL at the NetScaler end, something like https://splunkprd-web.com.
I want my front-end Splunk URL to be served over SSL. This can be done at the NetScaler end by pointing the common URL to my two search head servers. At the same time, do I have to keep my individual search heads running on SSL by going to server settings?
I'm even OK with my individual search head URLs running on HTTP, as long as my front-end URL is on HTTPS.
Can someone confirm if this is a valid approach?
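If it turns out the search heads themselves must also run SSL, my understanding is that it is a one-line web.conf change on each (a sketch):
[settings]
enableSplunkWebSSL = true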
Thanks,
Ramu Chittiprolu
Experts,
Here is my log content (the XML markup has been stripped here, leaving only the element values), and I wish to extract fields from it:
\nmmf-bwce-customerOrder.application\n123\nHold\n2017-06-05T04:04:06.051Z\n123456\nDelivery\nRequest\n\n
All I need is to extract fields like ServiceName, TransactionId, and so on. I have done this through props.conf and transforms.conf as below, and it works perfectly fine. But I wish to move it to a search-time extraction without a transforms.conf. Any input is appreciated.
Props.conf:
[my_stanza]
REPORT-xmlkv = xmlkv-alternative
Transforms.conf:
[xmlkv-alternative]
FORMAT = $2::$3
REGEX = <([^\s\>]*):([^\s\>]*)\>([^<]*)\<\/\1:\2\>
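If the aim is simply to avoid transforms.conf, would an inline rex per field work? A sketch (it assumes the elements look like `<ns:ServiceName>value</ns:ServiceName>`, matching the shape of the transform's regex):
... | rex "<\w+:ServiceName>(?<ServiceName>[^<]*)<" | rex "<\w+:TransactionId>(?<TransactionId>[^<]*)<"
I realize an EXTRACT- in props.conf cannot reproduce the dynamic $2::$3 field naming, so a fully general equivalent may still need the transform.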
I have created a Splunk app with a new customized navigation menu. I have a few XML-based dashboards and some HTML dashboards. When I open an XML dashboard, the navigation menu looks as created, **but when I open an HTML dashboard, the navigation menu changes** to grey with two options, Search and Reports.
Here's my default.xml:
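(The actual XML did not survive posting; a custom nav default.xml is typically shaped roughly like this, with placeholder view names:)
<nav search_view="search" color="#65A637">
  <view name="search" default="true" />
  <collection label="My Dashboards">
    <view name="my_xml_dashboard" />
    <view name="my_html_dashboard" />
  </collection>
</nav>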
I have a query which goes like this:
sourcetype="A" host=B
|rex "^(?:[^ \n]* ){2}(?P\w+)"|rex "^(?:[^ \n]* ){10}(?P\d+)"|rex "^[^ \n]* (?P[^ ]+)"
|fields user,resp_time,txn_id
| sort -resp_time
I want to be able to see the latest _raw event (i.e., the one with the maximum resp_time).
Again, I don't want to see a table; I want to see the actual _raw event.
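Would something like this sketch do it? (tonumber guards against resp_time being sorted as a string, and sort/head keep the Events tab available so the raw event is still shown as an event:)
sourcetype="A" host=B
|rex "^(?:[^ \n]* ){10}(?P<resp_time>\d+)"
| eval resp_time = tonumber(resp_time)
| sort 0 -resp_time
| head 1
| fields _raw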
I am facing a weird issue with a sid (search ID). I have a saved sid with yesterday's (00:00 to 23:59) data, which shows a dip in my messages during yesterday evening. But for the same query (since there is no change in the query, I believe I don't need to provide it here) and the same time frame, if I run it as a new search, I do not see the dip.
Since both have the same query and time frame, how can the results change? Is there any difference between them? Could someone please explain?
I have logs sent from a syslog server, so there are two timestamps. I would like to use the second timestamp as _time by using TIME_PREFIX. However, it doesn't match if the logs come in via syslog; it does match if I monitor a file.
Jun 15 10:06:58 10.226.48.229 Jun 15 10:06:59 111.111.111.111 1 2018-06-15T10:06:51.424243+07:00 node01 kernel - - - [9079188.370611] RULE 0 -- ACCEPT IN=eth1 OUT=eth2 MAC=00:50:56:a0:e4:fa:00:50:56:b6:0a:53:08:00 SRC=10.60.0.3 DST=10.99.2.198 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=13091 DF PROTO=TCP SPT=55646 DPT=80 WINDOW=5840 RES=0x00 SYN URGP=0
Jun 15 10:06:58 10.226.48.229 Jun 15 10:06:51 111.111.111.111 haproxy[3645]: 1.46.134.132:2195 [15/Jun/2018:10:06:51.292] https-web~ https-backend/www01 116/0/12/3/131 404 424 - - ---- 3/3/0/0/0 0/0 "GET /favicon.ico HTTP/1.1"
My props.conf is
[syslog]
TIME_PREFIX = ^\w+\s\d+\s\d+\:\d+\:\d+\s\d+\.\d+\.\d+\.\d+\s
MAX_TIMESTAMP_LOOKAHEAD = 16
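My own guess (unverified): when events arrive over a syslog network input, the leading "timestamp + relay IP" header that the syslog server writes into files may simply not be there, so the prefix never matches. Would making the prefix optional cover both paths, as in this sketch?
[syslog]
TIME_PREFIX = ^(?:\w+\s\d+\s\d+\:\d+\:\d+\s\d+\.\d+\.\d+\.\d+\s)?
MAX_TIMESTAMP_LOOKAHEAD = 16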
Hi,
I would like to know if there is an option to wait X seconds/minutes **before indexing the data**.
The goal is to index only the last log in the time range (there is no way to recognize the last log).
Is it possible?
Thanks
Hi, I have some CSV files in my Splunk index. The files are named with a date, like xxxxx20180703.csv. In the CSV files there is a field with a time in `12:30:45 PM` format. The timestamp processor is able to pick up the date and time. However, I have an issue where, on some of the files (not all), it detects 11 PM properly but then treats 12 AM as the next day, and any time after that is labeled as the next day as well.
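One direction I am considering (a sketch; the sourcetype and time-field names are placeholders) is pinning TIME_FORMAT in props.conf so the 12-hour clock is parsed explicitly rather than guessed:
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = time
TIME_FORMAT = %I:%M:%S %p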
Hi All,
I am trying to set up the EMC VNX app in Splunk; I have downloaded the app and add-on from the Splunk website.
After uploading them in Splunk, I am able to see both of them on the Apps page.
Now, how do I proceed? What are the steps I need to take to add/configure the IP of our VNX in Splunk?