We're finishing up our migration from a single search head to a search head cluster. Our company uses F5 load balancers. Per this http://docs.splunk.com/Documentation/Splunk/latest/DistSearch/UseSHCwithloadbalancers , I had the web guys set me up with a VIP that points to our 2 search heads, using layer-7 processing and persistence.
In order to keep the clients from getting the SSL certificate warning every time they log in, I wanted to have a certificate made for the friendly 'splunk.company.com' URL. The web guys are telling me that because Splunk specifies a layer 7 profile, they can't put a cert on the VIP; it would have to be an individual cert on each search head, which I don't think would prevent the warnings in the browser...
Has anyone else run into this?
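If SSL can be terminated on the search heads themselves, one workaround is to put a cert on each search head whose CN/SAN covers splunk.company.com (and optionally the individual host names), so whichever node the VIP persists you to presents a name the browser trusts. A minimal web.conf sketch, assuming such a cert already exists; the file paths below are hypothetical:
# web.conf on each search head (hypothetical cert paths; CN/SAN should include splunk.company.com)
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/splunk_company_com.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/splunk_company_com.key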
↧
SSL certificate for F5 VIP to search head cluster?
↧
Why has the TCP output processor paused data flow?
Hi,
I am not able to receive any data from my forwarder. It stopped working yesterday. Port 9997 is open, the connection is established, and I can telnet to my server (which is my laptop).
Here is the error from splunkd on the forwarder:
09-21-2017 15:20:51.293 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 82200 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
Here is my inputs file from the forwarder server:
[default]
host = HC1xxxxxxCV
[monitor://C:\Program Files (x86)\..........]
disabled = false
followTail = 0
sourcetype=Data Import
ignoreOlderThan = 6d
Here is my outputs file from the forwarder:
[tcpout]
defaultGroup = default-autolb-group
[tcpout-server://rs1-sbaba-t440.xxxxxxxx:9997]
[tcpout:default-autolb-group]
disabled = false
server = rs1-sbaba-t440.xxxxxxxxxxxxx:9997,rs1-sbaba-t440:9997
[tcpout-server://rs1-sbaba-t440:9997]
I checked on my laptop that receiving is enabled.
This is my inputs file from the receiver:
[default]
host = rs1-sbaba-t440
[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0
I read something about the internal queue being blocked and about setting the stopAcceptorAfterQBlock attribute in inputs.conf, which I don't see in my receiver's inputs file (under the local folder); I never change the conf files under the default folder.
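That warning usually means a queue on the receiving side is full rather than a problem on the forwarder. A minimal sketch for checking how full the receiver's queues are, run on the receiver (assuming its _internal index is searchable there):
index=_internal source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(pct_full) by name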
I've been banging my head for hours. Since I am stuck, I am probably missing something very simple but can't find it. Any help is appreciated.
This thing was working fine until yesterday and now all of a sudden I am not able to get data.
thanks,
↧
↧
Collecting filesystem usage in actual units (GB) rather than percentage
Is there any way to collect JFS disk usage in actual quantities (MB) rather than percentage? I only just now realized that my old nmon analyzer spreadsheet outputs didn't have it either, so I'm assuming that it's not an option within nmon itself - but I thought I'd toss the question out anyway.
I know that filesystem usage in % is available and used for the watermarks; I guess I still have some sense that I might want to alert on real disk free in some cases.
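If nmon only reports a percentage, one workaround is to join in each filesystem's total size from a lookup and do the arithmetic in SPL. A hedged sketch; fs_sizes, mount, total_gb and pct_used are hypothetical names standing in for whatever your data actually carries:
| lookup fs_sizes mount OUTPUT total_gb
| eval used_gb = round(total_gb * pct_used / 100, 2)
| eval free_gb = round(total_gb - used_gb, 2)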
↧
summary index replication in indexer cluster
Can we do summary index replication in an indexer cluster by using the replication factor and search factor?
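For what it's worth, once a summary index is defined on the cluster peers it is replicated like any other index when repFactor is set to auto, so it follows the cluster's replication factor and search factor. A minimal indexes.conf sketch pushed from the cluster master; the index name is a placeholder:
[my_summary]
homePath = $SPLUNK_DB/my_summary/db
coldPath = $SPLUNK_DB/my_summary/colddb
thawedPath = $SPLUNK_DB/my_summary/thaweddb
repFactor = auto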
↧
Help with a search to print date fields
I need to print dates from Thanksgiving onward for the rest of the week, until Monday.
index="test" source="test" date=* mon=* year=* (STATDATE>=2016-11-22 AND STATDATE<=2016-11-30) SITE=USA | eval day_c = strftime(_time,"%Y-%m-%d") | eval enddate= year+"-11-29"| eval startdate= year+"-11-24" | eval DiffInSecs = strptime(enddate, "%Y-%m-%d")-strptime(startdate, "%Y-%m-%d") | eval td = strftime(DiffInSecs, "%Y-%m-%d %A") | table day_c td
Expected Results:
2016-11-24 2016-11-24 Thursday
Actual results:
2016-11-29 1970-01-05 Monday
2016-11-28 1970-01-05 Monday
2016-11-27 1970-01-05 Monday
2016-11-26 1970-01-05 Monday
2016-11-25 1970-01-05 Monday
2016-11-24 1970-01-05 Monday
2016-11-23 1970-01-05 Monday
2016-11-22 1970-01-05 Monday
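The 1970 dates show up because strftime() is being applied to DiffInSecs, which is a duration in seconds, not an epoch timestamp, so it renders as a few days after 1970-01-01. A sketch of the tail of the search, assuming the goal is to show each day's date with its weekday name between the start and end dates:
| eval day_epoch = strptime(day_c, "%Y-%m-%d")
| eval td = strftime(day_epoch, "%Y-%m-%d %A")
| where day_epoch >= strptime(year."-11-24", "%Y-%m-%d") AND day_epoch <= strptime(year."-11-29", "%Y-%m-%d")
| table day_c td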
↧
↧
How to record/calculate the duration of overlapping transactions
I have a transaction overlap issue. The output below is my data from a search query with a transaction command. Here is my search query:
**Search**
index=* (sourcetype=InCharge-Traps AND (State="Notify" OR State="Clear")) OR (sourcetype=SAM_Audit AND (eventtype="Notification Notify" OR eventtype="Notification Clear")) source!="D:\\InCharge\\SAM\\smarts\\local\\logs\\TRAP-INCHARGE-OI_en_US_UTF-8.log"
[| inputlookup New_SLA_Targets where Alert="y"
| fields InstanceName EventName]
| lookup New_SLA_Targets InstanceName EventName OUTPUT Service Target Type Dev_Needed Status Weight SecsDown StartTime EndTime
| sort _time
| transaction Service InstanceName EventName Type startswith=(State="Notify" OR eventtype="Notification Notify") endswith=(State="Clear" OR eventtype="Notification Clear")
| concurrency duration=duration
| eval stime=strftime(_time, "%H:%M:%S")
| eval stime_epoch=_time
| eval etime_epoch=stime_epoch+duration
| eval etime=strftime(etime_epoch, "%H:%M:%S")
| where stime>StartTime AND etime<EndTime
| eval Active=if(duration>SecsDown,"Y","N")
| where Active="Y"
| table _time stime_epoch stime etime_epoch etime duration concurrency InstanceName EventName
**Output**
_time stime_epoch stime etime_epoch etime duration concurrency InstanceName EventName
2017-08-28 10:13:19 1503933199 10:13:19 1503933383 10:16:23 184 1 ualbuacwas5 Down
2017-08-28 10:17:15 1503933435 10:17:15 1503941278 12:27:58 7843 1 ualbuacwas4 Down
2017-08-28 12:22:35 1503940955 12:22:35 1503941180 12:26:20 225 2 ualbuacwas5 Down
2017-08-28 12:29:39 1503941379 12:29:39 1503945457 13:37:37 4078 1 ualbuacwas4 Down
2017-08-28 13:13:43 1503944023 13:13:43 1503947722 14:15:22 3699 2 ualbuacwas5 Down
I need to identify and report the overlapping transactions and the overlapping duration. All other durations are not important.
So, if you look at the output stime_epoch 1503933435, the end of that transaction overlaps the next at stime_epoch 1503940955. This is the record with the concurrency of 2. I have two overlaps in my data and need to report on the duration of just the overlap. I believe in my example above, it would be 323 seconds. My second would be 1434.
At this point I am stuck. I'm sure that someone out there can help me out.
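One approach, sketched below with the field names from the table above: sort the transactions by start time, carry the previous transaction's end time forward with streamstats, and take the intersection of the two intervals as the overlap. Note that this measures the overlapping window itself, so a transaction that sits entirely inside the previous one contributes only its own duration.
| sort 0 stime_epoch
| streamstats current=f window=1 last(etime_epoch) as prev_etime_epoch
| eval overlap_secs = if(isnotnull(prev_etime_epoch) AND prev_etime_epoch > stime_epoch, min(prev_etime_epoch, etime_epoch) - stime_epoch, 0)
| where overlap_secs > 0
| table _time stime etime overlap_secs InstanceName EventName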
Thanks in advance,
Rcp
↧
Can I collect data about the searches, dashboards, etc. through Splunk's internal index?
Can I get metadata about the searches, dashboards, etc. created in Splunk through any of the built-in indexes?
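Most of that metadata lives behind the REST endpoints rather than in an index, so the rest command is usually the quicker route. A sketch (the user running it needs permission to call REST):
| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner

| rest /servicesNS/-/-/data/ui/views
| table title eai:acl.app eai:acl.owner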
↧
How do I convert a timestamp?
Hi,
I have a field with a timestamp value in the format "2017-09-21T20:48:48.535427Z". I need to convert it to "09/21/2017 3:48:48 PM". Please advise.
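A sketch using strptime/strftime; orig_time is a hypothetical field name holding the string. Note that strptime with a literal Z interprets the value in your search-time timezone, so if the source is UTC you may still need to account for the offset to land on 3:48 PM:
| eval epoch_time = strptime(orig_time, "%Y-%m-%dT%H:%M:%S.%6NZ")
| eval formatted_time = strftime(epoch_time, "%m/%d/%Y %I:%M:%S %p")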
↧
How can I index data in real time?
I have created an alert which checks whether logs are absent for the last 20 minutes per source. I have around 32 source files from a single forwarder. Many of my files are not getting indexed in real time, and I am receiving this alert frequently.
Can anyone tell me which parameters need to be changed so that I can index the data in real time?
Is there any mechanism to check the inflow rate of the data?
System Info:
I also see my CPU is around 80% idle. It is running Windows on a 4-core machine with 32 GB RAM.
Splunk Enterprise 6.4.3
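Two quick checks, sketched here: indexing latency per source (how far _indextime lags behind _time), and the inflow rate from the forwarder's metrics.log. your_index is a placeholder for wherever this data lands, and the second search assumes the forwarder's internal logs reach the indexer:
index=your_index
| eval latency_secs = _indextime - _time
| stats avg(latency_secs) max(latency_secs) by source

index=_internal source=*metrics.log* group=per_source_thruput
| timechart span=5m sum(kb) by series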
↧
↧
Does increasing max_memtable_bytes in limits.conf impact the search head performance?
The ES app is creating a large lookup file, nearly 600 MB in size. As the workaround suggested in the Splunk docs, we increased the max_memtable_bytes value to 700 MB in limits.conf on all the indexers. After the change, the search heads are working very slowly and searches are also slow.
Does this change have any impact on search heads?
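For context, max_memtable_bytes sits under the [lookup] stanza of limits.conf and sets the size above which a lookup file is indexed on disk instead of being held fully in memory. Raising it to cover a ~600 MB lookup means every search that touches the lookup can pull that much into memory, which can show up as slowness on search heads as well as indexers. A sketch of the stanza as presumably configured (value in bytes):
[lookup]
max_memtable_bytes = 734003200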
↧
Help configuring props.conf and transforms.conf to filter Bro logs at the heavy forwarder
I am having trouble configuring my props.conf and transforms.conf to filter Bro data at the heavy forwarder. Since the DNS data source is so chatty, I ONLY want to ingest events where the query field contains domains "A" and "B".
I have set my stanzas up according to the following Splunk documentation: http://docs.splunk.com/Documentation/Splunk/6.5.2/Forwarding/Routeandfilterdatad, but it is still not working. I'm not sure what I'm doing wrong.
props.conf:
[corelight_dns]
TRANSFORMS-dns= dns_null,dns_parsing
transforms.conf:
[dns_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[dns_parsing]
REGEX=\"query\"\:\".+\.[A|B]+\"
DEST_KEY = queue
FORMAT = indexQueue
The above link states that two stanzas are needed: the first to filter all events to the nullQueue, and the second to whitelist events matching the regex pattern to the indexQueue. It also states the nullQueue stanza has to go first. Am I misunderstanding something here?
We're using the Corelight application for our Bro data.
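The stanza order looks right (everything to nullQueue first, then the whitelist back to indexQueue). The part most likely to be failing is the character class: [A|B]+ matches runs of the literal characters A, | and B, not the domain names. A sketch of the whitelist transform using alternation instead, with A and B still standing in for the real domains:
[dns_parsing]
REGEX = \"query\":\"[^\"]+\.(A|B)\"
DEST_KEY = queue
FORMAT = indexQueue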
↧
↧
Why can't this user save searches? "Argument "auto_summarize" is not supported by this handler."
For some reason I have one user (unfortunately my manager) who is unable to save a report or alert.
He is getting:
"**Encountered the following error while trying to save: Argument "auto_summarize" is not supported by this handler."**
He has exactly the same roles as I do, and I am able to save his searches.
We are running:
Splunk Version 6.5.2
Splunk Build 67571ef4b87d
Any help would be appreciated...
↧
Automatically capitalize the first letter of every word that follows a period?
I am looking for the proper SPL to capitalize the first letter of every word that follows a period. I have tried several different ways using the eval upper() function, but can't quite get it right. Any help would be appreciated.
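A sketch of one way to do it, assuming a Splunk version that has the mvmap eval function and a hypothetical field named message: split on the period-space boundary, upper-case the first character of each piece, and stitch it back together:
| eval parts = split(message, ". ")
| eval parts = mvmap(parts, upper(substr(parts, 1, 1)).substr(parts, 2))
| eval message_cap = mvjoin(parts, ". ")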
Thanks,
↧
timestamp and line breaks
The timestamp and line breaking don't seem to be working as expected. These are nagios/pnp4nagios logs.
I get a burst of events similar to the data below every few seconds or minutes, and it seems the first line of each data burst is recognized by the TIMET timestamp, but all other events within that burst aren't being handled correctly.
**TIMET::1506034709** = timestamp in epoch time
**DATATYPE::** = start/end of event
Data is sent in this format: **DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\t**
**Here's the data:**
DATATYPE::HOSTPERFDATA TIMET::1506034709 HOSTNAME::host1 HOSTPERFDATA::time=0.000342s;;;0.000000;20.000000 HOSTCHECKCOMMAND::check_tcp!255.255.25.25!443 HOSTSTATE::UP HOSTSTATETYPE::HARD HOSTOUTPUT::TCP OK - 0.000 second response time on 255.255.25.25 port 443
DATATYPE::HOSTPERFDATA TIMET::1506034713 HOSTNAME::host2 HOSTPERFDATA::time=0.000368s;;;0.000000;20.000000 HOSTCHECKCOMMAND::check_tcp!255.255.25.256!443 HOSTSTATE::UP HOSTSTATETYPE::HARD HOSTOUTPUT::TCP OK - 0.000 second response time on 255.255.25.256 port 443
**Here's the sourcetype config: - timestamp/linebreak**
[nagios:core:perfdata]
# event breaks: I've tried auto and every line
BREAK_ONLY_BEFORE = ([\r\n]+)DATATYPE
SHOULD_LINEMERGE = true
TIME_FORMAT = %s
TIME_PREFIX = TIMET::
MAX_TIMESTAMP_LOOKAHEAD = 128
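Since every record starts with DATATYPE:: on its own line, a sketch of an alternative that avoids line merging altogether and breaks directly on that boundary (same sourcetype, attributes as they would appear in props.conf):
[nagios:core:perfdata]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)DATATYPE::
TIME_PREFIX = TIMET::
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 32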
↧
Adding IAM roles to Splunk TA AWS
Given the number of HWFs we have running the AWS TA, we have to have some form of automation around getting the roles loaded. I have been using the REST API, which works great, but I would like to know how I can write this config (i.e. in conf files) into the Splunk AWS TA config. I ask because we use git for deployment and version control, and if I could find how to write these roles into Splunk, I could do it through git. Another way to ask the question is: can I use the deployment server to deploy these roles?
Thanks!
↧
↧
Another JSON Event Break Assistance request ..
An excerpt from my JSON output is below.
I'm trying to event break at the following line, "type": "story", where a new event begins.
I have tried several posts but cannot get it working currently.
{
"total_count": 195,
"data": [
{
"type": "story",
"creation_time": "2017-09-06T01:29:57Z",
"parent": {
"type": "feature",
"id": "45003"
},
"version_stamp": 18,
"release": {
"type": "release",
"id": "14001"
},
"sprint": {
"type": "sprint",
"id": "21001"
},
"description": null,
"invested_hours": 4,
"id": "41051",
"last_modified": "2017-09-18T05:30:31Z",
"phase": {
"type": "phase",
"id": "4029"
},
"owner": {
"type": "workspace_user",
"id": "13010"
},
"author": {
"type": "workspace_user",
"id": "13001"
},
"story_points": null,
"product_areas": {
"total_count": 0,
"data": []
},
"team": {
"type": "team",
"id": "4001"
},
"remaining_hours": 0,
"user_tags": {
"total_count": 0,
"data": []
},
"name": "Add portlet to PSR - tasks planned and milestones",
"estimated_hours": 9
},
{
"type": "story",
"creation_time": "2017-07-31T02:08:15Z",
"parent": {
"type": "feature",
"id": "26056"
},
"version_stamp": 15,
"release": {
"type": "release",
"id": "12002"
},
"sprint": {
"type": "sprint",
"id": "19003"
},
"description": null,
"invested_hours": 0,
"id": "28001",
"last_modified": "2017-08-31T03:13:37Z",
"phase": {
"type": "phase",
"id": "4030"
},
"owner": {
"type": "workspace_user",
"id": "13010"
},
"author": {
"type": "workspace_user",
"id": "13001"
},
"story_points": 0,
"product_areas": {
"total_count": 0,
"data": []
},
"team": {
"type": "team",
"id": "4001"
},
"remaining_hours": 0,
"user_tags": {
"total_count": 0,
"data": []
},
"name": "As a PM, I can manage Projects of the NEC DTS Project Type",
"estimated_hours": 0
},
{
"type": "story",
"creation_time": "2017-07-21T05:11:24Z",
"parent": {
"type": "feature",
"id": "23069"
},
"version_stamp": 14,
"release": {
"type": "release",
"id": "12002"
},
"sprint": {
"type": "sprint",
"id": "19003"
},
"description": null,
"invested_hours": 1,
"id": "26060",
"last_modified": "2017-08-04T03:02:16Z",
"phase": {
"type": "phase",
"id": "4030"
},
"owner": {
"type": "workspace_user",
"id": "6001"
},
"author": {
"type": "workspace_user",
"id": "1008"
},
"story_points": 1,
"product_areas": {
"total_count": 0,
"data": []
},
"team": {
"type": "team",
"id": "4001"
},
"remaining_hours": 0,
"user_tags": {
"total_count": 0,
"data": []
},
"name": "As a BA I can assign requirements to PPM work packages. (many to many relationship)",
"estimated_hours": 1
},
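A props.conf sketch for this, with a placeholder sourcetype name: turn off line merging and put the break between one story object and the next. The first capture group in LINE_BREAKER is discarded, so each event after the first begins at the next { "type": "story" and the previous one ends at its closing brace; the first and last events will still carry the surrounding wrapper ("total_count", "data": [ and the closing brackets), which can be trimmed with SEDCMD if it matters.
[my_json_stories]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{\s*"type":\s*"story"
TIME_PREFIX = "creation_time":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TRUNCATE = 0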
↧
DecisionTree Graph
Hi,
After building a machine learning model using DecisionTreeRegressor or DecisionTreeClassifier, I can use the "| summary" command to list out the content.
Is there any better way to visualize the tree instead of row by row? For example, graphviz can plot a tree nicely (see the picture as reference). Is there any customized code or Splunk app that can do this?
![alt text][1]
[1]: /storage/temp/217615-2017-09-22-15-35-26-testpng-windows-photo-viewer.jpg
↧
Network Resolution (DNS) Data Model is not working correctly
Hi All,
Recently I deployed the Enterprise Security app for our customer. I am already getting data from our DNS server and sending it to the indexer, but the search head with the ES app installed can't display it on the dashboards. I checked and found that the Network Resolution (DNS) and Domain Analysis data models can't complete building (they are always in building status; please see my attachment). I tried to rebuild and modify the summary range many times, but it doesn't seem to work. Has anyone experienced this issue?
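A quick check that often narrows this down, sketched with the CIM model names: see whether any events actually map into the data model's constraints. If the first search returns nothing, the problem is tagging/CIM compliance on the DNS data rather than the acceleration itself; the second shows which sourcetypes are feeding the model:
| datamodel Network_Resolution DNS search
| head 10

| tstats summariesonly=false count from datamodel=Network_Resolution by sourcetype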
Thanks,
↧