I am seeing some odd behavior. My setup is this: Splunk 6.3.1 Enterprise, 1 search head, 4 indexers, 1 forwarder, plus a license manager/deployment server.
The props.conf file is deployed to the search head, all indexers, and the forwarder. It looks like this:
[json_foo]
FIELDALIAS-curlybrace=office{} as office processors{} as processors
INDEXED_EXTRACTIONS=json
KV_MODE=none
MAX_TIMESTAMP_LOOKAHEAD=30
NO_BINARY_CHECK=true
TIMESTAMP_FIELDS=upTime
TIME_FORMAT=%Y-%m-%dT%H:%M:%S%Z
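As a sanity check on the format string itself, here is a throwaway search that runs the same pattern through strptime, using one of the timestamps from the sample data further down (makeresults is available in 6.3):

| makeresults
| eval upTime="2016-04-22T16:40:15Z"
| eval parsed=strptime(upTime, "%Y-%m-%dT%H:%M:%S%Z")
| eval readable=strftime(parsed, "%Y-%m-%d %H:%M:%S %z")
| table upTime, parsed, readable

parsed comes back populated for me, so the format string itself seems able to handle the trailing Z.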
The inputs.conf file on the forwarder looks like this:
[batch:///data/ingest/json-data]
index=foo
sourcetype=json_foo
move_policy=sinkhole
blacklist=\..*\.json
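The blacklist is there to skip dot-prefixed scratch files. To sanity-check the regex I ran it through match() against a made-up path (the filename below is just an example; my understanding is that blacklist is applied to the full path):

| makeresults
| eval file="/data/ingest/.tmp-export.json"
| eval result=if(match(file, "\..*\.json"), "blacklisted", "ingested")
| table file, result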
The issue I'm seeing is that I am getting "DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous ..." messages for **some** events. Typically it is all of the events in a particular file that trigger the error. The warning shows up in splunkd.log on the forwarder AND on whichever indexer indexes the event.
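For reference, this is roughly how I pull those warnings out of the _internal index rather than tailing splunkd.log on each box:

index=_internal sourcetype=splunkd component=DateParserVerbose "Failed to parse timestamp"
| stats count by host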
A sample of the data is:
[
  {
    "identifier": "ccce-da12-83ac",
    "city": "baltimore",
    "upTime": "2016-04-22T16:40:15Z",
    "filesize": 14423,
    "user": ["user1", "user2", "user3"]
  },
  {
    "identifier": "cc32s-da12-83de",
    "city": "paris",
    "upTime": "2016-04-22T16:43:52Z",
    "filesize": 1223,
    "user": ["user1", "user2"]
  }
]
When I look at the source files, the data looks fine, and when I examine what is indexed in Splunk, the upTime value and _time are the same. I've even compared the underlying epoch values and they are identical, using a search like this:
index=foo | eval mytime=_time | eval mytime2=strptime(upTime,"%Y-%m-%dT%H:%M:%S%Z") | table mytime, mytime2
I'm hoping I can just ignore this, since it is only a warning and the values are correct. Can anyone see if I am doing something wrong? Any help would be appreciated.