I am pulling data from 30 to 40 log groups in 3 different regions using the Splunk Add-on for AWS. After about 10-15 minutes, I stop receiving the most up-to-date events from half of my log groups. Data initially comes in fine from all log groups, but once an input pulls the most recent data, it seems to never check that log group for new data again. The delay and interval settings are at their defaults, and I've confirmed that the most current events are reaching the CloudWatch Logs service. My only clue is this event in Splunk's internal logs, which appears for each of the affected log groups:
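For context, each input is configured along these lines. This is a simplified sketch, not my exact config: the stanza name is illustrative, and I've only shown the fields that appear in the error below (delay and interval are left at the add-on's defaults, so they're omitted):

```ini
# inputs.conf (Splunk Add-on for AWS) -- illustrative sketch
[aws_cloudwatch_logs://syslog-us-west-2]
region = us-west-2
log_group = syslog
# delay and interval not set, so the add-on's defaults apply
```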
2015-12-08 17:52:22,328 INFO pid=7026 tid=Thread-298 file=aws_cloudwatch_logs.py:_do_was_job_func:130 | Previous job of the same task still running. Exit current job. region=us-west-2, log_group=syslog
This event recurs every 10 minutes indefinitely, and Splunk never pulls data from the log group again.
Any ideas?