How do I handle checksum errors on a file being written to by multiple processes on a Linux VM universal forwarder?

I've got a Universal Forwarder running on a Red Hat Linux VM that is monitoring a particular type of error log file. In some cases, multiple processes can write to the same error log file at the same time. When this happens, it appears to trigger a checksum error on the file, which causes the entire error log to be reindexed. The files are only ever appended to; the first 256 bytes of the file do not change.

I suspect (but haven't been able to prove) that there's some sort of race condition when multiple processes write to the file simultaneously that makes the log appear inaccessible to the forwarder, or makes the first 256 bytes appear altered, or something equally strange. This is an application error log, so short of changing how the application works (which isn't likely to happen), that behavior isn't going to change.

I've considered altering the CRC parameters in my inputs.conf file, but I'm not sure what I would change them to, since I'm not convinced it would make a difference. Has anyone else experienced this behavior? If so, how did you account for it?

(The good news is that these logs are tiny, so the reindexing isn't making a huge impact on our indexing bandwidth or disk consumption for the index. It is, however, making the logs a nightmare to review: where there were 500 errors from a given server in a given day, there are now 10,000+ because of all the times the files get reindexed after checksum errors.)
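For context, the monitor stanza I'd be tweaking looks roughly like the sketch below. The path, index, and sourcetype are placeholders, and the two CRC-related settings shown (initCrcLength, which defaults to 256, and crcSalt) are the documented inputs.conf parameters I'd be guessing at, not a known fix:

    # Hypothetical stanza - path, index, and sourcetype are illustrative only
    [monitor:///var/log/myapp/error.log]
    index = app_errors
    sourcetype = myapp_error
    # initCrcLength defaults to 256; raising it makes the forwarder fingerprint
    # more of the file header before deciding whether it has already seen the file
    initCrcLength = 1024
    # crcSalt = <SOURCE> mixes the full file path into the checksum, so files
    # with identical headers at different paths aren't treated as the same file
    crcSalt = <SOURCE>

From what I can tell, crcSalt = <SOURCE> mainly guards against distinct files that share identical headers, so initCrcLength seems like the more relevant knob for my situation, assuming either of them helps at all.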
