My environment consists of one search head, two indexers, and one forwarder.
As for the data flow, logs forwarded with load balancing from the single forwarder are stored on the two indexers, and the search head runs distributed searches over those logs.
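For reference, the load balancing on the forwarder side is the usual setup of listing both indexers in outputs.conf; a sketch, with placeholder host names and port, would look like this:
######################
# outputs.conf on the forwarder (sketch; host names and port are placeholders)
[tcpout]
defaultGroup = indexer_group

[tcpout:indexer_group]
# Listing both indexers enables automatic load balancing
server = indexer_1:9997, indexer_2:9997
# autoLBFrequency is left at its default (30 seconds)
######################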
Recently, the partition where Splunk is installed on the two indexers came under disk pressure, so I investigated the cause and found that the growth of the data model summaries was responsible.
So, based on the following answer, I decided to change the storage location of the data model summaries on indexer_1.
https://answers.splunk.com/answers/108183/how-to-change-the-path-of-datamodel-summary.html
Then I applied the configuration and restarted, but it seemed that only indexer_1 stopped ingesting the logs targeted by the data model for about 40 minutes. (* There are two sourcetypes; the log in question is referred to as "sourcetype A" below.)
Settings (indexes.conf on indexer_1):
######################
[volume:sample]
path=/data/sample
maxVolumeDataSizeMB=500000
[index]
tstatsHomePath = volume:sample/index/datamodel_summary
######################
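As a side note, the effective value that indexer_1 resolves for this setting can be checked with btool, for example (the stanza name is the one from the snippet above):
######################
# Show the tstatsHomePath that Splunk actually resolves for this index
$SPLUNK_HOME/bin/splunk btool indexes list index --debug | grep tstatsHomePath
######################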
To confirm this, I checked the indexing rate for each sourcetype on the "Estimated Indexing Rate Per Sourcetype" panel of the Indexing dashboards in the DMC.
After I reverted the setting and restarted, I could confirm that the indexing rate increased again on the "Estimated Indexing Rate Per Sourcetype" panel.
Also, on indexer_2, logs of sourcetype "sourcetype A" continued to be ingested even during that time.
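The same trend can also be seen directly from metrics.log on the indexers; a search along these lines shows the per-sourcetype throughput by host (the sourcetype name here is just the placeholder used above, and note that metrics.log only records the busiest series per interval by default):
######################
index=_internal source=*metrics.log* group=per_sourcetype_thruput series="sourcetype A"
| timechart span=1m sum(kb) by host
######################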
To clarify the cause of the problem, I checked the internal logs of indexer_1 and the forwarder, but I could not find anything that pointed to a solution.
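Roughly, these are the kinds of internal-log searches I mean: the first checks the incoming forwarder connections recorded in metrics.log on the indexers, and the second checks the output-related messages in splunkd.log on the forwarder (the forwarder host name is a placeholder):
######################
index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=1m dc(sourceIp) by host

index=_internal host=<forwarder_host> sourcetype=splunkd
    (component=TcpOutputProc OR component=AutoLoadBalancedConnectionStrategy)
######################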
With load-balanced forwarding, is it possible that logs of one sourcetype are not sent to a particular indexer for such a long time?
Or is this a mistake in my configuration?
I would be grateful if someone could help me.