So, I decided to move cold bucket data to NAS storage. All went well, except for one index where I changed maxWarmDBCount in its stanza from the default 300 to 299. When I started Splunk back up, cold bucket storage for that index grew like crazy: if the cold bucket data had been 100GB, it grew to 200, 300, 400GB and so on. On some indexers it even ran the NAS volume out of storage. Why? So far Splunk support can't tell me. What I expected was to see roughly an additional 750MB of data in the cold bucket folder, since lowering the warm bucket count from 300 to 299 should just roll one more warm bucket over to cold.
Any ideas? Why did the cold data double or more? What's the point of lowering maxWarmDBCount if it's going to double or triple the amount of cold data? That defeats the whole purpose.
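For reference, the only setting I touched for that index (besides the cold path change described below) was the warm bucket cap. Roughly like this in indexes.conf, with the index name as a placeholder for my real one:

    [my_index]
    # lowered from the default of 300
    maxWarmDBCount = 299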
Sequence:
- Stopped Splunk on all indexers
- Made my change to point cold storage to a new folder (the same on all indexers, all configured identically), and changed maxWarmDBCount to 299 in the stanza for the one index that has data in cold (rough sketch below)
- Copied data from the old cold directory to the new directory
- Started Splunk
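If it helps, the stanza on every indexer ended up looking roughly like this (paths and index name here are made up; mine point at the NAS mount):

    [my_index]
    homePath   = $SPLUNK_DB/my_index/db
    coldPath   = /mnt/nas/splunk_cold/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb
    maxWarmDBCount = 299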