Hello Splunkers,
I have some questions about how much physical and virtual memory large KV store collections may require and consume on search heads.
Indeed, in my customer's deployment, I ran into trouble with large KV store collections (a few million objects per collection) that were likely consuming too much memory on our search heads, causing splunkd crashes (unexpected EOF) when the kernel killed splunkd to free physical memory.
So, for example, I have some KV store collections used to store what we call the baseline (a knowledge base of standard monitoring values per day of the week and per 5-minute span of the day).
This amounts to 2016 objects per server (see the quick arithmetic below), and since we manage more than 1300 production servers, the collection ends up with more than 3 million objects.
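For reference, here is the back-of-the-envelope check behind those numbers; it assumes one baseline object per server per 5-minute span of the week.

```python
# One object per server per 5-minute span of the week
# (7 days x 24 hours x 12 spans per hour):
spans_per_week = 7 * 24 * 12     # 2016 objects per server
servers = 1300
print(spans_per_week * servers)  # 2,620,800 objects at the 1300-server mark,
                                 # and our fleet keeps growing past that
```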
**As far as I could find, the documentation does not clearly mention memory requirements for KV store collections:**
http://dev.splunk.com/view/webframework-developapps/SP-CAAAEZJ
http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/ConfigureKVstorelookups
http://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutKVstore
What are the memory requirements? What is the impact on the servers? Are KV store collections held entirely in memory?
OK, so KV store data is first stored on disk, but what part of it ends up in memory?
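In case it helps the discussion, here is the kind of check I run to see what the KV store's mongod reports about itself. This is only a minimal sketch: it assumes the introspection endpoint below is available on your Splunk version, and the credentials and host are placeholders.

```python
# Query the KV store server status over the Splunk management port
# (run on the search head itself). Credentials are placeholders.
import json
import requests

resp = requests.get(
    "https://localhost:8089/services/server/introspection/kvstore/serverstatus",
    params={"output_mode": "json"},
    auth=("admin", "changeme"),  # replace with real credentials
    verify=False,                # management port often uses a self-signed cert
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```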
**I have started some tests and measurements. In the attached JPG you can see the impact of one KV store collection being generated (more than 3.5 million objects; the KV store was ready to be used at 12:50).**
*Note: these statistics come from a search head cluster of 4 nodes; each node has 16 GB of physical memory.*
![search head stats][1]
Is there any evaluation method that can be used to estimate memory requirements based on the number of objects (and acceleration)?
In the example above, the resulting KV store has a size of 2.5 GB, plus 750 MB for acceleration, which is not that big in the end.
But its impact on the memory utilization of the search heads seems quite significant.
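For lack of an official sizing method, the best I can do today is a naive linear extrapolation from my own measurements above. This sketch assumes the on-disk footprint (data plus acceleration) grows linearly with the object count, which is not guaranteed, and it says nothing about how much of that ends up resident in memory, which is precisely my question.

```python
# Naive linear size estimator based on my measurements:
# 3.5M objects -> 2.5 GB of data + 750 MB of acceleration.
GB = 1024 ** 3
MB = 1024 ** 2

measured_objects = 3_500_000
measured_bytes = 2.5 * GB + 750 * MB
bytes_per_object = measured_bytes / measured_objects  # ~990 bytes/object

def estimate_kvstore_gb(num_objects):
    """Estimated on-disk size (data + acceleration) in GB."""
    return num_objects * bytes_per_object / GB

print(round(estimate_kvstore_gb(5_000_000), 1), "GB for 5M objects")
```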
Thank you in advance for your help.
Guilhem
[1]: /storage/temp/76258-search-head-stats.jpg