Should metrics support overwriting events instead of duplicating metrics

In Splunk 7.0.0, when sending data to a metrics index, it looks like one can send duplicate metric measurement events (i.e., the same tuple of time, metric name, and dimensions) and the metrics index will store all of the duplicates, which skews the statistics that come out. Is that the intended behavior for the metric store? Other time-series metric stores I have worked with use overwrite/last-in logic that preserves only the most recently indexed value for a given metric tuple. Similar logic here would seem to make more sense for the use cases I see for the metric store, but I freely admit I am making assumptions. Please clarify how allowing duplicate metric events is intended to be used/handled.

Note: my understanding of a distinct metric tuple is the timestamp (to the millisecond), metric name, and dimension fields. So, given the following two metric measurements that arrive at the indexer at different times (the first column), only the later-indexed one (the top row) would be saved in the index. Right now (as of Splunk 7.0.0), both are saved in the metrics index/store.

| indexing timestamp | metric timestamp | metric name | metric value | server |
| 1506708015.390 | 1506708000.000 | server.power.kwh | 126.06 | na-server-1 |
| 1506708010.242 | 1506708000.000 | server.power.kwh | 104.56 | na-server-1 |
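For illustration only, here is a minimal Python sketch (not SPL, and not how Splunk itself behaves) of the overwrite/last-in semantics described above, assuming a metric tuple is keyed on (metric timestamp, metric name, dimensions) and the most recently indexed value wins. The `ingest` function and in-memory store are hypothetical stand-ins, not part of any Splunk API.

    # Hypothetical last-in-wins store keyed on (metric timestamp, metric name, dimensions).
    # Splunk 7.0.0 does NOT do this; it keeps every duplicate measurement.
    from typing import Dict, Tuple

    # key: (metric_timestamp, metric_name, frozenset of dimension items)
    # value: (index_time, metric_value)
    MetricKey = Tuple[float, str, frozenset]
    store: Dict[MetricKey, Tuple[float, float]] = {}

    def ingest(index_time: float, metric_time: float, name: str,
               value: float, dimensions: Dict[str, str]) -> None:
        """Keep only the most recently *indexed* value for a given metric tuple."""
        key: MetricKey = (metric_time, name, frozenset(dimensions.items()))
        existing = store.get(key)
        if existing is None or index_time > existing[0]:
            store[key] = (index_time, value)

    # The two rows from the table above: same metric tuple, different indexing times.
    ingest(1506708010.242, 1506708000.000, "server.power.kwh", 104.56, {"server": "na-server-1"})
    ingest(1506708015.390, 1506708000.000, "server.power.kwh", 126.06, {"server": "na-server-1"})

    # Only the later-indexed measurement (126.06) survives.
    for (metric_time, name, dims), (idx_time, value) in store.items():
        print(metric_time, name, dict(dims), value)

Under this (assumed) model, the 104.56 reading would be overwritten by the 126.06 reading because it was indexed later for the same tuple, which is the behavior the question is asking Splunk to clarify or consider.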
