Hi all,
We are currently estimating our network bandwidth needs and one of the questions we are trying to answer is about compression ratios for index replication.
So let's assume all our data comes from one site — let's say 100 GB per day. That data goes through the following chain:
Collection (Universal Fws) ==(X GB)==> Filtering (Heavy FW) ==(Y GB)==> Storage (Index Site 1) ==(Z GB)==> Replication (Index Site 2)
I am trying to work out what the average values of X, Y and Z (daily network transfer) would be.
* X: Uncooked data. With SSL compression enabled, the ratio would be roughly 14:1 on average according to [this answer][1], so X ≈ 100 / 14 ≈ 7.14 GB
* Y: Cooked data, also over SSL. The ratio stays roughly 14:1, with no change from the previous [hop][2], so Y ≈ 7.14 GB
* Z: Indexed data. Bucket is replicated and includes both RAW and Indexes = ?
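For what it's worth, here is the arithmetic I'm using, as a quick Python sketch. The 14:1 ratio is the assumption taken from the linked answer, not a measurement, and `replication_gb` is just a placeholder for whatever bucket-size-to-raw ratio turns out to apply to Z (which is exactly the number I'm missing):

```python
# Back-of-the-envelope bandwidth estimate for the forwarding chain above.
# Assumptions (not measured):
#   - 14:1 compression on the SSL-compressed forwarder links (per the linked answer)
#   - the replication ratio for Z is left as a parameter, since that's the open question

DAILY_RAW_GB = 100        # raw data generated per day at the source site
FORWARDER_RATIO = 14      # assumed SSL compression ratio (14:1)

x_gb = DAILY_RAW_GB / FORWARDER_RATIO   # UF -> heavy forwarder
y_gb = DAILY_RAW_GB / FORWARDER_RATIO   # heavy forwarder -> indexer

def replication_gb(bucket_to_raw_ratio):
    """Z for a hypothetical bucket-size-to-raw ratio (raw journal + index files)."""
    return DAILY_RAW_GB * bucket_to_raw_ratio

print(f"X = {x_gb:.2f} GB/day")
print(f"Y = {y_gb:.2f} GB/day")
```

So if someone can tell me a realistic bucket-to-raw ratio for replicated buckets (raw plus index files), I can plug it into `replication_gb` and I'm done.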
Any help would be much appreciated.
Thanks,
J
[1]: https://answers.splunk.com/answers/92067/forwarder-output-compression-ratio.html
[2]: https://answers.splunk.com/answers/92067/forwarder-output-compression-ratio.html