Hello,
Let me preface this by saying that I am very new to Splunk, MapR, NFS, and big data in general. I tried researching, but much of the documentation and many forum answers either go over my head or require adaptations to fit my scenario that I don't know how to make.
I have a Splunk cluster with 8 indexers, 3 search heads, and 2 admin nodes. Instead of using another server for frozen data storage, I would like to use a small MapR cluster so that I can create one volume per indexer. My instructions are to "create a NFS-exported directory and then create one dedicated directory per indexer in that NFS-exported directory."
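For what it's worth, my mental model of those instructions is something like the sketch below: one root directory on the NFS export, with one subdirectory per indexer. The `FROZEN_ROOT` variable and the `/mapr/...` mount path in the comment are just my guesses, not anything I've verified; here it defaults to a local demo directory so it's safe to run anywhere.

```shell
# Sketch: create one dedicated frozen directory per indexer under the
# NFS-exported root. In production FROZEN_ROOT would point at the MapR NFS
# mount (something like /mapr/my.cluster.com/splunk_frozen -- hypothetical
# cluster name); by default it uses a local demo directory instead.
FROZEN_ROOT="${FROZEN_ROOT:-./frozen_demo}"

# One subdirectory per indexer (I have 8 indexers).
for i in 1 2 3 4 5 6 7 8; do
  mkdir -p "${FROZEN_ROOT}/indexer${i}"
done

# List the per-indexer directories that were created.
ls "${FROZEN_ROOT}"
```

Is that roughly what "one dedicated directory per indexer in that NFS-exported directory" means?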
I read this article, http://docs.splunk.com/Documentation/Splunk/6.4.3/Indexer/Automatearchiving, but am unsure whether I should go with automatic archiving or archiving with a script. These instructions to modify indexes.conf are still unclear to me:
[<index name>]
coldToFrozenDir = "<path to frozen archive>"
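If I'm reading the doc correctly, a filled-in stanza might look like the following. The index name and the `/mapr/...` path below are purely my guesses about what a MapR NFS mount point would look like, not something I've confirmed:

```ini
# Hypothetical indexes.conf stanza: freeze buckets from the "main" index
# into this indexer's dedicated directory on the MapR NFS mount.
# (Cluster name and directory layout are assumptions on my part.)
[main]
coldToFrozenDir = "/mapr/my.cluster.com/splunk_frozen/indexer1"
```

Each indexer would presumably point at its own subdirectory (indexer1, indexer2, and so on).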
What would an example of such a path look like in a MapR setting? How do I create the frozen archive directory itself? And is there a special way to load data into MapR, or will Splunk take care of that when it freezes buckets?
Then I keep reading about Hunk and Splunk Hadoop Connect, which clouds the picture further. I don't think I will be using Hunk, but do I need Splunk Hadoop Connect to make Splunk work with MapR?
Thank you in advance for any guidance or advice you can provide.