Hi Splunk,
I work for a corporate partner and have a question.
I've been having issues with auto-finalization of sub-searches and with understanding how to configure search/sub-search parameters. I'm wondering whether any attempt has been made to model a search's time to complete in terms of its constraints.
For example, suppose we want to run a join command with 1000 Splunk tickets in the outer search and 1000 events in the sub-search (assume both searches are 'dense'). Something like the following:
index=ticket_data | head 1000
| join type=left ticket_id max=0 [ search index=event_data | head 1000 ]
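For context, my understanding is that sub-search truncation is governed by the [subsearch] stanza in limits.conf (maxout, maxtime, ttl); the values below are only illustrative and should be checked against the limits.conf.spec for your Splunk version:

[subsearch]
# max results a sub-search may return before it is truncated (illustrative)
maxout = 10000
# max seconds a sub-search may run before it is finalized (illustrative)
maxtime = 60
# seconds the cached sub-search results are kept (illustrative)
ttl = 300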
Does a tool exist that performs convex optimization to predict the following:
1. When searches would auto-finalize.
2. How to optimize the architecture and deployment configuration so that a particular search completes rather than auto-finalizing.
An example of what such a program would solve might look like the following:
Best parameters in terms of search speed: p* = arg min_p [ Δt_search_head(p) + Δt_indexers(p) ], subject to the constraints below,
where p collects both the fixed constraints and the free parameters we optimize over (a brute-force sketch follows the list):
n = # of tickets in the main search (constraint)
f_n = # of fields per ticket (constraint)
m = # of events in the join sub-search (constraint)
f_m = # of fields per event (constraint)
k = # of indexers used (independent or constraint)
l = # of search heads used (independent or constraint)
configuration parameters, e.g. maxout, maxtime, ttl (independent or constraint)
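If nothing like this exists, even a brute-force version seems useful as a starting point. Here is a minimal Python sketch of the idea, where the cost functions, coefficients, and parameter grids are all hypothetical placeholders (in practice they would have to be fit against timings from the Job Inspector):

from itertools import product

# Fixed constraints taken from the example search above
n, f_n = 1000, 20   # tickets in the outer search, fields per ticket (assumed)
m, f_m = 1000, 15   # events in the sub-search, fields per event (assumed)

def t_search_head(k, l, maxout):
    # Hypothetical model: join cost on the search head grows with both
    # result sets and shrinks as search heads are added.
    return (n * f_n + min(m, maxout) * f_m) / (l * 1e5)

def t_indexers(k, l, maxout):
    # Hypothetical model: retrieval cost spread across k indexers.
    return (n + m) / (k * 1e4)

best = None
for k, l, maxout in product([2, 4, 8], [1, 2], [1000, 10000, 50000]):
    # Feasibility: the sub-search must return all m events before
    # maxout truncates it, or the join silently loses rows.
    if maxout < m:
        continue
    total = t_search_head(k, l, maxout) + t_indexers(k, l, maxout)
    if best is None or total < best[0]:
        best = (total, {"k": k, "l": l, "maxout": maxout})

print(best)   # (predicted minimum time, best parameter assignment)

A real tool would replace the grid search with a proper solver once the cost model is known to be convex in p.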
If such a tool doesn't exist, any insights or documentation on how to approach developing one would be greatly appreciated.
Thanks for your help!