
Using result fields for earliest/latest time in secondary search

I have an existing search that finds fields named "RunDate", "StartTime", and "EndTime" stored as part of test run summaries. The search then converts those time values into usable Unix timestamps via strptime:

    index="IDX1" sourcetype="SRC" ProjectName="PRJ"
    | eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
    | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
    | table RunDate stime etime
    | sort RunDate desc

Now for the tricky part: I would like a fourth column that uses the time frame in each row to perform a calculation on values coming from a different index/source:

    index="IDX2" "HOST" "data.metricId" IN (1234)
    | stats avg("data.metricValues{}.value") as average
    | eval total=average/100

Somehow, this second search needs to be time-constrained by earliest=stime and latest=etime for each RunDate, so that the results form a series. Is this possible? Can I run a secondary search/eval that uses calculated values from the primary search as its earliest and latest time constraints? I attempted this with a map search, but it seems that for a map search to work properly there must be an overlapping field, and here the only thing the two searches share is the time parameters.
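For reference, a minimal sketch of the map-based approach described above, assuming Splunk's map command behaves as documented: map runs the inner search once per row of the outer results and substitutes each row's field values into $stime$, $etime$, and $RunDate$ tokens, so no overlapping field is needed beyond the tokens themselves. The maxsearches=50 cap is an illustrative value, not something from the original question.

    index="IDX1" sourcetype="SRC" ProjectName="PRJ"
    | eval stime = strptime(StartTime,"%m/%d/%Y %I:%M:%S %p")
    | eval etime = strptime(EndTime,"%m/%d/%Y %I:%M:%S %p")
    | table RunDate stime etime
    | sort RunDate desc
    | map maxsearches=50 search="search index=IDX2 \"HOST\" \"data.metricId\" IN (1234) earliest=$stime$ latest=$etime$
        | stats avg(\"data.metricValues{}.value\") as average
        | eval total=average/100
        | eval RunDate=\"$RunDate$\""

Since stime and etime are epoch values produced by strptime, they are valid arguments to earliest/latest. The trade-off is that map launches one search per RunDate row, so it can be slow and is bounded by maxsearches.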
