Other answers I have found don't quite work in my case. I have seen similar questions where this can be done based on, say, "type=" fields, but those append/join suggestions don't quite fit either.
Hoping someone has a simple solution while I continue to hack/dig for one myself.
This query LOOKS like it gives me what I want, total errors / total count * 100, but the eval for the error rate does not use the error count from the matching bucket. It seems to use the first error count for every bucket's percentage: every bucket's error rate comes out as 12 / (that second's total count) * 100, because 12 appears to be the error count of the first one-second bucket.
Each of the joined queries works fine by itself and produces a nice line graph.
index=prod_stuff source="*xyzzy*"
| bucket _time span=1s
| stats count AS totalCount by _time, host
| join
    [ search index=prod_stuff source="*xyzzy*" ("FATAL" OR "ERROR" OR "stringa" OR ...)
      NOT ("WARN" OR "string1" OR "string2" OR "string3" OR "string4" OR ...)
      | bucket _time span=1s
      | stats count AS totalErrors by _time, host ]
| eval errorRate=totalErrors/totalCount*100
| xyseries _time, host, errorRate
This produces a very nice-looking graph; unfortunately the numbers are wrong.
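One thing I am not sure about: maybe the join needs its fields called out explicitly so each error count lines up with its own bucket, something like the line below (just a guess on my part, not verified):

| join type=left _time, host
    [ search ... same error subsearch as above ... ]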
I don't care whether this is done with a join; the most efficient way to do it is what I am looking for.
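If a join is not needed at all, I am wondering whether something like this single-pass version would work. This is only a sketch, not tested: it assumes that eval match() against _raw with a case-insensitive (?i) regex is an acceptable stand-in for my quoted search terms, and the remaining terms from my two lists would still need to be added to the alternations:

index=prod_stuff source="*xyzzy*"
| bucket _time span=1s
| eval isError=if(match(_raw, "(?i)FATAL|ERROR|stringa") AND NOT match(_raw, "(?i)WARN|string1|string2|string3|string4"), 1, 0)
| stats count AS totalCount, sum(isError) AS totalErrors by _time, host
| eval errorRate=totalErrors/totalCount*100
| xyseries _time, host, errorRate

If that is on the right track, totalErrors and totalCount come from the same _time/host row, so the percentage could not pick up the wrong bucket's error count.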
Thanks