Hi Splunk Experts,
I am sending events to Splunk Enterprise in the following nested JSON format:
{
  "compliance": "Compliance Unknown",
  "ctupdate": "hostinfo",
  "host_properties": [
    {
      "name": "_times",
      "since": 1508907165,
      "value": "last_onsite"
    },
    {
      "name": "compliance_state",
      "since": 1508268020,
      "value": "N/A"
    },
    {
      "name": "engine_seen_packet",
      "since": 1508907165,
      "value": "Yes"
    },
    {
      "name": "guest_corporate_state",
      "since": 1508268020,
      "value": "N/A"
    },
    {
      "name": "linux_operating_system",
      "since": 1508907165,
      "value": "2.0.2"
    },
    {
      "name": "online",
      "since": 1508907165,
      "value": "Yes"
    },
    {
      "name": "ssh_open_port",
      "since": 1508959259,
      "value": 0
    },
    {
      "name": "va_netfunc",
      "since": 1508959247,
      "value": "Linux Desktop/Server"
    }
  ],
  "ip": "192.168.1.17",
  "tenant_id": "acd1034578ef"
}
I want to create a dashboard that goes over all such events and displays a pie chart of the last `online` value for each IP received. To achieve this, I've been using the following query:
`ct_hostinfo` `get_sourcetypes`
| spath output=prop_name path=host_properties{}.name
| spath output=prop_val path=host_properties{}.value
| eval prop_key_val=mvzip(prop_name, prop_val, "---")
| mvexpand prop_key_val
| eval prop_key_val=split(prop_key_val, "---")
| eval prop_name=mvindex(prop_key_val, 0)
| eval prop_val=mvindex(prop_key_val, 1)
| search prop_name=online
| dedup ip, prop_name
| stats count by prop_val
The above query does the job, but **it doesn't scale.** Once my dashboard has collected more than ~5k events, I start seeing the following error:
command.mvexpand: output will be truncated at 1300 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.
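From what I understand, that threshold is the limits.conf setting named in the error. A minimal sketch of raising it would look like the following, but I assume this only postpones the truncation rather than making the search actually scale:

# limits.conf on the search head -- sketch only; 1024 is an arbitrary
# value I picked, double the default 500MB cap the error mentions
[mvexpand]
max_mem_usage_mb = 1024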
I would like to know if I can somehow scale the above search so that it can handle a greater number of events. Would I need to define field extractions in transforms.conf/props.conf? I don't know much about field extraction configs.
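For illustration, is something like the following props.conf stanza what is meant by that? This is only a sketch: I'm assuming the events arrive under a sourcetype literally named ct_hostinfo, and I don't know whether automatic search-time JSON extraction would remove the need for mvexpand at all:

# props.conf -- sketch; the sourcetype name ct_hostinfo is my assumption
# KV_MODE = json enables automatic search-time JSON field extraction,
# so host_properties{}.name etc. become multivalue fields
[ct_hostinfo]
KV_MODE = json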
I would be very grateful if someone could suggest a better query or suitable field extractions.
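For reference, one variant I experimented with avoids mvexpand entirely by indexing into the parallel multivalue fields, though I can't tell whether it behaves correctly in all cases (the field name online_val is just something I made up):

`ct_hostinfo` `get_sourcetypes`
| spath output=prop_name path=host_properties{}.name
| spath output=prop_val path=host_properties{}.value
| eval online_val=mvindex(prop_val, mvfind(prop_name, "^online$"))
| dedup ip
| stats count by online_val

The idea is that since prop_name and prop_val are parallel arrays, mvfind returns the position of "online" in prop_name and mvindex reads the entry at that position from prop_val, so no event ever has to be expanded into multiple results.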
Thanks.