Hi everyone,
I'm currently working on a table in a dashboard which shows the location of physical and virtual servers. The physical servers have their own index, and so do the virtual servers. These indexes contain all kinds of information, including the location of the server. In my query for the table I use a join to combine the two, and so far it seems to be working.
This is the code for the table:
index=cmdb_horus source=baseline_servers name=$vm$
| dedup name
| eval VMName=upper(name)
| join VMName type=left [
search index=vcenter_script host=vcenter_platform Type=VM
| dedup VMName
| rename Datacenter as u_datacenter ]
| rename company AS Customer, u_overal_res_group AS "Primary Responsible Group", operational_status AS "Operational Status in CMDB", support_group AS "Primary Solver Group", os AS "Operating System", VMName AS "Server/Node name", u_datacenter AS Datacenter
| sort "Server/Node name"
| table "Server/Node name", "Operating System", "Operational Status in CMDB", Datacenter, Customer
Which results in this:
![alt text][1]
It mostly looks OK, but as you can see in the *Datacenter* column, it doesn't show the datacenter location for a couple of 32-bit servers (VMs). So I thought maybe those VMs just didn't have a location in the data, but when I use a query to look up the location of those servers, it returns them without a problem:
![alt text][2]
As you can see, the location is there in the data. Does anyone have a suggestion as to why the first query won't show that location for all rows?
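One thing worth ruling out (a guess, not a confirmed diagnosis): the outer search upper-cases name into VMName, but the subsearch's VMName is used as-is, so any VM whose vCenter name differs in case will never match the left join, leaving Datacenter empty. A minimal sketch that normalizes the key on both sides (note also that join options such as type=left are documented to come before the field list):

index=cmdb_horus source=baseline_servers name=$vm$
| dedup name
| eval VMName=upper(name)
| join type=left VMName [
    search index=vcenter_script host=vcenter_platform Type=VM
    | eval VMName=upper(VMName)
    | dedup VMName
    | rename Datacenter AS u_datacenter ]
| ... (rest of the query unchanged)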
Thanks!
[1]: /storage/temp/218657-knipsel.jpg
[2]: /storage/temp/218658-knipsel2.jpg
↧
Splunk shows certain fields as empty in my table, although the data is available, and works in other searches
↧
Splunk Assigning Random _time to part of my indexed data
Hello,
I have a CSV that is loaded weekly. Beginning in early September, ~20,000 of the ~90,000 records dropped each week were randomly assigned the timestamp 3/23/15 11:02:55.300 PM, while the other ~70,000 records were given the timestamp of when the file was dropped into the auto index. I have no idea why, and I cannot find that date anywhere in my data. Each week roughly 20,000 records get this timestamp, but the number is never consistent.
Below is a copy of my props.conf file for the sourcetype used. Can you help me figure out why this is happening? Or the best way to approach this problem? Thank you!
Also: all of my date_month, date_minute, etc. fields only contain information from the 3/23/15 timestamp; none of them reflect the file-drop timestamp given to the other ~70,000 records.
EXTRACT-extractedEmail = (?i)^(?:[^:]*:){3}\d+,\d+,\w+,\w+,\w+,\w+,(?P<extractedEmail>[^,]+)
EXTRACT-Number = (?i)^(?:[^,]*,){10}(?P<Number>[^,]+)
DATETIME_CONFIG =
NO_BINARY_CHECK = true
disabled = false
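For context on what might be happening: with DATETIME_CONFIG left empty, Splunk falls back to automatic timestamp recognition over the start of each event, which can latch onto any stray date-like string inside some rows (a guess, but consistent with ~20,000 rows getting the same odd 2015 timestamp). A sketch of settings that pin the extraction down, assuming the records contain a real timestamp column; the prefix regex and format below are placeholders to adapt:

[your_sourcetype]
TIME_PREFIX = ^(?:[^,]*,){5}
TIME_FORMAT = %m/%d/%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30

If the rows carry no usable timestamp and the file-drop time is what you want on every record, DATETIME_CONFIG = CURRENT stamps all events with index time instead.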
↧
↧
WinHostMon://service not retrieving the status of some services
We are using the WinHostMon://Service stanza in inputs.conf to monitor service status on Windows hosts, but it doesn't seem to be retrieving the status of some services, e.g. Splunk and Snare. Below is the config used. Is there any limitation on WinHostMon://Service?
[WinHostMon://Service]
index = winsvc
interval = 300
type = service
↧
Splunk Integration with WorkDay
We are exploring integrating WorkDay (https://www.workday.com/) logs with Splunk Enterprise. Is there any documentation available, or are there pointers, on the different integration patterns that can be applied here?
↧
Is there a way to use VLOOKUP function in Splunk?
Hello,
Among all the jobs running on the mainframe, I need to bring back only the ones that belong specifically to Control-M. For that matter, there's a .csv file containing an APPL column with 3-4 character alphanumeric values that correspond to the first 3-4 characters of the JOBNAME values specific to Control-M. So I am wondering: is there a way to rebuild the VLOOKUP function in Splunk, so that it looks up the .csv data and brings back only the JOBNAMEs that correspond to those APPL values?
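A sketch of one way to approximate VLOOKUP here, assuming the CSV has been uploaded as a lookup file (controlm_apps.csv and the index/sourcetype below are hypothetical names): a subsearch over the lookup can be turned into wildcarded JOBNAME search terms, which also copes with APPL values being either 3 or 4 characters:

index=mainframe sourcetype=jobs
    [ | inputlookup controlm_apps.csv
      | eval JOBNAME=APPL."*"
      | fields JOBNAME ]

The subsearch expands to (JOBNAME=ABC* OR JOBNAME=ABCD* OR ...), so only jobs whose names start with a Control-M APPL value are returned.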
↧
↧
Splunk Enterprise trial - Http Event Collector not working
I've installed the **Splunk Enterprise trial** and **enabled the HEC** feature as described here: http://dev.splunk.com/view/event-collector/SP-CAAAE7F, which enables sending machine data from my app into Splunk. I tried to **send a POST request to Splunk using Postman and got no response.**
method: POST
url : http://localhost:8088/services/collector
Authorization : my generated token
Why is there no response if I already enabled the HEC feature? It seems that no server is listening on that port at all.
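Two things worth checking, based on HEC defaults: HEC listens on HTTPS unless SSL was explicitly disabled, and the Authorization header must carry the literal "Splunk " prefix before the token, not the bare token. The request would look roughly like this (the token placeholder is yours to fill in):

POST https://localhost:8088/services/collector HTTP/1.1
Authorization: Splunk <your-token-here>
Content-Type: application/json

{"event": "hello world"}

In Postman that means switching the URL to https and prefixing the token with "Splunk " in the header value.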
What I don't understand about Splunk is: where is my data stored? Is Splunk Enterprise data stored only locally, to be used inside a company's LAN? Or does Splunk own servers in the cloud that store all my data? Do Splunk Enterprise and Splunk Cloud differ on that subject?
Thank you for your help.
↧
Field alias on a sourcetype that exists in more than one index returns different numbers of results
Hi,
I'm using Splunk 6.6.3 with the Enterprise Security app, with access only to the web interface.
I have two indexes, each with the same sourcetype:
index=index1 sourcetype=WindowsEventLogs
index=index2 sourcetype=WindowsEventLogs
WindowsEventLogs contains the same fields in both indexes, as expected.
I created an alias named "dhost" which corresponds with the existing field "dest". The field alias has global permissions, readable to everyone.
Next, I obtained the count of "dest" and "dhost" from each index, specifying a 1-minute range in the time picker (9:55:00 - 9:55:59). The results show different counts for the original "dest" field and the aliased "dhost" field:
index=index1 sourcetype=WindowsEventLogs | stats count(dest)  -> 612 (612 events)
index=index1 sourcetype=WindowsEventLogs | stats count(dhost) -> 335 (612 events)
index=index2 sourcetype=WindowsEventLogs | stats count(dest)  -> 19 (19 events)
index=index2 sourcetype=WindowsEventLogs | stats count(dhost) -> 4 (19 events)
I expected the numbers to match in each index. For example, I expected 335 to be 612, and I expected 4 to be 19.
I also tried the same scenario with "source" instead of "sourcetype" when creating the field alias, but the results were exactly the same.
If I create a field alias for a sourcetype whose name isn't shared with any other indexes, the numbers for "dest" and "dhost" do match as I expected.
Finally, I've read the Splunk docs, searched Google and answers.splunk.com, and can't find any mention of this behavior. Have I overlooked something?
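For reference while troubleshooting, this is roughly what the alias definition looks like on disk. It's worth confirming (a sketch, not a diagnosis) that the stanza targets the sourcetype rather than a source or host, and that it lives in an app whose props are shared globally, since scoping issues are a common reason an alias appears on only part of the matching events:

# props.conf in the app where the alias was created
[WindowsEventLogs]
FIELDALIAS-dhost = dest AS dhost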
Thanks.
↧
Obtaining cluster centres details from K-Means algorithm
I am using K-Means algorithm from Machine Learning toolkit to cluster some data.
After the algorithm has converged, I can see two new fields appended to the original data: cluster ID and cluster distance.
This is great; however, I also need the cluster centre for each cluster. I need this information to calculate the distance from new data points to each cluster centre and then assign those data points to the appropriate cluster.
Is there any way to accomplish this in Splunk?
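One workaround that follows from the definition of K-Means: after convergence, each cluster centre is simply the mean of the points assigned to it, so the centres can be recomputed from the fitted output with stats. A sketch, with hypothetical feature names:

... your base search ...
| fit KMeans feature_x feature_y k=4
| stats avg(feature_x) AS centre_x, avg(feature_y) AS centre_y BY cluster

The resulting one-row-per-cluster table could be saved with outputlookup and used later to compute distances from new data points to each centre.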
↧
How to display a list of hosts which satisfy a condition?
I have a query as follows:

| metadata type=hosts
| search [| inputlookup ABCD.csv
    | eval Device=mvindex(split(Device,"."),0)
    | search NOT "Device Type"="alys*"
    | rename Device AS my_hostname
    | eval host=lower(my_hostname)
    | fields host ]
| eval host=lower(host)
| append [| inputlookup ABCD.csv
    | eval Device=mvindex(split(Device,"."),0)
    | search NOT "Device Type"="alys*"
    | rename Device AS my_hostname
    | eval host=lower(my_hostname)
    | eval recentTime=0, lastTime=0, host=lower(host)
    | fields host recentTime lastTime ]
| dedup host
| eval category=case(recentTime>=relative_time(now(), "-24h"), "Systems reported to Splunk in last 24 hours", recentTime<relative_time(now(), "-24h") AND recentTime>0, "Systems reported to Splunk more than 24 hours ago", recentTime=0, "Systems never reported to Splunk")
| stats dc(host) AS total_hosts BY category
| addcoltotals labelfield=category label="Total"
| eventstats max(total_hosts) AS all_totals
| search NOT category="Total"
| eval Percentage=tostring(round(total_hosts/all_totals*100,2))."%"
| fields category total_hosts Percentage
| rename total_hosts AS "Host Count"
Which gives the result as follows
![alt text][1]
Now, instead of this, I want to modify the query to display only the list of hosts that have never reported to Splunk. It appears simple, but when I tried adding | search where category="Systems never reported to Splunk", it gave me no results. It would be great if anyone could help me modify the query to display results like below:
never_reported_systems
kjhkj
fkjhk
vkjhk
bkljhk
nkljhk
nkjh
[1]: /storage/temp/218659-today-pic.png
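Two observations, offered as guesses: search where category=... mixes two commands (it should be either search category=... or where category=...), and in any case the stats dc(host) BY category step has already collapsed the individual hosts away, so no filter applied after it can recover them. A sketch that keeps the per-host rows and filters before aggregating:

| metadata type=hosts
| search [ ...same subsearch as above... ]
| append [ ...same subsearch with recentTime=0 as above... ]
| dedup host
| eval category=case( ...same case() expression as above... )
| where category="Systems never reported to Splunk"
| table host
| rename host AS never_reported_systems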
↧
↧
Automatic Lookup of KVSTORE not working
I am using Splunk Enterprise 6.4.7. I have created a KV store by defining the collection in collections.conf:

[definitions]

and by providing the configuration in transforms.conf:

[definitions_lu]
collection = definitions
external_type = kvstore
field_list = _key, name, def, tag
I have populated the kvstore via the rest endpoints.
I am now trying to create an Automatic Lookup based on the "name" field in the kvstore.
I keep getting the error from my indexers that "The lookup table 'definitions_lu' does not exist". What do I need to do to get the indexers to recognize the table?
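One likely angle, hedged since the full deployment isn't visible here: KV store collections live on the search head and are not replicated to the indexers by default, which is exactly the situation where indexers report that the lookup table does not exist. Replication is enabled per collection in collections.conf:

# collections.conf
[definitions]
replicate = true

Separately, the transforms.conf attribute is documented as fields_list (plural), so the field_list spelling above is worth double-checking.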
↧
How can I extract the nested JSON at index time
Hello
I have some logs that contain nested JSON. If I add INDEXED_EXTRACTIONS = JSON, the non-JSON data does not appear, but the JSON is expandable and extracted.
Here's a sample of the log:
2017-10-31 18:27:07,444 priority=INFO app=apps thread=[stuff-2.0.177-v11111111].HttpsListenerConfig.worker.12 location=MessageProcessor line=151 _message="Message flow..." {appName=[stuff-2.0.177-v11111111, orderValue=10.00, field=1506373, retryCnt=0, field=12fdfg-123dsdf-213423vdc-dfg43, id=123456, field=123456789, field=2, field=220838349} responsePayload='{
"field": 220838349,
"field": 1292975431,
"field": "1506373",
"endTime": "2017-10-31T18:42:05.456Z",
"field": true,
"field": [
{
"field": -1,
"field": "",
"field": "31",
"field": "27",
"field": "16",
"field": {
"amount": 37.4,
"currency": "USD"
},
"field": "HOLD"
},
{
"field": -1,
"field": "",
"field": "31",
"field": "27",
"field": "17",
"field": {
"amount": 37.4,
"currency": "USD"
},
"field": "HOLD"
}
]
}' responseHttpStatus=200 timeTakenInMillis=2003
Any ideas how I can extract the JSON portion at index time while also keeping the rest? My current props are:
[sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%f
TRUNCATE = 100000
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
Maybe something I can do with transforms??
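As far as I know there is no INDEXED_EXTRACTIONS mode that copes with a JSON payload embedded after a non-JSON prefix, so the practical index-time route is regex-based transforms that pull out the specific values you need; WRITE_META makes them indexed fields. A sketch for one field, with assumed names:

# props.conf
[sourcetype]
TRANSFORMS-jsonfields = extract_endtime

# transforms.conf
[extract_endtime]
REGEX = "endTime":\s*"([^"]+)"
FORMAT = endTime::$1
WRITE_META = true

If search time turns out to be acceptable after all, isolating the payload and feeding it to spath extracts the whole nested structure with no indexing changes: | rex field=_raw "responsePayload='(?<payload>\{.+\})'" | spath input=payload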
Thanks!!
↧
is it possible to do a dynamic cidr-based match via lookup on an inline search?
Hi. Is it possible to use match_type=CIDR(ipfield) in an ad hoc lookup from the search bar, as opposed to the automatic lookup you'd configure in transforms.conf? Based on this old question, https://answers.splunk.com/answers/228229/is-it-possible-to-get-a-count-of-ips-from-one-look.html, I'm guessing the answer is no, but I wanted to check. If it's not currently possible, is it an enhancement on the road map?
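For what it's worth, a sketch of the middle ground: match_type is (to my knowledge) only honored as part of a lookup definition, but a definition in transforms.conf can still be invoked ad hoc with the lookup command instead of being wired up as an automatic lookup (names here are hypothetical):

# transforms.conf
[cidr_networks]
filename = networks.csv
match_type = CIDR(network)

Then, from the search bar:

... | lookup cidr_networks network AS src_ip OUTPUT zone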
↧
Finding Unique Pairs of Data in Interchangeable Fields
Hi folks, I'm parsing Cisco CallManager call detail records in our Splunk system and I'd like to see which pairs of telephone numbers have the most calls between them. Here's the tricky bit: I don't care who called whom; I want to aggregate calls from A->B and B->A into one counter and list the top 10 pairs of callers who call each other the most.
The code below gives me a nice list of top calling pairs at the moment, but A->B and B->A are listed as two distinct pairs. How do I aggregate them?
index=cucm | stats count by callingPartyNumber,finalCalledPartyNumber |sort by -count
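A sketch of one common trick: build an order-independent pair key by sorting the two numbers before grouping, so A->B and B->A collapse into the same value:

index=cucm
| eval pair=mvjoin(mvsort(mvappend(callingPartyNumber, finalCalledPartyNumber)), " <-> ")
| stats count BY pair
| sort -count
| head 10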
↧
↧
Limit on size of event/data passed to 'collect' command?
We have a number of scheduled searches that run every few minutes to search for events recently indexed that match certain criteria (e.g. events submitted by security devices). These events are enriched with data from threat intel feeds and then passed to a macro that uses the `collect` command to aggregate the events in a summary index called `alert_events`. Most of the events that pass through this process come out fine, but we've noticed recently that very large events are causing issues. For example, some of the events that a particular scheduled search is alerting on start out with 150 fields extracted at search time, but the event that arrives in `alert_events` index has only 100 fields, and the rest of the fields from the original event are just missing. If I run the scheduled search without the macro calling `collect`, I see all 150 fields, but if I apply the macro at the end of the search, the event indexed in `alert_events` has only 100 fields.
Is there a maximum size (or a maximum number of extracted fields) for events being passed to `collect`? I can't find any such limit documented on Splunk Docs.
I am also open to other explanations for why the results of a given search show 150 fields, and applying `|collect index=alert_events sourcetype=ouralerts source=ouralerts` results in indexed events with only 100 fields. Thanks!
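One hedged guess worth testing: collect writes results to a spool file that is re-indexed under the stash sourcetype, so parse-time limits for that sourcetype, notably TRUNCATE, apply to the summary events even though the originals indexed fine. If the large events are being clipped at a character limit rather than a field count, an override like this on the indexers would rule it out:

# props.conf
[stash]
TRUNCATE = 0

(TRUNCATE = 0 disables the cutoff; the usual default of 10,000 characters per event is a limit a 150-field enriched event could plausibly exceed.)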
↧
How to display the color legend on top of the dashboard panel instead of to the right?
I have a dashboard panel as below
![alt text][1]
As you can see, the words in the color legend (MSSP ...) are too long to be visible on the dashboard. Instead, I want to display those 3 items on top rather than to the right. Where exactly do I have to modify the tags to do that?
[1]: /storage/temp/218663-dashboard.png
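If this is a Simple XML chart, the legend position is a charting option rather than an HTML change; a sketch, assuming the panel is a <chart> element:

<chart>
  ...
  <option name="charting.legend.placement">top</option>
</chart>

Valid placements include right (the default), left, top, bottom, and none.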
↧
Multiselect Tstat Tokens
Hi
I am trying to apply a multiselect to a token.
For example, I can change the value of MXTIMING.NPID to the PID 123 and it works; that is one value.
What I want is to make this a multiselect token, so I can select 123 and 345 and so on.
I have tried to add in a prefix of OR, but no good.
Initial query:
| tstats summariesonly=$summariesonly_token$ avg(MXTIMING.Elapsed) AS average FROM datamodel=MXTIMING_TEST WHERE
host=$host_token$
AND MXTIMING.source_path = *$source_path_search_token$
AND MXTIMING.UserName2=$MXTIMING_UserName_token$
AND MXTIMING.NPID=*$MXTIMING_NPID_token$*
AND MXTIMING.MXTIMING_TYPE_DM=$MXTIMING_TYPE_TOKEN$
AND MXTIMING.Context+Command = *$MXTIMING_Context_token$#*
AND MXTIMING.Context+Command = *#$MXTIMING_Command_token$*
AND MXTIMING.Time = *
GROUPBY MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Time
I tried to add in a way to use OR, but I can't seem to find one; to me this would be the best approach:
| tstats summariesonly=$summariesonly_token$ avg(MXTIMING.Elapsed) AS average FROM datamodel=MXTIMING_TEST WHERE
host=$host_token$
AND MXTIMING.source_path = *$source_path_search_token$
AND MXTIMING.UserName2=$MXTIMING_UserName_token$
AND MXTIMING.NPID="1123" OR "11232"
AND MXTIMING.MXTIMING_TYPE_DM=$MXTIMING_TYPE_TOKEN$
AND MXTIMING.Context+Command = *$MXTIMING_Context_token$#*
AND MXTIMING.Context+Command = *#$MXTIMING_Command_token$*
AND MXTIMING.Time = *
GROUPBY MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Time
In the end I would have to change the token to equal the full string, repeating itself, but this is long, and if I want to use this token again I will have to strip out the token value prefix MXTIMING.NPID:
| tstats summariesonly=$summariesonly_token$ avg(MXTIMING.Elapsed) AS average FROM datamodel=MXTIMING_TEST WHERE
host=$host_token$
AND MXTIMING.source_path = *$source_path_search_token$
AND MXTIMING.UserName2=$MXTIMING_UserName_token$
MXTIMING.NPID=10025 OR MXTIMING.NPID=10784 OR MXTIMING.NPID=11858 OR MXTIMING.NPID=12170
AND MXTIMING.MXTIMING_TYPE_DM=$MXTIMING_TYPE_TOKEN$
AND MXTIMING.Context+Command = *$MXTIMING_Context_token$#*
AND MXTIMING.Context+Command = *#$MXTIMING_Command_token$*
AND MXTIMING.Time = *
GROUPBY MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Time
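For reference, a sketch of the multiselect approach that avoids both problems: Simple XML multiselect inputs support valuePrefix and delimiter, so the token itself expands to MXTIMING.NPID=... OR MXTIMING.NPID=... without repeating the field name inside the query (the choice values below are examples):

<input type="multiselect" token="MXTIMING_NPID_token">
  <label>NPID</label>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>MXTIMING.NPID=</valuePrefix>
  <delimiter> OR </delimiter>
  <choice value="10025">10025</choice>
  <choice value="10784">10784</choice>
</input>

The tstats search then uses AND $MXTIMING_NPID_token$ in place of the single-value comparison, and the prefix/suffix wrap the expansion in parentheses.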
↧
Color in tables, is this a bug?
I have started to use **color** in my table and found some annoying behavior.
In a dashboard, click edit, and at the top of the column, select the pencil to edit the color.
Here you have two options, **Automatic** and **Define rules**.
I do not like **Define rules** for my table, since I do not know the field values.
Setting them manually will certainly work, but if I forget some value, it will be blank.
So I select **Automatic**.
I have a view of max 20 hits per page.
On the first page, my hit **0B00** becomes blue, and others red, green, etc.
But when I select the next page, it changes color to orange or some other color, or stays blue.
This changes from page to page as I step through pages 1, 2, 3 ... 10, etc.
So how do I use **Automatic** and get the same color for the same field value on all pages?
Is this a bug, or is this the way it works?
↧
↧
index time field extraction for XML data?
We have a use case where index-time extraction for XML data makes a lot of sense, yet I do not see an easy way to make it happen. I see that common formats like CSV and JSON are well supported, but nothing for XML. Any ideas?
I see some creative workarounds, but would prefer something more standard.
The XML events are very large, so search-time xmlkv is very slow. We have the indexer resources to support index-time extraction.
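In the absence of an INDEXED_EXTRACTIONS mode for XML, one hedged option is regex-based index-time transforms for the handful of elements actually searched on (stanza and element names below are hypothetical):

# transforms.conf
[extract_order_id]
REGEX = <orderId>([^<]+)</orderId>
FORMAT = orderId::$1
WRITE_META = true

# props.conf
[your_xml_sourcetype]
TRANSFORMS-xmlfields = extract_order_id

This is not full structured extraction like the CSV/JSON modes, but it avoids running xmlkv over very large events at search time.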
Thanks!
↧
Can't drill down on a specific column in a TimeLine chart
Hi,
In the TimeLine app, I am not able to drill down with `$row.ColumnName$` except on the 1st and 2nd columns.
| table _time ScriptName FullCommand Duration UniqueIdentifier $row.ScriptName$ $row.FullCommand$
It works for `$row.ScriptName$` but not for `$row.FullCommand$`. If I replace ScriptName with FullCommand, then it works for `$row.FullCommand$` but not for `$row.ScriptName$`; that means it only works for the 2nd column's values, not the 3rd.
thanks
↧
I loaded Oracle add-ons to monitor Oracle logs and database internals, and I'm getting "cannot communicate with task server". Help?
I have tried the fixes I found online. Firewall port 9998 is open inbound and outbound. I'm trying to load ojdbc7.jar into the drivers, but the product just sits and spins on something. I imagine it is because of this error.
I went to Manage Apps, clicked on Configuration, and filled out the path to the Java I loaded on the main server. When I click on these tabs I get a momentary (10 second) error: "DBX Server is not available; check if the server started and port 9998." I don't know how to check if the DBX server is running. I suspect it is not.
I also see the following in Messages: Unable to initialize modular input "server" defined inside the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 1).
Help.
↧