One indexer + one SH, on the Monitoring Console.
After configuring the Monitoring Console to distributed mode and applying the change, the SH server shows as both an indexer and a SH.
Is this expected?
↧
Distributed system - one indexer + one SH: on the Monitoring Console, why does the SH server show as both indexer and SH?
↧
REST API without curl
Is there a way I can make REST API calls to Splunk to run a search and return data as JSON via a web service rather than using curl?
Basically, I need the HTTP URL equivalent of the command below, which would work when invoked via JavaScript or when put into a browser:
curl -u usr:psd -k https://xx.xx.xx.xx:xxxxx/services/search/jobs/export -d search="search index=xxx earliest=-15m latest=now \"xyz123\" | table c1, c2" -d output_mode=json
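For reference, the export endpoint also accepts the search and output mode as URL-encoded query parameters on a GET, so the curl command above can be approximated by a plain URL (a sketch using the same placeholders; embedding basic-auth credentials in the URL works in some browsers but not all, and from JavaScript the usual route is a session key sent in an Authorization header):

```
https://usr:psd@xx.xx.xx.xx:xxxxx/services/search/jobs/export?output_mode=json&search=search%20index%3Dxxx%20earliest%3D-15m%20latest%3Dnow%20%22xyz123%22%20%7C%20table%20c1%2C%20c2
```

The key part is URL-encoding the whole SPL string (spaces → %20, = → %3D, " → %22, | → %7C).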
↧
↧
I need help accelerating a dataset ... but it says it has streaming commands
I am trying to accelerate a dataset I created, and it tells me I can't because it has streaming commands.
I'm not sure if there is some better way to accelerate this dataset so it's faster for general searches.
Here is the query that builds the dataset:
index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
| eval unassigned="FALSE"
| eval enriched="FALSE"
| eval correlated="FALSE"
| search reporting_results=*
| rex field=reporting_results "NODE:\s+(?\S+)\s+"
| rex field=_raw "SERVER_SERIAL\:\s+(?\d+)"
| rex field=_raw "REPORTING RESULTS: ENRICHED WITH PARENT CIRCUIT ID FROM PLUCK:\s+(?<parentCircuitId>\S+\s+\S+\s+\S+)\s+"
| rex field=_raw "REPORTING RESULTS: ENRICHED WITH CIRCUIT ID FROM RESOLVE MSS DATA FOR NODE:.*CIRCUIT ID:\s+(?.*)\s+RATE\s+"
| rex field=_raw "REPORTING RESULTS: (?<testfield>\S+)\s+"
| eval enriched=if(in("ENRICHED", testfield), "TRUE", enriched)
| eval unassigned=if(like(reporting_results,"%UNASSIGNED%"), "TRUE", "FALSE")
| eval correlated=if(in("CORRELATED", testfield), "TRUE", correlated)
| transaction netcool_serial maxevents=7 keeporphans=1 keepevicted=1 mvlist=(enriched, correlated, unassigned)
| eval unassigned=if(in("TRUE", unassigned), "TRUE", "FALSE")
| eval enriched=if(((in("TRUE", enriched) OR (len(parentCircuitId)>=0)) AND (unassigned="FALSE")), "TRUE", "FALSE")
| eval correlated=if(in("TRUE", correlated), "TRUE", "FALSE")
| eval parentfound=if(len(parentCircuitId)>=0, "TRUE", "FALSE")
Any suggestions?
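For what it's worth, dataset acceleration only allows streaming commands in the defining search, and `transaction` is not one of them. One common workaround (a sketch, not a drop-in fix) is to run the full pipeline on a schedule and `collect` the results into a summary index, then define the dataset over the summary index instead; the index name `netcool_summary` here is an assumption:

```
index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
| ... (the full eval/rex/transaction pipeline above) ...
| collect index=netcool_summary
```

The summary index must exist before `collect` will write to it, and the scheduled search's time window should tile (not overlap) to avoid double-counting.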
↧
What does Splunk search log "Process delayed... perhaps system was suspended?" mean?
Distributed system - one SH + one indexer:
During a regular search:
09-07-2018 11:24:50.109 INFO PipelineComponent - Process delayed by 105.740 seconds, perhaps system was suspended?
What does this mean?
↧
Hi All, how do I put multiple charts inside of a single panel?
I need to have multiple charts inside one single panel:
pie chart
line chart
single value
Thanks
Pari
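In Simple XML, a single `<panel>` can stack several visualization elements, so something like the following sketch may be what you're after (the searches are placeholders — swap in your own):

```xml
<row>
  <panel>
    <title>Combined panel</title>
    <chart>
      <search><query>index=_internal | stats count by sourcetype</query></search>
      <option name="charting.chart">pie</option>
    </chart>
    <chart>
      <search><query>index=_internal | timechart count</query></search>
      <option name="charting.chart">line</option>
    </chart>
    <single>
      <search><query>index=_internal | stats count</query></search>
    </single>
  </panel>
</row>
```

The elements render top to bottom inside the panel.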
↧
↧
How do you count the difference in an ongoing count for a given time period?
I have a JMX search going on which tracks orders placed every 30 seconds.
index=dot_jmx mbean_property_destinationName=RTGOrderProcessed | stats values(Messages_Enqueue) AS Orders by jvmDescription, mbean_property_destinationName
The search produces an output like this:
Server Name Orders
SRV1 Processed 11238
SRV1 Processed 11239
SRV1 Processed 11240
SRV1 Processed 11241
SRV1 Processed 11242
SRV1 Processed 11243
SRV1 Processed 11244
SRV1 Processed 11246
SRV1 Processed 11247
SRV1 Processed 11248
SRV1 Processed 11249
SRV1 Processed 11250
This goes in a dashboard with a picker, and for the time period provided by the picker, there should be a way to output a visualization which counts the difference between the earliest count number and the latest count number — the goal is to display the total new orders for the time period requested. (So - displays the count for the last 15 minutes or the last week.)
The guy before me had it like this:
index=dot_jmx destinationName=RTGOrderProcessed | stats values(Messages_Enqueue) AS Orders by jvmDescription,destinationName| delta Orders as Orders p=1 | search Orders < 100| stats sum(Orders) As "Total Process Orders Today"
...but for some reason, when I added the time picker, it stopped working. The time picker is now a requirement, and the guy is gone.
Any suggestions?
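If the intent is simply latest-count-minus-earliest-count within whatever window the picker supplies, `range()` (max minus min) may be simpler than `delta` and respects the dashboard's time range automatically. A sketch built on the field names above, assuming the counter only ever increases within the window:

```
index=dot_jmx mbean_property_destinationName=RTGOrderProcessed
| stats range(Messages_Enqueue) AS "Total Processed Orders" by jvmDescription, mbean_property_destinationName
```

Note this sketch ignores counter resets (e.g. a JVM restart), which the original `| search Orders < 100` filter may have been guarding against.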
↧
Is there a configuration where I can set the DATETIME_FORMAT for Universal Forwarder?
I am using a Universal Forwarder to send data (log files) to Splunk.
My log files contain a timestamp at the beginning of each row. For example:
(07/09/2018 12:55:40) ;Info;........
The date/time should be interpreted as 7 September 2018 12:55:40 (dd/MM/yyyy ...).
Splunk indexes the row as 9 July 2018 12:55:40 (MM/dd/yyyy ...).
Is there a configuration where I can set the DATETIME_FORMAT for the Universal Forwarder (it looks to me like props.conf is not used by the UF)?
Thanks
Kind Regards
Gianluca
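For unstructured data, timestamp parsing happens at the indexer (or a heavy forwarder), not on the UF, so the usual fix is a props.conf entry there rather than on the forwarder. A sketch for the sample row above (the sourcetype name `mylog` is an assumption):

```
# props.conf on the indexer / heavy forwarder
[mylog]
TIME_PREFIX = ^\(
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

`TIME_PREFIX` anchors on the opening parenthesis, and `%d/%m/%Y` forces day-first parsing.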
↧
coldToFrozenDir
I am trying to create a coldToFrozenDir on one of my network indexes. I am trying to do this in the development environment before going to the production environment.
[volume:frozen]
path = /opt/frozen/network
[network]
homePath = volume:primary/network/db
coldPath = volume:primary/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
coldToFrozenDir = volume:frozen/network/frozendb
maxTotalDataSizeMB = 55000
frozenTimePeriodInSecs = 7776000
But I am getting the following error:
Problem parsing indexes.conf: Cannot load IndexConfig: Cannot create index 'network': path of coldToFrozenDir must be absolute ('volume:frozen/network/frozendb')
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
I have even tried specifying the path, with no luck:
coldToFrozenDir = /opt/frozen/network
I am getting the same error. It's on the same file system.
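One likely culprit: unlike homePath/coldPath, coldToFrozenDir does not understand volume: references and needs a literal absolute path, so the stanza would look like the sketch below. If the error persists even with the absolute path, `splunk btool indexes list network --debug` can show whether another copy of the old setting still wins:

```
[network]
homePath = volume:primary/network/db
coldPath = volume:primary/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
coldToFrozenDir = /opt/frozen/network/frozendb
maxTotalDataSizeMB = 55000
frozenTimePeriodInSecs = 7776000
```

The target directory also has to exist and be writable by the splunkd user.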
↧
In trying to create coldToFrozenDir, has anyone seen the following "load IndexConfig" error?
I am trying to create a coldToFrozenDir on one of my network indexes. I am trying to do this in the development environment before going to the production environment.
[volume:frozen]
path = /opt/frozen/network
[network]
homePath = volume:primary/network/db
coldPath = volume:primary/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
coldToFrozenDir = volume:frozen/network/frozendb
maxTotalDataSizeMB = 55000
frozenTimePeriodInSecs = 7776000
But I am getting the following error:
Problem parsing indexes.conf: Cannot load IndexConfig: Cannot create index 'network': path of coldToFrozenDir must be absolute ('volume:frozen/network/frozendb')
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
I've even tried specifying the path, with no luck:
coldToFrozenDir = /opt/frozen/network
I am getting the same error. It's on the same file system.
↧
↧
[Transforms] nullQueue not working
Hello,
I want to discard events that contain the string "**Content**". The following doesn't work, because I still see events with **Content** after I restarted and re-indexed:
***transforms.conf***
`[allNullQueue]
REGEX = Content
DEST_KEY = queue
FORMAT = nullQueue`
***props.conf***
`[mysrctype]
TRANSFORMS-setnull = allNullQueue`
I tried this in a standalone environment, on versions `7.0.3` and `7.1.2`.
I can't figure out where the problem is.
Any clue?
Thanks
↧
How do I allow multiple users to log in with the same shared account?
We have a requirement to allow approximately 50 users to log in to Splunk simultaneously using the same shared account. The reason we want to do this is that these users do not have access, and we want them to do a bit of hands-on work during a live demo.
Does Splunk support this feature or do we need to look for an alternative solution?
↧
Editing the monitoring console in a distributed system: why does the SH server show as both indexer and SH?
One indexer + one SH, on the Monitoring Console.
After configuring the Monitoring Console to distributed mode and applying the change, the SH server shows as both an indexer and a SH.
Is this expected?
↧
What does Splunk search log "Process delayed... perhaps system was suspended" mean?
Distributed system - one SH + one indexer:
During a regular search:
09-07-2018 11:24:50.109 INFO PipelineComponent - Process delayed by 105.740 seconds, perhaps system was suspended?
What does this mean?
↧
↧
How do I run a query for a user's Internet activity and create a table by date and url?
I need to run a query for a user's Internet activity. I would like to create a table/report for the output that's limited to date and URL. Thank you.
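As a starting point, something along these lines may do it — a sketch assuming a web-proxy sourcetype with `user` and `url` fields (the index name, username, and field names are assumptions; adjust them to your data):

```
index=proxy user="jdoe"
| eval date=strftime(_time, "%Y-%m-%d")
| stats count by date, url
```

`stats count by date, url` yields one row per date/URL pair; drop `count` from display with `| fields date, url` if only the pairs are wanted.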
↧
How to keep stats fields while alerting on a single field created with eval
index=security sourcetype="O365" Status!=Failed | iplocation FromIP | rex field=SenderAddress "(?[\w\d]+(?=\@email\.organization))" | stats latest(DateReceived), count by SenderAddress, FromIP, Subject, AD_Account, Country | where count>30 | appendpipe [stats sum(count) as count by AD_Account | eval FromIP="--------------------------------------------------------------------"] | sort -AD_Account
With this alert, I am generating stats for all of the fields in the search and creating a new field to total the number of events per account. What I am hoping to do is alert based solely on the "Total" count, which currently populates as a new, separate line per AD account (the FromIP eval creates a visual break to make it easier to separate the different accounts), while ALSO retaining all of the individual stats that exist above this newly created line.
Any suggestions are highly appreciated!
↧
Timestamp extraction for varying subseconds
How do I extract a timestamp from a message having
event1: Timestamp:2018-09-06T00:00:11.214000000, Timezone:UTC
event2: Timestamp:2018-09-06T00:00:11.214, Timezone:CST
where subseconds can be milliseconds or nanoseconds, which vary, and the time zone can be any string like UTC, CST, etc.?
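A possible starting point is a props.conf timestamp spec anchored on the Timestamp: prefix; %N covers the subsecond digits and %Z the timezone token. A sketch (the sourcetype name is an assumption, and %N/%Z behavior with varying digit counts should be verified against your actual data):

```
[mylog]
TIME_PREFIX = Timestamp:
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N, Timezone:%Z
MAX_TIMESTAMP_LOOKAHEAD = 50
```

Note that Splunk stores sub-second precision only down to microseconds even when nanoseconds are present in the raw event.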
↧
How do I keep stat fields while alerting on a single field created with eval?
Here is an example of the alert I'm trying to generate:
index=security sourcetype="O365" Status!=Failed | iplocation FromIP | rex field=SenderAddress "(?[\w\d]+(?=\@email\.organization))" | stats latest(DateReceived), count by SenderAddress, FromIP, Subject, AD_Account, Country | where count>30 | appendpipe [stats sum(count) as count by AD_Account | eval FromIP="--------------------------------------------------------------------"] | sort -AD_Account
With this alert, I am generating stats for all of the fields in the search, and creating a new field in order to total the amount of events per account. What I am hoping to do is alert based solely on the "Total" count which currently populates as a new separate line. That line is based on the AD Account (The FromIP eval is to create a visual break to make it easier to separate the different accounts). At the same time, I ALSO am retaining all of the individual stats that exist above this newly created line.
Any suggestions are highly appreciated!
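One way to keep every per-row stat and still have a single number to alert on is `eventstats`, which writes the per-account total into a new column on every existing row instead of appending a separate total row — the alert condition can then test that column directly. A sketch built on the search above (the SenderAddress rex is omitted for brevity):

```
index=security sourcetype="O365" Status!=Failed
| iplocation FromIP
| stats latest(DateReceived), count by SenderAddress, FromIP, Subject, AD_Account, Country
| where count>30
| eventstats sum(count) AS Total by AD_Account
| sort -AD_Account
```

The alert trigger can then be a custom condition such as `search Total > 100`, while the table output still shows all of the individual rows.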
↧
↧
Timestamp extraction for varying subseconds and time zones?
How do you extract a timestamp from a message having
event1: Timestamp:2018-09-06T00:00:11.214000000, Timezone:UTC
event2: Timestamp:2018-09-06T00:00:11.214, Timezone:CST
where subseconds can be milliseconds or nanoseconds, which vary, and the time zone can be any string like UTC, CST, etc.?
↧
Minecraft User Activity Monitoring: Are there plans to update this app for newer Minecraft versions?
Are there any plans to update this app for use with Minecraft 1.12 and above? It seems that things have changed so that Minecraft now outputs stats and other info in JSON files, and I'm not seeing any data in the app.
Some examples are as follows:
**logs/latest.log**
[00:45:56] [Server thread/INFO]: lost connection: Timed out
[00:45:56] [Server thread/INFO]: left the game
[00:51:33] [Server thread/INFO]: com.mojang.authlib.GameProfile@00000000[id=,name=,properties={},legacy=false] (/:) lost connection: Disconnected
[00:51:46] [User Authenticator #54/INFO]: UUID of player is 0a0a00aa-0000-0000-00aa-aaa0a000a100
[00:51:46] [Server thread/INFO]: [/:] logged in with entity id 0000000 at (000.0000000000000, 00.0, -0000.0000000000000)
**world/advancements/0a0a00aa-0000-0000-00aa-aaa0a000a100.json**
{
"minecraft:recipes/tools/fishing_rod": {
"criteria": {
"has_string": "2018-09-03 22:11:41 +0000"
},
"done": true
},
"minecraft:recipes/redstone/lever": {
"criteria": {
"has_cobblestone": "2018-09-03 21:32:58 +0000"
},
"done": true
},
"minecraft:recipes/building_blocks/magenta_concrete_powder": {
"criteria": {
"has_sand": "2018-09-03 23:07:03 +0000"
},
"done": true
},
"minecraft:recipes/combat/iron_sword": {
"criteria": {
"has_iron_ingot": "2018-09-03 22:21:14 +0000"
},
"done": true
}
**world/stats/0a0a00aa-0000-0000-00aa-aaa0a000a100.json**
{
"stats":{
"minecraft:picked_up":{
"minecraft:iron_chestplate":2,
"minecraft:azure_bluet":1,
"minecraft:wheat":61,
"minecraft:chicken":31,
"minecraft:oak_stairs":10,
"minecraft:stone_brick_stairs":1,
"minecraft:shield":1,
"minecraft:heavy_weighted_pressure_plate":1,
"minecraft:string":4,
"minecraft:red_bed":1,
"minecraft:iron_axe":5,
"minecraft:iron_leggings":4,
"minecraft:iron_shovel":2,
"minecraft:gold_nugget":2,
"minecraft:spruce_planks":45,
"minecraft:beef":36,
"minecraft:cobblestone_slab":1,
"minecraft:poppy":5,
"minecraft:iron_pickaxe":1,
"minecraft:sugar_cane":86,
"minecraft:chest":4,
"minecraft:iron_boots":2,
"minecraft:egg":46,
"minecraft:sandstone":2,
"minecraft:stick":168,
"minecraft:oak_door":3,
"minecraft:oak_fence":1,
"minecraft:torch":73,
"minecraft:pink_carpet":5,
"minecraft:porkchop":38,
"minecraft:cookie":2,
"minecraft:diorite":2,
"minecraft:redstone":50,
"minecraft:stone_sword":1,
"minecraft:rotten_flesh":18,
"minecraft:arrow":68,
"minecraft:stone_shovel":2,
"minecraft:diamond_boots":1,
"minecraft:blaze_rod":1,
"minecraft:mutton":9,
"minecraft:iron_bars":1,
"minecraft:cod":1,
"minecraft:rail":2,
"minecraft:cobblestone":607,
"minecraft:oxeye_daisy":6,
"minecraft:feather":31,
"minecraft:lilac":1,
"minecraft:andesite":22,
"minecraft:leather":23,
"minecraft:filled_map":1,
"minecraft:oak_log":257,
"minecraft:gunpowder":2,
"minecraft:cobblestone_stairs":11,
"minecraft:lapis_lazuli":41,
"minecraft:dirt":69,
"minecraft:name_tag":1,
"minecraft:red_carpet":5,
"minecraft:bone":12,
"minecraft:apple":2,
"minecraft:stone_bricks":12,
"minecraft:kelp":5,
"minecraft:bow":5,
"minecraft:wheat_seeds":144,
"minecraft:cooked_chicken":16,
"minecraft:stone_pickaxe":3,
"minecraft:iron_ingot":159,
"minecraft:oak_fence_gate":4,
"minecraft:iron_ore":138,
"minecraft:glowstone_dust":2,
"minecraft:iron_sword":9,
"minecraft:oak_planks":65,
"minecraft:oak_slab":1,
"minecraft:cooked_porkchop":58,
"minecraft:stone_brick_slab":2,
"minecraft:quartz":20,
"minecraft:dandelion":6,
"minecraft:gravel":1,
"minecraft:sand":144,
"minecraft:ink_sac":4,
"minecraft:spruce_log":16,
"minecraft:gold_ore":9,
"minecraft:leather_helmet":1,
"minecraft:yellow_carpet":14,
"minecraft:diamond":12,
"minecraft:stone_axe":1,
"minecraft:lever":4,
"minecraft:light_blue_carpet":8,
"minecraft:iron_helmet":2,
"minecraft:granite":11,
"minecraft:painting":278,
"minecraft:bread":11,
"minecraft:oak_sapling":10,
"minecraft:netherrack":96,
"minecraft:coal":632,
"minecraft:diamond_pickaxe":3,
"minecraft:sunflower":4
},
"minecraft:broken":{
"minecraft:stone_axe":1,
"minecraft:iron_sword":1,
"minecraft:iron_pickaxe":4
},
Thanks
↧
Collect command does not work for search head/indexer cluster (error: "Received event for unconfigured/disabled/deleted index")
Hi!
1. I have an indexer on server_A and a search head on server_B
2. There is an index=test_ind on server_A
3. I run a search on server_B (search head) to collect some data to test_ind
but
1) I get this error:
Received event for unconfigured/disabled/deleted index=test_ind with source="source::/opt/splunk/var/spool/splunk/3120f8647b3740cb_events.stash_new"
host="host::server_B"
sourcetype="sourcetype::stash".
So far received events from 11 missing index(es).
2) and no data is collected to test_ind.
Note: When I run the collect command from the same Splunk instance where test_ind is located, everything is fine; the data is collected.
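For context: `collect` writes its stash file into the spool directory of the instance running the search — here, server_B — so server_B itself must know about test_ind. Either define the index locally on the search head, or configure the search head to forward its spool/internal data to the indexer where test_ind lives. A minimal sketch of the first option (paths shown are the defaults):

```
# indexes.conf on server_B (the search head)
[test_ind]
homePath = $SPLUNK_DB/test_ind/db
coldPath = $SPLUNK_DB/test_ind/colddb
thawedPath = $SPLUNK_DB/test_ind/thaweddb
```

With this in place the collected events are indexed on the search head, which may or may not be what you want; the forwarding route keeps all data on the indexer tier.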
↧