Hello,
I would like to change the dashboard panel title font size using XML, not CSS.
I found the following in one of the posts:
But when I insert it, it still changes the title font of ALL the panels in my dashboard.
How would I change the title font for one particular panel only?
Kind Regards,
Kamil
↧
How do I change the panel title font size using XML instead of CSS?
↧
How do you update the checksum for a changed system file for the InstalledFilesHashChecker?
http://docs.splunk.com/Documentation/Splunk/7.2.1/SearchReference/Iplocation describes how to obtain updated IP location data. I have set up a process to update /opt/splunk/share/GeoLite2-City.mmdb with the latest data every month.
But then on a restart, we get messages complaining about this change:
11-21-2018 06:07:40.843 +0000 WARN InstalledFilesHashChecker - An installed file="/opt/splunk/share/GeoLite2-City.mmdb" did not pass hash-checking due to reason="content mismatch"
I tried updating the checksum in splunk-6.5.3-36937ad027d4-linux-2.6-x86_64-manifest to match the new file - but to no avail. How do I let Splunk know that the new copy of GeoLite2-City.mmdb is OK?
Ours is a Search Head and Indexer cluster, Enterprise edition 6.5.3.
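For reference, this is how I've been computing the digest before hand-editing the manifest entry (the real target is /opt/splunk/share/GeoLite2-City.mmdb; a temp file stands in for it here, and whether the manifest actually expects SHA-256 is an assumption on my part):

```shell
# Sketch: compute a sha256 digest of the updated file, the way I'd do it
# before editing a manifest line by hand. The temp file is a stand-in for
# /opt/splunk/share/GeoLite2-City.mmdb; sha256 itself is an assumption.
f=$(mktemp)
printf 'stand-in for GeoLite2-City.mmdb' > "$f"
digest=$(sha256sum "$f" | awk '{print $1}')
echo "$digest"
rm -f "$f"
```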
↧
Will this app work with Hyper-V 2016 and with Splunk 7.2?
I am trying to use this app to collect data from Hyper-V 2016 deployments. I have the following queries:
1. Will this work with Hyper-V 2016?
2. Will this work with Splunk Enterprise 7.2?
3. Are there any ready-made dashboards and visualizations available?
Thank you,
↧
Unable to get logs from Azure Storage blob in Splunk?
I have installed the Splunk add-on for Azure and also configured the storage account.
After that, I configured a blob input, specifying the interval for pulling data from Blob storage. But I am still unable to get the blob data indexed in Splunk. I also created a separate index for it, named "Azure". I have troubleshot as much as I know how: I checked whether the listening port is open and confirmed that the forwarder connection to the indexer is active, but I still cannot get my logs.
Please help me troubleshoot, or correct me if I am wrong or missing something somewhere.
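For completeness, this is the search I've been running to confirm whether anything reached the new index at all (time range widened on purpose):

```
index="Azure" earliest=-7d | stats count by sourcetype, source
```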
Thanks.
↧
How to remove characters in a field value?
I have the below entries in my logs, and I want to remove the ' characters from the beginning and end of the field values.
valid_from='May 25 13:46:01 2017 GMT ',valid_to='May 25 13:46:01 2019 GMT'
Also, how do I get the difference in days, **valid_to - valid_from**?
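To illustrate, something along these lines is what I'm hoping for (the strptime format string is my guess from the sample values above):

```
... | eval valid_from=trim(valid_from, "' "), valid_to=trim(valid_to, "' ")
| eval diff_days=round((strptime(valid_to, "%b %d %H:%M:%S %Y %Z") - strptime(valid_from, "%b %d %H:%M:%S %Y %Z")) / 86400)
```

The trim also strips the stray space inside the quotes in the valid_from sample.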
↧
Create data table conditionally
My logs contain lines like these:
Export of US successfully transferred to FR
Import successfully ended on US from export of FR with exit code 0
Export successfully ended on SP with exit code 0
This means that:
* the file created on the FR server was copied over: OK
* the DATA from FR was added to US: OK
Based on logs like the above, I need to create the table below.
++++ Exports ++++
       GE     SP     FR     UK
GE     Blank  OK     OK     KO
SP     OK     Blank  OK     KO
FR     OK     OK     Blank  OK
UK     OK     OK     OK     Blank
Is it possible to create a table like this? "Exports" is the table title, and the GE/SP/FR/UK labels are its rows and columns.
I have tried a lot but am not succeeding. Any help will be appreciated.
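Roughly, what I've been attempting is to extract the source and destination from each line and pivot them into a matrix (the regex is my own guess at the exact log wording):

```
... | rex "Export of (?<src>\w+) successfully transferred to (?<dst>\w+)"
| eval status="OK"
| chart values(status) over src by dst
| fillnull value="KO"
```

The diagonal (src = dst) never occurs in the logs, so those cells would need separate handling to show as Blank rather than KO.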
↧
How do I set choice value if I need fields where value is greater than zero?
I tried the below (the dropdown offers the choices Non-Zero, Zero, and All):
sourcetype=callrecords
duration="$duration.input$"
| table field1, field2, field3
I cannot get this working when I choose "Non-Zero."
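For context, this is the shape of input I've been considering, with each choice carrying a whole comparison as its value (the exact condition strings are my guesses at what would work):

```
<input type="dropdown" token="duration">
  <label>Duration</label>
  <choice value="duration&gt;0">Non-Zero</choice>
  <choice value="duration=0">Zero</choice>
  <choice value="duration=*">All</choice>
</input>
```

The search would then reference the token bare, e.g. `sourcetype=callrecords $duration$`, instead of quoting it as `duration="$duration.input$"`.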
Thanks a lot.
↧
Turn on Monitoring Console Distributed Mode via CLI or REST
I'm trying to automate the build of my Monitoring Console instance.
In the documentation http://docs.splunk.com/Documentation/Splunk/7.2.1/DMC/Deploymentsetupsteps it says that I should:
1. first add the instances being monitored as search peers (I've found a CLI command for that)
2. then turn on the distributed mode in the GUI like this: http://docs.splunk.com/Documentation/Splunk/7.2.1/DMC/Configureindistributedmode to populate the splunk_monitoring_console_assets.conf and assets.csv lookup (documented here: docs.splunk.com/Documentation/Splunk/7.2.1/DMC/HowtheDMCworks )
I'm wondering: is there a way to automate the step of enabling Monitoring Console distributed mode?
↧
splunkd is crashing, and I am getting the following error messages in the crash file
Starting splunk server daemon (splunkd)...
Done
[ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available.splunkd 8595 was not running.
Stopping splunk helpers...
[ OK ]
Done.
Stopped helpers.
Removing stale pid file... done.
WARNING: web interface does not seem to be available!
opt/splunk/var/log/splunk$ more crash-2018-11-20-19:21:02.log
[build 586c3ec08cfb] 2018-11-20 19:21:02
Received fatal signal 6 (Aborted).
Cause:
Signal sent by PID 8595 running under UID 31964.
Crashing thread: IdataDO_Collector
Registers:
RIP: [0x00007FD030389495] gsignal + 53 (libc.so.6 + 0x32495)
RDI: [0x0000000000002193]
RSI: [0x00000000000021EC]
RBP: [0x000055D4025F1710]
RSP: [0x00007FD025BFE618]
RAX: [0x0000000000000000]
RBX: [0x00007FD0318F6000]
RCX: [0xFFFFFFFFFFFFFFFF]
RDX: [0x0000000000000006]
R8: [0x0000000000000008]
R9: [0x00007FD031947598]
R10: [0x0000000000000008]
R11: [0x0000000000000206]
R12: [0x000055D4025800B9]
R13: [0x000055D4026BCA80]
R14: [0x000055D402CD0EA0]
R15: [0x00007FD024533720]
EFL: [0x0000000000000206]
TRAPNO: [0x0000000000000000]
ERR: [0x0000000000000000]
CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x00007FD030389495] gsignal + 53 (libc.so.6 + 0x32495)
[0x00007FD03038AC75] abort + 373 (libc.so.6 + 0x33C75)
[0x00007FD03038260E] ? (libc.so.6 + 0x2B60E)
[0x00007FD0303826D0] __assert_perror_fail + 0 (libc.so.6 + 0x2B6D0)
[0x000055D40139783C] ? (splunkd + 0x8CA83C)
[0x000055D40139CA7D] _ZN22IdataCollectorCallback4tickEv + 157 (splunkd + 0x8CFA7D)
[0x000055D40118FE98] _ZN17IdataDO_Collector4mainEv + 136 (splunkd + 0x6C2E98)
[0x000055D401B45F40] _ZN6Thread8callMainEPv + 64 (splunkd + 0x1078F40)
[0x00007FD0306F2AA1] ? (libpthread.so.0 + 0x7AA1)
[0x00007FD03043FBDD] clone + 109 (libc.so.6 + 0xE8BDD)
Linux / cwb02qsplunkidx03.keybank.com / 2.6.32-754.3.5.el6.x86_64 / #1 SMP Thu Aug 9 11:56:22 EDT 2018 / x86_64
Last few lines of stderr (may contain info on assertion failure, but also could be old):
splunkd: /home/build/build-src/ivory/src/pipeline/indexer/IdataDO_Collector.cpp:372: void collect__indexes(): Assertion `! name.empty()' failed.
2018-11-20 18:54:30.652 -0500 splunkd started (build 586c3ec08cfb)
splunkd: /home/build/build-src/ivory/src/pipeline/indexer/IdataDO_Collector.cpp:372: void collect__indexes(): Assertion `! name.empty()' failed.
2018-11-20 19:08:48.364 -0500 splunkd started (build 586c3ec08cfb)
splunkd: /home/build/build-src/ivory/src/pipeline/indexer/IdataDO_Collector.cpp:372: void collect__indexes(): Assertion `! name.empty()' failed.
2018-11-20 19:21:01.445 -0500 splunkd started (build 586c3ec08cfb)
splunkd: /home/build/build-src/ivory/src/pipeline/indexer/IdataDO_Collector.cpp:372: void collect__indexes(): Assertion `! name.empty()' failed.
/etc/redhat-release: Red Hat Enterprise Linux Server release 6.10 (Santiago)
glibc version: 2.12
glibc release: stable
Last errno: 0
Threads running: 40
Runtime: 1.137312s
argv: [splunkd -p 8089 restart]
Regex JIT disabled due to SELinux
Thread: "IdataDO_Collector", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7fd02a414f10:
00000000 00 f7 bf 25 d0 7f 00 00 |...%....|
00000008
x86 CPUID registers:
0: 0000000D 756E6547 6C65746E 49656E69
1: 00050654 0E010800 FEFA3203 0FABFBFF
2: 76036301 00F0B5FF 00000000 00C30000
3: 00000000 00000000 00000000 00000000
4: 00000000 00000000 00000000 00000000
5: 00000000 00000000 00000000 00000000
6: 00000004 00000000 00000000 00000000
7: 00000000 00000000 00000000 00000000
8: 00000000 00000000 00000000 00000000
9: 00000000 00000000 00000000 00000000
A: 07300401 0000007F 00000000 00000000
B: 00000000 00000000 000000CD 0000000E
C: 00000000 00000000 00000000 00000000
D: 00000000 00000000 00000000 00000000
80000000: 80000008 00000000 00000000 00000000
80000001: 00000000 00000000 00000101 2C100800
80000002: 65746E49 2952286C 6F655820 2952286E
80000003: 6C6F4720 31362064 43203034 40205550
80000004: 332E3220 7A484730 00000000 00000000
80000005: 00000000 00000000 00000000 00000000
80000006: 00000000 00000000 01006040 00000000
80000007: 00000000 00000000 00000000 00000100
80000008: 00003028 00000000 00000000 00000000
terminating...
↧
Count of zero and non zero values in a table?
I have a search which generates the table below. The column names are epoch timestamps.
IP 1542682800 1542684600 1542686400 1542688200 1542690000 1542691800 1542693600
10.7.13.1 0 0 0 59 84 51 0
10.7.13.2 0 61 140 103 136 102 0
10.7.14.3 0 0 0 0 0 0 0
10.7.15.4 0 0 22 6 3 0 0
10.7.15.5 60 12 138 84 15 0 0
10.7.34.6 0 0 0 0 0 0 0
10.7.34.7 0 0 0 0 0 0 0
Search is like this :
base search
| bucket span=30m _time
| chart count(people) by IP _time limit=500 | sort _time
I am trying to add two columns holding the count of zero values and of non-zero values for each IP.
So the first row above would have a zero count of 4 and a non-zero count of 3, and so on for each row. Any help with this is appreciated.
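What I've been attempting uses `foreach` to walk the epoch columns (the `15*` wildcard is my assumption that every column name starts with 15, which holds for these timestamps):

```
... | eval zero_count=0, nonzero_count=0
| foreach 15* [ eval zero_count=zero_count + if('<<FIELD>>'=0, 1, 0), nonzero_count=nonzero_count + if('<<FIELD>>'>0, 1, 0) ]
```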
↧
How is colorPalette 'sharedList' defined?
I'm using a dashboard which includes a table, where certain fields are being highlighted. The color format is defined with this SimpleXML:
Is there someplace this is defined, so I could clone & customize it in some way?
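For reference, this is the kind of stanza I mean, reconstructed from memory since the XML was stripped out of my post above (the attribute names are my best recollection):

```
<format type="color" field="status">
  <colorPalette type="sharedList"></colorPalette>
  <scale type="sharedCategory"></scale>
</format>
```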
↧
Why does the sendalert command take longer to execute a script than when the script is executed from Add-on Builder?
The Python script takes less time to execute in Add-on Builder but takes much longer when run from a Splunk search. Could someone tell me why?
↧
Split new line in logs to multivalue during ingestion
I have a custom log with the following preview:
`Message="An account was successfully logged on." Security_ID="NT AUTHORITY\SYSTEM\nNT AUTHORITY\SYSTEM" Account_Domain="xxxxx\nNT AUTHORITY" Logon_Type="5"`
When it's ingested into splunk, the fields extracted are
`Message: An account was successfully logged on.`
`Account_Domain: xxxxx\nNT AUTHORITY`
`Security_ID: NT AUTHORITY\SYSTEM\nNT AUTHORITY\SYSTEM`
`Logon_Type: 5`
As you can see, the `\n` is not being broken down into multivalues.
What should I modify so that the output becomes:
`Message: An account was successfully logged on.`
`Account_Domain: xxxxx
NT AUTHORITY`
`Security_ID: NT AUTHORITY\SYSTEM
NT AUTHORITY\SYSTEM`
`Logon_Type: 5`
I've tried modifying and playing around with `props.conf` and `transforms.conf`, but to no avail.
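One direction I've been experimenting with is a search-time `props.conf` on the search head (the sourcetype name is a placeholder, and whether `split()` needs `"\\n"` or `"\n"` to match the literal backslash-n in the raw data is exactly the part I'm unsure about):

```
[my:custom:sourcetype]
EVAL-Security_ID = split(Security_ID, "\\n")
EVAL-Account_Domain = split(Account_Domain, "\\n")
```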
Appreciate any help!
↧
Why is Splunk sending logs split up?
I am trying to send logs from a Splunk Enterprise instance to an external server (syslog, ELK, etc.), but Splunk is sending the logs split up: for example, the computer name in one log and the EventCode in another. What is the cause of this problem?
![alt text][1]
[1]: /storage/temp/257668-photo6028192028678008182.jpg
↧
Should the splunk admin role be limited to internal indexes?
Hello,
Due to GDPR, should the Splunk administrator user/role be restricted from accessing all indexes?
If so, how can I check that data is being correctly indexed, using the internal indexes (Splunk 7.1.4)?
Thanks in advance.
↧
Compare last two recent events
Hey,
I have different devices sending temperature data to my Splunk instance. For alerting, I want to compare the temperature readings from the last two measurements sent. Ideally I want to do this for all devices at once. So my goal is to create a table like this:
deviceid   last_temp   second_last_temp   difference
xxxxx      25          20                 5
xxxxx2     35          18                 17
Based on the calculated difference, I want to configure my alerting...
The events look like this:
{ "deviceId": "4D3F7A", "time": 1542800341, "data": "9e46544000808f41", "duplicate": false, "categoryId":"5bb366f22c9fbb00da468aee", "temperature" : "17.9375" }
I probably just have a knot in my brain right now and can't get to a solution.
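What I've sketched so far looks roughly like this (note `temperature` arrives as a string in the JSON, hence the `tonumber`; the rest is my attempt, not a verified answer):

```
... | eval temperature=tonumber(temperature)
| sort 0 deviceId -_time
| streamstats count as recency by deviceId
| where recency <= 2
| eval which=if(recency=1, "last_temp", "second_last_temp")
| chart values(temperature) over deviceId by which
| eval difference=last_temp - second_last_temp
```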
Thanks in advance.
Max
↧
Filter consumed event types and/or collect start date?
Hi,
Is it possible to configure this app to collect logs only from a particular start date, as opposed to all historical logs?
Additionally, is there any way to specify which event types I want to collect, as opposed to all of them?
After install, this app overran my Splunk daily volume allowance, disabling search capability, so I am looking for ways to reduce the daily volume generated by this input.
↧
Conditional execution of query in panel based on value in DropDown Box
Hi, could anyone please help?
I have two drop-down boxes, and the two values chosen there, $service_family_tok$ and $enter_feature_tok$, drive the query: they are used in the lookup and search below.
index=_internal sourcetype=FilmWork
| lookup fd_$l_service_family_tok$_$l_enter_feature_tok$_microservice_map
| search feature=$enter_feature_tok$
For example, with $service_family_tok$=EDH and $enter_feature_tok$=STMT, this becomes:
index=_internal sourcetype=FilmWork
| lookup fd_edh_stmt_microservice_map
| search feature=STMT
I have added a new option, "ALL", to each drop-down. When both are set to ALL, only the first part of the query (index=_internal sourcetype=FilmWork) should execute, returning all results; the lookup and search feature parts are not required.
Could anyone assist me with the logic so that when the user chooses ALL and ALL, the lookup and search parts of the query are NOT executed?
In a shell script you could append the lookup/search text after testing that $service_family_tok$ and $enter_feature_tok$ are both not equal to ALL, something like the pseudocode below, but I don't know how to do this in Splunk.
eval ALLToken=if(cidrmatch("ALL",$service_family_tok$)
if(ALLToken )
index=_internal sourcetype=FilmWork
else query
index=_internal sourcetype=FilmWork
| lookup fd_$l_service_family_tok$_$l_enter_feature_tok$_microservice_map
| search feature=$enter_feature_tok$
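One approach I want to try is to have the input's change handler set a token to either an empty string or the whole lookup/search clause (the token and lookup names below mirror my example; whether a token may expand to a whole pipeline segment like this is part of what I'm asking):

```
<input type="dropdown" token="service_family_tok">
  <change>
    <condition value="ALL">
      <set token="lookup_clause"></set>
    </condition>
    <condition>
      <set token="lookup_clause">| lookup fd_$value$_$enter_feature_tok$_microservice_map | search feature=$enter_feature_tok$</set>
    </condition>
  </change>
</input>
```

The panel search would then just be `index=_internal sourcetype=FilmWork $lookup_clause$`.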
The code does not show properly when I paste it here; please ask and I can send it. Thanks.
↧
Parallel reduce search processing: how do I know it is working? Do I have to use "redistribute"?
Hi
I have configured the below
http://docs.splunk.com/Documentation/Splunk/7.2.1/DistSearch/Parallelreduceoverview
Am I right in saying that I have to use the `redistribute` command in my search to make use of this, or is that something extra for high-cardinality searches?
Also, I am not seeing any decrease in search time, so how can I check that it is working?
I have one search head and 2 indexers (non-clustered).
I have set the following on the indexers:
server.conf
[parallelreduce]
pass4SymmKey = $7$qkfkqE35XUbVp9oAqD2M+bBQVTufnczdRnyIcnuQrbXhAV/u+7QyBaXR
limits.conf
[parallelreduce]
reducers=10.25.5.169:5089, 10.25.53.57:5089
I have added both indexers here; I am assuming each indexer also needs to list itself?
My user can run the command
run_multi_phased_searches
http://docs.splunk.com/Documentation/Splunk/7.2.1/DistSearch/Setupparallelreduce
Then I ran the search with redistribute added (if I understand correctly, this is what we are supposed to do!), but the search below does not work.
| tstats summariesonly=true chunk_size=1000000000 max(MXTIMING.Elapsed) AS Elapsed FROM datamodel=MXTIMING_V9 WHERE
host=Luas_TestCampaign_PI9_2
GROUPBY _time MXTIMING.Machine_Name MXTIMING.Context+Command MXTIMING.NPID MXTIMING.Date MXTIMING.Time MXTIMING.MXTIMING_TYPE_DM source MXTIMING.UserName2 MXTIMING.source_path MXTIMING.Command3 MXTIMING.Context3 span=1s | redistribute by _time
The error I am getting is below. I don't understand it; I have tried putting redistribute in multiple places in the search.
Redistribute Processor: Cannot redistribute events that have been aggregated at the search head. Place the redistribute command before transforming commands that do not have a 'by' clause.
http://docs.splunk.com/Documentation/Splunk/7.2.1/SearchReference/Redistribute
Any help would be great, or a pointer to which log I should check.
↧
Health Status : The percentage of small of buckets created (75) over the last hour is very high and exceeded the red thresholds (50) for index=_internal....
I have been getting the following type message for the _internal and other indexes: The percentage of small of buckets created (75) over the last hour is very high and exceeded the red thresholds (50) for index=_internal, and possibly more indexes, on this indexer.
What could be the causes of this, and how do I go about troubleshooting to determine the cause? I have not been able to find anything in the logs yet.
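To see which buckets are actually small, I've been looking at `dbinspect` output along these lines (the field names are as I understand the dbinspect documentation):

```
| dbinspect index=_internal
| eval spanHours=(endEpoch - startEpoch)/3600
| table bucketId state eventCount sizeOnDiskMB spanHours
| sort sizeOnDiskMB
```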
Thanks
↧