Detecting Successful Connections from a User Account
Quick question, guys:
Is there any way to detect whether there were any successful connections using an account called "donotuse", or other well-known/built-in accounts that are not in our AD environment? Rather than seeing a failure and acting on that, it seems that if something were *successful*, that would be a REAL security issue. Any insight or help is GREATLY appreciated.
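Something like the sketch below is what I have in mind. It assumes Windows Security events are indexed in `wineventlog` and the `user`/`src_ip` field names from the Windows add-on (EventCode 4624 is a successful logon); adjust names to your environment:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 (user="donotuse" OR user="guest")
    | stats count, earliest(_time) as first_seen, latest(_time) as last_seen by user, src_ip, host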
↧
DMC Indexer Performance - Queue information is missing
I took over managing two Splunk environments a while back, one Test and one Prod. Looking through the DMC Indexer Performance view, I noticed that the indexer queue information is empty, but all the other panels are fine: indexing rate, pipelines, CPU usage, and so on all show up, yet the queues and fill ratios are empty.
I checked the indexes, and all of the internal splunkd, introspection, metrics, etc. data is being indexed for the indexers and the other Splunk servers, but the queue fill percentage shows as 0% no matter how far back I go.
All other information for indexers, search heads, forwarders, etc. shows up; just these two panels, in both environments, are not populating properly.
Any help is appreciated. Still trying to look through the configuration.
![alt text][1]
[1]: /storage/temp/279727-capture2.png
↧
Bad Dashboard Interaction: Form Input Derived Token and URL Input Arguments
I am having trouble supporting both URL link parameters for a form input/token and "derived" tokens whose values are computed from the value of a form token. As an example, imagine that I have two tokens, **tok** and **derived**. The token **derived** is set when token **tok** is changed, as follows:

    <input type="dropdown" token="tok">
      <choice value="10minute">10 minutes</choice>
      <choice value="1hour">1 hour</choice>
      <choice value="1day">1 day</choice>
      <default>1hour</default>
      <change>
        <condition value="10minute">
          <set token="derived">minspan=10m</set>
        </condition>
      </change>
    </input>
The problem comes when a user links to this dashboard setting **tok** to "10minute" in the URL (with something like `tok=10minute` in the query string). In this case, it seems that the token **derived** is not set at all, causing any dependent dashboard panels to wait for token input.
I can work around this by computing **derived** in a search instead, but it sure is convenient to simply set such a related (derived) token right in the input it depends on. Does this use of a *change* clause in the input pretty much guarantee that bookmarks and other URL links that set token **tok** will be a problem?
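For reference, my search-based workaround looks roughly like this (a sketch; the `minspan` values for the other two choices are guesses on my part):

    <search>
      <query>| makeresults
    | eval span=case("$tok$"=="10minute", "minspan=10m", "$tok$"=="1hour", "minspan=1h", "$tok$"=="1day", "minspan=1d")</query>
      <done>
        <set token="derived">$result.span$</set>
      </done>
    </search>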
↧
Cascade Table View
Hello everyone,
I am trying to put a table view together, with no luck. The view is rather simple in theory, but I cannot render it using SPL. I'd like to display the values of OS **by** ip_address **by** interface **by** host, with each level nested inside the one to its left (most specific on the right). Using `values(<field>) by <field>` won't give me the view I need. Ultimately, I want to show all rows for a field but only one row for the common parent, like the example below. Sort of like a cascade effect. I'd appreciate any help!! Please let me know if I am not being clear enough.
    Host  | interface | ip_address | OS
    Host1 | eth0      | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          | eth1      | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
    Host2 | eth0      | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          | eth1      | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
          |           | 10.110.x.x | linux
          |           |            | windows
          |           |            | OSX
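The closest I've gotten is a sketch that groups the data and then blanks out repeated parent values (index name is a placeholder; OS comes back as a multivalue field per ip_address rather than separate rows):

    index=myindex
    | stats values(OS) as OS by host, interface, ip_address
    | streamstats current=f window=1 last(host) as prev_host, last(interface) as prev_interface
    | eval interface=if(host==prev_host AND interface==prev_interface, "", interface)
    | eval host=if(host==prev_host, "", host)
    | fields - prev_host, prev_interface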
↧
What does splunk-perfmon.exe write to registry keys?
I have an antivirus reporting write attempts from the process splunk-perfmon.exe to the following registry keys:
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IDSVx86\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SepMasterService\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SNAC\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SyDvCtrl\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SymEFASI\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SymEvent\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SYMNETS\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SysPlant\Performance
\REGISTRY\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Teefer2\Performance
I have researched Windows performance monitoring; the documentation indicates splunk-perfmon.exe should need only read access to the registry (and, I assume, permission to open the subkeys), but it says nothing about deleting or modifying (writing) data in these keys:
**https://docs.splunk.com/Documentation/Splunk/8.0.1/Data/MonitorWindowsperformance**
I did find that Splunk recommends antivirus products exclude **splunk-perfmon.exe** from their scanning list, which I am fine with; however, I still need to know what Splunk needs to write to these registry keys:
**https://docs.splunk.com/Documentation/Splunk/7.3.3/ReleaseNotes/RunningSplunkalongsideWindowsantivirusproducts**
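For context, our perfmon collection is driven by inputs.conf stanzas along these lines (the object and counters here are placeholders, not our exact config):

    [perfmon://CPULoad]
    object = Processor
    counters = % Processor Time
    instances = *
    interval = 10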
Thanks,
↧
↧
Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.
Hi,
In one of our indexer clusters, which we query from a search head cluster, only one of the indexers gives this error while running a query. The error I'm getting is:
`Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.`
When going through search.log for that particular indexer, all I can find is:
INFO DistributedSearchResultCollectionManager - Connecting to peer= connectAll 0 connectToSpecificPeer 1
INFO DistributedSearchResultCollectionManager - Successfully created search result collector for peer= in 0.002 seconds
and there aren't any `ERROR` entries in search.log.
I did, however, find some errors in splunkd.log on the same indexer:
ERROR DistBundleRestHandler - Problem untarring file: /opt/splunk/var/run/searchpeers/xxx.bundle
WARN DistBundleRestHandler - There was a problem renaming: /opt/splunk/var/run/searchpeers/xxx.tmp -> /opt/splunk/var/run/searchpeers/xxxx: Directory not empty
I have seen previous answers stating that there might not be enough free space on that particular indexer, but when I checked, 40% of the disk is still free.
I couldn't figure out what the problem was, as there was no `ERROR` in search.log. I'm on Splunk 7.1.3.
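In case it helps, this is roughly the internal search I used to pull those splunkd.log entries from the peers (the component name is taken from the log lines above):

    index=_internal sourcetype=splunkd component=DistBundleRestHandler (log_level=ERROR OR log_level=WARN)
    | table _time, host, _raw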
Thanks in advance.
↧
50G Dev license expired and Splunk locked me out. How do I unlock?
So my DEV license expired, Splunk reverted to the Free license, and my search capability was locked. I went back in later and saw this, so I got another dev license and put it on my dev box, expecting things to start working as normal... but no, searches are still locked. Silly. How do I unlock it now?
↧
Event timestamp behavior is inconsistent when DATETIME_CONFIG = NONE
I want to use a file's modification timestamp as the Splunk timestamp for the events it contains.
Accordingly, I've set `DATETIME_CONFIG = NONE` in props.conf for the sourcetype. This props.conf is distributed to the forwarder and the indexers.
The files are read with a "monitor" input on the forwarder.
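For reference, the props.conf stanza is minimal (the sourcetype name here is a placeholder):

    [my_sourcetype]
    DATETIME_CONFIG = NONE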
I see two different behaviors:
i) If Splunk on the forwarder is *not running* when the file is written, the file's events are timestamped as expected: the event timestamp matches the modify time reported by `stat` on the file.
ii) However, if Splunk *is running* when the file is written, the file's events are timestamped using the *change time* of the input file.
Please advise how I can ensure that a file's events are always timestamped according to the file's modification time, regardless of whether the Splunk forwarder is running at the time the file is written.
↧
Static field value
I have created a dashboard to show windows server uptime.
Now I would like to add the application name for each server. For example, Application A is hosted on Server A and Application B is hosted on Server B; I want to show these applications in the dashboard alongside their respective server names. My current search:
index=Index (host=ServerA OR host=ServerB OR host=ServerC OR host=ServerD OR host=ServerE) | eval Uptime_Days = System_Up_Time/86400 | chart max(Uptime_Days) as "System Uptime in Days" by host | sort - "System Uptime in Days"
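One approach I'm considering is a static mapping with `eval case()`; a sketch (the application names are placeholders, and a lookup would presumably work too):

    index=Index (host=ServerA OR host=ServerB OR host=ServerC OR host=ServerD OR host=ServerE)
    | eval Application=case(host=="ServerA", "Application A", host=="ServerB", "Application B", true(), "Unknown")
    | eval Uptime_Days=System_Up_Time/86400
    | stats max(Uptime_Days) as "System Uptime in Days" by host, Application
    | sort - "System Uptime in Days"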
↧
↧
One Tile per Table Row
Hello,
I have a search that generates **over 50 rows** and **12 columns**. I need to create **a tile for each row**.
I thought about single value and trellis.
However, these have limitations:
1. Can't trellis tables
2. 20 chart/graph limit before pagination
3. Can't sort on a different field-value pair
4. Only 2 field-value pairs per single value panel
As the number of rows is **dynamic**, the number of tiles needs to be able to change (can't hardcode 50 tiles with the device name).
Here are the column names in each row that are required for each tile:
Device Name
Status - Time Latest Event
Parameter 1 - Last 5 mins / Last 60 mins / Last 24 hours
Parameter 2 - Last 5 mins / Last 60 mins / Last 24 hours
Parameter 3 - Last 5 mins / Last 60 mins / Last 24 hours
Example
Server123
Up - Wed Jan 22, 2020 12:00:00
Hits: 200 / 2800 / 55000
Inquiries: 150 / 2400 / 53000
Errors: 6 / 10 / 43
*If possible, I would like to color code the different time intervals.*
I've seen that Splunk ITSI breaks the 20-tile barrier of trellis; however, in the screenshots I've seen, there are only 2 field-value pairs per tile.
We do not have ITSI, so I'm not able to check the code to determine if it could be modified to handle more field-value pairs.
Here are some of my thoughts on how I might be able to accomplish this.
1. Set **tokens** for each row (column value).
2. Use a panel to populate the 12 tokens from that row.
3. Cycle through each row, creating a new tile. Is there a for-next loop construct within SPL/XML? Is it possible to create a new panel during a search using **`<row>`** and **`<panel>`** tags? (See the sketch after this list.)
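For illustration, the trellis starting point I keep coming back to is below; it is only a sketch (index and field names are placeholders) and still hits the 20-tile and 2-value limits listed above:

    <panel>
      <single>
        <search>
          <query>index=myindex | stats latest(status) as status by device_name</query>
        </search>
        <option name="trellis.enabled">1</option>
        <option name="trellis.splitBy">device_name</option>
        <option name="drilldown">none</option>
      </single>
    </panel>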
Your thoughts, ideas, comments, direction appreciated.
Thanks and God bless,
Genesius
↧
Issues with escaped quotes and index-time extractions with regex
OK, so I am trying to pull some fields from the following log file entry:
"127.0.0.1",11/21/2019 8:19:49 PM,11/21/2019 8:19:49 PM,"\CS\Projects\Sample\Development Environment",10429,"Config","Info","7016943","local:{d597da58-6b69-4a9a-b494-0e97e49a43b8}","31C6E90FC53FAAE9B1273378DB1FF34D2338195D","0","0","SIGNING_AUDIT","745","{""Algorithm"":""SHA256"",""CommandLine"":""\""C:\\Program Files\\Microsoft Office\\Root\\Office16\\WINWORD.EXE\"" \/n \""C:\\Users\\tb\\Documents\\Evaluation Guide Supplement.docx"",""Executable"":""C:\\Program Files\\Microsoft Office\\Root\\Office16\\WINWORD.EXE"",""ExecutableHash"":""A5EE905C1E7372904AF2BFD2695337B1214440D0DB89033D26BD070360838905"",""ExecutableSigner"":""CN=Microsoft Corporation, O=Microsoft Corporation, L=Redmond, S=Washington, C=US"",""ExecutableSize"":1951728,""Key"":""31C6E90FC53FAAE9B1273378DB1FF34D2338195D"",""Machine"":""07WKSWIN150536"",""PlaintextBase64"":""DslN3Fo9lTUEJZkwGdYQ1uua+9zkVsji9nZJD3M1qV4="",""PrefixedUniversal"":""local:{d597da58-6b69-4a9a-b494-0e97e49a43b8}"",""WindowsUser"":""ad\\tb""}","CS - Signing Successful","A signing request with key 31C6E90FC53FAAE9B1273378DB1FF34D2338195D from user tb@redacted.com was successfully completed.
Code Signing Audit record:
Key: 31C6E90FC53FAAE9B1273378DB1FF34D2338195D
Artifact: {0E, C9, 4D, DC, 5A, 3D, 95, 35, 04, 25, 99, 30, 19, D6, 10, D6, EB, 9A, FB, DC, E4, 56, C8, E2, F6, 76, 49, 0F, 73, 35, A9, 5E}
Hashing Algorithm: SHA256
Machine: 07WKSWIN150536
Remote Account: tony.hadfield
Authenticated User: tb@redacted.com Command: ""C:\Program Files\Microsoft Office\Root\Office16\WINWORD.EXE"" /n ""C:\Users\tb\Documents\Evaluation Guide Supplement.docx
Application Hash: A5EE905C1E7372904AF2BFD2695337B1214440D0DB89033D26BD070360838905
"
The regex I am using in my transforms.conf works fine on regex101.com:
(?:\"\")(\w+)(?:\"\":)(\"\".*?(?
↧
Splunk HF: send only auditd, syslog, linux_secure to 3rd-party syslog
I am having trouble wrapping my head around how to configure a HF to forward the syslog and auditd sourcetypes to a 3rd-party syslog host as well as to an indexer, without also sending the other sourcetypes to syslog.
I am trying to use a combination of these two docs, but I have not been successful yet:
Route and filter data
Forward data to third-party systems
My configs look like this right now.
props.conf
[syslog]
TRANSFORMS-routing = routeAll, send_to_syslog
[auditd]
TRANSFORMS-routing = routeAll, send_to_syslog
[cpu]
TRANSFORMS-routing = routeAll
[ps]
TRANSFORMS-routing = routeAll
transforms.conf
[routeAll]
REGEX = (.)
DEST_KEY = _TCP_ROUTING
FORMAT = default-autolb-group
[send_to_syslog]
REGEX = (.)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup
outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = Y.Y.Y.Y:9997
[tcpout-server://Y.Y.Y.Y:9997]
[indexAndForward]
index = false
[syslog]
defaultGroup=syslogGroup
[syslog:syslogGroup]
server = X.X.X.X:514
sendCookedData = false
type = tcp
↧
Hot buckets filling up
I have 36 indexers, each with 2.7 GB of space. Currently 29 of the 36 are at capacity and keep entering an abnormal state. How can I get the indexes to roll their data, or free up space, to resolve the alerting?
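For reference, these are the indexes.conf settings I understand to control when data rolls and freezes (the values below are placeholders, not recommendations):

    [my_index]
    maxTotalDataSizeMB = 250000
    frozenTimePeriodInSecs = 7776000
    homePath.maxDataSizeMB = 100000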
↧
How to protect ultra-valuable search SPL "secret sauce" when selling service/product?
I have been asked by a client, who has a very profitable service whose entire value is in a painstakingly crafted and highly complex search inside a dashboard, how he can or should `copy protect` this SPL. I am unsure how to answer him. I am tempted to say `Don't bother; just trust your contract and your client`. The risk is not so much that any single client would take the IP for himself, but rather that somebody would unwittingly share it and release it to the world. I have considered:
0: "Do Nothing".
1: "Close the door": Move the SPL to a `scheduled search` that writes to a lookup, and have the dashboard load the lookup.
2: "Lock the door": Move the SPL (or bits) to one or more `custom search commands`.
3: "Put up a fence": Do the above, but have the python call a compiled binary to do some of the work.
↧
Need peak CPU and memory utilization for multiple VMs
Hi All,
I am very new to Splunk.
My organisation uses Splunk for all infra monitoring. I am trying to get the peak CPU (the highest CPU reading per instance over the last 24 hours) for all my Azure VMs (a Windows and Linux combo).
I am able to get the average using the queries below, but I need the peak. Can you please help?
host=AZR* index="perfmon" source="Perfmon:CPU" counter="% Processor Time" | stats avg(Value) as avgcpu by host
host=AZR* index="perfmon" source="Perfmon:Memory" counter="% Committed Bytes In Use" | stats avg(Value) as AvgMemory by host
↧
Error while installing Carbon Black Agent on Splunk Indexers
I have received multiple errors while trying to install the Carbon Black agent on two indexers.
The first error is this:
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in /var/lib/rpm
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages database in /var/lib/rpm
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages index using db5 - Resource temporarily unavailable (11)
error: cannot open Packages database in /var/lib/rpm
error: db5 error(11) from dbenv->open: Resource temporarily unavailable
error: cannot open Packages database in /var/lib/rpm
And the second error is:
![alt text][1]
Please let me know how to mitigate the errors above.
Thank you!
[1]: /storage/temp/280785-splunk-error.png
↧
Splunk DB Connect - data input updates
Hello,
I have an IDM DB in my organization that is connected to Splunk by DB Connect app.
The DB holds data about workflows in the system: their status (in process, completed, etc.), request ID, and so on.
Today I noticed that new requests are indexed as they appear in the system, but when an existing request is updated (for example, the status changes from in process to completed), the data does not change in Splunk.
So the status in my DB is completed, but in Splunk it's still in process.
I tried running the SQL query again through the Data Lab inputs, and Splunk tells me that everything is up to date.
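My suspicion is that a rising-column input keyed on the request ID only ever sees brand-new rows. If that's right, keying the rising column on a last-modified timestamp should re-ingest every update as a new event; a sketch (the table and column names are guesses at my schema):

    SELECT request_id, status, last_modified
    FROM workflows
    WHERE last_modified > ?
    ORDER BY last_modified ASC

In Splunk I would then take `latest(status)` by request_id to get each request's current state.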
Would love to get some help,
Sabina.
↧
Exclude certain logs with a specific attribute from a search that has multiple sources
I am trying to create a scheduled report that combines different sourcetypes and runs from the data model, like below.
| datamodel Email All_Email search
| search sourcetype="ms0365log" OR sourcetype="emaillog" OR sourcetype=exchange2019 OR sourcetype=maillog
For sourcetype=maillog, I want the search to exclude any event where final_rule=scanning from the result. When I run the command below for a single sourcetype it works well, but when I add multiple sourcetypes as above, it fails.
Single sourcetype works fine
| datamodel Email All_Email search
| search sourcetype = "maillog" |spath final_rule | search final_rule!=scanning
Multiple sourcetype fails
| datamodel Email All_Email search
| search sourcetype = "ms0365log OR sourcetype = "emaillog" OR sourcetype=exchange2019 OR sourcetype=maillog "|spath final_rule | search final_rule!=scanning"
Any ideas? I don't mind removing spath.
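What I expected to work, quoting each sourcetype separately and filtering afterwards (a sketch of the intent, not tested):

    | datamodel Email All_Email search
    | search sourcetype="ms0365log" OR sourcetype="emaillog" OR sourcetype="exchange2019" OR sourcetype="maillog"
    | spath final_rule
    | where NOT (sourcetype=="maillog" AND final_rule=="scanning")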
↧
Need a Splunk query that will list the applications on the members of the search head cluster; running the query on the deployer
I need a Splunk query that will list the applications on each member of the search head cluster. I am running the query on the deployer.
I can't find a REST endpoint or an internal Splunk query to list the apps.
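The closest I've found so far is the apps endpoint, though I believe it has to run from a search head that can see the members as search peers, not from the deployer itself (a sketch):

    | rest /services/apps/local splunk_server=*
    | table splunk_server, title, version, disabled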
Any assistance is appreciated.
↧
Python 3.7, Splunk 8.0, RHEL 8: scripted input fails with bs4 import error
I changed the python variable to python3 and ran the following commands:
dnf install python3-pip
dnf install python3-beautifulsoup4
pip3 install --user BeautifulSoup4
Then I tried a scripted input, but I'm getting the following error:
01-26-2020 18:00:29.522 +0000 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/cric.py" import bs4
01-26-2020 18:00:29.522 +0000 ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/cric.py" ModuleNotFoundError: No module named 'bs4'
How do I import bs4?
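Splunk runs scripts with its own bundled Python, which doesn't see packages installed into the system Python. One approach I'm considering is installing the package into a lib folder inside the app (`pip3 install --target /opt/splunk/etc/apps/search/bin/lib beautifulsoup4`) and putting that folder on sys.path at the top of cric.py; a sketch (the lib path is my assumption):

    import os
    import sys

    # Make the app-local lib directory importable before anything else.
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

    from bs4 import BeautifulSoup  # resolved from .../search/bin/lib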
↧