Currently we have the Splunk DB Connect app, version 3.1.4, and I want to connect to MongoDB. In DB Connect, under Connections, which connection type should be selected for MongoDB? I don't see anything for MongoDB. What would be the workaround to connect to it?
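DB Connect only ships with connection types for the JDBC drivers it bundles, so MongoDB does not appear in the list out of the box. The usual workaround is to install a third-party MongoDB JDBC driver into the app's drivers directory and register a custom connection type in db_connection_types.conf. A minimal sketch; the driver class and URL format below are placeholders to replace with the values your chosen driver documents:
# $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf
# jdbcDriverClass and jdbcUrlFormat are placeholders for your driver's actual values
[mongodb]
displayName = MongoDB
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = mongodb.jdbc.MongoDriver
jdbcUrlFormat = jdbc:mongo://<host>:<port>/<database>
port = 27017
After a DB Connect restart, the new type should show up in the connection-type dropdown.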
Thanks in Advance
↧
What connection type should be used for MongoDB in Splunk DB Connect
↧
Extract integer value in search from string JSON in log event
I am trying to extract the 'timeTaken' value from JSON embedded in a log event string in order to build a dashboard.
Example log value:
`2020-02-12 17:50:15.228 INFO 1 --- [io-8080-exec-45] c.m.v.c.RequestInterceptor : {"logType":"RESPONSE","traceId":"03927a51-23d6-4530-a0e6-112b6d4b5539","timestamp":"Feb 12, 2020 5:50:15 PM","requestMethod":"GET","requestUrl":"http://my.url","responseStatus":500,"timeTaken":28}`
Search example :
`index = "my_cluster_name" "kubernetes.labels.app"=my.app | spath log | search log="*"timeTaken"" | rex field=_raw ""timeTaken":(?.*)}" | timechart span=60s avg(timeTaken)`
I also tried:
`index = "my_cluster_name" "kubernetes.labels.app"=my.app | spath log | search log="*"timeTaken"" | rex field=_raw "timeTaken\":(?.*)}" | timechart span=60s avg(timeTaken)`
It appears that the value for `timeTaken` is not populated. I would be grateful for any guidance. Thanks!
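For what it's worth, a simpler pattern that sidesteps the quote-escaping pitfalls and captures only the digits tends to be more robust. A sketch, assuming timeTaken is always an integer and appears in _raw exactly as in the sample above:
index="my_cluster_name" "kubernetes.labels.app"=my.app "timeTaken"
| rex field=_raw "\"timeTaken\":(?<timeTaken>\d+)"
| timechart span=60s avg(timeTaken)
The spath and intermediate search steps are not needed once rex runs directly against _raw.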
↧
↧
Model template validation for JSON request structure in Splunk
Hello, I have complex JSON being written to Splunk and want to validate its structure against a model template. This is to ensure that the JSON has not been corrupted by missing or out-of-order fields. What is the best way to validate JSON requests written to Splunk against a model template and flag requests that do not match its structure? For example, below is the JSON request; the model template would define the field names, their types (string, int), and which fields are mandatory.
{
    "TestTransaction":{
        "OrderEntryType":141,
        "Number":69909696,
        "CloseDate":"2020-02-03T15:31:38.1260000Z",
        "ab":"test",
        "Trans":[
            {
                "Amt":5.45,
                "Desc":"test card",
                "Id":"961071022758064128",
                "Number":7777207236838910,
                "ab":"test",
                "$type":"test"
            }
        ],
        "TotalAmt":5.45,
        "SubAmount":4.95,
        "TaxAmount":0.5,
        "DiscountAmount":0.0,
        "Header":{
            "ServiceType":null,
            "RequestDate":"2020-02-03T15:31:38.1260000Z",
            "$type":"Header"
        },
        "Preparation":"ConsOutOfStore",
        "Details":{
            "Discounts":[],
            "Items":[
                {
                    "Qty":1.0,
                    "Sku":null,
                    "Price":4.45,
                    "Discounts":[],
                    "OverrideDescription":null,
                    "OverridePrice":null,
                    "Suffix":null,
                    "ChildItems":[
                        {
                            "Qty":1.0,
                            "Sku":null,
                            "Price":0.0,
                            "Discounts":null,
                            "IsRefunded":false,
                            "IsTaxed":false,
                            "Summary":{
                                "TotalPrice":4.95,
                                "DiscountAmount":0,
                                "SubtotalAmount":4.95,
                                "$type":"testSummary"
                            },
                            "$type":"testItem"
                        }
                    ],
                    "Taxes":[
                        {
                            "Name":"Sales Tax",
                            "Amount":50,
                            "$type":"testTax"
                        }
                    ],
                    "ReceiptLines":[],
                    "Delivery":null,
                    "$type":"testDetails"
                }
            ],
            "$type":"trans"
        },
        "RequestId":"test",
        "MessageId":"test",
        "$type":"testTransaction"
    }
}
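SPL has no built-in model-template validator, but one search-time approach is to assert the presence and type of each mandatory field with spath and eval and flag events that fail. A minimal sketch, with a hypothetical index and sourcetype and only three of the mandatory fields checked:
index=orders sourcetype=transaction_json
| spath
| eval violations=""
| eval violations=if(isnull('TestTransaction.OrderEntryType'), violations." OrderEntryType_missing", violations)
| eval violations=if(isnull(tonumber('TestTransaction.TotalAmt')), violations." TotalAmt_not_numeric", violations)
| eval violations=if(isnull('TestTransaction.RequestId'), violations." RequestId_missing", violations)
| where violations!=""
| table _time violations _raw
Extending the pattern to every field in the template is mechanical; strict out-of-order detection would need an external validator, since spath does not preserve key order.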
↧
How to extract a string from an email ID in raw logs?
A sample log is as follows:
time="2020-02-12 13:45:37" user-name="abc12345@def-ghi-01.com" proto="HTTPS"
Now I want to extract abc12345 from the raw logs' user-name as "user_name". To do that, I have put the regex below in props.conf for that sourcetype.
props.conf :-
EXTRACT-user = user-name=(?<user_name>[^+@]*)@*\sproto=
Still, the above regex doesn't filter abc12345 into user_name; I see abc12345@def-ghi-01.com under the user_name field.
I want to exclude the @.....com part and extract only the username string prior to the @ sign.
Any help would be great.
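A sketch of an extraction that anchors on the surrounding quotes and stops at the @ sign. Note that Splunk's automatic KV extraction will also create user_name (dashes become underscores) from the raw user-name="..." pair, which is likely why the full address keeps appearing; extracting into a differently named field such as user_prefix (a name made up here) avoids the collision:
# props.conf for the sourcetype; user_prefix is a hypothetical field name
EXTRACT-userprefix = user-name="(?<user_prefix>[^@"]+)@
With the sample event this yields user_prefix=abc12345.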
↧
Transforms.conf not using match_type = CIDR(ip) when searching
Leveraging the ASN Lookup Generator app - https://splunkbase.splunk.com/app/3531/ - to build a lookup table called 'asn' that contains the following:
![asn][1]
The transforms.conf file has the following - note I commented out max_matches to test; with or without that line commented, it still won't return results:
# cat TA-asngen/default/transforms.conf
[asn]
filename = asn.csv
match_type = CIDR(ip)
#max_matches = 1
If I run the following search
| makeresults |eval src="1.0.0.1"| lookup asn ip as src
Nothing is matched, but it should match ASN 1335. Any thoughts? I feel like I'm doing something wrong.
![no match][2]
[1]: /storage/temp/282622-screen-shot-2020-02-12-at-112109-am.png
[2]: /storage/temp/282623-screen-shot-2020-02-12-at-112425-am.png
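Two things worth checking, assuming the ip column in asn.csv really holds CIDR blocks (for example 1.0.0.0/24): first, that this [asn] stanza is the definition the search actually resolves, since a local/transforms.conf in the same app, or an identically named definition in another app, will silently win without match_type; second, that the lookup definition's permissions are shared beyond the TA-asngen app context you search from. A sanity-check variant of the search with an explicit OUTPUT clause:
| makeresults
| eval src="1.0.0.1"
| lookup asn ip AS src OUTPUT asn
| table src asn
If this returns asn empty while an exact-match value (the full CIDR string in src) returns a row, the match_type setting is not being applied.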
↧
↧
Splunk DB Connect 3.1.4 requires Splunk version 6.6.0
Hello dear community,
I noticed that with version 3.1.4 of Splunk DB Connect, the prerequisite Splunk version was raised from 6.4.0 to 6.6.0.
In 3.1.3, Splunk version 6.4.0 was still supported;
see https://docs.splunk.com/Documentation/DBX/3.1.3/DeployDBX/Checklist
and https://docs.splunk.com/Documentation/DBX/3.1.4/DeployDBX/Checklist
We are running on Splunk 6.4.1 and I am wondering why support for 6.4.0 was dropped.
Was there a new feature in DB Connect that made this change necessary?
Or is there a bug that occurs in combination with 6.4.0?
I could not find any hints in the release notes:
https://docs.splunk.com/Documentation/DBX/3.1.4/ReleaseNotes/Releasenotes
https://docs.splunk.com/Documentation/DBX/3.2.0/ReleaseNotes/Releasenotes
We tested DB Connect 3.2.0 today and so far it has gone well...
But of course I would consider using an earlier version of DB Connect if there is a good reason to.
Any clarification is very much appreciated!
↧
How to configure non domain account for WMI access
Hello Everyone,
I have a service account that I need to configure to collect WMI data from domain controllers. This account can't be an admin on the domain controller, so I am trying to give my account least-privilege access per the documentation below:
https://docs.splunk.com/Documentation/Splunk/8.0.1/Data/MonitorWMIdata
I am running Splunk Enterprise 8.0.1 and have a basic architecture with only one instance of Splunk acting as both indexer and search head. I have not installed any forwarder (not allowed to install one on a DC per policy).
So far, I have tried everything in this link:
https://answers.splunk.com/answers/2703/how-to-enable-wmi-data-collection-on-a-domain-server.html
However, I am still not able to connect to the DC. The above link says that it was tried on Windows 2003; I have Windows Server 2012 and 2016. Could someone please assist me in getting this fixed? Any help would be greatly appreciated. Please let me know if you require further information. I apologize if I have missed anything crucial; I am still a newbie trying to find my way through it.
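For reference, once the account has the DCOM/WMI/event-log permissions from the docs and the splunkd service runs as that account, the remote collection itself is just a wmi.conf stanza on your single instance. A sketch, with hypothetical DC hostnames:
# %SPLUNK_HOME%\etc\system\local\wmi.conf ; dc01/dc02 are hypothetical hostnames
[WMI:DCSecurity]
server = dc01, dc02
interval = 5
event_log_file = Security
disabled = 0
Errors from the remote calls land in splunkd.log and are usually more specific than the UI about which permission is missing.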
Thank You!
↧
Modifying macros.conf to include multiple indexes
How do I modify macros.conf to include multiple indexes? Will it recognize wildcards in the index name?
example:
[event_sources]
definition = (index="win*" OR source=*WinEventLog*)
disabled = 0
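Yes, wildcards work: a macro is plain text substitution, so anything that is legal in a search, including a wildcarded index=win*, is legal in the definition. A sketch with assumed index names:
[event_sources]
definition = (index="win*" OR index="wineventlog" OR index="msad" OR source=*WinEventLog*)
disabled = 0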
↧
↧
Monitoring for failing SSL on a Squid proxy with Stream?
All,
I have a Squid web proxy with an in-house cert on it. We've gone through and applied the root certs to all our hosts, set it as trusted, and it's working great. What I am looking to do is create an alert in Splunk so that if a host tries to hit our proxy and fails the SSL check, I get an alert.
I am assuming there is an SSL-disconnect message of some sort in Stream I can capture, but I am not sure which streams to configure. Can someone point me in the right direction?
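I cannot confirm exact Stream field names from memory, so treat everything below as an assumption to verify against your own stream:tcp events (the port is also a placeholder). The idea is to flag hosts that repeatedly open connections to the proxy port but exchange only handshake-sized traffic before the connection closes, which is the typical signature of a failed TLS negotiation:
sourcetype=stream:tcp dest_port=3128 bytes_out<1000
| stats count AS failed_attempts BY src_ip
| where failed_attempts>5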
↧
Calculate event time, given a startup time and an offset per event?
I have a log source with a terrible timestamping scheme. The first line contains the startup date/time, and each event in the log is marked with a seconds.millis offset from that time (left space-padded).
I know how I would go about this if I wanted to ingest the whole file in one go after it's closed (use Python to make a copy with calculated times replacing the offsets), but I'm interested in ingesting as the log is written. I have no idea where to start for that.
Does anyone have a ready solution for something like this? If not, I'd appreciate some guidance toward a useful direction to research.
0.012 2020-02-05 17:35:53; Factorio 0.18.3 (build 49258, win64, alpha)
0.013 Operating system: Windows 10 (build 18363)
0.013 Program arguments: "E:\GOG\Factorio\bin\x64\factorio.exe" "--force-opengl"
0.013 Read data path: E:/GOG/Factorio/data
0.013 Write data path: C:/Users/Nate/AppData/Roaming/Factorio [626596/1907076MB]
0.013 Binaries path: E:/GOG/Factorio/bin
0.032 System info: [CPU: AMD Ryzen 7 2700X Eight-Core Processor, 16 cores, RAM: 6553/16310 MB, page: 9579/24406 MB, virtual: 4251/134217727 MB, extended virtual: 0 MB]
0.032 Display options: [FullScreen: 1] [VSync: 1] [UIScale: custom (150.0%)] [Native DPI: 1] [Screen: 255] [Special: lmw] [Lang: en]
0.036 Available displays: 1
0.036 [0]: \\.\DISPLAY1 - NVIDIA GeForce GTX 1070 {0x05, [0,0], 2560x1440, 32bit, 59Hz}
0.684 Initialised OpenGL:[0] GeForce GTX 1070/PCIe/SSE2; driver: 3.3.0 NVIDIA 432.00
0.684 [Extensions] s3tc:yes; KHR_debug:yes; ARB_clear_texture:yes, ARB_copy_image:yes
0.684 [Version] 3.3
0.684 Verbose GraphicsInterfaceOpenGL.cpp:896: [Caps] Tex:32768, TexArr:2048, TexBufSz:131072kB; TexUnits:192; UboSz:64kB
0.684 Graphics settings preset: very-high
0.684 Dedicated video memory size 8192 MB
1.095 Verbose PipelineStateObject.cpp:85: Time to load shaders: 0.411126 seconds.
1.149 Desktop composition is active.
1.150 Graphics options: [Graphics quality: high] [Video memory usage: all] [Light scale: 100%] [DXT: high-quality] [Color: 32bit]
1.150 [Max threads (load/render): 32/16] [Max texture size: 0] [Tex.Stream.: 0] [Rotation quality: normal] [Other: STDC] [B:0,C:0,S:100]
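While looking for an ingest-time answer, a search-time workaround is possible: extract the offset from every event and the wall-clock timestamp from the startup line, carry the most recent startup time forward with streamstats, and overwrite _time. A sketch, assuming a hypothetical source name and that events come back in the order they were written:
source="factorio-current.log"
| rex field=_raw "^\s*(?<offset>\d+\.\d+)\s"
| rex field=_raw "^\s*\d+\.\d+\s+(?<boot_ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2});"
| eval boot_epoch=strptime(boot_ts, "%Y-%m-%d %H:%M:%S")
| streamstats latest(boot_epoch) AS boot_epoch
| eval _time=boot_epoch+offset
For ingest-time correctness you would still need something stateful in front of Splunk (a scripted or modular input that rewrites the lines), since props/transforms cannot carry the startup time across events.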
↧
Weird issue with eNcore App
I am doing some testing with this app on 7.3.3 servers. I have noticed that if I disable the app and re-enable it, clicking on the app link sends me to another app or a 404 page. If I do a fresh install on a server that has never had the app, it works fine. But if I then disable it, delete the files, and replace the files, the same issue occurs when I click on the app icon. Has anyone else experienced this, or have any idea why this is happening?
The app version I am using is 3.6.8
Thank you.
↧
↧
[systemd] splunk start keeps on asking to enter password
I am running 7.3.3 using systemd and running into issues with running splunk restart as splunk user.
[splunk]$ splunk restart
Send restart to systemctl
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
I've read the articles below, but they don't appear to fix the issue.
https://answers.splunk.com/answers/724473/major-boot-start-change-with-723-1.html
https://answers.splunk.com/answers/710045/splunk-722-systemd-root-privileges-required-when-s.html
Here's my current systemd script
[root]# cat /etc/systemd/system/Splunkd.service
#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target
[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
LimitNOFILE=65536
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Delegate=true
CPUShares=1024
MemoryLimit=7831937024
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R 1003:1003 /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R 1003:1003 /sys/fs/cgroup/memory/system.slice/%n"
[Install]
WantedBy=multi-user.target
I also tried adding the following to my /etc/sudoers:
splunk ALL=(root) NOPASSWD: /usr/bin/systemctl restart Splunkd.service
splunk ALL=(root) NOPASSWD: /usr/bin/systemctl stop Splunkd.service
splunk ALL=(root) NOPASSWD: /usr/bin/systemctl start Splunkd.service
splunk ALL=(root) NOPASSWD: /usr/bin/systemctl status Splunkd.service
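The password prompt comes from polkit, not sudo: splunk restart shells out to systemctl directly, so the sudoers entries are never consulted (they only help if you run sudo systemctl restart Splunkd.service yourself). One common fix on distributions with polkit 0.106 or newer (e.g. RHEL/CentOS 7) is a rule that grants the splunk user the manage-units action for this one unit; a sketch:
// /etc/polkit-1/rules.d/10-Splunkd.rules
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "Splunkd.service" &&
        subject.user == "splunk") {
        return polkit.Result.YES;
    }
});
No polkit restart should be needed; rules in rules.d are picked up automatically.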
↧
XSD schema validation on JSON data
Hello, I have complex JSON being written to Splunk and want to do XSD schema validation on the JSON. This is to ensure that the JSON has not been corrupted by missing or out-of-order fields. What is the best way to do this?
Below is the JSON request that has an XSD schema and needs validation:
{
    "TestTransaction":{
        "OrderEntryType":141,
        "Number":69909696,
        "CloseDate":"2020-02-03T15:31:38.1260000Z",
        "ab":"test",
        "Trans":[
            {
                "Amt":5.45,
                "Desc":"test card",
                "Id":"961071022758064128",
                "Number":7777207236838910,
                "ab":"test",
                "$type":"test"
            }
        ],
        "TotalAmt":5.45,
        "SubAmount":4.95,
        "TaxAmount":0.5,
        "DiscountAmount":0.0,
        "Header":{
            "ServiceType":null,
            "RequestDate":"2020-02-03T15:31:38.1260000Z",
            "$type":"Header"
        },
        "Preparation":"ConsOutOfStore",
        "Details":{
            "Discounts":[],
            "Items":[
                {
                    "Qty":1.0,
                    "Sku":null,
                    "Price":4.45,
                    "Discounts":[],
                    "OverrideDescription":null,
                    "OverridePrice":null,
                    "Suffix":null,
                    "ChildItems":[
                        {
                            "Qty":1.0,
                            "Sku":null,
                            "Price":0.0,
                            "Discounts":null,
                            "IsRefunded":false,
                            "IsTaxed":false,
                            "Summary":{
                                "TotalPrice":4.95,
                                "DiscountAmount":0,
                                "SubtotalAmount":4.95,
                                "$type":"testSummary"
                            },
                            "$type":"testItem"
                        }
                    ],
                    "Taxes":[
                        {
                            "Name":"Sales Tax",
                            "Amount":50,
                            "$type":"testTax"
                        }
                    ],
                    "ReceiptLines":[],
                    "Delivery":null,
                    "$type":"testDetails"
                }
            ],
            "$type":"trans"
        },
        "RequestId":"test",
        "MessageId":"test",
        "$type":"testTransaction"
    }
}
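One thing worth noting: XSD is defined for XML, so for JSON the closest standard equivalent is JSON Schema, validated outside Splunk (or in a custom search command) rather than in plain SPL. A minimal sketch covering a few of the mandatory fields and types from the request above:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["TestTransaction"],
  "properties": {
    "TestTransaction": {
      "type": "object",
      "required": ["OrderEntryType", "Number", "TotalAmt", "RequestId"],
      "properties": {
        "OrderEntryType": { "type": "integer" },
        "Number": { "type": "integer" },
        "TotalAmt": { "type": "number" },
        "RequestId": { "type": "string" }
      }
    }
  }
}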
↧
↧
How to append data to a child record when the parent ID matches the parent ID held by the child
Hello,
I have data like the following.
Index A (man-hour data)
id,issue.id,man-hour
a c 2
Index B (ticket data)
issue.id,parent.id,type,subject
b null 111 null
c b null test
I would like to output the results as follows:
id,type,subject,man-hour
c 111 test 2
Index B contains both parent ticket records and child ticket records; each child ticket holds its parent's ID.
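A sketch in SPL, assuming hypothetical index names a and b: join the man-hour record to its ticket on issue.id, then join that ticket to its parent on parent.id to pull in the parent's type (join subsearches are subject to Splunk's subsearch limits):
index=a
| rename "issue.id" AS issue_id
| join type=left issue_id
    [ search index=b
      | rename "issue.id" AS issue_id, "parent.id" AS parent_id
      | fields issue_id parent_id subject ]
| join type=left parent_id
    [ search index=b
      | rename "issue.id" AS parent_id
      | fields parent_id type ]
| eval id=issue_id
| table id type subject man-hour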
↧
How to input data via "TA for Nutanix Prism" add-on?
I was reading the guide for "TA for Nutanix Prism" on Splunkbase.
It describes data input as follows:
"On you Splunk Enterprise instance, navigate over to Settings —> Data Inputs —> and **select the Nutanix Prism API endpoints you want to ingest into Splunk. For each input you want to add, select the endpoint** then select new —> and fill out the required form."
But I can't find an option matching "select the Nutanix Prism API endpoints you want to ingest into Splunk. For each input you want to add, select the endpoint"; the options I see are Local Event Logs, Remote Event Logs, Files & Directories, HTTP Event Collector, TCP/UDP...
Could you give some advice? Thanks in advance.
↧
License Utilization from a Disabled Index when collecting data using HEC
Why does Splunk count data sent via HEC against the license even when the destination index is disabled?
I am observing the same behavior in our Prod, Dev, and POC environments.
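As I understand it, license metering happens in the indexing pipeline before the event is written to (or rejected by) the destination index, which would explain why a disabled index still consumes license; I would treat that as expected behavior rather than a bug, though Splunk support can confirm. To see exactly what is being charged, license_usage.log on the license master breaks usage down by index and sourcetype:
index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes BY idx, st
| eval MB=round(bytes/1024/1024,2)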
↧