Channel: Questions in topic: "splunk-enterprise"

Windows Infrastructure: What would cause the Print Job Viewer to stop working?

I set up our print server to send print job info to Splunk recently, and it worked for a while. For some reason it has stopped working, and I have no idea why. Within Windows Infrastructure, I can see print job information in the Print Job Viewer from Monday, but nothing since. I searched the data directly (Search -> chose the print server as the source) and all of the recent print job information is there. What could cause it to stop showing in the Print Job Viewer?

Can we configure some Universal Forwarders to forward data to port 9998 with SSL on indexers and the remaining Universal Forwarders to forward data to port 9997 without SSL on same indexers?

Can we configure some Universal Forwarders to forward data to port 9998 with SSL on the indexers, and the remaining Universal Forwarders to forward data to port 9997 without SSL on the same indexers? If yes, what do we need to configure?
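For reference, an indexer can listen for forwarder traffic on a plain splunktcp port and an SSL splunktcp port at the same time, so this split is possible. A minimal sketch of the relevant stanzas; the certificate paths, passwords and host names below are placeholders, and some SSL attribute names changed between Splunk versions, so check inputs.conf.spec / outputs.conf.spec for yours:

# inputs.conf on each indexer
[splunktcp://9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = <server certificate password>
requireClientCert = false

# outputs.conf on the Universal Forwarders that should use SSL
[tcpout:ssl_indexers]
server = indexer1.example.com:9998, indexer2.example.com:9998
sslCertPath = /opt/splunkforwarder/etc/auth/client.pem
sslRootCAPath = /opt/splunkforwarder/etc/auth/cacert.pem
sslPassword = <client certificate password>
sslVerifyServerCert = false

# outputs.conf on the Universal Forwarders that should stay unencrypted
[tcpout:plain_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997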

How to edit my regular expression for a multivalue field extraction with new lines?

Hello, I need regex help. I've spent almost all day on this and only came up with the following, which is very sloppy. I feel like it could be more efficient and actually work; when I plug it into the field extractor ("I'll define my own regular expression"), it doesn't do anything.

My regex:

^Job Dependencies:\s*[([]*(\w+_\w+_\w+_\w+_\w+)[)\]]*|,\s+[([]*(\w+_\w+_\w+_\w+_\w+)[)\]]*,\n|\G\s*[([]*(\w+_\w+_\w+_\w+_\w+)[)\]]*,*

I only need the job dependencies. I know I need to turn them into a multivalue field so the expected Splunk stats list output can look like this (I don't need help with the Splunk search, just showing it so you know what I'm trying to achieve):

Job Name: ABC_Job
Job Dependencies (multivalue):
ABC_ABC_AB2_123_ABC123
ABC_ABC_AB2_123_123ABC
BCA_BCA_12A_ABC_123ABC
DDD_AAA_CCC_12_123ABC

Since the data also has a "Job Prerequisites:" section with similarly formatted data, my regex captures that data as well, but I don't want it. Please help. Sample data below:

Job Name : Job ID: ABC_Job ADF123
Job Prerequisites: (ABC_ABC_AB2_123_ABC123, AB1_ABC_AB2_123_123ABC)
Job Dependencies: (ABC_ABC_AB2_123_ABC123, ABC_ABC_AB2_123_123ABC, BCA_BCA_12A_ABC_123ABC, DDD_AAA_CCC_12_123ABC)

**THERE'S A CATCH**: sometimes the "Job Dependencies" list contains square brackets, or just one dependency, for example:

Job Dependencies: (ABC_ABC_AB2_123_ABC123, [ABC_ABC_AB2_123_123ABC], BCA_BCA_12A_ABC_123ABC, DDD_AAA_CCC_12_123ABC)
OR
Job Dependencies: (DDD_AAA_CCC_12_123ABC)

Essentially, I am trying to capture the underscore-separated values after "Job Dependencies". I can't get my regex to wrap or work correctly. Any help is greatly appreciated. Thanks, John
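Not a definitive answer, but one way to avoid picking up the Job Prerequisites values is to capture the whole parenthesized list after "Job Dependencies:" first and only split it into a multivalue field afterwards. A sketch against the sample data above (the field names dep_list and JobDependencies are made up for illustration):

... | rex "Job Dependencies:\s*\((?<dep_list>[^\)]+)\)"
    | eval JobDependencies = split(replace(dep_list, "[\[\]\s]+", ""), ",")

The rex stops at the closing parenthesis, so the prerequisites line is never touched, and [^\)] also matches newlines if the list wraps. The eval strips the optional square brackets and whitespace and splits on commas, which also handles the single-dependency case.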

Unhandled Exception in Splunk App for Salesforce: "urllib2.URLError: urlopen error [Errno -2] Name or service not known"

We are attempting to bring the Splunk App for Salesforce into our on-premise Splunk Enterprise. When we configured it, it threw the following error:

01-26-2017 18:05:15.808 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" Traceback (most recent call last):
01-26-2017 18:05:15.808 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py", line 395, in
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" run()
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py", line 373, in run
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" session, endpoint = get_salesforce_token(settings, config)
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py", line 227, in get_salesforce_token
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" handle = urllib2.urlopen(req)
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 154, in urlopen
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" return opener.open(url, data, timeout)
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 431, in open
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" response = self._open(req, data)
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 449, in _open
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" '_open', req)
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 409, in _call_chain
01-26-2017 18:05:15.809 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" result = func(*args)
01-26-2017 18:05:15.810 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 1240, in https_open
01-26-2017 18:05:15.810 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" context=self._context)
01-26-2017 18:05:15.810 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" File "/opt/splunk/lib/python2.7/urllib2.py", line 1197, in do_open
01-26-2017 18:05:15.810 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" raise URLError(err)
01-26-2017 18:05:15.811 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-sfdc/bin/sfdc_object.py" urllib2.URLError: <urlopen error [Errno -2] Name or service not known>

Unfortunately, this exception is not handled properly, so we don't have much insight into what is causing the problem. I tested the username, password, security token, and URL (test.salesforce.com), as well as the request it creates from those properties, and it is successful. I've even tested using cURL on the server itself without issue. Does anyone have suggestions of what else to look at? For reference, here's the section of the library that's failing (/opt/splunk/lib/python2.7/urllib2.py):

1180 headers = dict(
1181 (name.title(), val) for name, val in headers.items())
1182
1183 if req._tunnel_host:
1184 tunnel_headers = {}
1185 proxy_auth_hdr = "Proxy-Authorization"
1186 if proxy_auth_hdr in headers:
1187 tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
1188 # Proxy-Authorization should not be sent to origin
1189 # server.
1190 del headers[proxy_auth_hdr]
1191 h.set_tunnel(req._tunnel_host, headers=tunnel_headers)
1192
1193 try:
1194 h.request(req.get_method(), req.get_selector(), req.data, headers)
1195 except socket.error, err: # XXX what error?
1196 h.close()
1197 raise URLError(err)
1198 else:
1199 try:
1200 r = h.getresponse(buffering=True)
1201 except TypeError: # buffering kw not supported
1202 r = h.getresponse()

Dashboard base search cannot use macros

I've come to find out that one cannot use macros within join statements in dashboards that have base searches (driving multiple/all panels in the dashboard). For example, the following base search doesn't work:

index=some_index sourcetype="mysourcetype" earliest=-30d@d | `mymacro` | search [search index=customer_index | `mymacro` | table customer | dedup customer] | stats count by field1 field2 field3 | lookup mylookup customer OUTPUT customer_name as "Customer" | join customer [ search index=some_index earliest=-30d@d sourcetype="mysourcetype" | **`mymacro`** | rex "(?\w*)\s*(?(\d|\.)*)\s*(?.*)" | fields customer version ]

(base search time range -30d@d to now; a "Total Customers" panel post-processes it with | table customer | dedup customer | stats count)

but if I take the macro out of the join statement, it works:

index=some_index sourcetype="mysourcetype" earliest=-30d@d | `mymacro` | search [search index=customer_index | `mymacro` | table customer | dedup customer] | stats count by field1 field2 field3 | lookup mylookup customer OUTPUT customer_name as "Customer" | join customer [ search index=some_index earliest=-30d@d sourcetype="mysourcetype" | rex field=host "(.*)\.(?<StackId>[^\.]+).splunkcloud.com" | eval customer = StackId | rex "(?\w*)\s*(?(\d|\.)*)\s*(?.*)" | fields customer version ]

(same time range and "Total Customers" panel as above)

where the macro **`mymacro`** expands to:

rex field=host "(.*)\.(?<StackId>[^\.]+).splunkcloud.com" | eval customer = StackId

Has anyone seen this before? It occurs in 6.4.x and 6.5.x. If so, is there a way around it?

Receiving SSL data into a forwarder - ISAM9 request_syslogs to Splunk forwarder

IBM Security Access Manager v9, build 9.0.1.0. There is a bug which doesn't allow syslog to be sent over UDP, but TLS-TCP works; the bug is fixed in 9.0.2.0.

On the **ISAM9** side, within the proxy I have set up the logcfg parameter to send syslog out:

server-log-cfg = rsyslog server=10.10.10.10,port=10265,log_id=server01_msg_webseald-default.log,ssl_keyfile=default_qdsrv.kdb,ssl_stashfile=default_qdsrv.sth

**On the Splunk forwarder side** (I send the logs to an intermediate forwarder, which sends to the cluster): in inputs.conf I have tried the variations [tcp://:10265], [splunktcp-ssl://:10265], [tcp-ssl:10265], switching between :, ://: and //: since the docs were not too clear. When using splunktcp or tcp-ssl, my splunkd.log (on the forwarder) reports these are reserved for Splunk2Splunk. Also, when I run netstat -apn | grep 10265, it's not listening.

Question: I'm not sure if I generated the SSL cert correctly. I followed this link: https://answers.splunk.com/answers/130860/how-to-get-tcp-ssl-input-for-splunk-6-0-to-work.html but it can't find the genSignedServerCert.py file referenced in the script `/opt/splunk/bin/genSignedServerCert.sh -d /opt/splunk/etc/certs -n splunk -c splunk -p`, so it fails.

Has anyone worked on ISAM9 -> Splunk forwarding? Any accurate advice on how to receive SSL data into a forwarder? Splunk 6.5.2, Splunk forwarder 6.4.3. Thank you, Sean
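For comparison, a tcp-ssl input on the receiving forwarder normally needs both the port stanza and an [SSL] stanza in the same inputs.conf. A sketch only; the certificate path, password and sourcetype are assumptions, and attribute names differ slightly across versions (older examples use password/rootCA), so check inputs.conf.spec for 6.4.3:

# inputs.conf on the intermediate forwarder
[tcp-ssl:10265]
sourcetype = isam:webseald
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <PEM password>
requireClientCert = false

splunktcp and splunktcp-ssl really are reserved for Splunk-to-Splunk traffic, so plain tcp-ssl is the stanza to use for a third-party syslog-over-TLS sender; if the port still isn't listening after a restart, splunkd.log usually says why the SSL context could not be loaded.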

How to get all indexes and sourcetypes?

After browsing through Splunk Answers, the closest I could get is the following SPL to list all indexes and sourcetypes in a single table:

| eventcount summarize=false index=* index!=_* | dedup index | fields index | map maxsearches=100 search="| metadata type=sourcetypes index=\"$index$\" | eval retention=tostring(abs(lastTime-firstTime), \"duration\") | convert ctime(firstTime) ctime(lastTime) | sort lastTime | rename totalCount AS \"TotalEvents\" firstTime AS \"FirstEvent\" lastTime AS \"LastEvent\" | eval index=\"$index$\"" | rename index as "Index" "sourcetype" as "SourceType" | fields Index SourceType TotalEvents FirstEvent LastEvent

I want to provide users with the ability to filter by indexes and sourcetypes. Here is what I have so far.

Index dropdown (token quoted as \"$index$\", with an ALL option), populated by:

| rest /servicesNS/-/-/data/indexes | rename "title" as index | eval dy = (frozenTimePeriodInSecs/86400) % 365 | eval retention = dy . " days" | dedup index | stats count by index

SourceType dropdown (with an ALL option, values joined with OR), populated by:

| metadata type=sourcetypes index=* | stats count by sourcetype

Table panel (time range -3d@d to now), using the dropdown tokens:

| eventcount summarize=false index=* index!=_* | dedup index | fields index | map maxsearches=100 search="| metadata type=sourcetypes index=\"$index$\" | eval retention=tostring(abs(lastTime-firstTime), \"duration\") | convert ctime(firstTime) ctime(lastTime) | sort lastTime | rename totalCount AS \"TotalEvents\" firstTime AS \"FirstEvent\" lastTime AS \"LastEvent\" | eval index=\"$index$\"" | rename index as "Index" "sourcetype" as "SourceType" | fields Index SourceType TotalEvents FirstEvent LastEvent | search $source_type$
I am unable to achieve two things here: 1. When I filter indexes, I want the respective sourcetypes to be filtered in the sourcetypes dropdown. 2. Display the table with the selected indexes and sourcetypes only (it should be possible to select multiple values in both cases). The query seems slow, but it gives the expected output. Any advice? Thanks!
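For the cascading behaviour in point 1, one possible approach is to drive the sourcetype dropdown's populating search from the index token rather than index=*. A sketch, assuming both inputs are multiselects, the index multiselect uses valuePrefix index=", valueSuffix " and delimiter  OR  so its token expands to index="A" OR index="B", and the token names $index_tok$ / $sourcetype_tok$ are made up here:

Index multiselect, populating search:
| eventcount summarize=false index=* index!=_* | dedup index | stats count by index

Sourcetype multiselect, populating search (re-runs whenever the index selection changes):
| tstats count where $index_tok$ by sourcetype

For point 2, the same $index_tok$ can be applied as a filter right after the eventcount in the table search (| search $index_tok$ before the map), with $sourcetype_tok$ feeding the final | search as you already do, so the table is restricted to the selected combinations.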

How to remove numbers from events at search time?

Hi, I have endpoints which are extracted from the log message, and some endpoints have numbers at the end. Can we replace those trailing digits with *? Here are the extracted field and values:

uri = private/credentials/products/CCSID/1001111335764
uri = private/credentials/products/CCSID/1001111336914

Can we display them like this?

uri = private/credentials/products/CCSID/*
uri = private/credentials/products/CCSID/*
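Yes, this can be done at search time with a regex replace on the extracted field. A minimal sketch, assuming the field really is named uri and the numeric ID is always the last path segment:

... | eval uri = replace(uri, "/\d+$", "/*")

An equivalent form is | rex field=uri mode=sed "s/\/\d+$/\/*/".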

Can you exclude specific files from the Splunk file validation?

After upgrading to Splunk 6.5.1, we began receiving an error message in the GUI stating "File Integrity checks found 1 files that did not match the system-provided manifest. See splunkd.log for details." After some digging, it turned out to be the file /opt/splunk/share/GeoLite2-City.mmdb. This is the free MaxMind GeoLite2 City database file used by the iplocation command. We actually update this file monthly with each new release of the GeoLite2-City.mmdb file. I'm guessing that since this file ships with Splunk, it's being checked against the file manifest and fails the integrity check due to a checksum mismatch. Is there any way to exclude a file from this integrity check? Looking at the docs for the integrity check and the Monitoring Console health check, I couldn't find anything about excluding files. docs.splunk.com/Documentation/Splunk/6.5.1/Admin/ChecktheintegrityofyourSplunksoftwarefiles docs.splunk.com/Documentation/Splunk/6.5.1/DMC/Customizehealthcheck

Best practices for writing log files that have variable number of fields

We are writing our own logs for disk usage, using key-value pairs. The issue is that each host has a different number of disk partitions, so my logs look like the examples below. We are not sure what we will do with the data yet; maybe alert on conditions, maybe collect trending data. What do people typically do in this case? Thanks.

2017-01-27 02:48:00 db_dt="2017-01-27 02:12:00" hostname=myhost1 vol1 = "/dev/sda1" capacity1 = "706G" percentfull1 = "9%" vol2 = "tmpfs" capacity2 = "7.6G" percentfull2 = "1%"
2017-01-27 02:48:00 db_dt="2017-01-27 02:12:00" hostname=myhost2 vol1 = "/dev/sda1" capacity1 = "2.4G" percentfull1 = "84%" vol2 = "tmpfs" capacity2 = "24G" percentfull2 = "1%" vol3 = "/dev/sda3" capacity3 = "1.6T" percentfull3 = "1%"
2017-01-27 02:48:00 db_dt="2017-01-27 02:12:00" hostname=myhost3 vol1 = "/dev/sda1" capacity1 = "474G" percentfull1 = "8%" vol2 = "tmpfs" capacity2 = "12G" percentfull2 = "4%" vol4=/foo capacity4="3G" percentfull4="17%"
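Both alerting and trending get easier if the numbered pairs are normalized at search time. A sketch using foreach to find the fullest partition per host; the sourcetype is a placeholder and the 80% threshold is just an example, while the field names come from the sample events above:

sourcetype=<your_disk_usage_sourcetype>
| eval worst_pct = 0
| foreach percentfull* [ eval worst_pct = max(worst_pct, coalesce(tonumber(replace(<<FIELD>>, "%", "")), 0)) ]
| stats max(worst_pct) AS worst_pct BY hostname
| where worst_pct > 80

A longer-term alternative is to write one event per volume (fixed keys hostname, vol, capacity, percentfull), which avoids the numbered suffixes entirely and makes timechart-style trending straightforward.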

How to reset a Splunk Enterprise license?

I received a reset license key. Where do I apply it now?

KV Store: Fatal Assertion - Write to OpLog failed

We are running Splunk 6.5.1, and on one of our standalone search heads, upon every restart of splunkd we get the following message:

KV Store changed status to failed. KVStore process terminated.
KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

And further, in mongod.log:

2017-01-27T06:57:06.836Z I NETWORK FIPS 140-2 mode activated
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] MongoDB starting : pid=1852 port=8191 dbpath=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo 64-bit host=FS300
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] db version v3.0.8-splunk
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] git version: 83d8cc25e00e42856924d84e220fbe4a839e605d
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] allocator: tcmalloc
2017-01-27T06:57:06.894Z I CONTROL [initandlisten] options: { net: { port: 8191, ssl: { CAFile: "C:\Program Files\Splunk\etc\auth\mycerts\cacert.cer", FIPSMode: true, PEMKeyFile: "C:\Program Files\Splunk\etc\auth\mycerts\server.pem", PEMKeyPassword: "", allowInvalidHostnames: true, mode: "preferSSL" } }, replication: { oplogSizeMB: 200, replSet: "E3DEAFEB-A8FE-4C69-8C24-6BD8AE669663" }, security: { clusterAuthMode: "sendX509", javascriptEnabled: false, keyFile: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0" }, storage: { dbPath: "C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
2017-01-27T06:57:06.920Z I JOURNAL [initandlisten] journal dir=C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal
2017-01-27T06:57:06.920Z I JOURNAL [initandlisten] recover begin
2017-01-27T06:57:06.920Z I JOURNAL [initandlisten] recover lsn: 0
2017-01-27T06:57:06.920Z I JOURNAL [initandlisten] recover C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\j._0
2017-01-27T06:57:06.934Z I JOURNAL [initandlisten] recover cleaning up
2017-01-27T06:57:06.934Z I JOURNAL [initandlisten] removeJournalFiles
2017-01-27T06:57:06.934Z I JOURNAL [initandlisten] recover done
2017-01-27T06:57:06.998Z I JOURNAL [durability] Durability thread started
2017-01-27T06:57:06.999Z I JOURNAL [journal writer] Journal writer thread started
2017-01-27T06:57:07.507Z I NETWORK [initandlisten] waiting for connections on port 8191 ssl
2017-01-27T06:57:07.509Z I REPL [ReplicationExecutor] New replica set config in use: { _id: "E3DEAFEB-A8FE-4C69-8C24-6BD8AE669663", version: 1, members: [ { _id: 0, host: "127.0.0.1:8191", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: { instance: "E3DEAFEB-A8FE-4C69-8C24-6BD8AE669663", all: "all" }, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2017-01-27T06:57:07.509Z I REPL [ReplicationExecutor] This node is 127.0.0.1:8191 in the config
2017-01-27T06:57:07.509Z I REPL [ReplicationExecutor] transition to STARTUP2
2017-01-27T06:57:07.509Z I REPL [ReplicationExecutor] Starting replication applier threads
2017-01-27T06:57:07.510Z I REPL [ReplicationExecutor] transition to RECOVERING
2017-01-27T06:57:07.511Z I
REPL [ReplicationExecutor] transition to SECONDARY 2017-01-27T06:57:07.512Z I REPL [ReplicationExecutor] transition to PRIMARY 2017-01-27T06:57:07.567Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50653 #1 (1 connection now open) 2017-01-27T06:57:07.784Z I ACCESS [conn1] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:07.784Z I NETWORK [conn1] end connection 127.0.0.1:50653 (0 connections now open) 2017-01-27T06:57:08.557Z I REPL [rsSync] transition to primary complete; database writes are now permitted 2017-01-27T06:57:08.786Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50679 #2 (1 connection now open) 2017-01-27T06:57:09.027Z I ACCESS [conn2] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:09.027Z I NETWORK [conn2] end connection 127.0.0.1:50679 (0 connections now open) 2017-01-27T06:57:09.028Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50680 #3 (1 connection now open) 2017-01-27T06:57:09.252Z I ACCESS [conn3] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:09.254Z I NETWORK [conn3] end connection 127.0.0.1:50680 (0 connections now open) 2017-01-27T06:57:09.255Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50681 #4 (1 connection now open) 2017-01-27T06:57:09.471Z I ACCESS [conn4] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:09.472Z I NETWORK [conn4] end connection 127.0.0.1:50681 (0 connections now open) 2017-01-27T06:57:09.473Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50683 #5 (1 connection now open) 2017-01-27T06:57:09.693Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50684 #6 (2 connections now open) 2017-01-27T06:57:09.916Z I ACCESS [conn6] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:10.759Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50689 #7 (3 connections now open) 2017-01-27T06:57:10.974Z I NETWORK [initandlisten] connection accepted from 127.0.0.1:50691 #8 (4 connections now open) 2017-01-27T06:57:11.189Z I ACCESS [conn8] authenticate db: $external { authenticate: 1, mechanism: "MONGODB-X509", user: "emailAddress=email@email.com,CN=searchhead.abc.com,OU=OU,O=OrgName,L=City,ST=state,C=US" } 2017-01-27T06:57:11.204Z I - [conn6] Assertion: 17322:write to oplog failed: DocTooLargeForCapp 2017-01-27T06:59:34.918Z I - [conn6] Assertion: 17322:write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 160 storageSize:209715200 @ 28575 2017-01-27T06:59:34.972Z I CONTROL [conn6] mongod.exe index_collator_extension+0x1465d3 2017-01-27T06:59:34.972Z I CONTROL [conn6] mongod.exe index_collator_extension+0xfd72f 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0xf05ae 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0xf04d3 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 
2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0x44fb88 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0x10a343 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0x166c01 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0x47e72b 2017-01-27T06:59:34.973Z I CONTROL [conn6] mongod.exe index_collator_extension+0x47e8d2 2017-01-27T06:59:34.973Z I CONTROL [conn6] KERNEL32.DLL BaseThreadInitThunk+0x22 2017-01-27T06:59:34.973Z I CONTROL [conn6] 2017-01-27T06:59:36.029Z W STORAGE [conn6] couldn't make room for record len: 160 in capped ns local.oplog.rs numRecords: 566262 Extent 0 (capExtent) 1:2000 magic: 41424344 extent->ns: local.oplog.rs fr: null lr: 1:c3fbd58 extent->len: 209715200 2017-01-27T06:59:36.029Z I - [conn6] Fatal Assertion 17438 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x1465d3 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0xfd72f 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0xefec7 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe ??? 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x44fb88 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x10a343 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x166c01 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x47e72b 2017-01-27T06:59:36.080Z I CONTROL [conn6] mongod.exe index_collator_extension+0x47e8d2 2017-01-27T06:59:36.080Z I CONTROL [conn6] KERNEL32.DLL BaseThreadInitThunk+0x22 2017-01-27T06:59:36.080Z I CONTROL [conn6] 2017-01-27T06:59:36.080Z I - [conn6] ***aborting after fassert() failure It's been condensed a bit, but essentially there's a lot of spamming of the ???, index_collator and the assertion errors before it finally gives up and shuts down. 
This appears to have happened after replacing our SSL certs, but we have an ES search head with the same config (different cert), and its KV store is functioning normally. We aren't actually using the KV store for anything on this search head yet, and we don't have a lot of experience with it, but I'm not afraid to do a complete re-initialization if that's what it takes to get it functioning again. Any ideas on what's causing this and how to resolve it?
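If nothing on this search head uses the KV store yet, a resync/re-initialization is a reasonable thing to try. A rough sketch of the usual procedure on a standalone search head; back up the kvstore directory first, note that this wipes local KV store data, and check splunk help clean for the exact flags in your version:

cd "C:\Program Files\Splunk\bin"
splunk stop
rem back up C:\Program Files\Splunk\var\lib\splunk\kvstore before continuing
splunk clean kvstore --local
splunk start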

list common uid on two hosts

I am trying to list the uids common to two different hosts. I am using the search below, but it gives a visual of all uids, including the common ones. sourcetype=access $host1$ OR $host2$ error=2* | chart max(O) over host by uid
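An alternative that returns only the uids seen on both hosts (assuming $host1$ and $host2$ expand to host=... terms, as in the search above):

sourcetype=access ($host1$ OR $host2$) error=2* | stats dc(host) AS host_count values(host) AS hosts BY uid | where host_count = 2

dc(host) counts the distinct hosts each uid appears on, so keeping host_count = 2 leaves only the uids common to both.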

Missing index warning even though the index is specified in inputs.conf

Hi, the architecture of the deployment is UF (Windows) -> HF -> Indexer -> SH; only the UF is installed on Windows, all other instances are Linux. The inputs.conf on the UF is below:

[default]
host = XXX-PC
index = main
sourcetype = Win-UF

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

[monitor://C:\temp\temp.log]
disabled = 0

[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[perfmon://FreeDiskSpace]
interval = 10
disabled = 0

[perfmon://Memory]
interval = 10
disabled = 0

[perfmon://LocalNetwork]
interval = 10
disabled = 0

[perfmon://CPUTime
interval = 10
disabled = 0

As you can see, I explicitly configured the default index that all Windows events collected by the UF should go to. From the search head I can successfully get all file monitoring events from the default index, but I can't get any performance events, and I get this warning on the SH:

Search peer XYZ has the following message: Received event for unconfigured/disabled/deleted index=perfmon with source="source::Perfmon:Memory" host="host::XXX-PC" sourcetype="sourcetype::Perfmon:Memory". So far received events from 1 missing index(es).

Why does Splunk still report a missing index even though I specified the default index to be main? And why isn't the event sent to the main index?
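One thing worth ruling out: a [default] stanza only supplies a fallback value, and an explicit index setting for the same perfmon stanza in another app on the forwarder (for example a Windows add-on shipping its own perfmon configuration) would take precedence over it. A hedged suggestion, not a confirmed root cause, using stanza names from the post: set the index explicitly on each perfmon stanza, then confirm which settings win with btool.

[perfmon://Memory]
interval = 10
index = main
disabled = 0

splunk cmd btool inputs list "perfmon://Memory" --debug

Also note that the [perfmon://CPUTime stanza as pasted is missing its closing bracket; if that is in the real file rather than a copy/paste slip, it is worth fixing as well.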

SNMP Modular Inputs

Hi Splunk peeps! I'm trying to set up the SNMP Modular Input to receive SNMP trap data, but unfortunately I'm receiving this error: "Failed to register transport and run dispatcher: bind() for (u'SERVERNAME', 162) failed: [Errno -3] Temporary failure in name resolution snmp_stanza:snmp://SNMPTRAP5" BR, Jarize

Chronogram visualization in Splunk

Hello all, I need to merge multiple graphical views to display the evolution of binary parameters over time. All the graphs should be time-synchronized. Ideally I would like a visualization like a chronogram (timing diagram). Could you please help me with this? Thanks in advance, Cyril

How to get to grips with SPL?

Hi guys, I'm new to Splunk, and we have recently implemented Splunk Enterprise in our environment. We are primarily looking at using the Splunk App for Windows Infrastructure for DPA requirements. We currently have one server doing everything (Windows Server 2012 R2, 12 GB RAM, 12 CPUs), running in a virtualized environment on VMware. I have configured our DCs as universal forwarders and have set up the data inputs, so I can search and query the data along with creating reports and alerts. Can you help me get to grips with SPL? The only other language I know is PowerShell. Is there any documentation, videos, or cheat sheets on SPL? I have watched the Pluralsight courses on Splunk, however these are more of an introduction to Splunk and not as in-depth as I need, although they are useful. Thanks in advance, Jake

Ingesting query logs from Oracle Database

Hello all, I am looking for options/solutions that would allow me to ingest **queries** run on an Oracle database into Splunk. Can anyone help me out with that?

Ingesting Trace Logs into Splunk

I am looking to ingest **SQL Trace Logs** into Splunk. Can anyone direct me on how this could be achieved?

How to write regex to filter events in JSON format?

Hi, kindly help me with this issue. This is my sample log:

{"sim-slot":"0","terminal-vendor":"Vendor","default-sms-app":"own","screen-orientation":"portrait","response-code":"200","secondary-device-type":"","international":"0","subject-region":"Lat=0,Lon=0,Alt=0,Acc=0","locale":"en_US","timestamp":"2017-01-19T13:24:22.986+00:00","user-agent":"IM-client/OMA1.0 model/brand-5.1 RCSAndrd/0.0.0 COMLib/0.00.00.rev00000","evt-client-version":"0.0.0","active-cs-call":"no","sbc-ip":"99.99.9.999:9999","transaction-id":"9aa99a9a-9aa9-99a9-a999-a9a9a999aa00","init-service-tag":"audiocall","description":"call-sip-invite-parent","call-id":"ZZZZZZZZZZZ","app-state":"foreground","module":"cs","terminal-sw-version":"0.0","imsi":"99999999999","remote-peer":"+99999999999","cell-id":"99999","platform":"phone-android","client-version":"3.10.32.rev74692","direction":"outgoing","network-bearer":"CELLULAR_LTE","terminal-model":"Model","sim":"mcc(000),mnc(000)","result":"success","identity":"+999999999999","device-id":"imei(9999999999),tac(99999)"}

I need to filter out events having specifically description:call-sip-invite-parent AND response-code:200. Events having a response-code other than 200 for description:call-sip-invite-parent should be indexed. Kindly help with the regex.
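If the goal is to keep these events out of the index entirely rather than just hide them at search time, the usual mechanism is a nullQueue transform on the parsing tier (heavy forwarder or indexer, not a universal forwarder). A sketch, with the sourcetype name as an assumption:

# props.conf
[rcs:client:json]
TRANSFORMS-drop_invite_200 = drop_invite_200

# transforms.conf
[drop_invite_200]
REGEX = ^(?=.*"description":"call-sip-invite-parent")(?=.*"response-code":"200")
DEST_KEY = queue
FORMAT = nullQueue

The two lookaheads make the match independent of key order in the JSON, so only events containing both description:call-sip-invite-parent and response-code:200 are dropped; everything else, including other response codes for the same description, is indexed normally.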