Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

How can I send Splunk visualization to Slack?

Hi there, is there any way to send a Splunk visualization to a Slack channel besides the Slack notification alert action on Splunkbase?

inputs.conf stanza to monitor only current data after changes are pushed to production (ignoring historical data)?

Hi All, I want to ingest log files from an application server directory using a universal forwarder. The log file names follow the pattern ABC.%d-01-2017.log, for example: ABC.09-01-2017.log, ABC.09-02-2017.log, ABC.09-03-2017.log, ABC.09-04-2017.log. What should the stanza in inputs.conf on my forwarder look like so that I only monitor and ingest today's file? I also have a lot of old files in the same path, and I want to start ingesting files only from the day I push the changes to production (I'm not interested in the historical data). Can you please let me know how to do this without using the "ignoreOlderThan" setting? I did look at https://answers.splunk.com/answers/206950/how-to-configure-inputsconf-on-a-universal-forward.html and am wondering if there is any other way. Thank you in advance!
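One commonly suggested alternative to `ignoreOlderThan` is `followTail`, which starts reading each matched file at its end so pre-existing content is skipped. A minimal sketch (the path, index, and sourcetype below are placeholders, not from the question):

```
# inputs.conf on the universal forwarder -- illustrative values only
[monitor:///opt/app/logs/ABC.*.log]
index = app_logs
sourcetype = abc_app
# followTail = 1 skips content that already exists in a file when the
# forwarder first sees it. Splunk's docs caution that it is easy to
# misuse (it can also skip legitimate data after restarts), so test
# carefully and consider removing it once the initial cutover is done.
followTail = 1
```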

What's the maxSize we can set for the event-processing queues?

On the indexers we have 64 GB of RAM. We have the following configuration:

[queue=AEQ] maxSize = 200MB
[queue=parsingQueue] maxSize = 3600MB
[queue=indexQueue] maxSize = 4000MB
[queue=typingQueue] maxSize = 2100MB
[queue=aggQueue] maxSize = 3500MB

So the processing queues can consume up to 13.4 GB altogether, and currently all the queues are at 100%. We wonder how high we can set them while leaving enough RAM for the Splunk processes. The servers are fully dedicated to Splunk...
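The settings themselves go in server.conf on each indexer; a sketch with purely illustrative sizes:

```
# server.conf on an indexer -- example values, not a recommendation
[queue=parsingQueue]
maxSize = 6GB

[queue=indexQueue]
maxSize = 6GB
```

Note that queues pinned at 100% usually indicate a downstream bottleneck (often indexing I/O or a blocked output) rather than undersized queues, so raising maxSize mainly buys buffering time before blocking; it is worth confirming where the back-pressure originates before consuming more RAM.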

How to use inputlookup and lookup together to filter events and then output a new field with value mappings

I have a lookup abc.csv with the following values:

**header1, header2**
value1a, value2a
value1b, value2b
value1c, value2c
value1d, value2d

I have a base query where I need to **first filter fieldX to only the values contained in the abc.csv header1 column**. I understand that I can do this with something like "[ | inputlookup abc.csv | fields fieldX ]", but there are two problems here: 1. my Splunk fieldX does not have the same name as header1 (and I would like to keep them different); 2. I need to use this lookup after several other pipes, not directly after the base search, because I first have to regex a different field to create the proper mapping values for fieldX. Once the events are filtered, I need to use the same lookup file abc.csv to output a new field with the values from header2. Correct me if I'm wrong, but I believe I have to do it this way because it won't let me just use the lookup command (and forgo inputlookup altogether): most of the values in fieldX aren't present in header1, and I get this error: **"Error in 'lookup' command: Could not find all of the specified lookup fields in the lookup table"**
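Both problems can usually be handled with a rename inside the subsearch plus OUTPUTNEW on the lookup. A sketch (the rex pattern is a placeholder for whatever extraction produces fieldX):

```
base_search
| rex field=some_other_field "(?<fieldX>\w+)"
| search [ | inputlookup abc.csv | rename header1 AS fieldX | fields fieldX ]
| lookup abc.csv header1 AS fieldX OUTPUTNEW header2 AS mapped_value
```

The rename inside the subsearch makes the emitted filter use your field name, and `header1 AS fieldX` on the lookup maps the differing names; events whose fieldX is absent from header1 simply get no mapped_value rather than an error, since the "could not find lookup fields" message refers to field *names* missing from the lookup table, not values.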

How do I use results from one search in a subsearch?

I'm trying to use the results of one query in a subsearch, but I am not getting the results I expected. The first search returns about 2,400 ids, and I want to pull those same ids in the subsearch. The results returned are far fewer than expected (less than 100); the count should be exactly the same as the first query.

index=12_access Server connection terminated | stats count by tid | rename tid AS extid | table extid | join extid [ search index=13_access ] | stats count by extid, resource

Is the query logic wrong? Any ideas would be greatly appreciated.
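Two things commonly cause this: `join` subsearches are truncated (by default around 50,000 rows and a 60-second limit), and `join` is an inner join, so extids with no match in index=13_access drop out entirely. A sketch that avoids join by using the first search as a filter on the second:

```
index=13_access
    [ search index=12_access "Server connection terminated"
      | stats count by tid
      | rename tid AS extid
      | fields extid ]
| stats count by extid, resource
```

The subsearch expands to an `extid=... OR extid=...` filter, so the outer search only returns events for those ids; if the expected invariant is "same count as the first query", also check whether every extid actually appears in index=13_access.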

Replacing search peer in an indexer cluster - Best practices/concerns

Hi Splunk experts, we have a two-site indexer cluster with two indexers per site. The plan is to replace the existing disks on the indexers to allocate more space, one indexer at a time. Our current SF and RF settings are:

multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

Current disk utilization: Site1: indexer1 - 90%, indexer2 - 62%. Site2: indexer1 - 83%, indexer2 - 42%.

Question 1: What is the best way to do this activity? Run **splunk offline --enforce-counts** on one of the indexers, wait for the data to redistribute, complete the drive upgrades, reinstall Splunk, and re-add the peer to the cluster, then repeat on each indexer?
Question 2: During this activity, while the replication factor is not met, does that affect anything?
Question 3: If I take indexer1 (90%) offline, will the space on indexer2 (62%) be sufficient to generate the searchable copies?
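The sequence described in Question 1 is roughly the documented decommission-and-rejoin flow; a sketch of the CLI steps (hostnames, ports, and the secret are placeholders):

```
# On the peer being taken out: waits for its buckets to be replicated
# elsewhere so RF/SF are met before it shuts down.
splunk offline --enforce-counts

# After the disk swap and reinstall, rejoin the cluster from the peer:
splunk edit cluster-config -mode slave -master_uri https://<cluster-master>:8089 \
    -replication_port 9887 -secret <cluster-key>
splunk restart
```

Note that `--enforce-counts` can take a long time on a nearly full peer, and it only succeeds if the remaining peers have the capacity to hold the extra copies, which is exactly the concern raised in Question 3.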

Can I have two apps with two different indexers and indexes for the SAME Windows event log monitor stanza?

I have an app with an inputs.conf that has a stanza for [WinEventLog://Microsoft-Security-Logs] to an index and uses _TCP_ROUTING to make sure the events go to the correct indexer. I have a group that runs their own splunk environment and wants their data sent to their own index/indexers, but I still need it as well. I would like to create a second app with another [WinEventLog://Microsoft-Security-Logs] stanza that sends the same information to their servers as well. I don't see any facility for having two of the same inputs.conf stanzas, even in two different apps. It seems like the configurations are merged and the last variable read takes precedence. Is there a way to do this?
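Since stanzas with the same name are merged, one option is a single stanza that routes to multiple tcpout groups via a comma-separated `_TCP_ROUTING`. A sketch (group names and servers are placeholders):

```
# inputs.conf -- one app is enough; both groups receive the events
[WinEventLog://Microsoft-Security-Logs]
index = wineventlog
_TCP_ROUTING = my_indexers, partner_indexers

# outputs.conf
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:partner_indexers]
server = partner-idx.example.com:9997
```

One caveat: the `index` value travels with the events to both destinations, so if the other group wants a different index name they would need to override it at index time on their side (e.g. with a props/transforms rule).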

Splunk DB Connect: Input and temp tables

We have a very complex query that creates temp tables and declares variables. We can execute the SQL in Splunk and it returns the correct results, but it will not allow us to save the SQL. Is there any way to work around using temp tables with DB Connect inputs?
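One workaround, where the database supports it, is rewriting the temp-table logic as chained common table expressions so the input is a single SELECT statement. A sketch with invented table and column names, purely for illustration:

```sql
-- each CTE stands in for one temp table of the original script
WITH staged AS (
    SELECT id, amount
    FROM orders
    WHERE created_at > '2017-01-01'
),
totals AS (
    SELECT id, SUM(amount) AS total
    FROM staged
    GROUP BY id
)
SELECT * FROM totals;
```

If the logic genuinely cannot be expressed as one statement (e.g. it depends on procedural variables), another common approach is wrapping it in a stored procedure or database view and having the DB Connect input select from that.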

I want to decorate events from forwarder with json using _meta

We have events coming from hosts that need additional information added to them from two configuration files. One file is plain text and contains a label for the set of hosts this particular host belongs to. The second file is JSON and contains metadata about the configuration of these hosts based on the label in the first file. This metadata includes things like the source of the data stream the hosts are handling, the destination the stream is being sent to, the data input rate, and related information. When debugging the system these hosts are part of, this information is essential for understanding what part of the system they serve.

The problem is that _meta in the forwarder's inputs.conf seems to support only very simple single key -> value pairs. What we need is a more complex hash with nested keys and their associated values. Can _meta handle this? If so, what would the inputs.conf file look like? If not, how can I decorate events with this metadata, which is crucial for understanding our system?
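As far as I know, _meta is indeed limited to flat, space-separated `key::value` pairs; it cannot carry nested JSON. Two workarounds: flatten the nesting into dotted key names at the forwarder, or carry only the label in _meta and enrich with the full JSON at search time via a lookup keyed on that label. A sketch of the flattened form (path and key names are placeholders):

```
# inputs.conf on the forwarder -- flat key::value pairs only
[monitor:///var/log/myapp]
_meta = host_group::groupA stream.source::kafka01 stream.dest::hdfs01 stream.input_rate::5000
```

The lookup approach keeps the forwarder configuration small and lets the JSON metadata change without redeploying inputs.conf, at the cost of the enrichment only existing at search time.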

Install error - LXC - Splunk Enterprise/Light - failed with code '1'

Running either Splunk Enterprise or Light for the first time, I receive the error below. The command to start Splunk is: /opt/splunk/bin/splunk start

Console output:

Splunk> All batbelt. No tights.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Creating: /opt/splunk/var/lib/splunk
Creating: /opt/splunk/var/run/splunk
Creating: /opt/splunk/var/run/splunk/appserver/i18n
Creating: /opt/splunk/var/run/splunk/appserver/modules/static/css
Creating: /opt/splunk/var/run/splunk/upload
Creating: /opt/splunk/var/spool/splunk
Creating: /opt/splunk/var/spool/dirmoncache
Creating: /opt/splunk/var/lib/splunk/authDb
Creating: /opt/splunk/var/lib/splunk/hashDb
New certs have been generated in '/opt/splunk/etc/auth'.
Checking critical directories... Done
Checking indexes...
homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
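The "on unusable filesystem" check means splunkd does not recognize or trust the filesystem type backing its index paths, which comes up with some container and network-backed storage. A workaround often reported for this exact error is the following launch setting; this is a sketch of that reported workaround, not official guidance, and should only be used if you are confident the filesystem honors file locking:

```
# $SPLUNK_HOME/etc/splunk-launch.conf
OPTIMISTIC_ABOUT_FILE_LOCKING = 1
```

The cleaner fix, where possible, is to back /opt/splunk/var with a filesystem type Splunk supports (e.g. ext4 or xfs) inside the LXC container.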

I have a problem with DB2 connectivity with Splunk DB Connect V2 (2.3.0)

When I try to connect, the message below appears. How can I fix it?

com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: [jcc][t4][2043][11550][4.14.137] Exception java.net.ConnectException: Error opening socket to server /172.31.19.149 on port 50,000 with message: Connection refused. ERRORCODE=-4499, SQLSTATE=08001

For info: Splunk ver. 6.4.2; Splunk DB Connect V2 (2.3.0); Driver: DB2 (unsupported) 4.14 installed. Server: CentOS 5.8, IBM DB2 9.7.
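"Connection refused" means nothing accepted the TCP connection on that port, so this is usually a network or listener issue rather than a DB Connect one. A few checks worth running (commands are a sketch; adjust to your environment):

```
# From the Splunk server: is anything reachable on the DB2 port?
nc -vz 172.31.19.149 50000

# On the DB2 host: is the instance configured for TCP and listening?
db2 get dbm cfg | grep SVCENAME
netstat -an | grep 50000
```

If nothing is listening, enable TCP/IP in the DB2 instance (SVCENAME plus DB2COMM=TCPIP) and restart it; if it listens locally but the remote check fails, look at the firewall between the hosts.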

Java Modular Input - javax.xml.stream.XMLStreamException

Hi, I wrote a Java modular input with the Splunk SDK 1.6.2. The input reads an HTTP request (only its query part) and, if the read succeeds, it must send an OK response. The test in Eclipse works 100%. In the _internal index I get this output:

...
Creating ServerStarter Object done
Creating Thread Object done
Starting Thread done   (log messages from my Java class)
**javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1**   (log message from XMLStream -- my PROBLEM :-( )
Start HTTP Server (IO) 192.168.1.10   (log message from my Java class)
...

The problem: the modular input does not recognize the request, and secondly it does not answer the request. Any ideas?

My inputs.conf.spec:

```
[MyScheme://]
ipadresse =
port =
```

My inputs.conf:

```
[MyScheme://http_input_1]
ipadresse = 192.168.1.10
port = 54321
index = test
sourcetype = test_push
```

My Java classes are:

```java
package com.default;

import java.io.IOException;
import javax.xml.stream.XMLStreamException;
import com.splunk.modularinput.Argument;
import com.splunk.modularinput.EventWriter;
import com.splunk.modularinput.InputDefinition;
import com.splunk.modularinput.MalformedDataException;
import com.splunk.modularinput.Scheme;
import com.splunk.modularinput.Script;
import com.splunk.modularinput.SingleValueParameter;
import com.splunk.modularinput.ValidationDefinition;

public class ModularInput extends Script {

    public static void main(String[] args) {
        new ModularInput().run(args);
    }

    @Override
    public Scheme getScheme() {
        Scheme scheme = new Scheme("MyScheme");
        scheme.setDescription("Read push messages from my Sourece");
        scheme.setUseExternalValidation(true);
        scheme.setUseSingleInstance(true);

        Argument ipadresse = new Argument("ipadresse");
        ipadresse.setDataType(Argument.DataType.STRING);
        ipadresse.setRequiredOnCreate(true);
        scheme.addArgument(ipadresse);

        Argument port = new Argument("port");
        port.setDataType(Argument.DataType.NUMBER);
        port.setRequiredOnCreate(true);
        scheme.addArgument(port);

        return scheme;
    }

    @Override
    public void validateInput(ValidationDefinition definition) throws Exception {
        String ipadresse = ((SingleValueParameter) definition.getParameters().get("ipadresse")).getValue();
        int port = ((SingleValueParameter) definition.getParameters().get("port")).getInt();
        if (ipadresse == null || ipadresse.isEmpty()) {
            throw new Exception("The Paramter ipadresse must be set.");
        }
        if (port == Integer.MIN_VALUE) {
            throw new Exception("The Paramter port must be set.");
        }
    }

    @Override
    public void streamEvents(InputDefinition inputs, EventWriter ew)
            throws MalformedDataException, XMLStreamException, IOException {
        for (String inputName : inputs.getInputs().keySet()) {
            ew.log("INFO", "streamEvents for MyScheme Input " + inputName);
            String ipadresse = ((SingleValueParameter) inputs.getInputs().get(inputName).get("ipadresse")).getValue();
            int port = ((SingleValueParameter) inputs.getInputs().get(inputName).get("port")).getInt();
            String splunkHost = inputs.getServerHost();
            String splunkUri = inputs.getServerUri();
            String[] splunkUriArray = splunkUri.split(":");
            int splunkPort = 8089;
            try {
                ew.log("INFO", "Parse URI: " + splunkUri);
                splunkPort = Integer.parseInt(splunkUriArray[2]);
            } catch (NumberFormatException nfe) {
                ew.log("WARN", "Can't parse port value from URI String!! Use default Port 8089!");
            }
            ew.log("INFO", "read ip-Adress: " + ipadresse + " port: " + port + " Splunk Port: " + splunkPort);
            HTTPServer gZR = new HTTPServer(ew, inputName, ipadresse, port, splunkHost, splunkPort);
            ew.log("INFO", "Creating ServerStarter Object done");
            Thread t = new Thread(gZR);
            ew.log("INFO", "Creating Thread Object done");
            t.start();
            ew.log("INFO", "Starting Thread done");
        }
    }
}
```

The HTTPServer code:

```java
package com.default;

import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import org.apache.logging.log4j.Level;
import com.splunk.modularinput.EventWriter;
import com.sun.net.httpserver.HttpServer;

public class HTTPServer implements Runnable {

    private static final int backlog = 3;
    private EventWriter ew;
    private String inputName;
    private String ipadresse;
    private int port;
    private String splunkHost;
    private int splunkPort;

    public HTTPServer(EventWriter ew, String inputName, String ipadresse, int port,
            String splunkHost, int splunkPort) {
        super();
        this.ew = ew;
        this.inputName = inputName;
        this.ipadresse = ipadresse;
        this.port = port;
        this.splunkHost = splunkHost;
        this.splunkPort = splunkPort;
    }

    @Override
    public void run() {
        ew.log(Level.INFO.name(), "Start HTTP Server (IO) " + this.ipadresse);
        HttpServer server = null;
        InetSocketAddress inetSocketAddress = new InetSocketAddress(ipadresse, port);
        try {
            ew.log(Level.INFO.name(), "try create HttpServer");
            server = HttpServer.create(inetSocketAddress, backlog);
            server.createContext("/", new DataHandler(this.ew, this.inputName, this.ipadresse,
                    this.port, this.splunkHost, this.splunkPort));
            ew.log(Level.INFO.name(), "Start HttpServer");
            server.start();
            ew.log(Level.INFO.name(), "Started HttpServer");
        } catch (BindException e) {
            ew.log(Level.ERROR.name(), "Port bound " + e.getMessage());
        } catch (IOException e) {
            ew.log(Level.ERROR.name(), e.getMessage());
        } catch (Exception e) {
            ew.log(Level.ERROR.name(), e.getMessage());
        }
    }
}
```

single value with trends

Hi all, I'd like to show trends in single value panels. Following the example in the Splunk 6.x Dashboard Examples app, I used a timechart command:

my_search | timechart count bins=2

The problem is that this way the panel shows different information from all the other panels in my dashboard: with "last hour" set in the time picker, the other panels show events from the last hour, while the single value panels show events from the last half hour! What is the best way to proceed: set a doubled time range in the single value panels, use timewrap, or another solution? How can I set a doubled time range in the single value panels? I also tried the timewrap command, but it doesn't let me manage the delta time. Does anyone have an idea how to solve this problem? Thank you in advance. Bye. Giuseppe
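One way to get whole-hour bins is to give the single value panel its own snapped time range covering two full hours, so the trend compares the last complete hour against the one before it. A sketch (assuming the panel can carry its own earliest/latest instead of inheriting the picker):

```
my_search earliest=-2h@h latest=@h
| timechart span=1h count
```

With `span=1h` over a two-hour snapped window, both bins are full hours, so the displayed value matches the granularity of the other panels; the trade-off is that this panel no longer follows the dashboard time picker.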

Able to see the system logs but cannot see the remote logs (in the same server) where the log files are installed.

My log files are installed on server "A". I am using the free Splunk version 6.6.3. I can see the system files of "A" but cannot see the remote log files on "A" (the path remains the same). Please help!

Java Modular Input - javax.xml.stream.XMLStreamException

Hi, I wrote a Java modular input with the Splunk SDK 1.6.2. The input reads an HTTP request (only its query part) and, if the read succeeds, it must send an OK response. The test in Eclipse works 100%. In the _internal index I get this output:

...
Creating ServerStarter Object done
Creating Thread Object done
Starting Thread done   (log messages from my Java class)
**javax.xml.stream.XMLStreamException: No element was found to write: java.lang.ArrayIndexOutOfBoundsException: -1**   (log message from XMLStream -- my PROBLEM :-( )
Start HTTP Server (IO) 192.168.1.10   (log message from my Java class)
...

In the input it is not possible to write the event to the event queue. I use the method ew.writeEvent() and also tried the method ew.synchronizedWriteEvent() from the Splunk SDK. The stack trace is:

```
java.lang.ArrayIndexOutOfBoundsException: -2
com.sun.xml.internal.stream.writers.XMLStreamWriterImpl$ElementStack.peek(XMLStreamWriterImpl.java:2010)
com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.closeStartTag(XMLStreamWriterImpl.java:1512)
com.sun.xml.internal.stream.writers.XMLStreamWriterImpl.writeStartElement(XMLStreamWriterImpl.java:1229)
com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:131)
com.splunk.modularinput.EventWriter.synchronizedWriteEvent(EventWriter.java:115)
com.default.DataHandler.handle(DataHandler.java:76)
com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
```

My inputs.conf.spec:

```
[MyScheme://]
ipadresse =
port =
```

My inputs.conf:

```
[MyScheme://http_input_1]
ipadresse = 192.168.1.10
port = 54321
index = test
sourcetype = test_push
```

My Java classes are:

```java
package com.default;

import java.io.IOException;
import javax.xml.stream.XMLStreamException;
import com.splunk.modularinput.Argument;
import com.splunk.modularinput.EventWriter;
import com.splunk.modularinput.InputDefinition;
import com.splunk.modularinput.MalformedDataException;
import com.splunk.modularinput.Scheme;
import com.splunk.modularinput.Script;
import com.splunk.modularinput.SingleValueParameter;
import com.splunk.modularinput.ValidationDefinition;

public class ModularInput extends Script {

    public static void main(String[] args) {
        new ModularInput().run(args);
    }

    @Override
    public Scheme getScheme() {
        Scheme scheme = new Scheme("MyScheme");
        scheme.setDescription("Read push messages from my Sourece");
        scheme.setUseExternalValidation(true);
        scheme.setUseSingleInstance(true);

        Argument ipadresse = new Argument("ipadresse");
        ipadresse.setDataType(Argument.DataType.STRING);
        ipadresse.setRequiredOnCreate(true);
        scheme.addArgument(ipadresse);

        Argument port = new Argument("port");
        port.setDataType(Argument.DataType.NUMBER);
        port.setRequiredOnCreate(true);
        scheme.addArgument(port);

        return scheme;
    }

    @Override
    public void validateInput(ValidationDefinition definition) throws Exception {
        String ipadresse = ((SingleValueParameter) definition.getParameters().get("ipadresse")).getValue();
        int port = ((SingleValueParameter) definition.getParameters().get("port")).getInt();
        if (ipadresse == null || ipadresse.isEmpty()) {
            throw new Exception("The Paramter ipadresse must be set.");
        }
        if (port == Integer.MIN_VALUE) {
            throw new Exception("The Paramter port must be set.");
        }
    }

    @Override
    public void streamEvents(InputDefinition inputs, EventWriter ew)
            throws MalformedDataException, XMLStreamException, IOException {
        for (String inputName : inputs.getInputs().keySet()) {
            ew.log("INFO", "streamEvents for MyScheme Input " + inputName);
            String ipadresse = ((SingleValueParameter) inputs.getInputs().get(inputName).get("ipadresse")).getValue();
            int port = ((SingleValueParameter) inputs.getInputs().get(inputName).get("port")).getInt();
            String splunkHost = inputs.getServerHost();
            String splunkUri = inputs.getServerUri();
            String[] splunkUriArray = splunkUri.split(":");
            int splunkPort = 8089;
            try {
                ew.log("INFO", "Parse URI: " + splunkUri);
                splunkPort = Integer.parseInt(splunkUriArray[2]);
            } catch (NumberFormatException nfe) {
                ew.log("WARN", "Can't parse port value from URI String!! Use default Port 8089!");
            }
            ew.log("INFO", "read ip-Adress: " + ipadresse + " port: " + port + " Splunk Port: " + splunkPort);
            HTTPServer gZR = new HTTPServer(ew, inputName, ipadresse, port, splunkHost, splunkPort);
            ew.log("INFO", "Creating ServerStarter Object done");
            Thread t = new Thread(gZR);
            ew.log("INFO", "Creating Thread Object done");
            t.start();
            ew.log("INFO", "Starting Thread done");
        }
    }
}
```

The HTTPServer code:

```java
package com.default;

import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import org.apache.logging.log4j.Level;
import com.splunk.modularinput.EventWriter;
import com.sun.net.httpserver.HttpServer;

public class HTTPServer implements Runnable {

    private static final int backlog = 3;
    private EventWriter ew;
    private String inputName;
    private String ipadresse;
    private int port;
    private String splunkHost;
    private int splunkPort;

    public HTTPServer(EventWriter ew, String inputName, String ipadresse, int port,
            String splunkHost, int splunkPort) {
        super();
        this.ew = ew;
        this.inputName = inputName;
        this.ipadresse = ipadresse;
        this.port = port;
        this.splunkHost = splunkHost;
        this.splunkPort = splunkPort;
    }

    @Override
    public void run() {
        ew.log(Level.INFO.name(), "Start HTTP Server (IO) " + this.ipadresse);
        HttpServer server = null;
        InetSocketAddress inetSocketAddress = new InetSocketAddress(ipadresse, port);
        try {
            ew.log(Level.INFO.name(), "try create HttpServer");
            server = HttpServer.create(inetSocketAddress, backlog);
            server.createContext("/", new DataHandler(this.ew, this.inputName, this.ipadresse,
                    this.port, this.splunkHost, this.splunkPort));
            ew.log(Level.INFO.name(), "Start HttpServer");
            server.start();
            ew.log(Level.INFO.name(), "Started HttpServer");
        } catch (BindException e) {
            ew.log(Level.ERROR.name(), "Port bound " + e.getMessage());
        } catch (IOException e) {
            ew.log(Level.ERROR.name(), e.getMessage());
        } catch (Exception e) {
            ew.log(Level.ERROR.name(), e.getMessage());
        }
    }
}
```

This is the DataHandler:

```java
package com.default;

import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Date;
import java.util.Objects;
import java.util.stream.Collectors;
import org.apache.logging.log4j.Level;
import com.splunk.modularinput.Event;
import com.splunk.modularinput.EventWriter;
import com.splunk.modularinput.MalformedDataException;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

public class DataHandler implements HttpHandler {

    private EventWriter ew;
    String inputName;

    public DataHandler(EventWriter ew, String inputName) {
        super();
        this.ew = ew;
        this.inputName = inputName;
    }

    @Override
    public void handle(HttpExchange httpExchange) throws IOException {
        ew.log(Level.INFO.name(), "DataHandler handle");
        URI requestURI = httpExchange.getRequestURI();
        if (requestURI != null) {
            ew.log(Level.INFO.name(), "requestURI: " + requestURI.toString());
            String query = requestURI.getQuery();
            String queryDecode = "";
            try {
                queryDecode = URLDecoder.decode(query, StandardCharsets.UTF_8.name());
                ew.log(Level.INFO.name(), "QueryDecode: " + queryDecode);
            } catch (UnsupportedEncodingException e) {
                ew.log(Level.ERROR.name(), e.getMessage());
            }
            if (query != null && !query.isEmpty()) {
                StringBuilder sb = new StringBuilder();
                sb.append(requestURI.getPath());
                sb.append(query);
                ew.log(Level.INFO.name(), "Event creating");
                Event event = new Event();
                Date time = new Date(System.currentTimeMillis());
                event.setTime(time);
                event.setData(sb.toString());
                try {
                    ew.log(Level.INFO.name(), "Start write Event " + event.getData());
                    ew.synchronizedWriteEvent(event);
                    ew.log(Level.INFO.name(), "Event writed");
                } catch (MalformedDataException e) {
                    ew.log(Level.ERROR.name(), " Write Event: " + e.getMessage());
                } catch (Exception e) {
                    ew.log(Level.ERROR.name(), " Other Exception: " + e + " "
                            + Arrays.asList(e.getStackTrace()).stream().map(Objects::toString)
                                    .collect(Collectors.joining("\n")));
                }
                ew.log(Level.INFO.name(), "Http Status 200");
                String text = "OK";
                byte[] response = text.getBytes();
                httpExchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length);
                httpExchange.getResponseBody().write(response);
                ew.log(Level.INFO.name(), "Send HTTP OK done");
            } else {
                String text = "Bad Request";
                byte[] response = text.getBytes();
                httpExchange.sendResponseHeaders(HttpURLConnection.HTTP_BAD_REQUEST, response.length);
                httpExchange.getResponseBody().write(response);
                ew.log(Level.INFO.name(), "Send HTTP BAD REQUEST done");
            }
        } else {
            ew.log(Level.ERROR.name(), "HTTP URI NULL");
        }
    }
}
```

serial number for chart

How do I get a serial number column for a chart in Splunk, e.g. S_no 1, 2, 3, 4 **in a chart**? Thanks in advance
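The usual way to number rows is `streamstats count`, which assigns each result its position in the result set. A sketch (field names after the count are placeholders for your own columns):

```
your_search
| streamstats count AS S_no
| table S_no, field1, field2
```

Because streamstats numbers events in the order they arrive, sort first if the serial numbers should follow a particular ordering.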

Is there doc on how to migrate a SH deployer and a CM to new servers?

Hi, I've been informed that my existing search-head deployer and cluster master (two different servers) need to get moved to new servers. I can't find any doc on how to do this procedure. Has anyone done it?
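I'm not aware of a single doc page for this, but the commonly described approach is: stand up the new server, copy the old instance's $SPLUNK_HOME/etc to it (so the cluster master keeps its bucket state, and the deployer keeps shcluster/ bundles), then repoint the members. A sketch of the repointing step, assuming a placeholder hostname:

```
# server.conf on each peer and search head, after the new cluster
# master is up with a copy of the old one's $SPLUNK_HOME/etc
[clustering]
master_uri = https://new-cm.example.com:8089
```

Keeping the old hostname via DNS (a CNAME to the new box) avoids touching the members at all, and is often the least disruptive route if your environment allows it.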

Splunk Enterprise support for RHEL 7

To be more specific, does anyone know when there will be full support for RHEL 7? With services being moved over to systemd, Splunk is still using the deprecated init.d script. I have moved it over to a systemd service script, and running it manually will stop, start, and restart the service; but if I update an application and restart Splunk through the browser, it just stops the service. You would think that, since almost every Linux OS has been moving to systemd for years now, Splunk would update its software to recognize and support both.
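For reference, a minimal hand-written unit of the kind described (this is a sketch, not an official Splunk-provided file; paths assume the default /opt/splunk install):

```
# /etc/systemd/system/splunk.service
[Unit]
Description=Splunk Enterprise
After=network.target

[Service]
Type=forking
ExecStart=/opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
ExecStop=/opt/splunk/bin/splunk stop
RestartSec=30
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

The browser-initiated restart behaves as observed because splunkd stops and relaunches itself outside systemd's knowledge: with Type=forking, systemd sees the main process exit and marks the service stopped. Some people paper over this with `Restart=always`, but that also fights deliberate stops, so it needs care.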

BlueCoat ThreatPulse logs

Greetings - I'm using BlueCoat ThreatPulse as a web filter ('cloud' based). The only method to pull their logs is via API. However, there isn't an app for ThreatPulse (and the ProxySG uses syslog). I've tinkered with the RESTapi app but haven't had any luck bringing in data. Is there anyone here that's used the RESTapi with ThreatPulse or have any other suggestions on getting this data into Splunk? Thanks, Jason

Kv store update problem

Hi all, We have about 15 KV stores running OK, but sometimes I detect an update problem: we don't have all the filtered events there, we lose some, and we have to reload the whole KV store to fix it and get a fresh, updated version again. I don't know why we have this problem. I can see this kind of error in the _internal logs:

2017-09-19T12:17:33.254Z I COMMAND [conn148894] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1505820444000|496 }, $or: [ { ns: { $regex: "^s_.*" }, $or: [ { op: { $in: [ "i", "u" ] }, o._user: "nobody" }, { op: "d" }, { op: "c", $or: [ { o.create: { $exists: 1 } }, { o.createDatabase: { $exists: 1 } }, { o.drop: { $exists: 1 } }, { o.dropDatabase: { $exists: 1 } } ] } ] }, { op: "c", ns: "admin.$cmd", o.renameCollection: { $exists: 1 }, o.to: { $regex: "^s_.*" } }, { ts: Timestamp 1505820444000|496 } ] } cursorid:27155961562 ntoreturn:0 keyUpdates:0 writeConflicts:0 **exception: getMore executor error: UnknownError no details available code:17406** numYields:0 nreturned:1 reslen:103 locks:{ Global: { acquireCount: { r: 4 } }, MMAPV1Journal: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { R: 2 }, acquireWaitCount: { R: 2 }, timeAcquiringMicros: { R: 3536 } } } 1ms

Have you seen this before? We update the KV stores every 5 minutes, and a couple of them are very big (770K+ lines). Perhaps Splunk/MongoDB is still updating a KV store when we try to update it again? Thanks a lot for your comments. Javier.

