Channel: Questions in topic: "splunk-enterprise"

Twitter data to Splunk

Hi team, with the new API for Twitter, I've been having difficulty trying to set up a connection from my host to Twitter.

**Code:** *Note: the actual values for OAUTH_1_Client_Key_VALUE and OAUTH_1_Access_Token_VALUE are hidden for privacy purposes.*

    curl --request POST \
      --url 'https://api.twitter.com/1.1/account_activity/all/SplunkAPI/webhooks.json?url=https%3A%2F%2Fsplunk.yooza.tcnz.net' \
      --header 'authorization: OAuth oauth_consumer_key="OAUTH_1_Client_Key_VALUE", oauth_nonce="GENERATED", oauth_signature="GENERATED", oauth_signature_method="HMAC-SHA1", oauth_timestamp="GENERATED", oauth_token="OAUTH_1_Access_Token_VALUE", oauth_version="1.0"'

**Configuration:**

*In Splunk:* Data inputs » REST » Twitter

**Endpoint URL:** https://api.twitter.com/1.1/account_activity/all/SplunkAPI/webhooks.json

**URL Arguments:** follow=423424432^stall_warnings=true

*In Twitter:*

**App Name:** SplunkAPI

**Website URL:** https://splunk.yooza.tcnz.net (Is this needed in technical terms? We are using a Splunk server which won't be available for internet connections.)

QUESTION: Is there a way to connect Twitter to the server? Every time I run the curl command on the server, I get `curl: (7) couldn't connect to host`. What have I done wrong? Thank you for helping! :)

Alerting when consumer stopped

Hi, I have an async producer/consumer pair, each logging something like:

    producer: log.info("id=123, status=produced");
    consumer: log.info("id=123, status=consumed");

where id is the transaction ID. I want to get alerted only when the producer is producing and, for some reason, the consumer has stopped consuming. I wrote something like:

    index="myindex" sourcetype="mysourcetype"
    | transaction id startswith=(status="produced") endswith=(status="consumed") keepevicted=true maxevents=10
    | stats count by closed_txn

Then I ran both producer and consumer simultaneously and observed Splunk showing counts for both closed_txn=0 and closed_txn=1. My assumption was that I should only see closed_txn=1, since both consumer and producer are running. Later I killed the consumer and let the producer keep running. I still get counts for both closed_txn=1 and closed_txn=0, whereas I thought Splunk should only report 0, as the transactions fail because there is no log from the consumer. I am not sure if I am doing this right. In summary, I want to get alerted when there is production but no consumption. I don't want to get alerted when there is no production.
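
A minimal sketch of one way to express "produced but never consumed" without `transaction`, counting both statuses per id with `stats` (index, sourcetype, and field names follow the question; it assumes id and status are extracted as fields, and the 10-minute window is an assumption):

    index="myindex" sourcetype="mysourcetype" earliest=-10m
    | stats count(eval(status="produced")) as produced, count(eval(status="consumed")) as consumed by id
    | where produced > 0 AND consumed = 0

An alert on "number of results > 0" over this search fires only when at least one id was produced and never consumed, and stays silent when nothing is produced at all.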

Mac OS X intermittent weirdness

OK, so I'm new to Splunk. I got it installed and working via splunk-7.2.6-c0bf0f679ce9-macosx-10.11-intel.dmg, uploaded 3 files to it, and I'm getting intermittent weirdness: I often get no results at all (even when searching over all time). Sometimes it works, although typically I have to do a reload; when attempting to run a search a second time, Splunk fails. Is this normal on the Mac? Or am I doing something really silly? In the meantime I might try a Linux install.

Splunk MINT experience

Does anyone have experience with Splunk MINT? Is it a good approach for getting mobile device app data into Splunk? Has anyone faced any kind of performance issues with the MINT SDK? We are looking to implement it, so any inputs or reviews on this app would be helpful.

Group By Replace

Hello, I have several things that come in via different platforms: Android (watch, phone, tablet), iOS (watch, phone, tablet), and Web. For counting purposes I just need to know the platform (for now). I was wondering if there is any way to group my counts by my replaced values.

    index=blah source=blah earliest=-16m@m latest=-1m@m
    | stats count(eval(Status=0 OR Status=1)) as Now by Platform
    | replace android* with Android, *Web* with Web, ip* with iOS
    | table Platform, Now

As of now my results look like:

    Platform    Now
    android     96
    android     1
    android     1306
    iOS         3000
    iOS         45
    iOS         2
    Web         1286
    Web         956

What I would like:

    Platform    Now
    Android     1403
    iOS         3047
    Web         2242

Thanks in advance for any help.
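
A minimal sketch of one way to get the merged counts: normalize Platform with `eval`/`case` *before* the `stats`, so the `by` clause already groups on the cleaned value (index, source, and field names are taken from the question; the `like` patterns mirror the original `replace` wildcards):

    index=blah source=blah earliest=-16m@m latest=-1m@m
    | eval Platform=case(like(Platform, "android%"), "Android", like(Platform, "ip%"), "iOS", like(Platform, "%Web%"), "Web", true(), Platform)
    | stats count(eval(Status=0 OR Status=1)) as Now by Platform
    | table Platform, Now

`replace` after `stats` only renames the values and does not re-aggregate the rows, which is why the original search returns several rows per platform.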

Unable to Generate Pages

I am getting 0 pages when I run Generate Sessions in the setup of the Web Analytics Add-On. Sessions generated just fine (by the looks of it). I have edited props.conf and eventtypes.conf as per jbjerke's reply to my previous question about ms:iis:auto here: https://answers.splunk.com/answers/727931/is-it-easy-to-ingest-advanced-iis-logs-into-the-sp.html Just hoping someone (jbjerke?) can help me out with generating pages for the ms:iis:auto source type. Thanks.

Setting up a python virtual environment for developing Splunk applications

I am new to Splunk and want to write my own MLTK classes/functions. I want to test my code locally in Anaconda or PyCharm. Therefore I would like to set up a virtual Python environment that is identical to the one used in Splunk, something like the output of a `pip freeze`. I have searched the internet, but could not find a list of Python packages with the versions needed to set up this environment. Any ideas where I can find that?

Splunk Enterprise Security / OpsGenie integration issue

Hello, I'd like to know if anyone has been able to integrate OpsGenie with the latest versions of Splunk (7.2.*X*) and/or the latest version of Splunk Enterprise Security (5.2.*X*). We use Splunk 7.2.5 and Splunk Enterprise Security 5.2.2, and we'd like to automatically create an alert in OpsGenie whenever an alert is created in Splunk ES. We've installed the [OpsGenie Splunk app][1], but it looks pretty obsolete (last version published Oct. 31, 2017) and doesn't seem to work correctly:

- In Splunk you can add OpsGenie as a response action, but you can't manage any detail, like alert priority, etc.
- In Splunk Enterprise Security there is no OpsGenie action in the response action list at all.

Do you have any advice? Thanks for the help. Alex.

[1]: https://splunkbase.splunk.com/app/3759/

File Integrity Monitoring using Splunk

As Splunk is being recognized as a strategic tool, more and more requests are coming in asking whether Splunk can be used for one thing or another. This time the query was: "Can Splunk be used as, or replace, a File Integrity Monitoring (FIM) tool?" The idea is that since the Splunk UF is installed on the majority of hosts/clients, rather than indexing the whole file, the UF only needs to send information on whether the file has been modified or not (e.g. whether the checksum changed). Personally, I was thinking of writing it as an app which should cater for Windows/Linux etc. But I wanted to check whether you have done anything similar to replace professional FIM tools?

Splunk Add-on for ServiceNow Madrid version

Hello, I'm looking to integrate Splunk with ServiceNow in order to pull CMDB, incident, and change information into Splunk. We're currently on the Madrid version of ServiceNow, and the add-on documentation states it is compatible only with Helsinki, Istanbul, Jakarta, and Kingston. I know for sure this add-on works with London (tested), but has anyone made it work with the Madrid version? Best regards, Andrei

Is it possible using rex to create field names that contain a period (.)?

Hello! I'm parsing strings using `rex` and I'd like to define a set of field names that contain the period (.) character. As an example, I'd like to create three fields: `AI1.1.1`, `AI1.1.2`, and `AI1.1.3`. When using the `rex` command, however, I have only managed to create the field names without the period character. Here is some run-anywhere code:

    | makeresults
    | eval string = "2,4,2"
    | rex field=string "(?<AI111>[\d]*),(?<AI112>[\d]*),(?<AI113>[\d]*)"

If I replace the `rex` command with this one:

    | rex field=string "(?<AI1.1.1>[\d]*),(?<AI1.1.2>[\d]*),(?<AI1.1.3>[\d]*)"

then it no longer works. I tried escaping the period, but I cannot get it working. Is it possible to do what I'm looking to do? Thank you and best regards, Andrew
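
Regex named capture groups only allow letters, digits, and underscores, so a period cannot appear inside `(?<name>...)`. A minimal sketch of a workaround, assuming the run-anywhere example above: extract with underscore names, then `rename` to the dotted names (quoting them, since they contain periods):

    | makeresults
    | eval string = "2,4,2"
    | rex field=string "(?<AI1_1_1>\d*),(?<AI1_1_2>\d*),(?<AI1_1_3>\d*)"
    | rename AI1_1_1 as "AI1.1.1", AI1_1_2 as "AI1.1.2", AI1_1_3 as "AI1.1.3"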

Extract Area Code From Phone Numbers

Hi, I wonder whether someone may be able to help me please. I have a list of telephone numbers of varying length, but all with an area code at the beginning, e.g. 44 for the UK. What I'm trying to do is put together a regex which looks to see if the first three characters match 350; if they do, extract those 3 digits into my new field, or if they match 44, extract those 2 digits into the same field. This is what I've put together so far:

    | rex field=telno "350?(?<area_code>\d{3})|44?(?<area_code>\d{2})"

I've clearly gone wrong, because Splunk is returning an "unrecognised character" error. Could someone possibly look at this please and offer some guidance on where I've gone wrong? Many thanks and kind regards, Chris
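
A minimal sketch of an alternative that avoids repeating the group name (which `rex` rejects): anchor to the start of the field and put both codes inside a single alternation, so one `area_code` group captures whichever prefix matches (the field name `telno` follows the question; the sample value is made up):

    | makeresults
    | eval telno="3501234567"
    | rex field=telno "^(?<area_code>350|44)"

One note on the original pattern: `350?` means "35 followed by an optional 0", not "optionally 350", which is another reason to move the alternation inside the group.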

Port 443 not returned?

Hello there, thanks so much for the new version of the app, as it now takes multiple ports into account! (And thanks also for your other apps and blog posts, by the way!) There is just one little thing that does not work for me (or that I do not understand correctly): it seems that I can't get port 443 listed as a result for any tested IP that has 443 open. For instance, if I query IP 151.80.25.159 on the Shodan website, I get ports 22, 80 & 443. But when querying the same IP from Splunk I only get ports 22 & 80, not 443. Any hint?

add fields after a stats count

In my search I use a couple of stats counts. The problem is that after these commands I am missing other fields that I want to use later, for example `_time`. I don't need a count for these fields, so how can I make sure they are still available later on in the search? My search is, for example:

    index=* "message.Origin"=blabla source="something"
    | stats count(eval('logger'="test1")) as "example", count(eval('logger'="test2")) as "example2" by ID

After the stats I only have the fields example, example2, and ID.
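
`stats` only keeps the fields it aggregates or groups by, so anything else has to be carried through explicitly. Two common options, sketched against the search above: add an aggregation for the field you need (e.g. the latest `_time` per ID), or switch to `eventstats`, which adds the counts to every event while keeping all original fields:

    index=* "message.Origin"=blabla source="something"
    | stats count(eval('logger'="test1")) as example, count(eval('logger'="test2")) as example2, max(_time) as _time by ID

or:

    index=* "message.Origin"=blabla source="something"
    | eventstats count(eval('logger'="test1")) as example, count(eval('logger'="test2")) as example2 by ID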

Mac OS X intermittent weirdness: High Sierra, Splunk Enterprise install

So I'm really new to Splunk. I just got an install up and running and am trying to run through the tutorials etc. I've uploaded some data files. The situation is this:

- Sometimes Splunk will work.
- Sometimes it fails, even when I'm searching over all time.
- Sometimes the search that worked won't work a second time.
- Sometimes reloading helps.
- It fails more than it works.

Am I doing something really silly? Or do we have a buggy install or something? Mac OS X 10.13.6, splunk-7.2.6-c0bf0f679ce9-macosx-10.11-intel.dmg. Any thoughts greatly appreciated. Will try on Linux next. Thanks

Decouple a process in Windows

So, I want to detach a process in Windows using Python code. What I want to do is: I am spawning a process from Splunk which calls some REST APIs and gets some data (a scripted input). Now, when Splunk is stopped, I still want to collect the data. I tried CreateProcess() with the DETACHED_PROCESS flag, but Splunk still kills the process whenever it stops. I read about it, and I assume that Splunk uses some mechanism like Job Objects that kills all the child processes. I want this process to not get terminated when its parent gets terminated; I want to remove all its references from the Splunk process. I also tried creating more than one process and exiting them to eliminate any reference Splunk keeps (something like the double fork on Linux), but that didn't work. Splunk spawns a service under svchost. Is there any way we can forcefully detach a process from the parent process, so it survives the parent's death?

Running a prediction and anomaly detection in parallel

I want to build a query that can do the following:

a. Monitor about 10-15 metrics from different kinds of system/application logs.
b. Identify anomalies in these metrics, and if an anomaly is identified in one of the metrics, run them through an if/else check to see whether similar kinds of metrics also had an anomaly.
c. If similar metrics had an anomaly, use the predict command to predict values for the next x minutes and identify whether they are breaching the SLAs.
d. If they are breaching, send out an alert.

We have been able to get to point (c), but we are unable to predict values for multiple metrics at the same time in parallel and check whether they are breaching the SLA. Does this need external code, or can it be done in Splunk? Please advise.
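
The `predict` command does accept multiple fields in one invocation, so several metrics can be forecast side by side and compared against a threshold in the same search. A minimal sketch, where the index, the two metrics, the 12-span horizon, and the SLA value of 95 are all assumptions:

    index=app_metrics
    | timechart span=5m avg(response_time) as rt, avg(queue_depth) as qd
    | predict rt as rt_pred qd as qd_pred future_timespan=12
    | where rt_pred > 95 OR qd_pred > 95

An alert on "number of results > 0" over this search would then cover point (d).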

Help identifying fast growing indexes

Hi fellow Splunkers. I am the Splunk admin at my org; however, that is mainly from the infrastructure side of things, so when it comes to actually using Splunk I am a novice. I would like to change this, but one thing at a time, Splunk is only one of my problems ;). We've got 4 indexers, 2 in each DC. Up until last week these were pretty consistent with each other in terms of growth, although now one site is growing about 30GB per day quicker than the other. This isn't a big deal, but I'd like to know why. Can someone help me with a search which shows growth per day vs. the previous day? Or have any tips to help me try to narrow down what's actually growing faster than normal? Appreciate any help you can offer.
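
A minimal sketch of one way to compare daily ingest per index, using the license usage log in the `_internal` index (the 7-day window is an assumption):

    index=_internal source=*license_usage.log type=Usage earliest=-7d@d
    | eval GB=b/1024/1024/1024
    | timechart span=1d sum(GB) as GB by idx

A day-over-day jump in one of the columns points at the index that is growing. For on-disk size rather than ingest volume, `| dbinspect index=*` reports bucket sizes per index.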

Schedule a cron job for a Python script on Splunk

I want to schedule a Python script as a cron job in my Splunk application, so as to automate importing data into my application.
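
A minimal sketch of the usual Splunk-native approach: register the script as a scripted input in the app's inputs.conf, where `interval` accepts either a number of seconds or a cron expression. The app name, script path, index, and sourcetype below are hypothetical:

    [script://$SPLUNK_HOME/etc/apps/my_app/bin/import_data.py]
    interval = 0 6 * * *
    index = main
    sourcetype = my_app:import
    disabled = 0

This runs the script at 06:00 every day under Splunk's bundled Python, and whatever it writes to stdout is indexed.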

Connection oracle.jdbc.driver.T4CConnection@781c5e13 marked as broken because of SQLSTATE(08000), ErrorCode(17410) java.sql.SQLRecoverableException: No more data to read from socket

Hi all. DB Connect loses its connection and stops ingesting data. I see this exception stack trace in the splunkd.log file. What can I do to solve the problem? Here is the exception:

    2019-05-16 12:15:01.998 +0300 [QuartzScheduler_Worker-11] WARN com.zaxxer.hikari.pool.ProxyConnection - unnamed_pool_1419397616_jdbc__oracle__thin__@10.67.30.104__1521__ekp - Connection oracle.jdbc.driver.T4CConnection@781c5e13 marked as broken because of SQLSTATE(08000), ErrorCode(17410)
    java.sql.SQLRecoverableException: No more data to read from socket
        at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:453)
        at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:390)
        at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:249)
        at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:566)
        at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:215)
        at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1022)
        at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3590)
        at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1008)
        at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:972)
        at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:572)
        at com.zaxxer.hikari.pool.HikariProxyResultSet.next(HikariProxyResultSet.java)
        at com.splunk.dbx.connector.resultset.PeekingResultSetIterator.hasNext(PeekingResultSetIterator.java:34)
        at com.google.common.collect.TransformedIterator.hasNext(TransformedIterator.java:42)
        at com.google.common.collect.Iterators$6.computeNext(Iterators.java:615)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:145)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:140)
        at com.splunk.dbx.server.dbinput.recordreader.iterator.EventPayloadRecordIterator.hasNext(EventPayloadRecordIterator.java:76)
        at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.hasNextRecord(DbInputRecordReader.java:115)
        at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.readRecord(DbInputRecordReader.java:121)
        at org.easybatch.core.job.BatchJob.readRecord(BatchJob.java:163)
        at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:145)
        at org.easybatch.core.job.BatchJob.call(BatchJob.java:78)
        at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)