Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Which ticket systems for SOC that have integration with Splunk do you use/know?

Hi. Any opinions? It would be cool if you use such a ticket system and can tell us about the pros and cons of that product.

Multiple stats counts on different criteria

Hi all, first question on Splunk Answers. I just finished the Fundamentals I training and now want to do some more sophisticated things with SPL. I have data with status codes 100-900 that track the progress of a process that happens daily. I'd like to build a dynamically updating dashboard chart that shows that progress. For each client (a field) and operationnumber, I want to show the total operations still being processed (max status < 900) for that client and the total complete for that client (max status = 900) ... something like head limit=1 for each client, operationnumber with the items sorted by - status. While I understand the basics of SPL, there is something I'm not quite getting about how SPL searches are parsed and executed ... e.g. the light bulb has yet to fully go off! I'm actually looking forward to really grokking this so I can start helping others here. Thanks so much, Guy Davis
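For what it's worth, a hedged sketch of one way this could be approached (the field names `client`, `operationnumber`, and `status` are taken from the question; the assumption that the highest status seen for an operation is its current status is mine):

```
... your base search ...
| stats max(status) as last_status by client operationnumber
| eval state=if(last_status=900, "complete", "in_progress")
| chart count over client by state
```

Here `stats max(status)` plays the role of the "head limit=1 per client/operationnumber after sorting by - status" idea, and `chart` then counts operations per client in each state.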

Splunk - How do I build a timeline chart to trace a transaction that has multiple asynchronous processes?

I would like to create a timeline view that shows the begin/end time of every event for a given transaction. The transaction is a series of automated/asynchronous processes that run from a single CreateJob request. Essentially, I want a bar chart of the events where the x-axis is the "wall clock" and the y-axis is a list of events. For each event, we have the following data points:

- transactionId => links all the different events together
- actionName => the name of the event that is being logged
- beginTime
- endTime

Sample data:

- timestamp=2018/07/02 12:00:10.572;actionName=ConcludeJob;application=10002;beginTime=2018/07/02 12:00:10.353;endTime=2018/07/02 12:00:10.572;transactionId=123;
- timestamp=2018/07/02 12:00:10.345;actionName=storeFile;application=10002;beginTime=2018/07/02 12:00:10.230;endTime=2018/07/02 12:00:10.345;transactionId=123;
- timestamp=2018/07/02 12:00:10.201;actionName=retrieveItem;application=10002;beginTime=2018/07/02 12:00:10.172;endTime=2018/07/02 12:00:10.201;transactionId=123;
- timestamp=2018/07/02 12:00:05.154;actionName=CreateJob;application=10002;beginTime=2018/07/02 12:00:05.144;endTime=2018/07/02 12:00:05.154;transactionId=123;

What I would like to build is a timeline dashboard visualization with the "wall clock" as the x-axis, each event as a line on the y-axis, and a bar for each event whose start and end plot when the event started and when it ended. This way I could see what is happening in parallel and which events are the "long pole". Here is an example of what I am looking for: ![alt text][1] (unfortunately, I am new, so I may not be able to put the image right in here). The difference is that my chart would have the following from my sample data:

y-axis:
- CreateJob
- retrieveItem
- storeFile
- ConcludeJob

x-axis:
- Hour:Minute:Second.millisecond

Thanks!
[1]: https://images.template.net/wp-content/uploads/2015/07/Timeline-Chart-With-Overlapping-Event-Excel-Download.jpg
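For what it's worth, the begin/end pairs can be turned into start+duration rows that a Gantt-style custom visualization could consume; a hedged sketch (the timestamp format is taken from the sample data above):

```
... base search ... transactionId=123
| eval start=strptime(beginTime, "%Y/%m/%d %H:%M:%S.%3N")
| eval stop=strptime(endTime, "%Y/%m/%d %H:%M:%S.%3N")
| eval duration=stop-start
| table actionName start duration
| sort start
```

With `start` as epoch time and `duration` in seconds, each actionName becomes one bar on the y-axis.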

Splunk DB Connect

Hi, from Splunk I am trying to connect to Cassandra through Splunk DB Connect. In the db_connection_types.conf file I made the changes below:

[cassandra]
displayName = DSE Cassandra
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:cassandra://host/keyspace
jdbcDriverClass = com.dbschema.CassandraJdbcDriver
port = 9042
database = cassandra

I deployed the necessary jars under the drivers directory, and after a restart I am able to see the driver listed in the driver section of the Splunk UI, but it is not installed. I checked the logs and am seeing this error message:

2018-06-29 13:56:15.075 -0500 3536@CB2012D060 [main] WARN com.splunk.dbx.service.driver.DriverServiceImpl - action=load_drivers Can not load any driver from files [/C:/Program%20Files/Splunk/etc/apps/splunk_app_db_connect/drivers/cassandra-jdbc-2.1.1.jar]

Can someone please help?

Failed to read size=10 event(s) from rawdata in bucket

I have a couple of panels that are giving me an error on one of my indexers: "Failed to read size=10 event(s) from rawdata in bucket ... Rawdata may be corrupt, see search.log". How do I find out if there is corruption? TIA! David L. Crooks
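One way to check for corruption (hedged: run this on the indexer itself, and note that the exact flags vary by Splunk version, so confirm with `splunk fsck --help` first):

```shell
# scan every bucket in every index for corruption (read-only)
splunk fsck scan --all-buckets-all-indexes

# or limit the scan to a single index
splunk fsck scan --all-buckets-one-index --index-name=yourindex
```

A scan only reports problems; there is a separate repair mode, which is worth reading up on before running.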

Programmatically accessing external APIs from Splunk in Python

I need to access some URLs from Splunk programmatically in Python, and need to know what the structure of the app should be and which files need to be placed in the bin and default folders respectively. Basically, a Splunk app needs to be created that gets data from some APIs. We are able to hit those APIs and get results outside of Splunk, but we need to access them from Splunk and display the data in a dashboard without indexing the data into Splunk.
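A minimal sketch of the data-fetching piece such a bin/ script might contain; the endpoint URL and field names here are hypothetical, and in a real app this script would be wired up via default/inputs.conf (scripted input) or default/commands.conf (custom search command):

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    """Fetch a URL and parse its JSON body (hypothetical endpoint)."""
    with urlopen(url) as resp:
        return json.load(resp)

def to_events(records):
    """Render API records as key="value" lines Splunk can parse."""
    return [" ".join('%s="%s"' % (k, v) for k, v in rec.items())
            for rec in records]

if __name__ == "__main__":
    # hypothetical URL; replace with the real API endpoint
    for line in to_events(fetch_json("https://example.com/api/items")):
        print(line)
```

To display without indexing, the custom-search-command route is the usual fit, since its results exist only for the lifetime of the search.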

How to limit runtime for a search?

Hi, Is there a setting to limit max runtime for a search? I don't see anything obvious.
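One knob that may do this (hedged: check the authorize.conf documentation for your version) is the per-role srchMaxTime setting, e.g.:

```
# authorize.conf
[role_user]
srchMaxTime = 2hrs
```

There are also search-process limits in limits.conf, but the per-role cap is the usual place to bound how long a user's searches can run.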

Need to limit which servers send logs to an indexer

We have multiple Windows UFs sending logs to a HF that then divides the winevent:security logs between two indexers. One indexer needs to receive logs from all servers; the second indexer needs to receive logs from only particular servers. Any idea how I can go about setting this up? I have custom transforms and props configs that split the event IDs between the two indexers, but I need to limit which servers go to the second indexer as well.
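A hedged sketch of host-based routing on the HF to complement the event-ID transforms (the output-group names and hostnames are placeholders; the host regex matches against the MetaData:Host key, whose values look like host::name):

```
# outputs.conf
[tcpout:all_events_indexer]
server = 1.1.1.1:9997

[tcpout:special_indexer]
server = 2.2.2.2:9997

# props.conf
[winevent:security]
TRANSFORMS-routing = route_everything, route_special_hosts

# transforms.conf
[route_everything]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = all_events_indexer

[route_special_hosts]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(serverA|serverB)$
DEST_KEY = _TCP_ROUTING
FORMAT = all_events_indexer,special_indexer
```

Transforms run in order, so the second one overrides the routing only for the listed hosts.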

Event count mismatch when comparing between tstats query and normal(non-tstats) query

When we use a tstats query we get more events than with the equivalent non-tstats query, but their statistics counts match. Using the query below, we get a statistics count of 25 and an event count (the Events label below the search query) of 214:

| tstats `summariesonly` values(XXXX.product_name) as "Product Name" from datamodel=XXXX where XXXX.threat_name!="" by XXXX.threat_name

But using:

`get_index` threat_name!="" | stats values(product_name) as "Product Name" by threat_name

we get a statistics count of 25 and an event count of 208.

if-else statement with timeframe

I would like my search to return the previous month's plus the current month's data if the count produced by just the current month's data is 0 or less. How would I do that? Right now I have earliest=@mon for the current month's data. How do I implement the if-else? This is what I have right now ... would this go in the first pipe?

eval (if count(data) <=0, earliest= -1mon@mon, else earliest=@mon )
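eval can't set time modifiers like that, but a subsearch can choose the earliest value conditionally; a hedged sketch (the index name is a placeholder, and `| return earliest` emits earliest="..." into the outer search):

```
index=mydata
    [ search index=mydata earliest=@mon
      | stats count
      | eval earliest=if(count<=0, "-1mon@mon", "@mon")
      | return earliest ]
| ...
```

The inner search counts the current month's events, and the returned earliest widens the outer search's time range only when that count is zero.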

Issues installing Splunk App for Microsoft Exchange on Search Head Cluster

I am attempting to install the Splunk App for Microsoft Exchange on our search head cluster. I drop the binaries into the C:\Program Files\Splunk\etc\shcluster\apps\splunk_app_microsoft_exchange folder on our deployment server and then push the app to the search head cluster by running *apply shcluster-bundle*. The app appears to install on the search heads, but when you try to access it from the app menu you get the following error on all search heads... ![alt text][1] If you refresh the page then you occasionally get this error: ![alt text][2] ***Any ideas?***

[1]: /storage/temp/251098-sh-error.png
[2]: /storage/temp/251100-sh-error2.png

Not getting data from sample files on search head/forwarder to index cluster

I am new to this and am trying to set up a dev environment with 1 deployment server, 1 search head/forwarder, 1 cluster master, and 2 indexers within the cluster. I have taken data samples and placed them in directories on the search head, e.g. opt/splunk//hops/asalogs. I have configured the inputs.conf file to monitor these for ingestion and my outputs.conf file to send them toward the indexers. My props.conf file has been created to do the parsing of the logs. Bear in mind the hops index mentioned in the stanzas is defined on the cluster master.

Inputs.conf

[monitor:///opt/splunk/HOPS/asalogs/*.log]
sourcetype=apps:hops:websphere
index=hops
disabled=false

[monitor:///opt/splunk/HOPS/jse/*.xml]
sourcetype=apps:hops:jse
index=hops
disabled=false

Outputs.conf

[indexAndForward]
index = false

[tcpout]
defaultGroup=indexcluster
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:indexcluster]
server=1.1.1.1:9997,1.1.1.2:9997
disabled = false

[tcpout-server://1.1.1.1:9997]

Props.conf

[apps:hops:jse]
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=32
SHOULD_LINEMERGE=true
disabled=false
TIME_PREFIX=\

[apps:hops:websphere]
SHOULD_LINEMERGE=true
NO_BINARY_CHECK=true
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=32
disabled=false
TIME_PREFIX=\[

By all indications from the documentation I have read, this should be working. I am not getting a lot of support locally, so I am reaching out to other Splunk denizens for assistance. Thanks!

Calculate percentage between 2 reports

Hello, I use 2 reports with the code below:

index="windows-wmi" sourcetype="wmi:DiskRAMLoad" host="$field1$" (Name="mfetp.exe" OR Name="mcshield.exe") Name=$Service$ | head 10 | table _time Name host Name ReadOperationCount ReadTransferCount WriteOperationCount WriteTransferCount | eval _time = strftime(_time, "%Y-%m-%d %H:%M") | stats values(Name) as Name, avg(ReadOperationCount), avg(ReadTransferCount), avg(WriteOperationCount), avg(WriteTransferCount) BY host | rename avg(ReadOperationCount) as ReadOperation_AVG, avg(ReadTransferCount) as ReadTransfer_AVG, avg(WriteOperationCount) as WriteOperation_AVG, avg(WriteTransferCount) as WriteTransfer_AVG

In the first report I get the values corresponding to Name="mfetp.exe", and in the second the values corresponding to Name="mcshield.exe". With the values of the 2 reports, I have to calculate the percentage between them. For example, I want to obtain the percentage between avg(ReadOperationCount) from the first report and avg(ReadOperationCount) from the second report. Do you have any idea how to do this, please?
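For what it's worth, a single search can compute both averages side by side and then the percentage, instead of two separate reports; a hedged sketch for one of the metrics (note the single quotes in eval, since the column names come from the Name values):

```
index="windows-wmi" sourcetype="wmi:DiskRAMLoad" host="$field1$"
    (Name="mfetp.exe" OR Name="mcshield.exe")
| chart avg(ReadOperationCount) over host by Name
| eval ReadOperation_pct = round('mfetp.exe' / 'mcshield.exe' * 100, 2)
```

With a single aggregation split by Name, chart names the columns after the two process names, so eval can divide them directly; the same pattern would repeat for the other three counters.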

Splunk Certified User Certificate Version Mismatch

Hello, I already passed the Splunk Fundamentals 7.x Part 1 course, and today I got the certification as well, but the version on the certificate shows 6.x, not 7.x! Any advice on this version mismatch and how I can rectify it? Cheers, Ibra

Adding more rows in the trellis single-value colored format

I have hundreds of values that I want to create a heat map for using the trellis layout, but only two rows with ten values each are displayed. I know I can use the "Next" link to see the rest, but there are over 25 pages to click through and I want everything on the same page. Is there a way to display more than 20 charts using the trellis layout, so I can have everything on the same page? The documentation makes no mention of any limits; if I end up building a custom visualization instead, trellis documentation that does mention the limit would make that easier.


Translating dashboard embedded HTML

We have a dashboard panel that includes some embedded HTML that we are trying to enable for i18n translation, for example:

Filter your data queries by time range and domain

When I run the "splunk extract i18n -app" command, this shows up in messages.pot as:

msgid ""
"Filter your data queries by "
"time range and domain"

I translate these strings and generate the messages.mo file, but the changes never get picked up; the panels always display the original text. All the other strings show up translated, but the embedded HTML sections don't. Suggestions? Thanks, Rob.

where clause evaluates differently depending on fields present in a prior table command

I am putting together a query to find out all SSH connections between the internal network and external networks. The logic I am trying is very straightforward: I look up the src and dest IP fields in a lookup table. If both src and dest are present in the lookup table, then it's an internal SSH connection; else it's an external connection.

Query:

sourcetype="cefevents" | join MyFlowID max=0 [ search MyApplicationName=ssh ] | transaction MyFlowID src dst | dedup src dst | lookup corp_networks network_prefix as src output valid as srcpresent | lookup corp_networks network_prefix as dst output valid as dstpresent | lookup host_addr_to_name host_addr as src output host_name as "Src Name" | lookup host_addr_to_name host_addr as dst output host_name as "Dst Name" | iplocation src | rename Country as "Src Country" | iplocation dst | rename Country as "Dst Country" | rename spt as "Src Port" dpt as "Dst Port" | table src "Src Name" "Src Country" dst "Dst Name" "Dst Country" "Src Port" "Dst Port" MyApplicationName MyFlowID | where NOT (dstpresent == 1 and srcpresent ==1)

Output:

src Src Name Src Country dst Dst Name Dst Country Src Port Dst Port MyApplicationName MyFlowID
40.40.40.19 50.1.10.2 XXXXX 22 51270 ssh 505345970676954420
40.40.40.40 50.1.10.2 XXXXX 22 52427 ssh 649486050193769777

Query with srcpresent and dstpresent in the table command:

sourcetype="cefevents" | join MyFlowID max=0 [ search MyApplicationName=ssh ] | transaction MyFlowID src dst | dedup src dst | lookup corp_networks network_prefix as src output valid as srcpresent | lookup corp_networks network_prefix as dst output valid as dstpresent | lookup host_addr_to_name host_addr as src output host_name as "Src Name" | lookup host_addr_to_name host_addr as dst output host_name as "Dst Name" | iplocation src | rename Country as "Src Country" | iplocation dst | rename Country as "Dst Country" | rename spt as "Src Port" dpt as "Dst Port" | table srcpresent dstpresent src "Src Name" "Src Country" dst "Dst Name" "Dst Country" "Src Port" "Dst Port" MyApplicationName MyFlowID | where NOT (dstpresent == 1 and srcpresent ==1)

Output: empty, as expected. How do I achieve the expected output without putting srcpresent and dstpresent in the table command prior to the where condition? Table is acting as a filter and eliminating the fields.
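One way that may achieve this: filter while the helper fields still exist, then drop them before the final table, so where sees them but the output doesn't; a hedged sketch of the tail of the search:

```
... | lookup corp_networks network_prefix as dst output valid as dstpresent
| where NOT (dstpresent == 1 AND srcpresent == 1)
| fields - srcpresent dstpresent
| ...
| table src "Src Name" "Src Country" dst "Dst Name" "Dst Country" "Src Port" "Dst Port" MyApplicationName MyFlowID
```

The key point is ordering: table keeps only the listed fields for the rest of the pipeline, so any where that depends on unlisted fields must run before it.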

Splunk_TA_windows how to configure inputs.conf for specific targets?

I'm using the Splunk_TA_windows add-on as a deployment app. In local/inputs.conf, I've added a stanza to monitor a Windows event log channel that does not exist on all the Windows computers where the Splunk_TA_windows app is deployed. This is causing errors in splunkd.log. Is it possible to configure inputs.conf for specific computers, or to suppress error logging on inputs which are known not to exist on all computers? Or is the only answer to make another deployment app for the specific servers?
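One common pattern (hedged sketch; the server class names, hostnames, and the second app's name are placeholders) is indeed a second deployment app holding just the extra stanza, scoped with its own server class in serverclass.conf on the deployment server:

```
# serverclass.conf on the deployment server
[serverClass:windows_all]
whitelist.0 = *

[serverClass:windows_all:app:Splunk_TA_windows]

[serverClass:windows_special]
whitelist.0 = special-host-01
whitelist.1 = special-host-02

[serverClass:windows_special:app:Splunk_TA_windows_special_inputs]
```

All Windows hosts get the base TA, and only the whitelisted hosts also get the app containing the channel-specific input.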

How to extract with rex from the beginning of a string to a delimiter or the end of the string?

How do I write a rex command to extract from the beginning of a string up to a particular delimiter (such as a comma) or, if there is no delimiter, to the end of the string? I thought of something like `rex field=TEXT "(?.+)(\,|$)"` but it did not work. For example:
- If TEXT is `12A-,4XYZ`, the result should be `12A-` (up to the `,`)
- If TEXT is `567+4ABC`, the result should be `567+4ABC` (the entire string)
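A hedged sketch of one pattern that behaves this way: a negated character class matches up to the first comma and, when there is none, runs to the end of the string (the capture-group name `prefix` is arbitrary):

```
| rex field=TEXT "^(?<prefix>[^,]+)"
```

On `12A-,4XYZ` this captures `12A-`; on `567+4ABC` it captures the whole string, with no alternation needed.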