Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Is it possible to show a custom tooltip whenever a user hovers over a slice of a pie chart, or a column in a bar chart, using Simple XML with SplunkJS?

Is it possible to show a custom tooltip whenever a user hovers over a slice of a pie chart or a column in a bar chart? I have been looking at this [blog about adding a custom tooltip to a table][1], but I cannot see how to apply that to a chart. The ChartView in SplunkJS does not have methods similar to the TableView's for adding a renderer.

I have data that contains a code (userid) and the name associated with the code (among other things). I am creating a pie chart to display counts by code. However, when the user hovers over a pie slice, I want to display the code, the name, and the counts/percentage in the tooltip. Customers want to count by code, but be able to see what the code maps to in the tooltip.

My search looks like this:

```
index=yc | stats sum(counts) as counts values(fullName) as fullName by userid | sort -counts | head 10
```

In the statistics view, this gives me:

```
userid     counts  fullName
jdoe       35424   John Doe
bsmith      4342   Bob Smith
mjones      4212   Mary Jones
jdoeadeer   1234   John Doe
...
```

So if the user hovers over the largest pie slice, I'd like the tooltip to display:

```
userid: jdoe
fullName: John Doe
counts: 35424
counts%: 78.2
```

  [1]: http://blogs.splunk.com/2014/01/29/add-a-tooltip-to-simple-xml-tables-with-bootstrap-and-a-custom-cell-renderer/
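Splunk's built-in charts do not expose a tooltip renderer the way TableView does, so a workaround sometimes used is to bake the extra detail into the category label itself, so that the default tooltip already shows it. A sketch using the fields from the question:

```
index=yc
| stats sum(counts) as counts values(fullName) as fullName by userid
| sort -counts
| head 10
| eval label=userid." (".fullName.")"
| fields label counts
```

Charting `counts` by `label` then shows both the code and the name in the default hover tooltip. This changes the slice labels too, so it is a compromise rather than a true custom tooltip.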

Is there any way to compare two different log sources to get the output?

Hi Experts, I need your help creating a query that shows output when a system is infected with malware or a virus (**source: antivirus**) and the same system is generating traffic (**source: firewall**). The challenge I am facing is that in the antivirus logs the infected host is `dest_ip`, while in the firewall logs the source is `src_ip`. The other issue is that I am unable to find any common field between the two log types (antivirus and firewall).

Sample antivirus log:

```
"2016-12-29 12:43:26" Type="SecurityIncident", RowID="AACDE705-F0A9-46B0-BE27-C0ECF81554A7", Name="MalwareInfection", Description="NotImplemented", Timestamp=1451418206600, SchemaVersion="1.0", ObserverHost="hostname", ObserverUser=0, ObserverProductName="SystemCenterEndpointProtection", ObserverProductversion="4.8.0204.0", ObserverProtectionType="AM", ObserverProtectionVersion=0, ObserverProtectionSignatureVersion=0, ObserverDetection="Realtime", ObserverDetectionTime=1451418206600, ActorHost=0, ActorUser=0, ActorProcess=0, ActorResource=0, ActionType="MalwareInfection", TargetHost="Thost", TargetUser="Tuser", TargetProcess="C:\Windows\explorer.exe", TargetResource="file:_E:\O F F I C E\PDFCreatorWebSetup.exe", ClassificationID=2147697638, ClassificationType="Trojan:Win32/Dorv.B!rfn", ClassificationSeverity="Severe", ClassificationCategory="Trojan", RemediationType="Quarantine", RemediationResult="True", RemediationErrorCode=0, RemediationPendingAction="NoActionRequired", IsActiveMalware="False"
```

Sample firewall log:

```
Dec 29 15:18:38 FHost 1,2015/12/29 15:18:38,007701001134,TRAFFIC,drop,1,2015/12/29 15:18:32,80.82.79.104,10.X.X.X,0.0.0.0,0.0.0.0,LOG-OUTSIDE,,,not-applicable,vs1,OUTSIDE,INSIDE,ethernet1/1,,Forward to Panorama,2015/12/29 15:18:32,0,1,41237,8080,0,0,0x0,tcp,deny,60,60,0,1,2015/12/29 15:16:06,0,any,0,7857899777,0x8000000000000000,china,UK,0,1,0,policy-deny
```
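One way to approach this kind of correlation is to normalize the infected-host field from each source into a single field and then group on it. A sketch, assuming sourcetypes named `antivirus` and `firewall` (substitute your actual sourcetype names):

```
(sourcetype=antivirus ActionType="MalwareInfection") OR sourcetype=firewall
| eval infected_host=coalesce(dest_ip, src_ip)
| stats values(sourcetype) as sources values(ClassificationType) as malware count by infected_host
| where mvcount(sources) = 2
```

The `mvcount(sources)=2` condition keeps only hosts that appear in both log sources. Note this assumes the antivirus `dest_ip` and the firewall `src_ip` use the same addressing (no NAT in between).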

PowerShell modular input doesn't apply my sourcetype to the data

I had a scripted input running PowerShell (simple *.bat files pointing to *.ps1 files), and I was able to apply my sourcetype via inputs.conf and props.conf. We recently upgraded to Splunk 6.3 and decided to use the PowerShell modular input from the UI, since that would let us edit schedules without restarting the system, etc. Although it is the same PowerShell script and the same props.conf, the Splunk indexer fails to set the sourcetype to my predefined type, even though I pick my sourcetype "from the list". Instead, each time I save it from the UI as "from the list" and then go back to the modular input page, it says "manual" and simply breaks my events line by line. Please advise: is this a bug in the Splunk PowerShell modular input, or am I missing something in the process? Thanks in advance for your time.
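For comparison, this is roughly what it looks like when the sourcetype is pinned in the conf files rather than the UI. The stanza and file names below are illustrative, not taken from the question:

```
# inputs.conf -- the stanza name depends on how the modular input was created
[powershell://MyScript]
script = . "$SplunkHome\etc\apps\my_app\bin\myscript.ps1"
schedule = */5 * * * *
sourcetype = my_custom_sourcetype

# props.conf -- line breaking for the predefined sourcetype
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
```

If the UI keeps reverting to "manual", checking what actually got written to the app's `local/inputs.conf` can show whether the sourcetype setting is being saved at all.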

Breaking a large JSON array from a REST input into events

I have a REST API which returns a very large, but valid, JSON payload. The structure of this JSON is a single array of many objects. Last I checked, the response is around 1.2 MB, roughly 1 million characters. Here is a sample of the JSON, pretty-printed (the actual response contains no newlines):

```
[
    {
        "barcode": "10010208",
        "comment": null,
        "flagged": 1,
        "fromCode": "war_rep",
        "fromStation": "Warehouse Repair",
        "lastTrackScan": "12/10/2015 12:31:48 AM",
        "muted": true,
        "priority": "RED",
        "reservations": 1,
        "sku": "TB44_10",
        "toCode": "war_rep",
        "toStation": "Warehouse Repair"
    },
    {
        "barcode": "10011135",
        "comment": null,
        "flagged": 1,
        "fromCode": "cus_rec",
        "fromStation": "Customer Receiving",
        "lastTrackScan": "12/09/2015 10:17:12 AM",
        "muted": true,
        "priority": "RED",
        "reservations": 2,
        "sku": "RR52_8",
        "toCode": "ins",
        "toStation": "Pre-Inspection"
    },
    ... many more
]
```

After adding a REST data input in Splunk that made an HTTP GET request once every 60 seconds, we were able to have this JSON successfully broken into events, with one event per object. But following an upgrade, this stopped working. Now the payload isn't parsed as JSON, appears to be treated as a single event, and is truncated at 10,000 characters. We're still using `sourcetype="_json"`, but somehow this isn't working. We're using Splunk Enterprise 6.3.1.
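A common way to handle a single large JSON array is to break it apart at index time with props.conf, so each object becomes its own event and the 10,000-character truncation never applies. A sketch (settings go on the indexer or heavy forwarder that parses this input):

```
[_json]
SHOULD_LINEMERGE = false
# break events at the "},{" boundary between objects
LINE_BREAKER = \}(,)\{
# strip the enclosing array brackets from the first and last events
SEDCMD-strip_open_bracket = s/^\[//
SEDCMD-strip_close_bracket = s/\]$//
TRUNCATE = 0
KV_MODE = json
```

The regex assumes the payload really contains no newlines, as described; if the API ever pretty-prints the response, the LINE_BREAKER would need to tolerate whitespace, e.g. `\}(,)\s*\{`.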

How would I generate a report to display any delta (by ID, by _time) in field X greater than Y?

A sample of the data I'm working with is as follows:

```
Timestamp           | ID | Amount
2015-12-30 09:50:45 | 1  | 28668
2015-12-30 09:50:45 | 2  | 24399
2015-12-30 09:50:45 | 2  | 904
2015-12-30 09:50:45 | 4  | 39292
2015-12-30 09:55:51 | 1  | 1000
2015-12-30 09:55:51 | 2  | 1045
2015-12-30 09:55:51 | 4  | 1035
```

Essentially, I'm trying to build a report/alert that will fire when any user has a variance of, say, greater than 50k between _time values (data is imported about every 5-10 minutes, so that's the _time variance). What I've got so far is something like this:

```
sourcetype="Log" * | table _time, ID, subAmount1, subAmount2 | eval amount=(subAmount1+subAmount2) | delta amount p=1 as amountVar | eval amountVar=-(amountVar)
```

I can search for an individual ID and see the variances between _time values properly, but I'm trying to make a more generic report that simply highlights, on a daily basis, IDs whose variance exceeds a threshold across a certain number of events.
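`delta` only looks at the previous row regardless of ID, which is why it works for a single ID but not globally. `streamstats ... by ID` computes the previous value per ID, so one search can cover all IDs at once. A sketch built on the fields from the question:

```
sourcetype="Log"
| eval amount = subAmount1 + subAmount2
| sort 0 ID _time
| streamstats current=f window=1 last(amount) as prev_amount by ID
| eval amountVar = abs(amount - prev_amount)
| where amountVar > 50000
| table _time ID prev_amount amount amountVar
```

The 50000 threshold is hard-coded here for illustration; saved as an alert, the `where` clause becomes the trigger condition.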

Table order with eval

I've got a search that does a `| table` prior to doing an `| eval` for ldapfilter. The search results are displayed in a seemingly random order (not the order specified after `| table`). Is there a better way to do this so I can specify the display order of the results?

```
index=blah |table _time,UserName,displayName,IpAddress |eval ID=UserName |ldapfilter search="(&(samAccountName=$ID$))" attrs="displayName"
```

Note: for some reason, the order of `| table` vs. `| eval | ldapfilter` heavily impacts search performance.

Faster (11-12 secs):

```
index=blah |table _time,UserName,displayName,IpAddress |eval ID=UserName |ldapfilter search="(&(samAccountName=$ID$))" attrs="displayName"
```

Much slower (116-117 secs):

```
index=blah |eval ID=UserName |ldapfilter search="(&(samAccountName=$ID$))" attrs="displayName" |table _time,UserName,displayName,IpAddress
```
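`table` does not pin the final display order once later commands (like `ldapfilter`) re-emit results, but an explicit `sort` at the end does. A sketch that keeps the early field-trimming (which is what makes the first variant fast) while still controlling the order:

```
index=blah
| fields _time, UserName, IpAddress
| eval ID=UserName
| ldapfilter search="(&(samAccountName=$ID$))" attrs="displayName"
| table _time, UserName, displayName, IpAddress
| sort - _time
```

Using `fields` instead of `table` early keeps the performance benefit (fewer fields carried into `ldapfilter`) without relying on `table` to fix a display order the later commands will not preserve.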

Using the Less CSS preprocessor in Simple XML views

Hi everyone. Is there a way to use the Less CSS preprocessor in Splunk Simple XML views? Any link or resource on how to use Less in Splunk would be welcome. Thanks.

Unix App and /tmp

We're monitoring a large number of RHEL boxes with the Unix app, and I notice that on some of them the `MountedOn=/tmp` information in the df sourcetype does not get forwarded. I think this depends on which filesystem /tmp is part of; perhaps the Unix app's implementation of df ignores some filesystem types?

Remove default attribute

I have an environment where I want to use apps like Splunk for *nix, but have the logs go to different indexes.

Splunk_TA_nix/default/inputs.conf:

```
[monitor:///var/log]
whitelist=(\.log|log$|messages|secure|auth|mesg$|cron$|acpid$|\.out)
blacklist=(lastlog|anaconda\.syslog)
index=os
disabled = 1
```

I don't want the default inputs.conf to have `index=os`. I want to set the index in another app and be able to upgrade the app without messing with the default inputs.conf of Splunk for *nix each time. For example...

serverclass.conf:

```
[serverClass:TEST1]
whitelist.0 = 1.1.1.1
[serverClass:TEST1:app:TEST1-IndexConfig]

[serverClass:TEST2]
whitelist.0 = 2.2.2.2
[serverClass:TEST2:app:TEST2-IndexConfig]
```

TEST1-IndexConfig default inputs.conf:

```
[default]
index=test1
```

TEST2-IndexConfig default inputs.conf:

```
[default]
index=test2
```

Am I going to be stuck commenting out all the `index=` lines in the defaults each time I want to upgrade the app? Or can I specify in the local confs to ignore the default conf attribute and respect the `[default]` stanza in my other app?
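Rather than editing the TA's `default/inputs.conf`, the usual pattern is to re-declare the same stanza in a `local/` file, either in the TA itself or in a companion app, because `local/` settings take precedence over any app's `default/` and survive app upgrades. A sketch for the TEST1 deployment in the question:

```
# TEST1-IndexConfig/local/inputs.conf
# Re-declares the TA's monitor stanza; only the index attribute is overridden.
# The whitelist/blacklist from Splunk_TA_nix default/inputs.conf still apply.
[monitor:///var/log]
index = test1
```

Note that `[default]` in one app does not override an explicit `index=` set inside another app's stanza; the override has to target the same `[monitor://...]` stanza. Putting it in `local/` avoids depending on app-name ordering between two `default/` files.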

IIS log files are not read properly: parts of multiple lines are getting put together as one

I have a report that groups webpage requests from an IIS log by SC_STATUS. The results are really bad, because Splunk appears to be getting confused about what line, and what part of a line, it's reading, resulting in data like "myurl.com" showing up where "200" for sc_status should be. I have Splunk set up to monitor the folder where the log files are stored in real time, and I manually selected IIS when identifying the format of the files.

This is what Splunk has stored for one request:

```
2015-12-30 15:06:54 W3SVC3 MYWEBSERVER 192.111.11.11 GET /App_Themes/Blue/Blue.css - 80 - 54.69.58.243 HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+9.0;+Windows+NT+6.1;+WOW64;+Trident/5.0) stuff_id=stuff;+user=stuff;+persistcookie=True;+stuffSelection=STUFF1,STUFF2,STUFF3,STUFF4,STUFF5,;+MYWEBSITE=R285025761;+ASP.NET_SessionId=3sgbsssgrvbwizta31fcynmx;+MyWebSite.ASPXAUTH=D2E24F7A75F2114DCF6AFB5DA65C739A2972D39870A74C1735EF0B3A819F27D5E743DE70EB6C5D7ADF944507DA71042D235483889FEA3A736EFBA2E81AB02F47A08BA93D51C6563422CE17055236EA5BBDCC03A03B4389CE042ADDFB89AA7A7D6C7246376DB20045AD709BE50444332F048A79BD65269C0919B0A5ADA4EE415EE1E96BCFBF3D5D33507D663A5671DE9E https://m5.0+(Macintosh;+Intel+Mac+OS+X+10_10)+AppleWebKit/600.1.25+(KHTML,+like+Gecko)+Version/8.0+Safari/600.1.25 MYWEBSITE=R285025761;+ASP.NET_SessionId=o2hgz2wa34vj2v0i2c5zdmis https://mywebsite.thisisawesome.com/Logon.aspx?ReturnUrl=%2f mywebsite.thisisawesome 200 0 0 24916 515 31
```

This request appears to be a mashup of two or more requests.

Part 1:

```
2015-12-30 15:06:54 W3SVC3 MYWEBSERVER 192.111.11.11 GET /App_Themes/Blue/Blue.css - 80 - 54.69.58.243 HTTP/1.1 Mozilla/5.0+(Windows+NT+6.1;+Trident/7.0;+rv:11.0)+like+Gecko stuff_id=stuff;+user=stuff;+persistcookie=True;+datalistSelection=OFAC,PEP_FO,;+MYWEBSITE=R285025761;+ASP.NET_SessionId=ykvwd2cgbhjcjck45jcy1w13 https://mywebsite.thisisawesome.com/logon.aspx mywebsite.thisisawesome.com 304 0 0 92 593 62
```

Part 2:

```
2015-12-30 15:06:38 W3SVC3 MYWEBSERVER 192.111.11.11 GET /Includes/jquery-1.4.2.min.js - 80 - 209.15.236.88 HTTP/1.1 Mozilla/5.0+(Macintosh;+Intel+Mac+OS+X+10_10)+AppleWebKit/600.1.25+(KHTML,+like+Gecko)+Version/8.0+Safari/600.1.25 MYWEBSITE=R285025761;+ASP.NET_SessionId=o2hgz2wa34vj2v0i2c5zdmis https://mywebsite.thisisawesome.com/Logon.aspx?ReturnUrl=%2f mywebsite.thisisawesome.com 200 0 0 24916 515 31
```

...plus part of another request in the middle. I can see at least one place where the lines were mashed together. In this snippet, `5671DE9E https://m5.0+(Macintosh;+Int`, you can see that `https://m` is part of a URL and `5.0+` is part of a user agent, but they're put together without a space as if they were one field. Other than that, I'm not sure where in the log file the data is coming from to put that one request together in Splunk.

My question is: how do I get Splunk to read my IIS logs properly and not mash multiple lines into one? Thanks!
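Symptoms like this usually come down to event breaking rather than field extraction: if Splunk merges or splits lines, pieces of different requests end up in one event. Forcing one event per line that starts with a timestamp is the usual fix. A props.conf sketch (the sourcetype name is whatever your input actually uses; verify before applying):

```
[iis]
SHOULD_LINEMERGE = false
# every event starts at a yyyy-mm-dd timestamp at the beginning of a line
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

With real-time monitoring of files that are still being written, partially flushed lines can also be read mid-write; if the props change doesn't fully resolve it, checking how IIS's log flushing interacts with the real-time monitor is a reasonable next step.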

Where can I find more information about how to use the interesting ports lookup table?

I've been reading this link: http://docs.splunk.com/Documentation/PCI/2.1.1/Install/Configureinterestingports and I need more information on how I can create a search using the Network data model. I want to create a search that uses this lookup table to show all the drops or blocks from certain IPs. I know that I can edit the lookup table that Splunk ships to add what I need, but I need more information than this link provides. Any help would be great. Thanks.
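As a starting point, a data-model search joined to the lookup might look something like the sketch below. The lookup and output field names here are assumptions based on the linked docs page; check the actual lookup file shipped with your PCI app version and adjust:

```
| tstats count from datamodel=Network_Traffic
    where All_Traffic.action="blocked" OR All_Traffic.action="dropped"
    by All_Traffic.src_ip, All_Traffic.dest_ip, All_Traffic.dest_port
| rename All_Traffic.* as *
| lookup interesting_ports_lookup dest_port OUTPUT app
| where isnotnull(app)
```

The final `where` keeps only traffic to ports listed in the lookup; a filter on specific source IPs can be added inside the `where` clause of `tstats`.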

How do I perform a match on a field ONLY on letters that are followed by numbers?

Suppose I have a field like this: `a1234`. Is there a way to grab all the letters that are immediately followed by numbers? I know I can take a substring from the first position, but I want this to work for values like `abc1234` too. Thanks in advance.
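With `rex`, a capture group can grab the leading run of letters only when a digit immediately follows, which works for `a1234` and `abc1234` alike. `myfield` below stands in for the actual field name:

```
... | rex field=myfield "^(?<letters>[A-Za-z]+)\d"
```

An `eval`-based equivalent is `| eval letters=replace(myfield, "^([A-Za-z]+)\d.*$", "\1")`, but `replace` returns the original string when the pattern does not match, so `rex` (which simply leaves `letters` unset) is usually the cleaner option.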

Can't see Dashboard Visualizations in New App

Hello esteemed Splunkers, I am trying to create a new app with dashboard visualizations. Previously, I created my own dashboard in the `simplexmlexamples` app, using `table_icons_inline.js`, and it works and looks great. However, I wanted to branch out and run the dashboards in an app of my own. So I did everything over again, thinking I would get the same results. But I don't, and I don't see why. Here's what I did:

- copied `table_icons_inline.js` and `table_decorations.css` into `[ My app's directory ]\appserver\static`, just like I did when making the dashboard within the context of the `simplexmlexamples` app
- copied the relevant XML into my new dashboard, especially the important `table id=""` assignment
- `_bump`ed the version number

I don't see the alert icons anywhere in my new dashboard. Is there anything else I should have done? Thanks for your attention to my question.
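One step that is easy to miss when moving out of `simplexmlexamples`: the dashboard's root element has to reference the script and stylesheet explicitly, since files in `appserver/static` are not loaded automatically. A sketch of the Simple XML wrapper:

```
<dashboard script="table_icons_inline.js" stylesheet="table_decorations.css">
  <label>My Dashboard</label>
  <!-- rows/panels containing the <table id="..."> the script targets -->
</dashboard>
```

If those attributes are already in place, a hard browser refresh (or another `_bump`) after copying the files is worth trying, since static assets are cached aggressively.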

iplocation / mapping

Is it possible to create a lookup such as the one below?

```
ip,location
10.10.20.x,london
10.10.21.x,brazil
10.10.22.x,miami
```

And show it on a map? Then, when clicking on a location name, see the results of all the failures for that location?
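Splunk's map visualizations plot latitude/longitude, so a custom-location lookup generally needs `lat` and `lon` columns alongside the names, and matching `10.10.20.x`-style ranges needs a CIDR lookup (CSV columns like `ip,location,lat,lon` with `10.10.20.0/24` entries, plus `match_type = CIDR(ip)` in transforms.conf). A sketch of the search side, with assumed names throughout:

```
index=auth action=failure
| lookup office_locations ip AS src_ip OUTPUT location, lat, lon
| geostats latfield=lat longfield=lon count by location
```

Clicking a cluster can then drive a drilldown search on `location` to list the individual failures.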

Dashboard is not populating

My Palo Alto Networks app is not populating any data. I am able to run searches on my `index="pan_logs"` and also on `host="x.x.x.x"`. Any ideas where I can start troubleshooting? Any help is appreciated. Thanks!
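A quick first check is whether the data carries the sourcetypes the app's dashboards expect, since searching the raw index can succeed while the dashboards (which filter on specific sourcetypes) stay empty:

```
index=pan_logs | stats count by sourcetype, host
```

If the sourcetypes don't match what the Palo Alto Networks app documents for its dashboards, the input or parsing configuration is the likely culprit rather than the dashboards themselves.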

How to get previous search results as a sub-search

Hi all, hope you can help me with this question. What I'm trying to do is, given the information Splunk keeps about triggered alerts in `index=_internal`, create a dashboard with the alerts triggered in a period of time and, using the **sid**, get the actual results from each alert. I'm using the following search to get the information about triggered alerts:

```
index="_internal" sourcetype="scheduler" savedsearch_name="Alert*" status=success | table _time app savedsearch_name severity status result_count sid
```

and I want to use the **sid** value returned to run a sub-search and get the actual values. Something like this, if possible, would be great:

```
index="_internal" sourcetype="scheduler" savedsearch_name="Alert*" status=success | table _time app savedsearch_name severity status result_count sid | append [ loadjob sid ]
```

Thanks for your help.
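`loadjob` cannot take a field value inside an `append` subsearch, but `map` runs its search once per input row and substitutes `$sid$` from each row. A sketch (note that `map` replaces the input rows with the loaded results, so any alert metadata you still need has to be re-added inside the mapped search):

```
index="_internal" sourcetype="scheduler" savedsearch_name="Alert*" status=success
| table _time app savedsearch_name severity status result_count sid
| map maxsearches=100 search="| loadjob $sid$"
```

One caveat: `loadjob` only works while the underlying search artifacts still exist, so alerts whose jobs have expired will return nothing.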

Newbie question on a search string to produce a line graph of multiple Y values

Hi, I've never written a search string for Splunk, and some of the answers that I think apply to my question really confuse me, so please bear with me, as this is probably a really dumb question. I've got the very beginning of my search:

```
source="web_input://CyberQ
```

which produces the following data:

```
match_cook_temp="718" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="613.86013031"
match_cook_temp="719" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="670.753002167"
match_cook_temp="721" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="582.855939865"
match_cook_temp="721" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="580.070018768"
match_cook_temp="722" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="614.03298378"
match_cook_temp="721" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="566.085100174"
match_cook_temp="725" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="915.005922318"
match_cook_temp="719" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="616.425037384"
match_cook_temp="719" raw_match_count="5" response_size="1235" match_output_percent="100" response_code="200" match_food1_temp="OPEN" match_cook_set="3560" encoding="ascii" match_food1_set="1670" request_time="622.943162918"
```

I would like to have a graph that shows the following fields on the Y axis: cook_temp, cook_set, food1_temp, food1_set, output_percent. For the X axis, I'm after time. An example of what I'm after, shown in Excel, is: ![alt text][1]

  [1]: /storage/temp/79207-chart.jpg

Please could I have a hand with writing the search statement that would generate this? Thanks, Richard.
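`timechart` puts `_time` on the X axis and each aggregation as a Y-axis series, which matches the Excel example. The only wrinkle in this data is that `match_food1_temp` can be the string `OPEN`, which has to be nulled out before averaging. A sketch (the `span` value and the completed `source` pattern are guesses to adjust):

```
source="web_input://CyberQ*"
| eval food1_temp=if(match_food1_temp="OPEN", null(), tonumber(match_food1_temp))
| timechart span=1m avg(match_cook_temp) AS cook_temp
                    avg(match_cook_set) AS cook_set
                    avg(food1_temp) AS food1_temp
                    avg(match_food1_set) AS food1_set
                    avg(match_output_percent) AS output_percent
```

If the device reports temperatures in tenths of a degree (718 meaning 71.8), an extra `eval` dividing by 10 before the timechart would rescale the axis; that scaling is a guess about the CyberQ output, not something the data above confirms.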


Splunk XML data formatting

I'm trying to split the XML data while pushing it into Splunk. I had a tough time working on this, as it is a combination of XML and CSV formats.

Input:

```
10:26:10 PST 16 Nov 2015
AA;systems engineer;seattle
1:26:10 PST 16 Nov 2015
BB;Lead;seattle
CC;Tech Lead,Redmond
6:26:10 PST 16 Nov 2015
DD;data architect;annapolis
```

Expected output:

```
ename  position          branch
AA     systems engineer  seattle
BB     Lead              seattle
CC     Tech Lead         Redmond
DD     data architect    annapolis
```
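Assuming each indexed event ends up holding one `name;position;branch` record (with at least one record using a comma before the branch, as `CC;Tech Lead,Redmond` does), a search-time extraction could look like this; the field names come from the expected output:

```
... | rex "^(?<ename>[^;]+);(?<position>[^;,]+)[;,](?<branch>.+)$"
    | table ename, position, branch
```

If the timestamp lines and record lines land in the same event instead, the event breaking would need fixing first (props.conf `LINE_BREAKER`) so that each record is its own event before this extraction applies.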

How to add and parse the xml data into splunk

$
0
0
The structure of the XML file looks like this:

```
10:26:10 PST 16 Nov 2015
AA;systems engineer;seattle
1:26:10 PST 16 Nov 2015
BB;Lead;seattle
CC;Tech Lead,Redmond
6:26:10 PST 16 Nov 2015
DD;data architect;annapolis
```

I need the output as:

```
ename  position          branch
AA     systems engineer  seattle
BB     Lead              seattle
CC     Tech Lead         Redmond
DD     data architect    annapolis
```

