Channel: Questions in topic: "splunk-enterprise"

Forward a log to a different indexer without forwarding _internal index to that indexer

I have a universal forwarder (version 6.2.5) that is forwarding a monitored log file to an indexer. I want to add another monitored log file that should be sent to a different indexer. I got this to work by adding a `[tcpout:indexer2]` stanza to outputs.conf and setting `_TCP_ROUTING = indexer2` in inputs.conf for the new log file. However, the _internal index (splunkd.log etc.) is now being sent to both the original indexer and indexer2. I want the _internal index to be sent only to the original indexer. How can I configure the forwarder to make this happen? Here are the outputs.conf and inputs.conf settings I am currently using:

**outputs.conf**

    [tcpout]
    defaultGroup = indexer1

    [tcpout:indexer1]
    server = server1:9997
    autoLB = true

    [tcpout:indexer2]
    server = server2:9997
    autoLB = true

**inputs.conf**

    [monitor:///var/log/test1.log]
    disabled = false
    index = test
    sourcetype = access_combined

    [monitor:///var/log/test2.log]
    _TCP_ROUTING = indexer2
    disabled = false
    index = test
    sourcetype = access_combined
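One approach that may work here (a sketch, not a verified fix): pin the forwarder's own internal logs to indexer1 by overriding `_TCP_ROUTING` on the internal-log monitor. The stanza path below assumes the default `$SPLUNK_HOME/var/log/splunk` monitor that ships with the universal forwarder.

    # local/inputs.conf on the forwarder -- a hedged sketch, overriding the
    # default internal-log monitor so splunkd.log etc. go only to indexer1
    [monitor://$SPLUNK_HOME/var/log/splunk]
    _TCP_ROUTING = indexer1

If internal events are reaching both groups despite `defaultGroup = indexer1`, explicitly routing the internal monitor like this is one way to force them to a single output group.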

Combining multiple CPU percentage instances to a single instance

So I have CPU data from the template for the Citrix XenApp add-on gathering CPU metrics. Each line on the graph is populated from two fields:

- **%_Processor_Time** (a 0-100 value)
- **instance** (the process name: chrome#1, chrome#2, iexplore#13, etc.)

[This method][1] has a similar goal, but my fields are not as straightforward as merging the counts of two values. I'm trying to do something similar with an eval: essentially, I want to use wildcards to lump all the processor stats for a given application into a single field to report on. So in the example below, instead of having an average for each instance name, I want to do something like this:

![alt text][2]

    | eval source=if(source=="chrome*","chromeTotal",source)

  [1]: https://answers.splunk.com/answers/111418/merge-timechart-column.html
  [2]: /storage/temp/192192-processescombined.png
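A minimal SPL sketch of one way to do this (assuming the instance suffix is always `#<number>`): strip the suffix with a regex so every chrome#N rolls up into a single value, then aggregate on the new field. `%_Processor_Time` and `instance` are the field names from the question; everything else is illustrative.

    ...
    | eval app=replace(instance, "#\d+$", "")
    | eval cpu='%_Processor_Time'
    | timechart avg(cpu) AS avg_cpu by app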

Why does my SH Cluster set with a static captain show members with mgmt_uri = ?

When my search head cluster is set up with a static captain, why does the `./splunk show shcluster-status` command return `mgmt_uri = ?` for all members?

How do I extract the event time?

I tried this, but it didn't work:

    | return _time=strftime(_time,"%Y-%m-%d %H:%M:%S")
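A minimal sketch of the usual approach (hedged; `event_time` is an illustrative field name): `return` is meant for passing values out of subsearches, so format the timestamp with `eval` instead and display the new field.

    ...
    | eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | table event_time, _raw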

Help with transaction!

Hi, I have the following query written, but it is not giving me the correct output. My logs look like this:

    subject   action     score   x     s
    Hello     continue   40      234   585
    Hello     discard    80      234   585

My query:

    index=myindex (action=discard OR action=continue)
    | transaction x s keepevicted=true startswith=eval(action="continue") endswith=eval(action="discard")
    | search subject=*
    | stats values(action) AS action, dc(action) AS actioncount by subject
    | where actioncount=2

It gives me results, but they are usually the ones that were discarded first and continued later. I am trying to get the other way around. Anything that scores above 80 has action=discard, so I want to be alerted on all subjects that had a score below 80 with action=continue, but where the score later went above 80 and the action is now discard. The logs are split over several lines, hence a transaction on 'x' and 's' is required to combine them. Thanks in advance for any help!
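A hedged, stats-based sketch that sidesteps transaction ordering entirely (field names are taken from the question; the pairing logic is illustrative): record the earliest continue time and the latest discard time per message, then keep only the pairs where the discard came after the continue.

    index=myindex (action=continue OR action=discard)
    | stats min(eval(if(action=="continue", _time, null()))) AS continue_time
            max(eval(if(action=="discard",  _time, null()))) AS discard_time
            values(subject) AS subject
      by x s
    | where isnotnull(continue_time) AND isnotnull(discard_time) AND discard_time > continue_time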

Wordcloud Custom Visualization: How to add the drilldown option?

In the Wordcloud Custom Visualization, I want to add the drilldown option. How do I do that?

Why is the Automatic lookup not returning latest data?

We have an automatic lookup that is based on a lookup file appended by a report. The lookup is refreshed 6 times a day, and the automatic lookup appends a couple of fields from the lookup to the indexed events. Whenever new records are added to the lookup, the automatic lookup doesn't return the new values when new events are queried. Sometimes it takes up to 2 hours after the lookup has been refreshed, and until then it still returns the older records, so it seems the lookup is being cached. How can we stop the lookup from being cached?

Thanks,
Varun Negi

Days Between Question

I am trying to determine the number of days between a static date and the current date. In this query I added the 2008r2 column with a static date:

    ... | table Division Name OS _time | eval 2008r2="1/14/2020"

I was unable to use an eval statement I had used previously on this; I assume I don't have that column formatted properly:

    eval Days=round((2008r2(now(),"@d")-relative_time(2008r2,"@d"))/86400,0)
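A hedged sketch of how this usually gets fixed: the string "1/14/2020" has to be parsed into epoch time with `strptime()` before `relative_time()` can snap it. Field names follow the question; the format string assumes month/day/year.

    ...
    | eval target_epoch=strptime("1/14/2020", "%m/%d/%Y")
    | eval Days=round((relative_time(now(), "@d") - relative_time(target_epoch, "@d")) / 86400, 0)

Wrap the difference in `abs()` if the target date can be in the future.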

How do I add an icon to multiple columns in a table on a dashboard?

I'm trying to use the sample for creating a table of inline icons based on a True or False value from PowerShell data. Instead of showing True or False, I want to show a green check in a circle or a red ! in a triangle (similar to the icons in the Table Icon Set (Rangemap) in the Splunk 6.x Dashboard Examples). I can get one field to display, but not all of them.

**Form Data XML**
(Form title: "Active Directory Health"; time-range defaults: earliest `-7d`, latest `now`; the panel search uses `$earliest$` / `$latest$` with a 20s delayed refresh.)

    eventtype=msad-dc-health NOT Enabled=False
    | eval DomainNetBIOSName=upper(DomainNetBIOSName)
    | eval DomainDNSName=lower(DomainDNSName)
    | dedup host,DomainDNSName
    | sort ForestName,Site,DomainDNSName,host
    | eval DomainTitle="Forest: ".ForestName." (".ForestLevel."), Domain: ".DomainNetBIOSName."\\\\".DomainDNSName." (".DomainLevel.")",
           "Master Roles"=split(FSMORoles," "), Host=host, "Operating System"=OperatingSystem, Version=OSVersion,
           GC=GlobalCatalog, "AD Services OK"=ProcsOK, "DNS Registration"=DNSRegister, "SYSVOL Shared"=SYSVOLShare
    | dedup Host
    | eval GC=IF(GC=="True","low","severe")
    | table Host, "Master Roles", "GC", ProcsOK, "DNS Registration", "SYSVOL Shared"
    | rename ProcsOK as "AD Services OK"
**table_icons_rangemap.js**

    require([
        'underscore',
        'jquery',
        'splunkjs/mvc',
        'splunkjs/mvc/tableview',
        'splunkjs/mvc/simplexml/ready!'
    ], function(_, $, mvc, TableView) {
        // Translations from rangemap results to CSS class
        var ICONS = {
            severe: 'alert-circle',
            elevated: 'alert',
            low: 'check-circle'
        };
        var RangeMapIconRenderer = TableView.BaseCellRenderer.extend({
            canRender: function(cell) {
                // Only use the cell renderer for the GC field
                return cell.field === 'GC';
            },
            render: function($td, cell) {
                var icon = 'question';
                // Fetch the icon for the value
                if (ICONS.hasOwnProperty(cell.value)) {
                    icon = ICONS[cell.value];
                }
                // Create the icon element and add it to the table cell
                $td.addClass('icon').html(_.template('<i class="icon-<%-icon%>" title="<%-GC%>"></i>', {
                    icon: icon,
                    GC: cell.value
                }));
            }
        });
        var RangeMapIconRenderer = TableView.BaseCellRenderer.extend({
            canRender: function(cell) {
                // Only use the cell renderer for the AD Services OK field
                return cell.field === 'AD_Servicecs_OK';
            },
            render: function($td, cell) {
                var icon = 'question';
                // Fetch the icon for the value
                if (ICONS.hasOwnProperty(cell.value)) {
                    icon = ICONS[cell.value];
                }
                // Create the icon element and add it to the table cell
                $td.addClass('icon').html(_.template('<i class="icon-<%-icon%>" title="<%-AD_Servicecs_OK%>"></i>', {
                    icon: icon,
                    AD_Servicecs_OK: cell.value
                }));
            }
        });
        mvc.Components.get('table1').getVisualization(function(tableView) {
            // Register custom cell renderer, the table will re-render automatically
            tableView.addCellRenderer(new RangeMapIconRenderer());
        });
    });

**table_decorations.css**

    /* Custom Icons */
    td.icon {
        text-align: center;
    }
    td.icon i {
        font-size: 25px;
        text-shadow: 1px 1px #aaa;
    }
    td.icon .severe { color: red; }
    td.icon .elevated { color: orangered; }
    td.icon .low { color: #006400; }

    /* Row Coloring */
    #highlight tr td {
        background-color: #c1ffc3 !important;
    }
    #highlight tr.range-elevated td {
        background-color: #ffc57a !important;
    }
    #highlight tr.range-severe td {
        background-color: #d59392 !important;
    }
    #highlight .table td {
        border-top: 1px solid #fff;
    }
    #highlight td.range-severe, td.range-elevated {
        font-weight: bold;
    }
    .icon-inline i {
        font-size: 18px;
        margin-left: 5px;
    }
    .icon-inline i.icon-alert-circle { color: #ef392c; }
    .icon-inline i.icon-alert { color: #ff9c1a; }
    .icon-inline i.icon-check { color: #5fff5e; }
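For what it's worth, a hedged sketch of one way to cover several columns with a single renderer (this is not the example app's code; the field list, the `MultiFieldIconRenderer` name, and the `table1` id are assumptions based on the XML above): let `canRender()` accept every icon column and reuse the same ICONS lookup.

    // A sketch, assuming the ICONS map and the require([...]) wrapper shown above.
    var MultiFieldIconRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // One renderer for every column that should show an icon
            // (field names are assumptions based on the table command above)
            return _.contains(['GC', 'AD Services OK', 'DNS Registration', 'SYSVOL Shared'], cell.field);
        },
        render: function($td, cell) {
            var icon = ICONS.hasOwnProperty(cell.value) ? ICONS[cell.value] : 'question';
            $td.addClass('icon').html(_.template(
                '<i class="icon-<%-icon%>" title="<%-value%>"></i>',
                { icon: icon, value: cell.value }
            ));
        }
    });

    mvc.Components.get('table1').getVisualization(function(tableView) {
        // Register once; the table re-renders automatically
        tableView.addCellRenderer(new MultiFieldIconRenderer());
    });

You would still need to map each column's raw True/False values to the severe/elevated/low keys (as the `eval GC=IF(...)` line does for GC) for the icons to resolve.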

Searching issues in comparing data

Hi, Splunk beginner here. I'm having an issue forming the search syntax for comparing the largest number of client logs deleted accidentally on the current day against the average for the previous month. This is where I am at the moment: `source="deleted.xml" earliest=-d ACCIDENTAL_DELETES > ...`
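A rough sketch of one way to frame the comparison (the source and the ACCIDENTAL_DELETES term are taken from the snippet above; everything else is illustrative and should be adjusted to the actual data): count today's deletes, average the previous 30 daily totals, and compare the two.

    source="deleted.xml" ACCIDENTAL_DELETES earliest=-30d@d latest=@d
    | bin _time span=1d
    | stats count AS daily_deletes by _time
    | stats avg(daily_deletes) AS prev_month_avg
    | appendcols
        [ search source="deleted.xml" ACCIDENTAL_DELETES earliest=@d latest=now
          | stats count AS today_deletes ]
    | eval pct_of_average=round(100 * today_deletes / prev_month_avg, 1)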

Trouble Scheduling Alert With Search For Specific Time Range

For some reason, our network goes crazy every day from 2:30 to 2:35. I'm trying to schedule a daily alert that will perform a search from 2:30 to 2:35 and report on that data if it's bad/slow. I've created a daily scheduled alert that "Runs every day" at 15:00. I then tried to specify my search with an "Advanced" time range of -30min for "Earliest" and -25min for "Latest" but Splunk doesn't like this. How can I have an alert that is run daily that will only search for the specific time range of 2:30 - 2:35?
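A hedged sketch (assuming 2:30 means 14:30, to match the 15:00 schedule): absolute, day-snapped time modifiers avoid the relative-offset problem entirely, so the alert window stays fixed no matter exactly when the scheduler fires.

    your_search_here earliest=@d+14h+30m latest=@d+14h+35m

If the chained offsets are rejected, `@d+870m` / `@d+875m` are equivalent single-offset forms. With these baked into the search (or set as the alert's earliest/latest), the daily schedule can run any time after 14:35 and still cover exactly 14:30-14:35 of the current day.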

Disable index

We'd like to disable indexing to a certain index temporarily, but we don't have access to the forwarder. Will simply disabling the index in the Splunk UI do the trick? It shouldn't delete our data or cause any other issues, correct? Please let me know if anyone has any suggestions. Thanks!
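For reference, a hedged sketch of what the UI change amounts to in indexes.conf (the stanza name is a placeholder): disabling the index stops Splunk from writing new events to it without removing the existing buckets.

    # indexes.conf -- illustrative stanza name
    [my_index]
    disabled = true

Note that a disabled index is typically not searchable until it is re-enabled, which may matter if reports depend on it in the meantime.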

Can not configure the coldToFrozenDir directories but have all permissions

Hello again everyone,

I wrote an indexes.conf to set up an area for frozen data on a Windows Server 2012 R2, Splunk 6.5.2 single-server instance, pointing at an EMC Isilon drive shared over SMB (Server Message Block) as \\ssdr-isilon-smb\SplunkArchive. The indexes are in stanzas similar to the one below:

    [main]
    homePath = volume:primary/defaultdb/db
    coldPath = volume:primary/defaultdb/db/colddb
    coldToFrozenDir = \\sdr-isilon-smb\SplunkArchive\frozenarea\defaultdb\frozen
    frozenTimePeriodInSecs = 31536000
    thawedPath = $SPLUNK_DB/defaultdb/thaweddb

When I start Splunk, it fails, as shown in the final lines of splunkd.log:

    ERROR IndexConfig - In index '_audit': Failed to create directory '\\sdr-isilon-smb\SplunkArchive\frozenarea\audit\frozen' (The specified path is invalid.)
    FATAL IndexerService - Cannot load IndexConfig: In index '_audit': Failed to create directory '\\sdr-isilon-smb\SplunkArchive\frozenarea\audit\frozen' (The specified path is invalid.)

I can cut and paste the path into a browser and it shows the area I want to write these directories to, and the "splunkuser" account running Splunk has full permissions to write there. Any ideas on this? Many thanks.

If statement for earliest time

I have a search that needs to snap to either 7am (`-7h@d+7h`) or 7pm (`-7h@d+19h`), depending on whether the time of the search (`now()`) falls between 7am-7pm or 7pm-7am. For example, if it is 8:30am, I need my search to use `earliest=-7h@d+7h`, but if it is 21:15, I need it to use `earliest=-7h@d+19h`. I tried the following if statement, but it doesn't work:

    earliest=if(now()>="-7h@d+7h" AND now()<"-7h@d+19h", "-7h@d+7h", "-7h@d+19h")

This is for an embedded report, so I wasn't thinking I could use any XML or tokens. Thanks!
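A hedged sketch of one workaround (the base search is a placeholder): compute the correct earliest value in a `makeresults` subsearch and hand it back with `return`, since `earliest=` itself cannot take an `if()` expression.

    index=your_index your_terms
        [ | makeresults
          | eval hour=tonumber(strftime(now(), "%H"))
          | eval earliest=if(hour>=7 AND hour<19,
                             relative_time(now(), "-7h@d+7h"),
                             relative_time(now(), "-7h@d+19h"))
          | return earliest ]

The subsearch resolves to `earliest=<epoch>`, which the outer search picks up as its time range, so no XML or tokens are needed.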

Creating a stacked line chart not by time

Hi all,

Our machines run through various processes (each one is given a unique run_id), and each process can be broken down into different steps. What I want to do is create a stacked line chart (or area chart) where the duration of each step is shown for each run_id, along with a sum of all the steps. I've created two different queries to get the data close to what I want, but I'm not sure how to convert either into a readable line chart.

Sample table from query 1:

    run_id   duration   sum
    x        4          20
             5
             6
             5
    y        10         50

Duration is a multivalue field in this case, and the sum is just a single sum of all the steps.

Sample table from query 2:

    run_id   step   duration   cumulative sum
    x        1      4          4
    x        2      5          9
    x        3      6          15
    x        4      5          20
    y        1      10         10

This table shows the step name, and the sum is a cumulative sum (using streamstats). I need to use the run_id (run_ids are essentially a marker of when the process occurred) on the y-axis. I know that a stacked column chart would be a much better way to visualize the duration/sum of the steps, but we go through nearly a hundred runs a day and it's not feasible to produce that many columns. Does anyone have any advice on how to turn either of these tables into a readable line chart?
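A hedged sketch built on the query-2 shape (per-step durations by run_id; the base search is a placeholder): `chart` puts run_id on one axis and creates one series per step, which can then be rendered as a stacked area/line chart.

    ... base search producing run_id, step, duration ...
    | chart sum(duration) AS duration over run_id by step

With the Area or Line visualization in stacked mode, each run_id becomes a single point on the axis and the series heights add up to that run's total duration.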

Distinct count by hour by type

I currently have a search:

    ... | eval hour=strftime(_time,"%H")
        | streamstats time_window=1h dc(vehicle_id) AS dc_vid
        | timechart max(dc_vid) by hour fixedrange=false

This correctly produces the number of distinct vehicles on a particular route by hour. But now assume that there are two different vehicle types: bus and streetcar. So I want to modify the chart to show the same thing, but each bar should be a stacked bar composed of the number of distinct vehicles by `vehicle_type` by hour. I've tried all manner of fiddling with the search and I can't seem to get it.

BTW: the existing search shows each hour as a different colored bar. I don't actually care about that. For the new chart, two colors would be fine (one for each vehicle type in the stacked bar).
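A hedged sketch (assuming an hourly bucket is acceptable in place of the sliding streamstats window): `timechart` can compute the distinct count split by type directly, which renders as a stacked bar with one segment per vehicle_type.

    ... | timechart span=1h dc(vehicle_id) by vehicle_type fixedrange=false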

How to include a few events from the log prior to the event that triggered the alert?

I would like to set up a scheduled alert whose results include the event that triggered the alert, plus a few events prior to the "main" event.

Find Client PC Interactive Logon from Domain Controller Logs

I am looking to get desktop (domain user) interactive and RDP logons from Domain Controller logs. I don't know if this is possible. I have looked up and down splunk>answers and found similar questions answered, but none definitively answer my question in particular.

So when a domain user logs on to a desktop PC anywhere in the domain, I want that to show up in my search. So far I am searching for:

    (EventCode=528 OR EventCode=540 OR EventCode=552 OR EventCode=4624 OR EventCode=4648)

Really only 4624 gets results, and the only results I am seeing are for Logon Type 3, which corresponds to mapped network drives, printers, etc., not the stuff I'm interested in. I'm interested in Logon Types 2, 7, and 10 mostly. The list of Logon Types can be found at https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventID=4624

If I look at the Event Viewer on a client PC, I do see Event Code 4624 with the Logon Types I want (2, 7, 10), but these don't appear in domain controller logs. Am I missing something? I'm trying to avoid installing the UF and TA on each workstation, as it would likely push me over my license. Is there a way for me to tell whether a user performed an interactive/remote logon or unlock from my domain controller logs? Thanks for your help.
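For reference, a hedged sketch of the filter once the right events are available (assuming the Splunk Add-on for Microsoft Windows field extractions, where the extracted field is typically `Logon_Type`). As described above, interactive/unlock/RDP logons (types 2, 7, 10) are written to the workstation's own Security log, so the domain controller's log generally will not contain them.

    source="WinEventLog:Security" EventCode=4624 (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10)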

How to set up Multiple Developers authoring Multiple Dashboards with Version Control protection?

Hi,

Sorry if this has already been asked. It should be a common question, but I've not been able to find an answer by searching.

We would like to switch to using Splunk to serve hundreds of dashboards. The built-in Splunk web interface is great for a single person to author their first dashboard, but it does not seem meant for team development, as I don't see any change history being preserved: a new version of a dashboard overwrites the old one, and there is no record of who changed what.

So, how do teams author dashboards together? I've used the 'vi' editor to directly edit files in splunk/etc/apps/search/local/data/ui/views, but then it requires human intervention to go to http://splunk:8000/debug/refresh to pick up the new version of the file just edited. Is there a Unix-side command-line equivalent?

When you put those dashboards together using version control, how do you set up your Splunk instances? You'd need a production one where all the users use the production version of the dashboards, maybe a staging one for testing dashboards before production, and then one or more for each of your dozens of developers so they can develop in an isolated way. Any best practices? Any documentation on this?

Thanks,
Jill

Time Difference between events that reoccur with identical messages but at a different time

I am trying to find a query that can calculate the time difference between 2 events. It should give me the time per devname, but notice in the screenshot that the same device can have the same pair of events happen again; each clubbed two-event instance should be treated separately. The time calculation needs to start when `msg="*Backup to Master*"` and conclude when `msg="*Master to Backup*"`.

Here are two queries I tried unsuccessfully. Any help is much appreciated!

**Queries:**

    index=pci_logid=0103027001
    | transaction devname startswith=eval(msg="*Backup to Master*") endswith=eval(msg="*Master to Backup*")
    | search eventcount = 2
    | table devname duration _time

    index=pci_bjs_index msg="*Master to Backup*" OR msg="*Backup to Master*"
    | transaction devname startswith=eval(msg="*Backup to Master*") endswith=eval(msg="*Master to Backup*") keepevicted
    | eval StartTime=_time
    | eval EndTime=_time+duration
    | table devname StartTime, EndTime

**Events:**

![alt text][1]

Thanks in advance for input! Please also note the repeats of devnames mentioned above.

  [1]: /storage/temp/193173-events.jpg
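A hedged sketch of an alternative without transaction (the index and wildcards are taken from the question; the pairing logic assumes events sorted oldest-first): carry the previous event's time and message forward with streamstats, then keep only the Master-to-Backup events whose immediate predecessor for that devname was Backup-to-Master, so each repeated pair is measured separately.

    index=pci_bjs_index (msg="*Backup to Master*" OR msg="*Master to Backup*")
    | sort 0 devname _time
    | streamstats current=f window=1 last(_time) AS prev_time last(msg) AS prev_msg by devname
    | where like(msg, "%Master to Backup%") AND like(prev_msg, "%Backup to Master%")
    | eval duration=_time - prev_time
    | eval StartTime=strftime(prev_time, "%F %T"), EndTime=strftime(_time, "%F %T")
    | table devname StartTime EndTime duration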