Channel: Questions in topic: "splunk-enterprise"

Why am I getting the error below when executing a simple query from the DB Connect connection Query tab?

I am using Oracle 11 Release 2 and have downloaded the corresponding JDBC driver (ojdbc6.jar) from the Oracle website. When I try to execute the query SELECT 1 FROM DUAL in the Query tab, I get the following error:

![query error][1]

  [1]: /storage/temp/63244-query.png

Filtered search from 2 searches

I have 2 searches:

1. Search(AAA) | rename _time as TimeA | table TimeA host
2. Search(BBB) | rename _time as TimeB | table TimeB host

How do I create a new search, Search(???) | table host (or Search(???) | table TimeA TimeB host), which will list only the hosts where TimeB is older (smaller) than TimeA? There might be more than one TimeA and TimeB result per host; in that case, just pick the latest of each to compare.
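A minimal sketch of one way to combine them into a single search, assuming AAA and BBB can be expressed as conditions of one base search; the AAA/BBB strings below are placeholders for the real criteria:

    (AAA) OR (BBB)
    | eval TimeA=if(searchmatch("AAA"), _time, null())
    | eval TimeB=if(searchmatch("BBB"), _time, null())
    | stats max(TimeA) as TimeA max(TimeB) as TimeB by host
    | where TimeB < TimeA
    | table TimeA TimeB host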

Discarding events using TRANSFORMS-null

I'm trying to bring in Cisco CDR files for some very basic splunk searches. The standard CDR format has a header row, then a "datatype" row, then the actual data. So the first two rows look something like this: "cdrRecordType","globalCallID_callManagerId","globalCallID_callId","origLegCallIdentifier","dateTimeOrigination","origNodeId","origSpan","origIpAddr","callingPartyNumber","callingPartyUnicodeLoginUserID","origCause_location","origCause_value","origPrecedenceLevel","origMediaTransportAddress_IP","origMediaTransportAddress_Port","origMediaCap_payloadCapability","origMediaCap_maxFramesPerPacket","origMediaCap_g723BitRate","origVideoCap_Codec","origVideoCap_Bandwidth","origVideoCap_Resolution","origVideoTransportAddress_IP","origVideoTransportAddress_Port","origRSVPAudioStat","origRSVPVideoStat","destLegIdentifier","destNodeId","destSpan","destIpAddr","originalCalledPartyNumber","finalCalledPartyNumber","finalCalledPartyUnicodeLoginUserID","destCause_location","destCause_value","destPrecedenceLevel","destMediaTransportAddress_IP","destMediaTransportAddress_Port","destMediaCap_payloadCapability","destMediaCap_maxFramesPerPacket","destMediaCap_g723BitRate","destVideoCap_Codec","destVideoCap_Bandwidth","destVideoCap_Resolution","destVideoTransportAddress_IP","destVideoTransportAddress_Port","destRSVPAudioStat","destRSVPVideoStat","dateTimeConnect","dateTimeDisconnect","lastRedirectDn","pkid","originalCalledPartyNumberPartition","callingPartyNumberPartition","finalCalledPartyNumberPartition","lastRedirectDnPartition","duration","origDeviceName","destDeviceName","origCallTerminationOnBehalfOf","destCallTerminationOnBehalfOf","origCalledPartyRedirectOnBehalfOf","lastRedirectRedirectOnBehalfOf","origCalledPartyRedirectReason","lastRedirectRedirectReason","destConversationId","globalCallId_ClusterID","joinOnBehalfOf","comment","authCodeDescription","authorizationLevel","clientMatterCode","origDTMFMethod","destDTMFMethod","callSecuredStatus","origConversationId","origMediaCap_Bandwidth","destMediaCap_Bandwidth","authorizationCodeValue","outpulsedCallingPartyNumber","outpulsedCalledPartyNumber","origIpv4v6Addr","destIpv4v6Addr","origVideoCap_Codec_Channel2","origVideoCap_Bandwidth_Channel2","origVideoCap_Resolution_Channel2","origVideoTransportAddress_IP_Channel2","origVideoTransportAddress_Port_Channel2","origVideoChannel_Role_Channel2","destVideoCap_Codec_Channel2","destVideoCap_Bandwidth_Channel2","destVideoCap_Resolution_Channel2","destVideoTransportAddress_IP_Channel2","destVideoTransportAddress_Port_Channel2","destVideoChannel_Role_Channel2","IncomingProtocolID","IncomingProtocolCallRef","OutgoingProtocolID","OutgoingProtocolCallRef","currentRoutingReason","origRoutingReason","lastRedirectingRoutingReason","huntPilotPartition","huntPilotDN","calledPartyPatternUsage","IncomingICID","IncomingOrigIOI","IncomingTermIOI","OutgoingICID","OutgoingOrigIOI","OutgoingTermIOI","outpulsedOriginalCalledPartyNumber","outpulsedLastRedirectingNumber","wasCallQueued","totalWaitTimeInQueue","callingPartyNumber_uri","originalCalledPartyNumber_uri","finalCalledPartyNumber_uri","lastRedirectDn_uri","mobileCallingPartyNumber","finalMobileCalledPartyNumber","origMobileDeviceName","destMobileDeviceName","origMobileCallDuration","destMobileCallDuration","mobileCallType","originalCalledPartyPattern","finalCalledPartyPattern","lastRedirectingPartyPattern","huntPilotPattern" 
INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(128),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(128),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,VARCHAR(50),UNIQUEIDENTIFIER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),INTEGER,VARCHAR(129),VARCHAR(129),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(50),INTEGER,VARCHAR(2048),VARCHAR(50),INTEGER,VARCHAR(32),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(32),VARCHAR(50),VARCHAR(50),VARCHAR(64),VARCHAR(64),INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,INTEGER,VARCHAR(32),INTEGER,VARCHAR(32),INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50),INTEGER,INTEGER,VARCHAR(255),VARCHAR(255),VARCHAR(255),VARCHAR(255),VARCHAR(50),VARCHAR(50),VARCHAR(129),VARCHAR(129),INTEGER,INTEGER,INTEGER,VARCHAR(50),VARCHAR(50),VARCHAR(50),VARCHAR(50)

I'm trying to discard that second row via the method listed on Splunk's "Route and Filter Data" article, but for some reason it isn't working (which is to say, the second row is being indexed). I suspect a problem with the regex in transforms.conf, but I'm really not sure. Here's what the relevant config files look like:

inputs.conf:

    [monitor://C:\Cisco_CDR\*\cdr*]
    disabled = false
    host_segment = 2
    index = cisco_cdr
    sourcetype = CiscoCDR

transforms.conf:

    [setnull]
    REGEX = ^INTEGER.*
    DEST_KEY = queue
    FORMAT = nullQueue

props.conf:

    [CiscoCDR]
    HEADER_FIELD_LINE_NUMBER = 1
    INDEXED_EXTRACTIONS = csv
    KV_MODE = none
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = false
    TIMESTAMP_FIELDS = dateTimeOrigination
    TIME_FORMAT = %s
    category = Structured
    description = Cisco Call Detail Record format
    disabled = false
    pulldown_type = true

    [source::C:\Cisco_CDR\*\cdr*]
    TRANSFORMS-null = setnull

Any help would be appreciated.
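For reference, a minimal sketch of the same nullQueue pattern with the transform bound to the sourcetype stanza instead of a source:: stanza; the stanza names and regex simply mirror the ones above and are only illustrative, not a confirmed fix:

    props.conf:
    [CiscoCDR]
    TRANSFORMS-null = setnull

    transforms.conf:
    [setnull]
    REGEX = ^INTEGER.*
    DEST_KEY = queue
    FORMAT = nullQueue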

Ever wonder which dashboards are being used and what users are using them?

The dashboard below should help answer that question for you. The User dropdown uses a `| rest` search to get a list of LDAP users, so if you don't have access to run `| rest` or aren't using LDAP, that dropdown won't populate, but the dashboard will still work fine; you just won't be able to look at all dashboard usage for a single user. You can drill down on any dashboard that shows in the chart to see the specific users that are using the dashboard per day. To go back to the main chart, select the "Reset Drilldown" button. NOTE: You will need tokenlinks.js available for the reset button to work. I got it from the 6.x Dashboard Examples app.
Form defaults (the dashboard's Simple XML tags were stripped in this post): time range earliest=-14d@d, latest=now; a User dropdown with an "All Users" option (label field display, value field userName, default *, token wrapped as user= ... OR user= ...). Dropdown populating search:

    | rest /services/authentication/users splunk_server=local
    | fields title type realname
    | rename title as userName
    | rename realname as Name
    | search type=LDAP
    | eval display=userName+" - "+Name
    | fields userName display

Panel 1: "Distinct count of users that visited each dashboard per day - (Top 25)" (Select a dashboard to see more info about it), driven by the form time range tokens $field1.earliest$ / $field1.latest$, with drilldown setting the dashboard token from $click.name2$:

    index="_internal" source=*access* user!="-" $user$ source="/opt/splunk/var/log/splunk/splunkd_ui_access.log" "en-US/app"
    | rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
    | search app="search" dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home" dashboard!="alerts" dashboard!="dashboards" dashboard!="reports" dashboard!="report"
    | bucket _time span=1d
    | stats dc(dashboard) as c by dashboard user _time
    | timechart span=1d limit=25 useother=f count by dashboard

Panel 2 (drilldown): "Distinct count of users that visited each dashboard per day":

    index="_internal" source=*access* user!="-" user=* "/$dashboard$?" source="/opt/splunk/var/log/splunk/splunkd_ui_access.log" "en-US/app"
    | rex field=referer "en-US/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
    | search app="search" dashboard!="job_management" dashboard!="dbinfo" dashboard!="*en-US" dashboard!="search" dashboard!="home" dashboard!="alerts" dashboard!="dashboards" dashboard!="reports" dashboard!="report"
    | bucket _time span=1d
    | stats dc(dashboard) as c by dashboard user _time
    | timechart span=1d limit=25 useother=f count by dashboard

Panel 3 (drilldown): "Distinct users that visited $dashboard$", using the form time range tokens $field1.earliest$ / $field1.latest$:

    index="_internal" user=* sourcetype=splunkd_ui_access user!="-" "/$dashboard$?" source="/opt/splunk/var/log/splunk/splunkd_ui_access.log" root="en-US"
    | bucket _time span=1d
    | stats values(user) as "Unique Users" by _time

Error when deleting an index

Hi, I was trying to delete an index from Settings -> Indexes -> Delete, and I got this error message: Timed out while waiting for splunkd daemon to respond (Splunkd daemon is not responding: ('Error connecting to /servicesNS/-/ticket/data/indexes: The read operation timed out',)). Splunkd may be hung. Now I cannot open the Indexes section. Could you help? Regards, /adrian

Merging Windows events where the username differs only in case

Hello, thanks in advance for your responses. Can I merge Windows events, in particular on the User_Name field, when the same username occurs with different upper/lower case? For example: DOMAIN\name, DOMAIN\NAME, DOMAIN\Name, or DOMAIN\namE. I want to create a search (regex or otherwise) that outputs a single result rather than multiple results. Is there a way to do this? Thanks. Mark.
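A minimal sketch of the usual normalization approach, lower-casing the field with eval before aggregating; the base search and the final stats clause are placeholders:

    your_base_search
    | eval User_Name=lower(User_Name)
    | stats count by User_Name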

What does sendCookedData actually do on a heavy forwarder (i.e. what does 'cooked' mean at a technical level)?

What transformations / processing happens when data is cooked on a heavy forwarder? Is it the same as the data being indexed just without local storage (barring also setting indexAndForward to true)? Or rather if an app says it has 'index time operations' will they happen during the heavy forwarder's processing of data? I can see that props.conf changes are applied but I don't have a lot of leeway for testing at the moment. I have a heavy forwarder sitting in front of an indexer cluster as a means of load balancing / homogenizing data that doesn't play nicely with load balancing or splunk in general. Some apps that people here have requested we get set up say they can't handle indexing in an indexer cluster, so I'm trying to verify if we can shove those out onto the heavy forwarder and end up with usable data in our cluster.

Are there a standard set of attack vectors to search and alert for?

So I wanted to put this question out to the community. I'm looking to ensure that I'm covering as many attack vectors as possible with my alerting. I know that all environments differ in many ways, but has the community come up with a list of common attack vectors (queries) that all networks should be looking for? Examples would be:

* SSH brute force attempts
* Inactive accounts being used
* Brute force attempts that have 1 success

I would really like to know what others are doing. No suggestion is too simple or crazy. If this has been discussed in the past, can you point me in that direction?
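As one illustration of the last example, a minimal sketch of a "many failures followed by a success" search; the index name and the action/src/user fields are hypothetical (CIM-style names) and would need to match your own authentication data:

    index=auth (action=failure OR action=success)
    | stats count(eval(action="failure")) as failures count(eval(action="success")) as successes by src user
    | where failures > 10 AND successes > 0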

How to change my stats sum(x) search to an hourly timechart sum(y)?

Hi, I have the following search which displays the sum of a field, but I am trying to build an hourly timechart that shows the sum for each particular hour:

    …..My Search……
    | rex "value(?<amount>\d+.\d+)"
    | stats count by amount
    | stats sum(amount) as total

How do I modify my search to display the hourly sum? Any help or suggestions?
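A minimal sketch of the hourly form, assuming amount is extracted by the rex above:

    …..My Search……
    | rex "value(?<amount>\d+.\d+)"
    | timechart span=1h sum(amount) as total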

Can I add more details to my license usage by time search to see how much is going to DEBUG logs?

I use the License Usage search (generally when I click through on a host or source from the License Usage page) and can manipulate the hosts or time blocks with no problem. But I'd like to narrow down the information and determine how much license usage is going to DEBUG logs. Here is my original search string:

    index=_internal source="*license_usage.lo*" type=Usage
    | bucket _time span=60m
    | stats sum(b) as bytes by _time h
    | eval mb=bytes/1048576
    | rename h as host
    | rename mb as Mbytes
    | search host="*-prd-*"

Where would I put the term "[DEBUG]" so that only events that include that word are counted? Thanks!

How to restrict permissions on a dashboard to only certain LDAP users?

I know there is a way to do this by creating a role and assigning that role to the LDAP group, but I want to know whether there is any other way to grant the dashboard directly to certain users of my choosing, instead of assigning that role to all the members of the group. Any suggestions would be helpful. Thanks.

Failed to load search page, 500 internal error

After loading the login page, I log in with my local account. Then it redirects to the following page. Anyone else ever encounter this issue? Splunk version 6.0.5

![alt text][1]

web_service.log:

    2015-10-08 18:31:42,587 DEBUG [5616b68b7e3edb950] _cplogging:55 - [08/Oct/2015:18:31:42] HTTP
    Traceback (most recent call last):
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 606, in respond
        cherrypy.response.body = self.handler()
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 25, in __call__
        return self.callable(*self.args, **self.kwargs)
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/routes.py", line 366, in default
        return route.target(self, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 38, in rundecs
        return fn(*a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 102, in check
        return fn(self, *a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 151, in validate_ip
        return fn(self, *a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 231, in preform_sso_check
        return fn(self, *a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 344, in check_login
        return fn(self, *a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 365, in handle_exceptions
        return fn(self, *a, **kw)
      File "", line 1, in
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 420, in apply_cache_headers
        response = fn(self, *a, **kw)
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/view.py", line 1007, in render
        can_alert, searches = self.get_saved_searches(app)
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/view.py", line 870, in get_saved_searches
        searches = en.getEntities('saved/searches', namespace=app, search='is_visible=1 AND disabled=0', count=-1, _with_new='1')
      File "/ngs/app/splunkp/splunk/lib/python2.7/site-packages/splunk/entity.py", line 131, in getEntities
        offset = int(atomFeed.os_startIndex or -1)
    AttributeError: 'str' object has no attribute 'os_startIndex'
    2015-10-08 18:31:42,588 INFO [5616b68b7e3edb950] _cplogging:55 - [08/Oct/2015:18:31:42] HTTP Request Headers:
      USER-AGENT: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11) AppleWebKit/601.1.56 (KHTML, like Gecko) Version/9.0 Safari/601.1.56
      CONNECTION: keep-alive
      COOKIE: session_id_8000=1d54e9b368326fcdf71333dc391b0d0976d98882; splunkweb_csrf_token_8000=15741175691427473097
      ACCEPT-LANGUAGE: en-us
      ACCEPT-ENCODING: gzip, deflate
      HOST: splunksearchhead-ist-prod-sox-1:8000
      Remote-Addr: 17.168.82.187
      ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

  [1]: /storage/temp/64255-screen-shot-2015-10-08-at-114350-am.png

Attempting to run Splunk, why am I getting "Problem parsing indexes.conf: Cannot create index 3rdIndex: path of homePath must be absolute"?

    [volume:primary]
    path = opt/splunk/splunk_data
    maxVolumeDataSizeMB = 2000000

    [3rdIndex]
    homePath = volume:primary/3rdIndex/db
    coldPath = volume:cold/3rdIndex/colddb
    thawedPath = $SPLUNK_DB/3rdIndex/thaweddb
    maxDataSize = auto_high_volume

When attempting to run Splunk, it results in the message: Problem parsing indexes.conf: Cannot create index 3rdIndex: path of homePath must be absolute ('opt/splunk/splunk_data/3rdIndex/db'). What's strange is that I have other indexers with the same stanzas in indexes.conf, except that on those, the volume definitions were split to a separate indexes.conf. Any ideas?
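For comparison, a sketch of the same volume stanza written with an absolute path (leading slash), which is what the error message appears to be asking for; the path value here is only an assumption based on the relative path above:

    [volume:primary]
    path = /opt/splunk/splunk_data
    maxVolumeDataSizeMB = 2000000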

After updating the Splunk App for Web Analytics to version 1.42, why do I now get zero results for a real-time search?

I updated the Web Analytics app, and now I get zero results. I get nothing in the real-time dashboard which, data model aside, I should be seeing. It did work before, not sure what happened... Anyone run into this?

How to manage tsidx files? Can these files be stored on an indexer or indexer cluster?

I have recently written Splunk searches which will search proxy logs for "unique" destination hosts (domains). My initial search filters out domains that we are not interested in receiving in the results. Tscollect is then used to write specific fields from the proxy logs to the namespace websense_exclude_ns. We are currently using version 6.2.3 of Splunk, and are not using Enterprise Security. I have two questions regarding this: 1. The namespace folder and tsidx files were created in the `/opt/splunk/var/lib/splunk/tsidxstats` folder of the search head. Is there a way for these files to be stored on an indexer (indexer cluster)? 2. Being that we do not have the Enterprise Security application and the SA-Utils app, is there a way to automate the management of the tsidx files?
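For context, a minimal sketch of the pattern described above: the first search writes the namespace and the second reads it back with tstats. The proxy search and field names are placeholders; only the namespace name matches the one in use:

    index=proxy sourcetype=websense dest_host=*
    | fields _time dest_host src_ip
    | tscollect namespace=websense_exclude_ns

    | tstats count from websense_exclude_ns groupby dest_host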

Using the transaction command to determine the length of an "active" session.

I have a system for which I'd like to be able to report on how much time individual users spend logged in. However, there are a few constraints:

* When a user opens a new session, it is logged as a Session_Start event. During the session, a user can either log off, ending their session completely (see Bob below), or disconnect (say by closing their laptop screen), which the application registers as a Disconnect but keeps the session alive until a 1 hour timeout passes, at which point the session is terminated (see Carol).
* There could also be a scenario where a user gets disconnected but is then able to reconnect, for example after losing wifi while moving between rooms in the office, or after closing their screen to go out for a quick lunch (see Alice).

    _time              UserID  EventType
    10/14/15 08:00 AM  bob     Session_Start
    10/14/15 10:00 AM  bob     Session_End
    10/14/15 08:00 AM  alice   Session_Start
    10/14/15 08:30 AM  alice   Disconnect
    10/14/15 09:00 AM  alice   Reconnect
    10/14/15 10:00 AM  alice   Session_End
    10/14/15 08:00 AM  carol   Session_Start
    10/14/15 10:00 AM  carol   Disconnect
    10/14/15 11:00 AM  carol   Session_End

Doing a nice and simple `transaction` is a starting point:

    | transaction UserID startswith=EventType=Session_Start endswith=EventType=Session_End

From there I can easily do a `timechart span=1d sum(duration) by UserID` to get the type of report I want. This works just fine in Bob's case. But Alice and Carol have both been given extra time. Alice disconnected at 8:30 and then reconnected at 9:00; that gives her an extra 30 minutes in that `sum(duration)`. The sum for Carol is off as well, since she simply closed her laptop screen (for example) and called it a day; the system ended her session an hour later after the timeout passed. I'm struggling to find a good way to approach this. At this point, I'd be happy with just solving the issue demonstrated in Carol's case. Solving Alice's scenario would be a bonus. Any thoughts?
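A minimal sketch of one possible alternative that sidesteps transaction: pair each Disconnect or Session_End with the preceding Session_Start or Reconnect per user, and count only those intervals as active time. The leading event filter stands in for the real base search, and each interval is credited to the day on which it ends. On the sample data this would yield 2h for Bob, 1.5h for Alice, and 2h for Carol:

    EventType=Session_Start OR EventType=Disconnect OR EventType=Reconnect OR EventType=Session_End
    | sort 0 _time
    | streamstats current=f window=1 last(_time) as prev_time last(EventType) as prev_type by UserID
    | eval active_seconds=if((EventType=="Disconnect" OR EventType=="Session_End") AND (prev_type=="Session_Start" OR prev_type=="Reconnect"), _time-prev_time, 0)
    | timechart span=1d sum(active_seconds) as active_seconds by UserID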

I have a huge JSON event... How can I parse it in Splunk?

Hi team, I have a huge JSON event out of which I want to parse only a few fields. I am using Splunk 6.2.2. I tried to use the field extractor, but it behaves inconsistently: sometimes it doesn't show the intended value and displays a blank (it's unable to get the field consistently).

    {"appName":"Jump","container":"Victor","file":"/root/go/src/winapp/Apping.po","func":"us/plugins/FriendlyPuddling.(*Bpp).Run","level":"info","line":226,"msg":"Publish:Count","time":"2011-10-11T12:30:20-05:00","values":"{\"Batch Processing Time Distributing\":{\"15m.rate\":0.9826029793743358,\"1m.rate\":0.7673693674154273,\"5m.rate\":0.9483630678135541,\"75%\":1.5436909406295e+12,\"95%\":1.74949310431975e+12,\"99%\":1.79064985634111e+12,\"99.9%\":1.799911077441748e+12,\"count\":216619,\"max\":1799940089952,\"mean\":1.2837037035597275e+12,\"mean.rate\":0.5485299415610289,\"median\":1.286439928609e+12,\"min\":486941182907,\"stddev\":3.0184924744029645e+11},\"Fetched Row Distribution\":{\"75%\":1978,\"95%\":17389.049999999865,\"99%\":39212.23999999999,\"99.9%\":46182,\"count\":2238,\"max\":46182,\"mean\":3060.0583333333334,\"median\":700.5,\"min\":1,\"stddev\":7079.815472480549},\"Fetching Time Distributing\":{\"15m.rate\":0.008681253423841689,\"1m.rate\":0.08356982935086356,\"5m.rate\":0.019364352288179648,\"75%\":7.172491315e+08,\"95%\":2.642261091499997e+09,\"99%\":5.112135282370001e+09,\"99.9%\":5.937299762975e+09,\"count\":2238,\"max\":5940880066,\"mean\":6.478074835447471e+08,\"mean.rate\":0.005666979245204095,\"median\":3.334446995e+08,\"min\":46682300,\"stddev\":9.676797891857135e+08},\"Process Time Distributing\":{\"15m.rate\":0.0002367706853682747,\"1m.rate\":1.6264306597703286e-15,\"5m.rate\":8.353983878553906e-06,\"75%\":4.80734255e+07,\"95%\":8.514038799999999e+07,\"99%\":1.51294623419999e+08,\"99.9%\":2.02493557e+08,\"count\":265,\"max\":202493557,\"mean\":2.8526392818867926e+07,\"mean.rate\":0.0006710334622203678,\"median\":1.4835899e+07,\"min\":358789,\"stddev\":3.140586975657637e+07},\"Processed Appling in Last Batch\":{\"value\":3989},\"Process Rows in Last\":{\"value\":68796},\"Row Fetching Rate\":{\"15m.rate\":41.18225939270318,\"1m.rate\":368.06998983168285,\"5m.rate\":84.80994027088143,\"count\":6422723,\"mean.rate\":16.26337709341583},\"Row Processing Rate\":{\"15m.rate\":12.538154985883653,\"1m.rate\":1.0371260388157613e-10,\"5m.rate\":0.5328477092064106,\"count\":6422723,\"mean.rate\":16.263630383100814},\"Total Applis Processed\":{\"count\":31782478},\"Total Process Timing\":{\"count\":572465},\"Total Query Execution Timing\":{\"count\":52154458},\"Total Rows Count\":{\"count\":514522696}}\n","version":"1.0.0"}

I want to extract only the fields below:

* Total Applis Processed
* Total Rows Count
* Total Query Execution Timing
* Row Fetching Rate
* Processed Appling in Last Batch

Could you please let me know if there is a way these fields could be easily extracted in Splunk? Any help would be highly appreciated.
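A minimal sketch of one way to pull a few of those counters out at search time with spath, assuming the event is indexed as shown. The nested values field is an escaped JSON string, so it is parsed in a second spath pass; key names containing spaces and dots may need extra handling (for example a rex fallback) if spath does not resolve them:

    your_base_search
    | spath output=values_json path=values
    | spath input=values_json output=total_applis_processed path="Total Applis Processed.count"
    | spath input=values_json output=total_rows_count path="Total Rows Count.count"
    | spath input=values_json output=total_query_execution_timing path="Total Query Execution Timing.count"
    | table total_applis_processed total_rows_count total_query_execution_timing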

How does Splunk 6.3 provide native support for ingesting data retrieved by Powershell scripts?

Hey all, We've recently upgraded to Splunk 6.3 and I had a quick question about this release note: "Powershell Input. Native support for ingesting data retrieved by Powershell scripts. See the Splunk Add-on for Microsoft PowerShell manual." Namely, what exactly changed? The Splunk Add-on for PowerShell doesn't appear to have been updated, but I don't want to remove it until I understand exactly how 6.3 natively supports "ingesting data retrieved by Powershell scripts." Can anyone point me to better documentation? Thanks!
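For reference, the native 6.3 input is configured through an inputs.conf stanza roughly along these lines; the stanza name, script, schedule, sourcetype, and index below are only illustrative, and the exact supported settings should be confirmed against the 6.3 documentation on monitoring Windows data with PowerShell scripts:

    [powershell://MyProcessList]
    script = Get-Process | Select-Object Name, CPU, WorkingSet
    schedule = */5 * * * *
    sourcetype = Windows:Process
    index = windows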

Settings in alert_actions.conf do not populate saved search default information

I was going to use my application's setup screen to populate the following items in the alert_actions.conf file:

    [email]
    action.email.cc = me@myserver.com
    action.email.to = him@myserver.com
    action.email.subject = Splunk Alert: $name$

I successfully set this information with my setup screen. However, when I go to a saved search and activate the email alert, these fields are not populated with the default information. Any ideas why? Thanks.

Sendemail command: How to include the "view dashboard" link when scheduling a PDF?

Hi, I am scheduling a PDF for a dashboard using the sendemail command. Why does it not include the "View dashboard" link in the message? Is it possible to add it?
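For reference, a sketch of a sendemail invocation that embeds the link manually in the message body instead; the recipient and dashboard URL are placeholders, while to, subject, message, and sendpdf are standard sendemail arguments:

    | sendemail to="someone@example.com" subject="Scheduled dashboard PDF" sendpdf=true message="View dashboard: https://yoursplunkhost:8000/en-US/app/search/your_dashboard"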