Hello guys,
I have a problem with French logs, so I tried to create a props.conf and deploy it:
[fzs]
TIME_PREFIX = ^\([0-9]*\)\s
TIME_FORMAT = %d/%m/%Y %H:%M:%S
Log example:
**(1002561)** 01/04/2017 23:59:01 - blablabla
My understanding is that TIME_PREFIX makes Splunk skip the (number) and the space before the French date.
Should this work? My logs from April are not coming in, even though it worked from January to March 2017.
Thanks a lot!
↧
Why is my props.conf configuration no longer working on my French timestamp and FileZilla server logs?
↧
How to deploy windows TA over different environment / indexes
Hello,
I plan to deploy the Windows TA to collect logs from AD, and perhaps from other Windows servers/hosts as well.
However, I already have different indexes for different environments, so I don't want to use the default ones (windows, wineventlog, perfmon).
I use a deployment server, and I'd like to find the best approach to do this.
So far I'm thinking of creating multiple versions of the Windows TA (i.e., one per environment), each with a local inputs.conf specifying the index name, to be deployed to the UFs (a sketch follows below).
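A minimal sketch of such an env-specific override, assuming the stock WinEventLog stanzas and a hypothetical index name:

    # Splunk_TA_windows/local/inputs.conf for the "prod" environment (sketch)
    [WinEventLog://Security]
    disabled = 0
    index = prod_wineventlog

    [WinEventLog://Application]
    disabled = 0
    index = prod_wineventlog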
I will deploy the unmodified TA to all my search heads and indexers.
What do you think? Any other ideas?
Thanks.
↧
using javascript sdk with angular 2 webpack
Hi,
I am trying to write a hello-world application using the latest Angular 2 (which uses the webpack module bundler). The test application is supposed to log in and run a basic test query using the Splunk JavaScript SDK. Since the Splunk JavaScript SDK is based on RequireJS and Backbone, I am looking for steps on how to run it inside Angular 2 apps. Does anyone have steps for this?
Alternatively, is Splunk planning to support the JavaScript SDK with Angular 2?
thanks,
DS
↧
How do I Configure a Workflow to POST JSON Encoded Data?
When creating a workflow action with a POST request, everything is automatically URL-encoded according to the [docs.][1]
"Splunk software automatically HTTP-form encodes variables that it passes in POST link actions via URIs. This means you can include values that have spaces between words or punctuation characters."
The endpoint I am hitting expects the data to be in JSON format. Is there any way to change this so that the request sends application/json instead?
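For context, a sketch of the kind of POST workflow action in question; the stanza name, URI, and field are placeholders:

    # workflow_actions.conf (sketch) -- Splunk form-encodes this body, not JSON
    [send_to_api]
    type = link
    link.method = post
    link.uri = https://api.example.com/events
    link.postargs.1.key = host
    link.postargs.1.value = $host$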
[1]: http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/CreateworkflowactionsinSplunkWeb#Set_up_a_POST_workflow_action
↧
set earliest and latest time stamp
How do I set earliest to the 26th of the previous month and latest to the 25th of the current month? Hard-coded, one cycle would run from the 26th of February to the 25th of March. Please help with some examples. Thanks!
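A minimal sketch using chained relative-time modifiers, with a hypothetical index name: `@mon` snaps to midnight on the 1st of the current month, so `@mon+25d` is midnight on the 26th (i.e., through the end of the 25th), and `-1mon@mon+25d` is midnight on the 26th of the previous month.

    index=my_index earliest=-1mon@mon+25d latest=@mon+25d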
↧
size of batches
Once a day we drop 20k files into a batch directory for processing.
It can take 2-3 hours for all of the files to be processed (by a heavy forwarder).
During that time the file count remains constant, and when processing finishes, Splunk purges the 20k files.
Is there something we can do to chunk this into smaller batches so that, for instance, it digests 1,000 files, deletes them, and then moves on to the next 1,000 (or, better yet, deletes each file as soon as it has been processed)?
I ask because during this 2-3 hour window we need to make sure splunkd is not recycled; otherwise it will start over and create duplicate transactions.
An alternative is to use monitor inputs instead of batch, but these files won't change until 24 hours later, and keeping 40k files under monitoring (we have another set of files in a similar situation) seems an unnecessary burden on the system.
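For context, a sketch of the batch stanza in question; the path and index name are placeholders:

    # inputs.conf (sketch): sinkhole batch inputs are deleted after indexing
    [batch:///data/daily_drop]
    move_policy = sinkhole
    index = main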
↧
DR deployer storage setup
Do we need shared storage for the Deployer and the DR Deployer? Is there a better solution for syncing data between the two Deployers if we don't use shared storage? What is the standard approach Splunk teams use to keep DR deployment servers and Deployers up to date with their primary counterparts?
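One commonly suggested alternative to shared storage is a scheduled one-way copy of the deployer configuration; a minimal sketch, assuming default install paths and a reachable DR host named dr-deployer:

    # cron entry on the primary deployer (sketch): mirror shcluster config to DR hourly
    0 * * * * rsync -az --delete /opt/splunk/etc/shcluster/ dr-deployer:/opt/splunk/etc/shcluster/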
↧
How to edit my search to find today's license usage by index?
Hi,
I am trying to get today's license usage, so I have the following query:
| rest splunk_server=ec2-52-210-213-64 /services/licenser/pools
| stats sum(used_bytes) as used | eval usedGB=round(used/1024/1024/1024,3)
But when I try to get the split by index with:
index=_internal source="*license_usage.log" type=usage idx="*" earliest=@d latest=now| timechart span=1d sum(b) as b
| eval usedGB=round(b/1024/1024/1024,3)
I see that the values returned by the two searches are not the same. Does anybody know why? How can I get today's license usage by index?
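For reference, a sketch of one common way to get today's usage split by index, using the same license_usage.log fields (`b`, `idx`, `type`) and the same rounding as above:

    index=_internal source=*license_usage.log type=Usage earliest=@d
    | stats sum(b) as bytes by idx
    | eval usedGB=round(bytes/1024/1024/1024,3)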
Thanks in advance,
↧
How to find duration between multiple timestamps of different users?
Hi all,
The logs are bounded by date and user; there are more than 1,000 logs in total.
1. How should I list the timestamps? I am thinking of using stats to list the timestamps per date and user. Each user has multiple timestamps, varying from 2 to 10 per day. Sample format below.
    currentDate   User    list(time)
    3/30/17       user1   9:00:00, 10:00:00, 11:00:00
    3/30/17       user2   9:00:00, 12:00:00, 14:00:00
    3/31/17       user1   7:00:00, 17:00:00, 19:00:00
    3/31/17       user2   13:00:00, 17:00:00, 20:00:00
    ...
2. Then, in a new column, find the duration between the first and second timestamps, the second and third, and so on. Sample format below.
    currentDate   User    list(time)                      Duration in hours
    3/30/17       user1   9:00:00, 10:00:00, 11:00:00     1, 1
    3/30/17       user2   9:00:00, 12:00:00, 14:00:00     3, 2
    3/31/17       user1   7:00:00, 17:00:00, 19:00:00     10, 2
    3/31/17       user2   13:00:00, 17:00:00, 20:00:00    4, 3
    ...
3. Lastly, count the users who have a duration of 10 hours.
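A minimal sketch of one way to compute the consecutive gaps, assuming a hypothetical index name and that `user` is an extracted field; `streamstats window=2 range(_time)` yields the gap to the previous event within each date/user group (the first event in each group gets 0, which you would filter out):

    index=my_logs
    | eval currentDate=strftime(_time, "%m/%d/%y")
    | sort 0 user _time
    | streamstats window=2 range(_time) as gap_sec by currentDate user
    | eval gap_hr=round(gap_sec/3600, 2)
    | stats list(_time) as times list(gap_hr) as durations by currentDate user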
↧
Silent hit of limit
Dear Splunk employees,
Could you please implement an improvement to Splunk's notifications: if any configured limit is hit, inform the user.
I've run into this problem several times; the most recent case is as follows: we have a scheduled search that uses the map command to put a specific date into a dbquery search and then performs other calculations.
And since it runs as a subsearch, it is capped at 500,000 events. One day we exceeded that number but didn't notice, as there was no indication of it anywhere, so the results were misleading. :(
Please surface such notifications near the search bar; if it is a scheduled search, send an alert listing the results; and if it is a server-side limit, send an alert to the admin's email.
Hoping for your help! Thanks.
↧
How to create dynamic commands in search?
I would like to change the search commands within a dashboard.
I have a dropdown like this:
All / 2.4 GHz / 5.0 GHz
Then I would like the `timechart` to reflect what's selected in the dropdown.
source="snmp://Cisco-Wifi-clients"
| eval info=case(
radio=="Dot11Radio0"
,"avg(low) AS 2.4GHz"
,radio=="Dot11Radio1"
,"avg(high) AS 5.0Ghz"
,1==1,"avg(high) AS 5.0Ghz avg(low) AS 2.4GHz")
| timechart $info$
But this does not work.
Does anyone have another way to get this working?
Here is the base idea. This works:
    index=_internal user=* | timechart count by user limit=10
This does not:
    index=_internal user=* | eval test="count by user limit=10" | timechart $test$
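For comparison, a sketch of driving this with a dashboard token instead of an eval'd field; the SimpleXML below is illustrative only. Token substitution happens before the search is parsed, which is why a token can inject command arguments where an eval field cannot:

    <input type="dropdown" token="band">
      <label>Band</label>
      <choice value="avg(low) AS &quot;2.4GHz&quot;">2.4 GHz</choice>
      <choice value="avg(high) AS &quot;5.0GHz&quot;">5.0 GHz</choice>
      <choice value="avg(high) AS &quot;5.0GHz&quot; avg(low) AS &quot;2.4GHz&quot;">All</choice>
      <default>avg(high) AS &quot;5.0GHz&quot; avg(low) AS &quot;2.4GHz&quot;</default>
    </input>
    ...
    <search>
      <query>source="snmp://Cisco-Wifi-clients" | timechart $band$</query>
    </search>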
↧
Splunk_TA_New_Relic internal server error
I'm seeing internal server errors returned from the Splunk Add-on for New Relic:
    04-03-2017 16:37:04.992 -0400 ERROR AdminManagerExternal - Stack trace from python handler:
    Traceback (most recent call last):
      File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 129, in init
        hand.execute(info)
      File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 590, in execute
        if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunk_aoblib/rest_migration.py", line 38, in handleList
        AdminExternalHandler.handleList(self, confInfo)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper
        for entity in result:
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/handler.py", line 120, in wrapper
        raise RestError(500, traceback.format_exc())
    RestError: REST Error [500]: Internal Server Error -- Traceback (most recent call last):
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/handler.py", line 113, in wrapper
        for name, data, acl in meth(self, *args, **kwargs):
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/handler.py", line 299, in _format_response
        masked = self.rest_credentials.decrypt_for_get(name, data)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/credentials.py", line 184, in decrypt_for_get
        clear_password = self._get(name)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/splunktaucclib/rest_handler/credentials.py", line 388, in _get
        string = mgr.get_password(user=context.username())
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/utils.py", line 150, in wrapper
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/credentials.py", line 118, in get_password
        all_passwords = self._get_all_passwords()
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/utils.py", line 150, in wrapper
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/credentials.py", line 232, in _get_all_passwords
        all_passwords = self._storage_passwords.list(count=-1)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/client.py", line 1459, in list
        return list(self.iter(count=count, **kwargs))
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/client.py", line 1419, in iter
        items = self._load_list(response)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/client.py", line 1325, in _load_list
        entries = _load_atom_entries(response)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/client.py", line 201, in _load_atom_entries
        r = _load_atom(response)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/client.py", line 196, in _load_atom
        return data.load(response.body.read(), match)
      File "/opt/splunk/etc/apps/Splunk_TA_New_Relic/bin/splunk_ta_new_relic/solnlib/packages/splunklib/data.py", line 77, in load
        root = XML(text)
      File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1300, in XML
        parser.feed(text)
      File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1642, in feed
        self._raiseerror(v)
      File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
        raise err
    ParseError: not well-formed (invalid token): line 237, column 37
Based on the error, it seems related to authentication; however, I cannot even load the Configuration tabs to set up proxy info. I also can't find configuration examples showing how to make these changes from config files.
EDIT: some additional info.
We're running Splunk 6.5.2 in a distributed environment, all on Ubuntu Server 16.04 nodes. The add-on is installed on the indexers as well as on the standalone search head I'm using to test; the Splunk App for New Relic is installed on the standalone search head. Obviously I haven't been able to test the app, since the add-on is not working.
↧
Need to wrap a Python script in Windows Batch File
I have a Python script that I need to call from a Windows batch file, and I'm not sure how to invoke it. I'm trying the below, but it is not working.
    set ZENOSS_SERVER_STANZA=zenoss
    rem Paths with spaces must be quoted; apps live under etc\apps, not bin\splunk\etc\apps
    "C:\Program Files\Splunk\bin\splunk.exe" cmd python "C:\Program Files\Splunk\etc\apps\TA-zenoss\bin\zenoss_create_event.py" -s %ZENOSS_SERVER_STANZA% -f %SPLUNK_ARG_8%
I need to invoke this batch file from alerts. Any help is appreciated.
↧
Windows inputs.conf
Hi there. I'm wondering if anyone would be willing to share/paste some production inputs.conf files for their Windows hosts, related mainly (though not exclusively) to IT operations. PS: I am familiar with the input-side filters in 6.x vs. transforms; I would just like to see some fresh ideas. Thanks!
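To seed the discussion, a minimal sketch of a UF-side stanza; the event codes and index name are illustrative only:

    [WinEventLog://Security]
    disabled = 0
    index = wineventlog
    # 6.x-style input filtering: keep only logon/logoff-related event codes
    whitelist = 4624,4625,4634,4648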
↧
"Event Action" button not displayed for some users
I use Splunk as an admin and most of my users are power users. Following a syntactically valid search, a list of matching events is available to the user (so far, so normal). When an event is expanded, there is an 'Event Actions' button that allows users to, among other things, view the raw event.
Some of my users report that they don't have this button.
Because we have a Gordian knot of LDAP and AD authentication mechanisms, overlapping and inherited roles, and opaque role-index mappings, I can't easily figure out what makes those particular users different from the rest.
Question: is it possible to construct a role that prevents the "Event Actions" button from being displayed?
↧
Not seeing the _audit index/log from my windows U/Fs but I am seeing _internal
Hello,
I'm in a situation where I am not seeing the _audit index/audit.log from any of my Universal Forwarders on a single-instance search head/indexer. I AM seeing _internal from all of them, though. I have seen activity as of today (very little of it) in the audit.log file under Program Files\splunkforwarder\var...\audit.log, and Everyone has read access to it.
The outputs.conf file in the default directory has not been edited, and the entry forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry) is present.
I don't see anything in the local directory that would override this.
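One way to confirm the merged forwardedindex settings on a forwarder (a sketch; run from the UF's bin directory):

    splunk btool outputs list tcpout --debug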
Any ideas?
Thanks in advance.
↧
How to use dbx to lookup Ids?
I have a variable portion of a log file that is structured; all IDs are numeric. There are over 100K possible IDs, and the set is not fixed, so hard-coding individual field names doesn't work.
[id-1=cnt-1, id-2=cnt-2, id-3=cnt-3, ..., id-n=cnt-n]
This is parsed as (the capture-group name was lost in posting; `pair` below is a placeholder):
    | rex field=stats max_match=100 "(?<pair>\d+=\d+)"
I want to replace each numeric id-n with a name from a table. I have the DBX lookup defined, and it provides
(id-n, name) pairs.
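A minimal sketch of applying the lookup after extraction, assuming a hypothetical lookup named `id_names` with fields `id` and `name`:

    | rex field=stats max_match=100 "(?<pair>\d+=\d+)"
    | mvexpand pair
    | eval id=mvindex(split(pair, "="), 0), cnt=mvindex(split(pair, "="), 1)
    | lookup id_names id OUTPUT name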
Thanks from a newbie
↧
Is it possible to use fillnull for fieldnames with a specific pattern?
Hi,
Is it possible to use fillnull for fields matching a specific pattern? Wildcards are not working, and I want to avoid calling fillnull for each and every field.
For example, I would like to fillnull all field names that contain "ABC".
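A common workaround is `foreach`, which does accept wildcards; a minimal sketch that zero-fills every field whose name contains "ABC":

    ... | foreach *ABC* [ eval <<FIELD>> = coalesce('<<FIELD>>', 0) ]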
Cheers
Heinz
↧
Cluster bucket rebalancing: How long is too long?
My first few attempts at rebalancing were pretty great. No muss, no fuss. They ran for about 12 hours and like magic my cluster was firing on all cylinders. Beautiful.
'Stuff happens,' and I'm now in a situation where I've introduced new servers to the cluster (replacing old ones). Now I'm way out of balance. "No problem," says I. "Data rebalancing is awesome."
Not so fast. Literally. I fired it off late Friday night. By Monday morning the process was reporting 0.14% done, 0.01% more than right after it started 56 hours earlier. By my math, that's about 650 days to complete.
I stopped the process and restarted it for one index only: 648 buckets using 1 TB of disk. After running for the last 18 hours, it's at 3% complete. So slow as to be effectively unusable.
12 servers in the cluster, 4 of them new; all are:
- 20-core Xeons (2 threads per core)
- 22 x 1.6 TB SSDs
- 128 GB of RAM
- Splunk 6.5.1
- average load over the weekend was under 2
Any suggestions appreciated.
EDIT:
    > splunk btool server list clustering | grep max_peer
    max_peer_build_load = 2
    max_peer_rep_load = 5
    max_peer_sum_rep_load = 5
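(For anyone following along, a sketch of the CLI used to drive and check the rebalance from the cluster master; available since 6.4, and `-index` restricts it to a single index as described above:)

    splunk rebalance cluster-data -action start -index my_index
    splunk rebalance cluster-data -action status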
Jon
↧
How do I pull only specific status entries from a sourcetype at a given time?
I have **one sourcetype** with a column named **srno** for the ticket number.
***Scenario:*** A ticket's status gets updated through its life cycle/flow (i.e., first Open, then Assigned, WorkInProgress, Fixed, Closed). For the same ticket, Splunk therefore has multiple entries in the sourcetype, one for each update.
***Question***: How do I find only tickets that are open at a given time?
Example: srno = 1 is first opened, so its status is open; as it moves through its flow, the status changes, and it is now in "closed" status.
Similarly, srno = 2 is in Assigned status,
and srno = 3 and 4 are in open status.
Now I want a query that gives me only srno 3 and 4 at this point, since all the others were opened earlier but are now in a different status, so they should not appear in my result. (In the RDBMS world this is easy, as there is only one record per ticket at a given time and we can use a WHERE clause like status=open; a Splunk sourcetype, however, has an entry for every ticket update.)
**My current approach:**
At this point, I first run a query for status!=open (without dedup srno) and download it to Excel, then run a query for status=open (without dedup srno) and download that as well. I then combine the two downloads with VLOOKUP in Excel to get my desired result.
Can anyone think of a better solution in one query?
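A minimal sketch of the usual single-query approach: keep only each ticket's most recent status, then filter (the index and sourcetype names are placeholders):

    index=tickets sourcetype=ticket_updates
    | stats latest(status) as status by srno
    | where status="open"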
↧