Hi,
I'm trying to automate the packaging of a custom Splunk app (v6.5) so that I can deploy it to another environment. However, I'm having issues packaging the app from my PowerShell script. The command "splunk package app..." runs fine from the command line, but in PowerShell it hangs until I kill it. There are no error messages; it just seems to run forever without any output. Has anyone experienced this before, or have an idea of what I could be doing wrong?
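For reference, here's roughly what the script runs (the install path and app name are placeholders for my environment), plus a Start-Process variant I've been experimenting with to capture output:

# Direct invocation - fine in cmd.exe, hangs here:
& "C:\Program Files\Splunk\bin\splunk.exe" package app myapp

# Variant that redirects stdout to a file:
Start-Process -FilePath "C:\Program Files\Splunk\bin\splunk.exe" `
    -ArgumentList "package", "app", "myapp" `
    -RedirectStandardOutput "C:\temp\package_out.log" `
    -NoNewWindow -Wait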
Thanks!
↧
Running "Splunk package app" in powershell hanging until killed
↧
Microsoft Azure Active Directory reporting Add-on for Splunk: How is the data collected if this is installed on the search head?
If we elect to install this add-on on only a search head, how is the data collected? We have everything configured per the Details tab, but no luck displaying any results in Search.
↧
↧
Trying to create a saved search via the CLI: "Argument 'actions' is not supported by this handler"
Hi,
I'm trying to create a saved search in Splunk Enterprise 6.5 via the CLI. The exact command I'm running is:
**splunk add saved-search -name "X"**
However, I'm getting the error "Argument 'actions' is not supported by this handler."
I checked the documentation ([http://docs.splunk.com/Documentation/Splunk/6.5.0/Admin/CLIadmincommands](http://docs.splunk.com/Documentation/Splunk/6.5.0/Admin/CLIadmincommands)), which says the saved-search object is available in 6.5, but when I run the splunk.exe help for the add command, "saved-searches" is not listed.
Does anyone know if there's a way to save the search in 6.5?
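In the meantime, the REST fallback I'm considering (assuming the standard saved/searches endpoint; the credentials and search string are placeholders, and this is untested on my setup):

curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches -d name="X" -d search="index=_internal | head 10"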
↧
Upgrade of a Search Head Cluster (v6.4.2 > 7.0.0) - Can I do a rolling upgrade?
Hi all,
I have to upgrade a Search Head Cluster from version 6.4.2 to 7.0.0, and I have a question:
in https://docs.splunk.com/Documentation/Splunk/7.0.0/DistSearch/UpgradeaSHC it says:
> Starting with version 6.5, you can perform a rolling upgrade. This allows the cluster to continue operating during the upgrade. To use the rolling upgrade process, you must be upgrading from version 6.4 or later.
It's not clear to me whether I can perform a rolling upgrade directly from 6.4.2 to 7.0.0, or whether I must first upgrade from 6.4.2 to 6.5 (not a rolling upgrade) and only then perform the rolling upgrade to 7.0.0.
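For clarity, the member-by-member procedure I have in mind looks like this (a sketch based on my reading of the docs; $SPLUNK_HOME and the package step are placeholders):

# On each cluster member, one at a time:
$SPLUNK_HOME/bin/splunk stop
# ...install the 7.0.0 package over $SPLUNK_HOME...
$SPLUNK_HOME/bin/splunk start
# Confirm the member has rejoined before moving to the next one:
$SPLUNK_HOME/bin/splunk show shcluster-status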
Has anyone already performed this upgrade?
Bye,
Giuseppe
↧
Splunk Mobile: PDFs downloading with search rather than values
Hi all,
When I download a PDF, it contains the search queries instead of the rendered values.
Can anyone please help with this issue?
Thanks! [alt text][1]
[1]: /storage/temp/217867-asa.jpg
↧
↧
How do you pass saved search parameters to a Python script?
Hi,
I am trying to pass arguments from a saved search result to a Python script, but it does not work. Code below.
savedsearches.conf
[test_search]
action.log_message = 1
action.log_message.param.name = $name$
action.log_message.param.condition = $result.condition$
action.log_message.param.host = $result.host$
action.log_message.param.source = $result.source$
alert.digest_mode = 0
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = */1 * * * *
disabled = 1
dispatch.earliest_time = -5m
dispatch.latest_time = now
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = search
request.ui_dispatch_view = search
search = index=main host=test_host source=test_source status=* earliest=-2m latest=now | eval condition=if(status!="OK","CRITICAL","OK") | stats last(condition) as condition by host,source
alert_actions.conf
[log_message]
is_custom = 1
label = test
description = test
icon_path = appIcon.png
alert.execute.cmd = test.py
payload_format = json
disabled = 0
param.name =
param.condition =
param.host =
param.source =
test.py
#!/usr/bin/env python
import json
import sys
import datetime

# With payload_format = json, Splunk passes the alert payload as JSON on stdin;
# the param.* values from savedsearches.conf arrive under the "configuration" key.
payload = json.loads(sys.stdin.read())
config = payload.get('configuration', {})

timestamp = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
name = config.get('name')
condition = config.get('condition')
host = config.get('host')
source = config.get('source')

# Redirect stdout/stderr to a file so the print output lands somewhere visible.
f = open('temp.txt', 'w')
sys.stdout = f
sys.stderr = f
print(host, source, name, condition, timestamp)
f.close()
And I get no output. If I hard-code some values in the script directly, then the file is written every time the script is triggered.
Expected output
('test_host', 'test_source', 'test_search', 'condition', 'timestamp')
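For local testing, I've also tried feeding the script a fake payload on stdin, mimicking what Splunk sends with payload_format = json (the values are placeholders):

echo '{"configuration": {"name": "test_search", "condition": "OK", "host": "test_host", "source": "test_source"}}' | python test.py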
Thank you in advance.
Regards,
↧
Newsletter app: Will this be updated to be compatible with Splunk 7.0?
I put this on my Splunk 7.0 dev install and the Newsletter tab is essentially unreadable. I assume that's because it's only released for 6.3, as it says on Splunkbase.
↧
Indexes are not available to select from "Available search indexes" during role creation since upgrade to 7.0.0
Since upgrading to Splunk 7.0.0, I am not able to select the indexes from our index cluster under "Available search indexes" during user role creation in Splunk Web. The indexes do exist, and index-role authorization still works via the authorize.conf files within the search head cluster.
I have seen that this was a bug in early versions of Splunk 6, and this looks like the same issue.
Has anyone experienced this issue before, or in Splunk 7.0?
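For reference, the authorize.conf stanzas that still work look roughly like this (the role and index names are placeholders):

[role_myrole]
srchIndexesAllowed = index_a;index_b
srchIndexesDefault = index_a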
↧
Only show logs where field value has a decimal place
Hi all,
I'm trying to run a search that only finds events where field X equals a number with a decimal place. Searching simply X>0 returns all events with any number, which is a good start. Now I want to filter further so that only events where X has a decimal place are displayed. What would be the best way to accomplish this?
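One idea I've considered is comparing the value against its floor, so only values with a fractional part survive (untested sketch, assuming X is extracted as a number):

... X>0
| where X != floor(X)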
Thanks.
↧
↧
Shorten a URL to its Primary Domain Name from Bluecoat Logs
I'd like to shorten a URL collected from Bluecoat logs so that it only lists the primary domain name.
For example:
abcvod.abcnews.com to just abcnews.com
or
**anything.**google.com to just google.com
I've searched the previous questions and I've not found any working options.
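The closest I've come up with is a rex that keeps the last two dot-separated labels (the field name dest_host is a placeholder for whatever the Bluecoat host field is called, and I know this breaks on two-part TLDs like co.uk):

... | rex field=dest_host "(?<primary_domain>[^.]+\.[^.]+)$"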
↧
Why does my REST query to /services/authentication/users suddenly not work anymore?
Hi,
I use this query almost every day:
| rest /services/authentication/users
But today it doesn't work; I get this error message:
Failed to parse XML Body:
↧
Docker Config option for Splunk web.conf error
I am using the splunk/splunk:latest image (version 7.0.0) and Docker Compose file version 3.4.
I am also deploying an nginx proxy with context root /splunk, forwarding to Splunk Web on port 8000.
The web.conf is added to the container as a Docker config at /opt/splunk/etc/system/local/web.conf owned by root, and the container is also started as the root user.
The Splunk container fails to start with the error below (a workaround I'm considering is sketched after the compose file): chown: changing ownership of ‘/opt/splunk/etc/system/local/web.conf’: Read-only file system
web.conf:
-------------
[settings]
root_endpoint=/splunk
--------------------
Docker-Compose:
--------------------
version: "3.4"
services:
enterprise:
image: splunk/splunk
environment:
SPLUNK_START_ARGS: --accept-license
SPLUNK_USER: root
ports:
- "8000"
- "8088"
configs:
- source: web.conf
target: /opt/splunk/etc/system/local/web.conf
uid: '0'
gid: '0'
mode: 0440
deploy:
replicas: 1
restart_policy:
condition: on-failure
configs:
web.conf:
file: web.conf
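One workaround I'm considering is a plain bind mount instead of a swarm config, since bind mounts aren't mounted read-only the way configs are (untested sketch):

services:
  enterprise:
    image: splunk/splunk
    volumes:
      - ./web.conf:/opt/splunk/etc/system/local/web.conf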
↧
Cisco eStreamer eNcore is grouping events
Cisco eStreamer eNcore is grouping events, as seen on the indexer when searched. The old eStreamer client did not do this.
Is this normal behavior, for certain events to be grouped together?
Any help would be appreciated. Splunk v6.5.2; the Cisco devices are v6.
↧
↧
Heavy forwarder not sending new data
I installed a heavy forwarder on an instance to ingest exported data from our old SIEM, and I needed props set on the data so I don't have to bounce my indexers. I got two of my 14 GB files in without issue, and they have the correct fields assigned. I've added new files to the directories being monitored, but they're not being ingested. The files are seen by splunk list monitor, and the REST page
(services/admin/inputstatus/TailingProcessor:FileStatus) shows the two files that were ingested as:
/splunk/Splunk/IIS/172.30.59.32/IIS_10_16_results_172.30.59.32.txt
file position 1226615332
file size 1226615332
parent /splunk/Splunk/IIS/172.30.59.32/*.txt
percent 100.00
type finished reading
while the files that aren't being ingested look like:
/splunk/Splunk/IIS/172.30.59.32/IIS_11_16_results_172.30.59.32.txt
parent /splunk/Splunk/IIS/172.30.59.32/*.txt
type unknown (scanned)
A btool for inputs looks like:
/opt/splunk/etc/apps/iis/local/inputs.conf [monitor:///splunk/Splunk/IIS/172.30.59.32/*.txt]
/opt/splunk/etc/apps/iis/local/inputs.conf disabled = false
/opt/splunk/etc/apps/iis/local/inputs.conf host_segment = 4
/opt/splunk/etc/apps/iis/local/inputs.conf index = iis
/opt/splunk/etc/apps/iis/local/inputs.conf sourcetype = ms:iis:historic
/opt/splunk/etc/apps/iis/local/inputs.conf [monitor:///splunk/Splunk/IIS/PCWOSS01C/*.txt]
/opt/splunk/etc/apps/iis/local/inputs.conf disabled = false
/opt/splunk/etc/apps/iis/local/inputs.conf host_segment = 4
/opt/splunk/etc/apps/iis/local/inputs.conf index = iis
/opt/splunk/etc/apps/iis/local/inputs.conf sourcetype = ms:iis:historic
/opt/splunk/etc/apps/iis/local/inputs.conf [monitor:///splunk/Splunk/IIS/PCWOSS01D/*.txt]
/opt/splunk/etc/apps/iis/local/inputs.conf disabled = false
/opt/splunk/etc/apps/iis/local/inputs.conf host_segment = 4
/opt/splunk/etc/apps/iis/local/inputs.conf index = iis
/opt/splunk/etc/apps/iis/local/inputs.conf sourcetype = ms:iis:historic
And I'm seeing internal data from the HF, so I don't see how my outputs could be a problem, but here they are (with the debugging search I plan to run after them):
/opt/splunk/etc/system/local/outputs.conf [indexer_discovery:master1]
/opt/splunk/etc/system/local/outputs.conf master_uri = https://172.30.63.61:8089/
/opt/splunk/etc/system/local/outputs.conf pass4SymmKey = $1$seRzZzfgCPVD5mk=
/opt/splunk/etc/system/local/outputs.conf [tcpout]
/opt/splunk/etc/system/local/outputs.conf defaultGroup = group1
/opt/splunk/etc/system/local/outputs.conf forwardedindex.0.whitelist = .*
/opt/splunk/etc/system/local/outputs.conf indexAndForward = 0
/opt/splunk/etc/system/local/outputs.conf [tcpout:all_indexers]
/opt/splunk/etc/system/local/outputs.conf maxQueueSize = 500MB
/opt/splunk/etc/system/local/outputs.conf [tcpout:group1]
/opt/splunk/etc/system/local/outputs.conf autoLBFrequency = 30
/opt/splunk/etc/system/local/outputs.conf forceTimebasedAutoLB = true
/opt/splunk/etc/system/local/outputs.conf indexerDiscovery = master1
/opt/splunk/etc/system/local/outputs.conf useAck = true
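For what it's worth, this is the internal search I plan to use to see what the tailing processor says about those files (the host value is a placeholder for my HF):

index=_internal host=my_hf sourcetype=splunkd (component=TailReader OR component=BatchReader) "/splunk/Splunk/IIS/"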
↧
Varying behavior when assigning same value to different tokens
Hi,
I am updating two different tokens with the same value, but I am seeing different behavior (probably they are being treated as different data types).
$graph_time_earliest$ - 1500
$selection.earliest_GC$ - 1500
The initial value is the same for both tokens.
graph_time_earliest = 1506990840
selection.earliest_GC = 1506990840
Output:
$graph_time_earliest$ = 1506990840 - 1500
$selection.earliest_GC$ = 1506989700
For $selection.earliest_GC$, the specified value is subtracted and the output updated, but for $graph_time_earliest$ the "- 1500" is just appended as a string. I am not sure why one token is treated as a string. Can we specify a data type, or is it automatic?
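For context, a simplified sketch of how I suspect the two tokens end up being set in the Simple XML ($base_time$ is a placeholder for wherever the epoch value comes from; <set> does plain string substitution while <eval> evaluates the expression arithmetically):

<set token="graph_time_earliest">$base_time$ - 1500</set>
<eval token="selection.earliest_GC">$base_time$ - 1500</eval>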
Please suggest if I am missing something. Thank you.
↧
How do you write a rex to extract an unstructured field?
I have the log below. I want to extract the sixth column as a field; that column contains different types of values, some decimals and some single digits, as you can see. I tried IFX but it does not work as expected, and I don't know how to write a rex for these kinds of values. Please help me write a rex for this field (the direction I've tried is below the sample).
10/1/2017 0:10:01 all 9.13 0 1.68 6.6 0 82.59
10/1/2017 0:20:01 all 7.46 0 0 5.74 0 85.17
10/1/2017 0:30:01 all 9.05 0 129 1.53 0 88.13
10/1/2017 0:40:01 all 7.77 0 1.45 1.23 0 89.54
10/1/2017 0:50:01 all 7.08 0 1.5 1.41 0 90.02
10/1/2017 1:00:01 all 6.46 0 1.43 1.82 0 90.29
10/1/2017 1:10:01 all 45.4 0 4.2 29.27 0 21.13
10/1/2017 1:20:01 all 61.74 0 4.74 31.19 0 2.32
10/1/2017 1:30:01 all 64.17 0 4.72 26.31 0 4.81
10/1/2017 1:40:01 all 47.54 0 4.23 19.44 0 28.79
10/1/2017 1:50:01 all 44.59 0 3.68 17.47 0 34.27
10/1/2017 2:00:01 all 49.16 0 4.22 13.47 0 33.15
10/1/2017 2:10:01 all 41.98 0 3.95 16.47 0 37.59
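The direction I've been trying is a rex that skips a fixed number of whitespace-separated tokens and captures the next one; counting the date and time as tokens one and two, this should grab the sixth token (untested, and the {5} would need adjusting if the count is meant to start after the timestamp):

| rex "^(?:\S+\s+){5}(?<col6>\S+)"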
Thanks.
↧
Creating dashboards based on field-names rather than field-values in nested-json
Hi Splunkers,
I have events coming to Splunk Enterprise in the following JSON format:
{
  "ip": "1.1.1.1",
  "mac": "010203040506",
  "policies": {
    "policy_name_1": {
      "rule_name_in_policy1": {
        "status": "Unmatched",
        "timestamp": 15012456757
      }
    },
    "policy_name_2": {
      "rule_name_in_policy2": {
        "status": "Matched",
        "timestamp": 15012446751
      }
    },
    "policy_name_3": {
      "rule_name_in_policy3": {
        "status": "Matched",
        "timestamp": 15012456487
      }
    }
  },
  "username": "abstract"
}
I want to create a "matched" dashboard with a pie chart conveying "rule_name_in_policy1 is matched by 25 hosts, rule_name_in_policy2 is matched by 3 hosts, and so on". To achieve this, I can roughly imagine a search that stores the rule names in some variable_a and then does a "timechart count by variable_a", but I don't know how to write it. I also can't figure out how to filter down to all instances where policies.policy_name_x.rule_name_in_policyx.status=Matched.
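The rough direction I've sketched so far uses foreach to walk the nested status fields (untested; the index and sourcetype are placeholders):

index=main sourcetype=policy_json
| spath
| foreach policies.*.*.status
    [ eval matched_rules=if('<<FIELD>>'="Matched", mvappend(matched_rules, "<<MATCHSEG2>>"), matched_rules) ]
| mvexpand matched_rules
| stats dc(ip) AS hosts BY matched_rules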
I'm new to SPL. Can someone please help me with writing the correct search string?
↧
↧
--- Article Removed ---
*** RSSing Note: Article removed by member request. ***
↧
ITSI - Is there a setting to have the Service Analyzer screen refresh?
Hi,
The Service Analyzer screen appears to be static and does not change after the initial viewing of the services and KPIs.
I would like the screen to refresh every 5 minutes, but I can't seem to find documentation on how to do this (assuming my first statement is correct).
So, my questions are:
1) Is there a setting for this?
2) If so, what .conf file and parameter is it?
Thank you.
↧
Search data for All Time but only graph a specified time range
Hello,
I am charting IT help desk tickets and need to make a chart showing how many tickets are opened and closed every month. The timestamp for _time is the ticket's failure_date. To accurately reflect how many tickets are closed per month, I need to search All Time, so that a ticket opened in, say, December 2016 and closed in March 2017 is still captured in the graph.
Now I can get all the data to graph, but I would like to graph only selected months if possible. Below is the current search I am using (a possible filter is sketched after it):
sourcetype=Current_file
| where STATUS != "DRAFT"
| eval FAILURE_DATE=strptime(FAILURE_DATE, "%m/%d/%Y %H:%M")
| eval CLOSED_DATE=strptime(CLOSED_DATE, "%m/%d/%Y %H:%M")
| eval STATUS=mvappend("Open","Closed")
| mvexpand STATUS
| eval _time=case(STATUS="Open", FAILURE_DATE, STATUS="Closed", CLOSED_DATE)
| timechart span=1mon count by STATUS
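One direction I've considered is keeping the All Time search but inserting a filter just before the existing timechart to drop rows outside the window, for example the last six whole months (untested):

| where _time >= relative_time(now(), "-6mon@mon")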
↧