Package app error "an authentication error occurred: client is not authenticated"
Hello everyone,
I cannot package an app by following (or at least trying to follow) the instructions given at:
http://dev.splunk.com/view/SP-CAAAEMY#package
I did run this:
./splunk package app my_app
while in $SPLUNK_HOME/bin and also in $SPLUNK_HOME/etc/apps/my_app.
In both cases I was prompted for the Splunk username and password; I entered the correct credentials and received this error:
"an authentication error occurred: client is not authenticated"
I am the "splunk" user. And also the same if I am root and su - splunk
I have a Splunk Enterprise (expired Free) 6.5.2 on a Centos 7 Linux
please advise on how to prepare a .spl for my app.
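In case it comes to that, would a manual fallback like the following be acceptable? (This is only a sketch of what I have in mind; my understanding, which may well be wrong, is that the CLI also accepts an explicit -auth argument and that a .spl is simply a gzipped tar of the app directory.)
# attempt with explicit credentials (assuming this command accepts -auth, which I am not certain about)
$SPLUNK_HOME/bin/splunk package app my_app -auth admin:yourpassword
# manual fallback, assuming a .spl is just a gzipped tar of the app directory
cd $SPLUNK_HOME/etc/apps
tar -czf /tmp/my_app.spl my_app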
I am at your disposal for further info.
Thank you very much.
Best regards,
Altin
↧
↧
How can I debug my logs and whitelist a word?
Hi Everyone.
How do I discard all the debug logs for a sourcetype, but whitelist the word "AuthIDDetection" whenever it appears in events from that sourcetype?
Please, could someone help with this? For sourcetype "xyz" I am discarding the keyword "debug" from the events, i.e. we are discarding all the debug logs. I would like to keep discarding all the debug logs for sourcetype "xyz", but whitelist any of those debug events that contain the word "AuthIDDetection".
Current props.conf:
[sourcetype-xyz]
TRANSFORMS-set=xyz-setnull,setparsing
Current transforms.conf:
[xyz-setnull]
REGEX= debug|\\|Notice
DEST_KEY=queue
FORMAT=nullQueue
Could you please help with this?
FYI, I am following this documentation: http://docs.splunk.com/Documentation/Splunk/4.3.1/Deploy/Routeandfilterdatad#Discard_specific_events_and_keep_the_res
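My props.conf already references a second transform named setparsing, but that stanza is not in my transforms.conf. Would something like the following be the right direction? (This is only a sketch based on the "discard specific events and keep the rest" pattern from the docs; the xyz-setparsing stanza name is mine.)
props.conf:
[sourcetype-xyz]
TRANSFORMS-set = xyz-setnull, xyz-setparsing
transforms.conf:
[xyz-setnull]
REGEX = debug
DEST_KEY = queue
FORMAT = nullQueue
# route any debug event containing AuthIDDetection back to the index queue
[xyz-setparsing]
REGEX = AuthIDDetection
DEST_KEY = queue
FORMAT = indexQueue
My understanding is that the order in TRANSFORMS-set matters: xyz-setnull first sends every debug event to the nullQueue, and xyz-setparsing then routes any event containing AuthIDDetection back to the indexQueue so it still gets indexed. Is that correct?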
↧
↧
Backslash regex WinEventLog
Hi guys,
I have the log below and need to get the third part of this log using a regex. Can you help me with this?
String samples:
`D:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\RSTempFiles\reportserver\e5f6c90d\311c6586\assembly\dl3\380c6db9\00776c62_b2e8cc01\__AssemblyInfo__.ini D:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\RSTempFiles\reportserver\e5f6c90d\311c6586\assembly\dl3\380c6db9\00776c62_b2e8cc01\__AssemblyInfo__.ini D:\Program Files\Microsoft SQL Server\MSRS11.MSSQLSERVER\Reporting Services\RSTempFiles\reportserver\e5f6c90d\311c6586\assembly\dl3\380c6db9\00776c62_b2e8cc01\__AssemblyInfo__.ini`
String that I want: `MSRS11.MSSQLSERVER\Reporting Services\RSTempFiles\reportserver\e5f6c90d\311c6586\assembly\dl3\380c6db9\00776c62_b2e8cc01\__AssemblyInfo__.ini`
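For illustration, would a rex along these lines be the right direction? It simply anchors on the instance name from the sample above (MSRS11.MSSQLSERVER) and captures lazily up to the first .ini, so it would need adjusting if the instance name varies; the index and sourcetype here are only placeholders.
index=main sourcetype=WinEventLog*
| rex "(?<report_file>MSRS11\.MSSQLSERVER.+?\.ini)"
| table _time report_file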
Thanks.
↧
Alert when anomaly occurs
I have what seems to be a relatively simple request but can't seem to get it right.
Let's say I have a user logging into a system, they typically use the same IP. At a certain point, I want to mark them as "flagged" when that IP changes. I've done similar work with groundspeed correlation but can't seem to get this right. There is a lot of "proprietary" code I can't disclose so this will have to be done with some basic back and forth text.
1. User Logs in from X (0-500 times) in the past 30 days
2. User becomes flagged (based on criterion I have defined)
3. Subsequent events NOT matching X (above) generate alerts*
*The caveat here is that I want the alert/report to display the CHANGE:
1. How many times did this change from X to Y occur?
2. Did X go to Y and then back to X?
3. Did X go to Z and then back to X?
Think of this as anomaly detection based on multiple key-value pairs. I apologize if this doesn't quite make sense but I'll come up with a better way to represent it as I go along.
My final goal is to be able to correlate events quickly across multiple sourcetypes, creating "transactions" where we can view an anomaly, and mark it as a potential alert vector. The values may be IP[s], they could be usernames, they could be workstation names. Essentially, any of the above.
Another great example is a login, externally, to a web portal. When that user logs in, there are network logs, web server logs, etc. that match these all together. Rather than having to go from one search to another, I'd like to search on ONE value and have it give me all pertinent data.
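To make the idea more concrete, here is a rough sketch of the kind of thing I am picturing for steps 1-3 (the index, sourcetype, and the user/src_ip field names are placeholders, and the logic only approximates the flagging I described):
index=auth sourcetype=login earliest=-30d user=* src_ip=*
| sort 0 user _time
| streamstats current=f last(src_ip) as prev_ip by user
| where isnotnull(prev_ip) AND src_ip!=prev_ip
| stats count as ip_changes values(src_ip) as observed_ips latest(src_ip) as latest_ip by user
That would at least count how many times the value changed per user; the X-to-Y-and-back-to-X ordering questions would still need something more, which is where I am stuck.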
↧
What do we mean by multiple root event search in Data Model Acceleration?
Hello to all the Splunkers!
I have a very important question which needs to be addressed before we do an uplift of our Splunk version.
We are planning to uplift our Splunk version from 6.3.2 to **6.6.2 or 6.6.3**
We are using data model acceleration in our current Splunk version i.e. **6.3.2**
As Splunk **6.6.3** is a very new release (21 August 2017), my mind says we should go with **6.6.2**.
Here comes the bone of contention: I see the following in the 6.6.3 release notes:
**Date resolved Issue number Description**
**2017-07-25 SPL-142801, SPL-142771 Only one root event search in a DM gets accelerated**
I have no idea what is meant by the issue above, and it is making me wonder which Splunk version I should go for.
As far as my limited knowledge of Splunk goes, I understood that we can have only one root event per data model, and that is the way our current data models are designed.
Please help me clear up my understanding so we can decide which Splunk version to go for.
Thanks in advance!
Regards,
Inderjot Singh Rasila
↧
↧
Query many fields with the same part in the name
Hi,
I have events in one sourcetype with over 90 similar fields, like field1, field2 ... field90.
I can write a query like: search index=a sourcetype=2 field1=* field2=* ... field90=* | stats min(field1), max(field1), min(field2), max(field2), ...
Is there a way to reduce this long query to something like: index=a sourcetype=2 field*=* | stats min(field*) max(field*) ?
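I am not sure whether stats accepts wildcards like that in my version, so as a fallback I was wondering whether reshaping the data would work, something like this (index, sourcetype, and field names as in the example above):
index=a sourcetype=2
| table _time field*
| untable _time field_name value
| stats min(value) as min_value max(value) as max_value by field_name
which, if I understand untable correctly, would give one row per field with its min and max.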
THX
↧
Reading 1000+ overwritten json files on time interval
I have 1000+ JSON files located in a directory, and those files are overwritten every day. The file names start with the same characters, as shown below:
1000010496,1000011820,1000013553,1000010097,1000010362...
My issue is that the Splunk forwarder is not reading all the files. I have tried flushing the fishbucket, deleting the indexed data, setting crcSalt, and adding a timestamp to the file name, and none of these have helped me get the entire data set; only a small number of the source files are showing in Splunk. How can I read these 1000+ files repeatedly without missing data?
The JSON files start like below:
$result = [
{
'advisory_type' => 'Security Advisory',
'date' => '10/12/17',
'advisory_name' => 'CL-SA-2017:0061',
} ....
....
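For reference, this is roughly the kind of monitor stanza I have been trying (the path, sourcetype, and initCrcLength value are placeholders; crcSalt = <SOURCE> is the literal value from the docs):
[monitor:///opt/data/json_reports]
sourcetype = my_json_reports
crcSalt = <SOURCE>
# guess: raise the initial CRC length, since all of the files start with near-identical content
initCrcLength = 2048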
Thanks in advance.
↧
Password Spraying Query
Hi,
I am trying to create a query that would list all denied logons (EventCode 4625) from a single workstation to many hosts using one or more account IDs. This needs to be within a certain time window, say 1 minute.
How can I achieve this? So far I came up with this:
index=win_sec (EventCode=4625 AND Logon_Type=3) (Account_Name!="*$" AND Account_Name!="Guest" AND Account_Name!="Administrator") |
eval Account_Name=mvindex(Account_Name,-1) |
bucket _time span=1min |
chart values(host) as host, values(Workstation_Name) as workstation, values(Failure_Reason) as failureReason, values(_time) as timeSpan over Account_Name
This gives me a starting point but I cannot tell if the denied logons happened within the same time window. What am I missing here?
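In case it clarifies what I am after, I was picturing something along these lines, grouping by workstation and the one-minute bucket and then thresholding (the threshold of 5 is arbitrary):
index=win_sec EventCode=4625 Logon_Type=3 Account_Name!="*$" Account_Name!="Guest" Account_Name!="Administrator"
| eval Account_Name=mvindex(Account_Name,-1)
| bucket _time span=1m
| stats dc(host) as distinct_targets dc(Account_Name) as distinct_accounts values(host) as targets values(Account_Name) as accounts by Workstation_Name, _time
| where distinct_targets > 5
But I am not sure this is the right way to tie the events to the same window.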
Thank you,
Rob
↧
Renewing or generating the files appsCA.pem and appsLicenseCA.pem
I have created my own certificate authority. All certificates (root, server, and web) were generated and applied successfully, but I have an issue with my apps updating via Splunk Web, with the following log messages:
08-29-2017 21:30:42.179 +0300 ERROR SSLCommon - Can't read certificate file errno=33558530 error:02001002:system library:fopen:No such file or directory
08-29-2017 21:30:42.179 +0300 ERROR HTTPClient - Couldn't initialize SSL Context for HTTPClient in loadAppLicenseSSLContext
08-29-2017 21:30:44.076 +0300 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
08-29-2017 21:30:44.551 +0300 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
08-29-2017 21:30:45.022 +0300 WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read server certificate B', alert_description='unknown CA'.
08-29-2017 21:30:45.022 +0300 ERROR ApplicationUpdater - Error checking for update, URL=https://apps.splunk.com/api/apps:resolve/checkforupgrade: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
How can I generate or renew the appropriate files, appsCA.pem and appsLicenseCA.pem?
Thanks!
↧
↧
Should I "normalize" data prior to indexing?
I have the opportunity to pull in some ticket system data and create some statistics / visualizations. The data consists of many “categories”. However, there are some details in the SUMMARY field that keep me from grouping/counting etc. by SUMMARY, as the SUMMARY value is unique in the last couple of characters. Here’s a sample of the SUMMARY field data:
Pastebin extraction fn:23l4dixr
Pastebin extraction fn:xx3l9dib
Pastebin extraction fn:dk244diL
I would like to group/count by "**Pastebin extraction**". My first (successful) attempt was to build regexes that I applied to the file BEFORE pulling it into Splunk, removing the unique fn:xxxxxxxx at the end of the SUMMARY field. I then created a separate index and pulled the data in using the CSV sourcetype. Thanks to the column headers, Splunk had no issues parsing the field data. This allowed me to group/count, which was a good learning experience in and of itself. But now I have no details if I need them.
It seems that most folks likely don’t massage data prior to a forwarder picking up the data. Perhaps then, the normalization, if you will, occurs just prior to indexing? Or perhaps during query? Maybe it’s possible either way?
At any rate, I’d appreciate a breadcrumb / link to some reading on how to remove the step of pre-processing of the data and to perform this a bit further down the line.
Is learning to properly use props.conf and transforms.conf my only (or best) approach?
What if I want to retain the unique details “just-in-case” and don’t want it removed prior to indexing?
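For example, is a purely search-time approach like the following reasonable? (The index and sourcetype are placeholders; the rex just strips the trailing fn: token into a separate normalized field, leaving the raw event untouched.)
index=tickets sourcetype=csv
| rex field=SUMMARY "^(?<summary_base>.+?)\s+fn:\S+$"
| stats count by summary_base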
Apologies if my terminology is not up to snuff... just getting started with Splunk.
Thanks,
Sudsy
↧
Normalizing IBM log collector data into Splunk fields
I'm trying to monitor log data like that displayed below, and extract the fields into ones that can be used in Splunk:
Nov 6 07:51:03 S10125BA QAUDJRN: [JS@0 event="JS-Actions that affect jobs" event_type="S-Start" sev="1" actual_type="JS-S" job_type="Subsystem monitor" job_sub_type="No subtype" chg_job="ALLSYL100" chg_user="QSYS" chg_job_no="866512" effective_user="QSYS" jobd_name="" jobd_library="" jobq_name="" jobq_library="" outq_name="*DEV" outq_library="" printer_device="PRT01" library_list="QSYS QSYS2 QHLPSYS QUSRSYS QGPL QTEMP" eff_group_prf="" supplemental="" jrn_seq="9863803" timestamp="20161106075103429000" job_name="ALLSYL100" user_name="QSYS" job_number="866512" eff_user="QSYS" logical_partition="001" admin_user="yes"]
The log should begin with JS@O event=. The fields I'm most interested in are:
JS@O event
event_type
actual_type
job_type
effective_user
timestamp
job_name
job_number
admin_user
I've tried using the Splunk field extractor but have had no luck pulling out the fields I need. Please help!
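For illustration, even a few blunt inline rex extractions like these would do (the index and sourcetype names here are only placeholders), though I assume there is a cleaner way to pull all the key="value" pairs out at once:
index=os400 sourcetype=qaudjrn
| rex "event_type=\"(?<event_type>[^\"]+)\""
| rex "job_name=\"(?<job_name>[^\"]+)\""
| rex "admin_user=\"(?<admin_user>[^\"]+)\""
| table _time event_type job_name admin_user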
↧
Getting "External search command 'predict' returned error code 1" when using "Forecast timeseries assistant in Splunk MLTK app
Hi All,
My complete query to predict the future forecast is shown below:
index=predict sourcetype=anktest | search busy!=null | timechart count(eval(busy>500)) as critical , count(eval(write>500)) as overwrite | eval serverbusy=critical | table _time serverbusy
| predict "serverbusy" as prediction algorithm="LLP" future_timespan="5" holdback="0" lower"95"=lower"95" upper"95"=upper"95" | `forecastviz(5, 0, "serverbusy", 95)`
I am using the Kalman filter algorithm; after clicking the Forecast button I am getting the error below:
External search command 'predict' returned error code 1.
What do I need to do to get rid of this error?
↧
Getting the following error when I click on Fit Model: Error in 'fit' command: Unsupported platform: Windows x86
I am getting the following error when I click on Fit Model: "Error in 'fit' command: Unsupported platform: Windows x86".
↧
↧
Sending the same data to 2 different Splunk Enterprise platforms
Hello,
I have a requirement to send the same data from the SplunkForwarder agents to 2 different Splunk Enterprise platforms.
I need 2 different solutions:
1. How can we achieve this by changing the configs on the SplunkForwarder agents only? (see the sketch after this list)
2. How can we achieve this by changing the configs on the indexers or HeavyForwarders only?
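For option 1, would an outputs.conf on the forwarders along these lines be the idea? My understanding is that listing both groups in defaultGroup clones the data to each group, and that the same stanzas would also work on a heavy forwarder for option 2.
# placeholders: replace the group names and indexer host:port values with your own
[tcpout]
defaultGroup = site_a_indexers, site_b_indexers
[tcpout:site_a_indexers]
server = indexer-a.example.com:9997
[tcpout:site_b_indexers]
server = indexer-b.example.com:9997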
Thanks in advance.
Regards,
Thippesh
↧
How to send dashboard panel results as an email attachment
I have a dashboard comprising 2 panels. I want to export and send both panels' results as a CSV attachment via email whenever required. A single file with a couple of tabs OR two different files would be fine. The dashboard should work in such a way that once I enter the date and time range and submit, the panel results are delivered to the given email address as attachments.
Please guide me on the possible ways to achieve this.
↧
Upgrading Multi-Site Indexer Cluster
We are looking to upgrade our entire environment from 6.6.0 to 6.6.3; a software bug keeps causing the indexers to crash, so we have lost an entire site for the past few days. When it comes to upgrading, the docs state to wait for the site's replication and search factors to be met, and as the indexers have been down for a few days this may take some time. So is it advised to wait for this to happen before upgrading the next site, i.e. waiting 24 hours before starting to upgrade the second site? Will having each site on a different version cause any issues, or having one site on a different version than the Cluster Master or Search Heads?
↧
Should I use one or two TCP/UDP ports for two different syslog sources if I want them in separate sourcetypes?
In my app, I want syslogs from two different sources in two different sourcetypes (since they are of different types). I have two options for this:
- enable two ports and assign a different sourcetype to each (see the sketch after this list)
- collect them on a single port and assign different sourcetypes using regexes (this will require much more analysis of the logs)
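For the first option, the inputs.conf would be as simple as something like this (the ports and sourcetype names are only placeholders):
# one network input per source, each with its own sourcetype
[udp://1514]
sourcetype = vendor_a:syslog
connection_host = ip
[udp://1515]
sourcetype = vendor_b:syslog
connection_host = ip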
What is the recommended approach?
Thanks,
Kashyap
↧
↧
Python for Scientific Computing for Windows 32-bit is not available on the Splunk site; where can I download it?
Python for Scientific Computing for Windows 32-bit is not available on the Splunk site. Where can I download Python for Scientific Computing for Windows 32-bit?
↧
Why is my Splunk search running for 10 minutes when embedded in .js code?
I followed along with the following example and created a search which leverages token manipulation. The search takes about 10 minutes to run. The same search runs in less than 3 seconds when run manually (e.g. typing the query into the search box). I am trying to understand why the search runs so long, and sometimes times out, when embedded within the .js code.
Example: Token manipulation using a Simple XML extension
http://dev.splunk.com/view/SP-CAAAE5J
search = eventtype=msad-account-lockout Account_Name="someaccountname" | eval src_nt_host=if(isnull(src_nt_host),if(isnull(src_host),host,src_host),src_nt_host) | eval lockout=if(EventCode==644 OR EventCode==4740,"Yes","No") | stats latest(_time) as time,latest(src_nt_host) as target-host,latest(lockout) as lockedout, by dest_nt_domain, user | search lockedout="yes" | eval time_date_stamp = strftime(time, "%Y-%m-%dT%H:%M:%S%z") | rename dest_nt_domain as target-domain | table time_date_stamp,user,lockedout, target-host, target-domain
↧
Matching an IP address from a lookup table of CIDR ranges
I am trying to search for events where the destination IP is in a lookup table consisting of a list of CIDR ranges (and three other columns that note the zone, firewall, and context), and I'm having issues getting the output to return the subnets that matched the src and dest IPs. My search is as follows:
index=symantec sourcetype=symantec:ep:risk:file action=allowed OR action=deferred AND Risk_Action="Virus found" | rename actual_action as "Action" dest as "Host" dest_ip as "Host IP" user as "User" Risk_Action as "Detection Type" signature as "Malware Name" | fields "Host IP"
| lookup ip_cidr cidr_range as "Host IP" OUTPUT cidr_range as ip_match
I followed the info from this link: https://answers.splunk.com/answers/305211/how-to-match-an-ip-address-from-a-lookup-table-of.html. The events are returned with the Host IP field and the ip_match field, but the value of the ip_match field is "NONE".
What I'm trying to do is have each Host IP compared to the CIDR ranges; when one matches, pull the other three fields so I can create a table that identifies the location of each system.
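For reference, my understanding is that the lookup definition itself has to be told to do CIDR matching, e.g. in transforms.conf something like this (the filename is a guess; I believe the same can be set via Advanced options > Match type in the lookup definition UI):
[ip_cidr]
filename = ip_cidr.csv
# treat the cidr_range column as CIDR ranges when matching
match_type = CIDR(cidr_range)
Is that what I am missing, or is something else wrong with the search?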
Thx
↧