I am running Splunk ES v4.7.2 and upgraded it, along with the rest of my servers, to Splunk Enterprise v7.1.2. After having some issues, I found that ES v4.7.2 isn't compatible with v7.1.2. What steps do I need to take to revert my Splunk ES server back to Splunk Enterprise v7.0.x?
Thanks!
↧
Reverting to Splunk Enterprise v7.0.x from v7.1.2.
↧
I need to apply a background color to a single value panel. Please help me modify the JavaScript below.
The current code applies the color to the value, but I need to apply it to the panel background.
@niketnilay @Ayn, please help.
require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc) {
    // Function to define ranges that override the color of a Single Value based on its result
    function OverrideColorRangeByValue(selectedElement, singleValueResultIN) {
        switch (true) {
            case singleValueResultIN >= 0 && singleValueResultIN < 5.9:
                selectedElement.css("fill", "green");
                break;
            case singleValueResultIN >= 6 && singleValueResultIN < 6.9:
                selectedElement.css("fill", "yellow");
                break;
            case singleValueResultIN >= 7.0:
                selectedElement.css("fill", "red");
                break;
            default:
                selectedElement.css("fill", "grey");
        }
    }
    // Get the Single Value with id=single1 set in SimpleXML
    mvc.Components.get('single1').getVisualization(function(singleView) {
        singleView.on('rendered', function() {
            if ($("#single1 .single-result").text() !== "") {
                // Get the Single Value result from the svg node with class "single-result"
                var singleValueResult = parseFloat($("#single1 .single-result").text());
                OverrideColorRangeByValue($("#single1 .single-result"), singleValueResult);
            }
        });
    });
    // Get the Single Value with id=single2 set in SimpleXML
    mvc.Components.get('single2').getVisualization(function(singleView) {
        singleView.on('rendered', function() {
            if ($("#single2 .single-result").text() !== "") {
                var singleValueResult = parseFloat($("#single2 .single-result").text());
                OverrideColorRangeByValue($("#single2 .single-result"), singleValueResult);
            }
        });
    });
    // Get the Single Value with id=single3 set in SimpleXML
    mvc.Components.get('single3').getVisualization(function(singleView) {
        singleView.on('rendered', function() {
            if ($("#single3 .single-result").text() !== "") {
                var singleValueResult = parseFloat($("#single3 .single-result").text());
                OverrideColorRangeByValue($("#single3 .single-result"), singleValueResult);
            }
        });
    });
});
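To color the panel background rather than the value itself, the same range logic can target the panel container instead of the svg text node. Below is a minimal sketch of that idea as a pure helper plus a hypothetical usage line; the `.dashboard-panel` / `.dashboard-cell` selectors are assumptions about the dashboard markup, which varies by Splunk version. Note that the original ranges leave small gaps (e.g. 5.9–6.0), which the helper closes:

```javascript
// Pure helper: map a single-value result to a color.
// The ranges close the gaps (5.9-6.0, 6.9-7.0) left by the original switch.
function colorForValue(v) {
    if (v >= 0 && v < 6) { return "green"; }
    if (v >= 6 && v < 7) { return "yellow"; }
    if (v >= 7) { return "red"; }
    return "grey";
}

// Hypothetical usage inside the 'rendered' handler -- the panel container
// selector is an assumption and may differ between Splunk versions:
// $("#single1").closest(".dashboard-panel, .dashboard-cell")
//     .css("background-color", colorForValue(singleValueResult));
```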
↧
↧
Same sourcetype, different regex
Hello Friends,
I have the following issue:
I have two types of logs: A & B.
A & B come from the same index and have the same sourcetype and the same source (the client's wish).
BUT they differ in two aspects:
1) one contains the value "cisco_aaa" and the other "cisco_bbb"
2) log A has the structure FIELDNAME=VALUE (for all fields)
log B has the structure FIELDNAME = VALUE\ (for all fields)
Since they belong to the same sourcetype, I have no idea how to delete this \ after the value.
Ideas:
1) split them into two different sourcetypes and apply a regex in props.conf
Please help
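If the trailing backslash should be stripped at index time, a `SEDCMD` in props.conf is one option. This is a sketch only: the sourcetype name `cisco_shared` is a placeholder, and the SEDCMD rewrites the raw event for both log types, which is harmless for the A events since they have no trailing backslash:

```
[cisco_shared]
# Strip a backslash that appears immediately after a field value,
# i.e. before whitespace or end of line. No effect on log type A.
SEDCMD-strip_trailing_backslash = s/\\(\s|$)/\1/g
```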
↧
Need help to send alerts to HP Operations Manager
I am trying to send a webhook to HP Operations Manager using a JSON payload, but I am getting an authentication error. Where should I specify the token? Please help me with this.
↧
Creating an alert from a lookup table
I have a lookup table that is written to when a user clicks on a button to confirm that they have checked logs on a dashboard. In the lookup table, these are the fields that are available and how the values are written to the csv file.
Time | User
1537277863 john.doe@splunk.com
I want to set an alert to run daily to check whether there is at least 1 entry in the lookup table within the last week from the time the alert runs. My issue is that when I use stats count, it will always show an entry for the last week, since any entry in the lookup table counts toward the total. How can I search the lookup table a week back to see if there are any entries?
Thanks for your help.
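One way to restrict the lookup rows to the past week is to filter on the epoch `Time` field before counting. A sketch, using the file name from the question:

```
| inputcsv audit_check.csv
| where Time >= relative_time(now(), "-7d")
| stats count
```

The daily alert could then trigger when `count` is 0, i.e. no check was recorded in the last 7 days.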
↧
↧
Calculating a percentage from a lookup table
I have a lookup table that is written to when a user clicks on a button to confirm that they have checked logs on a dashboard. In the lookup table, these are the fields that are available and how the values are written to the csv file.
Time | User
1521641008 john.doe@splunk.com
1521641345 jane.doe@splunk.com
1521641376 john.doe@splunk.com
1521641456 john.doe@splunk.com
1521727607 john.doe@splunk.com
1523969108 jane.doe@splunk.com
I want to verify that a user has checked logs each week over a span of time (let's say 6 months). I want to see how many times the logs were checked and give a percentage.
I've gotten to this point in my search, but I'm unable to figure the rest out. I'm running this search over the last 6 months:
| inputcsv audit_check.csv
| rename Time as _time
| timechart span=1w count(User)
| addcoltotals
I get the following
_time | count(User)
2018-03-01 0
2018-03-08 0
2018-03-15 4
2018-03-22 1
2018-03-29 0
2018-04-05 0
etc......
5
So any week with a count of at least 1 is considered checked; 0 is considered not checked. I'd like to get the percentage of successful audit checks per week over the 6-month period.
Thanks for your help.
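Building on the search in the question, each week can be flagged as checked/unchecked and the flags averaged. A sketch, with field names taken from the question:

```
| inputcsv audit_check.csv
| rename Time as _time
| timechart span=1w count(User) as checks
| eval checked = if(checks > 0, 1, 0)
| stats sum(checked) as weeks_checked, count as total_weeks
| eval pct_checked = round(100 * weeks_checked / total_weeks, 1)
```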
↧
Able to log in to the web UI but cannot access the REST API
I have an Enterprise license and an admin user which I can use to log in via the web UI: `http://localhost:8000/en-US/app/`
This user also has the required user role, which allows REST API access. I have also restarted the Splunk service a few times.
I still cannot log in via the REST API:
curl -k https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=splunklocal
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 164 100 129 100 35 686 186 --:--:-- --:--:-- --:--:-- 872Login failed
Any help will be very much appreciated!
↧
OPSEC LEA - SIC ERROR 119 - SIC Error for lea: Client could not choose an authentication method for service lea
Hello,
I have set up the OPSEC LEA add-on and did all the required configuration. It worked for 2 weeks and then suddenly stopped.
I tried to reset all the configuration from scratch after multiple attempts, but with no success.
Now I am able to set up the configuration and pull the SIC certificate.
Unfortunately, when I add an input, I get the below error message in the _internal logs:
2018-09-18 14:42:08,846 +0000 log_level=ERROR, pid=8911, tid=Thread-9, file=ta_opseclea_data_collector.py, func_name=get_logs, code_line_no=75 | [input_name="GVA-MDS-PRI" connection="GVA-MDS" data="non_audit"]log_level=0 file:lea_loggrabber.cpp func_name:check_session_end_reason code_line_no:1056 :ERROR: Session end reason: SIC ERROR 119 - SIC Error for lea: Client could not choose an authentication method for service lea
I have already had a look at the other forum discussions, but none of the provided solutions worked for me.
Does anyone have an idea how to resolve this?
Thanks for your help.
SirHIll
↧
Why does updating a token with JavaScript not cause panels dependent on those tokens to refresh?
Hello,
I am having trouble understanding why a token update via JavaScript does not cause a panel that is dependent on that token to refresh. I have a date input called `status_date` that is configured to set a token `tok_status_date` during initialization (with today's date), and when it changes. JS file is as follows:
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    // initialization
    var tokens = mvc.Components.get("default");
    document.getElementById("status_date").valueAsDate = new Date();
    tokens.set("tok_status_date", document.getElementById("status_date").value);
    $('#status_date').on("change", function (e) {
        tokens.set("tok_status_date", document.getElementById("status_date").value);
    });
});
An HTML panel that I have configured will always show the value of `tok_status_date`, even when I update the value of the input.
I also have an XML panel whose search depends on `tok_status_date`, but it does not update whenever I change the value of the input:
| makeresults | eval Day = strftime(strptime("$tok_status_date$","%Y-%m-%d"),"%d/%m/%Y") | table Day
Why is this? Am I missing something in the JavaScript file?
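One common cause (an assumption here, since the dashboard XML isn't shown): searches in Simple XML dashboards typically read tokens from the "submitted" token model, while the code above only writes to "default". A sketch of a helper that writes a token to every model it is given, plus a hypothetical usage line:

```javascript
// Write a token value to every token model in the list (skipping any
// that are missing). In Simple XML dashboards, searches usually read
// from the "submitted" model, so setting only "default" may not cause
// dependent panels to refresh.
function setTokenOnModels(models, name, value) {
    models.forEach(function (model) {
        if (model) { model.set(name, value); }
    });
}

// Hypothetical usage inside the require() callback:
// var models = [mvc.Components.get("default"), mvc.Components.get("submitted")];
// setTokenOnModels(models, "tok_status_date",
//     document.getElementById("status_date").value);
```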
Thank you and best regards,
Andrew
↧
↧
Audit domain admins access
Hello,
I have purchased Splunk Enterprise 1 GB/day and I want to configure the forwarder on a Domain Controller to send data about Security events from the Event Viewer. I want to index all access by domain admins.
How can I limit the indexer to send only events of access by domain admins?
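Filtering at the input is one way to keep the indexed volume down. A sketch of an `inputs.conf` stanza on the forwarder: the event-code whitelist mechanism is standard, but matching "domain admins" specifically usually needs a regex on the event message or a post-index search, since group membership is not a simple field:

```
[WinEventLog://Security]
disabled = 0
# Keep only logon (4624) and special-privilege logon (4672) events.
# 4672 fires for admin-equivalent logons, which approximates
# "domain admin access" without resolving group membership.
whitelist = 4624,4672
```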
Thanks
↧
Custom Search Commands SCPv2 Cannot Handle Large Event Sets
I have seen some promotional material lauding how the new SCPv2 enables custom search commands to process millions of events with lower memory overhead now that they can operate in a true streaming/chunked fashion. However, I cannot seem to get any CSCs using the v2 protocol to handle more than a few hundred thousand events (even using the default implementation that simply yields the records passed in).
For example, consider the following example StreamingCommand:
import sys

from splunklib.searchcommands import Configuration, StreamingCommand, dispatch


@Configuration()
class simpleStreamingCmd(StreamingCommand):
    def stream(self, records):
        for record in records:
            yield record


if __name__ == "__main__":
    dispatch(simpleStreamingCmd, sys.argv, sys.stdin, sys.stdout, __name__)
commands.conf configuration:
[simplestreamingcmd]
filename = simplestreamingcmd.py
chunked = true
Using a search that inputs a CSV of 1,000,000 events and feeds those events to the simple streaming command (which simply yields them right back), the following error is thrown (found in search.log):
09-18-2018 11:00:31.750 ERROR ChunkedExternProcessor - Failure writing result chunk, buffer full. External process possibly failed to read its stdin.
09-18-2018 11:00:31.750 ERROR ChunkedExternProcessor - Error in 'simplestreamingcmd' command: Failed to send message to external search command, see search.log.
I started with a much more complex CSC to accomplish a specific task and eventually reduced it down to the simple example you see here, trying to figure out where the problem lies. I have tried writing StreamingCommands, EventingCommands, and ReportingCommands on multiple different search heads, and even tried multiple versions of Splunk (6.5.3 and 7.0.2) and updated to the latest Python SDK. Regardless of those, this seems to happen every time more than 300,000 events are passed to any chunked SCPv2 CSC.
Any thoughts on what might be going on here? I would really like to use SCPv2, but unless I am doing something wrong here, this seems like a rather fundamental issue with it.
I have seen a couple other users reporting what appears to be the same issue here: https://github.com/splunk/splunk-sdk-python/issues/150
↧
↧
How can I round to the nearest half?
Hello
I have some values that are in the format of: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5
I am trying to find the average and only want whole and half numbers, so nothing like 1.7; only averages like 1, 2.5, 4, 3.5, etc.
I thought maybe if I multiplied by 2, then divided that by the count, and then halved it again, that would work, but it's not quite right.
|eval tmpscore=(score * 2)
|eval "Maturity Level"=round(((tmpscore/count)/2),1)
"score" being the sum of all the values of a field
Any ideas how I could get this type of rounding to work?
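The multiply-by-two trick works if the rounding happens before the final halving: round the doubled average to a whole number, then divide by two. A sketch using the fields from the question:

```
| eval "Maturity Level" = round((score * 2) / count, 0) / 2
```

For example, an average of 1.7 doubles to 3.4, rounds to 3, and halves to 1.5.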
Thanks as always
↧
How to call a JavaScript function from XML? (Note: the script code should be in the source (coding) window, not in the Splunk installation directory)
The JavaScript code should be in the source (coding) window instead of placing a .js file in the Splunk installation directory.
↧
Tableau connection issue with Splunk ODBC 2.1.1 After Upgrading Splunk to 7.1 from 7.0
We are having a Tableau connection issue with Splunk ODBC 2.1.1 after upgrading Splunk from 7.0 to 7.1. We never had an issue with previous upgrades.
I'm not sure if there was a big change from 7.0 to 7.1.2, but for some reason we are now getting errors in Tableau.
We tried downgrading to ODBC 2.0 and 2.1.0, but that didn't work.
Here's the error from the Tableau refresh:
com.tableausoftware.nativeapi.dll.TableauException: [Splunk][SplunkODBC] (60) Unexpected response from server. Veriy the server URL. Error parsing JSON: Text only contains white space(s)
I'm not sure what changed, or which release notes would cover this.
↧
↧
How do I make a search string to get Real Time data from multiple *.txt files?
Dear Team,
I'm trying to get data from two *.txt files into a single Line Chart.
For example, with the following string, I get the data into the Line Chart:
(host=jp) source="/home/jp/pings/targets/googledns.txt" | timechart avg(time)
But, what I am trying to do is also get data from another .txt file, at the same time:
(host=jp) source="/home/jp/pings/targets/defaultGateway.txt" | timechart avg(time)
... so in one Line Chart, it would show the data from both files.
With the following string, in Real Time, it only shows sheet1 in the Line Chart:
(host=jp) source="/home/jp/pings/targets/googledns.txt" | timechart avg(time) as sheet1 |appendcols [search (host=jp) source="/home/jp/pings/targets/defaultGateway.txt" | timechart avg(time) as sheet2]
I verified that when I change from Real Time -> 30 minute windows... to... Last 15 minutes... it shows sheet1 and sheet2.
This means that the search provided is not reading data in Real Time mode, since it only shows sheet1.
Could you please provide a search string that is capable of reading multiple .txt files in Real Time mode?
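A single base search avoids `appendcols` (whose subsearch does not behave well in real-time windows). A sketch using the two source paths from the question, charting both files in one timechart:

```
(host=jp) (source="/home/jp/pings/targets/googledns.txt" OR source="/home/jp/pings/targets/defaultGateway.txt")
| timechart avg(time) by source
```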
Thank you in advance
Kind regards
JP
↧