Dear Experts,
we have around 40 UFs installed and pointing to the old deployment server. Help is required: we want the UFs to point to the new deployment server (a heavy forwarder).
Which files do we need to change via the old deployment server so that the UFs point to the new deployment server?
Thanks in advance.
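For reference, which deployment server a UF polls is controlled by deploymentclient.conf, so a common approach is to push a small app from the old deployment server that overrides that file on all 40 UFs. A minimal sketch, assuming a hypothetical new server name new-ds.example.com and the default management port:

# new_deploymentclient/local/deploymentclient.conf
# (app name and host are assumptions)
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089

Once the UFs pick up this app from the old server, they should phone home to the new server on their next check-in.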
↧
Need steps to migrate from old deployment server to new deployment server
↧
Design dashboard
I have prepared a dashboard and set a color range on the count column.
My concern is that I need a different range on count for CUSTOMEREVENTS (a value of the mbExecutingGroupName field).
Please find below the current view of the dashboard:
mbExecutingGroupName count
CUSTOMEREVENTS 102
CUSTOMEREVENTS 72
CUSTOMEREVENTS 66
CUSTOMEREVENTS 56
BEM_FRAMEWORK 48
CUSTOMEREVENTS 46
VPG 40
CUSTOMEREVENTS 39
CUSTOMEREVENTS 38
CUSTOMEREVENTS 36
CUSTOMEREVENTS 35
ADAPTERSVC 24
In short, I want to set the range on the count column based on the value of the mbExecutingGroupName column.
Is that possible?
Splunk version: 6.5.0
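One way this is commonly done is to compute the range in SPL, so the thresholds can depend on mbExecutingGroupName, and then color that computed column with a map palette (supported by Simple XML tables in 6.5). A sketch with illustrative thresholds (the actual cut-offs are assumptions):

<table>
  <search>
    <query>... | stats count by mbExecutingGroupName
| eval range=case(mbExecutingGroupName=="CUSTOMEREVENTS" AND count>=100, "high",
                  mbExecutingGroupName=="CUSTOMEREVENTS", "elevated",
                  count>=45, "high",
                  true(), "low")</query>
  </search>
  <!-- a map palette colors cells by their value -->
  <format type="color" field="range">
    <colorPalette type="map">{"high":#DC4E41,"elevated":#F8BE34,"low":#53A051}</colorPalette>
  </format>
</table>

This colors a separate range column rather than count itself; as far as I know, the built-in expression palette only sees the value of the column being formatted, so it cannot color count based on another column directly.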
↧
↧
What does "ApplicationLicense - app license disabled by conf setting." mean?
I found this message in the splunkd.log of a forwarder:
"INFO ApplicationLicense - app license disabled by conf setting."
What does this message mean?
I didn't install any app or add-on on the forwarder.
↧
Files not indexing due to fast rotation
Hi All,
Hope you are doing good.
I have come across a difficult situation indexing a file. We have a few Universal Forwarders on which files are rotated very fast (within seconds) around midnight. Once the files reach the specified size limit, they are gzipped and moved to an archive folder (we are not monitoring this folder). Due to this fast rotation, we are unable to see the logs from those files for that time period (they may not be getting indexed). The inputs.conf stanza is configured as below:
[monitor:///logs/user/*.op]
blacklist = (\.\d+|\.gz)
index = index
sourcetype = sourcetype
recursive = true
We have the default throughput value on the Universal Forwarders. Could you please help me resolve this issue?
Thanks in advance.
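For what it's worth, the default forwarder throughput cap is a frequent culprit when fast-rotating files lose data, since the UF may not drain a file before it rotates away. The cap lives in limits.conf; a sketch of raising it (0 removes the cap entirely; pick a value that suits your environment):

# $SPLUNK_HOME/etc/system/local/limits.conf on each UF
[thruput]
# the Universal Forwarder default is 256 KB/s
maxKBps = 0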
↧
Qualys TA in distributed deployment questions
Hi Guys,
I've got a few questions regarding issues I'm having with this TA.
1) I've set this TA up in my clustered environment and have host_detection working fine on our heavy forwarder; however, knowledge_base is not working on our search heads. It downloaded the knowledge base the first time I set the TA up and has never updated it since, even though it's set to run every day (86400 seconds). My inputs.conf looks like this:
[qualys://knowledge_base]
duration = 86400
index = aam_prod_app_qualys
start_date = 1999-01-01T00:00:00Z
disabled = 0
When I try manually running the knowledge base with /opt/splunk/bin/splunk cmd python run.py -k, my output is as follows:
QG Username: ********
QG Password:
TA-QualysCloudPlatform: 2017-09-21T09:20:44Z PID=38953 [MainThread] INFO: TA-QualysCloudPlatform - Using proxy [[_internal proxy handler]]
TA-QualysCloudPlatform: 2017-09-21T09:20:44Z PID=38953 [MainThread] INFO: TA-QualysCloudPlatform - Making request: https://qualysapi.qualys.com/msp/about.php with params={} [[_internal making https://qualysapi.qualys.com/msp/about.php request with params={}]]
[[_internal Error, but we're using stored creds, so we will sleep for 300 seconds and try again, as this is a temporary condition. Retry Count: 1]]
TA-QualysCloudPlatform: 2017-09-21T09:20:45Z PID=38953 [MainThread] ERROR: TA-QualysCloudPlatform - Authentication Error, but we're using stored creds, so we will sleep for 300 seconds and try again, as this is a temporary condition. Retry Count: 1
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/lib/api/Client.py", line 246, in get
    request = urllib2.urlopen(req, timeout=300) # timeout set to bail in case of timeouts
  File "/opt/splunk/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 435, in open
    response = meth(req, response)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python2.7/urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 401: Unauthorized
^CTraceback (most recent call last):
  File "run.py", line 150, in <module>
    qapi.client.validate()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/lib/api/Client.py", line 199, in validate
    response = self.get("/msp/about.php", {}, SimpleAPIResponse())
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/lib/api/Client.py", line 268, in get
    time.sleep(300) # Sleep for 5 minutes
KeyboardInterrupt
Any ideas on how to sort this? :'(
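One hedged way to narrow down the 401 is to hit the same endpoint the TA calls from outside the TA, to see whether the stored credentials themselves are rejected (a hypothetical smoke test; substitute real credentials, and note your Qualys POD's API URL may differ):

curl -u 'QG_USERNAME:QG_PASSWORD' -H 'X-Requested-With: curl' 'https://qualysapi.qualys.com/msp/about.php'

A 401 here too would point at the credentials or account rather than at the TA.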
2) This problem is not as big or urgent: does anyone know which parameters to change to extract the full knowledge base information? I've read that there is a parameter called "details" that is set to "basic" by default; does anyone know which script this parameter is in, so I can change it to "all"? Is it as simple as changing it in the code, or do I need to do something else? Our aim is to bring down a list of solutions with the QIDs in the knowledge base because, as far as I'm aware, this is something Qualys also stores but doesn't provide with the knowledge base by default.
Sorry for such a huge question but any advice would be appreciated.
Cheers!
↧
↧
JIRA jql query is not working from Splunk
I am very new to the Add-on for JIRA. I referred to https://splunkbase.splunk.com/app/1438/ and installed the Add-on for JIRA, current version 2.2.1, locally. I have also installed JIRA v7.5.0 on my local system and created some sample tickets in it.
When I try to fetch the JIRA details from Splunk, it gives me no results. The JQL query I am trying is below:
| jira jqlsearch "issue = SRET-9"
1. Below are the settings I provided in the bin folder's config.ini:
[jira]
hostname = http://localhost:8080/
username= username
password = password
jira_protocol = http
jira_port = 443
soap_protocol = http
soap_port = 8080
2. Settings in the default and local folders' inputs.conf:
[jira:SRE_test]
sourcetype = jira
index = jira
interval = 60
server = localhost:8080
protocol = http
port = 443
jql = issueType in (epic, story)
fields = *
username= username
password = password
disabled = 1
3. Settings in the default folder's jira.conf:
[jira]
default_project = SRE_test
tempMax = 1000
keys = link,project,key,summary,type,priority,status,resolution,assignee,reporter,created,updated,resolved,fixVersion,components,labels/label
# Fields containing durations, force them to return seconds instead of something human-readable. Optional.
time_keys = timeestimate, originalestimate, timespent
# Custom fields to display. Optional.
custom_keys =
4. Settings in the README folder's inputs.conf.spec:
[jira:SRE_test]
# JIRA server, e.g., jira.example.com
server = http://localhost:8080
# username is used to query REST API
username = username
# password is used to query REST API
password = password
# REST API protocol, e.g., https
protocol = http
# REST API port, e.g., 443
port = 443
# JQL query to filter tickets which are indexed, e.g., issueType in (epic, story)
jql =
# Fields to index, a comma separated field list, e.g., key, summary, type. Default is * all fields
fields =
I am not sure where the issue lies or whether any additional settings need to be configured. Could anyone please help me with this as soon as possible?
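One detail that stands out in the configuration pasted above (though I can't be sure it's the root cause): the [jira:SRE_test] stanza sets disabled = 1, which keeps the modular input from running at all. A minimal local override to enable it would be:

# local/inputs.conf in the add-on
[jira:SRE_test]
disabled = 0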
↧
Is there a Splunk TA that can collect all system related logs?
Hi,
We are trying to collect the following data from a universal forwarder and index it in Splunk. The following are the various types of data we are looking for:
a. Ping response
b. CPU pct used
c. All file systems, %used, %free
d. INODES, %used, %free
e. IPCS, CBYTES & Queue
f. IO Wait
g. System Load
h. Top processes consuming res memory, virtual memory, cpu
Will we get any logs from the Universal Forwarder by default that contain all of the above data, or is there a Splunk TA that will help us get these logs?
Once we get these logs, we plan to create some dashboards for monitoring purposes.
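For reference, most of the items in the list above map onto the scripted inputs shipped with the Splunk Add-on for Unix and Linux (Splunk_TA_nix); whether it covers every item (IPCS, for instance) would need checking. A sketch of enabling a few of its inputs on the forwarder (intervals are illustrative):

# Splunk_TA_nix/local/inputs.conf on the Universal Forwarder
[script://./bin/cpu.sh]
interval = 30
disabled = 0

[script://./bin/df.sh]
interval = 300
disabled = 0

[script://./bin/iostat.sh]
interval = 60
disabled = 0

[script://./bin/top.sh]
interval = 60
disabled = 0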
Regards,
A.Bharadwaja
↧
I am running a Splunk query which is scheduled to run every minute to pull the events of the last minute. Randomly I get this XML parse error.
I am running a Splunk query (via the Splunk SDK for Java, per the stack trace below) which is scheduled to run every minute to pull the events of the last minute. Randomly I get this XML parse error.
**Splunk query:** search index=os sourcetype=cpu all earliest=-2m@m latest=-1m@m | dedup host | eval fields=split(_raw,\" \") | eval num=mvindex(fields,-1) | eval cpuUtilization=100-num | eval human_readable_time=strftime(_time, \"%Y-%m-%d %H:%M:%S\") | table human_readable_time host cpuUtilization
**Error:**
java.lang.RuntimeException: java.lang.RuntimeException: ParseError at [row,col]:[363,318]
Message: The character sequence "]]>" must not appear in content unless used to mark the end of a CDATA section.
at com.splunk.Job.refresh(Job.java:900)
at com.splunk.Job.isReady(Job.java:823)
at com.splunk.Job.isDone(Job.java:770)
at MAPDashboardMain$1.run(MAPDashboardMain.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:729)
Caused by: java.lang.RuntimeException: ParseError at [row,col]:[363,318]
Message: The character sequence "]]>" must not appear in content unless used to mark the end of a CDATA section.
at com.splunk.AtomObject.scan(AtomObject.java:198)
at com.splunk.AtomEntry.parseValue(AtomEntry.java:220)
at com.splunk.AtomEntry.parseDict(AtomEntry.java:143)
at com.splunk.AtomEntry.parseStructure(AtomEntry.java:189)
at com.splunk.AtomEntry.parseValue(AtomEntry.java:230)
at com.splunk.AtomEntry.parseDict(AtomEntry.java:143)
at com.splunk.AtomEntry.parseContent(AtomEntry.java:118)
at com.splunk.AtomEntry.init(AtomEntry.java:95)
at com.splunk.AtomObject.load(AtomObject.java:121)
at com.splunk.AtomEntry.parse(AtomEntry.java:77)
at com.splunk.AtomEntry.parseStream(AtomEntry.java:57)
at com.splunk.Job.refresh(Job.java:898)
... 11 more
Caused by: com.sun.xml.stream.XMLStreamException2: ParseError at [row,col]:[363,318]
Message: The character sequence "]]>" must not appear in content unless used to mark the end of a CDATA section.
at com.sun.xml.stream.XMLReaderImpl.next(XMLReaderImpl.java:604)
at com.splunk.AtomObject.scan(AtomObject.java:193)
... 22 more
Can someone please explain why this is occurring?
↧
Confusing search results
Hi! I have two identical searches running on the same search head but with different time frames. What confuses me is that where the searches overlap in time, the results are different from one to the other, which doesn't make much sense to me. The two searches are:
index=XXXXXXXXXXXX sourcetype=XXXXXXXXXXX earliest=0 latest=@h | dedup src_ip sortby +_time | table src_ip,_time
and
index=XXXXXXXXXXXX sourcetype=XXXXXXXXXXX earliest=-1h@h latest=@h | dedup src_ip sortby +_time | table src_ip,_time
As you can see, the searches are identical except for the time frames. When I run the second search, it returns MORE events over the last hour than the first search returns for that same hour. The searches are run at the same time. Any ideas why this happens?
↧
↧
PCI compliance and Splunk
Hi folks, my company got Splunk Enterprise and we want to integrate Splunk with PCI compliance. I am new to it, so can you please recommend which course I should take to get more familiar? Also, we will use it to collect syslogs to monitor customers' networks as well as our own core, so I guess we need multitenancy. I just finished Fundamentals 1.
Thanks in advance!
↧
Default.meta application context datamodel version number purpose
For a statistical solution with Splunk, we use multiple data models which have different Splunk version numbers attached through the *.meta files.
The documentation is not clear on what the exact purpose of this version number is.
Contents of the app's \Splunk\common\metadata\default.meta:
For the data models statistical and user_upload:
version = 6.5.0
For the data model internal_statistics:
version = 6.6.3
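For illustration, the stanzas being described presumably look something like this in default.meta (the stanza names are assumptions inferred from the data model names):

[datamodels/statistical]
version = 6.5.0

[datamodels/user_upload]
version = 6.5.0

[datamodels/internal_statistics]
version = 6.6.3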
Questions:
- Why is this field not updated to 6.6.3 with the Splunk upgrade?
- Do the data models with version 6.6.3 still work on Splunk 6.5.0?
↧
Need a new Splunk Enterprise trial license for fundamentals training
I have installed the Splunk Enterprise trial version in the past to learn how to use Splunk.
Now I have been invited to Splunk trainings, but before you can enter the paid trainings you need to complete the Fundamentals 1 course, which consists of 13 modules and requires a Splunk environment with admin rights.
The problem is that I have already used that license (Splunk Enterprise trial) before, when I did not know I would be attending official Splunk trainings.
I have already started the Fundamentals training and have a deadline for this first course, but I can't install the environment I need to complete it.
I have already tried calling the USA for information on a new license, but no one answered.
After that I called Splunk support in Germany (they did answer), but they haven't responded further to my mail or phone calls.
I really need a new license for the Enterprise trial version, or I will have to cancel the second and third courses (since I need to pass the first web-learning training to enroll in the next).
So... how can I quickly get a new license for the TRIAL ENTERPRISE VERSION to complete the web learning?
Thanks in advance!
Kind regards,
Danny Karouw
↧
How to replace every backslash in an input form token with a double backslash
Hello,
Please, I would like to know how I can replace a single backslash "\" with a double backslash "\\" in a form input (Simple XML) before submitting it.
I have tried this code, but it does not work with Splunk 6.5.x:
replace('progetto',"\\\","\\\\\\")
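A sketch of one way this is often handled in Simple XML: a <change> handler on the input populates a second, escaped token that the searches then consume (the token names are my own). After eval-string unescaping, the regex "\\\\" matches one literal backslash and the replacement "\\\\\\\\" emits two:

<input type="text" token="progetto_raw">
  <label>Progetto</label>
  <change>
    <!-- populate $progetto$ with every backslash doubled -->
    <eval token="progetto">replace('progetto_raw', "\\\\", "\\\\\\\\")</eval>
  </change>
</input>

Searches would then reference $progetto$ instead of the raw input token.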
Thanks in advance.
↧
↧
Retain common fields in main and subsearch after join?
Hi all,
I'd like to join 2 Windows events using instance_ID as following:
`sourcetype="WinEventLog:security" EventCode=299 | join instance_ID [search sourcetype="WinEventLog:security" EventCode=500] `
For fields common to both searches, only the value from the subsearch is retained, e.g., EventCode=500 in the above search.
Should I rename such fields in either the main search or the subsearch (except the ones used in the join) before joining?
Off-topic: are there approaches faster than join for the same query?
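For what it's worth, renaming before the join is the usual way to keep both values; a sketch (the renamed field names are my own):

sourcetype="WinEventLog:security" EventCode=299
| rename EventCode AS EventCode_main
| join instance_ID
    [ search sourcetype="WinEventLog:security" EventCode=500
      | rename EventCode AS EventCode_sub ]

On the off-topic question: a stats-based correlation, e.g. searching (EventCode=299 OR EventCode=500) and grouping with stats values(EventCode) AS EventCodes by instance_ID, often outperforms join, though whether it fits depends on which fields you need from each event.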
Sorry for the newbie question.
Thanks a lot.
Rgds
/ST Wong
↧
Need help to implement Tracker in my Splunk.
Hi, for my current project I need to implement a tracker which shows the various phases of onboarding, for example: Documents Collected --> Processed --> Approval Done --> Complete. I want to implement this as some sort of timeline which highlights the stage the current request is in, something like the attached file. Can you please let me know how it's done? ![alt text][1]
[1]: /storage/temp/217608-new-receive.jpg
↧
Advanced Dashboard using external picture
Hi folks,
I need to show the status of some sites that have servers and IT objects on the attached picture. I have an idea of how to write the queries, but how do I put the results onto each piece of the picture? For example: for Build 1 I have query X, and for Build 2 I have query Z. So far so good, but how do I put each query's result into its square?
![alt text][1]
Regards,
Rafael Martins
[1]: /storage/temp/217611-monitorserversbybuilds.png
↧
How to display the results without any other field names appended
I am trying to execute the below query in Splunk Enterprise.
index=x sourcetype=y | join TABLE_NAME [| inputlookup Domain_Module_List.csv | search Domain="Inventory"] | eval DATA_MB=round(DATA_KB/1024,2) | eval INDEX_MB=round(INDEX_SIZE_KB/1024,2) | timechart span=1mon limit=25 sum(DATA_MB) as datamb, sum(INDEX_MB) as indexmb by Domain | foreach indexmb* datamb* [eval size<<MATCHSTR>>='datamb<<MATCHSTR>>'+'indexmb<<MATCHSTR>>'] | fields - datamb* indexmb*
Below is the result which I am getting:
_time      size: Inventory    size: Platform    size: Financial
2017-08    1546672397.67      22240.14          745
2017-09    991610023.13       4040.69           603
_time and the Domain name are the two fields I am trying to fetch. Ideally the Domain columns should display as Inventory, Platform, and Financial, but they show as "size: Inventory", "size: Platform", and "size: Financial".
Could anyone please help me get rid of the "size: " prefix in the above results?
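Since the query otherwise works, one sketch (untested against your data) is to wildcard-rename the prefix away at the end:

... | foreach indexmb* datamb* [eval size<<MATCHSTR>>='datamb<<MATCHSTR>>'+'indexmb<<MATCHSTR>>']
| fields - datamb* indexmb*
| rename "size: *" AS "*"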
↧
↧
Want to display stack trace messages along with other fields.
Hello,
I have many stack traces including keywords like "stackoverflow", "deadlock", and "Database connection closed". I want to search for these errors and display the time, host, sourcetype, source, the error message, and the count of how often each error appeared. I have achieved this, but only partially. Below is my search statement. Can anyone help me with this?
index=websphere | eval test_msg=case(match(_raw,"The connection to the database is closed"),"The connection to the database is closed",match(_raw,"SQLEXCEPTION"),"SQLEXCEPTION") | stats count by test_msg
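A sketch extending the existing search to cover the remaining keywords and pull in the extra fields via the stats by-clause (the match patterns beyond the original two are assumptions about what the raw events contain):

index=websphere
| eval test_msg=case(
    match(_raw, "The connection to the database is closed"), "The connection to the database is closed",
    match(_raw, "SQLEXCEPTION"), "SQLEXCEPTION",
    match(_raw, "(?i)stackoverflow"), "StackOverflow",
    match(_raw, "(?i)deadlock"), "Deadlock")
| where isnotnull(test_msg)
| stats count latest(_time) AS last_seen by host sourcetype source test_msg
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")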
↧
winfra-admin role creation
Another admin recently removed the winfra-admin role in an attempt to "clean up" the Splunk deployment, and I have attempted to recreate it by re-installing the add-on for Windows / Infrastructure / DNS apps, but none of them seem to recreate that role.
I manually created the role and its associations, but I had to grant blanket rights. Is there a way to have the system regenerate the "winfra-admin" role with the appropriate rights?
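A hedged pointer rather than a full answer: app-defined roles come from an authorize.conf inside the app package, so one option is to copy the stanza verbatim from a freshly downloaded copy of the Splunk App for Windows Infrastructure instead of hand-building it (the directory name below is an assumption):

# from a fresh download of the app package, e.g.
# splunk_app_windows_infrastructure/default/authorize.conf
[role_winfra-admin]
# take importRoles, srchIndexesAllowed, and the capability list verbatim
# from the packaged file; they are deliberately omitted here rather than guessed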
↧
Dashboard time picker truncated, and other atrocities
We have a heavily used metrics dashboard that shows a lot of data to execs. The data is filtered by a (mostly) universal time picker at the top of the dash. The time picker is showing a truncated date: under Edit / Token Options / Default you can see "From Aug 1, 2016 to Aug 31, 2017", but the actual time picker shows "From Aug 1, 2016 to Aug...".
This wouldn't be a huge deal if the dates displayed correctly when you click the drop-down arrow on the time picker. However, the dates displayed in the date range picker run from yesterday to today, in this case "Sep 20 - Sep 21". So the user doesn't know the exact time period being searched by default.
I assume the dates displayed after you click the drop-down are a feature, but how do I show the entire date range in the time picker's default view?
↧