What I am trying to do is forward a particular sourcetype from the heavy forwarder to a syslog server. In addition, I want the data to also go to my indexers. Is it possible to do this, and what configuration would be needed?
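A sketch of one documented approach, assuming a heavy forwarder (syslog routing needs a parsing instance). The group names, hosts, and sourcetype below are placeholders:

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1:9997

[syslog:my_syslog_group]
server = syslog.example.com:514

# props.conf -- attach the routing transform to the one sourcetype
[my_sourcetype]
TRANSFORMS-route_syslog = send_to_syslog

# transforms.conf -- set the syslog routing key; the default tcpout
# group still receives the events, so they also reach the indexers
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group
```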
↧
Forward Data to Syslog Server and Indexers?
↧
Need help to parse & flatten XML Attribute data in nested format.
We have data coming in XML in the following format:
Sample Event 1:
Sample Event 2:
Please note that the data is exclusively in XML **attributes**, and not in elements.
I am aware that we can possibly do it via Python pre-processing, but for now we need to flatten out the data using SPL.
We have tried multiple combinations of **spath** and **mvexpand**. However, since the data lives in attribute tags, we cannot split it into separate rows for a table when it has the shape of the second XML event.
I am not sure we can handle this using a **regex**, since apart from a few, the attributes are not uniform throughout.
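For reference, spath can address XML attributes with the `{@attr}` path syntax. A sketch under assumed element and attribute names (`root`, `record`, `name`, `value` are placeholders, since the sample events are not shown here):

```
| spath path=root.record{@name} output=names
| spath path=root.record{@value} output=values
| eval pairs=mvzip(names, values, "|")
| mvexpand pairs
| eval name=mvindex(split(pairs, "|"), 0), value=mvindex(split(pairs, "|"), 1)
| table name value
```

The mvzip/mvexpand step pairs each attribute occurrence with its sibling before splitting into rows, which is the part plain mvexpand on a single multivalue field misses.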
Can someone please help?
Thanks in advance.
Regards,
Anirban.
↧
How do I reset the admin password or create a new admin in Splunk 7+?
Renaming etc/passwd to passwd.bak and using user-seed.conf doesn't seem to work. I'm on macOS.
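For what it's worth, a sketch of the usual 7.1+ procedure; the seed file is only read at startup when no etc/passwd exists, and a common pitfall is putting it somewhere other than etc/system/local. The password value is a placeholder and must meet the configured minimum length:

```
# 1. Stop Splunk, then move the password file aside:
#      mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
#
# 2. Create $SPLUNK_HOME/etc/system/local/user-seed.conf:
[user_info]
USERNAME = admin
PASSWORD = some-sufficiently-long-password

# 3. Restart Splunk; the seed file is consumed and a new etc/passwd is written.
```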
↧
How do you chart two searches with separate time ranges on the same chart?
I'm trying to chart open tickets (using a time range of "All time") and resolved tickets by user for the current month. I've been able to chart the two fields' data in the same chart, but am looking for help setting different time ranges for the two searches or fields. For example: show open tickets using an all-time range (all open tickets regardless of month) and total tickets resolved within the month selected from the dropdown.
This is the current query I'm using, but my open-ticket numbers are not accurate, as it only shows tickets opened in the selected month.
index=test sourcetype="test*" User=* Group="HelpDesk"
| dedup Tickets
| eval State=if(Closed!="0" OR Status="Closed" OR like(Status,"%Reject%") OR like(Status,"Abort%"),"Resolved","Open")
| eval Time=strftime(_time, "%m/%d/%Y %I:%M:%S %p")
| rex field=Time "(?<date_month>\d+)/"
| rex field=Time "(?<date_year>\d{4})"
| lookup datemonth.csv date_month OUTPUT datemonth
| search datemonth="August" date_year=2018
| chart count by User, State
**Note:** The datemonth and date_year fields are populated by dropdown tokens in the dashboard.
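One possible shape for this (a sketch, not tested against your data): search over all time, then keep every open ticket but only the resolved tickets that fall inside the selected month. `$month_token$` and `$year_token$` stand in for your dropdown tokens:

```
index=test sourcetype="test*" User=* Group="HelpDesk" earliest=0
| dedup Tickets
| eval State=if(Closed!="0" OR Status="Closed" OR like(Status,"%Reject%") OR like(Status,"Abort%"),"Resolved","Open")
| eval date_month=strftime(_time, "%B"), date_year=strftime(_time, "%Y")
| where State="Open" OR (date_month="$month_token$" AND date_year="$year_token$")
| chart count over User by State
```

The `earliest=0` pins the base search to all time, so the where clause, not the dashboard time picker, decides which resolved tickets count.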
↧
Sourcetype Inheritance: How to inherit parent sourcetype to child sourcetypes?
Hope you all have faced this situation: we get mixed incoming data from a single source (e.g. source=my_application.log). It is currently parsed at arrival as `sourcetype=my:application`, but it contains valuable information belonging to, for example, `application:audit` and `application:transactions`.
Most of the search-time extractions are similar for audit & transactions, but currently I have to copy all of the logic into each sourcetype, which is pure duplication of code.
Any ideas/tricks to ensure the search-time extractions done on parent-sourcetype can be inherited to child sourcetypes?
Expecting something like below
[my:application]
# all common extractions here
## Hope to inherit all work done in above sourcetype
[my:application:audit]
# some very specific extractions for audit only
[my:application:transaction]
# some very specific extractions for txns
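Splunk has no sourcetype inheritance, but since all of this data shares one source, a common workaround is to hang the shared extractions on a `[source::...]` stanza in props.conf, which applies regardless of sourcetype. A sketch with made-up extraction names and regexes:

```
# props.conf
[source::...my_application.log]
# shared by every sourcetype carved out of this file
EXTRACT-user    = user=(?<user>\S+)
EXTRACT-session = session=(?<session_id>\S+)

[my:application:audit]
EXTRACT-audit_action = action=(?<audit_action>\S+)

[my:application:transaction]
EXTRACT-txn_amount = amount=(?<txn_amount>\d+)
```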
↧
Custom time picker
Hello,
I am looking to remove some extra options from the time picker. I have disabled them through the GUI (User interface >> Time ranges).
When I check via the CLI it shows these are disabled, but the options are still present. (I have checked after clearing the browser cache.)
PS: I don't have an app-specific times.conf; my app uses the default one.
Please advise if I am missing something.
![alt text][1]
![alt text][2]
Thanks
[1]: /storage/temp/255784-time-picker-disable.png
[2]: /storage/temp/255785-time-picker-disable-cli.png
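In case it helps narrow things down: the time picker reads times.conf in the context of the app you are currently viewing, so a preset disabled in one app's context can still appear in another. A sketch of disabling a preset directly (the stanza name below is illustrative; match it to what `splunk btool times list --debug` shows for your instance):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/times.conf
[last_7_days]
disabled = 1
```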
↧
Upgrading Splunk server to RHEL 7.5
We are planning to upgrade the VM servers to RHEL 7.5; they host a Splunk distributed deployment.
Do we have any documentation or best practices regarding the steps? Thanks!
↧
Drilldown in Bar chart with value that is not contained in grouping
Hello
I have the following chart set up and would like to add a drilldown on a value that is currently not contained in the query.
Runtime sourcetype=avq_test_case type=run task_templ="$task_templ$" result=$result$ db=$db$
| eval t_start=strptime(timestamp_start, "%Y-%m-%d %H:%M:%S")
| eval t_end=strptime(timestamp_end, "%F %H:%M:%S")
| eval t=(t_end-t_start)
| chart max(t) as "time in s" by timestamp_start, result
| rename timestamp_start as "Timestamp"
| sort t $field1.earliest$ $field1.latest$ 1
The search has a field called uuid defined; however, I cannot refer to it in the drilldown link. I tried $row.uuid$ and $uuid$; neither worked.
avaloq_kupima_run_details?form.uuid=$uuid$
Is it somehow possible to add a new dimension which is accessible in the drilldown? Or is it possible to overwrite the value of a bar such that I can encode the UUID inside?
One option I looked into was to add, instead of the result (which is success/failed), an evaluated variable containing both the result status and the UUID. The problem with that approach is that I cannot assign the field colors, as charting.fieldColors does not support wildcards or regex.
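A sketch of a possible workaround, since chart drilldowns only expose the clicked category/series tokens ($click.value$, $click.name2$, etc.): embed the uuid in the category label, then strip it back out with a drilldown `<eval>`. All names here are assumptions:

```
| eval Timestamp=timestamp_start." [".uuid."]"
| chart max(t) AS "time in s" by Timestamp, result
```

and in the panel's Simple XML:

```
<drilldown>
  <eval token="uuid">replace($click.value$, ".*\[(.*)\]$", "\1")</eval>
  <link target="_blank">avaloq_kupima_run_details?form.uuid=$uuid$</link>
</drilldown>
```

This keeps `result` as the series split, so charting.fieldColors still applies to success/failed.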
Any ideas?
↧
"java.sql.SQLException: JZ0SA: Prepared Statement: Input parameter not set, index: 0."
Issue description: I configured Sybase to connect with Splunk and it works fine. While using the rising-column option for the query below, we receive this error:
"java.sql.SQLException: JZ0SA: Prepared Statement: Input parameter not set, index: 0."
SELECT "DBA"."f_Day_ComponentMetrics"."Day",
"DBA"."f_Day_ComponentMetrics"."Node Name",
"DBA"."f_Day_ComponentMetrics"."Node Location",
"DBA"."f_Day_ComponentMetrics"."Node Family",
"DBA"."f_Day_ComponentMetrics"."Node Availability (avg)",
"DBA"."f_Day_ComponentMetrics"."Node Availability (min)",
"DBA"."f_Day_ComponentMetrics"."Node Availability (max)",
"DBA"."f_Day_ComponentMetrics"."lt_TimeStamp"
FROM "DBA"."f_Day_ComponentMetrics" WHERE "Node Name" > ?
ORDER BY "Node Name" ASC
↧
How can I redirect splunkd.log to the Splunk forwarder container's stdout?
With the Splunk 6.6.3 release, I am able to see the error messages in splunkd.log. These error messages are about connectivity failures from the Splunk light forwarder to the Splunk heavy forwarder. I would like to know if there is a configuration option to redirect the contents of splunkd.log to stdout, so that these error messages are visible in the Splunk forwarder container's output. Please let me know.
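I'm not aware of a supported Splunk switch for this, so the usual container workaround is to mirror the file to stdout with `tail` while splunkd runs in the foreground. A sketch of an entrypoint wrapper (file layout and paths are assumptions, not Splunk-provided):

```shell
#!/bin/sh
# Entrypoint sketch: copy splunkd.log lines to the container's stdout.

mirror_log() {
    touch "$1"
    # -F follows the file by name, surviving Splunk's log rotation;
    # -n 0 skips lines written before the container started.
    tail -n 0 -F "$1" &
    TAIL_PID=$!
}

SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunkforwarder}"
LOG="$SPLUNK_HOME/var/log/splunk/splunkd.log"

if [ -d "$(dirname "$LOG")" ]; then
    mirror_log "$LOG"
fi

# Run the forwarder in the foreground so the container stays up.
if [ -x "$SPLUNK_HOME/bin/splunk" ]; then
    exec "$SPLUNK_HOME/bin/splunk" start --accept-license --nodaemonize
fi
```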
↧
Upload CSV files for Monitoring using Splunk Universal Forwarder
Hi
I have a Splunk Universal Forwarder installed on Windows systems and I am able to get installed software (first-phase PoC).
Now I intend to get CSV reports from the AV server for all Windows systems and use them to further analyse my systems' status.
The AV CSV report will be updated on a daily basis by the IT team, and I intend to pick up the changes only and update my analysis.
I have tried a pilot run of uploading a CSV file using the UF on my own Windows 10 system, with the steps below:
1. Created a custom CSV file.
2. Stopped the UF
3. Added a monitor command in the inputs.conf file at the path
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\local
4. The inputs.conf entry reads as below:
[monitor://C:\Users\\Desktop\splunk\*.csv]
disabled = 0
index = index1
sourcetype = csv1
5. Restarted the Splunk UF
6. I could see the logs in the Index
7. Prob 1: Now I tried to change the CSV file and added some more rows, but the new rows were not immediately visible.
8. Prob 2: I tried to create a new index, index2, and changed the inputs.conf file to redirect the logs to the new index, but I see no logs in Splunk search.
9. Prob 3: I created a completely new file and changed its location but kept the index as index1, but still I don't see any logs.
I am currently perplexed as to how exactly the Splunk forwarder will behave.
P.S. I have not edited props.conf or transforms.conf, as I am not sure they are needed.
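For reference, a sketch of one common cause: the UF fingerprints files by a CRC of their first bytes, so daily CSVs that share the same header row can be mistaken for already-indexed files, and content already indexed is never re-sent even if you point the input at a new index. A possible inputs.conf adjustment (restart the UF after editing):

```
[monitor://C:\Users\\Desktop\splunk\*.csv]
disabled = 0
index = index1
sourcetype = csv1
# Distinguish files by full path so same-header CSVs are not deduplicated:
crcSalt = <SOURCE>
# Alternatively, widen the CRC window beyond the shared header row:
# initCrcLength = 1024
```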
Any HELP highly Appreciated
Regards
VS
↧
I want to trigger an alert if a number repeats consecutively more than 5 times
Say, for example:
I have a field which has repeated numbers. If a number repeats more than 5 times in a row, I need to raise an alert.
For example, if the number "3" repeats more than 5 times, I need to filter it out.
1
1
1
2
3
3
3
3
5
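A sketch of one way to find such runs with streamstats (`number` is a placeholder for your actual field name):

```
... | streamstats count AS run_length BY number reset_on_change=true
| where run_length > 5
```

`reset_on_change=true` restarts the count whenever the value of `number` changes, so `run_length` is the length of the current consecutive run; the alert can then fire whenever the search returns results.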
↧
AIX 6.1 data to splunk 6.6.4
Hello,
Having trouble getting Splunk forwarders on AIX 6.1 systems to report to Splunk.
Facts:
System: AIX 6.1
Forwarder: splunk forwarder 6.5.9 for AIX (splunkforwarder-6.5.9-eb980bc2467e-AIX-powerpc.tgz)
Splunk environment: 6.6.4
What is the way to debug this? There is no network issue; telnet works. We are monitoring AIX 7.1 with a 6.6.4 forwarder with no problems.
Thanks!
↧
How do I copy the dashboards from the search app to a new distributed search system?
We have created a new Splunk 6.6.3 cluster environment with 3 search heads and 6 indexers. I've been asked to copy the saved searches, dashboards, etc. from the old system to the new system. Unfortunately, it seems all of the dashboards were created under the default search application. How do I move the contents of \etc\apps\search\local to the new clustered system?
↧
SH Cluster Member's Reporting
When I run the search below, only one SH shows in the results. But I do know that there are 18 SHs out there, which do show up on the SH Clustering page with the role of Member. Does the search result mean that only one of the 18 is actually doing any work?
| rest /services/server/info | search server_roles=shc_member
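Not necessarily: `| rest` fans out only to the local instance and its search peers, and SHC members are typically not search peers of one another, so only the local member matches. A sketch of an alternative using the SHC member endpoint (run from a cluster member; the listed field names vary by version):

```
| rest /services/shcluster/member/members
| table title label status
```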
↧
Splunk App for Infrastructure oddity
I have installed the Splunk App for Infrastructure (ver 1.1.1) and have 3 test Linux boxes working perfectly. However, a Linux box was rebooted and now the app says that the server is now "inactive". I have restarted the splunkd daemon on the system that was rebooted and it still says "inactive". Do I have to remove the Linux box from the app, remove the UF and configs from the Linux box and then add the server back like I did initially?
↧
Search Head > Indexer > Forwarder
Hi, quite new to Splunk. I have had a look at the various documentation and have managed to come this far (see below).
I have installed a Universal Forwarder on two of my machines. It is sending logs to one instance of Splunk Enterprise (also acting as the indexer). Here I can see all my logs and search them. Is there anything else I need to do at this point to configure the indexer?
How do I get this data from the indexer to a search head, and how do I configure this? I have had a look online and I think I need to do something with distributed search, but cannot seem to get it working. E.g., for search peers, what goes in Peer URI? Distributed search authentication? I have followed the guide but can't seem to understand what goes in these fields.
How does my indexer server talk to the search head one?
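For context, a sketch of what those fields usually take (host and port below are placeholders): the Peer URI is the indexer's management endpoint, and the authentication fields are an admin account on that indexer. The equivalent setting on the search head is:

```
# distsearch.conf on the search head
[distributedSearch]
# Peer URI format: https://<indexer mgmt host>:<mgmt port, default 8089>
servers = https://indexer.example.com:8089
```

Note that adding the peer via the UI or CLI also handles the key exchange between the instances, so the conf entry alone may not be sufficient. The search head then talks to the indexer over that management port; the indexer itself needs no extra configuration beyond receiving data.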
Thanks in advance.
Abdul
↧
Join Multiple Source Types with Common Field and Search
When I try to join three sourcetypes on CommonField, I don't get all the fields to populate in a table.
Example:
sourcetype1: CommonField, Field1, Field2, Field3
sourcetype2: CommonField, FieldX, FieldY, FieldZ
sourcetype3: CommonField, FieldA, FieldB, FieldC
Query:
source=data* | transaction CommonField keepevicted=true | table Field1, FieldX, FieldY, FieldA, FieldC
It does not populate all fields in the table. How can I join three sourcetypes on CommonField, and once joined, I can search as if each joined event has all those fields?
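A sketch of a stats-based alternative, which often fits this "one row per key carrying all fields" goal better than transaction:

```
source=data*
| stats values(*) AS * BY CommonField
| table CommonField Field1 FieldX FieldY FieldA FieldC
```

After the stats, each CommonField row carries every field seen across the three sourcetypes, so subsequent search/where clauses can treat them as if they were on one event.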
Thanks in advance!
↧
Unable to filter on extracted fields when searching using JS SDK.
Hello,
I am using the JS SDK for Splunk and have written a Node app. When I do a search, I get the results back, but I would like to remove duplicates using dedup on an extracted field. When I use this it does not work, but the same search string works fine in the GUI and returns unique events.
Splunk "version":"6.5.2"
Search string: search index=aaa filter1 filter2 | dedup extractedField1
↧
INDEXED_EXTRACTIONS on summary events?
It would be really cool to be able to have all of the fields in a summary index automatically converted to indexed fields. You could then use tstats against a summary index directly with significant speed increases.
Has anyone attempted this?
↧