I want to set up a REST API input that makes an HTTPS GET request, but this site returns a zip file instead of XML, JSON, or text. Is there a way I could set it up to index the zip file? If not, is there any workaround? This is the description from the site:
![alt text][1]
[1]: /storage/temp/215587-a.png
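For context, a rough sketch of one possible workaround if the REST input can't handle the archive directly: fetch and unpack the zip with a scheduled script outside Splunk, and let a file monitor pick up the extracted files. The URL, paths, and sourcetype below are placeholders, not the real site:
# scheduled script (cron), sketch only
curl -sk -o /opt/data/feed/latest.zip "https://example.com/export.zip"
unzip -o /opt/data/feed/latest.zip -d /opt/data/feed/extracted/
# inputs.conf on the instance that reads the extracted files
[monitor:///opt/data/feed/extracted]
sourcetype = my_feed
disabled = 0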
↧
REST API option for compressed file? Can I index a zip file?
↧
What is the manifest file and is there an issue if it is missing?
Sounds like I have a manifest file/hashing issue that appears whenever I restart splunkd on an endpoint, like the following:
# ./splunk stop
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
Stopping splunk helpers...
Done.
# ./splunk start
Splunk> 4TW
Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Cannot find any source of hashes. Manifest file '(null)' not present?
Problems were found, please review your files and move customizations to local
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done
Is this just an annoying error or are there real problems when the manifest is missing? Our UNIX operations team tells me that they install the UF via a package manager, and claim that there is no manifest present on the systems.
Thanks!
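For reference (a guess, not a definitive answer): on tar-style installs the file that check seems to look for sits in the installation root with a name like splunkforwarder-&lt;version&gt;-&lt;build&gt;-&lt;platform&gt;-manifest, so something like the following would at least show whether it exists at all:
ls $SPLUNK_HOME/splunkforwarder-*-manifest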
↧
↧
Splunk Add-on for Imperva SecureSphere WAF -- Help with generating fields
After installing the add-on, the Imperva fields are not being generated. The only thing that was added is the tag. How do I get it to generate the extra fields?
↧
How can I create a column that counts how many Field Bs there are per Field A?
I have this:
search... | stats values(interfaces) AS Interfaces by circuit
![alt text][1]
Thank you in advance!
[1]: /storage/temp/215586-cusersv907863documents3.jpg
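A minimal sketch of one way to add such a count, assuming Field A is circuit and Field B is interfaces as in the search above (dc() gives the distinct count of interfaces per circuit):
search... | stats values(interfaces) AS Interfaces dc(interfaces) AS Interface_Count by circuit
Using count(interfaces) instead of dc(interfaces) would count every occurrence rather than distinct values.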
↧
When I send JSON data from Kafka to Splunk over TCP it shows up as {"event":{"a":"b"}}
I am trying to send JSON data consumed from Kafka to Splunk forwarders over TCP.
- I send JSON data from Kafka, e.g. {"a": "b"}, over TCP (I have a module that sends JSON to TCP port 9999).
- It is consumed by the universal forwarder, which then forwards the data to Splunk.
When I search this data in Splunk it shows up as {"event":{"a":"b"}}
**Why is the JSON getting wrapped inside "event", and how can I avoid it?**
splunkforwarder/etc/system/local/inputs.conf
[tcp://9999]
disabled = 0
_TCP_ROUTING = index1
sourcetype = fromLocal
splunkforwarder/etc/system/local/outputs.conf
[tcpout:index1]
server=xx.xxx.xxx.xxx:9997
Splunk version: 6.6.2
UniversalForwarder version: 6.6.2
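For reference, a search-time workaround sketch (it does not change what gets indexed, and assumes the indexed events really are the literal string {"event":{"a":"b"}}): spath can pull the inner JSON back out and extract its fields:
sourcetype=fromLocal
| spath path=event output=payload
| spath input=payload
The first spath copies the object under "event" into a payload field; the second extracts a=b and any other inner keys from it.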
↧
↧
split event into multiple events using SPL
Hello, a beginner question. I have a search query that produces a single JSON event such as this:
{
Error/type/0 : type_value0
Error/type/1 : type_value1
Error/type/2 : type_value2
Error/desc/0 : desc_value0
Error/desc/1 : desc_value1
Error/desc/2 : desc_value2
Error/logfile/0 : file_value0
Error/logfile/1 : file_value1
Error/logfile/2 : file_value2
}
I want to transform this into a table like this (end output):
# type desc logfile
0 type_value0 desc_value0 file_value0
1 type_value1 desc_value1 file_value1
2 type_value2 desc_value2 file_value2
I'm thinking that splitting the input JSON event into multiple small events could help me get to the end result, or maybe there is a better way.
How do I achieve this?
Thanks in advance for your help.
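A sketch of one possible way to do this, assuming the key/value pairs can be pulled out of _raw with a regex (the regex and field names below are assumptions about the raw text, untested):
... | rex field=_raw max_match=0 "(?<pair>Error/\w+/\d+\s*:\s*[^\r\n,}]+)"
| mvexpand pair
| rex field=pair "Error/(?<key>\w+)/(?<num>\d+)\s*:\s*(?<value>.+)"
| eval value=trim(value)
| xyseries num key value
mvexpand turns each key/value pair into its own row, and xyseries then pivots those rows into one row per index with a column per key (desc, logfile, type); a final table command can reorder the columns.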
↧
How do I send Cisco Meraki FW logs?
I am trying to send logs from a Cisco Meraki FW to our Splunk instance. There is no universal forwarder on the FW. Can I still have the logs sent to Splunk? Would they come in on port 514 or 9997?
Thank you
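For what it's worth, 9997 is the default Splunk-to-Splunk forwarding port, so with no forwarder on the device the usual pattern is a syslog-style input instead. A minimal inputs.conf sketch on a heavy forwarder or indexer (port and sourcetype are assumptions, not necessarily what a Meraki add-on expects):
[udp://514]
sourcetype = meraki_syslog
connection_host = ip
disabled = 0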
↧
Why is my sourcetype on the indexer when I import a JSON file?
I am trying to import a JSON file into Splunk Enterprise; my sourcetype is below:
CHARSET=UTF-8
INDEXED_EXTRACTIONS=json
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=true
TIMESTAMP_FIELDS=timestamp
Below is also an example of the JSON file format:
"cve" : {
"CVE_data_meta" : {
"ID" : "CVE-2011-3177"
},
"affects" : {
"vendor" : {
"vendor_data" : [ ]
}
},
"problemtype" : {
"problemtype_data" : [ {
"description" : [ ]
} ]
},
"references" : {
"reference_data" : [ {
"url" : "https://bugzilla.suse.com/show_bug.cgi?id=713661"
}, {
"url" : "https://github.com/yast/yast-core/commit/7fe2e3df308b8b6a901cb2cfd60f398df53219de"
} ]
},
"description" : {
"description_data" : [ {
"lang" : "en",
"value" : "The YaST2 network created files with world readable permissions which could have allowed local users to read sensitive material out of network configuration files, like passwords for wireless networks."
} ]
}
},
"configurations" : {
"CVE_data_version" : "4.0",
"nodes" : [ ]
},
"impact" : { },
"publishedDate" : "2017-09-08T18:29Z",
"lastModifiedDate" : "2017-09-08T18:29Z"
},
Question: the sourcetype is on the indexer; do you have any idea what is wrong?
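For comparison, a minimal props.conf sketch for structured JSON like this (the stanza name is a placeholder; SHOULD_LINEMERGE is usually turned off when INDEXED_EXTRACTIONS does the event breaking, and the sample above has publishedDate/lastModifiedDate rather than a field literally named timestamp):
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
CHARSET = UTF-8
TIMESTAMP_FIELDS = publishedDate
TIME_FORMAT = %Y-%m-%dT%H:%MZ
Also worth checking: if a universal forwarder is the instance actually reading the file, INDEXED_EXTRACTIONS has to be in props.conf on that forwarder, not only on the indexer, which may be why a sourcetype defined only on the indexer doesn't seem to take effect.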
↧
How can I turn this JSON event into a table with various fields?
Hello, a beginner question. I have a search query that produces a single JSON event such as this:
{
Error/type/0 : type_value0
Error/type/1 : type_value1
Error/type/2 : type_value2
Error/desc/0 : desc_value0
Error/desc/1 : desc_value1
Error/desc/2 : desc_value2
Error/logfile/0 : file_value0
Error/logfile/1 : file_value1
Error/logfile/2 : file_value2
}
I want to transform this into a table like this (end output):
# type desc logfile
0 type_value0 desc_value0 file_value0
1 type_value1 desc_value1 file_value1
2 type_value2 desc_value2 file_value2
I'm thinking that splitting the input JSON event into multiple small events could help me get to the end result, or maybe there is a better way.
How do I achieve this?
Thanks in advance for your help.
↧
↧
How to display calculated fields as part of the same graph
Hello, I'm attempting to display three calculated fields (total_meeting_hours, total_use_no_meeting_hours, and hours_not_in_use) as a part of a pie chart. Each of these fields should represent a calculated portion of the total pie chart, with their values given on mouse over and their field names showing as their labels for their respective portions on the graph.
I'm currently attempting to achieve this (seen below) by using mvappend to create a multi-value field (display_val) and then splitting on that field in the chart function. However, when splitting by this field, it displays the numeric time values for each of the fields instead of the field names (i.e., "11.023" instead of "total_meeting_hours"). How would I go about accomplishing this? I attempted to create a separate multi-value field and combine it with the other for labeling, but have so far been unable to do so. I'd greatly appreciate help!
| convert num(SM_C.value.data.elapsedTime) as use_duration num(SM_C.value.data.MeetingElapsedTime) as meeting_duration
| where (use_duration < 50000 and use_duration > 0)
| eval meeting_duration = if(meeting_duration < use_duration, meeting_duration, meeting_duration = 0)
| stats sum(use_duration) as total_duration sum(meeting_duration) as total_meeting_duration
| eval total_meeting_hours = (total_meeting_duration / 60 / 60)
| eval total_use_no_meeting_hours = (total_duration / 60 / 60) - total_meeting_hours
| eval hours_not_in_use = (168) - (total_duration / 60 / 60)
| eval display_val=(mvappend(hours_not_in_use, total_use_no_meeting_hours, total_meeting_hours))
| eval display_label=(mvappend("hours_not_in_use", "total_use_no_meeting_hours", "total_meeting_hours"))
| mvexpand display_val
| mvexpand display_label
| chart eval(sum(display_val) / 168) as Percentage by display_val
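A sketch of one alternative to the mvappend/mvexpand approach: keep the three totals as fields on a single row and transpose them, so the field names themselves become the category labels (this reuses the eval logic above and is untested against the real data):
| stats sum(use_duration) as total_duration sum(meeting_duration) as total_meeting_duration
| eval total_meeting_hours = total_meeting_duration / 3600
| eval total_use_no_meeting_hours = (total_duration / 3600) - total_meeting_hours
| eval hours_not_in_use = 168 - (total_duration / 3600)
| fields total_meeting_hours total_use_no_meeting_hours hours_not_in_use
| transpose column_name=category
| rename "row 1" AS hours
The result is one row per field with its name in category and its value in hours, which a pie chart can split on directly (or divide hours by 168 for percentages).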
↧
Is it possible to set a timestamp to year value only?
Hey everyone, I know Splunk is only for machine data, but I was trying to use it for some other non-machine data that only provides the year as the timestamp. Is there any way to configure the timestamp to only use the year format? No month, day, hour, or the like. I was looking at editing the props.conf file, but I'm not really sure what I would put in the time format section. Could someone help me figure this out, or let me know if it is impossible?
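A minimal props.conf sketch of the kind of thing that could go in the time format section, assuming the 4-digit year appears at the very start of each event (the sourcetype name is a placeholder):
[year_only_data]
TIME_PREFIX = ^
TIME_FORMAT = %Y
MAX_TIMESTAMP_LOOKAHEAD = 4
SHOULD_LINEMERGE = false
Splunk still stores a full timestamp internally, so the remaining month/day/time components get filled in with defaults; only the year comes from the data.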
↧
Accidentally removed the admin role's capabilities, now my admin account won't work.
While trying to create another admin role, somehow I removed all the capabilities from the original admin role. Now I cannot do anything as admin.
Is there anything I can do as root on the splunk server?
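One thing that can be checked as root (a sketch, not a guaranteed fix): whether the capability removal was written as an override under etc/system/local, which takes precedence over the shipped defaults:
$SPLUNK_HOME/bin/splunk btool authorize list role_admin --debug
If that shows a [role_admin] stanza coming from $SPLUNK_HOME/etc/system/local/authorize.conf, editing or removing that stanza and restarting splunkd should let the defaults from etc/system/default/authorize.conf apply again.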
↧
Splunk Deployment Migration
We are migrating datacenters and the current virtual deployment server has been replicated to the new facility. I can bring it up and change the IP and hostname, but is there a central way to redirect existing universal forwarders to the newly IP'ed deployment server? Most suggestions I've seen online are outdated and end up saying to do it manually anyway.
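One pattern that is sometimes used (a sketch; the hostname is a placeholder): before cutover, push an app from the old deployment server whose deploymentclient.conf points at the new one, so the forwarders re-point themselves centrally:
# deploymentclient.conf inside an app deployed by the old deployment server
[deployment-client]
[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089
The caveat is precedence: a targetUri already set in etc/system/local on each forwarder would win over the pushed app, so it's worth checking where the current setting lives.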
↧
↧
Where can I find the internal logs of the service that is running Splunk version 5.0.1?
Hi,
I'm trying to find the logs in the var/log/splunk/ folder to check the errors and warnings, but on the older Splunk 5.0.1 version I'm not able to find any of the logs. Can anyone please help me at this point?
Sathish
↧
Eliminating rows from stats output
I created the following search to audit the changes made to our network infrastructure:
`(index=ise Protocol=Tacacs MESSAGE_CODE=5202) OR (index=acs process="Tacacs-Accounting" MESSAGE_CODE=3300)`
`| rex field=CmdSet mode=sed "s/^\[(?: )?|CmdAV= ?\]?|CmdArgAV=(?:)?|(?:)?\s\]//g"`
`| where CmdSet!=""`
`| lookup dnslookup clientip AS Address OUTPUT clienthost AS Device`
`| eval Device=(if(isnull(Device),Address,Device)), Time=strftime(_time,"%H:%M:%S")`
`| eval Date=strftime(_time, "%m")."-".date_mday."-".date_year`
`| stats list(CmdSet) AS Command, list(Time) AS Time BY Date,User,Device`
Here's some sample output:
Date User Device Command Time
09-14-2017 admin access-switch switchport access vlan 600 13:13:32
interface GigabitEthernet 1/0/26 13:13:25
no shutdown 13:13:57
shutdown 13:13:56
09-14-2017 admin core-router transfer upload start 17:36:08
transfer upload password 17:36:08
transfer upload username transfer 17:36:08
transfer upload filename core-router-confg 17:36:07
transfer upload serverip 10.10.10.1 17:36:07
transfer upload datatype config 17:36:07
transfer upload port 21 17:36:06
transfer upload mode ftp 17:36:06
There are a couple of issues I'm really struggling with:
1. I would like to eliminate rows from the stats output where the Command starts with 'transfer upload' or any number of other command snippets. I have spent the day trying various techniques like `|where`, but I can't seem to figure out how to eliminate these rows.
2. I can't figure out how to sort the rows by Time. When I use the `sort` command, I lose all of the grouping and it becomes table output. Is there a way to sort the Commands in the stats output based on the Time column (also preserving the value in the Time column)?
3. There are some rows where the list() limit of 100 is a factor. Is there a better way to construct this search to work around that limit (as opposed to increasing the limit)? I tried using values(), but I seem to lose the relationship between the Command and Time fields.
Really struggling here, thanks.
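For questions 1 and 2, a rough sketch of one approach (untested against this data): filter the unwanted commands and sort before the stats, since list() keeps the order in which events reach it:
`(index=ise Protocol=Tacacs MESSAGE_CODE=5202) OR (index=acs process="Tacacs-Accounting" MESSAGE_CODE=3300)`
`| rex field=CmdSet mode=sed "s/^\[(?: )?|CmdAV= ?\]?|CmdArgAV=(?:)?|(?:)?\s\]//g"`
`| where CmdSet!="" AND NOT match(CmdSet, "^transfer upload")`
`| sort 0 _time`
`| lookup dnslookup clientip AS Address OUTPUT clienthost AS Device`
`| eval Device=(if(isnull(Device),Address,Device)), Time=strftime(_time,"%H:%M:%S")`
`| eval Date=strftime(_time, "%m")."-".date_mday."-".date_year`
`| stats list(CmdSet) AS Command, list(Time) AS Time BY Date,User,Device`
match() takes a regex, so other snippets can be OR'd in, e.g. ^(transfer upload|show run) (the second snippet is just an example); sort 0 removes the default 10,000-row sort limit.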
↧
Is it possible to set a timestamp to year value only?
Hey everyone, I know Splunk is only for machine data, but I was trying to use it for some other non-machine data that only provides the year as the timestamp. Is there any way to configure the timestamp to only use the year format? No month, day, hour, or the like. I was looking at editing the props.conf file, but I'm not really sure what I would put in the time format section. Could someone help me figure this out, or let me know if it is impossible?
↧
Is there an easy way to redirect existing universal forwarders to a new Splunk deployment?
We are migrating datacenters and the current virtual deployment server has been replicated to the new facility. I can bring it up and change the IP and hostname, but is there a central way to redirect existing universal forwarders to the newly IP'ed deployment server? Most suggestions I've seen online are outdated and end up saying to do it manually anyway.
↧
↧
Where can I find the internal logs in the Splunk 5.0.1 file directory?
Hi,
I'm trying to find the logs in the var/log/splunk/ folder to check the errors and warnings, but on the older Splunk 5.0.1 version I'm not able to find any of the logs. Can anyone please help me at this point?
Sathish
↧
Stats table manipulation
I created the following search to audit the changes made to our network infrastructure:
`(index=ise Protocol=Tacacs MESSAGE_CODE=5202) OR (index=acs process="Tacacs-Accounting" MESSAGE_CODE=3300)`
`| rex field=CmdSet mode=sed "s/^\[(?: )?|CmdAV= ?\]?|CmdArgAV=(?:)?|(?:)?\s\]//g"`
`| where CmdSet!=""`
`| lookup dnslookup clientip AS Address OUTPUT clienthost AS Device`
`| eval Device=(if(isnull(Device),Address,Device)), Time=strftime(_time,"%H:%M:%S")`
`| eval Date=strftime(_time, "%m")."-".date_mday."-".date_year`
`| stats list(CmdSet) AS Command, list(Time) AS Time BY Date,User,Device`
Here's some sample output:
Date User Device Command Time
09-14-2017 admin access-switch switchport access vlan 600 13:13:32
interface GigabitEthernet 1/0/26 13:13:25
no shutdown 13:13:57
shutdown 13:13:56
09-14-2017 admin core-router transfer upload start 17:36:08
transfer upload password 17:36:08
transfer upload username transfer 17:36:08
transfer upload filename core-router-confg 17:36:07
transfer upload serverip 10.10.10.1 17:36:07
transfer upload datatype config 17:36:07
transfer upload port 21 17:36:06
transfer upload mode ftp 17:36:06
There are a couple of issues I'm really struggling with:
1. I would like to eliminate rows *after* the stats command where the Command starts with 'transfer upload' or any number of other command snippets. I have spent the day trying various techniques like `|where`, but I can't seem to figure out how to eliminate these rows. I realize I can do this with a regex before the stats, but I'm trying to learn some more advanced techniques.
2. I can't figure out how to sort the rows by Time. When I use the `sort` command, I lose all of the grouping and it becomes table output. Is there a way to sort the Commands in the stats output based on the Time column (also preserving the value in the Time column)?
3. There are some rows where the list() limit of 100 is a factor. Is there a better way to construct this search to work around that limit (as opposed to increasing the limit)? I tried using values(), but I seem to lose the relationship between the Command and Time fields.
Really struggling here, thanks.
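Since the goal here is to filter after the stats, a sketch of one way to do it (untested): zip Command and Time together so each pair can be expanded, filtered, sorted, and re-grouped without losing the pairing. This assumes the commands themselves never contain the | delimiter:
`... | stats list(CmdSet) AS Command, list(Time) AS Time BY Date,User,Device`
`| eval pair=mvzip(Time, Command, "|")`
`| mvexpand pair`
`| eval Time=mvindex(split(pair,"|"),0), Command=mvindex(split(pair,"|"),1)`
`| where NOT match(Command, "^transfer upload")`
`| sort 0 Date, User, Device, Time`
`| stats list(Command) AS Command, list(Time) AS Time BY Date,User,Device`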
↧
Field showing an additional, not visible value --"none"-- under the timestamp field
Hi all,
I have a problem with a field called "timestamp".
I have created a custom Python script and added it as a data input. The script is executed every 5 minutes; it makes an API call, parses the JSON response, and sends it to the indexer.
This is a sample raw event:
{"rev_pingdeath_count": 0, "fwd_tiny_count": 0, "dst_address": "X.X.X.X", "timestamp": "2017-09-15T16:05:00.000Z", "start_timestamp": "1505491512000000", "fwd_cwr_count": 0, "user_src_location": null, "rev_synrst_count": 0, "fwd_xmas_count": 0, "server_app_latency_usec": 0, "rev_rst_count": 0, "dst_is_internal": "true", "user_dst_businessUnit": null, "fwd_bytes": 1022, "fwd_synfin_count": 0, "rev_ack_count": 8, "user_dst_pod": null, "total_perceived_latency_usec": 0, "bandwidth_bytes_per_second": "0", "user_src_department": null, "user_src_businessUnit": null, "rev_psh_count": 4, "dst_hostname": "appServerXXXX", "user_src_lifecycle": null, "rev_pkts": 8, "fwd_nc_count": 0, "src_address": "Y.Y.Y.Y", "rev_finnoack_count": 0, "user_src_pod": null, "dst_enforcement_epg_name": [], "rev_nc_count": 0, "rev_cwr_count": 0, "user_src_datacenter": null, "fwd_synrst_count": 0, "fwd_ack_count": 7, "srtt_available": "SRTT_NONE", "rev_allzero_count": 0}
There is only one timestamp field on each event, as far as I have been able to see, but when I do a **index=main source=mysource | head 1 | table timestamp** I get the following data:
![alt text][1]
Where is the **none** value coming from? This **none** is present in every single event.
Splunk version 6.5.2, single instance.
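A small diagnostic sketch that might help narrow it down: checking whether timestamp is really arriving as a multivalue field (and with how many values) rather than being a display artifact:
index=main source=mysource | head 1
| eval timestamp_count=mvcount(timestamp)
| table timestamp timestamp_count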
Thanks and regards,
[1]: /storage/temp/214602-screen-shot-2017-09-15-at-31427-pm.png
↧