Channel: Questions in topic: "splunk-enterprise"

How do I enable Get-WindowsUpdateLog PowerShell as requested in the docs?

All, any more guidance here? Is there an input I need to enable that I'm missing? Are they expecting this to run as a scheduled task, or does a TA handle it? At what interval? I'm not a Windows guy, so I'm a little confused about the ask. The relevant comments from the add-on's inputs.conf read:

    ###### Windows Update Log ######
    # Windows update logs for newer Windows OS like Windows 10 and Windows Server 2016 are generated using ETW (Event Tracing for Windows).
    # Please run the Get-WindowsUpdateLog PowerShell command to convert ETW traces into a readable WindowsUpdate.log.
    # Further to index data of the log file thus generated, you may also require to change the location mentioned
    # in the below monitor stanza to provide the path where you have generated the file after conversion.
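Is the intent something like the sketch below? That is, a scheduled task (interval of your choosing, e.g. daily) runs Get-WindowsUpdateLog to write the converted WindowsUpdate.log to some path, and a monitor stanza in inputs.conf picks that file up. The path, index, and sourcetype here are placeholders I made up, not values from the TA:

    [monitor://C:\SplunkLogs\WindowsUpdate.log]
    disabled = 0
    sourcetype = WindowsUpdateLog
    index = windows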

Installing the Splunk App for Windows Infra and getting an error about my TA version?

All, I just installed the latest Splunk App for Windows Infrastructure and the latest Splunk_TA_Windows. When I go through the guided setup I get this: "Update required: v5.0.0 installed. It does not match with v4.8.3 or 4.8.4". Is this just the version check in the installer being out of date, or a bug I need to worry about?

Splunk Add-on for Cisco UCS is not collecting data

Hello everyone, I have installed the Splunk Add-on for Cisco UCS Manager and configured the managers, template, and task as described in the Splunk documentation; however, no data is being collected on the Splunk side. I'm using Splunk version 7.0.1 and Cisco UCS add-on version 2.0.3. The error message is below:

    2018-08-21 15:33:55,569 ERROR 139727937750784 - Traceback (most recent call last):
      File "/opt/splunk/etc/apps/Splunk_TA_cisco-ucs/bin/ta_util2/thread_pool.py", line 180, in _run
        job()
      File "/opt/splunk/etc/apps/Splunk_TA_cisco-ucs/bin/cisco_ucs_job_factory.py", line 26, in __call__
        results = self._func()
      File "/opt/splunk/etc/apps/Splunk_TA_cisco-ucs/bin/cisco_ucs_data_loader.py", line 90, in collect_data
        self._connect_to_ucs()
      File "/opt/splunk/etc/apps/Splunk_TA_cisco-ucs/bin/cisco_ucs_data_loader.py", line 206, in _connect_to_ucs
        raise Exception(msg)
    Exception: Failed to get cookie for xxxxdc2sapsfi01.ent.xxxxxx.net, errcode=551, reason=Authentication failed

I have also tried pulling data from a UCS Manager in a different environment with exactly the same configuration, on Splunk version 6.6 and Cisco UCS add-on version 2.0.2, and there I am able to load the data into Splunk. Please help me with this issue. Thanks.

Python SDK: StreamingCommand only returns data for fields that appear in the first record.

I'm writing a search command using the Splunk Python SDK to pull data from an external API into search results. The goal is to add fields to each record based on the data returned from the API. Example: `search ... | CUSTOM_COMMAND source_ip` outputs the search results enriched with data from the API.

The external API returns different fields depending on the query; for example, one query could return fields A, B, and C, while another could return only A and B. Because of this, different records can have different fields. I make the Splunk field name whatever the key of the API data is. For example, if the API returns {'keyA': 'valueA', 'keyC': 'valueC'}, then new fields called keyA and keyC are added to the Splunk record and returned to the search.

*Here is the issue...* it appears that if Splunk doesn't see a key in the first result, it won't show that key for any of the later results, even if a value was added to that key. If the first record comes back with fields keyA and keyC added from the external API call, then I can see values for keyA and keyC on any later records that have them. *However*, if a record further down the search results has a value added to a field named keyB, that value **will not** be displayed in the results; keyB will be blank for all results unless there is some value for keyB in the first record. If I manually add some junk value to keyB in the first record, all records below it that are supposed to have a value for keyB will display that value.

I've been operating under the assumption that Splunk doesn't really care about records having different fields, but I'm not sure what to think of this... Am I misunderstanding something about how Splunk operates? Please let me know what I can clarify.

Trouble with UTC time

I have some search results that return values in the format %Y-%m-%d %H:%M:%S. For example:

    ...some search... | table UpdateTime

This would yield the following table:

    UpdateTime
    -------------------
    2018-06-06 13:49:28
    2017-12-22 08:23:21

I know for a fact that the time string is in UTC, not my local time. All I need to do is display the number of minutes that have elapsed between that UTC string and the time the event was recorded (_time). Everything I try keeps giving me negative numbers for recent events, I assume because it is treating the UpdateTime field as local time, not UTC. Can anyone help me?
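One approach I've been experimenting with (not sure it's the right one) is to append an explicit UTC offset before parsing, so strptime doesn't assume local time. The `update_epoch` and `minutes_elapsed` field names are just ones I picked:

    ...some search...
    | eval update_epoch = strptime(UpdateTime." +0000", "%Y-%m-%d %H:%M:%S %z")
    | eval minutes_elapsed = round((_time - update_epoch) / 60, 1)
    | table UpdateTime minutes_elapsed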

Show two plots on a chart with different values?

Hello All, I have some data coming in from NetApp that shows snapshot name and snapshot volume used. I need to show all the volume names/space used from 48 hours ago on top of the values from 24 hours ago. The goal is to show the growth of our snapshots each day. My base search is:

    sourcetype=dbx3_netapp_vault_utilization

This returns the data below (server/host names redacted): ![alt text][1] I need the chart to show the name with Volume Used at 24h and 48h on the same line graph... any help?

[1]: /storage/temp/255763-2018-08-21-14-35-04.png
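Is the right idea something like bucketing each event into a 24h-vs-48h age group and charting over the volume name, as in the sketch below? The field names `name` and `volumeUsed` are my guesses at what the add-on extracts:

    sourcetype=dbx3_netapp_vault_utilization earliest=-48h
    | eval age=if(_time >= relative_time(now(), "-24h"), "last_24h", "prior_24h")
    | chart latest(volumeUsed) over name by age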

How to plot multiple values on a single line chart

Hi All. I run the search below:

    sourcetype=dbx3_netapp_vault_utilization

It returns the results below (names redacted): ![alt text][1] I need to create a line chart that shows the "name" and "volumeUsed" from 48 hours ago compared to 24 hours ago so we can trend our snapshot size.

[1]: /storage/temp/255764-2018-08-21-14-35-04.png
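Would something like timechart plus timewrap work here? This is only a sketch, assuming the add-on extracts fields called `name` and `volumeUsed`:

    sourcetype=dbx3_netapp_vault_utilization earliest=-48h
    | timechart span=1h max(volumeUsed) by name
    | timewrap 1d

The idea is that this lays the most recent 24 hours over the previous 24 hours as separate series per volume.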

Splunk alert and shutting down a physical port on a switch

Has anyone used Splunk to act on an alert and shut down a physical port on a switch? This would require running a script when an alert is triggered. I just want to reach out to the community and see if something like this has been done already.

How do I embed the Splunk SSL cert in a client application?

My program has a variable, a string that contains the Splunk PEM certificate. For every request sent to the API, my program verifies that the cert being presented by the Splunk server is the one embedded in the application. The error message I get is:

    Couldn't login to splunk: Post https://10.0.0.18:8089/services/auth/login: x509: cannot validate certificate for 10.0.0.18 because it doesn't contain any IP SANs

What am I missing here? There must be something about the Splunk architecture related to certificates that I am missing. BTW, this is a free Splunk server I am running at home.

How to edit ps.sh to limit which processes are ingested by the Splunk Add-on for Unix and Linux

Hello, I'm trying to get only certain server processes ingested into a Splunk index by editing the ps.sh script from the Splunk Add-on for Unix and Linux, adding a grep to the command as below. However, I'm getting errors like "ERROR: Unsupported option (BSD syntax)" or "ERROR: Garbage option."

My edit:

    CMD='ps auxww|grep nc'

Could someone please point me to documentation on how to add a grep here, or give some guidance on how to get this ps.sh script to work? Thank you.

Performing Sum Calculation when Field values are combined

First problem: the fields are already extracted (they show up under Interesting Fields). I'm trying to combine events where **Account** and **RequestorCode** have identical values, and I need help getting the *sum of the ElapsedTime values* when the data is combined. Below is my query:

    index= | eval Service= case(Service = "X", "TEST", Service = "Y", "TEST") | table _time Service Account ElapsedTime RequestorCode

Sample data:

    Account: 123  Service: X  ElapsedTime: 80.0ms  RequestorCode: XX1
    Account: 123  Service: Y  ElapsedTime: 20.0ms  RequestorCode: XX1

Desired output:

    Account: 123  Service: Z  ElapsedTime: 100.0ms  RequestorCode: XX1

Second problem: I need help showing the row with *the highest elapsed time* when **Account** and **RequestorCode** have identical values. Sample data:

    Account: 123  Service: A  ElapsedTime: 70.0ms  RequestorCode: XX1
    Account: 123  Service: B  ElapsedTime: 50.0ms  RequestorCode: XX1

Desired output:

    Account: 123  Service: A  ElapsedTime: 70.0ms  RequestorCode: XX1
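For reference, here is a rough sketch of the kind of answer I'm after for the first problem, assuming ElapsedTime needs its "ms" suffix stripped first; `elapsed_ms` is just a name I made up:

    ... base search ...
    | rex field=ElapsedTime "(?<elapsed_ms>[\d.]+)"
    | stats sum(elapsed_ms) as total_elapsed_ms by Account RequestorCode

For the second problem, I'm guessing the stats line gets replaced with something like `| sort 0 -elapsed_ms | dedup Account RequestorCode` to keep only the highest row per Account/RequestorCode pair — is that right?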

Have Alert Check Three Times before Sending Email

Currently, we are trying to set up an alert for our AWS instances to report when CPU is >= 90%. What we want to happen is: once Splunk sees this, it tests two more times (waiting a shorter interval between checks), and only then sends the actual alert. It continues this pattern until the alert clears.

Example: the alert is scheduled via cron to run at the 45-minute mark of every hour. At 10:00am, Splunk sees a server sitting at 91%. At this point it would not send an alert, but would wait 5 minutes and check again; if the server is still at 91%, it still does not alert. On the third check, after another 5 minutes, if the results are still the same, Splunk would send the alert to the requested email. This process repeats until the alert clears.

When creating an alert I found the Throttle option, and I was thinking that maybe if we set the schedule for every 45 minutes, then once it sees the error and is throttled for 10 minutes or so, the alert would be sent out after the throttle and then throttled again for another 10 minutes. (Please let me know if that makes sense, or if Throttle only suppresses while active but does not cause Splunk to check again after the throttle has been engaged.)
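One idea I've been toying with, sketched below: run the alert every 5 minutes and only fire when three consecutive 5-minute buckets are all over the threshold. The base search and the `cpu_percent` field name are placeholders for whatever our AWS CPU data actually looks like:

    ... your CPU search ... earliest=-15m
    | bin _time span=5m
    | stats avg(cpu_percent) as avg_cpu by _time host
    | where avg_cpu >= 90
    | stats count by host
    | where count >= 3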

Why does the PDF Exporter work ok on Windows laptop but not while installed on Linux?

We are trying to use the Smart PDF Exporter to generate PDF reports from our Splunk instance. When we install the app on Splunk 7.0.4 running on Linux, we experience several issues.

1. When we add the Smart Export panel to our dashboards, all actions on the dashboard become extremely slow. Things like scrolling down or even clicking Save take minutes. Even when the dashboard renders, it takes several minutes longer to complete.
2. The PDF that gets generated is not formatted correctly. It's even worse than the standard Export PDF. I've tried Firefox, Chrome, IE, and Edge, and they all look bad.

The instructions say that PhantomJS is only needed to schedule PDF generation. Is that correct, or is it also needed elsewhere depending on the installation? When I look at the log files suggested in the instructions, they are empty. Is there something that needs to be done to turn on logging to those files? Does anyone know any other tips for debugging this? I installed the app on a local instance running Windows and the results were great, so I was excited to perhaps be able to generate a PDF that keeps the formatting and coloring of the dashboard.

Does Anyone Have Field Definitions for Cisco IOS Technology Add-On?

We have been asked to provide definitions for the following field names for events produced by parsing Cisco switch logs with the Cisco IOS TA. I realize that some field names are self-explanatory but does anyone have a 'key' that defines what all or most of the field names below mean? Thanks. NetAdapter SwitchModule SwitchPort VMServer _raw _time action ap_mac app as_number authenticator bytes cdp_local_interface cdp_local_vlan cdp_neighbor cdp_remote_interface cdp_remote_vlan chaddr change_type config_source correlation_tag date_hour date_mday date_minute date_month date_second date_wday date_year date_zone dest dest_int dest_interface dest_ip dest_mac dest_port dest_vlan detected_on_interface device_time direct_ap_mac disable_cause dvc dvportID enabled event_id eventtype facility filename filename_line host icmp_code icmp_code_id icmp_type ids_type index line linecount message_text message_type mnemonic mode neighbor num_packets object_category packets port_status process_id product proto protocol proxy_action punct range reason reliable_time reported_hostname rule severity severity_description severity_id severity_id_and_name severity_name source sourcetype spanning_tree_instance speed splunk_server splunk_server_group src src_int src_int_prefix src_int_prefix_long src_int_suffix src_interface src_interface_description src_ip src_mac src_port src_vlan state_to status subfacility switch_id tag tag::app tag::eventtype time_of_day timeendpos timestartpos transport type unit user user_type vendor vendor_action vendor_category vendor_explanation vendor_message_text vendor_recommended_action vlan_id

Splunk ES Incident dashboard not working with Splunk Enterprise 7.1.2

We upgraded Splunk Enterprise from version 7.0 to 7.1.2 on a search head that has Splunk ES version 4.7.2. After the upgrade, we notice that the Incident Review dashboard doesn't work as expected. If we upgrade Splunk ES, will this be fixed? Also, if we plan on upgrading, should we do Splunk ES first and then Splunk Enterprise? What should the sequence for these upgrades be? Thanks!

How to calculate the difference between two fields from different sources?

Hi All. How do I get the difference between two fields from different sources — for example, find what is contained in one source but not in another? This is AV (antivirus) data. Example:

    source = AV_X
    HostName = Server01
    HostName = Server02
    HostName = Server03
    HostName = Server04
    HostName = Server05

    source = AV_Y
    CompName = Server01A
    CompName = Server02
    CompName = Server03
    CompName = Server04
    CompName = Server08A

    source = AV_Z
    cName = Server01A
    cName = Server02
    cName = Server03B
    cName = Server04B
    cName = Server05

Thank you in advance.
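Is the right approach something like normalizing the three host fields into one and counting how many sources each host appears in? A rough sketch of what I mean, using the source and field names above:

    (source=AV_X OR source=AV_Y OR source=AV_Z)
    | eval host_name=coalesce(HostName, CompName, cName)
    | stats values(source) as sources dc(source) as source_count by host_name
    | where source_count < 3

The idea being that anything with source_count below 3 is missing from at least one AV source.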

BMC Remedy API to pull asset information?

Hi All, I need help pulling asset information from BMC Remedy. We tried using the REST API Modular Input add-on, but no luck yet. If we use Postman, we are able to pull the asset information. Has anyone used the REST API Modular Input add-on for this? Thanks in advance.

How to use an if condition in Splunk?

I want to create a query in Splunk to monitor logs with the logic below. Can someone help me with the logic? If “TAG=” and “ABC-??? WHERE ??? IS NOT ”, THEN trigger an email alert...
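To make the ask more concrete, here is a rough sketch of the shape of search I think I need, saved as an alert with an email action. The `abc_code` field name and the angle-bracket placeholders are mine, not from our actual data:

    index=<your_index> "TAG="
    | rex "ABC-(?<abc_code>\S+)"
    | where isnotnull(abc_code) AND abc_code!="<excluded_value>"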

Why do I get no value in Country when using iplocation?

Hi, with the query below I am able to list the country and request count split by response-time bucket:

    wall_time != NULL client_ipaddress != NULL
    | iplocation client_ipaddress
    | eval Latency=case(wall_time<500, "0-0.5s", wall_time>=500 AND wall_time<1000, "0.5s-1s", wall_time>=1000 AND wall_time<3000, "1s-3s", wall_time>=3000 AND wall_time<6000, "3s-6s", THREAD_WALL_MS>=4000 AND wall_time<10000, "6s-10s", wall_time>=10000 AND wall_time<30000, "10s-30s", wall_time>=30000, ">=30s")
    | chart span=1w count as RequestCount over Country by Latency
    | sort -RequestCount, -Latency

But the query returns one row with no value for the Country field. Why is that? Am I missing something? ![alt text][1]

[1]: /storage/temp/255768-screen-shot-2018-08-22-at-112855-am.png
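One thing I'd like to confirm: is the blank row simply the addresses that iplocation cannot resolve (for example, internal/private IP ranges)? A quick sketch I was going to run to check — the "unknown" label is just something I picked:

    wall_time != NULL client_ipaddress != NULL
    | iplocation client_ipaddress
    | fillnull value="unknown" Country
    | stats count by Country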

Lookup table compare with new events

I pulled all the errors and created a lookup table from them. I'm thinking of creating a job that will take the last 5 minutes of errors and compare them with the errors in the lookup table; if an error doesn't match, it will trigger an alert (i.e., it finds errors that are new relative to the existing ones). Can we do this with a Splunk query? If so, can you please share a sample query?
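For reference, a rough sketch of the kind of query I have in mind — the lookup file name `known_errors.csv` and the field `error_message` are placeholders, since I haven't shared my actual lookup layout:

    index=<your_index> <your error search terms> earliest=-5m
    | stats count by error_message
    | search NOT [| inputlookup known_errors.csv | fields error_message]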