Questions in topic: "splunk-enterprise"

How to install the IBM Websphere MQ Modular Input for Splunk?

Hi folks, how are you? I would like to know how to install the "IBM Websphere MQ Modular Input for Splunk" app, and in which directory I should put the inputs.conf.spec file. Thank you.
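A minimal sketch of the usual layout, assuming the app unpacks into a folder such as mq_ta under $SPLUNK_HOME/etc/apps/ (the actual folder name depends on the download); for a modular input, the inputs.conf.spec file belongs in the app's README directory, while your own input stanzas go in local:

$SPLUNK_HOME/etc/apps/mq_ta/
    bin/                        (the modular input scripts shipped with the app)
    README/inputs.conf.spec     (the .spec file goes here)
    default/inputs.conf         (defaults shipped with the app, if any)
    local/inputs.conf           (your own MQ input stanzas)

After copying the app folder into $SPLUNK_HOME/etc/apps/ (or installing it via "Install app from file" in Splunk Web) and restarting Splunk, the new input type should appear under Settings > Data inputs.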

Dev License Expiring

I logged in and saw a message saying that my developer license would expire at the end of January. How do I go about requesting a renewal, or do I need to request another license?

Upgrade Splunk ES from 5.7.1 to 6.1.0 for Splunk 8.0.1

Hello, we are running Splunk 8.0.1 with Splunk ES 5.7.1 (Python 3 enabled) and everything works fine. We then downloaded Splunk ES 6.1.0 from Splunkbase and tried to upgrade: we clicked "Install app from file", picked the .spl file, and on clicking "Upload" got an error page showing PR_CONNEC_RESET_ERROR. splunkd.log shows this error:
===
01-27-2020 11:00:13.984 -0500 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
01-27-2020 11:00:13.985 -0500 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/SA-Utils/bin/sendmsg.py", line 5, in
01-27-2020 11:00:13.985 -0500 ERROR ChunkedExternProcessor - stderr: from cStringIO import StringIO
01-27-2020 11:00:13.985 -0500 ERROR ChunkedExternProcessor - stderr: ModuleNotFoundError: No module named 'cStringIO'
01-27-2020 11:00:13.989 -0500 ERROR ChunkedExternProcessor - EOF while attempting to read transport header
01-27-2020 11:00:13.990 -0500 ERROR ChunkedExternProcessor - Error in 'sendmsg' command: External search command exited unexpectedly with non-zero error code 1.
===
Splunk ES 6.1.0 is supposed to support Python 3, right? Thanks for your help!

How to convert an Advanced XML file to a Simple XML file?

Hi team, I am in the process of updating Splunk to version 8.0 and Python 3.x. For that, I have to remove all usage of Advanced XML and convert it into Simple XML views. But there are a few dashboards/XML files which seem impossible to convert to Simple XML. Could you please guide me on how I can actually remove the Advanced XML? Any leads would be highly appreciated, because manually converting them to Simple XML seems like a very tedious task. For example:
App name: Splunk Add-on for *Nix
Location: tc_matrix_nix/default/data/ui/views/Setup.xml
Setup.xml: *False1splunk.search.jobTrue1Falseauto
Thanks in advance.

Basic installation/configuration of Maps+ app

I am currently trying to get Maps+ functioning in our environment and have some questions before doing so: Does this app include all visualizations/capabilities in the download? Does the app require internet access to pull additional data? If we decide not to use the API functionality, will the Leaflet plugins be able to do everything as advertised? What are the standard installation steps for getting the app to work? Thanks for any information in advance.

How do I set a sampling ratio for the initial search in Splunk MLTK? Is there a specific SPL command for that?

I am trying to train a clustering model but keep running into the memory limit error because the data is big. I would like to use event sampling, but I am not aware of the command for it. **How do I set a sampling ratio for the initial search in Splunk MLTK? Is there a specific SPL command for that?**
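One way to thin the events before fit, using only core SPL; the index, sourcetype, 0.1 ratio, and feature field names below are placeholders to adjust:

index=your_index sourcetype=your_sourcetype
| eval sample_key=random()/2147483647
| where sample_key < 0.1
| fields - sample_key
| fit KMeans k=5 feature_1 feature_2

random() returns an integer between 0 and 2^31-1, so dividing by 2147483647 gives a value roughly uniform in [0,1), and the where clause keeps about 10% of events before they reach fit. Recent MLTK versions also document a dedicated sample command (for example | sample ratio=0.1); check the MLTK docs for your version before relying on it.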

Query to chart 2 different fields over a time period

index=my-index ns=my-namespace app_name=my-api DECISION IN (YES, NO) | chart list(DECISION) BY PRODUCT_ID
For the above query, how could I chart it over 90 days, with the data shown weekly? There are 11 possible values for PRODUCT_ID, so there are three things to consider: PRODUCT_ID (11 values), DECISION (2 values), and a weekly timeline over a 90-day period. How can I chart this in Splunk? I'm a bit confused as to what chart would fit this scenario and how to write the query for it. Appreciate any advice. Thanks.
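A sketch of one way to get a weekly series per PRODUCT_ID/DECISION pair, reusing the same base search; the combined series field is only an illustration and can be named anything:

index=my-index ns=my-namespace app_name=my-api DECISION IN (YES, NO)
| bin _time span=1w
| eval series=PRODUCT_ID."/".DECISION
| chart count OVER _time BY series

Over a 90-day range this produces roughly 13 weekly buckets and up to 22 series (11 PRODUCT_ID values x 2 DECISION values), which renders naturally as a column or line chart with one color per series. If 22 series is too busy, an alternative is timechart span=1w count by PRODUCT_ID with a separate panel per DECISION value.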

In the Splunk DB Connect app, what do the read_count and write_count fields in splunk_app_db_connect_job_metrics.log mean?

Hi, I am trying to troubleshoot why a DB input fails to fetch results sometimes. We have a DB Connect input which runs every hour and fails occasionally. When looking at the Splunk internal logs, I have seen two kinds of messages in the splunk_app_db_connect_job_metrics.log file. What is the meaning of the read_count and write_count fields?
SUCCESS MESSAGE:
2020-01-25 07:39:00.683 -0500 INFO c.s.dbx.server.task.listeners.JobMetricsListener - action=collect_job_metrics connection=Vertica_Prod_LongRun_Conn jdbc jdbc_url=null db_read_time=0 hec_upload_time=3 hec_record_process_time=0 format_hec_success_count=1 hec_upload_bytes=647 status=COMPLETED input_name=DWH.Vertica.UnmatchedImpressionsMonitoring batch_size=1000 error_threshold=N/A is_jmx_monitoring=false start_time=2020-01-25_07:16:00 end_time=2020-01-25_07:39:00 duration=1380682 read_count=19 write_count=19 filtered_count=0 error_count=0
FAILED MESSAGE:
2020-01-25 06:27:45.809 -0500 INFO c.s.dbx.server.task.listeners.JobMetricsListener - action=collect_job_metrics connection=Vertica_Prod_LongRun_Conn jdbc_url=null status=FAILED input_name=DWH.Vertica.UnmatchedImpressionsMonitoring batch_size=1000 error_threshold=N/A is_jmx_monitoring=false start_time=2020-01-25_06:16:00 end_time=2020-01-25_06:27:45 duration=705808 read_count=18 write_count=18 filtered_count=0 error_count=0
I attached a screenshot of the logs from when the query starts until it fails. Please let me know if more details are needed. Any insights into this are much appreciated!
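For comparing the counters across runs, here is a quick search sketch over the internal logs, assuming the job metrics log is indexed into _internal as usual (the source filename is taken from the messages above; the field names are exactly as they appear in the log):

index=_internal source=*splunk_app_db_connect_job_metrics.log action=collect_job_metrics
| table _time input_name status duration read_count write_count filtered_count error_count
| sort - _time

Tabulating the runs this way at least shows whether the FAILED runs stop short (read_count lower than on successful runs) or fail after reading the same number of rows.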

Splunk Azure Marketplace Instance

I have created 5 instances of the Splunk Azure Marketplace standalone image. I am unable to SSH into 4 of the machines; any guidance on this would be helpful. I also observed that when an Azure VM is stopped and started, the Splunk service does not come back up, and hence it is not reachable over HTTPS.

Executing saved search with dynamic search time

Hello, I have to execute one saved search which requires some arguments. These arguments are generated dynamically in one file, so I am looking to read that file and pass those arguments to the saved search.
Search to get the arguments:
index=mlc_live sourcetype=csv | table host_name earliest latest
Output:
host_name earliest latest
RSAT43 1578927600 1579016736
I am looking to execute the saved search for the time range from the above search (the earliest and latest times). My query:
index=mlc_live sourcetype=csv | table host_name earliest latest | map maxsearches=1 search="| savedsearch "TEST_KPI_MTE_ALERT_FUNCTION" host_token=$host_name$ earliest=$earliest$ latest=$latest$"
But with the above query, the saved search produces results for All Time. Is there a way to execute a saved search for a specific duration taken from another file/search query? FYI: my saved search is a complex function which calls multiple other saved searches, so I can't replace it with a normal query.
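A sketch of one variation to try, with the assumptions called out: the inner double quotes around the saved search name are escaped so the map search string stays intact, and the saved search definition itself is edited to reference earliest=$earliest_tok$ latest=$latest_tok$ in its base search. One common explanation for the All Time behaviour is that savedsearch treats extra key=value arguments purely as token substitutions rather than time bounds, so the sketch renames the tokens and lets the saved search apply them itself:

index=mlc_live sourcetype=csv
| table host_name earliest latest
| map maxsearches=1 search="| savedsearch \"TEST_KPI_MTE_ALERT_FUNCTION\" host_token=$host_name$ earliest_tok=$earliest$ latest_tok=$latest$"

earliest_tok and latest_tok are hypothetical token names chosen to avoid colliding with the reserved earliest/latest time modifiers; if the saved search definition cannot be touched at all, this approach will not work as described.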

Use Folder Name/Path as TimeStamp

Greetings. I am building an add-on that uses input data stored in folders with the structure:
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Normal.Classic.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.High.Classic.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Very-High.Classic.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.All-Levels.rank.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Normal.rank.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.High.rank.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Very-High.rank.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.All-Levels.brawl.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Normal.brawl.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.High.brawl.csv
basedir\APP\log\activity\MLBB\TierData\01272020\en\SA\Week.Very-High.brawl.csv
I would like to use the date in the folder path (in this case, 01272020) as the timestamp, ideally at index time. I see this documentation: https://docs.splunk.com/Documentation/Splunk/latest/Data/HowSplunkextractstimestamps and this article: https://answers.splunk.com/answers/94763/set-timestamp-based-on-file-source-path.html. But when I place EVAL-_time=strptime(file_name, "%m%d%Y") in my props.conf, it does not seem to work.
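A minimal index-time sketch, assuming Splunk 7.2 or later (when INGEST_EVAL became available), a hypothetical sourcetype name tierdata:csv, and that the date is the only 8-digit run in the monitored path:

props.conf
[tierdata:csv]
TRANSFORMS-set_time_from_path = set_time_from_path

transforms.conf
[set_time_from_path]
INGEST_EVAL = _time=strptime(replace(source, ".*?(\d{8}).*", "\1"), "%m%d%Y")

This pulls the date segment out of the source field, which always exists at parse time, rather than out of file_name. That is also a likely reason the EVAL-_time attempt did not take effect: EVAL- entries are search-time calculated fields and depend on file_name having been extracted first. As with any index-time setting, it only applies to data indexed after the change, on the instance that actually does the parsing.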

Splunk Audit Linux app: help with configuration

Guys, I would like some help from you. I have little experience with Splunk and many questions. At my job, I need to configure the Linux Audit app. The query that ships with the app shows three errors when the search is run. Can you help me fix this problem?
| tstats count WHERE [|inputlookup auditd_indices] [|inputlookup auditd_sourcetypes] BY _time span=1h
| predict count as prediction future_timespan=0 algorithm=LLP
| rename lower95(prediction) as lower, upper95(prediction) as upper
| eval range=upper-lower
| eval difference=case(count>lower AND countupper, round((count-upper)/range,1))
| search difference=*
| table _time difference
The errors:
Dispatch Command: Unknown error for indexer: brlxp*******. Search Results might be incomplete! If this occurs frequently, please check on the peer.
command="predict", No data
Unknown sid.

Splunk version 8.0 strips out data-toggle attribute in dashboard markup

I have a dashboard with JavaScript tabs using the example from this blog: https://www.splunk.com/en_us/blog/tips-and-tricks/making-a-dashboard-with-tabs-and-searches-that-run-when-clicked.html. It works fine in Splunk 7.2. Testing it in version 8.0, I found that the HTML of the rendered dashboard is missing the data-toggle attribute. Before, the tab link was an anchor element carrying data-toggle="tab"; after, it is the same anchor with the data-toggle attribute gone. It seems like the Splunk engine strips out the data-toggle="tab" attribute. Has anyone seen this? Is this a bug?

Multi-site architecture - Is it possible to set different retention timeframes for the same index at two different sites?

In the multi-site architecture models, it is not clear what retention settings are available to you. Is it possible to set different retention timeframes for the same index at two different sites? For example, for index A I want to retain data for 2 years at site A, but only 180 days at site B. Furthermore, is it possible to set up search head affinity so that local users at site B search the data at site B first (with only 180 days of lookback), and then, if a longer lookback is requested, go back and request the rest from site A?

Whitelist file "missing" in Threathunting app !

I have installed the ThreatHunting app and configured the "threathunting" index as well. When I navigated to the "About this app" tab, I found that one of the 13 whitelist lookup files is missing. When I checked the link below for the lookups, I did not find the missing lookup file. Link I used for the lookups: https://github.com/olafhartong/ThreatHunting/commits/master/files/ThreatHunting.tar.gz. That repository was last updated about 8 months ago, with nothing since, so where can I get the missing empty lookup file? Splunk version: 7.2.6. App version: 1.4.1.

Errors with v2 of Dropbox Business App for Splunk

Hi, I installed v2 of the Dropbox app on our server running Splunk 7.1.3. I get a bunch of Python errors and no data gets added to the index. I can use cURL to pull the events from the CLI, so communication/authentication isn't the issue. We are currently running Python 2.7.5 on this server, and I'm wondering if the app supports Python 2.7.5 or if I need to upgrade to Python 3.7. **Can anyone confirm which version(s) of Python the Dropbox for Business app supports?**
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" ConnectionError: HTTPSConnectionPool(host='api.dropboxapi.com', port=443): Max retries exceeded with url: /2/team_log/get_events (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.dropboxapi.com', port=443): Read timed out. (read timeout=5.0)",))
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" raise ConnectionError(e, request=request)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/requests/adapters.py", line 487, in send
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" r = adapter.send(request, **kwargs)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/requests/sessions.py", line 609, in send
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" resp = self.send(prep, **send_kwargs)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/requests/sessions.py", line 488, in request
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" return self.http_session.request(method, url, **requests_args)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/splunk_aoblib/rest_helper.py", line 43, in send_http_request
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" proxy_uri=self._get_proxy_uri() if use_proxy else None)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/modinput_wrapper/base_modinput.py", line 476, in send_http_request
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" use_proxy=False,
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/input_module_dropbox.py", line 110, in send_http_request
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" response = send_http_request(helper, cursor, access_token, start_time, category)
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/input_module_dropbox.py", line 59, in collect_events
01-27-2020 14:04:53.843 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" input_module.collect_events(self, ew)
01-27-2020 14:04:53.842 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py", line 72, in collect_events
01-27-2020 14:04:53.842 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" self.collect_events(ew)
01-27-2020 14:04:53.842 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" File "/opt/splunk/etc/apps/splunk-app-dropbox/bin/splunk_app_dropbox/modinput_wrapper/base_modinput.py", line 127, in stream_events
01-27-2020 14:04:53.842 -0500 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/splunk-app-dropbox/bin/dropbox.py" Traceback (most recent call last):

How to get different drill down for different values within one dashboard?

I have a dashboard with a few values within three panels. What I am trying to accomplish is that, when I click each value within a panel, I see the drilldown for that value. For now, when I click on different values I get the same drilldown. Thank you in advance! Here is some of the XML: $trellis.value$trueEpicsStatus

JSON is one huge single entry - Is there a way to break it apart in Splunk?

Hello. We just installed the REST API Modular Input app into Splunk in order to capture Dynatrace logs from the Dynatrace SaaS environment. The output format from Dynatrace is JSON, and it works well in the sense that the Dynatrace data is coming right into Splunk. The problem is that I cannot figure out how to split it apart inside Splunk. Each item in Splunk should be based on the Dynatrace "logId" rather than being one huge single entry. As you can see from the start of the JSON output from Dynatrace, there are actually 70 log entries in this JSON; each one should be its own event in Splunk. Does anyone know if there is something I can do via the REST API Modular Input app configuration (the data input configuration) that would tell it to split this JSON by "logId"? If not, is there a way I could tell Splunk to do it? As you can see, every single log entry is listed as one huge line item in Splunk (screenshot: https://answers.splunk.com/storage/attachments/280797-json.png). Here is what the returned JSON looks like directly in Google Chrome: https://answers.splunk.com/storage/attachments/280798-json2.png. Thank you.
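A search-time workaround sketch, assuming the response is a single JSON object whose log records sit in an array; the path name logs{} is a guess and should be replaced with whatever key actually holds the array in the Dynatrace response, and the index/sourcetype are placeholders:

index=your_dynatrace_index sourcetype=your_dynatrace_sourcetype
| spath path=logs{} output=log_entry
| mvexpand log_entry
| spath input=log_entry

spath path=logs{} pulls every element of the array into a multivalue field, mvexpand turns each element into its own result row, and the final spath re-extracts the fields (including logId) from each element. This only changes how the data looks at search time; actually indexing each record as a separate event would need either a custom response handler in the modular input or index-time event-breaking settings, which are beyond this sketch.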