Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

I had to modify splunklib/binding.py to make Splunk_SA_Scientific_Python_darwin_x86_64 import ssl in Anaconda work

Just to get the simplest use out of numpy, I had to change `import ssl` in splunklib/binding.py to `from OpenSSL import SSL` (SSL in capitals). Now I don't know whether there will be a run-time error when/if ssl is actually used. I don't think this is the correct fix. Setup: fresh install of Splunk Enterprise (dev) on macOS, plus Splunk_SA_Scientific_Python_darwin_x86_64.
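A less invasive variant of the same workaround might be a fallback import, so the stdlib module is still used whenever it loads (a hedged sketch, not a proper fix: pyOpenSSL is not API-compatible with the stdlib `ssl` module, so code that uses ssl-specific APIs will still fail if the fallback path is taken):

```python
# Hedged sketch: prefer the stdlib ssl module, fall back only if its
# import fails. Note that pyOpenSSL's SSL module is NOT a drop-in
# replacement for stdlib ssl; this only defers the problem.
try:
    import ssl  # what splunklib/binding.py expects
except ImportError:
    from OpenSSL import SSL as ssl  # fallback with a different API

# The stdlib module exposes SSLContext; pyOpenSSL's SSL module does not,
# so this check reveals which one was actually imported.
print(hasattr(ssl, "SSLContext"))
```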

How to configure Splunk Enterprise in a distributed environment

We need to install Splunk Enterprise on one Windows machine (the server), which reads all the log files generated in a directory on that machine. Many other Windows users (clients), each with a different Splunk account, should be able to access and analyze those logs from their own machines and create their own dashboards accordingly. The clients' machines don't have Splunk Enterprise. How can we do that? What is the process by which the server provides access for many users, so that each user can reach the logs on the server machine? That's my question. I have searched a lot but haven't been able to find a relevant answer; can you please help me with this and point me in the right direction? Thanks,

How to extract the one-time header on top of the real header

Hi, I'm new to Splunk and would like some help tackling my task at hand:

```
NO INDEX DATE STIME ETIME REP ACTIVITY RESULT ID TYPE PLACE
17892 4/10/2015 14:13:48 14:14:03 15 CYCLE_REP GOOD NONE ONE_TIME T
Date,Time,Model ID,SEATPAD ID,OffsetA,OffsetB,SEATPAD Type,Result,Job,
4/10/2015,12:14:06,KC10,1,0.2,-1,101,FAILED,C:\ONE_TIME\Type\NO A.mdb,
4/10/2015,12:14:06,KC2,2,0.3,-0.3,102,GOOD,C:\ONE_TIME\Type\NO A.mdb,
4/10/2015,12:14:06,KC2,3,-0.5,-0.02,103,GOOD,C:\ONE_TIME\Type\NO A.mdb,
4/10/2015,12:14:06,KC90,4,-0.5,-1,104,FAILED,C:\ONE_TIME\Type\NO A.mdb,
4/10/2015,12:14:06,KC90,5,-0.03,-2,105,FAILED,C:\ONE_TIME\Type\NO A.mdb,
4/10/2015,12:14:06,KC10,6,-0.04,-0.6,106,FAILED,C:\ONE_TIME\Type\NO A.mdb,
```

How do I index the one-time header on top of the real header, as in the sample above? When the CSV file is added to Splunk, only the header that starts at "Date,Time,Model ID,...,Job," is indexed and its fields can be extracted. The header on top of that, and the information that comes with it, is ignored. Any help is welcome. I have tried changing props.conf, which then indexed from the "NO INDEX" line, but then I cannot extract the fields properly, since the other information doesn't use the same header.
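For reference, one way to attack a file like this is structured-data parsing in props.conf, skipping the one-time preamble and anchoring on the real header line. A hedged sketch (the sourcetype name and both regexes are assumptions to tune against the actual file; note the preamble rows themselves are then dropped rather than indexed, so if their values are needed they would have to be ingested separately):

```ini
# Hypothetical sourcetype name; adjust regexes to the real preamble.
[seatpad_csv]
INDEXED_EXTRACTIONS = csv
# Skip the one-time header block and its value row before extraction.
PREAMBLE_REGEX = ^(NO\s+INDEX|\d+\s+\d+/\d+/\d+\s)
# Identify the real field-header line by its leading field names.
FIELD_HEADER_REGEX = ^(Date\s*,\s*Time\s*,.*)
TIMESTAMP_FIELDS = Date,Time
```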

Splunk not indexing milliseconds

Hi all, I configured an input in which the timestamp field is in the format 20180830112930314 (%Y%m%d%H%M%S%3N). The same has been configured in props.conf on the Splunk indexers, but I am still seeing the event time as 2018/08/30 11:29:30.000. That is, Splunk shows 000 as the milliseconds even when the timestamp field has milliseconds other than 000. Could you please help me find the issue? Thanks in advance.
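For comparison, a minimal props.conf sketch for that timestamp layout (the stanza name is a placeholder). Timestamp settings only take effect at the first full parsing stage, so they need to sit on the indexers or on a heavy forwarder in front of them, and they only affect newly indexed data, not events already on disk:

```ini
[my_sourcetype]
# 20180830112930314 -> 2018-08-30 11:29:30.314
TIME_FORMAT = %Y%m%d%H%M%S%3N
# The timestamp is 17 digits; the lookahead must cover all of it,
# or the trailing %3N digits are silently ignored.
MAX_TIMESTAMP_LOOKAHEAD = 17
# If the timestamp is not at the start of the event, TIME_PREFIX must
# anchor just before it (this regex is an assumption):
# TIME_PREFIX = ^[^,]*,
```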

Stream metrics to Azure Event Hub to be pulled by HF

I'm using an HF to pull log/metric data from Azure Event Hub. I know how to stream activity logs and diagnostic logs to Azure Event Hub, but I don't understand how I can stream metrics to Azure Event Hub; currently I'm configuring metrics through Azure Monitor. I found https://github.com/Microsoft/AzureMonitorAddonForSplunk/wiki/Configuration-of-Azure but it's still not clear. Can someone clarify?

Common search query for a multi-site dashboard with the same metrics

I have a server in each of 30 sites, and every site has the same dashboard with the same metrics; only the host differs (that's not a problem, as it is passed from an input). If any change is needed, it's painful to update all the dashboards. Is there any way to make this common, so that a single update is enough? Please help with any options available.
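One common approach is a single shared dashboard with a host (or site) input token, so there is only one copy to maintain. A hedged Simple XML sketch (the index, field, and token names here are assumptions):

```xml
<form>
  <label>Site metrics (single shared dashboard)</label>
  <fieldset submitButton="false">
    <!-- One dropdown populated with the 30 site hosts. -->
    <input type="dropdown" token="site_host">
      <label>Site host</label>
      <search>
        <query>| tstats count where index=site_metrics by host | fields host</query>
      </search>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <!-- Every panel references $site_host$, so editing this one
             dashboard updates the view for all sites at once. -->
        <search>
          <query>index=site_metrics host=$site_host$ | stats avg(response_time) by service</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```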

No Data input following 7.1.2 upgrade on 2008 server

Hello, I have upgraded my Splunk Enterprise from 6.5.1 to 7.1.2 on Windows 2008 R2 (see https://answers.splunk.com/answers/672130/splunk-win2008r2-upgrade-65-to-71.html for my last thread). I enabled TLS 1.2 support on 2008 R2 with regedit, but I didn't modify anything else, as I hadn't customized alert_actions.conf or ldap.conf in my configuration. The upgrade went well, but after that it seems my local data inputs aren't working anymore. Several machines send logs via FTP to the Splunk server, and I'm monitoring the folders where the log files are pushed. It's probably not the best setup, but it has worked for the last 2 years. Files are indeed pushed to those folders, but they are no longer processed by Splunk; I do not see them in the sources of my Data Summary. As stated in the documentation, the Windows universal forwarder installation package no longer includes the Splunk Add-on for Windows. To be honest, I'm not sure whether this is related, but I tried to install the latest universal forwarder and wasn't able to: the error message is the default one from Windows ("an error has occurred, setup has ended prematurely, your system was not updated"). Can you help me understand why my local file monitoring / data inputs aren't working anymore? Thank you in advance for your help. Best regards, Quentin

How to color one cell in a table based on the value of another cell in XML?

I have two fields: Value and Status. Value contains the actual numeric value and Status contains the state (Green, Amber, Red) in textual format. I need to change the color of the Value field based on the text of the Status field. So if Value is 2 and Status is Green, the Value cell should be colored green. I cannot use the direct color option via the edit (paintbrush) menu, since the Status field is what determines the state. Please help. Thanks.
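One possible workaround without custom JavaScript (a sketch using Simple XML's `<format type="color">` with an expression palette; the base search here is a placeholder): the expression palette only sees the cell's own value, so the Status text is folded into the Value cell first with eval, then matched by the color expression.

```xml
<table>
  <search>
    <!-- Hypothetical base search; the eval appends Status to Value so
         the color expression below can see it. -->
    <query>index=main sourcetype=status_feed | eval Value = Value." (".Status.")" | table Value</query>
  </search>
  <format type="color" field="Value">
    <colorPalette type="expression">case(match(value,"Green"), "#65A637", match(value,"Amber"), "#F8BE34", match(value,"Red"), "#D93F3C")</colorPalette>
  </format>
</table>
```

The trade-off is that the Status text becomes visible inside the Value cell; hiding it cleanly while still coloring by it generally requires a custom JavaScript cell renderer instead.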

How to search by a unicode value?

Hi, I have the following example record:

```
30/08/2018 13:30:27.996;VM1;ASH;AccessModule;processPacketBuffer;MSISDN;xxxxxxxxxxxx;;INFO;;;Return Access ; "msisdn":"xxxxxxxxx","Type":"\u0006","APN":"aaa","imsi":"xxxxxxxx","imei":"xxxxxxxxx","SGSN":null,"Remote IP Address":"xx.xx.xx.xx","TotalTimeInMS":0}
```

I cannot search by Type, because it is a unicode value and Splunk does not parse it correctly. There are 2 possible Type values: "\u0006" and "\u0003". I am using the following search: mysearch | spath input=anyparams | search Type="\u0006" The problem is that I receive no results. How should I write the search when the field contains a unicode value? Thanks in advance, Yossi
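Two things worth trying (sketches, not verified against this data). First, search for the escaped sequence in the raw text, where the backslash itself has to be escaped inside SPL quotes:

```
mysearch "\\u0006"
| spath input=anyparams
```

Second, note that after spath decodes the JSON escape, the field holds the actual control character (byte 0x06), not the six-character string `\u0006`; comparing against the decoded character, for example with `| where Type=urldecode("%06")`, is another avenue (the urldecode usage is an assumption, not a verified recipe).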

Tenable Python Error

Hi, I've installed splunk-add-on-for-tenable, both 5.1.2 and 5.1.4, but neither works. All I see in ta_nessus.log is:

```
2018-08-30 13:20:31,950 INFO pid=6088 tid=MainThread file=nessus.py:main:260 | Start nessus TA
2018-08-30 13:20:32,039 INFO pid=6088 tid=MainThread file=nessus_config.py:get_nessus_conf:71 | Try to get encrypted proxy username & password
2018-08-30 13:20:32,040 INFO pid=6088 tid=MainThread file=nessus.py:get_nessus_modinput_configs:142 | Set loglevel to ERROR
2018-08-30 13:20:34,036 ERROR pid=6088 tid=MainThread file=nessus_data_collector.py:_collect_scan_data:300 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus_data_collector.py", line 294, in _collect_scan_data
    page_size)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus_data_collector.py", line 245, in _collect_scan_data_of_one_scan
    scan_info)
  File "/opt/splunk/etc/apps/Splunk_TA_nessus/bin/nessus_data_collector.py", line 217, in _collect_one_host_scan_info
    if port_items[2]:
IndexError: list index out of range
```

The same traceback is then logged again at 13:20:35,851 and 13:20:37,080. Any ideas? T.I.A.

Override sourcetype and redirect to another index

Hi guys, I want to override the sourcetype for all events before they are indexed, and redirect some of those events (those with ERROR) to another index with the overridden sourcetype. So I need the events spread between two indexes, test1 and test2 (the ERROR events), and all of them should have the same access_combined sourcetype. I use the oneshot command to ingest data from a file:

```
splunk add oneshot C://opt/log.txt -index test1 -sourcetype test_sourcetype
```

My **props.conf** now looks like this:

```ini
[host::myhost]
LINE_BREAKER = \d+(&)
SHOULD_LINEMERGE = false
TRANSFORMS = custom_sourcetype
TRANSFORMS = route_notfound
```

LINE_BREAKER is there because it's a one-line log, so I need to break it into events, and that works fine. And my **transforms.conf**:

```ini
[custom_sourcetype]
SOURCE_KEY = _raw
REGEX = .*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::access_combined

[route_notfound]
REGEX = ERROR
DEST_KEY = _MetaData:Index
FORMAT = another_index
```

If I use those transforms separately they work fine (I switch them off by commenting them out with # in props.conf), but they do not work together. How can I do those two things in one step, before the data is indexed?
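One likely cause (a hedged guess from the config shown): in props.conf the key is `TRANSFORMS-<class>`, and two lines with the identical key name simply overwrite each other, so only the last one takes effect. A sketch keeping both transforms active in one stanza:

```ini
[host::myhost]
LINE_BREAKER = \d+(&)
SHOULD_LINEMERGE = false
# Either list both transforms under one class (they run in order)...
TRANSFORMS-set_meta = custom_sourcetype, route_notfound
# ...or give each its own class name:
# TRANSFORMS-sourcetype = custom_sourcetype
# TRANSFORMS-route = route_notfound
```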

Splunk Add-on for CyberArk XSL file is faulty?

Can someone explain to me the idea behind some of the choices made in the XSL file that is bundled with the Splunk TA for CyberArk? It places the CyberArk "Reason" field in both the cn2 part of the CEF message and the msg part, even though cn2 is actually labeled "Ticket ID". Also: in the msg part, the Reason value is passed through a replacer that escapes any '=' signs, to prevent issues with Splunk's field extractions, while in the cn2 part the Reason field is dumped without escaping. So if the Reason field from CyberArk contains key-value pairs, this completely breaks the field extractions. Why duplicate data, and moreover, why do it in an inconsistent way? Relevant snippet from the XSL: cn2Label="Ticket Id" cn2="" msg="Failure: " Also, regarding the final bit with the severity choice: does this print the text "Failure:" after the content of the msg field? What is the point of that? Shouldn't it be printed at the start of the msg field? The original arcsight.sample.xsl bundled with CyberArk (which probably was the inspiration for the file bundled with the Splunk TA) does not use the cn2 field, and populates the msg field in a more sensible way: "Reason, ExtraDetails, Failure: Message" (with "Failure" printed only based on severity). msg=, , Failure:

How to add a training-test split line to my forecast chart

Hello Splunkers, I created my forecast chart in the Splunk Machine Learning Toolkit and I want to add a training-test split line, as seen in the "Forecast Time Series" showcases. In addition, I would like the training-test split line to be dynamic, as the forecast chart lives in a dashboard with inputs for the prediction algorithm, future timespan, and holdback; the split line should move depending on the holdback value the user chooses. Could you please advise me on how a dynamic training-test split line can be created? Thank you in advance! Afroditi. Here is the search that creates the forecast chart:

```
index=main sourcetype=Mssql:Memory
| eval hostname=case(host="CZCHOWV227", "MSSQL_CZCHOWV227", host="CZCHOWV227.PRG-DC.DHL.COM", "MSSQL_CZCHOWV227")
| where hostname="MSSQL_CZCHOWV227"
| eval _time=_time-21600
| rename second as ple
| where ple<=1000
| timechart span=5min avg(ple) as ple
| predict ple as ple_prediction algorithm="LLP" holdback=100 future_timespan=120 upper95=high lower95=low
| `forecastviz(120, 100, "ple", 95)`
| appendcols
    [ search sourcetype=Mssql:Memory
      | eval hostname=case(host="CZCHOWV227", "MSSQL_CZCHOWV227", host="CZCHOWV227.PRG-DC.DHL.COM", "MSSQL_CZCHOWV227")
      | where hostname="MSSQL_CZCHOWV227"
      | eval ple_threshold=300
      | table _time ple ple_prediction ple_threshold ]
```

The forecast chart created is: ![alt text][1] [1]: /storage/temp/254820-ple.jpg
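One possible route for a dynamic vertical line (a sketch only, assuming Splunk 7.0+ chart annotations and a dashboard token named `$holdback$`; the 300 comes from the 5-minute bucket span in seconds): add an annotation search to the chart that emits a single marker event, holdback buckets before the end of the data.

```xml
<chart>
  <!-- Main search: simplified stand-in for the forecast search above,
       with holdback driven by the dashboard token. -->
  <search>
    <query>index=main sourcetype=Mssql:Memory | timechart span=5min avg(second) as ple
| predict ple as ple_prediction algorithm="LLP" holdback=$holdback$ future_timespan=120</query>
  </search>
  <!-- Annotation search: one event at the training/test boundary.
       It re-runs whenever $holdback$ changes, so the line moves. -->
  <search type="annotation">
    <query>| makeresults
| eval _time = now() - ($holdback$ * 300),
       annotation_label = "training/test split",
       annotation_color = "#f58220"</query>
  </search>
</chart>
```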

Help formatting a table - highest CPU users per hour over a day

G'Day. I've got some data I'm pulling out of some events with a search: HOUR (two-digit hour of the day), PROCESS (name of a running process), and CPU_USAGE (the CPU the process used during that hour). What I want is a table with the hour in the first column, then the 10 processes with the highest CPU usage within that hour: not the most frequent process (which is what `top` seems to give me), but the ones with the highest CPU usage. So the finished table is 240 rows, 10 per hour. I can get the top 10 in the first hour, and I can get the 10 highest users overall, but I can't seem to get the highest 10 users within each hour. Something like:

```
00 ProcessA 75%
00 ProcessB 60%
...
00 ProcessG 10%
01 ProcessC 90%
01 ProcessA 45%
01 ProcessG 40%
...
01 ProcessF 3%
02 ProcessB 80%
...
```

Any hints would be appreciated. The second part is creating a chart to show the same...
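A common SPL pattern for "top N within each group" (a sketch using the field names above; the base search is a placeholder): sort by group and descending value, rank within each group with streamstats, then filter on the rank.

```
<your base search producing HOUR, PROCESS, CPU_USAGE>
| sort 0 HOUR -CPU_USAGE
| streamstats count as rank by HOUR
| where rank <= 10
| fields HOUR PROCESS CPU_USAGE
```

For the chart, one option on top of the ranked results is `| chart max(CPU_USAGE) over HOUR by PROCESS`, since the rank filter has already reduced each hour to its 10 heaviest processes.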

Unable to complete Splunk 7.1.2 installation on macOS 10.13: Splunk's Little Helper window never appeared. Kindly help me out.

I followed the procedure described in the third module of the Splunk Fundamentals 1 course to install Splunk on macOS 10.13. All the steps completed, and the Splunk shortcut icon was created on the desktop too, but I never got the terminal window ("Splunk's Little Helper") that asks for a password.

Displaying results of same search over period of time

I have the following search and am looking to display its results over the past 30 days. It currently shows results, but only the current day is accurate. Any advice would be much appreciated:

```
index=data NOT ID="*" earliest=-30d@d latest=now
| regex name!="[a-z].*"
| dedup id2
| timechart span=1d count
```
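If the intent is "distinct id2 values per day", note that `dedup id2` keeps only the first occurrence of each id2 across the whole 30 days (search order is newest-first), which is consistent with only the most recent day looking accurate. A sketch that counts distinct values within each day instead:

```
index=data NOT ID="*" earliest=-30d@d latest=now
| regex name!="[a-z].*"
| timechart span=1d dc(id2) as count
```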

Math across two searches

I have two searches that use the same index, and each returns a numerical total; they differ only in the time period of the data they look at. How would I perform math on the search results, for example adding them or calculating percentages?
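One common shape for this (a sketch; the index, time ranges, and field names are placeholders): run one search, append the other's total as a column with appendcols, then do the arithmetic with eval.

```
index=myindex earliest=-7d@d latest=@d
| stats count as recent
| appendcols
    [ search index=myindex earliest=-14d@d latest=-7d@d
      | stats count as previous ]
| eval total = recent + previous,
       pct_change = round(100 * (recent - previous) / previous, 2)
```

Since both `stats count` results are single-row, appendcols lines them up into one row where eval can see both fields.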

SNMP: correcting date/time output and rogue AP MAC address

Hello, I just configured an SNMP trap handler on an RHEL box to send to Splunk, and I am getting the following output:

```
Agent Hostname: (hostname) \N
Date: 5 - 8 - 8 - 9 - 6 - 4461316
CISCO-LWAPP-AP-MIB::cLApRogueApMacAddress.0 = STRING: 0:d:67:83:2a:f2
```

Is there a way to correct the date format to show a proper time? I want to make rogue-AP detections actionable. It also seems that the format drops the leading hex digit of each MAC address octet in cLApRogueApMacAddress (note the .0 prior to the STRING value). I am using the following format options:

```
format2 %V\n% Agent Address: %A \n Agent Hostname: %B \n Date: %H - %J - %K - %L - %M - %Y \n Enterprise OID: %N \n Trap Type: %W \n Trap Sub-Type: %q \n Community/Infosec Context: %P \n Uptime: %T \n Description: %W \n PDU Attribute/Value Pair Array:\n%v \n -------------- \n
```

Timechart trend over the same interval as the search range

Hi! I have a scenario where we have used `| stats count` and gotten the total number for the range we picked. This has been working fine, but now we'd like to use timechart to get trends. However, when using timechart, the number shown becomes the latest "bucket" instead of the total. Example: searching with a time range of 60 minutes gives me the value for the last minute. I've been fiddling around with some suggestions but haven't found a reliable solution. This last one:

```
| timechart
    [ search index=_internal | head 1 | addinfo
      | eval timerange = info_max_time - info_min_time
      | eval span = if(round(timerange/3600) == infinity, 1, round(timerange/3600)) . "h"
      | return span ]
    count
| appendpipe [ stats count | where count=0 ]
```

generates errors like: "Error in timechart command. The value for option span (infinityh) is invalid." Any ideas what I'm doing wrong? /Patrik
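Not a verified fix, but the `infinityh` error suggests `info_max_time` is coming back as `+Infinity` (as it does for an all-time range), so the subtraction never yields a usable number and the `== infinity` comparison does not catch it. A sketch that clamps the computed span instead:

```
| timechart
    [ search index=_internal | head 1 | addinfo
      | eval timerange = info_max_time - info_min_time
      | eval span = if(isnum(timerange) AND timerange > 0,
                       max(round(timerange / 3600), 1), 1) . "h"
      | return span ]
    count
```

The `isnum()` guard plus the `max(..., 1)` floor means the span falls back to `1h` whenever the time range is unbounded or shorter than an hour, rather than producing an invalid option value.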

In "host_regex = /export/data/syslog-ng/(.*?)/messages" , what does the "/(.*?)/" mean?

In our Splunk forwarder, under /opt/splunk/etc/apps/app01/default, we have many stanzas such as:

```ini
[monitor:///export/data/syslog-ng/sentry*/messages]
disabled = false
host_regex = /export/data/syslog-ng/(.*?)/messages
index = asalg
sourcetype = cisco_asa
```

and under every stanza there is the following line: host_regex = /export/data/syslog-ng/(.*?)/messages. I am very curious to know what the "/(.*?)/" means. Thank you.
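For context on how that setting behaves (per inputs.conf semantics, host_regex is matched against each monitored file's path and the first capturing group becomes the event's host), an annotated sketch of the same stanza:

```ini
[monitor:///export/data/syslog-ng/sentry*/messages]
# host_regex is matched against the full path of each monitored file;
# the FIRST capturing group becomes the host field of the events.
# "(.*?)" is a non-greedy "match anything" group: the shortest string
# between the two literal "/" path segments, i.e. the device directory
# name that sits between syslog-ng/ and /messages.
# Example: /export/data/syslog-ng/sentry01/messages  ->  host=sentry01
host_regex = /export/data/syslog-ng/(.*?)/messages
index = asalg
sourcetype = cisco_asa
```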