Questions in topic: "splunk-enterprise"

Non-admin view of license pools by index

Hello all, what capabilities do I need to add to a non-admin role so it can view the License Usage Report and split the pool by index, etc.? Currently the role inherits from user, and I have added license_edit, license_tab, and license_view_warnings. Is there anything else? We are running 7.1.2. Thanks, Ed
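
If it helps, the License Usage Report is driven by license_usage.log in the _internal index, so besides the license_* capabilities the role also needs search access to _internal, or the report panels come up empty. A minimal authorize.conf sketch, assuming a custom role named license_viewer (the role name and settings are illustrative, not a tested recipe):

    # etc/system/local/authorize.conf (or an app's local directory)
    [role_license_viewer]
    importRoles = user
    # capabilities the poster already added
    license_tab = enabled
    license_edit = enabled
    license_view_warnings = enabled
    # the report searches _internal, so the role must be allowed to search it
    srchIndexesAllowed = _internal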

How to set an alert's search mode to Verbose mode

Hi Splunkers. I created an alert on a cron schedule that sends an email when the result count reaches a threshold. When I check the search via Alerts > Open in Search, it defaults to Verbose mode and returns the values I want. But when the same search runs on the cron schedule, those values are missing from the results. Following the "View results in Splunk" link in the email opens the search in Fast mode, and only after manually switching to Verbose mode do I see the values I want. How can I make a cron-scheduled alert run its search in Verbose mode?
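
As far as I know, scheduled searches do not have a UI search mode; they run with field discovery disabled, effectively like Fast mode, so only fields the SPL references explicitly get extracted. The usual fix is to name the fields you need in the search itself rather than rely on Verbose mode. A minimal sketch, where the index, sourcetype, and field names are placeholders:

    index=your_index sourcetype=your_sourcetype
    | fields status response_time
    | stats count by status

The explicit fields command forces extraction of those fields even when field discovery is off, so the scheduled run and the Verbose ad-hoc run should then return the same values.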

Splunk migrate and upgrade from 7.0.5 to 7.1.2 with a different Splunk home directory

I manually migrated a previous instance to a new box, and the migration was successful. The migration steps were as follows:

1. Copied all indexes and all configs under $SPLUNK_HOME to the new box.
2. Installed the 7.0.5 Splunk software with rpm -i.
3. Changed our Splunk home directory from /opt/splunk to /splunkhome/splunk.
4. Updated etc/splunk-launch.conf to point at the new home directory, after which Splunk came up fine.

But when I tried to upgrade with rpm -U splunk-7.1.2-software, I found that Splunk did a fresh install under /opt and did not upgrade anything under /splunkhome. Is this expected? Is there a way to fix it? Thanks in advance.
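
This is expected: an RPM installs to the prefix baked into the package (for Splunk, under /opt), regardless of where you later moved the files. Splunk's install docs describe the RPM as relocatable via --prefix, so a hedged sketch of two options, using the poster's package name as a placeholder:

    # A sketch, not verified against your exact package: stop Splunk first
    /splunkhome/splunk/bin/splunk stop

    # Option 1: pass the new prefix on upgrade. Check `rpm -qi splunk` for the
    # package's relocation prefix; here we assume it installs <prefix>/splunk,
    # so the prefix is /splunkhome.
    rpm -U --prefix=/splunkhome splunk-7.1.2-software.rpm

    # Option 2: upgrade from the tarball instead, untarring over the existing
    # install (the .tgz has no baked-in prefix)
    tar -xzf splunk-7.1.2-software.tgz -C /splunkhome

    /splunkhome/splunk/bin/splunk start --accept-license

You would also want to remove the stray copy the earlier rpm -U created under /opt before it causes confusion.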

CERTIFICATION EXAM

How do I find the Splunk ID for my Splunk account, linked here: https://splunkcommunities.force.com/customers/apex/CP_ProfilePage Is that my screen name?

CERTIFICATION EXAM

Hi all! From now on, will all of Splunk's exams go through Pearson VUE, and will we have to pay?

How to upgrade the requests module shipped with Splunk Enterprise

I am using the Python requests module in my Splunk app's Python scripts. With a third-party SSL certificate I get a [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:676) error, because Splunk Enterprise ships an older version of the requests module. Is there any way to update the Splunk-shipped requests module from version 2.3.0 to something greater than 2.9? Note: Splunk Enterprise ships requests 2.3.0.
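
Replacing the copy under $SPLUNK_HOME/lib is generally discouraged, since a Splunk upgrade will overwrite it and other apps depend on it. The usual pattern is to vendor a newer requests inside your own app and put it first on sys.path. A sketch, assuming a hypothetical app named myapp whose newer requests was installed with pip install requests -t $SPLUNK_HOME/etc/apps/myapp/lib:

    # bin/my_script.py in the app -- a sketch; "myapp" and the lib/ layout are assumptions
    import os
    import sys

    # Put the app-local lib directory ahead of Splunk's bundled site-packages
    APP_LIB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "lib")
    sys.path.insert(0, APP_LIB)

    import requests  # now resolves to the vendored copy, not Splunk's 2.3.0

    resp = requests.get("https://third-party.example.com/api", verify=True)
    print(resp.status_code)

A newer requests also pulls in newer urllib3/certifi behavior, which is often what actually resolves the certificate-verification failure.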

Can we receive TCP data on Port 80 from Panorama?

I want my Splunk heavy forwarder to receive TCP data on port 80 from Panorama. I have installed the Palo Alto Networks Add-on for Splunk on that heavy forwarder. Do I need to make any specific configurations in the add-on? I am not interested in using WildFire, Aperture, etc.; I only want the firewall data in my Splunk indexer. The firewalls (6 in total) are already configured to store their data in Panorama. I have created a TCP data input on my heavy forwarder for this, and I have asked the security team to create an HTTP(S) server profile on Panorama pointing at Splunk. Do I need to follow any more steps? Any ideas or suggestions? @btorresgil, @adonio, @panguy
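
One thing worth checking: the add-on's parsing only kicks in if the input tags the data with the sourcetype the add-on expects, and binding to port 80 requires root on most systems (Panorama normally forwards logs as syslog, where 514 or 5514 are more typical). A sketch of an inputs.conf stanza on the heavy forwarder, assuming the pan_logs index already exists and the add-on's pan:log sourcetype:

    # inputs.conf on the heavy forwarder -- a sketch, not add-on documentation
    [tcp://80]
    sourcetype = pan:log
    index = pan_logs
    connection_host = ip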

Compare the average value for a particular Application_Name today with the same time last week

    index=abc earliest=-70m@m latest=@m
    | stats avg(AVERAGE_RESPONSE_TIME) as Today by Time Application_Name
    | eval Today=round(Today,2)
    | appendcols [ search index=abc earliest=-7d@m-70m latest=-7d@m
        | stats avg(AVERAGE_RESPONSE_TIME) as LastWeek by Time Application_Name
        | eval LastWeek=round(LastWeek,2)
        | eval _time=relative_time(now(),"-7d") ]
    | lookup RESP_LOOKUP_App Application_Name as Application_Name OUTPUTNEW RESP_DEVIATION_THRESHOLD
    | eval AVG_RESPONSE_Deviation=(Today/LastWeek)*100
    | table Time Application_Name Today LastWeek AVG_RESPONSE_Deviation RESP_DEVIATION_THRESHOLD
    | where AVG_RESPONSE_Deviation>RESP_DEVIATION_THRESHOLD

My aim is to compare the average value for a particular Application_Name today against last week at the same time of day. For example, for a particular time "t" and Application_Name "x" today, I calculate the average of the AVERAGE_RESPONSE_TIME field from the logs. But when I look up the average of AVERAGE_RESPONSE_TIME for that application name "x" at the same time "t" last week, the query does not show the correct value. I suspect it is mixing in other Application_Name values and showing their averages for that time. Please help me modify the query.
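
The likely culprit is appendcols: it glues columns together purely by row position, not by Time or Application_Name, so the two stats tables misalign whenever the two time ranges produce different row sets. One common rewrite (a sketch reusing the poster's field names) is to append the second search as rows and re-aggregate by the key fields, so each Today/LastWeek pair lines up:

    index=abc earliest=-70m@m latest=@m
    | stats avg(AVERAGE_RESPONSE_TIME) as Today by Time Application_Name
    | append [ search index=abc earliest=-7d@m-70m latest=-7d@m
        | stats avg(AVERAGE_RESPONSE_TIME) as LastWeek by Time Application_Name ]
    | stats values(Today) as Today values(LastWeek) as LastWeek by Time Application_Name
    | eval Today=round(Today,2), LastWeek=round(LastWeek,2)
    | lookup RESP_LOOKUP_App Application_Name OUTPUTNEW RESP_DEVIATION_THRESHOLD
    | eval AVG_RESPONSE_Deviation=round((Today/LastWeek)*100,2)
    | where AVG_RESPONSE_Deviation>RESP_DEVIATION_THRESHOLD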

Azure monitoring add-on: Insufficient privileges to complete the operation

Hi, I would like to ask why we get this error when running azure-setup.ps1. I renamed the file to testinn.ps1 but it still shows the error. (Screenshot: /storage/temp/254872-capture.png)

List of highest time differences between log entries

I have a use case where I need to find which process took the most time during execution. I don't have sufficient logs to track every service. Note: my service runs in a single thread. Is there a way to list the largest time differences between consecutive log statements? Example:

    9/6/18 2:33:02.282 AM -----
    9/6/18 2:34:02.282 AM ------
    9/6/18 2:37:02.282 AM ------
    9/6/18 2:45:02.282 AM ------

In the log lines above, the largest gap is before the last entry (approximately 8 minutes). Is there a query or feature to list the biggest time consumers like this?
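
Since the service is single-threaded, the gap between consecutive events approximates how long the preceding step took, and streamstats can carry the previous event's timestamp forward. A sketch with placeholder index and sourcetype names:

    index=your_index sourcetype=your_service_logs
    | sort 0 _time
    | streamstats current=f last(_time) as prev_time
    | eval gap_seconds=_time-prev_time
    | sort 0 - gap_seconds
    | head 10
    | eval gap=tostring(gap_seconds, "duration")
    | table prev_time _time gap

The first sort puts events oldest-first, streamstats with current=f supplies each event's predecessor, and the final sort/head surfaces the ten largest gaps.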

Strange JSON parsing error

Hi community, I have a strange issue when I try to parse a JSON file. It is a basic file with about 100 lines like this:

    {"id": "0APIClUQ6m77NUk9PVA", "name": "Applications"}
    {"id": "0ALB842-DsLmBUk9PVA", "name": "Automatisation Support"}

When I try to index it with my props:

    [_json]
    SHOULD_LINEMERGE=true
    NO_BINARY_CHECK=true
    CHARSET=UTF-8
    INDEXED_EXTRACTIONS=json
    KV_MODE=none
    category=Structured
    description=JavaScript Object Notation format. For more information, visit http://json.org/
    disabled=false
    pulldown_type=true

I get this error message many times:

    09-07-2018 08:46:09.039 +0200 ERROR JsonLineBreaker - JSON StreamId:18348078207081945231 had parsing error:Unexpected character while expecting '"': '\\' - data_source="C:\Scripts\Gsuite_Report\teamdrive_settings\2018-09-07_teamdrive_report.json", data_host="DSCRIPT02", data_sourcetype="_json"
    09-07-2018 08:46:09.039 +0200 ERROR JsonLineBreaker - JSON StreamId:18348078207081945231 had parsing error:Unexpected character while looking for value: '\\' - data_source="C:\Scripts\Gsuite_Report\teamdrive_settings\2018-09-07_teamdrive_report.json", data_host="DSCRIPT02", data_sourcetype="_json"

Can you help me?
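
Two things worth checking, hedged as guesses rather than a diagnosis: for line-delimited JSON like this, SHOULD_LINEMERGE=true can glue lines together before the JSON line breaker sees them, and the repeated '\\' complaints often mean the file is not plain UTF-8 (for example UTF-16 with a BOM, which Windows scripts commonly emit). A variant of the stanza worth trying:

    [_json]
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    CHARSET = AUTO

CHARSET = AUTO lets Splunk detect the encoding instead of assuming UTF-8; if the file really is UTF-16, re-emitting it as UTF-8 from the generating script is the cleaner fix.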

When using the "map" command, string arguments passed to "map" do not work

Splunk version: 7.1.2. When I use the `map` command, if the argument passed to `map` is a plain string, no results are displayed. But if the argument is an integer, or a string that contains a space, it works. The searches below are examples. (They are contrived samples, so please ignore that they are odd searches.)

Not working case:

    | makeresults count=3
    | eval field1="test"
    | table field1
    | map search="| stats count | fields - count | eval map_field1=$field1$ | table map_field1"

Working case 1:

    | makeresults count=3
    | eval field1=111
    | table field1
    | map search="| stats count | fields - count | eval map_field1=$field1$ | table map_field1"

Working case 2:

    | makeresults count=3
    | eval field1="this is test"
    | table field1
    | map search="| stats count | fields - count | eval map_field1=$field1$ | table map_field1"

Is this by design, or an issue? If it is by design, I apologize; please let me know.
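
A likely explanation, offered as a guess: map substitutes the raw value of $field1$ into the search string, so the not-working case expands to eval map_field1=test, which eval parses as a reference to a nonexistent field named test (whereas 111 is a numeric literal). A commonly suggested workaround is to wrap the token in escaped quotes so eval receives a string literal:

    | makeresults count=3
    | eval field1="test"
    | table field1
    | map search="| stats count | fields - count | eval map_field1=\"$field1$\" | table map_field1"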

SPL-154876, SPL-152598: The "srtemp" directory can grow to hundreds of GB - workaround before version 7.1.2

Hello guys, could you please share information about a workaround for issues SPL-154876 and SPL-152598 (fixed in 7.1.2)? If upgrading to the latest version is not possible, what can be done on earlier versions? Regards, Desislava
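
I don't know the officially recommended workaround, but srtemp lives under $SPLUNK_HOME/var/run/splunk/srtemp and holds temporary search result files, so one stopgap (an assumption on my part, not an official fix) is a cron job that prunes stale entries:

    # Hypothetical stopgap, NOT an official workaround; assumes SPLUNK_HOME=/opt/splunk.
    # Hourly, remove srtemp subdirectories untouched for more than 24 hours.
    0 * * * * find /opt/splunk/var/run/splunk/srtemp -mindepth 1 -maxdepth 1 -mmin +1440 -exec rm -rf {} +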

How to solve the "MongoDB has exhausted the system memory capacity" error?

Hi, I have a standalone Splunk setup on Windows. I have started getting the "MongoDB has exhausted the system memory capacity" error in mongod.log, and in splunkd.log I am getting "KV Store initialization has failed", which I think is a consequence of the MongoDB memory issue. Please suggest how to solve this. I saw another thread suggesting a change to the KV Store memory size, but I also read somewhere that this is not recommended, and I do not know whether it is the correct solution. Details from mongod.log below:

    2018-09-07T08:54:11.046Z F STORAGE [conn9] MongoDB has exhausted the system memory capacity.
    2018-09-07T08:54:11.046Z F STORAGE [conn9] Current Memory Status: { page_faults: 10422817, usagePageFileMB: 234, totalPageFileMB: 123515, availPageFileMB: 52, ramMB: 65191 }
    2018-09-07T08:54:11.046Z F STORAGE [conn9] VirtualProtect for C:/Program Files/Splunk/var/lib/splunk/kvstore/mongo/local.1 chunk 4101 failed with errno:1455 The paging file is too small for this operation to complete. (chunk size is 67108864, address is 4014000000) in mongo::makeChunkWritable, terminating
    2018-09-07T08:54:11.046Z I - [conn9] Fatal Assertion 16362
    2018-09-07T08:54:11.073Z I ACCESS [conn194] Successfully authenticated as principal __system on local
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x146b13
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0xfe14f
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0xf0847
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe ???
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe ???
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe ???
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe ???
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x450e38
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x10a6f3
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x1670f1
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x47fa0b
    2018-09-07T08:54:11.185Z I CONTROL [conn9] mongod.exe index_collator_extension+0x47fbb2
    2018-09-07T08:54:11.185Z I CONTROL [conn9] KERNEL32.DLL BaseThreadInitThunk+0x14
    2018-09-07T08:54:11.185Z I CONTROL [conn9]
    2018-09-07T08:54:11.185Z I - [conn9] ***aborting after fassert() failure

I would like to attend the Splunk PS Architect Practice Lab, but how do I get the login ID / PW issued?

I am a Splunk partner and would like to complete the labs for the Splunk Implementation Fundamentals course. When I click the lab I am taken to a Splunk Enterprise login screen with the following message: "Partners, you will receive an email containing your username and password once you have been enrolled for a lab in the partner learning portal." I tried logging in with my Oxygen account, but that did not work. I would like access to all the labs in the Implementation Fundamentals course, and I would also like to attend the Splunk PS Architect Practice Lab. How do I get a login ID/password issued?

Searching Events in a Tree-Based Structure

I have a set of events, shown below, for a chain of SQL Server blocked processes. It is a tree-based structure. I am trying to join the data set on itself to determine which resources are blocking the most. Either result below is okay, but I prefer the first one. I can produce result #2 using a join and searching the same data twice, but the number of events is more than 10K, so the join truncates the results. I have seen the selfjoin command in the docs but am not certain how to join between two different fields in the same data set. Does anyone know how to produce either of the results below?

**Sample Events**

    Process ID, Blocked By Process ID, Resource Name, Wait Time
    1, 0, Resource 1, 0
    2, 1, Resource 2, 15
    3, 1, Resource 3, 10
    4, 2, Resource 4, 5

**Result set 1**

    Blocker, Total Blocked Victim Time
    Resource 1, 30   <- recursively sum the wait time
    Resource 2, 5

**Result set 2**

    Blocker, Total Blocked Victim Time
    Resource 1, 25   <- only sum the wait time of the direct children (not grandchildren, etc.)
    Resource 2, 5
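
One join-free way to get result #2 is to emit two rows per event in a single pass: one row mapping a process ID to its resource name, and one row charging the event's wait time to its parent's process ID, then aggregate by that shared key. A sketch that assumes underscore field names (process_id, blocked_by, resource_name, wait_time) and resource names without commas; the fully recursive result #1 is much harder to express in one SPL pass:

    base search
    | fields process_id blocked_by resource_name wait_time
    | eval rows=mvappend("map,".process_id.",".resource_name, "wait,".blocked_by.",".wait_time)
    | mvexpand rows
    | eval kind=mvindex(split(rows,","),0), key=mvindex(split(rows,","),1), val=mvindex(split(rows,","),2)
    | stats values(eval(if(kind=="map",val,null()))) as Blocker
            sum(eval(if(kind=="wait",tonumber(val),null()))) as TotalBlockedVictimTime by key
    | where isnotnull(Blocker) AND TotalBlockedVictimTime>0
    | table Blocker TotalBlockedVictimTime

On the sample events this yields Resource 1 = 25 and Resource 2 = 5, matching result set 2, without a join or subsearch, so the 10K truncation does not apply.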

Elapsed time between dates, only including hours between 09:00 and 17:00

I have an incident "Open Date" in the format DD/MM/YYYY HH:MM and an incident "Close Date" in the same format. I want to calculate the amount of time between the two dates, but counting only the hours between 09:00 and 17:00. Can anyone advise? Thanks
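
One approach is to clamp each timestamp into the 09:00-17:00 window of its own day and add eight hours (28800 seconds) for every whole day in between; 32400 and 61200 are 09:00 and 17:00 expressed as seconds since midnight. The sketch below assumes the fields are literally named "Open Date" and "Close Date" and ignores weekends and holidays:

    | eval open_t=strptime('Open Date',"%d/%m/%Y %H:%M"), close_t=strptime('Close Date',"%d/%m/%Y %H:%M")
    | eval open_day=relative_time(open_t,"@d"), close_day=relative_time(close_t,"@d")
    | eval o=min(max(open_t-open_day,32400),61200)
    | eval c=min(max(close_t-close_day,32400),61200)
    | eval days_between=round((close_day-open_day)/86400)
    | eval business_secs=if(days_between==0, max(c-o,0), (61200-o)+(c-32400)+(days_between-1)*28800)
    | eval business_hours=round(business_secs/3600,2)

Skipping weekends needs an extra adjustment (for example, counting Saturdays/Sundays between the two dates and subtracting 28800 seconds each), which is left out to keep the sketch readable.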

Help with arrays, mvexpand, and joins

Hello. I have data in TSV format that I am indexing. Some of the fields are arrays in the format ['23458567','234523456978090','234568957078654']; if the array is empty, it is simply []. When we search, we have to join tables, so some searches contain several joins to follow IDs through the data flow. These IDs are in the array format above, and we sed out the single quotes and brackets to get bare values, then mvexpand and join. The problem is that this processing drops the records whose array is the empty [], but those are valid records too. I tried a conditional eval with a macro, but that is not valid. Something like:

    | eval RS=if(related_vendors == "[]", "[]", `fp_mvexpand(related_vendors)`)

This is what the macro does:

    rex mode=sed field="$arg1$" "s/[][]//g"
    | rex mode=sed field="$arg1$" "s/'//g"
    | makemv delim="," $arg1$

We do this so we can join on the array values, like:

    | `init("assessments")`
    | fields id, info_subType, related_vendors, info_severity
    | dedup id
    | `fp_mvexpand(related_vendors)`
    | eval RV=mvindex(related_vendors,0)
    | join type=left RV [ `init("vendors")`
        | fields id info_name
        | rename id as RV info_name as Vendor
        | fillnull value="none" Vendor
        | dedup Vendor ]
    | stats count(Vendor) by info_subType

In this example, related_vendors in the assessments table is the same as id in the vendors table. So we strip out the brackets and single quotes, mvexpand, then mvindex and join to vendors. But I don't get records where related_vendors = [], and I assume it's because we stripped out the []. Any thoughts on how I can accomplish this? Thanks for all the help, everyone!
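
The rows are most likely dropped by mvexpand rather than by the sed itself: once the brackets are stripped, an empty array becomes an empty string, makemv leaves the field null, and mvexpand discards events whose field is null. One hedged fix is to swap in a sentinel value before calling the macro so every row keeps something to expand:

    | eval related_vendors=if(related_vendors=="[]", "none", related_vendors)
    | `fp_mvexpand(related_vendors)`

The sentinel passes through the sed untouched, the rows survive the expand, and they simply fail to match anything in the join, where the existing fillnull/"none" handling can label them.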

Allow multiple users to log in with the same shared account

We have a requirement to allow approximately 50 users to log in to Splunk simultaneously using the same shared account. The reason is that these users do not have access of their own, and we want them to do a bit of hands-on work during a live demo. Does Splunk support this, or do we need to look at an alternative solution?

How to calculate the duration of overlapping events from multiple services

I have been working on this for quite some time and it appears I am just going in circles; maybe some Splunk savant can work the kinks out. I have a set of normalized data which contains starttime, endtime, AppName, InstanceName, Type, EventName, and duration. My data looks like this:

    1535432400, 1535432700, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535436019_0
    1535443200, 1535443500, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535446818_0
    1535446800, 1535447100, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 300,1535450417_0
    1535447730, 1535448030, App4, alvelca01, 1, App4 Doc_Admin_Prod High PurePath Response Time, 300,1535641220_4
    1535468400, 1535469000, App1, measure, 1, _m_WS_Time ws/cb_App1_Requests.v4_1.ws.producer.App1_requests/cb_App1_Requests_v4_1_ws_producer_App1_requests_Port?_getTaskList, 600,1535472019_0
    1535471219, 1535474819, App2, ualbuacwas6, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471219, 1535474819, App2, ualbuacwas5, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471269, 1535474869, App2, ualbuacwas7, 1, App2 Online - High Active Thread Count, 3600,1535472017_0
    1535471319, 1535474919, App2, ualbuacwas6, 1, High App2 WCAX JDBC Pool Percent Usage, 3600,1535472017_1
    1535471319, 1535471449, App2, ualbuacwas7, 1, High App2 WCAX JDBC Pool Percent Usage, 130,1535472017_1
    1535479849, 1535483449, App2, ualbuacwas5, 1, High App2 JDBC Pool Percent Usage, 3600,1535482816_1
    1535481100, 1535481103, App3, ip-10-14-6-210.ec2.internal, 1, Application Process Unavailable (unexpected), 3,1535482817_0
    1535481100, 1535481107, App3, ip-10-14-6-44.ec2.internal, 1, Application Process Unavailable (unexpected), 7,1535482817_1
    1535481164, 1535481165, App4, alvelcw01, 1, Application Process Unavailable (unexpected), 1,1535641220_3
    1535481348, 1535484948, App2, ualbuacwas8, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535481348, 1535484948, App2, ualbuacwas7, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535481348, 1535484948, App2, ualbuacwas6, 1, App2 Online - Hung Threads, 3600,1535482816_2
    1535512218, 1535512288, App2, ualbuacwas5, 1, Application Process Unavailable (unexpected), 70,1535515215_0

I have tried to use concurrency with transaction:

    base search ....
    | concurrency start=stime duration=duration output=overlay
    | table _time Service EventName duration overlay

The concurrency command is not splitting the services out, but now that I've looked at it, it shouldn't: it calculates concurrency across all overlaps, not per service. What I am looking for is the duration of the overlaps by service, a lot like what the Timeline visualization shows. Example: the first event for App1 starts at 10:30am with a duration of 300 seconds, the next event for App1 starts at 10:32 for 300 seconds, and so on. I want each service's total duration from the first overlapping event to the last. To throw a wrench into the mix, some events for a service do not overlap at all, and those have to be measured individually. Any help at this point would be a bonus. Thanks in advance.
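
This looks like the classic "merge overlapping intervals per group" problem, and streamstats can solve it without transaction or concurrency: carry the running maximum endtime per service, start a new session whenever an event begins after everything seen so far has ended, then aggregate per session. Non-overlapping events fall out naturally as single-event sessions. A sketch assuming fields named Service, starttime, and endtime in epoch seconds (adjust to AppName if that is the grouping you want):

    base search
    | sort 0 Service starttime
    | streamstats current=f window=0 max(endtime) as prev_end by Service
    | eval new_session=if(isnull(prev_end) OR starttime>prev_end, 1, 0)
    | streamstats sum(new_session) as session_id by Service
    | stats min(starttime) as session_start max(endtime) as session_end by Service session_id
    | eval session_duration=session_end-session_start

Each output row is one contiguous block of overlapping events for a service, with session_duration running from the first overlapping event's start to the last one's end.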