Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Qualys VM App for Splunk Enterprise: When will the app be compatible with Splunk 6.5.1?

Hi - I see the app (Qualys VM App for Splunk Enterprise) description does not list Splunk 6.5.1 as a compatible version. Any idea when this app will be made compatible with the new Splunk version? Thanks!

Image on Dashboard

Hello All, I am trying to embed an image in a user manual dashboard and have tried various methods from this forum, but the image is not loading. I have copied the image into the ../apps/myapp/appserver/static directory. I am trying to show the image as below (complete code is in an html tag):

1. Go to the Development Kit tab and click Dashboard Test Image
...and so on. I have tried the Splunkweb prefix and various other methods, and am not sure where to look now. Thanks, Hemendra

Configuration initialization for C:\Program Files\Splunk\etc took longer than expected (1869ms) when dispatching a search (search ID: puneeth__puneeth__search__search1_1480395364.26344); this typically reflects underlying storage performance issues

Why does this error occur, and how do we fix it? (Screenshot attached: /storage/temp/174296-capture.png)

How to find the time taken to create a particular index in Splunk?

Hi All, I need to find the time it took to create my index in Splunk. Can anyone please help me with how to find that in Splunk?

Anonymizing diag - string index out of range error

When anonymizing a diag as per the following: https://docs.splunk.com/Documentation/Splunk/6.5.0/Troubleshooting/AnonymizedatasamplestosendtoSupport An error is encountered on certain log files:
Error reading file /Users/arowsell/Documents/xxtest/ananon_diag_test//test_fail_1_event_bad.log and getting terms: string index out of range
Steps to reproduce:
- Run the following command on the attached file test_fail_1_event_bad.log: find pathtomyuncompresseddiag/ -name *.log* | xargs -I{} ./splunk anonymize file -source '{}'
- Run the same command on the file test_fail_1_event_good.log; the error does not occur.
The only difference between the files seems to be the trailing whitespace at the end of the event.
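If trailing whitespace really is the trigger, one hedged workaround (not verified against the anonymizer itself) is to strip trailing whitespace from the log file before running splunk anonymize, e.g. with a small pre-processing pass in Python:

```python
def strip_trailing_whitespace(text: str) -> str:
    """Remove trailing spaces/tabs from every line, preserving line breaks.

    Sketch of a pre-processing step before `splunk anonymize`; whether this
    avoids the "string index out of range" error is an assumption based on
    the observed difference between the good and bad files.
    """
    return "\n".join(line.rstrip() for line in text.split("\n"))

print(strip_trailing_whitespace("event one   \nevent two\t"))  # event one\nevent two
```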

Enterprise security search not returning results

Hi there, I am working on an app and would like my data to be visible in the Enterprise Security dashboards. I believe I have successfully mapped my data to the relevant CIM data model. I can pivot and use |tstats searches on my data from my app or the Splunk Search app; however, when running the same searches in the context of Enterprise Security I get no results. For example, when running this search from the Search app I get the expected results:

| tstats `summariesonly` values(All_Email.protocol) as protocol, avg(All_Email.size) as avg_size, count from datamodel=Email.All_Email where * All_Email.size>`large_email_threshold` by All_Email.src,All_Email.src_user,All_Email.dest
| `drop_dm_object_name("All_Email")`
| eval avg_size=floor(avg_size)
| sort 100 - avg_size
| fields protocol, src, src_user, dest, count, avg_size

However, when running it from Enterprise Security I get 0 results. It is almost as if Enterprise Security does not have permission to the data model (although the data model has the default settings of Everyone Read and Admin Write for All Apps). Can anyone help with this? Many thanks

Splunk 6.2.3 vulnerability in dashboard

Hello guys, there is a vulnerability in Splunk: it is possible to edit the search of a dashboard using a web browser's developer tools or OWASP. This can be restricted by role; however, it is possible to remove the timechart and then show the raw logs, which we don't want:

// SEARCH MANAGERS
var search1 = new SearchManager({
    "id": "search1",
    "status_buckets": 0,
    "search": "index=myindex | timechart span=1d count",  // <-- this line can be tampered with
    "earliest_time": "-7d@h",
    "cancelOnUnload": true,
    "latest_time": "now",
    "app": utils.getCurrentApp(),
    "auto_cancel": 90,
    "preview": true,
    "runWhenTimeIsUndefined": false
}, {tokens: true, tokenNamespace: "submitted"});

To finish: our aim is to prevent users from seeing the raw data; they should only see the table or timechart. Thanks a lot!

How to edit my eval statement to convert numerical values from a field into MB?

I have a field [**B**] that consists of numbers and strings:

10 gb
20 gb
30 gb

I would like to write an eval statement that converts the numerical values into MB: B=*"GB"| eval GB=*/1024 | table GB I'm not sure what is incorrect about this statement?
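For what it's worth, a likely fix (a sketch, not tested against the actual data) is to extract the number first and then multiply by 1024, since converting GB to MB is a multiplication rather than a division; in SPL, something along the lines of | rex field=B "(?<GB>\d+)" | eval MB=GB*1024 | table MB. The arithmetic itself:

```python
def gb_field_to_mb(value: str) -> float:
    """Parse a field value like '10 gb' and convert it to megabytes.

    1 GB = 1024 MB, so the conversion multiplies by 1024 (the posted
    eval divided by 1024, which would yield fractions of a GB instead).
    """
    number = float(value.lower().replace("gb", "").strip())
    return number * 1024

print(gb_field_to_mb("10 gb"))  # 10240.0
```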

How to calculate and display in a table the average of each user's field values for n number of days?

I have a search which gives the following result for one day:

Users  dc(users_activity_count)  average(All_users_activity_count)
A      2                         2
B      3                         2
C      1                         2

Similarly, how can I calculate the average of each user's activity count for n number of days, like below?

Users  dc(users_activity_count)  average(All_users_activity_count)  average(each_users_activity_count_forlast7days)
A      2                         2
B      3                         2
C      1                         2

Where `average(each_users_activity_count_forlast7days)` should display the average for each user over the last 7 days, i.e. user A's average, B's, C's...
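In SPL this is typically done with a daily bin plus two stats passes, e.g. `bin _time span=1d | stats dc(...) as daily by user, _time | stats avg(daily) by user` (the exact field names here are assumptions). The underlying calculation, sketched in Python:

```python
# Hedged sketch: given each user's daily activity counts for the last
# 7 days (a day with no activity contributes 0), compute the per-user
# average. User names and counts are illustrative, not from the post.
def seven_day_averages(daily_counts: dict) -> dict:
    return {user: sum(counts) / len(counts)
            for user, counts in daily_counts.items()}

counts = {"A": [2, 4, 0, 2, 2, 2, 2], "B": [3, 3, 3, 3, 3, 3, 3]}
print(seven_day_averages(counts))  # {'A': 2.0, 'B': 3.0}
```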

How to build a regular expression that will split a field on the first underscore?

I need to use regex to split a field into two parts, delimited by an underscore. The vast majority of the time, my field (a date/time ID) looks like this, where AB or ABC is a 2 or 3 character identifier:

11232016-0056_ABC
11232016-0056_AB

I use the following rex command to extract, and it works great: | rex field=originalField "(?<subField1>.*)\_(?<subField2>.*)" For example: originalField = 11232016-0056_ABC, subField1 = 11232016-0056, subField2 = ABC. However, I have a few special cases where `originalField = 11232016-0056_ABC_M`, where M could be anything alphanumeric following an additional underscore. When I use the above rex command, I get the following result: originalField = 11232016-0056_ABC_M, subField1 = 11232016-0056_ABC, subField2 = M. I want to see the following: subField1 = 11232016-0056, subField2 = ABC_M. Basically, I need it to split at the first underscore and ignore all subsequent underscores.
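A common fix (hedged, the SPL is untested here) is to forbid underscores in the first capture group so the match stops at the first one: | rex field=originalField "(?<subField1>[^_]+)_(?<subField2>.*)". The same pattern, checked in Python:

```python
import re

# [^_]+ means "one or more non-underscore characters", so the split
# lands on the FIRST underscore; a greedy .* in the first group would
# push the split to the LAST underscore instead.
pattern = re.compile(r"^(?P<subField1>[^_]+)_(?P<subField2>.*)$")

m = pattern.match("11232016-0056_ABC_M")
print(m.group("subField1"), m.group("subField2"))  # 11232016-0056 ABC_M
```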

Splunk App for Jenkins: Where can I find examples about creating alerts based on job duration?

I'm looking for some examples on creating alerts based on job duration. Is this something that can be done with the data?

Has anyone setup Tipping Point and/or AccelOps to feed logs to Splunk?

We're looking to use Splunk to grab metrics from Tipping Point and AccelOps to come up with dashboards and reports. I've been looking around online and there doesn't seem to be an app for either. Has anyone out there set up Splunk to grab metrics from these before? Did you use Splunk DB Connect or an API? Thanks

Logged in as an Administrator, but why am I unable to delete a field or perform an action?

Hello, As you can see below, I can't perform any action, and when I click on PERMISSIONS, I get this error, even as Administrator: Splunk could not retrieve permissions for resource data/props/extractions [HTTP 404] https://127.0.0.1:8089/servicesNS/desenvolvimento/search/data/props/extractions/dbx2%20%3A%20EXTRACT-GPVersion; [{'code': None, 'type': 'ERROR', 'text': "\n In handler 'props-extract': Could not find object id=dbx2 : EXTRACT-GPVersion"}] Cheers! (Screenshot attached: /storage/temp/173318-screen-shot-2016-11-29-at-63403-pm.png)

Splunk IT Service Intelligence: What is the best KPI to monitor if a host is up and sending data?

In Splunk IT Service Intelligence (ITSI), what is the best KPI to monitor whether a host, forwarder, or entity is up and sending data? I have noticed that if I shut down a forwarder, the Service/KPI does not show any indication that an entity has stopped providing KPI data, other than the sparkline showing 0 values.

How to edit props.conf to resolve "Could not use strptime to parse timestamp" error?

Hello, I have a timestamp of **[17/Oct/2016:16:09:51 +0000]** and my props.conf looks like:

TIME_PREFIX = \[
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %Y/%b/%d:%H:%M:%S +0000

When I do this, I am getting the error: **Could not use strptime to parse timestamp from "[17/Oct/2016:16:09:51 +0000]"** Can anyone let me know where my mistake is?
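The field order in the format string does not match the timestamp: [17/Oct/2016:...] is day/month/year, while %Y/%b/%d expects year first, and the literal +0000 is more robustly handled with %z. A likely fix (an assumption, not verified on this sourcetype) is TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z. The corrected format can be sanity-checked with Python's strptime, which uses the same directive family:

```python
from datetime import datetime

# %d/%b/%Y = day/abbreviated-month/year, matching "17/Oct/2016";
# %z consumes the "+0000" UTC offset.
ts = datetime.strptime("17/Oct/2016:16:09:51 +0000", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2016-10-17T16:09:51+00:00
```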

How can I join a search to a lookup to see where data is missing?

Hello, I am writing a search to figure out which users haven't logged their hours. For a list of all users I have a lookup:

Name,Id,Team
Name1,Id1,Team1
Name2,Id2,Team1
Name3,Id3,Team2
Name4,Id4,Team2

The data for their hours is in the following format:

Id,Date,Task,Hours
Id1,2016-01-01,Task1,3
Id1,2016-01-01,Task2,4
Id1,2016-01-01,Task3,1
Id2,2016-01-01,Task2,6
Id2,2016-01-01,Task4,2
Id3,2016-01-01,Task4,4
Id1,2016-01-02,Task2,4
Id1,2016-01-02,Task3,5
Id2,2016-01-02,Task1,5
Id2,2016-01-02,Task2,2
Id3,2016-01-02,Task1,4
Id3,2016-01-02,Task2,1
Id3,2016-01-02,Task3,2

What I'm trying to create is a chart that tells me the total hours a user has logged per day, regardless of task. In order to get the full list of users, I start with the lookup (in the above example user Name4 has not logged any hours, but I need the chart to tell me that). No matter what I use, I cannot get the entire list of hours by day for each user; I can only get the latest one:

| inputlookup resource_list.csv
| join type=outer Id [ search index="hours" | stats sum(Hours) as Hours by Id, Date ]
| table Name, Team, Hours, Date
| sort Team, Name

I have also tried using `mvzip` to create a multivalue that I expand afterwards, but this doesn't work either. I'm looking to get output like:

Name,Team,2016-01-01,2016-01-02...
Name1,Team1,8,9...
Name2,Team1,8,7...
Name3,Team2,4,9...
Name4,Team2,0,0...

Is this possible? Any help would be greatly appreciated! Thank you and best regards, Andrew
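One common SPL pattern for this (hedged, details assumed) is to pivot the events with `| chart sum(Hours) over Id by Date`, bring in the lookup for the full user list, and finish with `fillnull value=0`. Whatever the SPL, the underlying pivot logic looks like this sketch in Python, using an abridged subset of the posted sample data:

```python
from collections import defaultdict

# Sketch of the desired result: total hours per user per day, with the
# lookup driving the rows so users with no events (Name4) show 0.
users = [("Name1", "Id1", "Team1"), ("Name2", "Id2", "Team1"),
         ("Name3", "Id3", "Team2"), ("Name4", "Id4", "Team2")]
events = [("Id1", "2016-01-01", 3), ("Id1", "2016-01-01", 4),
          ("Id1", "2016-01-01", 1), ("Id2", "2016-01-01", 6),
          ("Id2", "2016-01-01", 2), ("Id3", "2016-01-01", 4),
          ("Id1", "2016-01-02", 4), ("Id1", "2016-01-02", 5),
          ("Id2", "2016-01-02", 5), ("Id2", "2016-01-02", 2)]

totals = defaultdict(int)              # (Id, Date) -> summed hours
for uid, date, hours in events:
    totals[(uid, date)] += hours

dates = sorted({date for _, date in totals})
for name, uid, team in users:
    print(name, team, [totals.get((uid, d), 0) for d in dates])
```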

My Splunk Add-on for Check Point OPSEC LEA configuration works on an indexer, but why not on a heavy forwarder?

I am trying to deploy the Splunk Add-on for Check Point OPSEC LEA on a heavy forwarder, and the configuration is not working. I tried it on the indexer directly and it worked, but when I configure it on the forwarder with the same setup as on the indexer, plus an added outputs.conf that sends data to port 5515, it doesn't work. I am assuming I then only need to listen on 5515 at the indexer.

How to link Windows Security logs with MS Active Directory events?

Hi, I was wondering if anyone has successfully linked the standard Windows Security events with those from MS Active Directory (`index=msad admonEventType=Update`). I have been able to do it for 4738 Accounts Enabled, but am having trouble with logon failures, and I am sure that won't be the last one. As an example, here is the one I did for Accounts Enabled. The two `dedup` commands are to get rid of duplicate entries from multiple DCs for each event type:

(index=wineventlog AND EventCode=4738 AND Account_Name!="ANONYMOUS LOGON" AND "Account Enabled") OR (index=msad AND admonEventType=Update AND objectCategory="CN=Person,CN=Schema,CN=Configuration,DC*")
| lookup active-users.csv identity AS Account_Name OUTPUT identity as sAMAccountName
| dedup whenChanged keepempty=true
| dedup RecordNumber keepempty=true
| transaction sAMAccountName maxspan=1m
| search index="wineventlog" AND index=msad
| eval Time=strftime(_time,"%H:%M:%S")
| eval Date=strftime(_time," %d-%m-%Y")
| lookup active-admins.csv identity AS src_user OUTPUTNEW nick AS Administrator, AccountType AS Admin_Account_Type
| table Date, Time, Account_Domain, sAMAccountName, displayName, Administrator, Admin_Account_Type, userAccountControl, Old_UAC_Value, New_UAC_Value

Then there is the one I am having trouble linking, because the standard props/transforms for Windows Security events and msad events differ for the same field, e.g. the username for WinSec is either "Account_Domain" or "user", while for msad it is "sAMAccountName" or potentially "mailNickname"...

(index=wineventlog AND EventCode=4625 AND Account_Name!="ANONYMOUS LOGON" AND Account_Name!="svc*" AND Account_Name!="*$") OR (index=msad AND admonEventType=Update AND objectCategory="CN=Person,CN=Schema,CN=Configuration,DC*" AND sAMAccountName!="svc*" AND sAMAccountName!="*$")
| dedup whenChanged keepempty=true
| dedup RecordNumber keepempty=true
| lookup active-users.csv identity AS Account_Name, identity AS sAMAccountName OUTPUT identity
| fillnull value=NULL Failure_Reason, signature, Workstation_Name

The `fillnull` fills in a few WinSec fields that provide a lot more info (e.g. signature can give "Username does not exist" or "Username is correct but the password is incorrect" as the reason the logon failed) so that the msad events come up in a stats command, since I was trying to do a stats count on the various fields to pull all of the logon failure events together per host or username... Using v6.5 with standard Windows Security events (non-XML); the "AND"s in the main search are primarily to make it easier to read with the new syntax highlighting.

Will I be able to upgrade indexers in a cluster without any downtime?

All, A big concern for us lately has been Splunk downtime. Search head clustering has been helpful, so now we're looking at the indexing tier. Per my reading here: http://docs.splunk.com/Documentation/Splunk/6.5.1/Indexer/Takeapeeroffline it seems I can upgrade node by node without downtime, as long as my search factor is high enough? Am I reading this correctly?

Regex Help for special characters

Hi, my log looks like this, and I am trying to get the average response time by service:

ServiceInvoker (service_A) : executeFlow : Time Take is = 3378
ServiceInvoker (service_B) : executeFlow : Time Take is = 378
ServiceInvoker (service_C) : executeFlow : Time Take is = 338

Here is what I have:

index=app | rex '\ServiceInvoker\s+"((?<service>\S+))"\s+:\s+executeFlow\s+:\s+Time\s+take\s+is\s+=\s+(?<response_time>\d+)' | stats sparkline(avg(response_time),1m) as processTime_trend, avg(response_time), count BY service

The brackets surrounding the service name are causing an issue for retrieving the results. Any help or ideas would be appreciated. Thanks in advance
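Parentheses are regex metacharacters, so the literal brackets around the service name need escaping with \( and \); the quotes in the posted rex also don't appear in the sample lines, and the samples say "Time Take" rather than "Time take". A pattern along these lines (hedged; field names kept from the post, SPL itself untested) should work: | rex "ServiceInvoker \((?<service>[^)]+)\)\s*:\s*executeFlow\s*:\s*Time Take is = (?<response_time>\d+)". Checked in Python:

```python
import re

# Escaped \( \) match the literal parentheses; [^)]+ captures the
# service name between them without running past the closing bracket.
pattern = re.compile(
    r"ServiceInvoker \((?P<service>[^)]+)\)\s*:\s*executeFlow\s*:\s*"
    r"Time Take is = (?P<response_time>\d+)"
)

m = pattern.search("ServiceInvoker (service_A) : executeFlow : Time Take is = 3378")
print(m.group("service"), m.group("response_time"))  # service_A 3378
```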

