Channel: Questions in topic: "splunk-enterprise"

How can I search for the top 10 users of Splunk?

How can I search for the top 10 users of Splunk? Can anyone help with the query? I'm not sure whether the query below is correct:

    index=_audit action="success" info=succeeded | stats count by user | sort - count | head 10
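For reference, a minimal sketch along those lines. In the _audit index, successful logins are typically recorded with action="login attempt" and info=succeeded rather than action="success", but check your own audit events before relying on that:

    index=_audit action="login attempt" info=succeeded
    | stats count by user
    | sort - count
    | head 10

The last three steps can also be collapsed into a single `| top limit=10 user`.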

JSON input keeps erroring out, even though the JSON is clean

I'm attempting to parse Azure API Management Gateway logs, which come in JSON format. It starts out like this:

    { "records": [ { "time": "2017-10-11T13:04:54.8339905Z", "resourceId": "/SUBSCRIPTIONS/#####/RESOURCEGROUPS/...", "properties": {"method":"POST","url":"https://example.com/v#/testing/testing","responseCode":200,"responseSize":722,"requestSize":1890} } , { "time": "2017-10-11T13:04:52.9585550Z", "resourceId": "/SUBSCRIPTIONS/#####/RESOURCEGROUPS/...", "properties": {"method":"POST","url":"https://example.com/v#/testing/testing","responseCode":200,"responseSize":8058,"requestSize":1979} } ] }

I use a script to transform that into one record per line, like so:

    {"time": "2017-10-11T13:04:54.8339905Z","resourceId": "/SUBSCRIPTIONS/#####/RESOURCEGROUPS/...","properties": {"method":"POST","url":"https://example.com/v#/testing/testing","responseCode":200,"responseSize":722,"requestSize":1890}}
    {"time": "2017-10-11T13:04:52.9585550Z","resourceId": "/SUBSCRIPTIONS/#####/RESOURCEGROUPS/...","properties": {"method":"POST","url":"https://example.com/v#/testing/testing","responseCode":200,"responseSize":8058,"requestSize":1979}}

When I import an individual post-transform file and select `_json` as the sourcetype, I receive no errors in the import process or in index="_internal" afterwards, and the search results show up as expected. When I monitor a directory that contains post-transform files, I get this error a whole lot:

    JSON StreamId:##### had parsing error:Unexpected character while expecting ':': 'c' - data_source="/var/log/import.json", data_host="splunk-host", data_sourcetype="_json"

I wrote a quick Python script to open each post-transform file and json.loads each line, and the files are clean as far as I can tell. What am I missing?
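For reference, a sketch of a custom sourcetype that is sometimes used instead of the built-in `_json` for one-event-per-line files. The stanza name here is hypothetical; the idea is simply to rule out line merging and truncation, since this JSON parser error often shows up when an event gets merged with its neighbour or cut off mid-object:

    [azure:apim:gateway]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TRUNCATE = 0
    KV_MODE = json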

stats sum command doesn't work

Hi guys, I have already used the "stats sum" command several times, but I just noticed that for one particular index the command returns no results, even though I have several events available and the field the command is applied to is present. Below are my command and the result: ![alt text][1]

For another index, the same command works fine: ![alt text][2]

[1]: /storage/temp/217806-capture-decran-2017-10-09-a-180134.jpg
[2]: /storage/temp/217807-capture-decran-2017-10-09-a-180042.jpg
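One common cause is the field containing non-numeric characters (units, thousands separators, stray spaces), in which case sum() quietly skips those values. A sketch for checking and coercing, with my_field standing in for whatever field the screenshots show:

    index=your_index
    | eval my_field_num = tonumber(trim(my_field))
    | stats count(my_field) AS events_with_field count(my_field_num) AS numeric_values sum(my_field_num) AS total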

How do I use lookup to filter results? Needs to be "contains" rather than equals.

I am looking to filter events in Splunk by values in a lookup table. I implemented the solution from this question, and it is partially working: https://answers.splunk.com/answers/110381/use-lookup-to-filter-events.html The change that I need to make is to have my lookup values used in a "contains" filter as opposed to a literal/equals filter. My sample code:

    -search string- [| inputlookup searchtermsample2.csv | fields query] | stats count by searchTerms

So... how do I rewrite this so that there are wildcards on each side of the lookup field?
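For reference, a sketch of one way to do this, keeping the -search string- placeholder and the file and field names from the example above. Because the subsearch returns a field literally named `query`, its values are dropped into the outer search as raw search terms, so wrapping each value in wildcards before returning it turns the match into a "contains":

    -search string-
        [| inputlookup searchtermsample2.csv
         | eval query="*" . query . "*"
         | fields query]
    | stats count by searchTerms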

Source IPs Communicating with Far More Hosts Than Normal (Assistant: Detect Spikes)

Hello All, I was wondering if someone could break down what the following search does and what the final output fields mean? This search was taken from the **Splunk Security Essentials app**:

    | (tag=network tag=communicate) OR (index=pan_logs sourcetype=pan*traffic) OR (index=* sourcetype=opsec) OR (index=* sourcetype=cisco:asa)
    | convert mktime(_time) timeformat="%Y-%m-%dT%H:%M:%S.%3Q%z"
    | bucket _time span=1d
    | stats dc(dest_ip) as count by src_ip, _time
    | eventstats max(_time) as maxtime
    | stats count as num_data_samples max(eval(if(_time >= relative_time(maxtime, "-1d@d"), '$outlierVariableToken$',null))) as $outlierVariableToken|s$ avg(eval(if(_time upperBound) AND num_data_samples >=7, 1, 0)
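For context, a generic sketch of the average/standard-deviation spike-detection pattern this family of searches follows. It is not the exact Security Essentials search (the pasted version above appears to be cut off), and the multiplier of 4 and the tag filter are only illustrative:

    tag=network tag=communicate
    | bucket _time span=1d
    | stats dc(dest_ip) AS dest_count by src_ip, _time
    | eventstats avg(dest_count) AS avg_count stdev(dest_count) AS stdev_count count AS num_data_samples by src_ip
    | eval upperBound = avg_count + (stdev_count * 4)
    | eval isOutlier = if(dest_count > upperBound AND num_data_samples >= 7, 1, 0)

Read that way, `num_data_samples` is how many daily data points exist per source IP, `upperBound` is that source's threshold, and the final 1/0 flag marks days where the distinct-destination count spiked above it.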

Infoblox Events

We are trying to forward Infoblox logs to our Splunk instance. I provided the IP and port to our network engineering folks to configure Infoblox forwarding. They came back indicating this:

    data.destination.splunk > set mode forward
    Certificate set-up is required before Data Collector can forward to Splunk. Please use command 'certificate import' to do so

Is there a Splunk certificate I can import?

    data.destination.splunk > certificate import
    The data.destination.splunk > certificate import command imports the Forwarder Certificate from a SCP server or an FTP server.
    Syntax: data.destination.splunk > certificate import ://loginname@serverIP:[port:]path
    Example: data.destination.splunk > certificate import scp://root@10.2.1.1:999/DC2/

What cert is this looking for?

How to properly use OR and WHERE in Splunk

Hi, I'm new to Splunk; my background is mainly in Java and SQL. I was just wondering what the operator "OR" means in Splunk, does it have a different meaning? For example, am I using it correctly in this instance: host = x OR host = y | Furthermore, I was told the keyword "WHERE" has a different meaning compared to SQL. Could you please explain this to me? I've looked everywhere for this answer, but haven't really understood other people's explanations, and was hoping you could dumb it down as much as possible. Thanks
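For reference, a small sketch of both; the `status` field and the threshold are purely illustrative, not from your data:

    host=x OR host=y
    | where status >= 500 OR like(host, "%web%")

The OR in the base search combines field/value filters while events are being pulled from the index, just as you wrote it. `where`, by contrast, runs afterwards and evaluates an eval-style expression against each retrieved event, so it can use functions such as like() but cannot make the initial search itself more selective.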

What version of TLS does the Splunk Python SDK use?

Hello! I've got a problem: my Python script is not able to get a connection to our Splunk server. This is my code:

    SPLUNKCONNECTION = client.connect(
        host=URL,          # Server URL
        app=APP,           # Name of the app
        port=9089,         # we have got an offset of 1000 in our ports
        scheme="https",    # https is used
        version="6.6.3",   # Splunk version as seen in "Help -> Infos" in the top right corner
        username=USER,     # Username who shall be logged in
        sharing="app",
        password=PASS)     # Password of the user

This is the error I get:

    ssl.SSLError: [Errno 1] _ssl.c:499: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

All "capslock" names are prefilled variables. The entries in the variables are correct; I already checked them. Our theory is that there is a mismatch between the SDK's TLS version and our server, so we want to check this. But if someone knows a general solution for this problem, I would be happy. Thanks for the help! Cheers, Torben

Too many open files, despite ulimit of 65536

My Splunk indexer stopped with lots of ERROR messages which ended in "Too many open files". ulimit -n shows 65536 for the splunk user. I was able to just start Splunk again, and it has now been running fine for two days. Also, when I count the currently open files with

    lsof -u splunk | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'

it shows about 2600 open files for user splunk, so nothing serious. So obviously increasing the nofile limit would not help to prevent this in the future. Any ideas on what could be done to analyse this further? I am currently monitoring the number of open files over time. Maybe during the crash (0:03 am) nofiles was different from what it is now in the afternoon.
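One place to start digging is splunkd's own log around the crash time; a sketch that breaks the errors down by component over time:

    index=_internal sourcetype=splunkd log_level=ERROR "Too many open files" earliest=-7d
    | timechart span=1h count by component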

How to regularly write filtered events to another index

I have an index `test` that contains too many events, and I need to filter by keyword and write the results to the index `Useful_logs`. For example, the filter condition is: `index=test sourcetype=abc "login" "user" "deviceId"`. Then, at midnight every day, I want to filter the previous day's events and write the filtered events to the index `Useful_logs`. Finally, I can use `index=Useful_logs` to search for the logs I want. Of course, some friends may suggest configuring "transforms.conf", but I want to keep all the logs in the test index while also writing the useful logs to the new index (the Useful_logs index). So what should I do?
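A minimal sketch of the scheduled-search approach, assuming `Useful_logs` already exists as an event index: schedule the search below to run shortly after midnight over the previous day, and `collect` copies the matching events into the target index (by default `collect` writes them with sourcetype=stash; it has a sourcetype option if you want to override that):

    index=test sourcetype=abc "login" "user" "deviceId" earliest=-1d@d latest=@d
    | collect index=Useful_logs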

SSO on SiteMinder

Hi all, I have the following problem: we configured SSO with SiteMinder using SAML. The problem is that this SiteMinder is used only for authentication and not also for profiling, so we're not able to configure Splunk roles, and when authenticating we receive the following error message from Splunk: "**Saml response does not contain group information**". Looking at SiteMinder's logs, we can see the following parameters arriving at Splunk (after authentication against SiteMinder's authentication scheme): UIDxxxxxxxxxx.xxxxx@xxxx.xxxxxx.com Has anyone encountered this problem? Thank you in advance. Bye. Giuseppe
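For what it's worth, this error usually means the SAML assertion carries no group/role attribute for Splunk to map to roles. A sketch of the authentication.conf pieces involved, assuming the IdP can be configured to send such an attribute; the attribute name memberOf and the group names are only placeholders:

    [authenticationResponseAttrMap_SAML]
    role = memberOf

    [roleMap_SAML]
    admin = splunk_admins
    user = splunk_users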

How to get the response time value?

I want to get the response time as a value (a number). How can I get it? The following search gives me a visual representation of the response time, not the number I'm after:

    index=abc source=XYZ buildNumber=13 type=REQUEST
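A sketch, assuming the events carry the response time in a field such as responseTime (a hypothetical name; substitute whatever your REQUEST events actually use):

    index=abc source=XYZ buildNumber=13 type=REQUEST
    | stats avg(responseTime) AS avg_response max(responseTime) AS max_response perc95(responseTime) AS p95_response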

echo command in Splunk

How can I print out an arbitrary value or result in Splunk? Does Splunk have anything like an echo command? eval didn't help me much.
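A small sketch of the closest equivalent: `makeresults` creates a single empty result row, which eval can then populate, so the pair works like an echo for testing values and expressions:

    | makeresults
    | eval message="hello from Splunk", answer=6*7
    | table message answer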

Why are my accelerated reports not leaving the "Summarization not started" state?

In my search head cluster, one of my accelerated searches does not seem to be able to run its summarization. Its summary status keeps flipping between `Summarization not started` and `0% Complete`. It also reports `Last Updated: 22h 59m ago`. How do I diagnose why this report's summarization does not run?
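One place to look is the scheduler's record of the acceleration jobs, which appear as saved searches whose names start with `_ACCELERATE_`; a sketch, assuming the usual scheduler.log fields:

    index=_internal sourcetype=scheduler savedsearch_name="_ACCELERATE_*"
    | stats count by savedsearch_name, status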

Converting alphanumeric field to numeric values

I've seen numerous questions out there that touch on this topic but haven't found an answer that actually meets my specific use case. I have data from several sources that report numeric data (such as bandwidth or other datatypes) but instead of returning the value as a number (such as 39600) it returns in this format: 39.6K. I'm able to ingest those values but Splunk, unsurprisingly, doesn't know how to handle that - it treats it as text instead of a number. Long story short, I need a way to translate the following data points into numeric values, either at ingest time or at search time:

    Congestion 39.6K 55.3K 41.2K 40.2K 39.9K 38.9K 40.9K

We only need to return the first value after "Congestion" - the 39.6K value. The other values are previous poll results and we're collecting that already. The output should end up looking like:

    Congestion 39600

This specific data set should never go above "K", but I have other datasets that might go into M or G, etc., so I need something as flexible as possible. I've tried using rex and sed but I've not had any success yet with it. If anyone can provide any help, it'd be greatly appreciated as it will solve multiple issues for us...
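A sketch of a search-time conversion, assuming the raw value ends up in a field called Congestion (a guess at your field name; extend the case() if other suffixes ever appear):

    ...
    | rex field=Congestion "^(?<cong_value>[\d.]+)\s*(?<cong_unit>[KMG]?)"
    | eval cong_mult=case(cong_unit=="K", 1000, cong_unit=="M", 1000000, cong_unit=="G", 1000000000, true(), 1)
    | eval Congestion_numeric=tonumber(cong_value) * cong_mult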

How to alert when a deviation has been detected in volume between two time periods?

I currently use the following query to compare volume counts between the current day and a week ago:

    sourcetype=abc index=xyz source=foo earliest=-0d@d latest=now
    | bucket _time span=30m
    | stats count by _time
    | eval ReportLabel="Today"
    | append
        [search sourcetype=abc index=xyz source=foo earliest=-7d@d latest=-6d@d
         | bucket _time span=30m
         | stats count by _time
         | eval ReportLabel="PreviousWeek"
         | eval _time=_time+(60*60*24*7)]
    | chart max(count) as count over _time by ReportLabel

I'm interested in leveraging this query (if possible) to alert me if volume counts between the two time periods deviate by a certain percentage. Since the alert would run every 30 minutes, I'd have to adjust the timeframes accordingly.

- How would I capture a specific half-hour period from the previous week to reference against the current day?
- How could a deviation calculation be applied?
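A sketch of one way to fold the deviation check in for alerting: take the last complete 30-minute window, compare it with the same window 7 days earlier (7 days = 10080 minutes, so the prior window runs from -10110m to -10080m), and compute the percentage change; the 20% threshold is just an example:

    sourcetype=abc index=xyz source=foo earliest=-30m@m latest=@m
    | stats count AS today_count
    | appendcols
        [search sourcetype=abc index=xyz source=foo earliest=-10110m@m latest=-10080m@m
         | stats count AS lastweek_count]
    | eval pct_change = round(abs(today_count - lastweek_count) / lastweek_count * 100, 2)
    | where pct_change > 20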

Monitor SQLite database file with Splunk DB Connect

**Update** I realized I needed to create an identity, according to the first link. I created one to mirror the config with no password. Now it's saying it can't get schemas.

I read the following articles:

https://www.splunk.com/blog/2016/09/13/using-db-connect-with-sqlite.html
http://docs.splunk.com/Documentation/DBX/3.1.1/DeployDBX/Installdatabasedrivers#Install_drivers_for_other_databases

Latest DB Connect is running, Java 8 is installed, and SQL Explorer (under Data Lab) lists my connection. When I select my DB from the dropdown menu, I get "Invalid Database Connection". I can't find anything in the logs as to why it thinks that. I can see the SQLite driver listed under Drivers.

**Configuration under ...\Splunk\etc\apps\splunk_app_db_connect\local**

db_connection_types.conf

    [sqlite]
    displayName = SQLite
    serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
    jdbcDriverClass = org.sqlite.JDBC
    jdbcUrlFormat = jdbc:sqlite:
    ui_default_catalog = $database$

db_connections.conf

    [default]
    useConnectionPool = true
    maxConnLifetimeMillis = 1800000
    maxWaitMillis = 30000
    maxTotalConn = 8
    fetch_size = 100

    [VPN]
    connection_type = sqlite
    database = C:/Rest/DB.SQLite
    host = localhost
    identity = owner
    jdbcUrlFormat = jdbc:sqlite:
    jdbcUseSSL = 0

Tried `database = C:\Rest\DB.SQLite`, `jdbcUrlFormat = jdbc:sqlite://`, and `database = C:\\Rest\\DB.SQLite`.