Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Problem with data retention policy

Hello, I have 2 indexers (IDX) and one cluster master (CM), which is also acting as deployment server and license master, as well as 2 search heads (SH) in a cluster. I set a data retention period of 180 days, meaning whatever is older than 180 days should be moved to a NAS location via coldToFrozenScript. But the data is not moving to the NAS and is not being archived. The latest event shown is from 8 months ago, i.e. July 2018.

indexes.conf:

    [windows_server_security]
    coldPath = $SPLUNK_DB\windows_server_security\colddb
    enableDataIntegrityControl = 0
    enableTsidxReduction = 0
    homePath = $SPLUNK_DB\windows_server_security\db
    maxTotalDataSizeMB = 512000
    thawedPath = $SPLUNK_DB\windows_server_security\thaweddb
    repFactor = auto
    maxWarmDBCount = 150
    frozenTimePeriodInSecs = 15552000
    rotatePeriodInSecs = 60
    coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py" "$DIR"

I have attached images from the CM dashboard along with the indexes.conf shown above. Can anyone help me with this? Thanks in advance; any help would be appreciated. ![alt text][1] [1]: /storage/temp/272695-dataarchieve.png
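For reference, a coldToFrozenScript receives the bucket directory as its only argument, and Splunk deletes the bucket only after the script exits 0. Below is a minimal sketch of such an archiver, loosely modeled on the stock coldToFrozenExample.py; the ARCHIVE_DIR value is a placeholder assumption, not the poster's actual NAS path:

```python
# Minimal coldToFrozenScript-style archiver sketch. ARCHIVE_DIR is a
# placeholder -- point it at the real NAS mount. Splunk calls the script
# with the frozen bucket directory as argv[1] and removes the bucket only
# if the script exits successfully.
import os
import shutil
import sys

ARCHIVE_DIR = r"\\nas\splunk_frozen"  # placeholder NAS path (assumption)

def archive_bucket(bucket_path, archive_dir):
    """Copy a frozen bucket directory into archive_dir, keyed by index name."""
    if not os.path.isdir(bucket_path):
        raise ValueError("not a directory: %s" % bucket_path)
    # .../<index>/colddb/db_... -> index name is two levels above the bucket
    index_name = os.path.basename(os.path.dirname(os.path.dirname(bucket_path)))
    dest = os.path.join(archive_dir, index_name, os.path.basename(bucket_path))
    if os.path.exists(dest):
        raise ValueError("archive destination already exists: %s" % dest)
    shutil.copytree(bucket_path, dest)
    return dest

if __name__ == "__main__" and len(sys.argv) == 2:
    archive_bucket(sys.argv[1], ARCHIVE_DIR)
```

If the script raises or exits non-zero, Splunk keeps the bucket and retries later, which is one thing worth checking when buckets never reach the archive: the splunkd.log of each indexer should show the script's failures.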

Panels in dashboard disappear

Dear Experts, I created a dashboard with many searches; it took me almost 5 hours to build. I created it on 7/05/2019. Today when I visited it, most of the panels had disappeared, and the remaining panels no longer show accurate data. What happened? Can anyone help me figure this out?

timewrap compare last week with avg last three months

Hi team! I want to compare last week with the average of the last three months. This is my search so far; I need some help, please:

    sourcetype="sophos*" * severity=high earliest=-90d@d
    | timechart span=1month count
    | timewrap month series=short
    | eval mean=(s1+s2)/2
    | where s0 < mean
    | table s0 mean
    | rename s0 AS "Last Week" mean AS "Avg last 3 months"

Could you give me some advice, please? Thank you a lot.

Splunk Web server recv-q filling up, unable to connect

I have a heavy forwarder running on a dedicated RHEL 7.5 server, and I'm trying to connect to its web interface running on port 8000. I have tested this port from the client machine, and by all accounts there is sufficient network access for this to work. When I try to connect, the browser spins indefinitely waiting for a response. After running netstat on the heavy forwarder, I found that every time I initiate a connection the web server process's recv-q grows, but the process never seems to do anything with those requests. I've searched all the relevant system and application log files but can't find any indication of what's going on. I have restarted both the Splunk application and the server itself with no change. The only other clue is that when I restart the Splunk service, it gets stuck on "Waiting for web server to be available..." Has anyone seen this before? I'm not sure what else I can test to reveal the issue. I have multiple other identically configured heavy forwarders that are working fine.

Is it possible to create a chart like this in Splunk?

Hi All, I am trying to build a chart that consists of 4 different fields as well as the total for each month. I am wondering if it is possible to build a chart like this with Splunk. Thanks. ![alt text][1] [1]: /storage/temp/273646-capture7.png

Can anyone help resolve the issue with my search for events relating to USB violations?

![alt text][1] [1]: /storage/temp/273648-usb-event-search-results.png

Insufficient permission with the Splunk ServiceNow add-on

For `eventtype="snow_ta_log_error"` I am getting the errors below:

    ERROR pid=89870 tid=MainThread file=rest.py:splunkd_request:53 | Failed to send rest request=https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_snow/storage/passwords/https%5C%3A%252F%252Fxyz.service-now.com%3Adummy%3A, errcode=403, reason=Insufficient permission.
    ERROR pid=89870 tid=MainThread file=snow_ticket.py:_get_service_now_account:160 | Failed to get clear credentials for https://xyz.service-now.com

How do I troubleshoot this permission error? Thanks,

Error in clustering and replication

Hello, I am getting the following error on my deployment server / cluster master, even though outputs.conf is correct.

outputs.conf:

    [tcpout]
    defaultGroup = indexers

    [tcpout:indexers]
    server = IDX1:9997, IDX2:9997
    autoLB = true
    forceTimebasedAutoLB = true
    autoLBFrequency = 40

Please help me with the error. Any help would be appreciated. Thanks!

    TCPOutAutoLB-0 Root Cause(s): More than 20% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
    Last 50 related messages:
    05-12-2019 19:30:02.091 -0400 WARN TcpOutputProc - Cooked connection to ip=10.184.132.110:9997 timed out
    05-12-2019 19:26:33.000 -0400 WARN TcpOutputProc - Cooked connection to ip=10.184.132.110:9997 timed out
    05-12-2019 19:25:33.180 -0400 WARN TcpOutputProc - Cooked connection to ip=10.184.132.110:9997 timed out
    05-12-2019 19:23:03.692 -0400 WARN TcpOutputProc - Cooked connection to ip=10.184.132.110:9997 timed out

I am writing a subsearch to get user details as input for another search, but it is not working when I include the subsearch. Need help ASAP.

    index=* [ search index=_internal
        [| rest /services/authentication/current-context splunk_server=local
         | fields username
         | rename username as user ]
      | top user limit=1
      | fields user ]

Using intermediate forwarders with a load balancer

At the end of last year we migrated from Splunk 6.5.3 to 7.1.3. The universal forwarders on the various source systems delivering our inputs send data via a load balancer to 2 intermediate forwarders, which are connected to our 6 indexers. That setup was recommended to us a few years ago (by a Splunk partner) during the initial setup of our system. We have found information indicating that the setup recommended today is a direct connection between the universal forwarders on the source systems and the indexers of our Splunk cluster (no intermediate forwarders behind a load balancer). Can anyone comment on this?

Add-On Builder: migration tool error on Windows

Hi, I get the following error when I try to run project_migration_tool on Windows:

    C:\Program Files\Splunk\etc\apps\splunk_app_addon-builder\bin\aob\aob_tools>project_migration_tool.bat
    PYTHONPATH=C:\dev\splunk-sdk-python-1.6.6;C:\Program Files\Splunk\etc\apps\splunk_app_addon-builder\bin\aob;C:\Program Files\Splunk\etc\apps\splunk_app_addon-builder\bin\aob\aob_tool
    Traceback (most recent call last):
      File "project_migration_tool.py", line 16, in <module>
        import aob.aob_tools
    ImportError: No module named aob.aob_tools

Any ideas?

Display a time chart for the distinct count of values in a field

I am a beginner with Splunk; could you help me with the following scenario? Say I have a table with a field named "Computer". Searching the field "Computer" over different time periods gives me different values. When I search April, the result is: a,b,c,d,c. When I search May, the result is: a,b,c,d,e,f,a,b. So the distinct count for April is **4** and for May it is **6**. I would like to create a chart that shows: April - 4, May - 6. What search query could I use to display a chart of the distinct count of the field "Computer" on a monthly basis? Thanks in advance.
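In SPL, a monthly distinct count is usually expressed with the `dc()` aggregation, e.g. `... | timechart span=1mon dc(Computer)`. The underlying computation can be sketched in Python; the sample values below are the ones from the question:

```python
# Per-month distinct count, illustrating what dc(Computer) computes.
from collections import defaultdict

def distinct_count_by_month(events):
    """events: iterable of (month, computer) pairs -> {month: distinct count}."""
    seen = defaultdict(set)
    for month, computer in events:
        seen[month].add(computer)
    return {month: len(computers) for month, computers in seen.items()}

# April yields a,b,c,d,c; May yields a,b,c,d,e,f,a,b (as in the question).
events = [("April", c) for c in "abcdc"] + [("May", c) for c in "abcdefab"]
print(distinct_count_by_month(events))  # {'April': 4, 'May': 6}
```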

How to integrate an incident management ticketing tool with Splunk Enterprise?

I have tried to find an app that can integrate an incident management ticketing tool with Splunk, but couldn't. Is there any other option that could be used to do so?

Set given preset value in time range picker

Hi, I'm using Splunk Enterprise 7.2.3. I have a time range picker on my dashboard to set the date/time range to search over. I want to set its default value to "Previous week". In the dashboard's XML I can calculate the required date range and set it in the time range picker (and it works), but the displayed preset label becomes "Between Date-times" rather than "Previous week". How can I make it show "Previous week"? Thanks in advance, fjp2485

Trigger correlation rule for past event occurrence

An event occurred yesterday, and we have a correlation rule against it. Unfortunately, the rule did not trigger. I have fixed and updated the correlation rule. Is it possible to trigger it against the past event?

Extraction issue with dynamic field names

Hello there, I am stuck on a dynamic field name extraction. The data is partly JSON and sometimes contains nested JSON within the JSON part:

    log-group=abc [2019-05-12 12:23:16,074] - INFO - {"time": "2019-05-12T12:23:16Z", "step": "PRE_REQUEST", "uuid": "abcxyz", "method": "GET", "ip_src": "1.2.3.4", "url": "https://api/abc", "url_params": {"name": "aaa", "reliability": "90", "equipment_name": "bbb", "element_name": "ccc"}, "user": "john"}

I am trying to extract each element of the nested 'url_params'. To do this, I first extract url_params as its own field and then extract each of its field/value pairs using dynamic field naming.

1st step - extracting url_params:

    url_params_extract = {"name": "aaa", "reliability": "90", "equipment_name": "bbb", "element_name": "ccc"}

2nd step - extracting each element:

    name = aaa
    reliability = 90
    equipment_name = bbb
    element_name = ccc

The configuration files look like this:

transforms.conf:

    [url_params]
    FORMAT = url_params_extract::$1
    REGEX = url_params\"\:\s(\{.*?\})\,

    [url_params_extract]
    FORMAT = $1::$2
    REGEX = \"(.+?)\"\:\s\"(.+?)\"
    SOURCE_KEY = url_params_extract

props.conf:

    [test]
    REPORT-url_params = url_params
    REPORT-url_params_extract = url_params_extract
    EVAL-url_params = null
    EVAL-url_params_extract = nullif(url_params_extract, "{}")

The problem is that the last element always comes out with a trailing closing curly bracket. For instance, given

    url_params_extract = {"name": "aaa", "reliability": "90", "equipment_name": "bbb", "element_name": "ccc"}

the result is:

    element_name = ccc"}

instead of the desired:

    element_name = ccc

even though the regex tests fine on regex101. Even more strangely, if I extract the nested JSON without the curly braces, so that

    url_params_extract = "name": "aaa", "reliability": "90", "equipment_name": "bbb", "element_name": "ccc"

I still get:

    element_name = ccc"}

Unfortunately, I am not able to reproduce the issue with this sample event; I am still trying to figure out why. But I am starting to think I am missing something about how the '$1::$2' FORMAT works. Any hints?
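The key/value regex from the transforms.conf above can be checked outside Splunk with Python's re module (Splunk uses PCRE, and for this pattern the behavior should match). On the sample string from the question, the lazy quantifiers do stop at the closing quote, so the pattern itself yields clean values; this supports the poster's observation that the issue is not reproducible in isolation:

```python
# Verifying REGEX = \"(.+?)\"\:\s\"(.+?)\" against the sample url_params value.
import re

url_params_extract = (
    '{"name": "aaa", "reliability": "90", '
    '"equipment_name": "bbb", "element_name": "ccc"}'
)

# Same pattern as in transforms.conf; each match is a (field, value) pair.
pairs = re.findall(r'\"(.+?)\"\:\s\"(.+?)\"', url_params_extract)
print(dict(pairs))
# {'name': 'aaa', 'reliability': '90', 'equipment_name': 'bbb', 'element_name': 'ccc'}
```

Since the pattern is clean on this input, the trailing `"}` in the Splunk result suggests the indexed events differ from the sample (for example, different quoting or whitespace in url_params), so comparing the raw indexed text against this sample would be a reasonable next step.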

What is the REST API POST command to append a role to an existing native user?

Hi all, is there a REST API call to add/append one or more roles to a specific user? For example, the user "SplunkUser" already exists in Splunk with the role "role1" assigned. Which REST API command should be run to add roles "role2" and "role3" to user "SplunkUser"? I have been running the following command, but it replaces the existing role with the new roles rather than appending them:

    curl -k -u username:password /services/authentication/users/SplunkUser -d roles="role2" -d roles="role3"

Please help!
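The behavior described here is expected: the `roles` parameter on a POST to `/services/authentication/users/<name>` sets the user's complete role list. "Appending" is therefore done client-side: GET the user's current roles, merge in the new ones, and POST the combined list back. A sketch of the merge step (the HTTP calls themselves are omitted):

```python
# Client-side role merge for the Splunk users endpoint: the POSTed roles
# list replaces whatever the user had, so the union must be built first.
def merged_roles(current, new):
    """Union of current and new roles, preserving order, dropping duplicates."""
    seen = []
    for role in list(current) + list(new):
        if role not in seen:
            seen.append(role)
    return seen

# For the example in the question:
print(merged_roles(["role1"], ["role2", "role3"]))  # ['role1', 'role2', 'role3']
```

The final POST would then carry one `-d roles=...` argument per merged role, e.g. `-d roles="role1" -d roles="role2" -d roles="role3"`, using the same repeated-parameter form the question's curl command already demonstrates.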

DB input with two rising columns

I have a SQL query that relies on two rising columns (see below). In a DB input, is it possible to set two rising columns? I am using DB Connect 3.1.4.

    SELECT * FROM table_1
    WHERE (timestamp > ? AND logoff_time IS NULL) OR logoff_time > ?
    ORDER BY timestamp, logoff_time

XX events missing due to corrupt or expired remote artifacts from search head

Hello guys, I am seeing errors like this from a clustered search head: "events missing due to corrupt or expired remote artifacts". What does this mean? Thanks.

How can I split a single event into multiple events?

I have configured a REST API input and it returns JSON as a single event. I want to split it into multiple events. For example, the data currently arrives in this format:

    {
      "queryResponse": {
        "@count": 4288943,
        "@first": 0,
        "@last": 99,
        "@type": "ClientSessions",
        "entity": [
          {
            "@dtoType": "clientSessionsDTO",
            "@type": "ClientSessions",
            "clientSessionsDTO": {
              "@displayName": 14,
              "@id": 14,
              "anchorIpAddress": "abcd",
              "authenticationAlgorithm": "xyz",
              "authorizationPolicy": "na",
              "bytesReceived": 0,
              "bytesSent": 0,
              "webSecurity": "abc",
              "wgbStatus": "dgghsd"
            }
          },
          { ... same structure with "@id": 15 ... },
          { ... same structure with "@id": 16 ... },
          ...
        ]
      }
    }

I want to split it so that each element of the "entity" array becomes its own event: event 1 is the object with "@id": 14, event 2 the object with "@id": 15, event 3 the object with "@id": 16, and so on. How can I achieve this? Thanks in advance.
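One way to do the split described above is before indexing, e.g. in a scripted or modular input that emits one event per entry of queryResponse.entity. A minimal sketch (the payload is a trimmed, invented version of the question's data):

```python
# Split an aggregated REST response into one JSON event per entity element.
import json

payload = """
{"queryResponse": {"@count": 3, "@type": "ClientSessions", "entity": [
  {"@dtoType": "clientSessionsDTO", "clientSessionsDTO": {"@id": 14, "anchorIpAddress": "abcd"}},
  {"@dtoType": "clientSessionsDTO", "clientSessionsDTO": {"@id": 15, "anchorIpAddress": "abcd"}},
  {"@dtoType": "clientSessionsDTO", "clientSessionsDTO": {"@id": 16, "anchorIpAddress": "abcd"}}
]}}
"""

def split_entities(raw):
    """Return one compact JSON string per entry in queryResponse.entity."""
    doc = json.loads(raw)
    return [json.dumps(e, sort_keys=True) for e in doc["queryResponse"]["entity"]]

events = split_entities(payload)
print(len(events))  # 3
```

Each string in `events` would then be written out as a separate event. Alternatively, if the data is already indexed as one event, a similar split can be done at search time with `| spath path=queryResponse.entity{} output=entity | mvexpand entity`.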