Channel: Questions in topic: "splunk-enterprise"

Fetch device details using cylance app for splunk

I have the CylancePROTECT app installed in my Splunk environment, but I don't know how to fetch the details below using the API that Cylance provides with the Cylance App for Splunk:

• Id
• Name
• Host_name
• OS_version
• State
• Agent_version
• Policy
• Last_logged_in_user
• Update_type
• Update_available
• Background_detection
• Is_safe
• Date_last_modified
• Ip_address
• Mac_address
• Date_first_registered
• Date_offline

If this can be accomplished using API calls, please let me know how.

Thanks, Manish Kumar

Login for Splunk Security Datasets project

Hi, I registered for access to the Splunk Security Datasets project and received an email with a link to log in, but my standard username/password do not work. Is there someone I can contact to help me with this?

How to audit capability assignment?

Hi, I am looking for real-time events from the audit trail for capability assignments/changes, but it looks like this is not provided in _audit. How can I get an alert when someone adds can_delete, for example, or changes roles in other ways? I know I can query the REST API for the current state, but I am more interested in getting alerts on changes. Monitoring file changes will also only tell me that user X modified authorize.conf, not what was changed. thx, afx
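One workable pattern, sketched under assumptions (the lookup name role_caps_baseline.csv is mine, and this diffs role capabilities rather than every possible authorize.conf change): snapshot role capabilities via REST on a schedule and alert on the difference from the previous snapshot.

Alert search (fires when a role's capabilities differ from the saved baseline, or a new role appears):

    | rest /services/authorization/roles splunk_server=local
    | eval caps=mvjoin(capabilities, ",")
    | fields title caps
    | join type=outer title [| inputlookup role_caps_baseline.csv | rename caps AS baseline_caps]
    | where isnull(baseline_caps) OR caps != baseline_caps

Baseline refresh, scheduled to run right after the alert search:

    | rest /services/authorization/roles splunk_server=local
    | eval caps=mvjoin(capabilities, ",")
    | fields title caps
    | outputlookup role_caps_baseline.csv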

Help with props and transforms

Hi, I have a feed where the fields are wrapped in angle brackets (<>). I have a transforms.conf stanza that extracts the fields automatically:

    REGEX = <([^\/][^>]+)>(.*?)<\/[^>]+>
    FORMAT = $1::$2
    MV_ADD = true

Unfortunately, the extracted field names are all uppercase. I don't see any way to make the field names lowercase, so I've started creating aliases with FIELDALIAS. We need to do this so the fields are picked up by our ES rules. I also need a transform to map the values appropriately. Here is a sample field: "Allow

I want to create an alias named "action" and a transform that maps the value to "allowed". I get the new field, but the transform is not working. Here's what I have configured:

props.conf:

    FIELDALIAS-action = ACTION as action

transforms.conf:

    [forcepoint_xml]
    REGEX = <([^\/][^>]+)>(.*?)<\/[^>]+>
    FORMAT = $1::$2
    MV_ADD = true

    [ACTION]
    REGEX = (Allow|Permit)
    FORMAT = ACTION::allowed

Any suggestions?
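For the value mapping specifically, note that FORMAT in a transforms.conf REPORT stanza defines how new fields are extracted from _raw; it does not rewrite the value of an already-extracted field, which would explain why the [ACTION] transform has no effect. A minimal search-time alternative, sketched on the assumption that the sourcetype stanza is forcepoint_xml, is an EVAL- statement in props.conf (EVAL- runs after field aliasing, so it can replace the FIELDALIAS entirely):

    [forcepoint_xml]
    EVAL-action = if(match(ACTION, "Allow|Permit"), "allowed", lower(ACTION))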

Splunk Data Flow Architecture diagram

Hi Splunkers, I need a data flow diagram of the Splunk components showing how they communicate with each other. Can someone please share a brief diagram?

Problems with WinEvent collection on Windows2000 Server

Hi everybody, my client uses a universal forwarder to forward data from a Windows 2000 server. They are trying to collect the Application, System, and Security Windows event logs. The collection itself seems to work, but we only see Application logs in Splunk. For both of the other event logs we get:

    WinEventLogChannel - init: Failed to bind to DC, dc_bind_time=9047 msec
    WinEventLogChannel - Initialized Windows Event Log='Security' Success; oldest_rec_id='0'; newest_rec_id='0'; total_rec='0'

which effectively means that Splunk cannot find any logs, but why?

Processing Microsoft multi-factor authentication server logs

I am ingesting logs from Microsoft multi-factor authentication server. Unfortunately, the log entries are rather inconsistent. Here is an example of a successful SMS-based authentication, in reverse chronological order:

    May 13 11:12:54 server01 pfsvc: i|pendingSmses|updateSmsResult|#25c50ee6-d8fa-4725-96dd-dc7be580fadf|Updated authentication result. Call status: SUCCESS_SMS_AUTHENTICATED - SMS Authenticated
    May 13 11:12:17 server01 pfsvc: Pfauth failed for user 'user1.lastname@mydomain.com' from 184.152.232.200. Call status: SUCCESS_SMS_SENT - "One-way SMS Sent".

As you can see, the username and IP address are on one line and the result of the authentication is on another. How can I correlate usernames with authentication results? I am not sure that stitching lines together with a different LINE_BREAKER in props.conf is a good idea, since there are other, non-authentication log entries that such an approach might break. Please advise; your suggestions will be greatly appreciated.
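One search-time pattern worth trying before changing event breaking (a sketch: the index and sourcetype names are assumptions, and grouping only by host within a 2-minute window can mix concurrent users, so treat it as a starting point): extract the pieces with rex and stitch the pair with transaction.

    index=mfa sourcetype=pfsvc "Call status:"
    | rex "Pfauth failed for user '(?<user>[^']+)' from (?<src_ip>[\d.]+)"
    | rex "Call status: (?<call_status>[A-Z_]+)"
    | transaction host maxspan=2m startswith="SUCCESS_SMS_SENT" endswith="SUCCESS_SMS_AUTHENTICATED"
    | table _time user src_ip call_status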

user activities, PCI Requirement 10

May I know what counts as user activity per PCI requirement 10? In an ongoing SSAE 18 audit, there is one question: please provide the daily group and user activity report, evidence of its review, and evidence of investigation and follow-up (if applicable). Please share if anyone has an idea about this! Thanks in advance.

**PCI Requirement 10:** Track and monitor all access to network resources and cardholder data. Logging mechanisms and the ability to track user activities are critical in preventing, detecting, and minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting, and analysis when something does go wrong. Determining the cause of a compromise is very difficult, if not impossible, without system activity logs.

Splunk AWS inspector errors

I am getting the below error in splunk_ta_aws_inspector.log:

    level=ERROR pid=1042 tid=MainThread logger=splunk_ta_aws.modinputs.inspector pos=util.py:__call__:163 | | message="Failed to execute function=run, error=Traceback (most recent call last):
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/splunktalib/common/util.py", line 160, in __call__
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/__init__.py", line 53, in run
        _do_run()
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/__init__.py", line 30, in _do_run
        aiconf.AWSInspectorConf, "aws_inspector", logger)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/ta_aws_common.py", line 136, in get_configs
        tasks = conf.get_tasks()
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_conf.py", line 60, in get_tasks
        _cleanup_checkpoints(tasks, config)
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/inspector/aws_inspector_conf.py", line 119, in _cleanup_checkpoints
        internals = store.get_state("internals")
      File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/splunktalib/state_store.py", line 155, in get_state
        state = json.load(jsonfile)
      File "/opt/splunk/lib/python2.7/json/__init__.py", line 291, in load
        **kw)
      File "/opt/splunk/lib/python2.7/json/__init__.py", line 339, in loads
        return _default_decoder.decode(s)
      File "/opt/splunk/lib/python2.7/json/decoder.py", line 364, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/opt/splunk/lib/python2.7/json/decoder.py", line 382, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded

I have this exact same app/setup in two lower environments, which are working just fine and getting data. Today when I logged in, I noticed we are no longer receiving AWS logs in my production environment, and this is the only error I can find. Again, I have not changed any configurations on the Splunk side, and this was working the night prior. Any idea what could be causing the JSON decode error?

Need to send automatic email on CPU Load/Memory usage

I want to automatically send email to certain people whenever there is a high spike in CPU load or memory on a specific server. But I am getting an email every 5 minutes, which is the interval I set in the alert trigger options. Splunk is simply executing the query (scheduled to search the data every 3 minutes) and sending an email every 5 minutes. Please help me with this.
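If the goal is one notification per spike rather than one email per scheduled run, alert throttling is the usual lever. A minimal savedsearches.conf sketch; the stanza name, recipient, and 60-minute suppression window are assumptions, and the same options are available in Splunk Web under the alert's trigger settings ("Throttle"):

    [High CPU or Memory Alert]
    search = ... your existing CPU/memory search ...
    enableSched = 1
    cron_schedule = */5 * * * *
    counttype = number of events
    relation = greater than
    quantity = 0
    alert.suppress = 1
    alert.suppress.period = 60m
    alert.suppress.fields = host
    action.email = 1
    action.email.to = oncall@example.com

With alert.suppress.fields = host, a given server triggers at most one email per hour while other servers can still alert independently.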

Why are the blocked data counts not showing up?

Good day, I have the following query where I want to show how many times a category was notified "Blocked" out of the "Detected" ones, but the "Blocked" column isn't returning any results:

    index=test_csv (source="Detected*.csv" OR source="Stopped*.csv") sourcetype="csv"
    | eval cat = if(cat=="new", "web", cat)
    | eval action = if( like( notif , "%Deny%" ) OR like( notif , "%Block%" ), "Blocked" , "Detected" )
    | stats count(action) as tcount by rep , cat
    | sort -tcount
    | stats values(rep) as "Rep" , list(tcount) as Detected , list(eval(action=="Blocked")) as Blocked by cat
    | sort -Detected
    | rename cat as categories

What am I missing? Thanks!
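One likely culprit, sketched with the field names from the query above: after the first stats, the action field no longer exists in the results, so list(eval(action=="Blocked")) has nothing to evaluate. Counting both outcomes in a single stats with the count(eval(...)) idiom avoids that:

    index=test_csv (source="Detected*.csv" OR source="Stopped*.csv") sourcetype="csv"
    | eval cat = if(cat=="new", "web", cat)
    | eval action = if(like(notif, "%Deny%") OR like(notif, "%Block%"), "Blocked", "Detected")
    | stats values(rep) as Rep, count as Detected, count(eval(action="Blocked")) as Blocked by cat
    | sort -Detected
    | rename cat as categories

Here Detected is the total event count per category; adjust if you need blocked and non-blocked counted separately.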

Why isn't the following chart query sorting by amount of threats?

Good day, I have created the following query to display the number of critical threats reported by device, but it only sorts by the Total field, not by the number of threats in each bar:

    tag::critical sourcetype=syslog
    | chart count over device by threat limit=0 usenull=f useother=f
    | addtotals
    | sort -Total
    | fields - Total
    | head 5
    | rename device as Devices

How do I achieve this? Thanks!

Cisco AMP Input Stream throwing errors

After successfully configuring the API Client ID and Key, we're getting the following error while creating an input, no matter what combinations we tried: leaving Event Types/Groups blank, or picking values from the drop-down list. From amp4e_events_input.log:

    File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/base_connection.py", line 60, in __init__
        'Expected instance of Parameters, not %r' % (parameters,))
    ValueError: Expected instance of Parameters, not

Any ideas what could cause this error? There are only a handful of settings, so we're really not sure what we're doing wrong.

Filtering WinHostMon with transforms/props so a particular service doesn't alert when stopped.

Hello, I have Splunk set up to send alerts when WinHostMon sees any service as being down. However, there is **one** service that we want to temporarily **NOT** be alerted on, so I am trying to use transforms/props to filter it out for now. This is what I have:

/etc/system/local/transforms.conf:

    [testing_service_filter]
    REGEX = MyService
    DEST_KEY = queue
    FORMAT = nullQueue

/etc/system/local/props.conf:

    [WinHostMon:Service]
    TRANSFORMS-block_service = testing_service_filter

I have also tried:

    [WinHostMon://Service]
    TRANSFORMS-block_service = testing_service_filter

    [source::service]
    sourcetype = WinHostMon
    TRANSFORMS-block_service = testing_service_filter

    [source:service]
    TRANSFORMS-block_service = testing_service_filter

and

    [WinHostMon]
    TRANSFORMS-block_service = testing_service_filter

What am I doing wrong? I have no problem filtering other sources (e.g. I have dozens of filters applied to WinEventLog and never had an issue setting them up), but no transform I try to apply to WinHostMon seems to work. What am I missing? Thanks, Jeff
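As a temporary workaround that avoids index-time filtering entirely, the exclusion can live in the alert's search instead. A sketch only; the field names and values (DisplayName, State=Stopped) are assumptions about how your WinHostMon events look, so check them against your data first:

    sourcetype=WinHostMon source=Service State=Stopped NOT DisplayName="MyService"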

License slave indexing data but showing "0%" license used on license master.

Hello, I have a license slave that is an indexer (actually two; these indexers are clustered). One indexer is currently indexing data just fine but shows "0%" on the license master under the Licensing section in the configured license pool. The other indexer in the pool (the other cluster member) is fine and reports an accurate percentage used. I looked in license_usage.log on the license master, but I do not see the indexer in question anywhere in the logs; I do see the other indexer in the pool showing a percentage used. On the indexer in question, under Licensing, all settings correctly slave it to the license master, and if I refresh, the last-communicated time increases until it hits a minute and then starts over, so I know it is checking in with the license master every minute as it should. Any reason why this indexer / license slave is not showing its percentage of license used? Thanks, mgiddens
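If it helps narrow things down, the percentages the license master shows are built from its license_usage.log, where the i field is the GUID of the reporting indexer. A quick sketch to run on the license master (the 24-hour window is arbitrary) to see which peers are actually reporting usage:

    index=_internal source=*license_usage.log* type=Usage earliest=-24h
    | stats sum(b) as bytes by i
    | eval GB = round(bytes/1024/1024/1024, 2)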

splunk-db-connect_314 does not create index

Hi, I am running a Splunk 7.2.0 single-server instance on RHEL 6.8. I wanted to get data from one of our PostgreSQL databases, so I installed splunk-db-connect_314 on this single server. During configuration I was able to see the data from the SQL query and I did not see any errors. However, for some reason an index has not been created for this data. Also, what sourcetype should I use for PostgreSQL query data? The configuration offers a few choices, but none seems appropriate, so I created a new sourcetype; that did not help get data into Splunk either. Any help will be appreciated.
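One thing to rule out: DB Connect inputs write into an index, but they do not create one; the index must already exist and be selected in the input definition. A minimal indexes.conf sketch (the index name postgres_data is an assumption; creating the index in Settings > Indexes achieves the same thing):

    [postgres_data]
    homePath   = $SPLUNK_DB/postgres_data/db
    coldPath   = $SPLUNK_DB/postgres_data/colddb
    thawedPath = $SPLUNK_DB/postgres_data/thaweddb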

Filtering WinHostMon with transforms/props so it doesn't index the status of a particular service.

Hello, I am trying to use transforms/props to filter a service from being indexed. This is what I have:

/etc/system/local/transforms.conf:

    [testing_service_filter]
    REGEX = MyService
    DEST_KEY = queue
    FORMAT = nullQueue

/etc/system/local/props.conf:

    [WinHostMon:Service]
    TRANSFORMS-block_service = testing_service_filter

I have also tried:

    [WinHostMon://Service]
    TRANSFORMS-block_service = testing_service_filter

    [source::service]
    sourcetype = WinHostMon
    TRANSFORMS-block_service = testing_service_filter

    [source:service]
    TRANSFORMS-block_service = testing_service_filter

and

    [WinHostMon]
    TRANSFORMS-block_service = testing_service_filter

What am I doing wrong? I have no problem filtering other sources (e.g. I have dozens of filters applied to WinEventLog and never had an issue setting them up), but no transform I try to apply to WinHostMon seems to work. What am I missing? Thanks
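One detail that commonly bites here, offered as a pointer rather than a confirmed diagnosis: index-time TRANSFORMS run only where data is parsed, i.e. on an indexer or heavy forwarder, never on a universal forwarder. A sketch of the nullQueue pair placed on the parsing tier, assuming the events arrive with sourcetype WinHostMon:

    # transforms.conf on the indexer or heavy forwarder
    [testing_service_filter]
    REGEX = MyService
    DEST_KEY = queue
    FORMAT = nullQueue

    # props.conf on the same instance
    [WinHostMon]
    TRANSFORMS-block_service = testing_service_filter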

Line break not working sometimes

Hello everyone! Why is the line breaking sometimes wrong? The extract below has events with correct line breaks and, in bold, events with wrong line breaks.

5/13/19 12:44:41.000 PM 17109 <13/05/2019 - 12:44:41>==================== INICIO REPORTE ====================== 17109 Query :(sta..SP_STA_MON_OBTIENE_TAREAS 17109,1) 17109 RESPUESTA DE TAREAS PENDIENTES Ret :0, nFilas : 17, nCols :1 Error:() 17109 ID. Tarea :(669153) Periodo Tarea :(201905) 17109 Nombre Tarea :(Traduccion - Conversion Archivo) Path Tarea :(/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS) 17109 NUm. Params :(11) 17109 i :14, tot :17,EJECUTAR :[/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS] 17109 Arg 0:[/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS] 17109 Arg 1:[669153] 17109 Arg 2:[201905] 17109 Arg 3:[/redbanc/sca/casillas_STI/data/0008/2019050001182737REN00203715220190511] 17109 Arg 4:[/redbanc/sca/casillas_STI/data/0008/2019050000669164REN00203715220190511] 17109 Arg 5:[/0008] 17109 Arg 6:[/redbanc/sca/casillas_STI/data/0002/2019050001182738REN00203715220190511.CTR] 17109 Arg 7:[1] 17109 Arg 8:[1] 17109 Arg 9:[1] 17109 Arg 10:[1] 17109 Arg 11:[0] 17109 Arg 12:[607:83:44:21 9102 31 yaM7372811509102] 17109 Arg 13:[70] 17109 Se libera memoria de parametros : 13/05/2019 - 12:44:41 17109 <13/05/2019 - 12:44:41>==================== FIN REPORTE =====================

5/13/19 12:44:41.000 PM 17109 <13/05/2019 - 12:44:41>==================== INICIO REPORTE ====================== 17109 Query :(sta..SP_STA_MON_OBTIENE_TAREAS 17109,1) 17109 RESPUESTA DE TAREAS PENDIENTES Ret :0, nFilas : 17, nCols :1 Error:() 17109 ID. Tarea :(669157) Periodo Tarea :(201905) 17109 Nombre Tarea :(Traduccion - Conversion Archivo) Path Tarea :(/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS) 17109 NUm. Params :(11) 17109 i :14, tot :17,EJECUTAR :[/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS] 17109 Arg 0:[/redbanc/bin_STI/bin/TAREA_CONVIERTE_ARCHIVOS] 17109 Arg 1:[669157] 17109 Arg 2:[201905] 17109 Arg 3:[/redbanc/sca/casillas_STI/data/0009/2019050001182739REN00203715220190512] 17109 Arg 4:[/redbanc/sca/casillas_STI/data/0009/2019050000669165REN00203715220190512] 17109 Arg 5:[/0009] 17109 Arg 6:[/redbanc/sca/casillas_STI/data/0000/2019050001182740REN00203715220190512.CTR] 17109 Arg 7:[1] 17109 Arg 8:[1] 17109 Arg 9:[1] 17109 Arg 10:[1] 17109 Arg 11:[0] 17109 Arg 12:[617:93:44:21 9102 31 yaM9372811509102] 17109 Arg 13:[71] 17109 Se libera memoria de parametros : 13/05/2019 - 12:44:41 17109 <13/05/2019 - 12:44:41>==================== FIN REPORTE =====================

**5/13/19 12:44:41.000 PM 17109 <13/05/2019 - 12:44:41>==================== INICIO REPORTE ====================== 17109 Query :(sta..SP_STA_MON_OBTIENE_TAREAS 17109,1)**

**5/13/19 12:44:41.000 PM 17109 RESPUESTA DE TAREAS PENDIENTES Ret :0, nFilas : 1, nCols :1 Error:() 17109 NO HAY TAREA 17109 <13/05/2019 - 12:44:41>==================== FIN REPORTE =====================**

5/13/19 12:44:44.000 PM 17109 <13/05/2019 - 12:44:44>==================== INICIO REPORTE ====================== 17109 Query :(sta..SP_STA_MON_OBTIENE_TAREAS 17109,1) 17109 RESPUESTA DE TAREAS PENDIENTES Ret :0, nFilas : 11, nCols :1 Error:() 17109 ID. Tarea :(669127) Periodo Tarea :(201905) 17109 Nombre Tarea :(Generacion de Archivo Aviso) Path Tarea :(/redbanc/bin_STI/bin/TAREA_GENERA_AVI) 17109 NUm. Params :(5) 17109 i :8, tot :11,EJECUTAR :[/redbanc/bin_STI/bin/TAREA_GENERA_AVI] 17109 Arg 0:[/redbanc/bin_STI/bin/TAREA_GENERA_AVI] 17109 Arg 1:[669127] 17109 Arg 2:[201905] 17109 Arg 3:[/redbanc/sca/casillas_STI/data/0005/2019050000669127REN00202715220190512.AVI] 17109 Arg 4:[REN00202715220190512] 17109 Arg 5:[cca777p] 17109 Arg 6:[/0005] 17109 Arg 7:[LA TRANSMISIÓN DEL ARCHIVOS REN00202715220190512 HA LLEGADO SATISFACTORIAMENTE.] 17109 Se libera memoria de parametros : 13/05/2019 - 12:44:44 17109 <13/05/2019 - 12:44:44>==================== FIN REPORTE =====================

props.conf:

    [Scheduler]
    NO_BINARY_CHECK = true
    SHOULD_LINEMERGE = true
    category = Custom
    pulldown_type = 1
    disabled = false
    BREAK_ONLY_BEFORE = INICIO REPORTE
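A common fix for intermittent merging like this (a sketch; the regex assumes every event begins with the <dd/mm/yyyy - hh:mm:ss>==== INICIO REPORTE header shown above) is to replace SHOULD_LINEMERGE/BREAK_ONLY_BEFORE with an explicit LINE_BREAKER, which breaks deterministically and is cheaper at index time:

    [Scheduler]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)\d+\s+<\d{2}/\d{2}/\d{4} - \d{2}:\d{2}:\d{2}>=+\s*INICIO REPORTE
    TIME_PREFIX = <
    TIME_FORMAT = %d/%m/%Y - %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 25
    NO_BINARY_CHECK = true
    category = Custom
    pulldown_type = 1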

how to extract a change_link from _raw depending on the datapoint a user click on in a line graph?

Hi, I insert data in the JSON format below and create a line graph over _time; each circle marker in the line graph represents the count field over _time. What I am looking for: depending on the data point the user clicks, I want to pull the URL from change_link. How do I set a token based on what the user clicked, extract change_link from the nested JSON, and then use that token as the link target?

_raw =

    {
      "Analyze": {
        "P5": {"count": "", "change_link": ""}
      },
      "timestamp": int(time.time())  # Can also use datetime.datetime.now().isoformat()
    }

[screenshot: line graph with circle markers]

Here is my search:

    index=indexname sourcetype=sourcetype
    | chart values(Analyze.P5.count) as P5 over _time@w0now1
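For the extraction half of the question, a sketch using the JSON paths shown above: spath pulls change_link out as an ordinary field.

    index=indexname sourcetype=sourcetype
    | spath path=Analyze.P5.count output=P5
    | spath path=Analyze.P5.change_link output=change_link
    | chart values(P5) as P5 over _time

For the drilldown half: a chart click only exposes the clicked x/y values ($click.value$ and $click.value2$), not arbitrary fields, so the usual patterns are either a small token-driven secondary search that looks up change_link for the clicked _time, or rendering the data as a table, where $row.change_link$ is available to the drilldown directly.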

how to break fields into separate lines which were concatenated

Hello, my situation is a bit different. I have a few columns: code, Week, rfs, decision, and new_decision. In my search I have concatenated all of these fields so they display in a column chart: when I hover over a column, it should show the week, code, rfs, and decision. For this I use a search like the one below:

    | eval All_details=tostring("Week: ")+'WEEK'+tostring(", CODE: ")+'CODE'+tostring(", Current Week RFS3: ")+'RFS3'+tostring(", Previous decision: ")+'decision'+tostring(", Current decision: ")+'new_decision'
    | stats count by All_details, WEEK

When I hover over a column in the visualization, it shows all the details on a single line, but I want each field on its own line: week on one line, code on the next, and so on. Is there any way to show the field values on separate lines? Please help me with this. Thanks, Narmada.