Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Azure AD SAML Group Claims

I am trying to get Splunk Enterprise to use SAML authentication against Azure AD. I have followed the steps outlined on the "Configure SSO with Azure AD or AD FS as your Identity Provider" documentation page. I have also consulted the "Configuring Microsoft's Azure SAML Single Sign On (SSO) with Splunk Cloud – Using the 'New' Azure Portal" blog post, even though it is for Splunk Cloud and I am using Splunk on-prem. The problem I am running into is that when I try to log in, I get the error "SAML response does not contain group information". Using a SAML browser plugin, I can see that Azure is not sending the group information in the SAML response. The Azure AD documentation on customizing claims issued in the SAML token states that Azure AD will NOT send the group claims. If Azure AD will not send the group claims, is there any way for Splunk to do the role mapping? Has anyone else run into this problem with Azure AD not providing group claims in the SAML response?
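For anyone hitting the same wall: once the IdP is actually emitting a groups claim (in Azure AD this is enabled under the enterprise application's token configuration), the Splunk-side mapping lives in `authentication.conf`. A minimal sketch, assuming the SAML stanza is named `SAML` and the group names shown are placeholders — note that Azure AD typically emits group *object IDs* rather than display names, so the mapped values may need to be those GUIDs:

```
# authentication.conf (example; group values are placeholders)
[roleMap_SAML]
admin = Splunk-Admins
user = Splunk-Users
```

The stanza name follows the documented `roleMap_<authSettings-stanza>` pattern.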

How do I get a full listing of indexes and gigabyte ingest?

I've been using the following search to get a daily (24-hour) count of ingestion over 30 days, but I'm only getting the top 10 indexes. How can I get the others beyond the top 10?

    index=_internal source=*license_usage.log type=Usage idx=*
    | eval GB = b/1024/1024/1024
    | timechart span=1d useother=0 sum(GB) by idx
    | rename idx as Index, sum(GB) as Gigabyte
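For what it's worth, the ten-series cap is `timechart`'s default series limit, not a property of the license data. A sketch of the same search with the limit lifted (`limit=0` keeps every series; `useother=0` already suppresses the OTHER bucket):

```
index=_internal source=*license_usage.log type=Usage idx=*
| eval GB = b/1024/1024/1024
| timechart span=1d limit=0 useother=0 sum(GB) by idx
```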

Need the correct regular expression for my rex command

Here is my raw data:

    {"line":"level=debug t=\"2019-01-29T19:47:20.971Z\" rt=1 method=GET path=\"/service/health?apikey=DEFAULT\" sc=200 dma=999 apikey=DEFAULT amzn_trace_id=unknown enabledFeatures=recommendations,upcomingSearch,popularityQueriesPlatformSpecific,availabilityTimes,avoidDefaultQuery,useFavoritesExternalSchemaForD2C,useFavoritesV2ForFavoritesFilter,endCardRecommendations,cmsAuthFallback os=1 rid=\"dpp-proxy-draft-db0ae210-2baf-42e7-bd88-1379d3efb157\" mode=draft","source":"stderr","tag":"ecs-**dev_dpp-proxy-draft_v1_blue**-798-dev-service-dpp-proxy-draft-96eda4add3ca82ec5600/8c19f5d7ff4b","attrs":{"SERVICE_NAME":"dpp-proxy-draft","SERVICE_TAGS":"dpp-proxy","SERVICE_VERSION":"v1","com.amazonaws.ecs.task-arn":"arn:aws:ecs:us-west-2:776609208984:task/497f2b51-9bb7-4fb1-bce9-4058561bb2ad"}}

I hope to extract the highlighted portion (**dev_dpp-proxy-draft_v1_blue**) from the `tag` field. Please help!
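A sketch of one possible `rex`, assuming the target is always the segment of the `tag` value between the leading `ecs-` and the following `-<number>-`; the capture-group name `deployment` is just an illustration:

```
... | rex "\"tag\":\"ecs-(?<deployment>.+?)-\d+-"
```

Since the event is JSON, an alternative is to pull the field out first with `| spath input=_raw path=tag` and then run the `rex` against `tag` instead of `_raw`.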

Is there such a configuration as multiple indexers in an enterprise environment that is not considered a cluster?

I have an environment with three search heads, three indexers, one license server (which also acts as the deployer), and one deployment server (distributing forwarder configurations, i.e. inputs.conf). This environment is set up with a replication factor of 2. Would you say the above configuration is distributed as well as clustered? Also, does "cluster" always refer to the indexer group, or is it correct to say "we have a search head cluster" as well as "we have an indexer cluster"? Thanks, Gary

Can you help me use regex to extract fields that contain 'ssd'?

Hello Splunk, I have the following raw log lines:

    1 2019-01-29T15:44:41.184068+00:00 xxx vpxd 4566 - - Event [5650552] [1-1] [2019-01-29T15:44:41.182223Z] [vim.event.VmMigratedEvent] [info] [] [x - x] [5650175] [Migration of virtual machine vm1 from host1, ds_SSD_001 to host1, ds_SSD_002 completed]

I'm trying to find all log entries where the two fields containing *SSD* (ds_SSD_001, ds_SSD_002, ... ds_SSD_00x) are different, which basically means a VM has moved from one datastore to another. I figured I should use rex to extract the two occurrences of *SSD* and compare them with `| where field1 != field2`, but I can't manage to work out the regex to extract these fields (I'm very new to regex...).
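A sketch of a possible extraction, assuming the message always has the shape `from <host>, <datastore> to <host>, <datastore>` as in the sample line; the field names `src_ds` and `dst_ds` are made up for the example:

```
... | rex "from [^,]+, (?<src_ds>\S*SSD\S*) to [^,]+, (?<dst_ds>\S*SSD\S*)"
| where src_ds != dst_ds
```

Against the sample event this captures `ds_SSD_001` and `ds_SSD_002` and keeps only events where the two datastores differ.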

Splunk is not displaying the latest time of lookup updated

Splunk is not displaying the latest updated time of a lookup. Running

    | rest /servicesNS/-/-/data/lookup-table-files | search title=* | table title updated

the `updated` column shows epoch zero for my lookup:

    title     updated
    test.csv  1969-12-31T18:00:00-06:00

Azure Monitor Metrics in event hub but not appearing in Splunk

We configured the Azure Monitor Metrics input and configured diagnostics to send metrics (and logs) to our event hub. We are only seeing two amm_resourceTypes when there should be more (e.g. Load Balancer). Using Service Bus Explorer, we can see the expected metrics data in the event hub. After reading through the docs on GitHub, I do not see any additional configuration required to pull other Azure resource type metrics. Should the add-on automatically handle all/most resource types? We're using add-on version 1.3.1.

Certificate Transparency Log add-on for Splunk not working as expected

Has anyone been able to get this add-on to work? I'm striking out here. I configured the add-on exactly per the documentation, and this is what I'm getting for every input I configure. ![alt text][1] I can browse to https://ct.googleapis.com/logs/argon2018/ct/v1/get-sth, if that means anything. ![alt text][2] [1]: /storage/temp/264684-capture3.jpg [2]: /storage/temp/264685-capture4.jpg

Help with a pie chart search?

All, I have a relatively simple search, but I'm tripping over it for some reason. I want a pie chart of all hosts in my company: any host with package="telnet*" in red, and those without in blue. Any idea how I'd get that search working?
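One possible shape for this, assuming the events carry a `package` field; the index is a placeholder and the category labels are made up:

```
index=* 
| eval telnet=if(like(package, "telnet%"), "telnet", "no telnet")
| stats dc(host) as hosts by telnet
```

Rendered as a pie chart this gives two slices; the colors can then be pinned in the panel XML with the `charting.fieldColors` option, e.g. `{"telnet": 0xFF0000, "no telnet": 0x0000FF}`.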

Data Not Getting Extracted Correctly as per CSV

We have a requirement to ingest a CSV file from a client machine. The CSV file has headers in place, something like:

    Received SenderAddress RecipientAddress Subject Status FromIP Size MessageId
    1/30/2019 4:29 xxxx@gmail.com yyyy@gmail.com Test Message Delivered 1.x.x.x 1234 xxx.gmail.com

So I have written inputs.conf as below:

    [monitor://X:\Test\*.csv]
    index = test
    sourcetype = test_logs
    crcSalt =
    initCrcLength = 4999
    disabled = 0

I have ingested this into Splunk, but the fields are not being extracted the way they appear in Excel. Do we need props and transforms for this? If yes, what should props.conf and transforms.conf contain, and where do I need to place them? Also, the log file is updating with a delay in Splunk: new logs are already on the client machine but have still not reached Splunk. Kindly help with this request.
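A minimal sketch of header-based CSV extraction, assuming the sourcetype stays `test_logs`. With `INDEXED_EXTRACTIONS`, the props.conf needs to live on the forwarder monitoring the file, and no transforms.conf is required for this case:

```
# props.conf (on the universal forwarder reading the CSV)
[test_logs]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = Received
```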

What does "notracking@example.com" mean in Splunk Add-on for Microsoft Cloud Services?

Hi all, I am currently collecting the ThreatIntelligence workload using the Splunk Add-on for Microsoft Cloud Services. While reviewing the collected logs, I saw a log whose UserId field is "**notracking@example.com**", but I do not know what it means. I want to confirm whether "**notracking@example.com**" is provided by Office 365 or is generated by the add-on. The RecordType for that log is 41, and the Office 365 Management Activity schema documents do not cover this. The event:

    {
      AppName: Mail
      AppVersion: 0.0.0000
      CreationTime: 2019-01-28T22:37:20
      Id: #blind#
      OS: Win32
      Operation: TIUrlClickData
      OrganizationId: #blind#
      RecordType: 41
      SourceId: #blind#
      SourceWorkload: Mailflow
      TimeOfClick: 2019-01-28T22:34:40
      Url: http://abcde.com/?61o1EX=IGCQlSQRYNiGBrD0ALmQHT3LUw
      UrlClickAction: 2
      UserId: notracking@example.com
      UserIp: 10.10.10.10
      UserKey: ThreatIntel
      UserType: 4
      Version: 1
      Workload: ThreatIntelligence
    }

Thank you.

Create a macro from a token value using JS, and use that macro in different dashboards

Hi dudes, I have run a query in one dashboard and created a token based on the result. Now I want to create a macro from that token value using JavaScript or jQuery. **Note:** I have to be able to use that macro in different dashboards as well. Thanks in advance.

Website regex

Is it possible to use a regex in the configuration of the websites, especially if my logs are on multiple servers? Can I use something like this: vlp05([4-5]+)? This example doesn't work, so I am wondering what might. Thanks.

SNMP Splunk MA App for Netcool is not sending traps

We have installed the "SNMP Splunk MA App for Netcool" on a new search head and linked the search head to the indexers. An alert with the trigger action "Netcool Custom Modular Alert" has been created and all fields have been filled in. We see a log entry in /opt/splunk/var/log/splunk/netcool_custom_modular_alert.log:

    2019-01-28 07:59:02,230 INFO START
    2019-01-28 07:59:02,230 INFO splunkapp:search, splunksearch:test_xxxxxx, snmp_serverip:172.22.171.164, snmp_port:162, snmp_community:xxxxx, snmp_hostname:, snmp_alertmessage:More than 1 release cause 5XX in last 30 minutes for customer xxxx xxxxx, snmp_severity:5, splunk_escalation:xxxxxxxx, splunk_payload:{u'configuration': {u'hostname': u'', u'enterpriseSNMPSpecificObjectID': u'9', u'customtext': u'', u'AlertKey': u'123456789', u'community': u'public', u'alertmessage': u'More than 1 release cause 5XX in last 30 minutes for customer xxxxxxxxxt', u'enterpriseSNMPObjectID': u'1.2.3.4.5.6.7.8', u'enterpriseSNMPSpecificTrapID': u'10', u'serverip': u'172.22.171.164:162', u'escalation': u'xxxxxxxxx', u'severity': u'5'}, u'results_link': u'http://xxxxxxxxx:8000/app/search/search?q=%7Cloadjob%20scheduler__admin__search__RMD510cd368a33d67d83_at_1548658740_9899%20%7C%20head%201%20%7C%20tail%201&earliest=0&latest=now', u'server_uri': u'https://127.0.0.1:8089', u'results_file': u'/opt/splunk/var/run/splunk/dispatch/scheduler__admin__search__RMD510cd368a33d67d83_at_1548658740_9899/per_result_alert/tmp_0.csv.gz', u'result': {u'count(Q21_sip_dest_respcode)': u'2'}, u'sid': u'scheduler__admin__search__RMD510cd368a33d67d83_at_1548658740_9899', u'search_name': u'test_xxxxxxxl', u'server_host': u'xxxxxxxxxxxxx', u'search_uri': u'/servicesNS/nobody/search/saved/searches/test_xxxxxxx', u'session_key': u'd0X5lh5dXc7S9uTW^82E4eQ1l9z6jQpVjKhTm3xczVgILSEZjkVRvHf6z2QXOvv9MR197IjzD5_50uJ0anuIwvZwuYFGTcSmBuuI^L9QsYNPmwZKFplYgJPy8VbVPC^i1W82Gfvt8FY', u'app': u'search', u'owner': u'admin'}, splunk_customtext:
    2019-01-28 07:59:02,272 INFO STOP

We receive nothing at the destination IP 172.22.171.164:162, and nothing is seen on the wire with tcpdump either (other SNMP-sending processes on this server do work and are seen on the wire):

    tcpdump -i any -s 0 host 172.22.171.164 and port 162

Custom Alert Action UI

Hello! I'm trying to append the query itself (the search from which the user creates the alert) to the alert UI, in order to send it to another dashboard via a link from the UI. What I mean is: after a user runs a search and tries to save it as an alert, the alert UI should contain a link to another dashboard carrying the search string. I'm aware of the $search$ token, but it doesn't seem to work here. Is this possible?

Splunk Instrumentation error

Hi all, I keep getting the following error in my logs: `message from "python /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py" HTTPSConnectionPool(host='e1345286.api.splkmobile.com', port=443): Max retries exceeded with url: /1.0/e1345286/57d8a3f1-7eb3-5a4f-abd0-a902083af286/100/0?hash=none (Caused by : [Errno 104] Connection reset by peer)` We are not using any MINT connections, hence I am a bit flustered by the 'splkmobile' URL... Can anyone point me in the right direction to fix this?

forwarding logs to third party system

Hello all, I want to check whether the Splunk universal forwarder (UF) can be used to forward collected raw data to an analytics tool other than Splunk, i.e. a third-party tool. I have read in some documents that this can be achieved from a UF or HF. Can you help me understand which third-party tools I could use to test this? Warm regards, Manish
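For reference, routing to a non-Splunk system is configured in outputs.conf on the forwarder. A minimal sketch, assuming a third-party receiver listening for plain TCP on `thirdparty.example.com:514` (both placeholders):

```
# outputs.conf on the forwarder
[tcpout]
defaultGroup = third_party

[tcpout:third_party]
server = thirdparty.example.com:514
# send raw data rather than Splunk's cooked wire protocol
sendCookedData = false
```

Any tool that accepts a raw TCP or syslog feed can then be used as the receiver for testing.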

Can Splunk convert the time from UTC to GMT+03:00?

Hi Splunkers, I have Kaspersky logs being sent to Splunk. They used to be in CEF format, and when the format was changed to syslog an issue appeared: we now receive syslog from Kaspersky in real time, but the timestamp is in the UTC time zone (GMT+00:00). The timestamp highlighted in red is our time zone (GMT+03:00), while the Kaspersky syslog timestamp is UTC (see the timestamp highlighted in blue): there is a three-hour difference from our time, as you can see in the screenshot from the syslog server. Is there a way for Splunk to convert the (GMT+00:00) time to (GMT+03:00)? ![alt text][1] Thank you. [1]: /storage/temp/263777-syslog.png
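If the events carry UTC timestamps with no zone indicator, one common fix is to declare their zone at parse time so Splunk indexes them correctly; display then follows each user's time-zone preference in their account settings. A sketch, assuming the sourcetype is named `kaspersky:syslog` (a placeholder):

```
# props.conf on the indexer or heavy forwarder parsing the feed
[kaspersky:syslog]
TZ = UTC
```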

set earliest and latest time stamp

Hi all, I want to set fixed values for earliest and latest: earliest should be 6 PM and latest should be 7 AM the next day. How can I do this? TIA
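One way to express that window with snap-to time modifiers, assuming "6 PM" means yesterday evening relative to when the search runs (`@d` snaps to midnight, then hours are added on top):

```
... earliest=-1d@d+18h latest=@d+7h
```

Here `-1d@d+18h` is 18:00 yesterday and `@d+7h` is 07:00 today; for a scheduled search the same strings can go in `dispatch.earliest_time` / `dispatch.latest_time` in savedsearches.conf.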

