Channel: Questions in topic: "splunk-enterprise"

How can I connect my ionic app to splunk enterprise server?

I am trying to connect my Ionic app to a Splunk Enterprise server, but I don't know how to do this. I installed the JavaScript SDK for Splunk in my Ionic project and then added a script to connect, but it returns 404 Not Found. I need help, please.
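For reference, a minimal connection sketch using the Splunk JavaScript SDK (the splunk-sdk npm package). The host, credentials, and version below are placeholders, and it assumes the app can reach splunkd's management port (8089 by default); a 404 is often a sign the request is going to the web port (8000) or to a wrong path instead.

```javascript
// Hypothetical connection sketch using the splunk-sdk package (npm install splunk-sdk).
// Host, port, and credentials are placeholders -- replace with your own values.
var splunkjs = require("splunk-sdk");

var service = new splunkjs.Service({
    scheme: "https",
    host: "splunk.example.com",   // assumption: your Splunk Enterprise host
    port: "8089",                 // management port, not the web UI port (8000)
    username: "admin",
    password: "changeme",
    version: "7.2"
});

// Log in first; only then run searches or read endpoints.
service.login(function (err, success) {
    if (err || !success) {
        console.log("Login failed:", err);
        return;
    }
    // Example: run a one-shot search once authenticated.
    service.oneshotSearch("search index=_internal | head 5", {}, function (searchErr, results) {
        if (searchErr) {
            console.log("Search failed:", searchErr);
            return;
        }
        console.log(results.rows);
    });
});
```

For a browser-based Ionic app, the Splunk server also has to allow cross-origin requests (the crossOriginSharingPolicy setting under [httpServer] in server.conf; check the spec file for your version), otherwise calls can fail before they ever reach splunkd.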

Approaches to manage logging level of Splunk Universal Forwarder

With changes in Splunk pricing coming faster than our ability to increase funding, our team is stuck in a maintenance mode where we cannot on-board a new data source without freeing up license/storage by retiring or tuning existing data sources. One of the more superfluous consumers of storage is the splunkd INFO-level logging coming from our many, many universal forwarders on client systems. I would like to change the default logging level for most splunkd components from INFO to WARN or above. For the time being, I plan to apply this change through a [script-based input][1] that runs each time Splunk restarts. **Does anyone have a method to manage splunkd logging levels more natively, via a Splunk app?** **Which splunkd log categories have you quieted down, if any?** [1]: https://github.com/dstaulcu/SplunkTools/blob/master/LogConfigMgr-v2.ps1
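For context, a minimal sketch of the file such a script ends up managing, with illustrative category names only (copy the real ones from your own $SPLUNK_HOME/etc/log.cfg): splunkd reads log.cfg and honours overrides placed in log-local.cfg. Because that file lives outside etc/apps, a deployment-server app cannot drop it in place directly, which is largely why a scripted workaround gets used.

```
# $SPLUNK_HOME/etc/log-local.cfg -- overrides log.cfg and survives upgrades better than editing log.cfg itself
# Category names below are examples; take the exact ones you want to quiet from your log.cfg.
category.TailingProcessor=WARN
category.WatchedFile=WARN
category.ExecProcessor=WARN
```

There is also a CLI form (`splunk set log-level <Category> -level WARN`), but as far as I know that change is runtime-only and does not persist across restarts.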

Change default search time for pivots from all time to 24hrs in splunk 7.2.1

Currently, when building a pivot table, the default time range is set to "All Time". Is it possible to set it to some other value? I've tried overriding it by adding the following entries to $SPLUNK_HOME/etc/system/local/ui-prefs.conf, but they have no effect. We're running Splunk 7.2.1.

```
[pivot]
dispatch.earliest_time = -7d
dispatch.latest_time = now

[search]
dispatch.earliest_time = -7d
dispatch.latest_time = now

[default]
dispatch.earliest_time = -7d
dispatch.latest_time = now
```
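One thing worth checking before assuming the setting is ignored (a debugging sketch, not a guaranteed fix): ui-prefs.conf is layered per app, so a copy under an app's local directory can win over system/local. btool shows which file each effective setting comes from.

```
# Show the effective ui-prefs configuration and the file each setting comes from
$SPLUNK_HOME/bin/splunk btool ui-prefs list --debug
```

If the pivot UI is opened from a specific app (for example, search), placing the same stanza in that app's local/ui-prefs.conf is sometimes what it takes rather than system/local.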

Threat PCAP configuration

I need guidance on how to configure Palo Alto Panorama / firewalls to allow requests for threat PCAPs in the Palo Alto Splunk app. I submitted a TAC case to PA asking if Splunk only needed to communicate with Panorama, and it seems that is not the case because these are file exports. So do I need to configure API access on each firewall, and ensure network connectivity from Splunk to each of them, in order for the Splunk app to retrieve PCAPs? Please see my original question below and the PA TAC response.

--------

Question: "We are configuring Panorama to accept API calls from Splunk in order to export threat PCAP files. We need to know if Splunk only needs to communicate with Panorama to download those files, or if it needs to communicate with each individual firewall managed by Panorama."

TAC response: I'll summarize some information from our XML API guide as well as provide a link below. In short, it is not possible to use Panorama to export threat packet captures. The only API calls that can be redirected from a Panorama to a firewall are operational commands (type=op) using the target parameter. Unfortunately the threat packet capture export is an export command (type=export) and so it couldn't be redirected. Check out page 32 of the PDF you can export here, which also mentions that only operational commands can be redirected: https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-panorama-api/get-started-with-the-pan-os-xml-api.html

Personally, I've dealt with this in the past in my own scripts by pivoting on the serial number that is returned within a threat log entry; I then get the IP using the "show devices connected" operational command on the Panorama, followed by doing a query directly to the firewall's IP address with the export command. I can't comment on whether something like this would be possible within Splunk's engine as opposed to something written in a separate script. Should you have an SME for the Splunk side who's familiar with how they do API calls, I'd refer them to the XML API guide above, or I'd also be happy to discuss further if they have any other ideas as far as options.

How do I calculate time between these values?

I have an event that has two fields, PROGRESS_START and PROGRESS_END. Both of these fields contain multiple values: one PROGRESS_START and one PROGRESS_END for each navigation a user makes. If a user navigates 8 times, there will be 8 values inside PROGRESS_START and 8 values inside PROGRESS_END. PROGRESS_START means the user has clicked to navigate to a new screen, and its value is the epoch time at which that navigation starts. PROGRESS_END is when loading is complete on the next screen, and its value is the epoch time at which the loading from the navigation ends.

Here is an example of what a search looks like to view the PROGRESS_START and PROGRESS_END fields:

```
index="abc" sourcetype="xyz" event=timemetrics userid=123
| spath output=progress_start path="metrics.progressMetrics{}.events{}.PROGRESS_START"
| spath output=progress_end path="metrics.progressMetrics{}.events{}.PROGRESS_END"
| table progress_start, progress_end
```

Here is the output from that search:

```
progress_start   progress_end
1573487643709    1573487718303
1573487722305    1573487908044
1573487955841    1573487957176
1573487979760    1573487981268
1573488015745    1573488018744
1573488060305    1573488061909
1573488078606    1573488079705
1573488093558    1573488095764
1573488109858    1573488111632
1573488122452    1573488123971
```

What I'd like to know is: how do I use these epoch time values to determine how long a user spent on a given screen before navigating again? I think I would need to subtract a previous PROGRESS_END value from the following PROGRESS_START value.

**Some bonus information:** With help from a user on this site, I was able to put a search together to calculate the loading time for each navigation by subtracting each PROGRESS_START value from the corresponding PROGRESS_END value. That difference is how long the user was looking at a loading screen. This is an example of what that search looks like:

```
index="abc" sourcetype="xyz" event=timemetrics userid=123
| spath output=progress_start path="metrics.progressMetrics{}.events{}.PROGRESS_START"
| spath output=progress_end path="metrics.progressMetrics{}.events{}.PROGRESS_END"
| eval timeremainder = mvzip(progress_end, progress_start, ".")
| mvexpand timeremainder
| rex field=timeremainder "(?<progress_end>.*)\.(?<progress_start>.*)"
| eval loading_time = (progress_end - progress_start) / 1000
| table progress_start, progress_end, loading_time
```

Here is what the output ends up being:

```
progress_start   progress_end     loading_time
1573487643709    1573487718303    74.594
1573487722305    1573487908044    185.739
1573487955841    1573487957176    1.335
1573487979760    1573487981268    1.508
1573488015745    1573488018744    2.999
1573488060305    1573488061909    1.604
1573488078606    1573488079705    1.099
1573488093558    1573488095764    2.206
1573488109858    1573488111632    1.774
1573488122452    1573488123971    1.519
```

In summary: I've been able to use these values to determine loading times, and now I could use any advice or suggestions on how to leverage this same information to see the time spent between navigations. I hope I've articulated this question well enough. Feel free to ask if you have any questions or need more clarification. Thank you for any information you can share to help me solve this.
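A sketch of one way to get the on-screen time, building on the expanded rows from the search above: once each (progress_start, progress_end) pair is its own row, streamstats can carry the previous row's progress_end forward so the current row can subtract it. The index, sourcetype, and field paths are taken from the question; verify that the row order really matches navigation order in your data.

```
index="abc" sourcetype="xyz" event=timemetrics userid=123
| spath output=progress_start path="metrics.progressMetrics{}.events{}.PROGRESS_START"
| spath output=progress_end path="metrics.progressMetrics{}.events{}.PROGRESS_END"
| eval timeremainder = mvzip(progress_end, progress_start, ".")
| mvexpand timeremainder
| rex field=timeremainder "(?<progress_end>.*)\.(?<progress_start>.*)"
| streamstats current=f window=1 last(progress_end) as prev_end
| eval screen_time = (progress_start - prev_end) / 1000
| eval loading_time = (progress_end - progress_start) / 1000
| table progress_start, progress_end, loading_time, screen_time
```

The first row has no previous PROGRESS_END, so its screen_time is null; every later row shows the seconds between one screen finishing its load and the user clicking to leave it.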

Creating a Conditional Field using Field Extraction

Hey everyone, I am new to Splunk, and I need to create a new sourcetype along with field extractions. I am using regular expressions in props.conf and so far it is working well. For the next field, however, the field name will depend on the values of two other fields that I have already successfully extracted. Hence my question: is it possible to have a field that is only extracted depending on the values of other fields, and not extracted at all if those conditions aren't met? For example, say we have two fields with these values in the logs:

- If field_a = 1 AND field_b = a, then extract a field called c1 (which equals 1).
- If field_a = 1 AND field_b != b, then do not extract anything.
- If field_a = 4 AND field_b = b, then extract a field called c2 (which equals 4).

I know that this is easy to do in the Search app on the web using SPL, but I want to set this up in props.conf so the field is readily available while searching. Also, if this is possible, it would be a cool trick to learn. Thank you.
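A sketch of how this can be approximated at search time, assuming the raw events literally contain text like `field_a=1 field_b=a` (adjust the regexes to your real log layout): a search-time REPORT extraction cannot reference other already-extracted fields, but its regex can require the surrounding raw text to contain the right field_a/field_b values before the capture group fires, which gives the same conditional effect. The stanza and field names below are placeholders.

```
# props.conf (assumed sourcetype name)
[my_custom_sourcetype]
REPORT-conditional_fields = extract_c1, extract_c2

# transforms.conf
[extract_c1]
# only matches when field_a is 1 AND field_b is a; captures the value of field_a as c1
REGEX = field_a=(1)\s+field_b=a\b
FORMAT = c1::$1

[extract_c2]
# only matches when field_a is 4 AND field_b is b; captures the value of field_a as c2
REGEX = field_a=(4)\s+field_b=b\b
FORMAT = c2::$1
```

If the conditions don't match, the regex simply never fires and no field is created, which is the behaviour you describe; the trade-off is that the condition has to be expressible against _raw rather than against the extracted fields themselves.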

DB Connect, MSSQL Availability Group, Read Only Intent.

I am trying to connect DB Connect 3.1.2 to an off-node in a 3-host MS SQL cluster. The reason for this is to take load off of the live cluster node. The DBA has assured me that the Availability Group flag is set to allow read-only connections to this node, and I believe him because the error message changed from the last time I tried to connect. From this:

> The target database ('DatabaseName') is in an availability group and is currently not accessible for queries. Either data movement is suspended or the availability group replica is not enabled for read access......

To this:

> The target database ('DatabaseName') is in an availability group and is currently accessible for connections when the application intent is set to read only. For more information about application intent, see SQL Server Books Online."

The DBA has asked that I simply insert -Kreadonly into the connection string. The closest thing I can see to do that in DB Connect is the little check box for "read only". This does not work. I have also tried editing the JDBC URL to include `;readonly=true` with no success. Anyone run into this before?
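A sketch of the JDBC URL change that usually corresponds to the DBA's `-Kreadonly` request, assuming the connection uses the Microsoft SQL Server JDBC driver (the property does not exist under this name in older jTDS drivers); hostname, port, and database name below are placeholders:

```
jdbc:sqlserver://aglistener.example.com:1433;databaseName=DatabaseName;ApplicationIntent=ReadOnly;multiSubnetFailover=true
```

ApplicationIntent=ReadOnly is what routes the connection to a read-intent secondary in an Availability Group; multiSubnetFailover=true is commonly paired with AG listeners but is optional. As far as I can tell, the "read only" checkbox in DB Connect and `readonly=true` operate at a different layer than the driver's application intent, which would match the behaviour you're seeing.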

Website Monitoring: Different alerts for different websites

I am literally a couple of hours into using Splunk Free, so please bear with me. We currently have multiple websites that we need up-time reports on, so I downloaded the Website Monitoring application. This seems to be working like a charm, but I want to be able to send email alerts to predefined groups depending on which website generated the failure. I see that there is a default alert which I am using to send emails; however, I want the distribution group to be different for each site. I am assuming I would need to set up different alerts for each specific site failure? Is this possible in Splunk, and how would I do this? Thanks!
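Yes, that's the usual approach: one saved alert per site (or per group of sites), each with its own recipient list in its email action. A sketch of what each alert's search could look like; the index, sourcetype (web_ping), and field names (url, response_code) are assumptions about how the Website Monitoring app stores its results, so verify them against your own events first.

```
index=main sourcetype=web_ping url="https://www.site-a.example.com*" NOT response_code=2*
```

Save one copy per site, change the url filter and the email recipients each time, and schedule them on whatever cadence you need.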

Extracting filename from verbose message

I am trying to write a Splunk query to create a dashboard. I have a message from which I need a particular part as a filename: "**Copying the file : /mount/logs/output/fileName.xml to : /mount/splunk/fileName.xml.pgp is started**". I need the part **fileName.xml.pgp** from the above message. How do I achieve this? Thanks
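A sketch using rex, assuming the message is in _raw and always follows the "Copying the file : ... to : ... is started" pattern shown above; `filename` is just the field name chosen here, and the index/sourcetype are placeholders.

```
index=your_index sourcetype=your_sourcetype "Copying the file"
| rex field=_raw "to\s*:\s*\S*/(?<filename>[^/\s]+)\s+is started"
| table _time, filename
```

The `\S*/` part skips the directory portion of the destination path, so only the final component (fileName.xml.pgp in the example) ends up in filename.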

Chart: only display days (or day & hour) when events exist

Hi, how do I display in a chart only the days (or day & hour) when an event (in my case, a speedtest result) is available? I do not need count, avg, etc. In the community I found:

```
| timechart fixedrange=false count
```

but, because I don't need/use "count by XY", this is useless for me. Also, there can be more than one result per day. Example data:

```
_time                 field_speedUp   field_speed_Down
2019/11/13 14:35:09   800             400
2019/11/13 14:37:28   300             200
```

Thanks for helping ;-)
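A sketch of one approach, using the field names from the example above (index and sourcetype are placeholders): unlike timechart, stats only emits rows for time buckets that actually contain events, so days or hours with no speedtest results simply do not appear.

```
index=your_index sourcetype=your_speedtest_sourcetype
| bin _time span=1h
| stats values(field_speedUp) as speedUp, values(field_speed_Down) as speedDown by _time
```

Use span=1d for per-day buckets; values() keeps every result when a bucket contains more than one. If you just want each individual measurement listed, skip the bin/stats and use `| table _time field_speedUp field_speed_Down | sort _time` instead.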

Wondering about success with TA for Defender ATP hunting API

Has anyone successfully used this app?

How to use inputlookup with an index

Hello everyone. Below is my search (not very clean, I'm a novice :) ). I have an idea, though: I have grouped all the hosts in a CSV file, and I would like to get the same result as with my current search. How do I proceed? I know that to add a CSV file to a search you use |inputlookup "filename", but what comes next in my case? How can I add the index? Thanks, everyone. [1]: /storage/temp/275140-image.png
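A sketch of the usual pattern, assuming the CSV has a column named host that matches the host field in your events (rename inside the subsearch if the column is called something else): the inputlookup runs as a subsearch and becomes a host filter on the main index search. The index name and CSV filename are placeholders.

```
index=your_index [ | inputlookup "hosts.csv" | fields host ]
| stats count by host
```

The subsearch expands to (host="host1" OR host="host2" OR ...), so only events from the hosts listed in the CSV are returned; replace the stats line with whatever your current search does after its initial filter.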

Check for event that has not changed for X days

Hello. I'm struggling with a query. We want to search Windows event logs for accounts whose passwords have not been changed (by admins) for more than 700 days. I have created a query that informs me of when a password was changed:

```
index=main host=*DC* EventCode=4724
| eval Modifier = mvindex(Account_Name, 0)
| eval User_Name = mvindex(Account_Name, 1)
| rename Group_Name AS Modified_Group
| table _time Modifier User_Name
```

But I do not know how to get Splunk to check for a password that has NOT been changed for over X days. Is this even possible? Thank you in advance for your help.
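A sketch of one way to turn the search above into a "stale password" report, keeping the question's index, event code, and field handling: take the most recent change per account and keep only accounts whose latest change is older than 700 days. One assumption worth calling out: this only sees accounts that have at least one 4724 event inside the search window, so the time range has to be at least as long as the staleness threshold, and accounts with no change events at all will never appear (pulling pwdLastSet straight from Active Directory may be the more reliable source for those).

```
index=main host=*DC* EventCode=4724 earliest=-800d
| eval User_Name = mvindex(Account_Name, 1)
| stats max(_time) as last_changed by User_Name
| where last_changed < relative_time(now(), "-700d")
| eval last_changed = strftime(last_changed, "%Y-%m-%d %H:%M:%S")
```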

TA-DMARC TLS Version Error

When attempting to add an input for TA-DMARC, I am receiving the following error:

```
Error connecting to {imap.hostname.tld} with exception [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:741)
```

----------

TLS is working on the IMAP host on port 993:

```
sslscan {imap.hostname.tld}:993
Version: 1.11.13-static
OpenSSL 1.0.2-chacha (1.0.2g-dev)

Connected to {ip.address}
Testing SSL server {imap.hostname.tld} on port 993 using SNI name {imap.hostname.tld}

TLS Fallback SCSV: Server only supports TLSv1.0
TLS renegotiation: Secure session renegotiation supported
TLS Compression: Compression disabled
Heartbleed:
TLS 1.2 not vulnerable to heartbleed
TLS 1.1 not vulnerable to heartbleed
TLS 1.0 not vulnerable to heartbleed

Supported Server Cipher(s):
Preferred TLSv1.0 256 bits ECDHE-RSA-AES256-SHA Curve P-521 DHE 521
Accepted  TLSv1.0 128 bits ECDHE-RSA-AES128-SHA Curve P-521 DHE 521
Accepted  TLSv1.0 256 bits AES256-SHA
Accepted  TLSv1.0 128 bits AES128-SHA
Accepted  TLSv1.0 112 bits DES-CBC3-SHA

SSL Certificate:
Signature Algorithm: sha256WithRSAEncryption
RSA Key Strength: 2048
Subject: {imap.hostname.tld}
Altnames: DNS:{imap.hostname.tld}, {snip}
Issuer: DigiCert SHA2 Secure Server CA
Not valid before: May 31 00:00:00 2017 GMT
Not valid after: Aug 3 12:00:00 2020 GMT
```

----------

And the Splunk instance is able to connect to the IMAP server via TLS 1.0 on port 993:

```
$SPLUNK_HOME/bin/splunk cmd openssl s_client -connect {imap.hostname.tld}:993
CONNECTED(00000003)
depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA
verify error:num=20:unable to get local issuer certificate
---
Certificate chain
 0 s:{snip}
   i:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
 1 s:/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
   i:/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Global Root CA
---
Server certificate
-----BEGIN CERTIFICATE-----
{...snip...}
-----END CERTIFICATE-----
subject={snip}
issuer=/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
---
No client certificate CA names sent
Server Temp Key: ECDH, P-521, 521 bits
---
SSL handshake has read 3143 bytes and written 508 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1
    Cipher    : ECDHE-RSA-AES256-SHA
    Session-ID: {snip}
    Session-ID-ctx:
    Master-Key: {snip}
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1573671407
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---
* OK The Microsoft Exchange IMAP4 service is ready.
```

----------

Is there any configuration in TA-DMARC that may have an effect on this issue, or should I be looking elsewhere in Splunk? Any pointers or hints with this issue would be appreciated.

Splunk Windows App for infrastructure_200

I have loaded splunk-app-for-windows-infrastructure_200, splunk-supporting-add-on-for-active-directory_300, and splunk-add-on-for-microsoft-windows_700. When I run the guided install, it finds the domain but does not find anything else from AD: no controllers, nothing else. The AD setup has the information configured and finds the domain just fine. The MSAD index has at least some data in it. Any suggestions would be appreciated.

Is it possible to suppress errors for lookups that are intentionally hidden from certain users?

We have (here at the University) some course dashboards we’re working on. The source data has obfuscated userIDs, and dashboard dev is going swimmingly. We want certain privileged users to be able to view these dashboards with actual (human-friendly) userIDs (we call “netIDs”). I’ve set up an automatic lookup to turn “personID” in to the netID value … but only if the privileged user is in a particular role. I.e., the lookup is only available to users in the 'privileged' role. This works great. The dashboards work for both privileged and unprivileged. (Unprivileged get the obfuscated ID, privileged get the ID from lookup.) However… persons without access to the lookup are getting errors about not being able to locate the lookup. My question: Is it possible to suppress these errors? They’re reporting a lack of access that is intentional. If there is no way to suppress the errors — is there another way to design this? (I don’t want us to have to manage separate sets of dashboards.)

Can Splunk share memory data to different queries?

Hello, Splunk experts. I have very big raw data, and I need to pass it through different sets of rules. For example, query 1 is:

```
index=abc sourcetype=xyz data=raw | rule1, rule2 ... ruleN
```

and query 2 is:

```
index=abc sourcetype=xyz data=raw | ruleN+1, ruleN+2 ... ruleN+M
```

The raw data is the same, but the rules are different. If I run these 2 queries, how can I share the same raw data in memory so I don't need to load the big data twice? Is there any solution for this? Thanks.
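If the two queries live on a dashboard, a sketch of the usual pattern is a base search plus post-process searches: the base search reads the raw data once and each panel applies its own rules to those cached results. The Simple XML below is a skeleton with placeholder rule pipelines and time range; outside dashboards, alternatives are `loadjob` against an already-finished search job or writing the shared results to a summary index.

```xml
<dashboard>
  <search id="base">
    <!-- runs once; results are shared by every search that references base="base" -->
    <query>index=abc sourcetype=xyz data=raw | fields *</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <!-- placeholder for your rule1..ruleN pipeline -->
          <query>| search rule_field="something" | stats count by host</query>
        </search>
      </table>
    </panel>
    <panel>
      <table>
        <search base="base">
          <!-- placeholder for your ruleN+1..ruleN+M pipeline -->
          <query>| search other_field="something_else" | stats count by source</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

One design caveat: base searches work best when they either transform the data or explicitly limit fields, since non-transforming base searches are capped in how many events they hand to post-process searches, so very large raw sets may still need a summary-index approach instead.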

Data retention

Where and how can I set data retention in Splunk? I have seen there are many indexes it can be set on (telemetry, main, etc.), so it's really not clear to me.
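Retention is configured per index in indexes.conf, typically on the indexers: a bucket is frozen (deleted by default) once its newest event is older than frozenTimePeriodInSecs, or once the index exceeds its size cap. A sketch with placeholder values:

```
# indexes.conf -- retention is evaluated per index, per bucket
[main]
frozenTimePeriodInSecs = 7776000      # ~90 days; frozen buckets are deleted unless coldToFrozenDir is set
maxTotalDataSizeMB = 500000           # size cap; whichever limit is hit first wins
```

Internal indexes such as _internal and _telemetry have their own stanzas, which is why retention appears to be set in several places; the Settings > Indexes page in Splunk Web edits some of the same values.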

Pagination cursor with GET REST API

If I set up the REST API Modular Input, it properly reads the API, but I can't figure out how to get it to paginate. In the API response there's a field called next-cursor whose value should be supplied in the next API query that the REST modular input makes. I'm thinking maybe either the Response Handler Arguments or the token substitution in the endpoint URL, but it's not super clear how to use each of these.
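For what it's worth, the way this kind of cursor is usually handled in the REST API Modular Input is a custom response handler added to the app's responsehandlers.py: it reads the cursor out of each response and writes it back into the request parameters for the next poll. The sketch below assumes the handler interface used by that app's bundled examples (a class whose __call__ receives the response and the request arguments, with print_xml_stream defined earlier in the same file) and a JSON body carrying a "next-cursor" key; treat the exact signature, the "results" key, and the cursor_param name as assumptions to verify against the responsehandlers.py shipped with your version.

```python
# Hypothetical handler to paste into responsehandlers.py of the REST API Modular Input app.
# Assumptions: the app calls handlers as
#   __call__(response_object, raw_response_output, response_type, req_args, endpoint)
# and print_xml_stream() is the output helper already defined in that file.
import json

class CursorPaginationResponseHandler:

    def __init__(self, **args):
        # name of the query-string parameter the API expects the cursor in (assumption)
        self.cursor_param = args.get("cursor_param", "cursor")

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint):
        if response_type == "json":
            payload = json.loads(raw_response_output)
            # index each returned record as its own event
            for record in payload.get("results", []):
                print_xml_stream(json.dumps(record))
            # stash the cursor so the next poll picks up where this one stopped
            next_cursor = payload.get("next-cursor")
            if next_cursor:
                req_args.setdefault("params", {})[self.cursor_param] = next_cursor
        else:
            print_xml_stream(raw_response_output)
```

The handler class name then goes in the input's Response Handler field, and anything like cursor_param can be supplied through Response Handler Arguments.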