Channel: Questions in topic: "splunk-enterprise"

Single Search Head/Single Indexer (distributed search)

Hi, is it possible to create a single search head instance, and/or a single indexer instance? Or are instances indexers by default?
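A default Splunk Enterprise instance acts as both an indexer and a search head, so a single search head plus a single indexer is a standard layout. A minimal sketch of dedicating one instance to searching and pointing it at the other as a search peer (hostnames and credentials below are placeholders, not values from this question):

# outputs.conf on the search head: stop indexing locally, forward everything to the indexer
[indexAndForward]
index = false

[tcpout:my_indexers]
server = indexer.example.com:9997

# register the indexer as a search peer for distributed search
splunk add search-server https://indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme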

I am receiving an "unbalanced quotes" message; I tried using a backslash

| eval e="$time_token.earliest$", l="$time_token.latest$"
| eval e=case(match(e,"^\d+$"), e, e="" OR e="now", "0", true(), relative_time(now(),e))
| eval l=case(match(l,"^\d+$"), l, l="" OR l="now", "2145916800", true(), relative_time(now(),l))

Collectd Docker plugin for Splunk App for Infrastructure is not working

Hello everybody, I want to monitor my Docker containers with collectd and the Splunk App for Infrastructure. I followed the instructions at https://docs.splunk.com/Documentation/InfraApp/latest/Admin/ManageAgents, but when I start the collectd daemon it comes up with these error messages:

docker plugin: Buffer size is 16384, Data received=16384. Increase ReadBufferSize
docker plugin: curl_easy_perform failed with status 23: Failed writing received data to disk/application
docker plugin: Failed to get list of running containers

The connection to my Splunk server via HEC is working fine, because I get the metrics of my physical machine, but not of the Docker containers. My Docker containers are running and I have checked the docker.sock file with curl. I have been working on this problem for 2 days now. It would be great if anyone could help. Best regards, Jannik
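The first error message explicitly asks for a larger ReadBufferSize. As a hedged sketch only (the plugin block name and the value are assumptions based on the error text, not taken from the plugin's documentation), the collectd configuration change might look like this:

# collectd.conf: raise the docker plugin's read buffer so container data fits
<Plugin docker>
    ReadBufferSize 65536
</Plugin>

Check the docker plugin's own documentation for the exact option spelling and where the block belongs in your collectd.conf.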

Process Solaris audit files into Splunk 7.2.5

Hi, I have a customer running Solaris 11, and I need to monitor their Solaris audit data as kept in their Global Zones (this covers all Zones). How do I process this binary-format file so as to retrieve only the latest log file (the same way the DB Connect app does)? I have the TA for *nix/Linux installed on their Splunk server. I want to be able to retrieve data such as user login information: failed and successful logins, the time of login, the number of unsuccessful login attempts, and so on. Regards, David
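Solaris audit trails are binary, so one common pattern (a sketch, not the add-on's official method) is a scripted input that converts the newest audit file to text with praudit and prints it to stdout for Splunk to index. The script name, interval and sourcetype below are placeholders:

# inputs.conf on the instance monitoring the Global Zone
[script://./bin/solaris_audit.sh]
interval = 300
sourcetype = solaris:audit
disabled = false

where solaris_audit.sh would locate the most recent file under /var/audit and run something like praudit -l on it, keeping a marker of how far it has already read.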

Index and forward events on indexer

Hi all, I have a Splunk indexer (version 6.2.14) that receives events from a Splunk forwarder (same version). On the forwarder I have a monitor that reads some files from the local filesystem and forwards a subset of them to the indexer. The indexer receives events on a TCP-over-TLS port and indexes the events with no problem. The filter on the forwarder works as expected. Now I need to continue to index everything that comes from the forwarder, but in addition I need to forward a subset of the received events to a third-party destination over plain TCP (no TLS). Here is the configuration I have built:

**FORWARDER**

**outputs.conf**

[tcpout]
defaultGroup = _9999
sslCertPath = /...
sslRootCAPath = /...
sslVerifyServerCert = false
maxQueueSize = 100MB
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[tcpout:_9999]
autoLB = false
server = :9999

**transforms.conf**

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue

[CA]
REGEX = (?s)\d+\s\[-(?:07628|07777|07649|07675|07676|07697|07698|07705|07714|07717|07718|07719|07724|07725|97726|07727|07734|07751|07753|07765|07767|07783|07792|07816|07819|07824|07827|07836|07841|07842|07849|07854|07884|07886|07888|07889|07895|07896|07899|07900|07901|07903|07914|07916|07929|07930|07932|07933|07943|07948|07951|07952|07953|07954|07955|07956|07960|07963|07964|07966|07968|07972|07984|07965|07823|07977|07941|07992|07982|07981|07979|07994|07647|07840|07790|07756|07743|07744|07989|07990|07993|07618)\s
DEST_KEY = queue
FORMAT = indexQueue

[udpsyslog]
REGEX = .*\]\: Accepted password.*|.*\)\: session closed.*|.*\)\: session opened.*|.*\)\: authentication failure.*|.*\]\: Failed password for.*|.*\: invalid user.*|.*\: password changed for.*
DEST_KEY = queue
FORMAT = indexQueue

**inputs.conf**

[default]
host =

[monitor:///tmp/file.log]
time_before_close = 15
disabled = false
followTail = 0
sourcetype = CA

**INDEXER**

**inputs.conf**

[default]
host =

[splunktcp-ssl:9999]
_INDEX_AND_FORWARD_ROUTING = STRING
_TCP_ROUTING = my_syslog_ca

[SSL]
cipherSuite = TLSv1.2+HIGH:!3DES:@STRENGTH
password = **********
requireClientCert = false
rootCA = /...
serverCert = /...
sslVersions = tls1.2

**outputs.conf**

[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
defaultGroup = my_syslog_ca
forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)

[my_syslog_ca]
indexAndForward = true

[tcpout:my_syslog_ca]
disabled = false
sendCookedData = false
server = :9999

**props.conf**

[source::/tmp/file.log]
TRANSFORMS-ca = send_to_syslog_ca

**transforms.conf**

[send_to_syslog_ca]
REGEX = (?!.*\[-07965.*Client type: GUI.*Operator\/CMS).*\[-07965.*Client type: GUI.*|(?!.*\[-07966.*Client type: GUI.*Operator\/CMS).*\[-07966.*Client type: GUI.*|.*\[-07968.*|.*ALARM.*|.*\[-07963.*|.*\[-07964.*|.*\[-07972.*|.*\[-07792.*|(?!.*\[-07841.*Nearing expiration).*\[-07841.*|(?!.*\[-07842.*Nearing expiration).*\[-07842.*|(?!.*\[-07895.*Nearing expiration).*\[-07895.*|.*\[-07968.*
DEST_KEY = _TCP_ROUTING
FORMAT = my_syslog_ca

As said, on the forwarder everything works as expected: events are read, filtered and sent to the indexer. The indexer indexes the filtered events it receives with no issues. The problem is that the filtering does not work when sending events to the third party: the indexer forwards everything to the third party, not only the events matched by the regex in the indexer's transforms. What am I doing wrong? Thanks in advance. Alessandro

Need help using tstats to get the count of a string in raw logs

I want to show the count of logs in which a string appears. I have a string and need to know how many times it appears in the logs.
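tstats only reads indexed fields and terms, so counting an arbitrary string inside _raw is normally done with a plain search; a minimal sketch (index name and string are placeholders):

index=my_index "my_string"
| stats count

If the string happens to be a standalone indexed term, a tstats variant such as the following can be much faster:

| tstats count where index=my_index TERM(my_string) by _time span=1d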

Can Splunk process data that is "updated" over time?

Dear fellow Splunkers, I have a use case where I believe Splunk could provide great insight, alerts and dashboards, but I do not know whether the way the data has to be acquired makes it the right tool for the job. The data in question is timesheet reporting, with the additional challenge that timesheets might be updated (data entry errors fixed) later on. For example, I could run a script every day that imports records consisting of:

* ID/name of the user
* Current timestamp = the time the data was read from the underlying operational system
* Timesheet period: date, begin and end time
* Project being worked on
* Maybe additional categories

So it might happen that I import some of these tuples, but then (say the next day) re-run the import and one of the following happens:

* A particular period is no longer present, maybe because it has been deleted (time recorded by mistake)
* A particular period has changed in duration (e.g. forgot to stop the timer)
* New periods are added (forgot to start the timer)

Would it be feasible to work with this data in Splunk at all? I guess the problem is that Splunk is not a (relational) database but an append-only index, right? I mean, how could I easily make all relevant searches consider, for a particular day, only those events (imported records) that were imported the last time the data for that day was updated? Does that problem description make sense?
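One append-only pattern that fits this description is to keep every import as its own set of events, tag each record with the import run's timestamp, and let the search keep only the rows from the most recent import that covered a given user and day. A minimal sketch, assuming hypothetical extracted fields user, timesheet_date, duration and import_time (epoch of the import run):

index=timesheets
| eventstats max(import_time) as last_import by user, timesheet_date
| where import_time == last_import
| stats sum(duration) as hours by user, timesheet_date

Because each import is a full snapshot for the days it touches, deleted periods simply stop appearing once the latest import no longer contains them.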

Can I run a refresh from the command line?
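The refresh normally done from Splunk Web at http://<host>:8000/debug/refresh works by calling the _reload action on individual configuration REST endpoints. A hedged sketch of triggering one such reload from the command line over the management port (host, credentials and the chosen endpoint are placeholders, and availability of _reload varies by endpoint and version):

curl -k -u admin:changeme https://localhost:8089/servicesNS/-/-/admin/savedsearch/_reload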


Join two lines in the same search

Hi all, I'm currently monitoring log files. I have extracted 2 fields, end_collection_timestamp and starting_collection_timestamp, and I want to calculate the duration of execution:

| eval duration = end_collection_timestamp - starting_collection_timestamp

But this method does not work, because the lines that contain end_collection_timestamp do not contain starting_collection_timestamp. I do not understand everything yet, but I think this is the root cause. The result I want is a timechart of average duration by day and source. Thanks for your help.
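Because the start and end timestamps live on different events, they first have to be brought onto one row, typically with stats grouped by whatever identifies a single execution. A minimal sketch, assuming both extracted fields hold epoch seconds and that one execution per source happens per day (index name and span are placeholders):

index=my_index
| bin _time span=1d
| stats min(starting_collection_timestamp) as start max(end_collection_timestamp) as end by _time, source
| eval duration = end - start
| timechart span=1d avg(duration) by source

If the fields are text timestamps rather than epoch values, convert them first with strptime().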

Display date on X axis

Hi all, I'm trying to generate a timechart which shows the execution duration of a file. I have almost succeeded, but I'm not able to get an X axis with the timestamp visible. Is it possible?

index="saplogs" sourcetype=SAPCARBOOKING source="CARBOOKING.*.log"
| stats min(_time) as start max(_time) as end by source
| eval duration=end-start
| eval start=strftime(start, "%Y-%m-%d %H:%M:%S")
| eval end=strftime(end, "%Y-%m-%d %H:%M:%S")
| stats avg(duration) as Duration by end, source
| rename end as "End of processing date"

In the last stats line I group by end date and source, because I want to see which source the duration belongs to. Thank you for your help.
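To get real dates on the X axis, the chart needs to be driven by _time rather than by a string produced with strftime. A minimal sketch that keeps the per-source duration but lets timechart label the axis (the daily span is an assumption):

index="saplogs" sourcetype=SAPCARBOOKING source="CARBOOKING.*.log"
| stats min(_time) as start max(_time) as end by source
| eval duration = end - start
| eval _time = end
| timechart span=1d avg(duration) by source

With _time set from the end of processing, the X axis shows the date automatically and the legend still separates the sources.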

Can someone explain specifically what the code below does?

| eval created_upper_token=if("$time_token.latest$"="" OR like("$time_token.latest$","%now%"),"@s","$time_token.latest$")
| eval created_lower_token=if("$time_token.earliest$"="",0,"$time_token.earliest$")
| replace "rt*" with * in created_upper_token
| replace "rt*" with * in created_lower_token
| eval created_lower_bound = if(isnum(created_lower_token), created_lower_token, relative_time(now(),created_lower_token))
| eval created_upper_bound = if(isnum(created_upper_token), created_upper_token, relative_time(now(),created_upper_token))
| where order_date >= created_lower_bound AND order_date <= created_upper_bound
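In short, the snippet turns the time-picker tokens into numeric epoch bounds: empty or "now" values get a default, a leading "rt" (real-time) prefix is stripped by the replace commands, anything already numeric is used as-is, and everything else is converted with relative_time() before order_date is filtered against the two bounds. A small self-contained example of the relative_time() step:

| makeresults
| eval created_lower_token="-7d@d"
| eval created_lower_bound=relative_time(now(), created_lower_token)

Here relative_time(now(), "-7d@d") returns the epoch timestamp of midnight seven days ago, which is what the where clause then compares order_date against.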

Lookup file 'cisco_ios_messages.csv' has 2 missing fields

This warning has been polluting my internal logs for a long time:

11-27-2019 13:39:46.280 +0000 WARN IndexedCSV - csv file /opt/splunk/var/run/searchpeers/B5C95237-71AF-4203-AD5C-F88B10308CFC-1574861980/apps/cisco_ios/lookups/cisco_ios_messages.csv has 2 missing fields

Today I finally decided to look at it. Using some filtering in Excel, I found two lines in the **cisco_ios_messages.csv** lookup file in this app that didn't have all the columns populated:

splunk@splunk:[~/etc/shcluster/apps/cisco_ios/lookups]> grep '""' cisco_ios_messages.csv
"ZONE","6","ZS_INVALID_MEMBER","Invalid member [dec] [chars]","Invalid member type [dec] [chars] recieved from the API call.",""
"ZONE","6","ZS_UNKNOWN_LIC_FEATURE","[chars]","",""

A bit of googling uncovered this Cisco page: [Cisco MDS 9000 Family and Nexus 7000 Series NX-OS System Messages Reference][1]. With the information from that page, I fixed the lookup file. Now the internal errors are gone:

"ZONE","6","ZS_INVALID_MEMBER","Invalid member [dec] [chars]","Invalid member type [dec] [chars] recieved from the API call.","No action is required."
"ZONE","6","ZS_UNKNOWN_LIC_FEATURE","[chars]","Zone Server received an event for an known licensing feature: [chars].","No action is required."

[1]: https://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/system_messages/reference/sys_Book/sl_7K_MDS_u_to_z.html

Thank you for the app!

How to show the latest month's data as a solid line and all other months as marker points in a line chart?

Hi, I have data for each month, like below (for example):

Data1   min   Months
-1      322   Jan-19
1       340   Jan-19
2       200   Jan-19
-1      250   Feb-19
1       360   Feb-19
2       200   Feb-19

and similarly for all months up to Oct-19. We want to show min over Data1 by Months. We want to display all months' data as dots, and the latest month (Oct-19) as a solid line, in a single chart panel. Please help.
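One way to get different styling for the latest month is to make it an identifiable series of its own, so the visualization can format it independently. A minimal sketch, assuming the fields are named Data1, min and Months as in the table above (the month literal is a placeholder for whatever the latest month is):

...
| eval Months_style=if(Months="Oct-19", "Oct-19 (latest)", Months)
| chart min(min) over Data1 by Months_style

The "Oct-19 (latest)" series can then be drawn as a connected line and the other series as markers only in the chart's format options (or via charting.* properties in the dashboard XML); the exact property names depend on the Splunk version, so treat this as a direction rather than a recipe.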

sourcetype reporting interval?

Does anybody have a query to show sourcetype reporting intervals (how often a sourcetype sends data)? I can't download or install any apps, so I need to use SPL. Timechart maybe? Does anybody have a dashboard for this? Thanks.
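tstats can answer this without installing anything, because it reads index-time metadata directly. A minimal sketch showing, per sourcetype, when data last arrived and how long ago that was (the index scope is a placeholder):

| tstats max(_time) as last_seen where index=* by sourcetype
| eval minutes_since_last_event = round((now() - last_seen) / 60, 1)
| convert ctime(last_seen)

The built-in metadata command (| metadata type=sourcetypes index=*) returns similar firstTime/lastTime/recentTime values per sourcetype and is another app-free option; put either search in a dashboard panel to watch for reporting gaps.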

Can I create multiple rows in the panel title tag?

I am trying to split my panel title tag across two rows without using an html tag with its line break.

*TITLE TAG code*

STREAM BY AGE CATEGORY ( Stream: YYYYYYY Aging: YYYYYYY )

*HTML TAG code*

STREAM BY AGE CATEGORY
<br/>
( Stream: YYYYYYY Aging: YYYYYYY )

Can someone suggest a solution? Thanks in advance.
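In Simple XML a panel's <title> element renders on a single line, which is why the two-row layout usually ends up in an <html> element instead. A minimal sketch of both variants, using the placeholder text from above (the surrounding dashboard XML is assumed, not taken from this post):

<panel>
  <title>STREAM BY AGE CATEGORY ( Stream: YYYYYYY Aging: YYYYYYY )</title>
  <!-- chart or table element goes here -->
</panel>

<panel>
  <html>
    <h3>STREAM BY AGE CATEGORY<br/>( Stream: YYYYYYY Aging: YYYYYYY )</h3>
  </html>
  <!-- chart or table element goes here -->
</panel>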

Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert

I have configured SAML 2.0 SSO with our own IdP. My local Splunk instance at http://khal:8000/ successfully redirects to the assertion consumer URL. I then enter the username and password there and get an error message on the Splunk login page:

Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert

Here is /opt/splunk/var/log/splunk/splunkd.log:

11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecOpenSSLX509StoreVerify:file=x509vfy.c:line=341:obj=x509-store:subj=unknown:error=71:certificate verification failed:X509_verify_cert: subject=/CN=selfSigned; issuer=/CN=selfSignedCA; err=20; msg=unable to get local issuer certificate
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecOpenSSLX509StoreVerify:file=x509vfy.c:line=380:obj=x509-store:subj=unknown:error=71:certificate verification failed:subject=/CN=selfSigned; issuer=/CN=selfSignedCA; err=20; msg=unable to get local issuer certificate
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecOpenSSLKeyDataX509VerifyAndExtractKey:file=x509.c:line=1505:obj=x509:subj=unknown:error=72:certificate is not found:details=NULL
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecOpenSSLKeyDataX509XmlRead:file=x509.c:line=655:obj=x509:subj=xmlSecOpenSSLKeyDataX509VerifyAndExtractKey:error=1:xmlsec library function failed:
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecKeyInfoNodeRead:file=keyinfo.c:line=117:obj=x509:subj=xmlSecKeyDataXmlRead:error=1:xmlsec library function failed:node=X509Data
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecKeysMngrGetKey:file=keys.c:line=1230:obj=unknown:subj=xmlSecKeyInfoNodeRead:error=1:xmlsec library function failed:node=KeyInfo
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecDSigCtxProcessKeyInfoNode:file=xmldsig.c:line=790:obj=unknown:subj=unknown:error=45:key is not found:details=NULL
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecDSigCtxProcessSignatureNode:file=xmldsig.c:line=503:obj=unknown:subj=xmlSecDSigCtxProcessKeyInfoNode:error=1:xmlsec library function failed:
11-27-2019 16:59:30.229 +0200 ERROR XmlParser - func=xmlSecDSigCtxVerify:file=xmldsig.c:line=341:obj=unknown:subj=xmlSecDSigCtxSignatureProcessNode:error=1:xmlsec library function failed:
11-27-2019 16:59:30.229 +0200 ERROR Saml - Error: Failed to verify signature with cert :/opt/splunk/etc/auth/idpCerts/idpCert.pem;
11-27-2019 16:59:30.229 +0200 ERROR Saml - Unable to verify Saml document
11-27-2019 16:59:30.229 +0200 ERROR UiSAML - Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert

Here is /opt/splunk/etc/system/local/authentication.conf:

[saml]
entityId = splunkEntityId
fqdn = http://khal
idpSLOUrl = https://idp.cloud.imprivata.com/BOE/saml2/slo/post
idpSSOUrl = https://idp.cloud.imprivata.com/BOE/saml2/sso/post
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256
issuerId = https://idp.cloud.imprivata.com/BOE/saml2
redirectPort = 8000
replicateCertificates = true
signAuthnRequest = true
signatureAlgorithm = RSA-SHA256
signedAssertion = true
sloBinding = HTTP-POST
sslKeysfile = /opt/splunk/etc/auth/server.pem
sslKeysfilePassword = $7$3creInbv0FSAruNBlecI/Ax+eJmCOy2kaKaGi/AYzwNChCylHgv/cQ==
ssoBinding = HTTP-POST

Environment:
OS: 18.04.1-Ubuntu
Splunk Enterprise: splunk-7.3.3-7af3758d0d5e-linux-2.6-amd64 and splunk-8.0.0-1357bef0a7f6-linux-2.6-amd64

P.S.: We are using self-signed certificates, so the answer in https://answers.splunk.com/answers/543221/problem-with-saml-cert-error-uisaml-verification-o.html doesn't apply.
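The repeated "unable to get local issuer certificate" lines say the assertion is signed with CN=selfSigned, which is issued by CN=selfSignedCA, and that issuing CA is not found where Splunk looks for the IdP certificate. A hedged sketch of one common remedy for self-signed chains, assuming both certificate files are available (the file names below are placeholders; the target path comes from the log):

# put the full chain (signing cert plus its issuing CA) into the configured IdP cert file
cat selfSigned.pem selfSignedCA.pem > /opt/splunk/etc/auth/idpCerts/idpCert.pem

After replacing the file, restart Splunk (or reload the SAML configuration) so the new chain is picked up.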

Text Clustering in Splunk

Hi, here is my requirement: I have a file with a column 'Description'. I need to get the most common patterns of words. Example:

Repetitive Pattern   Count   Percentage   Examples
Job                  80      15%          Job related with ticket number
Access               130     20%          Access issues

Any "Job" or "Jobs" should be categorized as Job. I have installed the Machine Learning Toolkit and tried to apply TFIDF and KMeans. I am unable to proceed as I am new to Splunk. Can anyone help me with how to do clustering using KMeans with data as mentioned above and get the required output? Please help.
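In the Machine Learning Toolkit, TFIDF converts the free-text Description into numeric term features and KMeans then groups rows with similar vocabulary; the cluster sizes give the counts and percentages. A minimal sketch, assuming the data is available via a lookup (the lookup name, max_features and k values are placeholders):

| inputlookup descriptions.csv
| fit TFIDF Description max_features=100
| fit KMeans Description_tfidf_* k=5
| stats count by cluster
| eventstats sum(count) as total
| eval percentage = round(100 * count / total, 1)

To label a cluster as "Job" or "Access", inspect sample Description values per cluster (for example with values(Description) in the stats), or seed the categories explicitly with a match()/case() eval if the keywords are known in advance.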

Splunk USB Control

Hi, we use Splunk to manage USB devices. We wrote a script that finds a USB device's serial number and checks in our database whether it is registered; Splunk then runs a command, which is:

devcon.exe update "c:\Windows\inf\disk.inf" "USBSTOR\GenDisk"

Our script works properly on Windows 7 and 8.1, but it does not work on Windows 10. When I run the bat file manually, it works. When I check the logs, everything looks right. I don't understand where the problem is. The script is correct, because when I run it manually the USB device is enabled. Can you help me? Thank you

Need a Splunk query to get the list of processes running on a web server

I used sourcetype=Perfmon:Process and I could get the fields counter, instance and object, where instance refers to the process name.
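Since the instance field carries the process name in Perfmon:Process data, a per-host process list is a simple stats away; a minimal sketch (index name and host are placeholders, and the sourcetype spelling should match what your add-on actually writes):

index=perfmon sourcetype="Perfmon:Process" host="my_web_server"
| stats latest(_time) as last_seen by host, instance
| convert ctime(last_seen)
| rename instance as process

This lists every process name seen on the host in the selected time range, along with the last time it reported a perfmon sample.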

3 issues with the TA_crowdstrike app: URL constants valid only for the commercial cloud and not the EU cloud, an authentication header issue when validating credentials, and commented-out execution of the Query API imports

Hi, I'm trying to use your add-on with the EU Cloud API. I've encountered the following issues and found solutions I would like to share with you, in order to ask you to check them and eventually fix them in an official release of the add-on.

**EU Cloud version**

The current add-on contains a series of constants for the URLs that point to the Commercial cloud APIs. However, we don't have access to those, only to the EU Cloud ones. Is it possible for you to create a version of the add-on that accepts the Query and Streaming API URLs valid for the EU Cloud, either as an input or at least as something configurable?

**Authentication header issue during validation of credentials**

In the file "ta_crowdstrike_rh_falcon_host_accounts.py", in the method "validate" of the class "CheckValidation", the "Authorization" header set by your code as follows did not work:

headers = {"Authorization": "Basic " + base64string,
           "Content-Type": "application/json",
           "Accept": "application/json"}

The API returned an authentication error. I had to make the following changes to get it to work:

auth = HTTPBasicAuth(data["api_uuid"], data["api_key"])
base64string = base64.b64encode('%s:%s' % (data["api_uuid"], data["api_key"]))
headers = {"Content-Type": "application/json", "Accept": "application/json"}
params = {"headers": headers, "proxies": proxies}
rest_resp = requests.get("https://falconapi.eu-1.crowdstrike.com/detects/queries/detects/v1",
                         headers=headers, proxies=proxies, auth=auth)

I don't know if this fix is also valid for you, but can you check it and eventually fix it in your official code, please?

**Execution of the Query API imports is commented out**

After applying the changes above and completing the configuration, the modular input did not import any data. Investigating the code, I found the following comment and code in the file "falcon_host_data_client.py":

# We are restricting device endpoint due to issue at product side. We will revoke below condition
# once get resolved at product side.
if self._endpoint.find(consts.DEVICE_QUERY_ENDPOINT) != -1 or self._endpoint.find(consts.DETECT_QUERY_ENDPOINT) != -1:
    return False

Commenting out these two code lines, the import of data works. Can you enable the import of data again in your official version, please?

Our purpose is not to maintain a customized add-on, so please let us know whether you intend to keep this app up to date. Thank you for your support.