Channel: Questions in topic: "splunk-enterprise"

Converted SimpleXML dashboard to HTML, trellis not working for single value

I have a dashboard that I have converted to HTML. The dashboard contains a single value element with trellis enabled. The search behind it (using the `$timepicker.earliest$` / `$timepicker.latest$` time range tokens) is:

    | ess index="sandbox_report*" scan=true $only_high_score$
    | stats count by sandbox

This works as expected in SimpleXML. When I convert the dashboard to HTML, however, the trellis doesn't work anymore; the element is displayed without trellis, as a standard single value element. The converted code:

    var element1 = new SingleElement({
        "id": "element1",
        "trellis.enabled": "1",
        "trellis.size": "medium",
        "drilldown": "none",
        "height": "185",
        "managerid": "search1",
        "el": $('#element1')
    }, { tokens: true, tokenNamespace: "submitted" }).render();

Does anyone have any pointers on why this does not work as expected, and how to fix it?

Splunk Indexes Bucket Management

I have a question about managing the buckets in the volumes configured for my indexes. Below is my current configuration:

    [volume:hotwarm]
    path = /data/splunk/homedb
    maxVolumeDataSizeMB = 900000

    [volume:cold]
    path = /data/splunk/colddb
    maxVolumeDataSizeMB = 900000

    [default]
    maxDataSize = auto_high_volume
    maxWarmDBCount = 80
    frozenTimePeriodInSecs = 31104000
    homePath.maxDataSizeMB = 800000
    coldPath.maxDataSizeMB = 800000

Current data indexed is roughly 140GB per day, and my hot/warm and cold volumes are 1 TB each (I know they are severely undersized at the moment, and we are working to increase the space). After implementing the configuration above, my understanding is that warm buckets would start to roll to cold after hitting homePath.maxDataSizeMB; however, the current space utilization for the homePath is 900+GB. Did I make a mistake in my configuration? Any advice on how best to manage the indexes would be greatly appreciated.

Another question I have is regarding some of the parameters in indexes.conf:

- homePath.maxDataSizeMB: should this be set differently for each individual index, or would it be OK to set one value globally?
- maxTotalDataSizeMB: like the above, should this be set differently for each individual index?

Regards, Keith Yap
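For reference, a minimal sketch of setting these limits per index rather than only in `[default]`, assuming hypothetical index names (main, firewall) and the volumes defined above; the numbers are placeholders and would need to be sized against actual retention needs:

    # indexes.conf -- sketch: per-index overrides (values are not sized for production)
    [main]
    homePath   = volume:hotwarm/main/db
    coldPath   = volume:cold/main/colddb
    thawedPath = $SPLUNK_DB/main/thaweddb
    homePath.maxDataSizeMB = 300000    # cap hot/warm space for this index only
    maxTotalDataSizeMB     = 500000    # overall cap (hot + warm + cold) for this index

    [firewall]
    homePath   = volume:hotwarm/firewall/db
    coldPath   = volume:cold/firewall/colddb
    thawedPath = $SPLUNK_DB/firewall/thaweddb
    homePath.maxDataSizeMB = 100000
    maxTotalDataSizeMB     = 200000

Per-index values like these take precedence over the `[default]` stanza, which is why a single global value can behave differently from what each index actually needs.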

How to collect logs within 1 hour from a file managed by logrotate

Dear all, I have the access log file /var/log/secure, rotated daily by logrotate. I need to find logins that failed 3 or more times from one IP within 1 hour. I am using this query:

    source="/var/log/secure" sourcetype=linux_secure process=sshd "password for" NOT pam_unix NOT Accepted earliest=-24h latest=now
    | rex field=_raw "(?<status>Accepted|Failed) password for (?<user>\w+) from (?<ipaddr>[0-9A-Fa-f:\.]+)"
    | stats count by ipaddr
    | where count >= 3

This counts across the whole 24-hour range; I need the count restricted to a 1-hour window. Please help.
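A possible sketch that buckets events into 1-hour windows before counting, so the threshold applies per IP per hour rather than across the whole search range (field names as reconstructed above, and narrowed to failed attempts, which is an assumption about the intent):

    source="/var/log/secure" sourcetype=linux_secure process=sshd "Failed password" earliest=-24h latest=now
    | rex field=_raw "Failed password for (?<user>\w+) from (?<ipaddr>[0-9A-Fa-f:\.]+)"
    | bin _time span=1h
    | stats count by _time, ipaddr
    | where count >= 3

For a sliding window instead of fixed hourly buckets, `streamstats time_window=1h count by ipaddr` is an alternative to the `bin` plus `stats` pair.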

Need to change the IP address of the host on which Splunk forwarders are installed

In the deployment server, I can see the Windows host with the old IP address. I want to update the IP address of that host. Please guide me.
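For reference, a hedged diagnostic sketch: list what the deployment server currently records for its clients, and restart the forwarder after the address change so it phones home again. The assumption that the entry refreshes on the next phone-home is mine, not something confirmed in the question:

    # On the deployment server: list phone-home clients and what they last reported
    /opt/splunk/bin/splunk list deploy-clients

    # On the forwarder host, after its IP change (default Windows UF path)
    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" restart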

Would monitoring files with logrotate and delayed compression cause reindexing?

If I'm monitoring files that are being rotated with an added timestamp, and the rotated files are being compressed after a couple of days, could this cause reindexing of log events? I know that Splunk supports reading compressed files, and that as long as you don't add `crcSalt = <SOURCE>`, log rotation with a timestamp would not cause reindexing. However, the docs state that adding data to a compressed file would in fact cause reindexing ([link][1]). This confuses me. If Splunk decompresses files to read the checksum (to check whether the log file has already been indexed or not), why would adding data to a compressed file cause reindexing? And if Splunk doesn't read checksums that way for compressed files, how can we be sure normal rotated log files with delayed compression can't cause reindexing as well? Hope someone can explain this to me. :)

  [1]: http://docs.splunk.com/Documentation/Splunk/7.1.2/Data/Monitorfilesanddirectories#How_Splunk_Enterprise_monitors_archive_files
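Not an answer to the checksum question, but as a defensive measure, one hedged option is simply to exclude the compressed copies from the monitor input so they can never be picked up again. The stanza path and sourcetype below are hypothetical; `blacklist` is a standard inputs.conf regex applied to the full path:

    # inputs.conf -- sketch: monitor the live log but skip compressed rotations
    [monitor:///var/log/myapp]
    blacklist = \.gz$
    sourcetype = myapp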

What are the capabilities required for a role/user to apply shcluster-bundle from the deployer server?

We need to create a role on the deployer server to create the users, since admin access is blocked. What capabilities are required for a role to apply shcluster-bundle from the deployer server using the command below?

    /splunkdrive/splunk/bin/splunk apply shcluster-bundle --answer-yes -auth <username>:<password> -target https://<sh-member>:8089
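For context, roles and their capabilities are defined in authorize.conf on the deployer; a skeleton of what such a role definition looks like is below. The role name is hypothetical and the capability line is only a placeholder, since which capabilities `apply shcluster-bundle` actually requires is exactly what is being asked:

    # authorize.conf -- skeleton only; capability names are placeholders
    [role_deployer_operator]
    importRoles = user
    # <capability_name> = enabled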

KV Store field type cidr

Hello Splunkers, I just noticed that there is a field type "cidr" for the KV Store. According to the API documentation this should handle any kind of IP range nicely in canonical form: http://docs.splunk.com/Documentation/Splunk/7.1.2/RESTREF/RESTkvstore#CIDR

Until now we used the field type string:

    field.netrange = string

I created a new collection for testing with

    field.netrange = cidr

and transferred the content with `| inputlookup | outputlookup`. But upon inspection with `| inputlookup` I still observe the previous non-canonical IP ranges like 2001:620:2000::/48. Did I do something wrong? What is the benefit of using the field type cidr when there are no changes?
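For reference, a sketch of how such a test collection and its lookup definition might be wired up, assuming hypothetical collection and lookup names (netranges_cidr, netranges_cidr_lookup):

    # collections.conf -- hypothetical collection with a cidr-typed field
    [netranges_cidr]
    field.netrange = cidr

    # transforms.conf -- KV Store lookup definition pointing at the collection
    [netranges_cidr_lookup]
    external_type = kvstore
    collection = netranges_cidr
    fields_list = _key, netrange

The transfer would then be `| inputlookup <old lookup> | outputlookup netranges_cidr_lookup`, with the old lookup name left as a placeholder here.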

Histogram and bucket size

Hi, I have some proprietary log data that gives 3 different response times for each event. These are extracted into Timer1, Timer2, Timer3. What I want to achieve is to count the number of timer events that fall into a bucket, where I can control the bucket size. That means that just counting the number of 0.3-second response-time events is not enough; I also want to control it so that such an event is counted in a bucket that holds 0-1 second response times. As a twist, I don't know how many buckets I need, or rather I don't know how long the longest response time is, but I would like to truncate/gather up the values over a certain value. Then I want to plot the count on the Y axis and the buckets on the X axis. I get somewhere by using:

    | bin span=1 timer1 as Rtime
    | chart count as "Count" by Rtime

But then I'm stuck.
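A possible sketch that caps the values at an assumed ceiling (30 seconds here, purely an assumption) before binning, so everything above the ceiling lands in one final bucket:

    ... | eval Rtime=min(Timer1, 30)
        | bin Rtime span=1
        | chart count as "Count" by Rtime

The `eval min()` clamps anything over 30 seconds into the top bucket, and `bin span=1` produces fixed 1-second ranges (0-1, 1-2, ...) on the X axis.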

If I have two timestamps in my log file, how can I choose one timestamp as the timestamp of the event?

I have two timestamps in my log as shown below:

    "#01#20180626-125301;969#19700101-000028;723#0046#01#GROUND#Y#4Y1651"

My sourcetype is written in a way to pick up the second timestamp within 5000 days. Now, since that date in the above example is 19700101, Splunk falls back to the index time as the timestamp of the event. Is there a way to select the first timestamp as the event timestamp when my second timestamp is invalid?
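For illustration, a hedged props.conf sketch that anchors extraction on the first timestamp instead, assuming a hypothetical sourcetype name; note this always uses the first timestamp rather than conditionally falling back, and the settings shown (TIME_PREFIX, TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD, MAX_DAYS_AGO) are the standard timestamp controls:

    # props.conf -- sketch: extract the first timestamp (20180626-125301;969)
    [my_sourcetype]
    TIME_PREFIX = ^#\d+#
    TIME_FORMAT = %Y%m%d-%H%M%S;%3N
    MAX_TIMESTAMP_LOOKAHEAD = 20
    MAX_DAYS_AGO = 5000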

Can 6.5.2 indexers co-exist with 7.1.2 indexers?

I will be upgrading 4 indexers from 6.5.2 to 7.1.2. Will I need to stop all 4 indexers, upgrade them all, and then start them all again on the same version? Or can I stop indexer1, upgrade, start, and then do the same for the rest of the indexers? In other words, can 6.5.2 indexers co-exist with 7.1.2 indexers? Thanks in advance.

How to connect to a shared group Outlook mailbox using TA mailclient in Splunk?

We are trying to connect to a shared group Outlook mailbox using TA mailclient, but we are not able to connect to it. When we try an individual mailbox it works fine, but we cannot connect to the shared mailbox. How do we connect to the shared mailbox?

How do you retroactively make an unmanaged app a managed app on the deployment server?

Hello everyone, I have a deployment server that manages most of our Splunk apps, but when everything was set up, some apps were installed unmanaged. In particular, we have a Checkpoint app on one of our heavy forwarders that isn't managed by the deployment server. We are trying to move all unmanaged apps to the deployment server to make administration easier. Is there any process to create this link without redeploying the app? Thanks! Jacob
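For context, the deployment-server side of such a link is just a server class in serverclass.conf that references the app by its directory name under deployment-apps; a minimal sketch with hypothetical class, client, and app names:

    # serverclass.conf on the deployment server -- hypothetical names
    [serverClass:heavy_forwarders]
    whitelist.0 = hf-checkpoint01

    [serverClass:heavy_forwarders:app:TA-checkpoint]
    restartSplunkd = true

Whether the client re-downloads the app (rather than keeping the copy already on disk) depends on how the app contents and checksums compare, which is the part the question is really asking about.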

Why is eventtype not tagging 100% of events?

In an attempt to explain this right: we have set up multiple eventtypes for different occurrences. For example:

    eventtype=major
    eventtype=warning

major works just fine. When running a simple search:

    sourcetype="example" eventtype=warning

the matched results are not 100% of events. So, for example, the search returns 200 events, but when selecting `eventtype` in the interesting fields column, it shows that the warning eventtype only shows up for 90% (180) of the events. The search is still returning events that meet the requirements for `eventtype=warning`, but it is not tagging them as such. The goal here is to generate alerts based on these eventtypes to make them much easier to manage. My concern is that if the `eventtype` field is not applied to all occurrences, an alert may not have triggered. Looking into the events that are not getting the `eventtype` field, I notice they are rather long, and the portion of the log that would fulfill the requirements for the `eventtype` field is over 100 lines down in the log. Is there a `props.conf` or maybe an `eventtypes.conf` setting that can be modified? I'm wondering if it is not looking all the way through the logs to apply the field. Thanks for any help.
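One hedged thing to check, given that the matching text sits 100+ lines into the event: the per-event line and length limits in props.conf. This sketch assumes the sourcetype from the search above and simply raises the defaults (MAX_EVENTS defaults to 256 merged lines, TRUNCATE to 10000 bytes); it is a guess at the cause, not a confirmed fix:

    # props.conf -- sketch: allow longer multi-line events for this sourcetype
    [example]
    MAX_EVENTS = 1000
    TRUNCATE = 100000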

How do I keep the Splunk CLI from disappearing in Windows so I can read the output?

Hi, I have two Splunk deployments, one running Splunk 7.1.0 on Windows Server 2016 and one running Splunk 7.1.2 on Windows 10. When I run Splunk from the bin folder, or any Splunk command, I see the Splunk window open, but then it immediately closes. I would like the window to remain open so I can read the output. Am I running the wrong command to open the CLI? Thank you!
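For reference, a sketch of running the CLI from an already-open Command Prompt so the output stays on screen; the install path shown is the default and may differ on these systems:

    :: Open cmd.exe first (Start > type "cmd" > Enter), then:
    cd "C:\Program Files\Splunk\bin"
    splunk.exe status
    splunk.exe version

Running the commands from an existing prompt means the window belongs to cmd.exe, not to the Splunk process, so it does not close when the command finishes.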

Why can't my UF send data from /var/log/messages?

***Question: why is /var/log/messages not forwarded to the index?***

My deployment:

UF: version 7.1.2, RHEL 6.10

**/opt/splunkforwarder/etc/apps/_server_app_linux-server/local/inputs.conf**

    [monitor:///var/log]
    disabled = false
    index = linuxlog
    sourcetype = syslog

**etc/apps/_server_app_linux-server/local/app.conf**

    # Autogenerated file
    [install]
    state = enabled

**splunk list monitor**

    Monitored Directories:
    ...
    /var/log
    ...
    /var/log/messages
    /var/log/messages-20180805
    /var/log/messages-20180812
    /var/log/messages-20180819
    /var/log/messages-20180826

**ll /var/log/messages**

    -rw-r-----+ 1 root root 1160093 Aug 30 12:07 /var/log/messages
    -rw------- 1 root root 653 Aug 5 02:37 /var/log/messages-20180805
    -rw------- 1 root root 580 Aug 12 02:05 /var/log/messages-20180812
    -rw------- 1 root root 19310 Aug 19 02:42 /var/log/messages-20180819
    -rw------- 1 root root 728770 Aug 26 02:05 /var/log/messages-20180826

Deployment server: version 7.1.2, CentOS 7.5.1804

Search head: version 7.1.2, CentOS 7.5.1804

**search:**

    index="linuxlog" source="/var/log/messa*"

***There is no "/var/log/messages" among the sources!***

(Screenshot: /storage/temp/255880-splunk-uf-messages-forward-01.png)
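Given the permissions shown above, where /var/log/messages is readable only by root plus whatever the ACL (the `+` flag) grants, a hedged diagnostic sketch is below. The commands are standard; the suspicion that the splunk user simply cannot read the file is an assumption, not a confirmed cause:

    # Which user is splunkd running as on the forwarder?
    ps -ef | grep 'splunkd '

    # What does the forwarder's tailing processor report for the file?
    /opt/splunkforwarder/bin/splunk list inputstatus | grep -A3 '/var/log/messages'

    # Show the effective ACL on the file
    getfacl /var/log/messages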

What's the output of the following eval and now() function query?

Hi All, could you please help me confirm what the output of the eval command below would be?

    eval age = (now() - _time)

Would the output be in minutes or seconds? Thanks in advance.
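Since both `now()` and `_time` are Unix epoch values expressed in seconds, the difference is in seconds; a small sketch converting it for display:

    ... | eval age_seconds = now() - _time
        | eval age_minutes = round(age_seconds / 60, 1)
        | table _time age_seconds age_minutes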

How do I measure the amount of data in cold buckets?

I am using the following search, and it seems to work with hot buckets but not when changed to cold. I need the output from cold buckets for billing purposes. I've been working on this forever, it seems. If you could assist me, that would be great!

SEARCH:

    | tstats summariesonly=true sum(everything.rawlen) as rawBytes from datamodel=storage_billing by splunk_server, index, everything.bucketId, host
    | rename everything.* as *
    | eval rawMBytes=rawBytes/1024/1024
    | join splunk_server, bucketId
        [ dbinspect index=*
        | eval rawSizeMB=rawSize/1024/1024
        | fields splunk_server, bucketId, path, state, startEpoch, endEpoch, modTime, sizeOnDiskMB, rawSizeMB ]
    | eval compression=sizeOnDiskMB/rawSizeMB, newRawMBytes = rawMBytes * compression
    | eventstats sum(rawMBytes), sum(newRawMBytes) by splunk_server, bucketId
    | eval margin_of_error = round( ( sizeOnDiskMB - 'sum(newRawMBytes)' ) / sizeOnDiskMB, 4)
    | stats sum(newRawMBytes) as MBytes_Used, count(bucketId) as Bucket_Count by splunk_server, index, state, host
    | search state=cold
    | eval GBytes_Used=round(MBytes_Used/1024,2)
    | rename host as "Volume Name"
    | dedup host
    | rename MBytes_Used as Space
    | eval "Copy Type"="Primary"
    | eval F4="Copy"
    | fields "Volume Name", Space, "Copy Type", F4
    | outputcsv Logging_tsm
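For comparison, a much simpler sketch that sizes cold buckets directly from dbinspect, without the data model; it reports size on disk rather than raw data volume, which may or may not match the billing definition being used:

    | dbinspect index=*
    | search state=cold
    | stats sum(sizeOnDiskMB) as cold_MB, dc(bucketId) as cold_buckets by index, splunk_server
    | eval cold_GB = round(cold_MB/1024, 2)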

Running SecureAuth 9.2 and app is not working

All searches are not working; they return zero results, but data is flowing into Splunk.

Creating a List of Email Addresses and performing a search loop

Pretty new to Splunk and looking for advice. I've tried reviewing subsearches, map, and foreach looping, but I just can't crack the syntax. I have two indexes: one that stores computer hostname, IP, and a tag for a contact email; the other is scan data listing missing patches by IP.

    Index=hostnames
    Hostname    ip_address   Contact
    Hostname1   192.x.x.1    Email1
    Hostname2   192.x.x.2    Email2
    Hostname3   192.x.x.3    Email3
    Hostname4   192.x.x.4    Email4
    Hostname5   192.x.x.5    Email2
    Hostname6   192.x.x.6    Email3

    Index=scandata
    Ip           scanfindingname   scanfindingdescription
    192.x.x.4    java-blah         java-blah
    192.x.x.2    java-blah         java-blah
    192.x.x.2    java-blah2        java-blah2

I have figured out how to get the search, with a join of ip to ip_address, to display a table with a stats count by hostname, IP, and contact email, showing each hostname and its total number of findings. Table where Contact=Email2:

    Hostname    IP           Contact   Count
    Hostname2   192.x.x.2    Email2    2
    Hostname5   192.x.x.5    Email2    1

I cannot figure out how to create an automated email for each email address from the hostnames index. It's essentially 3 queries (see the sketch after this list):

1. Get the list of email addresses from the contact field in the hostnames index (dedup contact): [Email1, Email2, Email3]
2. Find scan data by IP and grab the hostname and total found by hostname where contact = $Email$
3. Email the table to $Email$

Any advice is appreciated.
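A hedged sketch of the per-contact loop using `map`, assuming hypothetical field names matching the samples above (Contact, ip_address, Ip, Hostname). `map` re-runs the inner search once per contact, so it is only practical for a short contact list, and `sendemail` needs a working mail server configuration; none of this is a confirmed recipe:

    index=hostnames
    | dedup Contact
    | table Contact
    | map maxsearches=50 search="search index=scandata
        | rename Ip as ip_address
        | join ip_address [ search index=hostnames Contact=$Contact$ | table Hostname, ip_address, Contact ]
        | stats count as Count by Hostname, ip_address, Contact
        | sendemail to=$Contact$ sendresults=true inline=true"

In practice a separate saved search or alert per contact (or a lookup driving the alert action) is often simpler than looping in SPL; the map version is shown only because it mirrors the three-step loop described.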

Field Extraction from Source Field in props.conf

Hello, I am going bananas trying to figure out the error in my props.conf. All of my logs are collected using Splunk Enterprise and forwarded to a centralized server that I do not have CLI access to. I do all of my main configuration from the source host command line and forward the data to the centralized server. I need to extract a field called "microservice" from my source path. I have tested my regular expression in search with the following statement, and it works:

    host=myhostname sourcetype=log4j | rex field=source "^\/opt\/apps\/myapp\/microServices\/(?<microservice>\w+)\/.*"

Example path: /opt/apps/myapp/microServices/neededDirectoryName/Logs/mylog_log.log

There are many directories that I am collecting logs from that are the same sourcetype (log4j). I am also only indexing error logs from this sourcetype; that is what the TRANSFORMS is for. I'll include my transforms.conf for reference. I have other regular expressions extracting fields from the log events on Splunk Web (on the centralized server).

props.conf:

    [log4j]
    EXTRACT-mspls = ^\/opt\/apps\/myapp\/microServices\/(?<microservice>\w+)\/.* in source
    TRANSFORMS-set = nullqueue, errorlogs

transforms.conf:

    [nullqueue]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [errorlogs]
    REGEX = ^(\[ERROR\]|\[WARN\]|\[MANDATORY\])
    DEST_KEY = queue
    FORMAT = indexQueue

Thank you!!
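One thing worth noting as a hedged sketch rather than a confirmed fix: `EXTRACT-*` stanzas are search-time settings, so they only take effect on the instance where searches run (here, the centralized server), while `TRANSFORMS-*` routing is index-time and belongs on the instance that parses the data. A props.conf sketch split by role, reusing the stanza above:

    # props.conf on the centralized server / search head (search time)
    [log4j]
    EXTRACT-mspls = ^\/opt\/apps\/myapp\/microServices\/(?<microservice>\w+)\/.* in source

    # props.conf on the host that parses the data (index time)
    [log4j]
    TRANSFORMS-set = nullqueue, errorlogs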