Channel: Questions in topic: "splunk-enterprise"

Why is it recommended to harden the KV Store?

Splunk documentation ("[Harden your KV store port][1]") states "we recommend that you secure your environment by restricting KV store access to your port", but there doesn't seem to be any documentation covering any of the following:

1. What is the risk of NOT hardening the port?
2. What, if any, integral security is included with the KV store?
3. What are the appropriate methods to harden it?

As to the first point, I presume the risks are potential exfiltration of data and/or alteration of the KV store, but that goes to the second point: why isn't the integral security adequate? Is MongoDB security broken? Are connections encrypted? How is a connection authenticated? [The KV store documents][2] don't mention any of this. Based on the [MongoDB documentation][3], I presume the recommended hardening method is iptables, but the Splunk docs don't mention this either.

In other words, what is the basis of this recommendation? More info/documentation is needed.

[1]: https://docs.splunk.com/Documentation/Splunk/latest/Security/HardenyourKVstoreport
[2]: http://docs.splunk.com/Documentation/Splunk/latest/Admin/TroubleshootKVstore
[3]: https://www.mkyong.com/mongodb/mongodb-allow-remote-access/
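For what it's worth, the hardening the documentation seems to be pointing at is host-level firewalling of the KV store port (8191 by default, set under `[kvstore]` in server.conf). A minimal iptables sketch, assuming hypothetical search head cluster member IPs of 10.0.1.11 and 10.0.1.12:

```
# Allow only other SHC members to reach the KV store port; drop everything else.
# 8191 is the default KV store port; the peer IPs below are placeholders.
iptables -A INPUT -p tcp --dport 8191 -s 10.0.1.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 8191 -s 10.0.1.12 -j ACCEPT
iptables -A INPUT -p tcp --dport 8191 -j DROP
```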

How to set a token from a base search in my dashboard to be consumed in an HTML panel?

Hello, like the [previous post][1], I would like to interpret code in HTML, with one small change: the HTML is stored in a token.

The base search is index=* | stats count by sourcetype over a -60m@m to now time range, and the HTML panel should show "Number of results : <BR/> $result.sourcetype$" (or "No result found" when there are no results). The token I am working with is $tok_wimg$.

How can I get the HTML panel to display something like:

> Number of results:
> 2

[1]: https://answers.splunk.com/answers/442254/how-to-set-a-token-from-a-base-search-in-my-dashbo.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
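For reference, a minimal Simple XML sketch of the underlying pattern, assuming hypothetical token names tok_count and tok_sourcetype: a base search's <done> handler copies values from the first result row into tokens, which an HTML panel can then display.

```xml
<form>
  <!-- Base search: the <done> handler publishes values from the first result row as tokens -->
  <search id="base">
    <query>index=* | stats count by sourcetype</query>
    <earliest>-60m@m</earliest>
    <latest>now</latest>
    <done>
      <set token="tok_count">$result.count$</set>
      <set token="tok_sourcetype">$result.sourcetype$</set>
    </done>
  </search>
  <row>
    <panel>
      <html>
        <p>Number of results: <b>$tok_count$</b> (first sourcetype: $tok_sourcetype$)</p>
      </html>
    </panel>
  </row>
</form>
```

Note that $result.*$ only exposes the first row of the final results; rendering HTML that is itself stored inside a token (as with $tok_wimg$) is a separate concern and may still need the approach from the linked post.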

Splunk bucket replication network limit in multisite

We recently set up a multisite cluster with replication between the sites. This is causing network congestion when it comes to replicating the buckets. Is there a way to limit this using something like limits.conf?
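There doesn't appear to be a direct bytes-per-second throttle for bucket replication, but as a hedged sketch, the concurrency knobs in server.conf on the cluster master indirectly cap how much replication traffic runs at once. The values below are illustrative, and the defaults/semantics are worth double-checking against the server.conf spec for your version:

```ini
[clustering]
# Maximum concurrent replications a peer can participate in as a target
max_peer_rep_load = 3
# Maximum concurrent bucket fix-up (build) activities per peer
max_peer_build_load = 1
```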

1st Time Setup of Universal Forwarder for Windows Log Collection and Missing Something

I am trying to set up my Splunk Enterprise 6.6.1 deployment to ingest Windows logs from remote PCs, but I'm not having much luck. I know I am missing something, or not comprehending something, but I can't figure it out.

So far, I have configured the receiver on my indexer as TCP port 9997. I have installed the Windows universal forwarder v7.0.0 on the Windows PC I want to collect the logs from, and enabled collection of both the System and Application logs. I am seeing the following in the splunkd log file on the client where the universal forwarder is installed:

    09-29-2017 08:58:23.417 -0400 INFO TcpOutputProc - Connected to idx=10.0.103.210:9997, pset=0, reuse=0.
    09-29-2017 08:58:59.026 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.1.211.25_8089_bens-testbox.patientfirst.com_BENS-TESTBOX_FC09E8A3-4F3E-4CCC-BF5B-8C3D6884D2C4
    09-29-2017 08:59:59.040 -0400 INFO HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_10.1.211.25_8089_bens-testbox.patientfirst.com_BENS-TESTBOX_FC09E8A3-4F3E-4CCC-BF5B-8C3D6884D2C4

I have the following in my inputs config on the universal forwarder client:

    [default]
    host = BENS-TESTBOX

    # Windows platform specific input processor.
    [WinEventLog://Application]
    disabled = 0

    [WinEventLog://Security]
    disabled = 1

    [WinEventLog://System]
    disabled = 0

I then have the following in my Splunk Enterprise inputs config file:

    [default]
    host = splunk1

    [splunktcp://9997]
    connection_host = none
    disabled = 0

When I search through my search head (currently my setup is a single indexer with a single separate search head) for host: #ipofclientpc, I don't get anything. I have not set up a data input, which I think is my issue, but I can't figure out the correct process to configure that to pull/receive from the forwarder. If anyone can help, I would be most appreciative.
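Since the TcpOutputProc line shows the forwarder already connected to the indexer, two things worth double-checking, offered as a hedged sketch (the IP below is taken from the log snippet above, the group name is arbitrary): the forwarder's outputs.conf, and the search being used on the search head.

```ini
# outputs.conf on the universal forwarder (sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.103.210:9997
```

With `[splunktcp://9997]` enabled on the indexer, no additional data input should be required for forwarded data. Also note that without `index =` in the WinEventLog stanzas the events land in the default index, and the host field will be the forwarder's hostname rather than its IP, so a verification search over a wide time range such as `index=* host=BENS-TESTBOX` may find events that a search on the client's IP does not.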

How to resolve the error "Cannot get username when all users are selected"

I am getting the error "**Cannot get username when all users are selected**" in Splunk Web whenever I run any search. I have tried deleting cookies, but it didn't work. I am using an AWS ELB to load balance the 3 clustered indexers. Is there any issue with the load balancing configuration?

Error message: domain needs 'min' and 'max' fields

Hi, I have run the following search (Endpoint - Malware Daily Count - Context Gen), verified from a couple of different sources, and I get the above-mentioned error message. Any advice?

    | tstats `summariesonly` dc(Malware_Attacks.signature) as infection_count from datamodel=Malware.Malware_Attacks where earliest=-31d@d latest=-1d@d Malware_Attacks.action=allowed by Malware_Attacks.dest,_time span=1d
    | stats sum(infection_count) as total_infection_count by _time
    | stats count,median(total_infection_count) as median by _time
    | eval min=0
    | eval max=median*2
    | xsCreateDDContext name=count_1d container=malware type=domain terms="minimal,small,medium,large,extreme" scope=app app=SA-NetworkProtection
    | stats count

Transforms.conf not added to UI

I'm getting ready to upgrade an app that we had developed on Splunk 6.2. We are now going to start using version 7.0 and want to update the queries so that the app works properly in 7.0. However, we can't even get the app's transforms.conf to be recognized by Splunk. I have reviewed the newest transforms.conf docs, but nothing changed in the features we were using. Our transforms.conf file is loaded in the default folder of the app, and we even moved it over to the local folder, but it still isn't being recognized in the UI under Settings -> Fields -> Field transforms, nor is it transforming the data. Here is the transforms.conf file:

    [client_map]
    external_type = kvstore
    collection = genesis_location
    fields_list = src_ip, region, sitename
    max_matches = 1
    min_matches = 1
    default_match = UNKNOWN
    match_type = CIDR(src_ip)

We have confirmed that the collection is working and that the src_ip field is being exposed. Not sure why this isn't working.
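One hedged observation: a stanza with `external_type = kvstore` defines a lookup, so it appears under Settings -> Lookups -> Lookup definitions rather than under Fields -> Field transforms, and it only enriches events when it is actually invoked, either explicitly in SPL or via an automatic lookup in props.conf. A sketch, where the sourcetype name is hypothetical:

```ini
# props.conf in the same app
[my:sourcetype]
LOOKUP-client_map = client_map src_ip OUTPUTNEW region sitename
```

The equivalent explicit form in a search would be `... | lookup client_map src_ip OUTPUTNEW region sitename`.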

Need help on predict command usage in graph

I have a trend graph that shows some data and then predicts that data a couple of days forward. However, the prediction starts where the normal data starts, when I would rather have the prediction start on the graph where there is no previous data, basically attaching itself to the end of the previous trendline and extending it with the prediction. Is there a way to do this?
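A hedged SPL sketch of one way to keep the forecast from overlapping the observed series: run predict, then null out the prediction wherever real data exists, so the forecast line only appears after the data ends (the field name, span, and future_timespan below are illustrative):

```
... | timechart span=1d count
| predict count as forecast future_timespan=7
| eval forecast = if(isnotnull(count), null(), forecast)
```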

Format cell in table by comparing to another value

I have a table that is set up as below. I need to change the cell background color based on a comparison of each cell to the Requirement cell in that row. The column headers are going to change regularly as data snapshots are saved.

    KPI   Requirement  09/13/17 22:30  09/13/17 22:45  09/13/17 23:00  09/13/17 23:15
    KPI1  0.20         0.05            0.04            0.04            0.04
    KPI2  0.20         0.10            0.09            0.10            0.10
    KPI3  1.60         1.24            1.24            1.24            1.22
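This kind of per-row comparison usually ends up outside Simple XML. A hedged sketch of one approach: encode the requirement into each value cell upstream (e.g. "value|requirement" built with foreach/eval), then split and color in a custom table cell renderer. The table id (kpi_table), field names, colors, and file name below are all hypothetical; the dashboard would reference the script via its `script` attribute and the table would carry `id="kpi_table"`.

```js
// appserver/static/kpi_table.js (sketch)
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function($, mvc, TableView) {
    var KpiCellRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Only the snapshot columns, not the KPI/Requirement columns
            return cell.field !== 'KPI' && cell.field !== 'Requirement';
        },
        render: function($td, cell) {
            // Each cell is expected to look like "0.05|0.20" (value|requirement)
            var parts = String(cell.value).split('|');
            var value = parseFloat(parts[0]);
            var requirement = parseFloat(parts[1]);
            $td.text(parts[0]);
            if (!isNaN(value) && !isNaN(requirement)) {
                $td.css('background-color', value <= requirement ? '#65a637' : '#d93f3c');
            }
        }
    });
    mvc.Components.get('kpi_table').getVisualization(function(tableView) {
        tableView.addCellRenderer(new KpiCellRenderer());
    });
});
```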

How to compare to previous data and alert if the result is over 5 percent

We have monthly data for each SBU and we want to set up an alert if any total increases more than 5% for the upcoming month.

    index=mydata
    | bin span=1mon _time
    | stats sum(total) as Total_Val by _time, SBU
    | sort +SBU -_time

Can you please help us write a Splunk query to filter for any total that increases more than 5% compared with the previous month? Note: we have more than 50 SBUs.
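A hedged sketch of one way to do the month-over-month comparison with streamstats (field names follow the search above; the 5% threshold sits in the final where):

```
index=mydata
| bin span=1mon _time
| stats sum(total) as Total_Val by SBU, _time
| sort 0 SBU _time
| streamstats current=f window=1 last(Total_Val) as prev_val by SBU
| eval pct_change = round((Total_Val - prev_val) / prev_val * 100, 2)
| where pct_change > 5
```

Sorted ascending by _time within each SBU, prev_val is the prior month's total, so saving this as an alert that fires on a non-empty result set would flag any SBU that grew by more than 5%.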

After editing inputs.conf on the forwarder, data shows up unreadable

Hi, I edited the inputs.conf file on my forwarder, and once I restarted Splunk I can see the data in search, but it is not readable. Can anyone tell me what I am doing wrong?

    [default]
    host = xxxxxxx

    [monitor://C:\Windows\System32\winevt\Logs\*]
    disabled = false
    index = xxxxxx
    followTail = 0
    sourcetype = sync

All of my other data is coming in fine. Thanks!

![alt text][1]

[1]: /storage/temp/216658-sync-log3.png
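A likely explanation, offered as a hedged sketch: the files under C:\Windows\System32\winevt\Logs are binary .evtx files, so monitoring them directly with a [monitor://] stanza produces unreadable events. Windows event logs are normally collected with WinEventLog stanzas instead, along the lines of:

```ini
[WinEventLog://Application]
disabled = 0
index = xxxxxx

[WinEventLog://System]
disabled = 0
index = xxxxxx
```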

Trend values on x-axis and y-axis by serv

    index=... sourcetype=...
    | rex "(?) and (?\w+) and (?)"
    | table totaltime,duration
    | timechart or chart

I would like to populate totaltime on the x-axis and duration on the y-axis for each serv; that is, show a trend line graph based on the values of "totaltime" (x-axis) and "duration" (y-axis) for each "serv". Assume the sample below is from "serv1"; serv2, serv3, etc. also have to show on the graph.

Sample data:

**28820.220**: [Full GC (System.gc()) 8832K->8624K(37888K), **0.0261704 secs**]
29372.500: [GC (Allocation Failure) 23984K->8816K(37888K), 0.0013546 secs]
29932.500: [GC (Allocation Failure) 24176K->8808K(37888K), 0.0017082 secs]
30492.500: [GC (Allocation Failure) 24168K->8960K(37888K), 0.0017122 secs]
31047.500: [GC (Allocation Failure) 24320K->8944K(37888K), 0.0020634 secs]
31602.500: [GC (Allocation Failure) 24304K->8992K(37888K), 0.0017542 secs]
32157.500: [GC (Allocation Failure) 24352K->8968K(37888K), 0.0018971 secs]
32420.247: [GC (System.gc()) 16160K->8944K(37888K), 0.0012816 secs]
8186.000: [GC (Allocation Failure) 91332K->36212K(246272K), 0.0081127 secs]
8347.676: [GC (System.gc()) 42225K->35996K(246272K), 0.0040077 secs]
**8347.678:** [Full GC (System.gc()) 35996K->21313K(246272K), **0.1147433 secs**]
8929.342: [GC (Allocation Failure) 76609K->24356K(246784K), 0.0047687 secs]
8952.577: [GC (Allocation Failure) 80164K->29098K(246272K), 0.0053928 secs]
9921.694: [GC (Allocation Failure) 84906K->27626K(247808K), 0.0053474 secs]
11567.840: [GC (Allocation Failure) 85994K->27730K(247808K), 0.0030062 secs]
11947.795: [GC (System.gc()) 41757K->27562K(248320K), 0.0035917 secs]
**11947.797**: [Full GC (System.gc()) 27562K->22923K(248320K), **0.1237187 secs**]
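The named capture groups in the rex above appear to have been stripped when the question was posted. A hedged reconstruction against the sample GC-log lines (the group names totaltime, gctype, and duration are hypothetical, and serv is assumed to already exist as a field), followed by one way to chart duration over totaltime split by serv:

```
index=... sourcetype=...
| rex "^(?<totaltime>\d+\.\d+):\s+\[(?<gctype>[^,]+),\s+(?<duration>\d+\.\d+)\s+secs\]"
| chart max(duration) over totaltime by serv
```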

How do I replicate settings in system/local across the search head cluster?

When using a standalone search head, we made configuration changes in `etc/system/local/`, e.g. outputs.conf, limits.conf, etc. I've converted this standalone instance to a search head cluster, but I don't want to go into each cluster member and reconfigure these settings. How would I ensure that I can create the configurations in one place and replicate them to the cluster members?

My current idea is to add these configurations to the deployer, e.g. `etc/shcluster/apps/custom_configs/limits.conf`, and then set the app to export its settings using `export=system`. This worked when migrating saved searches and custom apps, but I worry that the same is not true for configurations that are not part of any app.
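A hedged sketch of the layout being described on the deployer (the app name is the one from the question). Whether `export = system` matters depends on the file: it applies to knowledge objects, whereas settings such as limits.conf and outputs.conf participate in global configuration layering from any app regardless.

```
$SPLUNK_HOME/etc/shcluster/apps/custom_configs/
    default/
        limits.conf
        outputs.conf
    metadata/
        default.meta    # "[]" stanza with "export = system" for shareable knowledge objects
```

The bundle would then be pushed to the members with something like `splunk apply shcluster-bundle -target https://<member>:8089 -auth admin:<password>`.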

Error message when running a search on the search head - Unable to distribute to peer

I get the following error message when running a search on the search head:

    Unable to distribute to peer named :8089 at uri=:8089 using the uri-scheme=https because peer has status="Down". Please verify uri-scheme, connectivity to the search peer, that the search peer is up, and an adequate level of system resources are available. See the Troubleshooting Manual for more information.

I've tried increasing timeout settings in distsearch.conf with no luck. I have also checked the system resources on the search head and the indexers and didn't see any constraint. Do you have any ideas?

Should metrics support overwriting events instead of duplicating metrics

In Splunk 7.0.0, when sending data to a metrics index, it looks like one can send duplicate metric measurement events (e.g., the same tuple of time, metric name, and dimensions), and the metrics index will store all duplicates, thereby affecting the statistics that come out. Is that the intended behavior for the metrics store? Other time-series metric stores I have played with use overwrite/last-in logic that preserves only the most recently indexed value for a given metric tuple. Using similar logic here would seem to make more sense for the use cases I would see for the metrics store, but I freely admit to making assumptions. Please clarify how allowing duplicate metric events is intended to be used/handled.

Note: my understanding of a distinct metric tuple is the timestamp (to milliseconds), metric name, and dimension fields. So, assuming the following two metric tuples arrive at the indexer at different times (the first column), only the later one (the top row) would be saved in the index. Right now (as of Splunk 7.0.0), both are saved in the metrics index/store.

    | indexing timestamp | metric timestamp | metric name      | metric value | server      |
    | 1506708015.390     | 1506708000.000   | server.power.kwh | 126.06       | na-server-1 |
    | 1506708010.242     | 1506708000.000   | server.power.kwh | 104.56       | na-server-1 |

How do I increase an index's maximum size from 500 GB to 2 TB in a multisite cluster environment?

One of my indexes has a maximum size of 500 GB and is now almost full, so I want to increase its size to 2 TB. I am using a multisite cluster environment. Can anyone please suggest how to do it?
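A hedged sketch of the usual mechanics in an indexer cluster: raise the index's maxTotalDataSizeMB in the indexes.conf under the cluster master's master-apps and push it to the peers. The index name below is a placeholder, and 2 TB is expressed in MB:

```ini
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf on the cluster master
[your_index]
maxTotalDataSizeMB = 2097152    # ~2 TB
```

This would be followed by `splunk apply cluster-bundle` on the master. If homePath.maxDataSizeMB, coldPath.maxDataSizeMB, or volume-level maxVolumeDataSizeMB limits are in use, those would need raising as well.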

Comparing values in dashboard and then applying traffic light colors

I need to compare values in columns to a column that contains a performance requirement. The requirement will be different in each row, and the column headers (Val1, Val2, Val3) are dates, so they will be relatively random. Is this possible in Simple XML? If not, does anyone have a non-Simple-XML solution?

Example:

    KPI   Requirement  Val1  Val2  Val3
    KPI1  2            1.5   2.2   1.9
    KPI2  3            2.5   3.2   2.6

For KPI1, Val1 and Val3 would be green, Val2 would be red, etc.
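For the non-Simple-XML route, a hedged sketch of the upstream half: pack each value cell together with its row's requirement ("value|requirement") using foreach, so a custom table cell renderer (like the one sketched for the similar cell-formatting question above) can split the pair and pick a color per cell.

```
... | foreach Val* [ eval <<FIELD>> = '<<FIELD>>' . "|" . Requirement ]
```

With date-based column headers, the `Val*` wildcard would need adjusting; foreach accepts multiple (including quoted) wildcarded field specifiers.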

Plotting a timeline

Hello: I have a long row of dates and times for each overall "event", so the data looks like:

    8/11/2017 18:00:00 8/15/2017 04:00:00 8/19/2017 15:00:00

Can you recommend the best way to plot this information? I'm a little thrown off since I have multiple timestamps in one row. Thanks!
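A hedged sketch of one way to turn a row of timestamps into plottable events: extract all the timestamps into a multivalue field, expand them, and parse each into _time. The field name ts and the timestamp format are assumptions based on the sample above:

```
... | rex max_match=0 "(?<ts>\d{1,2}/\d{1,2}/\d{4} \d{2}:\d{2}:\d{2})"
| mvexpand ts
| eval _time = strptime(ts, "%m/%d/%Y %H:%M:%S")
| timechart span=1d count
```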

Drilldown: Use starttime of bar in timechart as `earliest` field in subsequent search

After spending hours unsuccessfully searching Splunk Answers for a solution, I would like to phrase my question: I have a timechart which I display in a dashboard. When I click on a bar, I would like a new search to be triggered with a time interval matching that of the clicked bin in the timechart.

Unfortunately, using $earliest$ and $latest$ does not give me the time interval of the clicked bin, but that of the whole timechart query. On the other hand, $click.value$ does give me the right start time, but in the following format: 2017-09-29T01:00:00.000-04:00, which I then can't use to set my field in the query. I could reformat the $click.value$ string to the expected epoch format, using strptime("2017-09-27T22:04:00.000-04:00", "%Y-%m-%dT%H:%M:%S.%3N-%:z"), but I don't know if I can run this command as a script in the dashboard XML. Does anybody have a solution for this? I am a bit amazed that this is such a struggle; it seems like a common use case.
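A hedged Simple XML sketch of doing the conversion inside the drilldown itself: an <eval> element can run strptime on $click.value$ and hand the resulting epoch values to the target search, so no external script is needed. The query, the 3600-second span, and the token names below are illustrative:

```xml
<chart>
  <search>
    <query>index=_internal | timechart span=1h count</query>
  </search>
  <drilldown>
    <!-- $click.value$ is the clicked bin's start time in ISO-8601 -->
    <eval token="tok_earliest">strptime("$click.value$", "%Y-%m-%dT%H:%M:%S.%3N%:z")</eval>
    <eval token="tok_latest">strptime("$click.value$", "%Y-%m-%dT%H:%M:%S.%3N%:z") + 3600</eval>
    <link target="_blank">search?q=search index=_internal&amp;earliest=$tok_earliest$&amp;latest=$tok_latest$</link>
  </drilldown>
</chart>
```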

What is the best practice: Implicit or Explicit Index Path Locations?

Curious what the recommended approach is. I know the second one makes sense for readability, but I feel the first one would greatly reduce retyping and indexes.conf file size.

**Practice 1**

    [default]
    coldPath=$SPLUNK_DB/$_index_name/colddb
    homePath=$SPLUNK_DB/$_index_name/db
    thawedPath=$SPLUNK_DB/$_index_name/thaweddb
    frozenTimePeriodInSecs = 200000

    [foo]
    frozenTimePeriodInSecs = 100000

    [bar]

**Practice 2**

    [default]
    frozenTimePeriodInSecs = 200000

    [foo]
    coldPath=$SPLUNK_DB/foo/colddb
    homePath=$SPLUNK_DB/foo/db
    thawedPath=$SPLUNK_DB/foo/thaweddb
    frozenTimePeriodInSecs = 100000

    [bar]
    coldPath=$SPLUNK_DB/bar/colddb
    homePath=$SPLUNK_DB/bar/db
    thawedPath=$SPLUNK_DB/bar/thaweddb