Channel: Questions in topic: "splunk-enterprise"

Does the Splunk Add-on for AWS support server-side encryption/decryption of Kinesis streams?

Hi, we encrypted our Kinesis stream with server-side encryption (https://aws.amazon.com/blogs/aws/new-server-side-encryption-for-amazon-kinesis-streams/). We then set up the Splunk Add-on for AWS with an IAM instance profile role that has decrypt permission on the KMS key, but when we check whether data is flowing, we are not seeing any logs come in. Does the Splunk Add-on for AWS support decrypting encrypted Kinesis stream records?

Convert an indexer into a (heavy?) forwarder?

Hello all, we are replacing our single Splunk indexer with a pair of new indexers and have migrated all the indexes except those filled by syslog sources. We know that sending syslog straight to an indexer is not best practice, so we are now looking at directing it to syslog-ng first. However, we would like to use the old Splunk indexer server to take the output of syslog-ng and load balance it to the two new indexers. What we don't understand is whether this is simply a matter of editing the old indexer's outputs.conf, or whether the instance will still need to listen on the various UDP data input ports and direct them to the correct indexes. Thanks in advance!
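For the load-balancing half of this, an outputs.conf on the old indexer (repurposed as a forwarder) might look roughly like the sketch below; the hostnames and port are placeholders, not your actual values:

```
# outputs.conf on the old indexer, now acting as a forwarder
# (hostnames and ports are placeholders)
[tcpout]
defaultGroup = new_indexers
indexAndForward = false

[tcpout:new_indexers]
server = new-idx1.example.com:9997, new-idx2.example.com:9997
```

With this in place the instance auto-load-balances across the two servers. The UDP inputs, and their index= assignments in inputs.conf, would still live on this box, since it is the forwarder rather than the downstream indexers that stamps each event with its target index.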

How to click on a table in a dashboard and open the search in a new window in HTML?

I have a dashboard coded in HTML, and when I click on a table element it changes the page to show the search that ran in Splunk. Is there a way to click on the table but keep the dashboard up and show the search in a new tab or window, using drilldown or something?

How can I search for events that match two subsearches?

I'm trying to pull back events that have a specific field value, but I should only return those events if they have related events (two subsearch criteria match). E.g., I have a part, and I only want to return that part if it has two subparts:

```
{ "part_id": 1234, "part_name": "main", .... }
{ "ref_part_id": 1234, "part_name": "docker", "manufacturer": "bar", ... }
{ "ref_part_id": 1234, "part_name": "docker", "manufacturer": "foo", ... }
{ "part_id": 5678, "part_name": "main", .... }
{ "ref_part_id": 5678, "part_name": "docker", "manufacturer": "foo", ... }
{ "ref_part_id": 5678, "part_name": "docker", "manufacturer": "bar", ... }
```

I only want to return events where the field 'part_name' is 'main', but only those belonging to a main part with a specific ID whose 'docker' subparts come from both manufacturers ('foo' and 'bar'); it can have other parts and manufacturers, but it HAS to at least have those two.

```
part_name=main
| join max=0 part_id [search manufacturer=bar part_name=docker | rename ref_part_id AS part_id]
| join max=0 part_id [search manufacturer=foo part_name=docker | rename ref_part_id AS part_id]
```

I'm getting unexpected results.
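One join-free way to sketch this is to have a subsearch return the part_ids whose docker subparts cover both manufacturers, then filter the main events by those IDs. This is only a sketch of the approach, not tested against your data:

```
part_name=main
    [ search part_name=docker (manufacturer=foo OR manufacturer=bar)
      | eval part_id=ref_part_id
      | stats dc(manufacturer) AS mfg_count BY part_id
      | where mfg_count=2
      | fields part_id ]
```

The subsearch counts distinct manufacturers per referenced part and keeps only the parts that have both; the returned part_id values are implicitly ANDed into the outer search.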

Delta over Multiple SIDs

In my raw data I have a lot of values for a field called "sid". For each of those values I want to calculate the delta of another field, "Number", from the previous value for that sid. Then I need to do a little eval on it and then timechart the values for each sid. Basically I want to do this:

```
index=foo sid="bar" | delta p=2 Number AS num | eval num=abs(num/300) | timechart avg(num)
```

BUT that only does the work for the one sid "bar". I want to do it all in one search for all values of "sid", i.e. get that sid="bar" out of there. I've tried the foreach command, but I'm doing something wrong because I keep getting an error like "non-streaming commands not allowed".
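One way to sketch a per-sid delta without foreach is streamstats with a BY clause, which carries the previous event's value within each sid group (note this uses the immediately preceding value rather than delta's p=2, so adjust if you really need two events back):

```
index=foo
| streamstats current=f last(Number) AS prev_num BY sid
| eval num=abs((Number - prev_num)/300)
| timechart avg(num) BY sid
```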

How to reduce the timespans used by accelerated searches?

We use accelerated searches to speed up the data being presented by our dashboards, but we would like to reduce the amount of space that it is using. However, because this data is aggregated by a unique ID, I don't need the acceleration for 1s, 10sec, etc. I would like to change the timespans to just 1 hour and 1 day. I have tried changing the `auto_summarize.timespan` field in the Advanced Edit view for the report, but the field did not seem to change (it is still empty). Should I be changing the timespan in `auto_summarize.timespan` or in `auto_summarize.command`?
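If editing the field through Advanced Edit doesn't stick, one option is to set it directly in savedsearches.conf for the report; a sketch, with a placeholder stanza name:

```
# savedsearches.conf -- "My Accelerated Report" is a placeholder stanza name
[My Accelerated Report]
auto_summarize = 1
auto_summarize.timespan = 1h, 1d
```

auto_summarize.timespan takes a comma-delimited list of spans; the existing summary may need to be rebuilt before the change takes effect.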

Extract a value from a log

Hi, can someone help me extract "536 MiliSeconds" from the log below?

```
6>2017-11-02T05:55:12Z d065d14b-3bcd-481c-512a-bfd42485714d doppler[15]: {"cf_app_id":"d5632633-2365-4b73-83ba-27d07e5d2c4d","cf_app_name":"xln-sm-d-MarketingExtOffer-sit3","cf_ignored_app":false,"cf_org_id":"39dd3350-6ecd-4af6-a32d-72f1825bb516","cf_org_name":"CROSS_LOB-NAM","cf_origin":"firehose","cf_space_id":"be47e794-aa31-414a-88bc-3fb616b224b2","cf_space_name":"SALES_MKTG-SIT3","deployment":"cf","event_type":"LogMessage","ip":"153.40.210.253","job":"diego_cell-partition-6b050969f85bcb42df10","job_index":"71","level":"info","message_type":"OUT","msg":" INFO [nio-8080-exec-9] c.c.e.i.u.ExtOfferConnectionUtil c.c.e.i.u.ExtOfferLogUtil.logInfo(ExtOfferLogUtil.java:27) - POST|/private/v1/offer/rocketFuel|84f85631-114f-41f5-9824-f526fa541477|372aae13-728c-4002-87f5-5d8aed0a9815|||||||||||ExtOfferConnectionUtil- Time taken for ExtOfferConnectionUtil.sendRF is : 536 MiliSeconds","origin":"rep","source_instance":"0","source_type":"APP","time":"2017-11-02T05:55:12Z","timestamp":1509602112630605039
```
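Assuming the number always follows the literal "sendRF is :" text, a rex along these lines should pull it out (the field name time_taken_ms is just a label I picked):

```
... | rex "ExtOfferConnectionUtil\.sendRF is : (?<time_taken_ms>\d+) MiliSeconds"
| table time_taken_ms
```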

Not able to see the syslogs of ASA on Splunk Web

Hi all, I've configured my ASA to send syslog to a Splunk server installed on CentOS. I took a capture on the ASA and can see packets leaving it. I took a capture on the CentOS machine on port 514 and the packets are making it there as well. For some reason I don't see them on Splunk Web. I've created a data input for UDP port 514 (all defaults), source type cisco:asa. I'm really not sure what piece of info or config I'm missing here. I would appreciate your quick help. Regards, Dv

REST modular input index zip file error

I want to read a file inside zip archives and index it into Splunk with the REST modular input. The following is my code for the handler: ![alt text][1] I have been doing research to get my code to work for a week now with no result. I tried running the code on an actual zip file and it works. But when I make the change to the actual responsehandlers.py, it doesn't run the whole class, and I believe it gets stuck on the line `file = zipfile.ZipFile(io.BytesIO(response_object.content))`. That is why I added the lines `print_xml_stream(response_type)` and `print_xml_stream("test2")` to test it, and only the first print works. The response_type that got indexed is "text", which leads me to wonder whether the REST modular input can handle zip files at all, since in the drop-down list I only see json, xml and text. I read a few things about the Python SDK that works over REST; would this be a good workaround? [1]: /storage/temp/219715-rest.png

How to round the 3-decimal percentage in a pie chart?

I have a pie chart and I added the following line to the source to display the percentage on the pie chart: `<option name="charting.chart.showPercent">true</option>` But the percentage is displaying as xx.xyz. How can I round it to a single decimal place, like xx.y? Is there any way to round the percentage from the XML?

search/jobs/{search_id}: Is there a way to get only "isDone" when viewing the status of a search job?

Hi, when I try to get the status of a search_id using the REST endpoint search/jobs/{search_id}, I see a lot of information in the response. Is there a way to get only the field "isDone" to check the status of the job, without all the other information?
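From within Splunk you can narrow this down in SPL with the rest command, keeping only the field you care about (the sid value below is a placeholder):

```
| rest /services/search/jobs
| search sid="<your_sid>"
| fields sid isDone
```

When calling the endpoint directly, the REST API also accepts an f=<field> query parameter to restrict which fields come back, which may be the lighter-weight option.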

Centralized Splunk config synchronization over HTTPS

I have a customer with a very unusual network environment. They will have multiple ES clusters worldwide. The only way those clusters can talk to a central region is via web proxies that don't support SOCKS5 and can't anytime soon for a variety of political reasons. Does anyone know of a method or tool to achieve something like rsync over HTTPS, so that I can have a centralized Splunk instance that I use to configure ES, TAs, dashboards, etc., and the distributed ES clusters can pull down the content and keep it in sync? Thanks. C

Why are some of the checkboxes in my dashboard greyed out?

I have a dashboard that has a number of tick boxes: 4 in one panel at the top of the dashboard (snippet of a tick box that I can see and tick) ![snippet of tick box that I can see and tick][1] and 4 in a panel at the bottom of the dashboard (snippet of a tick box that is greyed out so I can't tick/untick it) ![snippet of tick box that is greyed out so I can't tick/untick][2] The 4 in the panel at the bottom appear greyed out, so I can't tick them. Any idea why this is the case? I am pretty sure they were not greyed out in the past. [1]: /storage/temp/218728-tick-box-uanable.png [2]: /storage/temp/218730-tick-box-greyedout.png

How can I search for two different error messages to see if they both happened in a one-minute timespan?

I have 2 sourcetypes, each with a different error message. How can I search for those 2 different error messages to see whether they both happened within the same 1-minute timespan?

```
sourcetype=first OR sourcetype=second_one ErrorMessage="timeout" OR ErrorMessage="brokenPipe" | bucket _time span=1m
```
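One way to sketch the correlation is to bucket by minute and count distinct error messages per bucket; a minute that saw both messages will have a distinct count of 2:

```
(sourcetype=first OR sourcetype=second_one) (ErrorMessage="timeout" OR ErrorMessage="brokenPipe")
| bucket _time span=1m
| stats dc(ErrorMessage) AS distinct_errors values(ErrorMessage) AS errors BY _time
| where distinct_errors=2
```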

Splunk Universal Forwarder fails port scans on AIX 7.1 servers

I have several AIX servers (AIX 7.1) with Splunk Universal Forwarder 6.5.2 that all fail Nessus port scans for allowing TLS 1.0 on port 8089. All configs, verified by btool, have "sslVersions" and "sslVersionsForClient" set to tls1.2, and Splunk has been restarted many a time after making sure these configs are correct. I have several Red Hat Linux servers (RHEL 6) with the same Universal Forwarder version and the same SSL settings in the same configs that do NOT fail port scans for any issues related to port 8089. Has anyone encountered the same issue on AIX platforms? The AIX platforms all have OpenSSL 1.0.1e, while the Linux platforms have OpenSSL 0.9.8e-fips-rhel5. I appreciate any insight.