Channel: Questions in topic: "splunk-enterprise"

Why am I getting messages to review roles for unnecessary read or write access to authorize.conf?

Hi, every day I am getting a message like "Review roles for unnecessary read or write access to authorize.conf and remove access if possible". What could be the possible reason for this?
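A hedged starting point for investigating: list each role's capabilities via the REST API and compare against what you expect (this is a sketch using the standard `/services/authorization/roles` endpoint; the exact fields returned may vary by version). The message typically comes from a Splunk health check that flags roles with read or write ACLs on authorize.conf beyond the admin role.

```
| rest /services/authorization/roles splunk_server=local
| table title capabilities imported_capabilities
```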

Are there related fields between sudo log and LDAP log? I want to monitor daily Linux sudo activity.

I have a requirement for a daily report of Linux sudo activity. My understanding is that the LDAP log will tell me whether the user successfully authenticated, and the sudo log will tell me what command was executed and where. Can I relate both logs using a common key or field to fetch results from both? I don't see one. Has anyone tried an approach for this? Please let me know.
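One hedged way to correlate the two sources is on a shared username field, assuming both sourcetypes extract `user` (the sourcetype names and the `command` field below are assumptions — substitute whatever your add-ons actually produce):

```
index=os (sourcetype=linux:sudo OR sourcetype=ldap:auth) earliest=-1d@d latest=@d
| stats values(command) as sudo_commands
        values(host)    as hosts
        count(eval(sourcetype="ldap:auth")) as ldap_auth_events
  by user
| where isnotnull(sudo_commands)
```

If the two logs record usernames in different formats (e.g. DN vs. short name), an `eval`/`lookup` normalization step on `user` would be needed first.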

Why is my eval if() not working consistently?

I'm having a difficult time getting what I believe is a simple eval command to work as I would expect. What I'm trying to accomplish is to convert a 1 or 0 into Yes or No respectively. I'm able to do so just fine on one field, but two others are giving me a problem. Here's the search I'm running:

```
| rest /servicesNS/-/-/saved/searches
| search action.myAction=1
| foreach action.myAction.param.myParam1 action.myAction.param.myParam2 is_scheduled
    [eval <<FIELD>>=if(isnull(<<FIELD>>), <<FIELD>>, if(<<FIELD>>=1, "Yes", "No"))]
| rename action.myAction.param.myParam1 as param1, action.myAction.param.myParam2 as param2
```

I had to add the isnull check because the two param fields do not always have data in them, and the search would not run without it. This search does result in Yes/No values in the is_scheduled field, but the param fields remain unchanged. To my knowledge, Splunk is treating them as numbers, as they are right-justified in the results table. Here is some sample output from the above search:

```
title, param1, param2, is_scheduled
alert1, , yes
alert2, 1, 1, Yes
alert3, 1, 0, Yes
alert4, 0, 0, Yes
```

I have also tried adding more fields to test whether the data is a string, number, or null, but end up with very strange results from that. I added the following lines between the search and foreach lines to get the results below:

```
| eval isNumber=if(isNum(action.myAction.param.myParam1), "yes", "no")
| eval isString=if(isStr(action.myAction.param.myParam1), "yes", "no")
| eval isNull=if(isNull(action.myAction.param.myParam1), "yes", "no")
```

Results:

```
title, param1, param2, is_scheduled, isNumber, isString, isNull
alert1, , yes, no, no, yes
alert2, 1, 1, Yes, no, no, yes
alert3, 1, 0, Yes, no, no, yes
alert4, 0, 0, Yes, no, no, yes
```

I have copied and pasted the field name everywhere within the command to make sure I haven't typo'd anything, and I have tried renaming the fields prior to the eval command and using the renamed field instead of the original, but that changes nothing. I have also tried doing it outside a foreach loop, but still get the same results. What am I missing? Is there a better way to accomplish what I'm trying to do?
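For what it's worth, the symptoms match a well-known eval pitfall: a field name containing dots is parsed as an arithmetic expression (action minus myAction minus ...) unless it is wrapped in single quotes, which would also explain why the isNull test returns yes even when the field has a value. A sketch of the foreach with quoting added (untested against this exact data):

```
| rest /servicesNS/-/-/saved/searches
| search action.myAction=1
| foreach action.myAction.param.myParam1 action.myAction.param.myParam2 is_scheduled
    [eval '<<FIELD>>'=if(isnull('<<FIELD>>'), '<<FIELD>>', if('<<FIELD>>'=1, "Yes", "No"))]
```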

Streamstats Question

Using the query below, could you help me identify servers that were added on a daily basis? For example, today is Friday the 13th; I would like to see new servers that were not on the report on Thursday the 12th. Alternatively, I would like to see servers that were removed.

Query:

```
index=#### sourcetype=#### Name="####*"
| table Name _time OS LastScanDate
| eval Days=round((relative_time(now(),"@d")-relative_time(LastScanDate,"@d"))/86400,0)
| eval LastScanDate=strftime(LastScanDate, "%Y-%m-%d")
| sort by Name _time
| streamstats window=1 current=f global=f values(LastScanDate) as prev
| eval John=strftime(LastScanDate, "%d")
```

Example:

```
Name                _time                          OS                      LastScanDate  Days  prev
Sever 1             2017-10-06T23:45:48.840-0500   Windows Server 2016     9/12/2017     31
####WCAPPW1601      2017-10-07T23:45:15.257-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-08T23:45:53.773-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-08T23:50:59.393-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-09T23:45:11.293-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-10T23:45:15.580-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-11T23:45:37.297-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WCAPPW1601      2017-10-12T23:45:55.467-0500   Windows Server 2016     9/12/2017     31    9/12/2017
####WDAPPBSO06B     2017-10-06T23:45:48.840-0500   Windows Server 2012 R2  9/14/2017     29    9/12/2017
####WDAPPBSO06B     2017-10-07T23:45:15.257-0500   Windows Server 2012 R2  9/14/2017     29    9/14/2017
####WDAPPBSO06B     2017-10-08T23:45:53.773-0500   Windows Server 2012 R2  9/14/2017     29    9/14/2017
####WDAPPBSO06B     2017-10-08T23:50:59.393-0500   Windows Server 2012 R2  9/14/2017     29    9/14/2017
####WDAPPBSO06B     2017-10-09T23:45:11.293-0500   Windows Server 2012 R2  9/14/2017     29    9/14/2017
####WDAPPServer02A  2017-10-06T23:45:48.840-0500   Windows Server 2012 R2  9/19/2017     24    9/14/2017
####WDAPPServer02A  2017-10-07T23:45:15.257-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02A  2017-10-08T23:45:53.773-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02A  2017-10-08T23:50:59.393-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02A  2017-10-09T23:45:11.293-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02G  2017-10-06T23:45:48.840-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02G  2017-10-07T23:45:15.257-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02G  2017-10-08T23:45:53.773-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02G  2017-10-08T23:50:59.393-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02G  2017-10-09T23:45:11.293-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02H  2017-10-06T23:45:48.840-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02H  2017-10-07T23:45:15.257-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02H  2017-10-08T23:45:53.773-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02H  2017-10-08T23:50:59.393-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer02H  2017-10-09T23:45:11.293-0500   Windows Server 2012 R2  9/19/2017     24    9/19/2017
####WDAPPServer03B  2017-10-06T23:45:48.840-0500   Windows Server 2012 R2  9/16/2017     27    9/19/2017
####WDAPPServer03B  2017-10-07T23:45:15.257-0500   Windows Server 2012 R2  9/16/2017     27    9/16/2017
```
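A sketch of an alternative approach that sidesteps streamstats entirely: compute each server's first and last appearance over a two-day window and classify it (index, sourcetype, and Name placeholders as in the question; thresholds are assumptions):

```
index=#### sourcetype=#### Name="####*" earliest=-2d@d latest=@d
| stats min(_time) as first_seen max(_time) as last_seen by Name
| eval status=case(first_seen >= relative_time(now(), "-1d@d"), "added",
                   last_seen  <  relative_time(now(), "-1d@d"), "removed",
                   true(),                                      "unchanged")
| where status!="unchanged"
```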

How can I fix my double timechart graphs?

I want to see two timecharts, each containing a different counter. My search is:

```
source="perfmon:test" counter="Private Bytes" NOT _total instance=chrome
| eval MB_Used=Value/1024/1024
| timechart sum(MB_Used) by instance useother=f
| append
    [search counter="Working Set Peak" NOT _total instance=chrome
    | eval MB_Used=Value/1024/1024
    | timechart span=1m avg(MB_Used) useother=f by instance]
```

With the append command I managed to combine the two timecharts, but both series end up with the same name, so the graphs look awkward. Do you have any idea?
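One hedged way around the name collision is to tag each series before charting, so the two counters produce distinct column names instead of colliding on the instance name (a sketch, assuming both counters come from the same perfmon source):

```
source="perfmon:test" (counter="Private Bytes" OR counter="Working Set Peak") NOT _total instance=chrome
| eval MB_Used = Value/1024/1024
| eval series = counter . ": " . instance
| timechart span=1m useother=f avg(MB_Used) by series
```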

How do you clear the token values in HTML dashboards?

I've got a dashboard that POSTs values to a KV store. It currently clears the input forms once I hit submit, but the actual values seem to still be held in the tokens. For instance, I can hit submit five times in a row, even though the input forms have been cleared, and it will post the same values five times in a row instead of one set of values and four blanks. So: how do I clear these token values? Here is my JavaScript:

```javascript
var submit = new SubmitButton({
    id: 'submit',
    el: $('#search_btn'),
    text: 'Add to kv store'
}, {tokens: true}).render();

submit.on("submit", function() {
    submitTokens();

    // Below this point is from the custom dashboard tutorial.
    // When the Submit button is clicked, get all the form fields
    // by accessing token values from their Label field
    var tokens = mvc.Components.get("default");
    var form_ticket = tokens.get("form.ticket");
    var form_type = tokens.get("form.type");
    var form_value = tokens.get("form.value");

    // Create a dictionary to store the field names and values
    var record = {
        "ticket": form_ticket,
        "type": form_type,
        "value": form_value
    };

    // Use the request method to send a REST POST request
    // to the storage/collections/data/{collection}/ endpoint
    service.request(
        "storage/collections/data/mykvstore/",
        "POST",
        null,
        null,
        JSON.stringify(record),
        {"Content-Type": "application/json"},
        null)
    .done(function() {
        // Run the search again to update the table
        search1.startSearch();

        // Clear the form fields
        $("#formaddtokvstore input[type=text]").val("");
    });
});
```

To me it looks like it should be re-getting the variables each time, which should be blank, but this is not what is happening. Should I just set my variables to empty at the end?
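Clearing the input elements with jQuery does not touch the token model, which is why the stale values keep posting. A hedged fix is to unset the tokens themselves in the `.done()` handler (the token model is a Backbone model, so `unset` should be available; token names are the ones from the code above):

```javascript
.done(function() {
    // Run the search again to update the table
    search1.startSearch();

    // Clear the token values as well as the visible inputs
    var tokens = mvc.Components.get("default");
    tokens.unset("form.ticket");
    tokens.unset("form.type");
    tokens.unset("form.value");
    $("#formaddtokvstore input[type=text]").val("");
});
```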

How can we adjust our firewall's timezone?

Hi all, currently we are facing an issue with the timestamps of our firewall logs. The logs are coming into Splunk with a time difference of 3 hours. We have 5 heavy forwarder (HF) instances acting as intermediate forwarders, configured as syslog servers; the firewall logs are read from these 5 HF instances and then ingested into the indexers. inputs.conf detail:

```
[monitor:///opt/syslogs/mguard/.../mguard.log*]
index = fw
sourcetype = mguard:network:log
host_segment = 4
```

Sample event:

```
10/13/17 10:35:57.000 AM
Oct 13 10:35:57 test01.xxx.com 1,2017/10/13 10:35:57,007257000034869,TRAFFIC,start,0,2017/10/13 10:35:57,10.x.x.x,168.x.x.x,0.0.0.0,0.0.0.0,trust-xxxx,,,ssl,vsys1,trust,xxxx,ethernet1/2,ethernet1/1,Splunk,2017/10/13 10:35:57,761997,1,51475,8089,0,0,0x104000,tcp,allow,416,350,66,4,2017/10/13 10:35:56,0,any,0,70021120,0x0,x.0.0.0-x.255.255.255,United States,0,3,1,n/a,0,0,0,0,,test01,from-policy,,,0,,0,,N/A

eventtype = nix-all-logs
eventtype = pan network
host = test01.xxx.com
source = /opt/syslogs/mguard/test01.xxx.com/mguard.log
sourcetype = mguard:network:log
tag = network
timeendpos = 16
timestartpos = 0
```

The current EDT time is 1:40 PM, but the logs are coming into Splunk with a timestamp of 10:35:57 AM, so we need to shift the time zone by 3 hours to match EDT. Kindly guide me on how to adjust this time zone in Splunk.
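Since the raw events carry no time zone indicator, a hedged fix is to pin the source's time zone in props.conf on the heavy forwarders (the first full Splunk instance that parses the data). The zone name below is an assumption — a 3-hour lag behind EDT suggests the firewall is logging in Pacific time, but use whatever zone it actually logs in:

```
# props.conf on the heavy forwarders (parsing tier)
[mguard:network:log]
TZ = US/Pacific
```

Note this only affects events indexed after the change; already-indexed events keep their original timestamps.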

How do I prevent empty values from being submitted to my KV store on my dashboard?

I have an HTML dashboard that lets me submit values to my kv store. How do I check the values for emptiness and then inform the user that the values are empty?
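A minimal sketch of a guard you could add before the POST, assuming the same form.ticket/form.type/form.value tokens as in the earlier KV store question (token names are assumptions):

```javascript
var tokens = mvc.Components.get("default");
var form_ticket = tokens.get("form.ticket");
var form_type = tokens.get("form.type");
var form_value = tokens.get("form.value");

// Reject the submit if any field is missing or whitespace-only
if (!form_ticket || !form_type || !form_value ||
    !String(form_ticket).trim() || !String(form_type).trim() || !String(form_value).trim()) {
    alert("Please fill in all fields before submitting.");
    return;
}
```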

Why is an empty value from a MultiSelectInput deleting ALL the items in my KV Store?

Not sure if this is a bug or what, but if I push the Delete button on my dashboard and there are no values selected in the MultiSelectInput, all of my KV store values are wiped out. One caveat is that you HAVE to legitimately delete a single value once before this bug shows up.

**Typical use would be:** select one value from the MultiSelectInput and then hit the Delete button.

**Bugged process:** delete one value, erase the MultiSelectInput value so that the input is empty, hit the Delete button. All values are now gone.

This deletes a single value in the KV store: https://mysplunk.com/en-US/splunkd/__raw/servicesNS/nobody/search/storage/collections/data/mykvstore-test/59e125611177c51dd93f148c?output_mode=json

This deletes nothing (happens when I first load the page and try to delete with a blank MultiSelectInput): https://mysplunk.com/en-US/splunkd/__raw/servicesNS/nobody/search/storage/collections/data/mykvstore-test/59e125611177c51dd93f148c?output_mode=json

This deletes everything (happens after I delete one thing, then try to delete again with a blank MultiSelectInput): https://mysplunk.com/en-US/splunkd/__raw/servicesNS/nobody/search/storage/collections/data/mykvstore-test/?output_mode=json

I think this is a bug. Can anyone confirm?

---------------------------------------------------------------

Here is my input that selects KeyIDs to delete:

```javascript
var input4 = new MultiSelectInput({
    "id": "input4",
    "choices": [],
    "valueField": "KeyID",
    "labelField": "KeyID",
    "value": "$form.KeyID$",
    "managerid": "search10",
    "el": $('#input4')
}, {tokens: true}).render();

input4.on("change", function(newValue) {
    FormUtils.handleValueChange(input4);
});
```

And here is my delete button and code:

```javascript
$("#deleteKeyID").click(function() {
    // Get the value of the key ID field
    var tokens = mvc.Components.get("default");
    var form_keyid = tokens.get("form.KeyID");

    // Delete the record that corresponds to the key ID using
    // the del method to send a DELETE request
    // to the storage/collections/data/{collection}/ endpoint
    service.del("storage/collections/data/mykvstore-test/" + encodeURIComponent(form_keyid))
        .done(function() {
            // Run the search again to update the table
            search1.startSearch();

            // Clear the form fields -- THIS DOESN'T WORK FOR MULTISELECTINPUTS
            $("#formKeyDeletion input[type=text]").val("");
        });
    return false;
});

// Initialize time tokens to default
if (!defaultTokenModel.has('earliest') && !defaultTokenModel.has('latest')) {
    defaultTokenModel.set({ earliest: '0', latest: '' });
}

if (!_.isEmpty(urlTokenModel.toJSON())) {
    submitTokens();
}
```
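The behavior is consistent with an empty key rather than a bug in the MultiSelectInput: a DELETE against `storage/collections/data/{collection}/` with nothing after the trailing slash is the KV store's documented collection-wide bulk delete, so a blank token wipes every record. A hedged guard for the click handler (same names as the code above):

```javascript
$("#deleteKeyID").click(function() {
    var tokens = mvc.Components.get("default");
    var form_keyid = tokens.get("form.KeyID");

    // Refuse to issue the DELETE if no key is selected; an empty key
    // would target the collection root and delete every record
    if (!form_keyid || String(form_keyid).trim() === "") {
        alert("Select a KeyID to delete first.");
        return false;
    }

    service.del("storage/collections/data/mykvstore-test/" + encodeURIComponent(form_keyid))
        .done(function() {
            search1.startSearch();
        });
    return false;
});
```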

How do you use custom XML in reports (from dashboard formatting)?

Hi everyone, I have made a bar graph that uses XML to set custom colors for two different series. I lose the series colors whenever I convert my dashboard (which uses the XML) to a report. I am only converting to a report so I can embed my display into an HTML file and ultimately onto the web.

1.) Can I do something to keep the XML formatting I have done in the dashboard?

2.) If not, does anyone know of a way to embed a dashboard?

I appreciate any input that anyone has!
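For reference, a sketch of the kind of Simple XML chart option that gets lost in the dashboard-to-report conversion (the color values and search are placeholders, not the asker's actual settings):

```xml
<chart>
  <search>...</search>
  <option name="charting.chart">bar</option>
  <option name="charting.seriesColors">[0x1E93C6, 0xF2B827]</option>
</chart>
```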

Cannot re-add UDP data input after deleting it. Parameter name: UDP port 514 is not available

First I wanted to create an alternate data input using 514/udp, so I disabled the existing one and tried to clone another one and change the port number. Got rejected with the full text message: "Encountered the following error while trying to save: Parameter name: UDP port 514 not available." I deleted the existing input and tried again. No joy. Tried setting up a new data input; still can't. Restarting Splunk didn't help. I'm now stuck with a system that has no 514/udp listener and no way to even add back the original one, because Splunk somehow thinks the port is still in use. Any suggestions on how to clear out the vestigial information preventing me from adding anything on the old port?
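A hedged way to hunt down where the stale stanza lives is btool, which shows every inputs.conf stanza and the file that defines it; it is also worth checking whether another process is already bound to the port (paths assume a Unix install):

```
# Show every effective inputs.conf setting and its source file
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i 514

# Check whether something else already holds the UDP port
netstat -anu | grep 514
```

If a leftover `[udp://514]` stanza turns up in some app's local inputs.conf, removing it there and restarting Splunk should free the port.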

Help with indexing .XET files or SQL database in Splunk? What should the charset be?

How do you index .xet files (trace files of a SQL Server database) in Splunk, and if I use NO_BINARY_CHECK = true, what should the charset be?
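A sketch of the relevant props.conf settings (the sourcetype name is hypothetical; note that .xet extended-event files are a binary format, so even with the binary check disabled, indexing them directly may not yield readable events):

```
[sqlserver:trace]
NO_BINARY_CHECK = true
CHARSET = AUTO
```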

Splunk Add-on for Tenable: Security Center Logs Failed to Index

On Splunk 6.6, with the most up-to-date Splunk Add-on for Tenable. It worked successfully from around February 2017 until the middle of May 2017 with no issues, but after a Splunk update or two the logs stopped flowing into Splunk. There has been no network change or Security Center user change to note, but I'm seeing the following error at regular intervals (once every 60-90 seconds, depending on the interval I have set or changed to troubleshoot). I don't know whether this is due to a Splunk update that the add-on did not account for, or something else. I've seen a few other questions with similar reported issues, but wanted to bump the topic with this error. Any assistance or direction would be fantastic!

```
885 +0000 log_level=ERROR, pid=2248, tid=Thread-6, file=ta_data_collector.py, func_name=index_data, code_line_no=118 | [stanza_name="SecurityCenterInput" data="sc_vulnerability" server="SecurityCenter"] Failed to index data
Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktaucclib\data_collection\ta_data_collector.py", line 115, in index_data
    self._do_safe_index()
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktaucclib\data_collection\ta_data_collector.py", line 148, in _do_safe_index
    self._client = self._create_data_client()
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktaucclib\data_collection\ta_data_collector.py", line 89, in _create_data_client
    ckpt = self._get_ckpt()
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktaucclib\data_collection\ta_data_collector.py", line 80, in _get_ckpt
    return self._checkpoint_manager.get_ckpt()
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktaucclib\data_collection\ta_checkpoint_manager.py", line 31, in get_ckpt
    return self._store.get_state(key)
  File "C:\Program Files\Splunk\etc\apps\Splunk_TA_nessus\bin\splunk_ta_nessus\splunktalib\state_store.py", line 141, in get_state
    state = json.load(jsonfile)
  File "C:\Program Files\Splunk\Python-2.7\Lib\json\__init__.py", line 291, in load
    **kw)
  File "C:\Program Files\Splunk\Python-2.7\Lib\json\__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "C:\Program Files\Splunk\Python-2.7\Lib\json\decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Program Files\Splunk\Python-2.7\Lib\json\decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
```

Which command or stanza can be used to decide which fields are extracted at search time to improve performance?

As far as I know, `fields -` does not improve performance, and I'm looking for a better option.
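For reference, a sketch of the props.conf setting that controls automatic search-time key-value extraction per sourcetype (the sourcetype name is hypothetical; disabling extraction trades convenience for speed, and explicitly defined extractions still apply):

```
[my:sourcetype]
KV_MODE = none
```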

How to specify an index name in the docker instance of Splunk universal forwarder

I am trying to find a way to specify the index name to use when collecting data from a CSV file using the Splunk universal forwarder Docker container. I have tried using the SPLUNK_CMD environment variable, and that does not seem to work. Any ideas on how to provide the index name when starting the Docker container?
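A hedged alternative to environment variables is mounting an inputs.conf that names the index into the container (file paths, index name, and mount locations below are assumptions; the forwarder's install path in the official image is /opt/splunkforwarder):

```
# inputs.conf mounted into the forwarder container
[monitor:///data/mydata.csv]
index = my_csv_index
sourcetype = csv
```

```
docker run -v /host/inputs.conf:/opt/splunkforwarder/etc/system/local/inputs.conf \
           -v /host/data:/data \
           splunk/universalforwarder:latest
```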

CSV input: need output based on 3 different fields in 1 search

Hello, we have been importing a particular CSV daily into a single index, so the data is nice and clean. We want to perform one search and chart the results. The fields are Volume, Change, and Price:

- Volume needs to be greater than 1
- Change needs to be greater than 1
- Price needs to be greater than 0.001

These 3 fields will determine the results. We then want to output a table with the following columns: Symbol, Volume, Change, Price. We also want the flexibility to sort the table results by one of the 3 fields (Volume, Change, Price) in ascending or descending order. Does the sort need to be included in the search syntax, or can we simply use the Splunk UI to click the column to sort? (So far, I don't see this as an option, but I could be doing something wrong.)

HERE'S THE KICKER... The Volume field must have been 0 at some point in time (remember, we are ingesting results daily) and must have changed to greater than 1 (as per the requirement above). Thanks in advance!
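A sketch of the filtering and table, handling the was-once-zero requirement with a per-symbol minimum (the index name is a placeholder, and this assumes Volume, Change, and Price are already extracted as numeric fields):

```
index=my_csv_index
| eventstats min(Volume) as min_volume by Symbol
| where min_volume=0 AND Volume>1 AND Change>1 AND Price>0.001
| table Symbol Volume Change Price
| sort - Volume
```

In the Statistics view of Splunk Web you can generally also click a column header to re-sort the rendered table, so the trailing `sort` mainly sets the default order.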

Need values to stick within a range for chart

Hello, we have the following search:

```
index="blah" | stats values(Change), values(Volume), values(Price) by Symbol
```

Some results are too large or too small a number, so I want to fine-tune the range. How do I do this? Something like:

```
| stats values(Change){range=0.001:0.100} ???
```

How do I make sure each value meets the criteria of a particular range? Thanks!
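stats itself has no range option; a hedged sketch is to filter before aggregating, so only in-range values reach the chart (the bounds below are the example values from the question, applied to Change only — repeat the pattern for other fields as needed):

```
index="blah"
| where Change >= 0.001 AND Change <= 0.100
| stats values(Change), values(Volume), values(Price) by Symbol
```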

DB Connect Time-Based lookup

Is there any way to create a time-based database lookup with DBConnect 3.11? I don't see the option within the GUI, and I can't find a way to customize the lookup SQL query, since `WHERE field=value` appears to be appended to the lookup SQL. If this can't be done with the GUI, is there a way to do it by creating a stored procedure with the lookup logic and passing the values with the `dbxquery` command? [This related question referred to an older version of DBConnect][1]. [1]: https://answers.splunk.com/answers/126421/time-based-lookups-using-db-connect.html
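If the stored-procedure route works for you, the call might look like the following sketch (the connection name, procedure, and parameters are all hypothetical, and the exact syntax for invoking procedures varies by database and JDBC driver):

```
| dbxquery connection="my_connection"
    query="EXEC dbo.lookup_by_time @start='2017-10-01 00:00', @end='2017-10-02 00:00'"
```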

Can anyone explain to me how to onboard data?

I was hired by an organization as a Splunk onboarding specialist, but I don't know much about onboarding data. I have gone through the Getting Data In docs, but that is not enough to deal with real-world work. Our environment: 325 GB/day, 7 indexers, 4 search heads, 100 universal forwarders. Can anyone please share your onboarding knowledge with me? Splunk learner, Rocky.

SG500 Logging

I have two Cisco SG500 switches and I'd like to get them logging to Splunk. What is the best method? I can't find a premade dashboard, nor a source connector when adding a port.
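A minimal sketch of a plain syslog listener, assuming the switches can export syslog over UDP (the port, sourcetype, and index names are assumptions; for production, a dedicated syslog server writing files that a forwarder monitors is the more robust pattern):

```
# inputs.conf on the receiving Splunk instance
[udp://514]
sourcetype = cisco:sg500
index = network
connection_host = ip
```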