Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Forwarder input explicit blacklist: is it required?

Hi, we were sending all of /var/log to one index. Now I am trying to send the logs to separate indexes. Do I need to explicitly blacklist the paths in the catch-all stanza, or are paths that are picked up by a more specific stanza automatically excluded from it? Thanks for the guidance. NP

```
[monitor:///var/log/mongo/...]
crcSalt =
disabled = false
index = mongo

[monitor:///var/log/hpp/...]
crcSalt =
disabled = false
index = common

[monitor:///var/log/apache2/...]
crcSalt =
disabled = false
index = common

[monitor:///var/log/epp/...]
crcSalt =
disabled = false
index = common

[monitor:///var/log/prd/deployment/...]
crcSalt =
disabled = false
index = common

[monitor:///var/log/prd/...]
crcSalt =
disabled = false
index = elastica
{% if 'gr' in salt['grains.get']('roles') %}blacklist = /var/log/prd/gr/png|\.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|audit\.log$
{% else %}blacklist = \.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|audit\.log$
{% endif %}

[monitor:///var/log/...]
crcSalt =
disabled = false
index = main
blacklist = \.(gz|bz2|z|zip|\d)|UNKNOWN.INFO|prd|apache2|mongo|hpp|epp|audit\.log$
```

Events with no source type information

Using Splunk 6.6.2, I've created a search to look for supervisord events on two different hosts. These events are not currently assigned a source type in inputs.conf on the forwarders:

```
index=os host=rooster OR host="rooster-2" sourcetype=supervisord*
```

The events do have sourcetypes when viewed in search, which I assume Splunk assigned at index time. However, when I try to "Extract More Fields" I get:

> The events associated with this job have no sourcetype information: 1506449927.283954

Do I have to assign the source type on the forwarder for the extraction to work?
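If explicit sourcetyping is wanted, a forwarder-side inputs.conf stanza might look like the sketch below (the monitored path is a hypothetical example, not taken from the post):

```
[monitor:///var/log/supervisor/supervisord.log]
sourcetype = supervisord
index = os
```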


Splunk DB Connect: Field displays "no matches" instead of value

I have Splunk 6.6.3, and today I tried to add a new input in the DB Connect app from a database (Microsoft SQL; I already had data from this source), but the values for the "rising value" parameter don't appear: it displays "No matches". When I open any of the other inputs (inputs that are working fine), I see the same issue.

Is it possible to use a column from a .csv lookup file as a column in the results?

So, I tried https://answers.splunk.com/answers/480296/how-to-add-an-additional-column-in-my-results-from.html and that answer doesn't seem to work. I've also reviewed the documentation for lookup and inputlookup, but there's something simple here I'm missing (I hope).

What I have is a .csv full of phone numbers and names, called phonebook.csv:

```
5135550010 Bob
5135550012 Jake
```

I have an index in Splunk with phone numbers, phone models, etc. as a data source (let's call it "inventory") that I can search:

```
5135550009 Pineapple 6S
5135550010 Pineapple 7
5135550029 Gootle Paxel 2
```

What I'm trying to match and end up with should look something like this:

```
5135550010 Bob Pineapple 7
5135550012 Jake
```

That is, when the phone exists in the inventory, add its model as a field. If it does not exist in the inventory, don't add anything. I tried this search:

```
index="inventory" [| inputlookup phonebook.csv | fields PhoneNumber]
| stats last(Username), last(Model) BY PhoneNumber
```

But all this gives me is:

```
5135550010 Bob Pineapple 7
```

What I want is to see *every* row of the original phonebook.csv, even if there are no results returned for that row:

```
5135550010 Bob Pineapple 7
5135550012 Jake
```

How does one achieve this? I have done a lot of searching and trying to understand "inputlookup" and "lookup", but I'm just not getting something. It seems so simple.

p.s. I don't have the power to just add phonebook.csv as a data source and append the results column to that. Our admin is on vacation until next week :(
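One common pattern for keeping every lookup row is to start from the lookup itself and left-join the indexed data onto it. A hedged sketch (the field names PhoneNumber, Username, and Model are assumptions about the CSV and the inventory events):

```
| inputlookup phonebook.csv
| join type=left PhoneNumber
    [ search index="inventory" | stats last(Model) as Model by PhoneNumber ]
| table PhoneNumber, Username, Model
```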

List of users accessing activesync

```
index=exchange sourcetype=uag trunk="activesync2010" user="*"
```

This returns a list of ActiveSync users in the selected timeframe. I have a lookup table of watched users, VIP_mail.csv. If a user in the VIP lookup table also has active usage logs, then I want the logs for all users in the table:

```
index=exchange sourcetype=uag trunk="activesync2010" user="*"
| lookup VIP_mail.csv "User ID" as USERID
| where user=USERID
```

The match should be true if the user IDs match.
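The lookup syntax may be part of the issue: `lookup` maps the lookup column to the event field with AS, and OUTPUT pulls a column back that can serve as a match flag. A hedged sketch (it assumes a lookup definition named VIP_mail exists over VIP_mail.csv and that its key column is "User ID"):

```
index=exchange sourcetype=uag trunk="activesync2010" user="*"
| lookup VIP_mail "User ID" AS user OUTPUT "User ID" AS vip_match
| where isnotnull(vip_match)
```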

Merge two tables from two different sources

I have a requirement to merge two tables.

**table 1**

```
appname | source
app1    | src1
app2    | src2
app3    | src3
```

**table 2**

```
appname | userinfo
app1    | usr1
app3    | usr3
```

Merge the two tables on appname; the result should look like:

```
appname | source | userinfo
app1    | src1   | usr1
app2    | src2   |
app3    | src3   | usr3
```

I have tried something like this:

```
index=appdata
| spath path=result{} output=x
| mvexpand x
| stats latest(src) by appname
| join type=left appname
    [ search index=usrdata | spath path=result{} output=x | mvexpand x | table appname userinfo ]
```

This query only populates data from the first search, before the join command. Any help is much appreciated. Thanks!!!
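Since join subsearches are subject to result limits, a stats-based merge over both indexes is a common alternative. A hedged sketch (the spath/mvexpand steps and the field names source/userinfo are carried over from the post and may need adjusting to the real extracted fields):

```
(index=appdata) OR (index=usrdata)
| spath path=result{} output=x
| mvexpand x
| stats latest(source) as source latest(userinfo) as userinfo by appname
```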

Extraction error from log file

All, I would like to extract the information below from the logs:

```
Caused by: org.apache.camel.TypeConversionException: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS.LOGSBUS.JMS_Input_Consumers %]"
    at org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:610)
    at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:177)
    at org.apache.camel.impl.converter.BaseTypeConverterRegistry.mandatoryConvertTo(BaseTypeConverterRegistry.java:156)
    at org.apache.camel.util.IntrospectionSupport.convert(IntrospectionSupport.java:622)
    at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:537)
    at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:602)
    at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:459)
    at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:469)
    at org.apache.camel.util.EndpointHelper.setProperties(EndpointHelper.java:256)
    at org.apache.camel.impl.DefaultComponent.setProperties(DefaultComponent.java:257)
    at org.apache.camel.component.jms.JmsComponent.createEndpoint(JmsComponent.java:886)
    at org.apache.camel.impl.DefaultComponent.createEndpoint(DefaultComponent.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
```

Out of the above sample I would like to extract:

```
caused by: org.apache.camel.TypeConversionException
Description: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS.LOGSBUS.JMS_Input_Consumers %]"
```

Any help would be appreciated.
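A rex-based sketch of the extraction (the base search is a placeholder, and the field names exception/description are illustrative; the pattern assumes the exception class and its message sit on one line of the raw event):

```
index=app_logs "Caused by:"
| rex "Caused by:\s+(?<exception>[^:\s]+):\s+(?<description>[^\r\n]+)"
| table exception, description
```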

How to search for matching fields across 2 different hosts with the same sourcetype?

I'm looking to find a matching field (let's call this field "action") across 2 different hosts with the same sourcetype, for example sourcetype=pan with host=1 and host=2. I'm looking to create a table that shows only the matching values of the action field. So, if host 1 has action=allowed and host 2 has action=allowed, I want a table that includes the time, action, src, and dest.
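One hedged way to keep only action values seen on both hosts (this assumes "matching" means the same action value appears on host 1 and host 2; the src/dest aggregation is illustrative):

```
sourcetype=pan (host=1 OR host=2)
| stats dc(host) as host_count latest(_time) as _time
        values(src) as src values(dest) as dest by action
| where host_count=2
| table _time, action, src, dest
```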

Splunk App and Add-on for Unix and Linux: add-on specific fields are not being extracted, which is breaking the dashboards

I've got the Splunk Add-on for Unix and Linux installed on my index master and across my 3 indexers via a cluster bundle. In the App for Unix & Linux running on my search head, I can see results from all 4 hosts, text like the output from `cpu.sh` and `ps.sh`. But none of the add-on specific fields, e.g., **pctCPU** from `top.sh`, are being extracted, which of course breaks many of the associated dashboards. Any help on getting the app & add-ons working, and in particular, fixing field extraction, across the cluster would be very much appreciated.

TA-nmon-Technical Addon for Nmon Performance Monitor -- Are there any special permissions required for embedded scripts?

Hi, do the embedded scripts in the TA-nmon require special permissions, like root privileges or ACLs? Thanks, Shreedeep Mitra.

How do you btool inputs.conf?

Hi, can you please tell me the right way to run btool against inputs.conf for a specific app context? I want to troubleshoot this error, which appears constantly in my Splunk search head messages notification: "Received index from deleted/missing/unconfigured indexes." I read previous blogs; they say your inputs.conf is sending data to an index that doesn't exist.
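A sketch of the usual invocations (run from `$SPLUNK_HOME/bin`; the app name is a placeholder):

```
# Show the merged inputs.conf for one app, with the file each setting came from:
./splunk btool inputs list --debug --app=<your_app>

# Or hunt for index assignments across all apps:
./splunk btool inputs list --debug | grep index
```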

Does a dashboard keep refreshing itself by default?

I was looking at the MC **Search Usage Statistics: Deployment** stats and then speaking with the top "offending" user, who said that the only thing he did in the past 4 hours was leave a dashboard minimized. So I wonder: do custom dashboards keep refreshing themselves by default?
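For context, a Simple XML dashboard normally has to opt into auto-refresh; a hedged sketch of doing so via the root element's refresh attribute (the 300-second interval and the panel contents are arbitrary examples):

```xml
<dashboard refresh="300">
  <label>Example dashboard</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-4h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```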

Splunk Add-on for Bro IDS: How can I contribute to this app?

I've been using the Bro add-on and it's been working well, but there are a couple of serious problems that I've run into while using it:

1. I ended up with thousands of "too-small" sourcetypes, each prefixed with the MD5 hash of the pcap file (it seems to be a problem with the PREFIX_SOURCETYPE setting in props.conf combined with the use of the MD5 hash of the pcap file in the log filename), which overloaded the parsing engine.
2. This might be something that needs tuning on the pcap capture side, but at least once a day there will be a failure to read the pcap file (possibly due to the file being rolled over before processing can occur), and this completely crashes the part of the plugin that invokes bro (pcap_monitor.py), requiring either a full restart of Splunk or disabling/re-enabling the plugin to bring it back up.

I dug around in the source code for the add-on, and the fixes for both seem pretty straightforward. I was wondering what procedure, if any, there would be for me to contribute those to you (since it's Splunk-built) for inclusion in a new release of the add-on. I'm also looking into making any modifications necessary to support Bro 2.5.x (so far, it's been working well with a modification or two). Thanks!

Common Field in Two Different Indexes

I have two indexes that I can successfully join via stats. However, both indexes have a common field named "STATUS". I want to separate the STATUS field into STATUS1 and STATUS2 before the join, so I can see both. I have left STATUS out below, but here is the successful join SPL:

```
index=customertest OR index=valuetest
| stats values(Spend) as spend values(Order) as order by Customer
| fillnull value=NULL
| mvexpand spend
| sort Customer
```

Any recommendations?
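A hedged sketch of splitting the shared field before aggregating (it assumes the per-index values of STATUS are what should land in STATUS1/STATUS2):

```
index=customertest OR index=valuetest
| eval STATUS1=if(index=="customertest", STATUS, null())
| eval STATUS2=if(index=="valuetest", STATUS, null())
| stats values(Spend) as spend values(Order) as order
        values(STATUS1) as status1 values(STATUS2) as status2 by Customer
```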

When I try to start Splunk after untarring, I get the error "Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC"

After untarring a download of Splunk in tar.gz format, I get the following error:

```
ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment
```

What is the cause of this?
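This error generally means the splunk launcher could not infer its installation directory from the way it was invoked. Setting the variables explicitly, or invoking the binary via its full path under the extracted directory, typically clears it; a sketch (paths are illustrative, adjust to wherever the tarball was extracted):

```
export SPLUNK_HOME=/opt/splunk
export SPLUNK_ETC=$SPLUNK_HOME/etc
$SPLUNK_HOME/bin/splunk start
```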

How to sum up multiple fields without using foreach

I am using the search query below, which contains multiple fields. All the fields (DATA_MB, INDEX_MB, DB2_INDEX_MB, etc.) contain size values for a particular DB:

```
index=main
| timechart span=1w sum(DATA_MB) as datamb, sum(INDEX_MB) as indexmb, sum(DB2_DATA_MB) as db2datamb, sum(DB2_INDEX_MB) as db2indexmb, sum(DB2_LOB_MB) as db2lobmb, sum(DB2_LONG_MB) as db2longmb, sum(DB2_XML_MB) as db2xmlmb by DOMAIN limit=25
```

I want all 7 of these fields (datamb, indexmb, db2datamb, etc.) summed up together and displayed under a single field name, without using the "foreach" clause. Is that possible? Could anyone please help me with this?
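One option is to add the fields with eval before the timechart; a sketch (coalesce guards against any of the 7 fields being null, which would otherwise null the whole sum; field names mirror the post):

```
index=main
| eval total_mb = coalesce(DATA_MB,0) + coalesce(INDEX_MB,0) + coalesce(DB2_DATA_MB,0)
                + coalesce(DB2_INDEX_MB,0) + coalesce(DB2_LOB_MB,0) + coalesce(DB2_LONG_MB,0)
                + coalesce(DB2_XML_MB,0)
| timechart span=1w sum(total_mb) as totalmb by DOMAIN limit=25
```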

I want to use the regular index for today's data and the summary index for the last 6 days

I have created a dashboard with 6 panels, using a last-7-days time frame, for transaction counts between the A-B, B-C, and C-D applications. More than 100,000 transactions flow daily, so now I want to use a summary index to improve performance, since summary-index searches run fast. My requirement is: I want to use the regular index to capture today's data, and for the last 6 days the dashboard should read the data from the summary index. Please help me with the queries and commands I can use.
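A rough sketch of the usual pattern for combining the two time ranges in one search (the index names and summary source are placeholders, and the summary index must already be populated by a scheduled saved search):

```
(index=my_app earliest=@d latest=now)
OR (index=summary source="si_daily_transactions" earliest=-6d@d latest=@d)
| timechart span=1d count
```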

Is "the minimum free disk space (5000MB) reached" solved by increasing disk space in Linux?

Hi, recently I encountered the issue "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch". My Splunk is installed on a VM, so I tried to give more disk space to Linux by increasing the provisioned space. The Splunk message is now gone. However, I read that I still need to do something to the Linux partition table (https://www.unixmen.com/increase-disk-space-and-memory-in-linux-vmware-virtual-machines/). Do I really need to change the partition table in Linux to actually solve the disk space issue for Splunk?

Scatter plot with trend line

Hi! Is there any way to add a trend line to a scatter plot, like this: ![alt text][1] The scatter plot matrix has this option, but I need a single chart. I also found an article about implementing linear regression in Splunk (https://wiki.splunk.com/Community:Plotting_a_linear_trendline), but the actual data in it looks like a regular line chart, not a scatter plot. [1]: /storage/temp/217670-linear91.jpg

