Questions in topic: "splunk-enterprise"

With selective indexing and defaultGroup=noforward, do I have to worry about adding _INDEX_AND_FORWARD_ROUTING to Splunk's default inputs as well?

I'm reading the section *Index one input locally and then forward all inputs* in [Route and filter data][1], where **selectiveIndexing=true** and **index=true**. I have a couple of questions about that, but first, here is my understanding: nothing gets indexed or forwarded unless you explicitly state it for each input on the system, correct? As I read the documentation, for every input stanza in any inputs.conf file you add **_INDEX_AND_FORWARD_ROUTING=local** to enable local indexing, **_TCP_ROUTING=***myDefinedIndexer* if you want that input forwarded to another indexer, and both if you want both.

Now the **main question**: does this apply to Splunk's internal inputs/indexes, or do I only have to worry about the inputs I've created since I installed Splunk?

A **second question**: when the logs for any given input are forwarded, does the forwarded information allow the receiving indexer to know which index they should be put in, assuming both indexers have the same indexes?

And my **final question**: if the first indexer has filters (transforms) to drop some logs and index others, does this behavior apply to forwarded logs? (I hope the answer to this is yes!)

For anybody wondering what I'm doing: I'm migrating to a new system, so I want to send logs from the old one for a few weeks before I switch all my forwarders and syslog senders over to the new one.

[1]: http://docs.splunk.com/Documentation/Splunk/6.5.2/Forwarding/Routeandfilterdatad
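
For reference, here is a minimal sketch of how that pattern fits together as I read it; the monitor path, server address, and the `myDefinedIndexer` group name are placeholders, not a confirmed working config:

```
# outputs.conf -- selective indexing: nothing indexed or forwarded by default
[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
# "noforward" names no real target group, so nothing is forwarded
# unless an input sets _TCP_ROUTING explicitly
defaultGroup = noforward

[tcpout:myDefinedIndexer]
server = indexer.example.com:9997

# inputs.conf -- opt each input in explicitly
[monitor:///var/log/messages]
_INDEX_AND_FORWARD_ROUTING = local
_TCP_ROUTING = myDefinedIndexer
```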

Adding an Input (Folder) to a Forwarder

I was trying to add a folder to a forwarder to read data, but it gives me an error: "Your session is invalid. Please login."

```
[root@localhost bin]# ./splunk add monitor /home/user/Desktop/Forward_Data -index my_db
Your session is invalid. Please login
```

I have tried the login credentials (user = admin, password = changeme), but that is not working either. The forwarder is already added; I just want to send the data from the forwarder to the indexer, so I am trying to add an input (folder) to the forwarder to monitor the data.
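
One thing worth sketching here (not a confirmed fix for this environment): authenticate the CLI session explicitly before adding the monitor, or pass credentials inline with `-auth`:

```
# Log in first so the CLI session is valid
./splunk login

# ...or supply credentials on the command itself
./splunk add monitor /home/user/Desktop/Forward_Data -index my_db -auth admin:changeme
```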

We are trying to make a REST input and the result is XML data but it has no schema.

We are trying to make a REST input, and the result is XML data, but it has no schema. The source we are using is Palo Alto, specifically Panorama, not the firewalls directly. Can someone help me create an XML schema? (I have been stuck on this for a while!) Is there a way to manually build a schema in Splunk for this input?
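
If the goal is just to get fields out of the XML at search time, rather than a formal schema, one common approach is to mark the sourcetype as XML in props.conf; the sourcetype name here is a placeholder:

```
# props.conf -- extract XML elements/attributes as search-time fields
[panorama_rest_xml]
KV_MODE = xml
```

At search time, `| spath` or `| xpath` can then pull out specific nodes.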

How to use the Splunk App for CEF

I have successfully installed the Splunk App for CEF on our standalone test server, and I am trying to select a data model according to this [document][1]. However, I cannot find any drop-down menu; where can I select the data model? Here is the screen when I click "New CEF output": ![alt text][3] Thanks!

[1]: http://docs.splunk.com/Documentation/CEFapp/2.0.0/DeployCEFapp/UsetheSplunkAppforCEF#Select_the_data_that_you_want_to_output_in_common_event_format
[3]: /storage/temp/191200-splunk-app-for-cef.png

timechart auto scale, how to override?

I have a very simple query that shows the number of events over the course of a month, plotted on a timechart:

```
| timechart count by host limit=0 span=1d
```

The graph that gets drawn puts the first event at the far left and the last event at the far right, and changes the scale (start and end dates) accordingly. For example, if I have some events on the 4th of the month and more on the 8th, the graph has two bars (in the case of a bar graph) at both ends, starting at the 4th and ending at the 8th, when I really want the whole month represented. Wish I could post a screenshot, but I think you get the picture. I have played around with various timechart options and every legend option I could try. How can I override this behavior? For the record, I understand why it's doing this: it doesn't know that there's a minimum x-axis value of 1 and a maximum of 31. I'm looking for a way to set those min/max x-axis values. Thanks!
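
One approach worth sketching (assuming the goal is a fixed calendar-month axis; `index=foo` is a placeholder): timechart creates its buckets across the search time range, so constraining the range itself forces empty buckets at both ends of the month:

```
index=foo earliest=-mon@mon latest=@mon
| timechart count by host limit=0 span=1d
```

`| makecontinuous _time span=1d` can also fill interior gaps if events are sparse.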

Setting outputs.conf

I have a question about my outputs.conf. I want to index data to both sandbox and production. What changes do I need to make here?

```
[tcpout]
defaultGroup = production

[tcpout:sandbox]
server = ABC:PORT

[tcpout:production]
server = XYZ:PORT
autoLB = true
useACK = false
```
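
For what it's worth, listing both target groups in `defaultGroup` clones the data to each group (a sketch based on standard outputs.conf behavior; verify against the outputs.conf spec for your version):

```
[tcpout]
# comma-separated groups: each event is sent to both sandbox and production
defaultGroup = production, sandbox
```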

Splunk Add-on for OPC UA: REST 404 error & script error with code 127

Hi all, I have installed the Splunk Add-on for OPC UA in my lab, but when I try to configure the app, two errors occur:

- First: `Unable to initialize modular input "mi_opcua_pull" defined inside the app "Splunk_TA_opcua": Introspecting scheme=mi_opcua_pull: script running failed (exited with code 127).`
- Second: `External handler failed with code '1' and output: 'REST ERROR[404]: Resource/Endpoint requested dose not exist - https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_opcua/data/inputs/mi_opcua_pull?output_mode=json&count=0'. See splunkd.log for stderr output.`

Has anyone ever seen these errors? Thanks!

Quick and Dirty Keyword Search

I know that this can be done with a lookup, but I was wondering if there is a quick and dirty way to search through web traffic for, say, three keywords, for example: bad1, bad2, bad3. I would like to break the keywords into counts. Normally you would do `count by fieldName`. Is there a way to do this in SPL without having to create a lookup table each time you go hunting for something? This would probably be used once or twice per set of keywords; that is why I am not trying to do a saved lookup.
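
One quick-and-dirty pattern (a sketch; `index=web` and the keywords are placeholders) is to search for all terms at once and bin each event by which keyword matched:

```
index=web (bad1 OR bad2 OR bad3)
| eval keyword=case(match(_raw, "bad1"), "bad1",
                    match(_raw, "bad2"), "bad2",
                    match(_raw, "bad3"), "bad3")
| stats count by keyword
```

Note that an event matching several keywords is counted only under the first case() branch that matches.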

How can I correlate firewall traffic from two different timestamps?

I want to get a list of traffic that has accessed the same site at two different times. All I know are the times: say 10:00 AM and 11:30 AM. How can I get a list of events where an internal IP connected to the same external IP at or near both times? I don't know either of the IPs; I simply want to find a list of connections that were active at both times. Something like:

```
earliest="(date and time)" latest="(date and time)" AND earliest="(date and time)" latest="(date and time)"
```
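
One way to sketch this (the field names `src_ip`/`dest_ip` and the dates are assumptions; adjust to your firewall sourcetype): search both windows in a single query, tag each event with its window, then keep only the IP pairs seen in both:

```
index=firewall ((earliest="04/05/2017:09:55:00" latest="04/05/2017:10:05:00")
             OR (earliest="04/05/2017:11:25:00" latest="04/05/2017:11:35:00"))
| eval window=if(strftime(_time, "%H") < "11", "t1", "t2")
| stats dc(window) AS windows BY src_ip, dest_ip
| where windows = 2
```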

Comparing chart results to a field

This is my first time posting to the community; I hope this answer isn't already listed somewhere else (if it is, I have been unable to find it). I create my own dashboards for everything, and I am more or less trying to tackle my first data correlation attempt, but so far I have come up short.

The dashboard I have created uses dynamically generated filters. The field I am focused on at the moment is very simple; it's "host", and the search looks like this (this works):

```
$time_span$ index=$nexus_app_dc$ nexus_syslog_level=$nexus_loglevel$ $keyword$ | chart count by host
```

What I want to do is use the results of this chart to run another search against other data. For example, say the chart comes back with "10.0.0.1", "10.0.0.2", "10.0.0.3". I only want my next search to contain hosts that are in that list. Here is what I have so far:

```
$time_span$ index=dcxx_acs Address=$nexus_app_host$ | top limit=50 _time, User, Address, CmdSet | fields - count - percent
```

In my mind, if I were to write it out manually, it would look something like:

```
$time_span$ index=dcxx_acs Address=10.0.0.1 OR Address=10.0.0.2 OR Address=10.0.0.3 | top limit=50 _time, User, Address, CmdSet | fields - count - percent
```

I hope at least some of this makes sense to some of you; any assistance is appreciated.
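
A subsearch is the usual way to sketch this kind of chaining (the tokens and index names are taken from the question above; the rename/fields plumbing is the part to verify). A subsearch that returns a single field named `Address` expands into exactly the `Address=x OR Address=y ...` form described above:

```
$time_span$ index=dcxx_acs
    [ search $time_span$ index=$nexus_app_dc$ nexus_syslog_level=$nexus_loglevel$ $keyword$
      | stats count by host
      | fields host
      | rename host AS Address ]
| top limit=50 _time, User, Address, CmdSet
| fields - count - percent
```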

How to extract fields from JSON?

Hello everybody, I have the following event registered in my Splunk:

```
Fri Mar 31 11:05:18 COT 2017 name=amqp_msg_received event_id=null msg_queue=seguros.traza.documentoValidado msg_exchange=seguros.cuadre.documentoValidado msg_body={"valid": true}
```

What I need is to extract the value of "valid". The sourcetype of the event is *json_no_timestamp*. How could I do this? I have tried using spath without luck; any advice? Thanks.
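
Since the event is key=value text with a JSON fragment embedded in `msg_body` rather than a pure JSON event, spath on the raw event won't find it; one sketch is to cut the fragment out first and point spath at it:

```
... | rex field=_raw "msg_body=(?<msg_body>\{.+\})"
    | spath input=msg_body path=valid output=valid
```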

Assigning a variable to field values consolidated by wildcard

I'm trying to wrap my head around assigning a variable to field values that have been consolidated by wildcard. The specific field is a URL which contains unique values but can be consolidated by wildcard:

```
/api/v1/data/dataInfo/5034542340/0031f24ea10c/867542388
/api/v1/data/dataInfo/6134191727/0031f24ea10c/1353781841
/api/v1/data/validate
```

Each of these has statusCode, timestamp, etc. fields associated. I need to count how many times /api/v1/data/dataInfo/* had a 404 response and how many times /api/v1/data/validate had a 404 response, ideally in a timechart. Without consolidating to a wildcard, I have hundreds of results, because the hash that I'm consolidating via wildcard is unique.

I've tried the following, but it errors with "Error in 'eval' command: The expression is malformed. An unexpected character is reached at '/api/v1/data/dataInfo/*)'." I take this to mean I can't use eval/if with a wildcard.

```
index=data_index environment=Production clientName="DataTool" statusCode=404 | eval dpInfo = if(url=/api/v1/data/dataInfo/*) | eval validate = if(url=/api/v1/data/validate) | timechart count
```

Any ideas would be very much appreciated!
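
eval can't take a bare `*` wildcard, but `like()` (with `%` as the wildcard) or `match()` can do the same job; a sketch along those lines, collapsing both checks into one category field so timechart can split on it:

```
index=data_index environment=Production clientName="DataTool" statusCode=404
| eval endpoint=case(like(url, "/api/v1/data/dataInfo/%"), "dataInfo",
                     like(url, "/api/v1/data/validate%"),  "validate")
| timechart count by endpoint
```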

How to avoid exceeding daily limit when monitoring directory?

I want to monitor a directory that already has many GB of data (historical data). New data is added to that directory, but at a low rate (~50 MB/day). I want to index all the data into Splunk without exceeding the daily license limit; I don't need all the data to be indexed at once.

1. Is there a way to control how much data is indexed daily? In **limits.conf** there is a setting called **maxKBps**, but it seems to be related to forwarders.
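
For what it's worth, **maxKBps** lives in the `[thruput]` stanza of limits.conf and throttles the thruput of whichever instance reads the files, not only forwarders (a sketch; the value below is an arbitrary example, and since this caps throughput rather than daily volume, you would still size it against your license):

```
# limits.conf
[thruput]
# 256 KB/s * 86400 s is roughly 21 GB/day of raw data; pick a value
# that keeps backfill plus new data under the daily license quota
maxKBps = 256
```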

Cannot See Universal Forwarder from Splunk Enterprise

Hello, I have installed Splunk Enterprise in a Windows environment and installed the Universal Forwarder on a separate machine. Before running the command to add the indexer, I ran ipconfig on the Windows box where Splunk Enterprise is; using that IPv4 address (let's call it xxx.xx.xxx.xxx), I successfully pinged it from the Linux machine where I installed the forwarder. Then, using the default forwarding port (9997), I ran:

```
./splunk add forward-server xxx.xx.xxx.xxx:9997
```

which ran successfully. I then restarted the forwarder with `./splunk restart`, and it restarted successfully. I verified that the outputs.conf file in $SPLUNK_HOME/etc/system/local has the correct settings:

```
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = xxx.xx.xxx.xxx:9997

[tcpout-server://xxx.xx.xxx.xxx:9997]
```

I then logged into the Splunk Enterprise web interface, selected the "Add Data" link, and then the "Forward" link. At the top it says "Select Forwarders", but beneath that there is a red triangle that says "There are currently no forwarders configured as deployment clients to this instance". Am I doing something wrong? If so, how do I diagnose and correct it? Grateful for any response!
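
Two quick checks worth sketching (hedged: the "Add Data > Forward" page lists deployment clients, which is a separate mechanism from plain forwarding, so an empty list there does not necessarily mean data isn't flowing). First, make sure the indexer is actually listening on 9997; second, search the indexer's internal logs for inbound forwarder connections:

```
# On the indexer: enable receiving on 9997
# (Settings > Forwarding and receiving in the UI does the same)
./splunk enable listen 9997
```

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats count by hostname, sourceIp
```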

Event sent to null queue

I have merged several lines into one event using SHOULD_LINEMERGE=true. Now the event looks like:

```
abc
bcd
cde
efg
```

I want to send the line **cde** to the null queue and the rest to the index queue. If I match a regex against "cde" and route to the null queue (using transforms.conf), will only the line containing "cde" be sent to the null queue, or will the entire event associated with it be moved there?
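
For context, queue routing in transforms.conf operates on whole (already line-merged) events, not individual lines, so a stanza like the sketch below drops the entire event whose raw text matches (the stanza and sourcetype names are placeholders):

```
# props.conf
[my_sourcetype]
TRANSFORMS-dropcde = drop_cde

# transforms.conf
[drop_cde]
REGEX = cde
DEST_KEY = queue
FORMAT = nullQueue
```

Removing only the one line would instead mean rewriting _raw (e.g. with SEDCMD in props.conf) rather than queue routing.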

HttpListener Socket error when using item.update

When attempting to update inputs.conf in an app using `item.update`, I see this warning message:

```
WARN HttpListener - Socket error from 127.0.0.1 while accessing /servicesNS/nobody/launcher/data/inputs/APPNAME/IntLab/: Broken pipe
```

inputs.conf still updates correctly, but I was wondering if anyone had any ideas about this?

HTTP Event Collector: Invalid token

I am having an issue with multiple sets of HTTP Event Collectors we have running, each of which throws an `{"text":"Invalid token","code":4}` message when I run a simple curl command against them:

```
[root@ip-10-0-17-167 ~]# curl -k https://<>:8088/services/collector/event -H "Authorization: Splunk 297B4C96-5B44-44D2-A9C1-873862AAD558" -d '{"event": "hello world"}'
{"text":"Invalid token","code":4}
```

This is happening with several tokens, all of which were previously working without issues. The only thing that has changed since I last tested the functionality (at build-out), as far as I am aware, was a minor upgrade from v6.3.3 to v6.3.9. With that said, I have tested both existing (pre-upgrade) and new (post-upgrade) tokens, both with the same result.

We are using a deployment server to generate the tokens from within the UI and deploy them out to the HTTP Event Collectors. On the deployment server, all of the tokens are listed under the splunk_httpinput app, including the one I am using in the curl command above:

```
[root@ip-10-0-16-52 splunk_httpinput]# cat /opt/splunk/etc/deployment-apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
port = 8088
enableSSL = 1
dedicatedIoThreads = 2
maxThreads = 0
maxSockets = 0
...
[http://adslot-lambda]
disabled = 0
index = app
sourcetype = adslot-lambda
token = 297B4C96-5B44-44D2-A9C1-873862AAD558
```

I also confirmed that the tokens, including the one used in the curl command above, are deployed to the HTTP Event Collector I am pointing at. It is listed under the splunk_httpinput app just like on the deployment server, and Splunk has picked up the inputs setting following the reload:

```
[root@ip-10-0-18-38 apps]# cat /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
port = 8088
enableSSL = 1
dedicatedIoThreads = 2
maxThreads = 0
maxSockets = 0
...
[http://adslot-lambda]
disabled = 0
index = app
sourcetype = adslot-lambda
token = 297B4C96-5B44-44D2-A9C1-873862AAD558

[root@ip-10-0-18-38 apps]# /opt/splunk/bin/splunk cmd btool inputs --debug list
...
/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf  [http://adslot-lambda]
/opt/splunk/etc/system/default/inputs.conf               _rcvbuf = 1572864
/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf  disabled = 0
/opt/splunk/etc/system/local/inputs.conf                 host = ip-10-0-18-38
/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf  index = app
/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf  sourcetype = adslot-lambda
/opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf  token = 297B4C96-5B44-44D2-A9C1-873862AAD558
```

Please let me know if additional information is needed, and thanks in advance for any assistance you can provide.

Splunk 'Agree to terms' function

$
0
0
Is there a function where a custom 'terms of use' can be displayed each time a user logs in, with the option to continue or log out?

How to configure authentication in a distributed environment

I have a handful of Splunk servers. I'm trying to understand: does authentication configuration work like deployed apps? Meaning, if I add a new role/LDAP group mapping on the search head cluster master or on the deployment server, should I expect that configuration to replicate to the search heads, indexers, etc.? Or do I need to create that role and mapping on each search head individually?
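
For reference, the role mapping itself lives in authentication.conf, so wherever it needs to exist, this is the shape of what has to land there (a sketch; the strategy name and LDAP DNs are placeholders):

```
# authentication.conf
[roleMap_MyLDAPStrategy]
admin = CN=Splunk Admins,OU=Groups,DC=example,DC=com
user  = CN=Splunk Users,OU=Groups,DC=example,DC=com
```

Packaging this file into an app that is pushed to the relevant instances is a common way to keep it consistent across a distributed deployment.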

Can I turn off the "too_small" sourcetype behavior?

I have a set of log files that, when they contain more than 99 events, have rules defined in props.conf to properly apply sourcetypes. Yet when the logs contain 99 or fewer events, a "[filename]-too_small" sourcetype gets assigned. When the files then grow to 100 events or more, they still have the incorrect sourcetype applied. Is there any way to stop this default action other than "padding" the logs with dummy events to reach at least 100? Basically, I would like Splunk to consult the rule stanzas in props.conf before resorting to the default action on small files. Thanks
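
The "-too_small" label comes from Splunk's automatic sourcetype learning, which only kicks in when no sourcetype has been assigned; one sketch of a workaround is to pin the sourcetype explicitly for those files so learning never runs (the path and sourcetype name are placeholders):

```
# props.conf (on the instance that first reads the files)
[source::/var/log/myapp/*.log]
sourcetype = my_app_logs
```

Setting `sourcetype = my_app_logs` directly in the inputs.conf monitor stanza accomplishes the same thing.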