Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

How do I break one event into multiple events by spaces?

I want the one event in the picture to be broken into many events, split on the blank spaces in between. How do I do this with props.conf? ![alt text][1] [1]: /storage/temp/216802-one-event.png
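One approach, assuming the gaps in the screenshot are blank lines, is to set `LINE_BREAKER` on the sourcetype in props.conf so Splunk splits the stream on a run of blank lines; the stanza name `my_sourcetype` below is a placeholder for your actual sourcetype:

    [my_sourcetype]
    SHOULD_LINEMERGE = false
    # Break events wherever two or more consecutive newlines appear
    # (the capture group is consumed and discarded)
    LINE_BREAKER = ([\r\n]{2,})

This has to be applied on the parsing tier (indexer or heavy forwarder) and only affects data indexed after the change.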

How do I chart rare values?

Hello! I'm fairly new to Splunk, and I'm using my Minecraft server logs to chart some data. I am having a hard time charting rare values. Here is the search I'm trying: `index=minecraft action=block_broken | rare block_type | chart count(block_type) over player by block_type useother=f` This does not work. I know I'm doing this incorrectly, but I'm not sure how, exactly. Any tips would be greatly appreciated!
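The likely problem is that `rare` replaces the events with a small summary table, so the `chart` that follows no longer has a `player` field to chart over. One hedged sketch (field names taken from the search above): find the rarest block types in a subsearch first, then chart the raw events filtered to those types:

    index=minecraft action=block_broken
        [ search index=minecraft action=block_broken
          | rare limit=10 block_type
          | fields block_type ]
    | chart count over player by block_type useother=f

The subsearch returns only `block_type` values, which the outer search treats as an OR filter.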

How to use Splunk to create a dashboard for Elasticsearch

How can I use Splunk to create a dashboard for Elasticsearch? I have all the data in an Elastic cluster, but I want to use Splunk instead of Kibana.

How to write a query to get the status of failed batch jobs indexed from a database log source

We have batch jobs with expected start and end times running on a server, and that data is being indexed into Splunk. For each job, we want to check its status: if a job has not started within, say, 20 minutes of its expected start time, we want to generate an alert and send an email to the respective team with configurable email content. Please help with a query for this.
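One common pattern, sketched here with assumed field names (`job_name`, `status`, `expected_start` as epoch time) and a hypothetical lookup `expected_jobs.csv` holding the schedule: compare the latest start event per job against its expected start and keep only jobs that are missing or more than 20 minutes late.

    | inputlookup expected_jobs.csv
    | join type=left job_name
        [ search index=batch_jobs status=started earliest=-1d
          | stats latest(_time) as actual_start by job_name ]
    | where isnull(actual_start) OR (actual_start - expected_start) > 1200

Saved as a scheduled alert, any returned rows trigger the alert, and the email alert action carries the configurable subject and message body.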

iRule script for LTM-F5 killed the load balancer. Please advise a solution

The iRule_http example provided in the documentation for "Configuring iRules for LTM" killed my client's load balancer. I don't have much information on this at the moment; I have asked the client for debug details. Until then, I was hoping to find someone here who is already aware of this issue and has a solution for it, or at least a workaround. Background info: in addition to the above, the client followed the instructions below to collect logs from F5-LTM: [Adding a remote syslog server using the Configuration utility][1] and [Configuring the BIG-IP system to log to the remote syslog server using TCP protocol][2]. They also followed the Splunk documentation, mainly: 1. Configure F5 for syslog 2. Configure iRules for LTM. Was there anything else needed to make this work? [1]: https://support.f5.com/csp/article/K13080#CU [2]: https://support.f5.com/csp/article/K13080#tcpsyslog

Dashboard can't read new CSS updates

Hi all, I'm new here. In the dashboard there is already a declared 'styles.css'. Here's my problem: whenever I add new updates to 'styles.css', they don't take effect (I have already restarted splunkweb). If I rename 'styles.css' to 'styles2.css', the dashboard reverts to its default look, as if no CSS was declared.

Splitting stats count results into 2 separate columns

Hi, can someone please help me with this? I've been trying and searching, but no luck. I want to split the "Delivered" field into two and run a stats count on each value. Ideally I want it to look like the below, so there is the total count, and then what makes up that total is split out:

    Count | True | False
    100   | 80   | 20

My search: `mcType=delivery Dir=Inbound Sender="*" | chart sparkline count by "Sender" | sort count desc` Hope it makes sense.
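Conditional counts with `eval` inside `stats` can produce the three columns directly; this sketch assumes the `Delivered` field holds the literal values `True` and `False`:

    mcType=delivery Dir=Inbound Sender="*"
    | stats count as Count,
            count(eval(Delivered="True")) as True,
            count(eval(Delivered="False")) as False
          by Sender
    | sort - Count

Each `count(eval(...))` only counts events where the condition evaluates to true, so `True + False` adds up to `Count` for each Sender.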

Adding additional column after grouping for JSON records

The incoming logs are stored in Splunk in JSON format. Example JSON records below.

**Entry 1:**

    { "data": [ { "endpoint":"ep_1", "service":"service_1", "status":"inactive" }, { "endpoint":"ep_2", "service":"service_1", "status":"inactive" }, { "endpoint":"ep_3", "service":"service_2", "status":"inactive" } ] }

**Entry 2:**

    { "data": [ { "endpoint":"ep_1", "service":"service_1", "status":"inactive" } ] }

The expected output for my search should be something like: ![alt text][1] When I search using the query:

    host=mashery_production "data{}.http_status_code"=inactive | eval endpoint='data{}.endpoint' | eval service='data{}.service' | stats count("data{}.status") as Count, values(service) by endpoint | where Error_Count > 0

the output I get is: ![alt text][2] It looks like the grouping is messed up. Please help. [1]: /storage/temp/216795-table1.png [2]: /storage/temp/216794-table2.png
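Because `data{}.endpoint`, `data{}.service`, and `data{}.status` are each multivalue fields spanning the whole array, they need to be zipped and expanded together before the `stats`, or values from different array elements get mixed. A hedged sketch, keeping the field names from the query above (the `"|"` delimiter is an arbitrary choice):

    host=mashery_production "data{}.status"=inactive
    | eval zipped=mvzip(mvzip('data{}.endpoint', 'data{}.service', "|"), 'data{}.status', "|")
    | mvexpand zipped
    | eval endpoint=mvindex(split(zipped, "|"), 0),
           service=mvindex(split(zipped, "|"), 1),
           status=mvindex(split(zipped, "|"), 2)
    | stats count(eval(status="inactive")) as Count, values(service) as service by endpoint

After `mvexpand`, each result row represents exactly one array element, so the per-endpoint grouping stays consistent. Note also that the original query filters on `Error_Count`, but the `stats` clause names the field `Count`.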

Unable to start Splunk on my macOS High Sierra system

I have installed Splunk on my Mac several times, but I still cannot start it. Here is a screenshot from when I run Splunk. Can anyone advise me on what is happening and how to resolve it, step by step? Thank you. ![alt text][1] [1]: /storage/temp/216797-screen-shot-2017-10-12-at-111446-am.png

Why search head peering / multiple search heads?

Hi, sorry in advance, I'm kind of new to Splunk. I'm wondering, maybe I'm missing something here, but why do I need multiple/peered search heads? As far as I know, a search head is a web interface with user/password authentication. Why not connect to one search head from different clients using the same credentials? That way dashboards and other objects are shared for all users. Thanks!

Why am I unable to extract fields from an error log using IFX?

I have an error log as follows. I would like to extract the ERROR and "Caused by" lines from the log below. When I try to extract them with IFX, I get an error stating: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." Am I doing something wrong? Screenshot attached as well. Sample data: 2017-09-06 11:12:38,415 [] [main] ERROR abc.connectivity.core.cdi.camel.CdiCamelContext: Error starting CamelContext() org.apache.camel.RuntimeCamelException: org.apache.camel.FailedToCreateRouteException: Failed to create route fixmlToMxmlRouteStart: Route(fixmlToMxmlRouteStart)[[From[jms-tibco:queue:{{jms.inp... because of Failed to resolve endpoint: concurrentConsumers=%5B%25+JMS..JMS_Input_Consumers+%25%5D&transacted=true&transactionManager=%23jtaTransactionManager&useMessageIDAsCorrelationID=true due to: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS..JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" at org.apache.camel.util.ObjectHelper.wrapRuntimeCamelException(ObjectHelper.java:1642) ... 40 lines omitted ... at abc.connectivity.client.ConnectivityRunnerBoot.main(ConnectivityRunnerBoot.java:277) Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route fixmlToMxmlRouteStart: Route(fixmlToMxmlRouteStart)[[From[jms-tibco:queue:{{jms.inp...
because of Failed to resolve endpoint: ?concurrentConsumers=%5B%25+JMS..JMS_Input_Consumers+%25%5D&transacted=true&transactionManager=%23jtaTransactionManager&useMessageIDAsCorrelationID=true due to: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS..JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:201) at org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:947) ... 9 lines omitted ... at abc.connectivity.core.cdi.camel.CdiCamelContext.start(CdiCamelContext.java:99) ... 40 more Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: aaaa://queue?concurrentConsumers=%5B%25+JMS..JMS_Input_Consumers+%25%5D&transacted=true&transactionManager=%23jtaTransactionManager&useMessageIDAsCorrelationID=true due to: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS..JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:590) at org.apache.camel.util.CamelContextHelper.getMandatoryEndpoint(CamelContextHelper.java:79) at org.apache.camel.model.RouteDefinition.resolveEndpoint(RouteDefinition.java:211) ... 4 lines omitted ... at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1052) at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:196) ... 
51 more Caused by: org.apache.camel.TypeConversionException: Error during type conversion from type: java.lang.String to the required type: int with value [% JMS..JMS_Input_Consumers %] due java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" at org.apache.camel.impl.converter.BaseTypeConverterRegistry.createTypeConversionException(BaseTypeConverterRegistry.java:610) ... 20 lines omitted ... Caused by: org.apache.camel.RuntimeCamelException: java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" ... 6 lines omitted ... Caused by: java.lang.NumberFormatException: For input string: "[% JMS..JMS_Input_Consumers %]" ![alt text][1] [1]: /storage/temp/216799-fieldextraction-error.png

Why am I finding a count difference with the timechart function?

Hi, when I run a search over 7 days, I get the correct count for all 7 days. But when I run it over 30 days, I find a difference in the count. I am left-joining 2 indexes and finally using the timechart command. May I know the reason? Thanks.

How to combine a search with a data model without the JOIN operator?

Hi experts, I'm trying to combine a normal search with a data model without the JOIN operator, because of the slow processing speed and the subsearch limitation of 50,000 results per search. I read in the .conf 2016 session by Nick Mealy (https://conf.splunk.com/files/2016/slides/let-stats-sort-them-out-building-complex-result-sets-that-use-multiple-source-types.pdf) that this is not possible because the data model command is a generating command. :( Does anybody have a solution or face the same problem? I think it is really important to be able to combine a data model and normal searches in an efficient way. Kind regards, Christopher
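One pattern that avoids `join` is to run both sides as separate result sets and merge them with `stats` on a shared key: `tstats` reads the data model directly, and `append` brings in the normal search. The data model, dataset, and field names below are placeholders:

    | tstats summariesonly=true count as dm_count from datamodel=My_DataModel by My_DataModel.src
    | rename My_DataModel.src as src
    | append
        [ search index=my_index | stats count as raw_count by src ]
    | stats values(dm_count) as dm_count, values(raw_count) as raw_count by src

Note that `append` is still subject to subsearch limits, but because both sides are pre-aggregated with `stats` before merging, the row counts are usually far below the raw-event counts that make `join` fall over.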

Why can't I add a new search head to my cluster?

Hi, I'm trying to connect a new search head to my master node. I'm able to add my indexers as search peers, but whenever I try to add the master node at *Settings > Indexer Clustering > Enable Indexer Clustering > Search Head Node*, I get the error: `Could not contact master. Check that the master is up, the master_uri=https://192.x.x.x:8089 and secret are specified correctly`. If I deliberately enter the security key wrongly, I'm able to connect, but it then gives the error `The searchhead is unable to update the peer information. Error = 'failed method=POST path=/services/cluster/master/generation/9E12B9AF-1ED1-4332-86BB-B6AD94F96D1B/?output_mode=json master=192.168.22.50:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=400 expected_response_code=2xx status_line="Bad Request" socket_error="No error" remote_error= In handler 'clustermastergeneration': Argument "host" is not supported by this handler.' for master=https://192.x.x.x:8089.`

Lookup table does not exist. It is referenced by configuration 'yyyy'

Hi, when configuring the lookup table, I receive the following error: "The lookup table 'xxxx' does not exist. It is referenced by configuration 'yyyyy'." My search:

    index=main sourcetype=yyyy [|inputlookup xxxx.csv |fields account_name] | chart count(username) by username

The objective is to find the accounts listed in xxxx.csv and count the number of times each user logs in from sourcetype yyyy. Please advise what the error could be.
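This error usually means the search references a lookup name that isn't defined (or isn't shared) in the app you're searching from: with the `.csv` suffix, `inputlookup` looks for an uploaded lookup *file* of that name, while the bare name looks for a lookup *definition*, so the file must exist and be visible under the right permissions. Separately, the subsearch should hand back the field under the name it has in the events; a sketch, assuming the events call the field `username`:

    index=main sourcetype=yyyy
        [ | inputlookup xxxx.csv
          | fields account_name
          | rename account_name as username ]
    | chart count by username

Without the `rename`, the subsearch filters on `account_name=...`, which matches nothing if the events only contain `username`.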

Multivalue table from JSON array

Hello everyone, I searched for a very long time on the internet and in the Splunk docs, and I didn't find what I want. I have this JSON array: "LeagueResult": { "Matchs": { "Team": "MANU", "Date": "2017-09-25T00:00:00", "Place": "HOM", "Scored": 0, "Conceded": 4, "Difference": -4, }, { "Team": "CHE", "Date": "2017-10-05T00:00:00", "Place": "AWA", "Scored": 5, "Conceded": 4, "Difference": 1, }, ... { "Team": "TOT", "Date": "2017-10-05T00:00:00", "Place": "HOM", "Scored": 1, "Conceded": 1, "Difference": 0, } And I want to obtain this table, or something like it: ![alt text][1] [1]: /storage/temp/217822-12-10-17-11-50-34-am.png I used the spath function, the mvzip function, and mvexpand, but I didn't succeed.

Drill down to dashboard with hidden table

Hey, I am trying to drill down from one dashboard to another and show a table with the selected category in the target dashboard. I have managed to make the general drilldown work, and it shows the corresponding category, but I also need to show a hidden table with category details in the target dashboard when a cell is clicked in the target dashboard's existing table. Sounds a bit confusing, so let me know if you have any questions! Any help would be appreciated :) Thanks!

User logon profiling

Hi, I am trying to get a matrix populated with the time ranges during which users connect successfully to a system. The raw data gives me the username, the timestamp (_time), and the event ID indicating a successful logon. Is there a way to, for instance, generate a list saying that user X usually connects from 3am to 7am, Monday to Friday, but user Y usually connects from 8am to 4pm, on Fridays only? My aim is to detect unusual logons. What is the best way to achieve this? Thank you.
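A minimal sketch of the profiling step: bucket each successful logon by weekday and hour, then count per user, which yields the matrix of each user's usual logon times. The index and `user` field name are assumptions; `EventCode=4624` is the Windows successful-logon event, swap in your own event ID if the source differs:

    index=wineventlog EventCode=4624
    | eval weekday=strftime(_time, "%a"), hour=strftime(_time, "%H")
    | stats count by user, weekday, hour
    | sort user, weekday, hour

For the anomaly-detection step, one approach is to save this profile to a lookup on a schedule, then flag incoming logons whose (user, weekday, hour) combination has little or no history in it.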

How to confirm a udp input is running?

Hi, I'm having issues with what should be a very basic setup. I have an appliance sending syslog messages to a heavy forwarder on port 514 using UDP. I've verified via tcpdump that the events are coming in. My inputs.conf is set up to listen on port 514, and nothing else is listening on it, but the events are not appearing in the indexer. I've checked over all-time and recent time, and manually sent some events via netcat. I do not see anything in the logs indicating that Splunk is even listening for this data. Should some message appear somewhere indicating that it's listening on port 514, similar to how it shows which logs are being watched? The heavy forwarder can talk to the indexer, as internal events are appearing. Inputs:

    [udp://514]
    connection_host = dns
    index = main
    sourcetype = syslog
    disabled = no
    queueSize = 1KB
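Two quick checks, assuming a Linux host with `$SPLUNK_HOME/bin` on the path: ask btool which UDP inputs Splunk actually loaded (and from which file), and confirm that splunkd owns the UDP socket. Note that binding to a port below 1024, such as 514, requires Splunk to run as root (or a port redirect via iptables), which is a common reason a UDP input fails silently:

    # Show the effective UDP input configuration and the file each setting came from
    splunk btool inputs list udp --debug

    # Confirm splunkd is bound to UDP 514 (run as root to see process names)
    netstat -ulnp | grep 514

If btool shows the stanza but nothing is bound, check splunkd.log for bind/permission errors around startup.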

Splunk App for DB Connect 3.x not forwarding data to index cluster

Hi, we have a heavy forwarder running DB Connect version 3.1.0 (also tried 3.1.1). It has the correct drivers installed and the inputs are correct: we can execute the SQL commands, and when turning on local indexing we are able to index data from all sources. What has been done on the DB Connect side? I changed the task server to use another port instead of the default (**now 1425**). We are using a rising column, and I can see that the value is rising even though no data is being indexed. The dbx_job_metrics events have a Read_count and a Write_count, as well as Status=complete. The indexes.conf file from the indexers has been copied to the HF. Prior to the upgrade we were using v2.4 on the same instance and the data was being indexed. From what I can see, DB Connect is using the HEC to index data, so I'm wondering if I need to configure any additional receiving on the index cluster. I have other HFs using DB Connect and their inputs are being forwarded to the IX cluster; however, they are using v3.0.1. I'm not sure where to look next. Please help :)