Questions in topic: "splunk-enterprise"

Splunk DB Connect - MS SQL Server - Cannot get schemas

Hi folks, I'm currently attempting to set up and configure Splunk DB Connect. So far I've gone through the installation steps, and everything looks to be configured properly; my connection and identity are set up. When I go to the Data Lab -> Explore Data tab, I am able to pick my connection, see my list of databases, and select a database. However, when it attempts to populate the schemas, it gives the error "Cannot get schemas". I've tried both the 4.2 driver recommended in the documentation and the latest driver published by Microsoft (6.4). My database is an Azure SQL Database. I have tried multiple users, including the database admin user. I have also attempted to run a query against the table with a fully qualified schema name, and that comes back with "Invalid Object Name". Has anyone else run into this, or have any thoughts as to what the issue might be?
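For what it's worth, one way to test whether the connection itself can run SQL while bypassing the schema browser is DB Connect's `dbxquery` command. This is only a sketch; the connection name `my_azure_conn` and the table name are placeholders:

    | dbxquery connection="my_azure_conn" query="SELECT TOP 5 * FROM dbo.MyTable"

If this also fails, the problem is likely in the connection or driver rather than in the Explore Data UI.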

How to implement Multi-Matching?

Within one event I have several XML tags that require encryption. Reviewing the Python code in this implementation, it looks for only the first match on each line, which results in only the first XML tag being encrypted. It's a bit beyond me to know how to refactor it so that, given a match-all-XML-tags regular expression, each tag gets appropriately encrypted, but that's what I'm looking to achieve. Thanks for the help!
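As an illustration of the multi-match idea (not the add-on's actual code; the tag names and `encrypt()` below are placeholders), Python's pattern `.sub()` with a replacement function processes every non-overlapping match on a line, unlike a single `re.search` that stops at the first one:

    import re

    def encrypt(value):
        # stand-in for the real encryption routine
        return "ENC(" + value + ")"

    # hypothetical tag names; the backreference \1 keeps open/close tags paired
    PATTERN = re.compile(r"<(ssn|card)>(.*?)</\1>")

    def encrypt_all_tags(line):
        # the lambda is called once per non-overlapping match,
        # so every tag on the line gets encrypted
        return PATTERN.sub(
            lambda m: "<%s>%s</%s>" % (m.group(1), encrypt(m.group(2)), m.group(1)),
            line)

    print(encrypt_all_tags("<ssn>123</ssn> text <card>456</card>"))
    # -> <ssn>ENC(123)</ssn> text <card>ENC(456)</card>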

append data from two indexes

I have two indexes:

- index #1 contains the raw event log. From it I calculate, for every domain, the number of events, so I have: domain_name, description, Event_count. For this index I use the time span from the time range picker of the search.
- index #2 contains aggregated information per domain: for each domain I have a query_count for each day. From this index I calculate more data for every domain_name, looking at a time span of the last 30 days.

What I want at the end of the day is a query that returns a table containing: event_domain, Event_count, Dates_Count, SumQueries, MaxQueries, avg30Days. I tried to use join, but some domains that appear in index #2 don't appear after the join. The query I use:

    index="event_raw_data"
    | join event_domain
        [ search index="domain_agg_info" earliest=-30d
          | eval epoch33days_ago=relative_time(now(), "-33d@d")
          | eval epochEventDays=strptime(date, "%Y-%m-%d")
          | where epochEventDays > epoch33days_ago
          | eventstats dc(date) as "Dates_Count" by event_domain
          | eventstats count(date) as "Record_count" by event_domain
          | eventstats max(query_count) as "MaxQueries" by date
          | eventstats max(Dates_Count) as "MaxDatesCount"
          | eventstats sum(query_count) as "SumQueries" by event_domain
          | eventstats avg(customer_count) as "AvgCustomerCount" by event_domain
          | eval AvgCustomerCount=round(AvgCustomerCount,0)
          | eval avg30Days=round(if(Record_count < 30, SumQueries/MaxDatesCount, SumQueries/Record_count))
          | eval avg30Days=avg30Days+1
          | eventstats max(query_count) as "MaxQueries" by event_domain
          | eval Ratio=round(MaxQueries/avg30Days,3)
          | where Ratio <= 5 ]
    | eventstats count as "Event_count" by event_domain
    | table event_domain,Event_count,Dates_Count,AvgCustomerCount,Ratio,SumQueries,MaxQueries,avg30Days
    | dedup event_domain,Event_count,Dates_Count,AvgCustomerCount,Ratio,SumQueries,MaxQueries,avg30Days
    | sort by Event_count desc
    | head 10

What do I need to change to get this data for every domain in index #1?
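One pattern that avoids `join` dropping domains is to compute each index's aggregates separately, `append` the result sets, and merge them with a second `stats` by `event_domain`. This is only a sketch and deliberately simplifies away the ratio logic:

    index="event_raw_data"
    | stats count as Event_count by event_domain
    | append
        [ search index="domain_agg_info" earliest=-30d
          | stats dc(date) as Dates_Count sum(query_count) as SumQueries max(query_count) as MaxQueries by event_domain ]
    | stats values(*) as * by event_domain
    | eval avg30Days=round(SumQueries/Dates_Count)
    | sort - Event_count

Because `append` keeps rows from both sides, domains that exist in only one index still appear after the final `stats`, with null values in the fields the other index would have supplied.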

Planned outage graph logic

I want to build logic for SEARCH-2. **My SEARCH-1** gives me the start and end timestamps of a planned outage. **My SEARCH-2** gives me the availability graph of my server. Now, in the availability graph, if a timestamp falls in the time range of the planned outage (given by SEARCH-1), I should mark it as "planned outage" (so that no one is confused about whether it's planned or an actual outage). In pseudo-SPL:

    index=avail sourcetype=availability
    | if availability == 0
    | check if the time of the event is in the time range given by subsearch "My SEARCH-1"
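A sketch of one way to do this without the `if` pseudo-syntax: pull the outage window out of SEARCH-1 once, copy it onto every availability event, and label events with `case()`. The field names (`availability`, `outage_start`, `outage_end`) and the `<SEARCH-1>` placeholder are assumptions to adapt:

    index=avail sourcetype=availability
    | appendcols
        [ search <SEARCH-1>
          | stats earliest(_time) as outage_start latest(_time) as outage_end ]
    | eventstats first(outage_start) as outage_start first(outage_end) as outage_end
    | eval state=case(availability==0 AND _time>=outage_start AND _time<=outage_end, "planned outage",
                      availability==0, "outage",
                      true(), "up")
    | timechart count by state

`appendcols` puts the window on the first row only; `eventstats first(...)` then propagates it to every event so the `case()` comparison works row by row.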

How to exclude events where the date is greater than today?

Hi, is there a way to exclude events in a search where a specific date field (not the timestamp) is greater than today? So I only want to see events where the specified date field is today or earlier.
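A minimal sketch, assuming the field is called `order_date` and formatted `%Y-%m-%d` (adjust both to your data): convert the field to epoch time with `strptime` and keep only events dated before the start of tomorrow:

    ... your search ...
    | eval date_epoch=strptime(order_date, "%Y-%m-%d")
    | where date_epoch < relative_time(now(), "+1d@d")

`relative_time(now(), "+1d@d")` adds a day and then snaps to the day boundary, i.e. midnight tonight, so "today or earlier" passes and future dates are excluded.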

How to monitor proxy upload data split by user, greater than say 1 GB, with Splunk for Blue Coat ProxySG?

I want to monitor proxy upload volume split by user, alerting when a user has uploaded more than, say, 1 GB in the last 24 hours. I'm not sure how to do this. This is what I have so far:

    index="proxy_logs" time="*" filter_results=OBSERVED protocol="*" url="*" upload="*" user="*" |
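A sketch of the aggregation step, assuming `upload` holds a byte count per event (if it is KB or MB, scale the threshold accordingly): sum per user over the last 24 hours, filter at 1 GB, and save the search as an alert that triggers whenever it returns results:

    index="proxy_logs" filter_results=OBSERVED user=* earliest=-24h
    | stats sum(upload) as total_upload_bytes by user
    | where total_upload_bytes > 1073741824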

I get a rollback error during Splunk Enterprise installation on Windows 10 every time.

I start the Splunk Setup Wizard, check "domain account", and enter the admin password; the install gets almost to the end, and then the rollback action starts. At the end it says "Splunk Enterprise Wizard ended prematurely because of an error" without saying what the error was. Can anyone guide me, please?
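One way to capture the missing error detail is standard Windows Installer verbose logging (not Splunk-specific; the MSI filename below is a placeholder for your actual download). Run the installer from an elevated command prompt and then search the resulting log for "Return value 3", which marks the failing action:

    msiexec /i splunk-enterprise-x64.msi /l*v C:\temp\splunk_install.log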

Trouble with custom index

Hi all, I created a custom index with default parameters and assigned it to a files/folder monitor input. After adding files to the folder, I can see the index info updating under Indexes: ![alt text][1] But the data summary is empty, and a search for * returns nothing as well: ![alt text][2] What's wrong? The forwarder is also sending data (I can see that because the index parameters change), but the data is not searchable. If I delete the custom index and set up the monitor without it, everything works correctly. Please help me with this problem. [1]: /storage/temp/251038-screenshot-1.jpg [2]: /storage/temp/251039-screenshot-2.jpg
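Two quick checks, as a sketch (replace `my_custom_index` with your index name). A bare `*` search only covers the indexes your role searches by default, and a newly created custom index usually is not among them, so search it explicitly and compare per-index event counts:

    index=my_custom_index earliest=0

    | eventcount summarize=false index=* | search index=my_custom_index

If `eventcount` shows events but `index=...` returns nothing, look at role permissions; if it shows zero, the input side is the problem.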

How would I use my own data on Splunk Free with ML Toolkit?

Hi, I am using the search below on data that I have uploaded, and I get 0 events. When I go to Datasets/Tables, I can see this dataset. Why does it not work when I call it within the ML Toolkit? Thanks!

    | inputlookup weeklyTrends-sundayGrowthUnderlying.csv
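One thing worth checking, as a sketch: whether the lookup file is shared beyond the app it was uploaded in, since an app-private lookup returns nothing when called from the ML Toolkit app context. The REST endpoint below is standard, though the exact ACL fields shown are worth verifying in your version:

    | rest /servicesNS/-/-/data/lookup-table-files
    | search title="weeklyTrends-sundayGrowthUnderlying.csv"
    | table title eai:acl.app eai:acl.sharing

If `eai:acl.sharing` is `app` or `user`, promoting the file to global sharing should make it visible inside the ML Toolkit.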

Problem with timechart showing portscanning by srcip

I wanted a timechart to show port scanning of Juniper routers, but I've run into a snag that I can't figure out. The syslog message from the router follows this format:

    2018-06-25T17:19:51+00:00 PFE_FW_SYSLOG_IP: FW: D (tcp|udp) (1 packets)

I'm defining a port scan as any srcip that hits any router on 10 or more distinct ports within a 30-second window. Here's the Splunk query I'm using:

    sourcetype=syslog PFE_FW_SYSLOG_IP AND " D " AND NOT (" 3784 " OR " 179 ")
    | rex field=_raw "(?<srcip>\d+\.\d+\.\d+\.\d+) (?<dstip>\d+\.\d+\.\d+\.\d+) (?<srcport>\d+) * (?<dstport>\d+)"
    | where dstport>=1 AND dstport<=30000
    | bucket span=30s _time
    | eventstats dc(dstport) AS port_scan by srcip, dstip, _time
    | where port_scan > 10
    | timechart dc(dstport) by srcip useother=f usenull=f span=30s

The timechart works properly when I've selected 8 hours of data, but stops working beyond 10 hours of previous data. If I slide the time selection to specify the earlier time range directly, the timechart shows srcips that meet the criteria that were not previously shown. Any clues on how to get this to work?

Logs being ingested with the previous date as there is no date in the timestamp

The logs don't contain a date, so the events ingested into Splunk are being assigned a previous date. Here are some of the events as they appear in Splunk:

    05:50:41.426: GenHttpRequest() with id: 216526 created
    05:50:41.426: HttpSocket selected for http://10.93.78.16:8800/
    05:50:36.715: GenHttpRequest with id: 216525 destroyed
    05:50:36.715: Socket fd=956, Message Length:224

What would be the best props I can configure to get the correct date to show up in Splunk?
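A sketch of props.conf for this, applied on the parsing tier (the sourcetype name is a placeholder): either tell Splunk exactly which part of the event is the clock time, or skip event-time extraction entirely:

    [your_sourcetype]
    # parse only the clock portion; Splunk supplies the date itself
    TIME_PREFIX = ^
    TIME_FORMAT = %H:%M:%S.%3N
    MAX_TIMESTAMP_LOOKAHEAD = 13

    # alternative: stamp events with index time instead
    # (acceptable when ingest lag is small)
    # DATETIME_CONFIG = CURRENT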

How do I find .conf files on a Splunk instance that is only accessible via a URL?

I need to locate savedsearches.conf on a Splunk instance that I can only reach via a URL (i.e., Splunk Web). If there is an app that makes this possible, that would be great too. I essentially want to copy these alerts from the online instance to another instance; however, at the moment I can only obtain the saved search properties via job inspection, which isn't in the format I want.
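If you can log in to Splunk Web, one sketch of a workaround is the `rest` search command, which exposes the savedsearches.conf content without filesystem access:

    | rest /servicesNS/-/-/saved/searches
    | table title eai:acl.app search cron_schedule actions

The `search` column holds the SPL string you would paste into the other instance's savedsearches.conf.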

Help with a search based on a lookup table

Hello all, I have a lookup table that I created that contains only IP addresses and hostnames. I want to run the following search against the lookup table, but I am not getting the results I expect:

    index=_internal sourcetype=splunkd
        [ inputlookup dmzhosts.csv | table ip | rename ip as search | format ]
        group=tcpin_connections NOT eventType=*
    | stats max(_time) as last_connected, sum(kb) as sum_kb by guid, hostname
    | addinfo
    | eval "Source Host" = hostname
    | eval ttnow = now()
    | eval Current = strftime(ttnow,"%m-%d-%Y %H:%M:%S")
    | eval Status = if(isnull(sum_kb) or (sum_kb <= 0) or (last_connected < (info_max_time - 60)), "Not Reachable", "active")
    | eval "Last Connected" = strftime(last_connected,"%m-%d-%Y %H:%M:%S")
    | where Status = "Not Reachable"
    | table "Source Host" "Last Connected" Current Status

The search runs, but I know it isn't really working: the lookup table has 160 IP addresses, and the events only show 46 source IPs. What I really need, it seems, is something like a loop, so that the search takes each IP from the lookup table in turn and then provides a list of all the ones that are missing at the end. Ideas? Thanks, ed
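A sketch of the "report the missing ones too" pattern, assuming the event field is `sourceIp` (adjust the renames to match your data): aggregate the events, `append` the full lookup, and merge with a second `stats`; IPs that never produced events fall out with a null `last_connected`:

    index=_internal sourcetype=splunkd group=tcpin_connections
        [ | inputlookup dmzhosts.csv | table ip | rename ip as sourceIp | format ]
    | stats max(_time) as last_connected by sourceIp
    | append [ | inputlookup dmzhosts.csv | table ip | rename ip as sourceIp ]
    | stats max(last_connected) as last_connected by sourceIp
    | eval Status=if(isnull(last_connected), "Not Reachable", "active")

No loop is needed: the `append` guarantees all 160 lookup rows survive to the final table, whether or not events were found for them.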

Has anyone automated UF upgrade on Linux servers?

Hi all, we have nearly 500 UFs on Linux hosts in our environment. We are planning an upgrade; has anyone automated the process of upgrading the UF on Linux hosts, and if so, can you share how? Thanks, Sree

Splunk universal forwarder

Developers are sending a log in JSON format, but splunkforwarder is reading the log as single-line text. What might the issue be? Any help is appreciated. Thanks in advance.
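A sketch of props.conf on the forwarder, assuming one JSON object per line (the sourcetype name is a placeholder): `INDEXED_EXTRACTIONS = json` makes the universal forwarder parse the structure instead of shipping it as plain text:

    [app_json]
    INDEXED_EXTRACTIONS = json
    SHOULD_LINEMERGE = false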

Deploying Splunk to Oracle Cloud Platform

We currently have Splunk Enterprise deployed on-premises and are considering implementing our deployment on the Oracle Cloud Platform. Does Splunk support such an implementation at all, and is there any best-practices documentation available for guidance on how to proceed?

How do you search for events that match an exact raw-text string?

Hello, I have the search below:

    index="cs_test" "Splunktest" "Refund succeeded" OR *"action"=>"refund"*

I want to return events that contain either "Refund succeeded" or "action"=>"refund". The problem is that logs containing only " => " or "refund" are also being returned. How do I return only results that contain the exact string "Refund succeeded" or "action"=>"refund"? Example raw text:

    "status"=>"pending", "action"=>"refund", "convert_to_cash_url"=>nil}], "v2_return_service_enabled"=>true, "inventory_service_id"=>"voucher", "order_reversal_url"=>"/order_reversal/refund", "is_expiration_extendable"=>false, "can_partial_refund"=>false, "tradable"=>"ineligible", "merchant_payment_text"=>"Continuous",

Thanks
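A sketch using a post-filter, which sidesteps how quotes inside search terms get tokenized: `regex` keeps only events whose _raw matches either literal string (quotes escaped with backslashes):

    index="cs_test" "Splunktest"
    | regex _raw="Refund succeeded|\"action\"=>\"refund\""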

How to set two tokens off one dropdown in dashboard?

New fun dashboarding issue: I'm trying to set two different tokens off one dropdown. Is this possible?

I have a dropdown input with a token called `$application$`. I have one dashboard that summarizes things by IP address, and its drilldown is set based on a condition: if you click the Total at the bottom of the table it sets one thing; otherwise it uses `$click.value$`. This drives a second dashboard that uses a KV store lookup, prefiltered using a rather clever (I thought, anyway) subsearch to set the where clause. This subsearch uses `$application$` to narrow the list of IP addresses it initially looks at. I'm trying to make it so that if I change the dropdown backing `$application$`, the search updates and reruns. So this looks something like this right now:

Default value of `$ip$`:

    ([subsearch stuff | where application="$application$" | return 1000 IP_Address])

Search 1 - By IP address:

    some search here | where application=$application$ | stats count by IP_Address | addcoltotals labelfield="IP_Address"

Drilldown: clicking Total sets `$ip$` to `([subsearch stuff | where application="$application$" | return 1000 IP_Address])`; clicking a row builds it from `$click.value$` as `"IP_Address="` plus the clicked value.

Search 2 - Details:

    | inputlookup kvstorelookup where $ip$ | do some stuff

So as of right now, as long as I don't change application, I can get Search 1 to affect Search 2 to my heart's content. It will happily switch the value back and forth between `IP_Address=someip` and the subsearch `([subsearch stuff | where application="$application$" | return 1000 IP_Address])`, but when I change the value of `$application$`, I have to re-click "Total" in Search 1 in order to update the value of `$application$` in Search 2. Effectively, what I would like is this: when the value of `$application$` changes, overwrite the value of `$ip$` back to the subsearch value with the new application defined.

Oh, the reason I am using the where clause on the KV store at all is that without it the search takes 3x as long (45 seconds instead of 15 seconds), and once I overwrite the value of `$ip$` to a single IP it drops further to about a 3-second search. This greatly enhances the user experience, if I can just get this last piece to work.
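A sketch in Simple XML (the elements are standard; the search inside `<set>` is your existing subsearch): a `<change>` block on the dropdown can overwrite `$ip$` every time `$application$` changes, which is exactly the "reset back to the subsearch" behavior described. Inside `<change>`, `$value$` refers to the newly selected dropdown value:

    <input type="dropdown" token="application" searchWhenChanged="true">
      <label>Application</label>
      <change>
        <set token="ip">([subsearch stuff | where application="$value$" | return 1000 IP_Address])</set>
      </change>
    </input>

The table drilldown can still overwrite `$ip$` with a single IP on click; the `<change>` handler simply wins again the next time the dropdown moves.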

Log files not indexing.

When I configured log file monitoring, it worked only on that day until 11:59 PM, and after that no events were indexed. Please advise. In splunkd.log I have been getting this message:

    06-25-2018 17:12:21.197 -0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/amz/xyz/logs/sap/prod1/cbc/xyz_3.0.log-2018.05.11.gz'.
    06-25-2018 17:12:31.201 -0400 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/amz/xyz/logs/sap/prod1/cbc/xyz_3.0.log-2018.05.11.gz'.

inputs.conf:

    [monitor:///opt/amz/xyz/logs/sap/prod1/cbc/xyz_3.0.log-*]
    whitelist = OrderFulfillment_3\.0\.log-\d{4}\.\d{2}\.\d{2}
    disabled = false
    index = main
    sourcetype = sap
    initCrcLength = 256
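If the rotated files are being skipped because their initial CRCs collide with files Splunk has already seen, one sketch of a fix is to salt the CRC with the file path. Note that `crcSalt = <SOURCE>` (a literal setting value, not a placeholder) can cause renamed files to be re-indexed, so test it on one host first:

    [monitor:///opt/amz/xyz/logs/sap/prod1/cbc/xyz_3.0.log-*]
    crcSalt = <SOURCE>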

"Parameter name: Path is not readable" - Splunk Add Monitor Command Error

Hello Team Splunk, I am trying to add a monitor for a log file. When I do this as either the 'splunk' user or the 'root' user, I receive the following error: "**Parameter name: Path is not readable.**" I noticed that as the 'splunk' user I cannot read the file with *vi*; however, I can read it as the root user. So why would I receive this error if the 'root' user can read the file and I am running the ./splunk program as 'root'? I also noticed that the log files I am trying to forward are on a network file system mounted on the operating system (OS). I am not sure whether this mount makes a difference. Regards, rogue_carrot