
How to re-index / sync new data from directories which are monitored?

Hi, each day I download new logs into directories that are monitored. I would like to know how to force Splunk to add these new logs right after they are downloaded. PS: I don't want to re-index my whole directory, just the new logs, so please don't answer "splunk clean eventdata -index _thefishbucket".
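
A monitor input normally picks up new files in a watched directory on its own, with no forced re-index needed; a minimal sketch of the relevant inputs.conf stanza, with the path and index name illustrative:

```
# inputs.conf - a sketch; the path and index are illustrative
[monitor:///var/log/downloads]
disabled = false
index = main
# Splunk tails the directory continuously: files that appear later are
# indexed as they arrive, and files already seen are tracked in the
# fishbucket so they are not indexed twice.
```

If newly downloaded files are not showing up promptly, `$SPLUNK_HOME/bin/splunk list inputstatus` on the forwarder shows what the tailing processor has seen for each monitored path.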

Create a search template?

Hi, I need to create a search template using Splunk, so I want to know what steps I have to follow. Must I create an app? Is there an easy way without using XML?
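
One XML-free option is a search macro, created under Settings > Advanced search > Search macros or directly in macros.conf; a minimal sketch, with the macro name, arguments, and search body illustrative:

```
# macros.conf - a sketch; names and the search body are illustrative
[errors_by_host(2)]
args = index_name, level
definition = index=$index_name$ log_level=$level$ | stats count by host
```

It can then be called from any search as `errors_by_host(web, ERROR)` wrapped in backticks.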

Unable to send access.log events to the web index. Hosts should be www1, www2, www3

Hi, I have created 2 indexers in an AWS environment, with 2 forwarders and 1 search head. I created the indexes on the search head/indexers using the GUI, giving the configuration shown below. I am not able to send access.log from /opt/log/www*/access.log to the web index; please advise how I can fix it. If I send it to the main index it works, but not to any other newly created index.

Configuration

Search head, deployment apps:

```
/opt/splunk/etc/deployment-apps
[root@ip-172-31-19-169 deployment-apps]# ls -plrt
total 8
-r--r--r-- 1 506  506   307 Jul 10 03:26 README
drwx------ 4 root root 4096 Aug 17 11:06 _server_app_eng_webservers/
```

inputs.conf (in /opt/splunk/etc/deployment-apps/_server_app_eng_webservers/local/):

```
[root@ip-172-31-19-169 local]# cat inputs.conf
[monitor:///opt/log]
blacklist = secure.log
disabled = false
index = web
sourcetype = access_combined_wcookie
whitelist = www*
```

Indexer:

```
[root@ip-172-31-29-204 etc]# cat ./apps/search/local/indexes.conf
[web]
coldPath = $SPLUNK_DB/web/colddb
coldToFrozenDir = /opt/fozen/web
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/web/db
maxDataSize = 300
maxTotalDataSizeMB = 6000
thawedPath = $SPLUNK_DB/web/thaweddb
```

Forwarder:

```
[root@ip-172-31-17-211 www1]# pwd
/opt/log/www1
-rw-r--r-- 1 root root 315210 Aug 17 05:21 access.log
```

regards smdasim
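
One thing worth ruling out first: the web index has to exist on every indexer the forwarders send to, and an inputs.conf inside a deployment app only takes effect once it has been deployed to the forwarders. A sketch of checks, assuming shell access to each box:

```
# On each indexer: confirm the "web" index is actually defined there
$SPLUNK_HOME/bin/splunk btool indexes list web --debug

# On each forwarder: confirm the deployed monitor stanza is in effect
$SPLUNK_HOME/bin/splunk btool inputs list monitor:///opt/log --debug
```

From the search head, a search like `index=_internal (log_level=WARN OR log_level=ERROR) web` over the relevant hosts often surfaces messages about events arriving for an index the indexer does not have configured.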

Multiselect option not getting displayed if the options are chosen in a different order

Hi, I have a dashboard with a multiselect input:

(The multiselect's Simple XML was stripped by the page; its choices are Latency, Throughput, Error, and All.)

The problem here is that if I select Latency first, the Latency panels get displayed. If I select Throughput first and Latency second, Latency does not get displayed. Could someone please help me out here?
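
For reference, a minimal sketch of a multiselect whose token stays usable regardless of selection order, with the token name and values illustrative:

```
<input type="multiselect" token="metric">
  <label>Metric</label>
  <choice value="latency">Latency</choice>
  <choice value="throughput">Throughput</choice>
  <choice value="error">Error</choice>
  <choice value="*">All</choice>
  <valuePrefix>metric="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <default>*</default>
</input>
```

Order sensitivity is often caused by a `<change>` handler with `<condition value="Latency">`, which only fires on an exact match against the whole delimiter-joined token; a `<condition match="...">` expression that tests whether the value appears anywhere in the string handles any selection order.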

An error occurred (AccessDenied) when calling the AssumeRole operation: Roles may not be assumed by root accounts. Please make sure the AWS Account and Assume Role are correct

I am getting this error while configuring the AWS add-on with Splunk. Let me know if you have any solution.
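
The error text is AWS-side: IAM roles cannot be assumed with root-account credentials, so the add-on needs keys for a dedicated IAM user that is allowed to assume the role. A sketch of verifying this outside Splunk with the AWS CLI, with the ARN and session name illustrative:

```
# Run with the IAM user's access keys, not the root account's.
# If this succeeds, the same keys and role ARN should work in the add-on.
aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/SplunkAddonRole \
    --role-session-name splunk-test
```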

How to get distinct count of a field only for the latest events?

I'm constantly feeding my Splunk instance with .csv sources, all of them with a pattern in their name: "Data1.csv", "Data2.csv", "Data3.csv", etc. These CSVs have a table like: _time | Extracted_Host | Info1 | Info2 | Info3. How can I search for the distinct count of Extracted_Host, counting only the most recently submitted events? For example: if the latest CSV is called Data5.csv, I want my search to get the distinct count of Extracted_Host in Data5.csv; if it is Data6.csv, I want the distinct count of Extracted_Host in Data6.csv.
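
A sketch of one approach, with the index name illustrative: a subsearch picks the most recently indexed source file, and the outer search computes the distinct count over just that file:

```
index=mydata source="*Data*.csv"
    [ search index=mydata source="*Data*.csv"
      | stats max(_indextime) as it by source
      | sort - it
      | head 1
      | fields source ]
| stats dc(Extracted_Host) as distinct_hosts
```

The subsearch returns a single source value, which the outer search applies as a source="..." filter.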

splunk index cuts out some lines

Hi, I am testing a Splunk config on my local machine before implementing it in production. I am indexing a JSON file of about 5000 lines. However, when it is indexed I get one event with only about 138 lines if I turn on SHOULD_LINEMERGE = true in props.conf. If I set it to false, I get about 218 events of about 2-3 lines each. How can I get Splunk to index all of the lines? I don't really care if it shows as one event or as multiple events; I just want to see the entire content of the file. Here is my props.conf:

```
[default]
CHARSET = UTF-8
LINE_BREAKER_LOOKBEHIND = 100
LINE_BREAKER =
TRUNCATE = 100000000000000000000
DATETIME_CONFIG = /etc/datetime.xml
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
HEADER_MODE =
MATCH_LIMIT = 100000
DEPTH_LIMIT = 1000
MAX_DAYS_HENCE = 2
MAX_DAYS_AGO = 2000
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_TIMESTAMP_LOOKAHEAD = 128
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE = Path=
BREAK_ONLY_BEFORE_DATE = True
MAX_EVENTS = 6000000
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
TRANSFORMS =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
LEARN_SOURCETYPE = true
LEARN_MODEL = true
maxDist = 100
AUTO_KV_JSON = true
detect_trailing_nulls = false
sourcetype =
priority =
```
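
If the goal is simply to get the whole file in without truncation, a dedicated sourcetype is usually cleaner than editing [default]; a minimal sketch, with the sourcetype name illustrative:

```
# props.conf - a sketch; "my_json" is an illustrative sourcetype name
[my_json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
# per-event byte limit; 0 disables truncation entirely
TRUNCATE = 0
```

Note that with SHOULD_LINEMERGE = true an event is capped both by MAX_EVENTS (lines merged per event) and by TRUNCATE (bytes per event), so a single giant event usually stops at whichever limit it hits first.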

HEC Sourcetype

Hello everyone! I just have a brief question regarding the HEC input. Our primary data input is the HEC. For new applications that want to forward through our deployed Heavy Forwarder, we must first configure a token for them and set a sourcetype. We're advocating for our applications to send data in JSON format; however, if I were to select the _json sourcetype, this would not be correct. To give an example of how their logs would look, here's a JSON object:

```
{
    "time": 1426279439,  // epoch time
    "host": "localhost",
    "source": "datasource",
    "sourcetype": "txt",
    "event": "xx.xxx.xxx.xx /web/link/goes/here error 404"
}
```

I realize that the "event" attribute can be broken down into more key/value pairs, but most applications that want to integrate with our service may not want to separate everything out of their logs into key/value pairs, since some applications will not have a clear way of doing that. If we were to add extra extractions to the "event", it would modify the **_json** sourcetype (which we wouldn't want). We're assuming the best way around this problem is to duplicate the _json sourcetype and rename it so that we can add additional extractions? Thanks in advance!
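
That clone-and-extend approach is straightforward: a custom sourcetype keeps the JSON key/value handling while scoping the extra extractions to it alone. A minimal sketch, with the sourcetype and extracted field names illustrative:

```
# props.conf - a sketch; "app_json" and the extracted fields are illustrative
[app_json]
KV_MODE = json
# pull pieces out of the free-text "event" payload at search time
EXTRACT-event_fields = (?<client_ip>\d{1,3}(?:\.\d{1,3}){3})\s+(?<uri>\S+)\s+error\s+(?<status>\d{3})
```

Each HEC token can then default to app_json, so applications that never structure their "event" payload still get the extractions without touching _json.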

Lookup file data retention question

Hi Team, I have a requirement to show the last 90 days of app login stats broken down by day. I have a lookup table/definition created, and I have a saved search that writes the summary data for the previous day onto the lookup every morning at 5 AM. My question: is there any time limit after which the lookup starts truncating or deleting data? I expect the data would remain intact, but I wanted to check with a wider audience to see what your experience has been. I understand a better way would be to create a summary index or KV store; I am not going that route as it would need 2 weeks to get out to production in my space, and I need something quick. Please share your thoughts. Mine is a clustered environment (both SH and indexers), version 6.6+. Thanks!
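
A CSV lookup has no built-in retention: rows persist until the file is rewritten, so the 90-day window has to be enforced by the search that maintains it. A sketch of a daily saved search that appends yesterday's summary and prunes anything older than 90 days, with index, field, and lookup names illustrative (it assumes login_stats.csv already exists):

```
index=app sourcetype=login earliest=-1d@d latest=@d
| stats count as logins dc(user) as unique_users
| eval day=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d")
| inputlookup append=true login_stats.csv
| where strptime(day, "%Y-%m-%d") >= relative_time(now(), "-90d@d")
| outputlookup login_stats.csv
```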

Limitation on number of boolean clauses within search string

Is there a limit on the number of boolean clauses (i.e. OR, AND) within a search string? For example: | search 'user1' OR 'user2' OR 'user3' OR ... 'user180'. It seems like the color of OR changes from orange to black after a certain number. (I know I need to figure out a way to shorten the string due to blah, blah...)
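
The highlighter changing color is not by itself a sign the search is invalid, but long OR chains are easier to manage in other forms. On recent Splunk versions the IN operator compacts the list (field and values illustrative):

```
| search user IN ("user1", "user2", "user3", "user180")
```

On any version, a lookup-driven subsearch expands into the OR chain automatically, assuming a watched_users.csv lookup with a user column:

```
index=auth
    [ | inputlookup watched_users.csv
      | fields user ]
```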

How to find RequestPerSec by country using the client IP address?

Hi, I have a query which lists the avg, max & P95 requests per second for the selected time range:

```
index=test client_ipaddress=*
| eval requestcount=1
| timechart per_second(requestcount) AS RequestPerSec
| eventstats max(RequestPerSec) as peakRequestPerSec
| eval peakTime=if(peakRequestPerSec==RequestPerSec,_time,null())
| timechart span=1m avg(RequestPerSec) as avgRequestPerSec max(RequestPerSec) as peakRequestPerSec p95(RequestPerSec) as p95RequestPerSec
| fieldformat peakTime=strftime(peakTime,"%m/%y %H:%M")
| eval avgRequestPerSec=round(avgRequestPerSec,2)
| eval peakRequestPerSec=round(peakRequestPerSec,2)
| eval p95RequestPerSec=round(p95RequestPerSec,2)
| rename avgRequestPerSec as "Average Requests/Sec" peakRequestPerSec as "Max Requests/Sec" p95RequestPerSec as "P95 Requests/Sec"
```

The question here is: can I show RequestPerSec by country using the client_ipaddress field present in the events? How do I do that? Please let me know.
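
The iplocation command enriches each event with a Country field derived from an IP address; a sketch, assuming client_ipaddress holds public IPv4 addresses:

```
index=test client_ipaddress=*
| iplocation client_ipaddress
| eval requestcount=1
| timechart span=1m per_second(requestcount) as RequestPerSec by Country
```

Private or unrecognized addresses come back with no Country, so those events land in a NULL series.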

Use predict with split by function?

Is there a way to split by a field when using predict? I can predict on a single series, e.g.:

```
| timechart span=1h max(values) as values
| predict values
```

How about:

```
| timechart span=1h max(values) as values by user
```
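
predict has no by clause of its own, but after `timechart ... by user` each distinct user becomes its own column, and predict accepts several field names at once. A sketch, assuming the data contains users alice and bob (illustrative names):

```
| timechart span=1h max(values) as values by user
| predict alice as pred_alice bob as pred_bob
```

The column names come from the user values, so quote them in the predict call if they contain spaces.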

Access a token's label in JavaScript?

I have a dropdown input field with the associated token="offset". I want to use the label associated with the token value in a JavaScript function. How do I access it?
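
A sketch using SplunkJS in a dashboard extension, assuming the dropdown has id="offset_input" in the Simple XML and its choices are static; dynamic (search-populated) choices are not in settings and would have to be read from the populating search's results instead:

```
require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    var tokens = mvc.Components.get('default');
    var input = mvc.Components.get('offset_input'); // the input's id in the XML

    tokens.on('change:offset', function () {
        var value = tokens.get('offset');
        // static <choice> entries are exposed on the input's settings
        var choices = input.settings.get('choices') || [];
        var match = choices.filter(function (c) { return c.value === value; })[0];
        var label = match ? match.label : value;
        console.log('offset label:', label);
    });
});
```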

Report is not sending an email

I can see the report under Searches and Reports, but I don't know why it's not triggering the email.
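
Both the scheduler and the email action log to _internal, which usually shows whether the report ran at all and whether sending failed; a sketch, with the report name illustrative:

```
index=_internal sourcetype=scheduler savedsearch_name="My Report"
```

and, for the mail action itself:

```
index=_internal sendemail
```

If the schedule fired but no mail went out, the SMTP settings under Settings > Server settings > Email settings are the next thing to confirm.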

DB Connect 3.1.3 Hive connection

I am trying to install the Hive driver so that I can send Splunk data to the company's Hadoop instance. I found two articles with good details; each uses a different method. I tried both and haven't yet been successful. 1) https://answers.splunk.com/answers/575488/db-connect-3-hive-connection.html The answer in this thread seems to indicate that I can drop the connection jar file, HiveJDBC4.jar, into $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/ and then put the rest of the supporting files from the Cloudera zip file into a new directory, $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/HiveJDBC4-lib/. I tried this and then reloaded the drivers page in Splunk Web; the Hive driver still shows as not installed. 2) https://www.splunk.com/blog/2015/02/25/splunk-db-connect-cloudera-hive-jdbc-connector.html This blog post indicated that the driver should go in $SPLUNK_HOME/etc/apps/dbx/bin/lib. I'm assuming that would mean $SPLUNK_HOME/etc/apps/splunk_app_db_connect/bin/lib; I don't have this path. I tried adding a new lib directory inside the bin directory and then unzipping the Cloudera file there. Then I created database_types.conf and added it to $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/. I reloaded the drivers page and it still does not show that Hive is installed. Any thoughts on what else I could try or what I am doing wrong? Thanks!
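
One detail that bites here: the drivers page reflects what DB Connect's Java task server has scanned, and that process does not always rescan on a page reload. A sketch of the layout from the first linked answer plus a full restart, with the library directory name illustrative:

```
# jar placement for DB Connect 3.x, per the linked answer
cp HiveJDBC4.jar $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/
mkdir -p $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/HiveJDBC4-lib
cp cloudera-libs/*.jar $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/HiveJDBC4-lib/

# restarting Splunk also restarts the task server, which rescans drivers
$SPLUNK_HOME/bin/splunk restart
```

The $SPLUNK_HOME/etc/apps/dbx/bin/lib path in the 2015 blog post is from DB Connect 1.x, so it would not apply to a 3.1.3 install.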

Symantec TA field extractions not working

Hello all, I am troubleshooting an issue with the Symantec TA. Fields are not being extracted correctly and I am stumped as to why. I can take the regex out of transforms and put it directly into the search bar and it works like a champ, with all fields extracted correctly, but the extraction is not being done automatically. I even went as far as to "extract new fields" and use the regex from transforms; strangely, this failed to automatically extract the fields too. Permissions were set to global and I was searching in verbose mode. In addition, the sourcetype is correct because I can search on that sourcetype and there are events. Sample source from transforms and props:

```
[field_extraction_for_agt_behavior]
# The regular expression consists of a repeated shorter regex in the below form:
#     (?[[sep_file_field]])
# All those regex are joined by ",\s*" which is a comma actually.
# The [[sep_file_field]] refers to the modular regex "sep_file_field". Refer to the Splunk documentation for detail about modular regex.
# The last two fields "File_Size" and "Device_ID" are optional.
REGEX = ^(?i)(?:[[sep_file_prefix]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),?\s*(?[[sep_file_field]])?,\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?:Begin:\s*(?[[sep_file_field]]))?,\s*(?:End:\s*(?[[sep_file_field]]))?,\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?[[sep_file_field]]),\s*(?:Domain:\s*(?[[sep_file_field]]))?,\s*(?:Action\sType:\s*(?[[sep_file_field]]))?(?:,\s*File\ssize\s\(bytes\):\s*(?[[sep_file_field]]),\s*Device\sID:\s*(?[[sep_file_field]]))?$

[symantec:ep:behavior:file]
TRANSFORMS-nullqueueheader = sep_file_header
#KV_MODE = none
pulldown_type = true
category = Network & Security
description = Symantec Endpoint Protection agent behavior events
MAX_TIMESTAMP_LOOKAHEAD = 32
SHOULD_LINEMERGE = false
REPORT-field_extraction_for_agt_behavior = field_extraction_for_agt_behavior, process_from_caller_process_name, caller_md5_from_description
FIELDALIAS-vendor_action_SEP_behavior_vendor_action = vendor_action as SEP_behavior_vendor_action
```
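
When a REPORT- extraction works pasted into the search bar but not automatically, a useful first check is what the search head actually merged for the sourcetype and the transform; a sketch using btool, with names taken from the post:

```
# merged props for the sourcetype, showing the file each line came from
$SPLUNK_HOME/bin/splunk btool props list symantec:ep:behavior:file --debug

# merged transform stanza, including the REGEX and the modular regex it references
$SPLUNK_HOME/bin/splunk btool transforms list field_extraction_for_agt_behavior --debug
```

A mismatch between the events' actual sourcetype (renamed at index time, for example) and the props stanza name is another common culprit.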

How can I extract key/value pairs into a table?

I have data like **Data: {"code": "abc", "version": "2018.6", "name": "testdata", "group": "QA", "DB": "oracle"}** in the field **Message**. How can I extract the key/value pairs into a table? I would need code, version, name, group, and DB in a table. I tried using spath, but it didn't work because the field is not exactly JSON. How can I get the data in tabular format?
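
The usual trick: rex peels the JSON object out of the "Data:" prefix, after which spath can parse it; a sketch, with field names taken from the post:

```
| rex field=Message "Data:\s*(?<payload>\{.*\})"
| spath input=payload
| table code version name group DB
```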

How to create a search template (macro) using Splunk?

Hi, I need to create a search template using Splunk, so I want to know what steps I have to follow. Must I create an app? Is there an easy way without using XML?
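
On the "must I create an app?" part: no; a search macro can live in the Search app (or any existing app) and be shared more widely through permissions, either in the UI (Settings > Advanced search > Search macros > Permissions) or in metadata. A sketch of the metadata form, with the macro name illustrative:

```
# metadata/local.meta in the app that owns the macro - a sketch
[macros/errors_by_host(2)]
export = system
access = read : [ * ], write : [ admin ]
```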

Help with Regex to remove HTML tags

Hello, could someone please help me with removing HTML tags from fields? The data is a few sentences, such as the remediation text for a Microsoft patch, but it contains links within it. This data is coming in through a lookup that apparently I can't modify. I'd like to get rid of those tags (line breaks, links, etc.) so I can just display the text in a clear format. Thank you!
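
eval's replace() accepts a regular expression, so a simple tag-stripper can be applied at search time; a sketch, assuming the lookup field is named description:

```
| eval clean_description = replace(description, "<[^>]+>", "")
```

This drops anything between angle brackets; HTML entities such as &amp; would need their own replace() calls.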

How to filter events by a different date than the one mapped in the sourcetype

We have Date1 mapped as the event time in the sourcetype for the index. So if I select last 7 days in the date filter, the data is filtered on Date1. But for my project I need to use Date2 as the date/duration filter in the dashboard. I cannot change the sourcetype just for my dashboards. Is there any way to make this change in the settings or code? Sample data:

```
Created_Date (Date1)   First Name   Last Name   Task            Execution_Date (Date2)
1/12/2017              ABCD         XYZ         Open Request    12/12/2017
6/12/2017              DDFFG        SSV BBB     Save the File   12/12/2017
```

The data has been indexed on Created_Date (Date1), whereas I want to use Execution_Date (Date2) as the date filter.
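
A search-time approach that needs no sourcetype change: search a wide enough _time window, parse Date2 into an epoch, and filter on that; a sketch, with the index name and the strptime format illustrative (the format string must match the real Execution_Date layout):

```
index=myindex earliest=-180d
| eval exec_time = strptime(Execution_Date, "%m/%d/%Y")
| where exec_time >= relative_time(now(), "-7d@d")
```

In a dashboard, a separate input can drive those bounds, so the panel filters on Date2 while _time only sets the scan window.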