Hi all,
I have an Indexer Cluster where each Indexer is accessed by users as a standalone server; in other words, there are no Search Heads.
Now I have accelerated some data into tsidx files (using the tscollect command).
My question is: are tsidx files replicated between indexers, or are they generated locally on each server by the same scheduled search?
In other words: do I schedule the search on one indexer and configure tsidx file replication, or do I schedule the same search on both indexers?
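For reference, the scheduled search is just a plain tscollect along these lines (the index, sourcetype, and namespace names here are placeholders, not my real ones):
index=myindex sourcetype=mysourcetype | tscollect namespace=my_namespace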
Thank you for your help.
Bye.
Giuseppe
↧
Acceleration with tscollect in an indexer cluster
↧
How to monitor servers using Splunk
I have been tasked with figuring out how to monitor server activity using Splunk and create alerts.
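For example, would something as simple as a file monitor in inputs.conf on each server be the right starting point, with alerts built as saved searches on top? (The path and index here are just an example.)
[monitor:///var/log]
index = os
disabled = 0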
↧
Problem indexing the entire CSV file
We use a CSV file to track our app's performance. I added the CSV to a forwarder and monitor it. The problem is that while the app is running and writing to the CSV, only the first few minutes get indexed, not the rest. If I move the whole file to another location once it is no longer being updated, everything gets indexed. Any suggestions?
inputs.conf
[monitor://../logs/performance.csv]
blacklist = -\d
index = test
sourcetype = csv
crcSalt = <SOURCE>
↧
Is there a page that clearly identifies apps that are Deployment Server compatible? (+ another question)
I ask this because I just spent a while trying to debug why installing the "Microsoft Supporting Add-on for Active Directory" would not work when I deployed it using the deployment server. I determined that it uses the REST API to encrypt the password for the LDAP account being configured. This, however, uses the current server's (the Master Node's) private key, and therefore, when the app is deployed to the other servers, they cannot perform a successful BIND because they cannot decrypt the LDAP account password. I understand the challenges of deploying secure credentials; however, this wouldn't be an issue if two things happened:
1. The app checked its running directory and notified the user when it is run for the first time from the slave-apps directory, allowing them to re-enter the credentials.
2. The web GUI worked after deployment.
This add-on's web GUI also appears broken when deployed into the slave-apps directory, but I am still troubleshooting that. If anyone has any idea where to start, that would help. I am assuming some sort of static directory reference (/opt/splunk/etc/apps/SA_ldapsearch) has been made instead of a relative one ($SPLUNK_DIR/SA_ldapsearch), but it's just a guess.
↧
Problem with SAML authentication after upgrading to Splunk 7
I have upgraded to Splunk 7 and am encountering the error "Verification of SAML assertion using the IDP's certificate provided failed. Error: Failed to verify signature with cert :S:\Splunk\etc\auth\idpCerts\idpCert.pem;". My earlier version, 6.6.3, was stable.
↧
Can I filter a table based on cluster number or subsearch dynamically?
I have a table of data that is clustered via KMeans. I am trying to filter it down to display only the other items in a particular cluster, but since the cluster number is assigned on the fly, this is proving difficult.
index=blah | stats count by something, device | fit PCA k=2 h_fields | fit KMeans k=10 PC_* | table cluster PC_* device h_fields
This gives the info I am looking for, but I only want to view the other items in a single cluster. I know the device ahead of time, but I don't know which cluster number to look for until after the table renders. Basically, I want to find the other data in the same cluster.
I've been trying something like | search [ search device="myDevice" | return 1 cluster=cluster ], but that does not seem to work.
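One idea I had is to mark the known device's cluster after the fit and then filter on it, something like this (assuming the KMeans output field is called cluster and there is a single row for myDevice):
index=blah | stats count by something, device | fit PCA k=2 h_fields | fit KMeans k=10 PC_*
| eval target=if(device="myDevice", cluster, null())
| eventstats values(target) as target_cluster
| where cluster=target_cluster
But I'm not sure that's the idiomatic way to do it.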
↧
Syslog events not matching IOS XR regex to transform
Here is the format of our data coming from the Cisco IOS XR NCS 4K platform. I don't think the regex is able to match our data. We are running Enterprise 7.0 and the Cisco Networks Add-on 2.3.4.
Thank you.
Cisco IOS XR Software, Version 6.1.12
Copyright (c) 2013-2016 by Cisco Systems, Inc.
Sample events:
Oct 2 16:04:57 65.230.192.100 222107: HRSHPAXH-0110013A RP/0/RP0:2017 Oct 2 16:04:57.084 UTC: SSHD_[68398]: %SECURITY-SSHD-6-INFO_GENERAL : Enc name is NULL: client aes128-cbc,blowfish-cbc,3des-cbc server aes128-ctr,aes192-ctr,aes256-ctr
Oct 2 16:04:55 65.230.40.4 24078: FLPKNYFP-0330608A LC/0/LC1:Oct 2 12:04:55.531 : fia_driver[118]: %PLATFORM-CIH-5-ASIC_ERROR_THRESHOLD : fia[18]: A generic-err error has occurred causing performance loss transient. CMIC.CMIC_CMC0_IRQ_STAT4.FCT.Interrupt_Register.UnrchDestEvent Threshold has been exceeded
Oct 2 16:04:20 65.230.165.132 47232: GLBONJGB-0114503A RP/0/RP0:2017 Oct 2 12:04:20.587 EDT: smartlicserver[397]: %LIBRARY-REPLICATOR-3-IDT_FAIL : Failed to complete IDT after several retries: rc 0x0 (Success)
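For what it's worth, a transforms.conf-style pattern along these lines does line up with the samples above; the stanza and field names are my own placeholders, not necessarily what the add-on ships:
[cisco_ios_xr_msg]
REGEX = %(?<facility>[A-Z0-9_]+)-(?:(?<process>[A-Z0-9_]+)-)?(?<severity_id>\d)-(?<mnemonic>[A-Z0-9_]+)\s*:\s*(?<message_text>.+)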
↧
Return information when there are no expected results.
This search checks to make sure a certain process ended on time. I expect to have results for the 6 cases in the where clause below. If a client's process did not end on time, it is not returned by this search.
I would like to reverse the logic, returning information when a client misses its expected end time.
For example: if client6's process ends after 01:15:00, I would want to see the ClientID and the expected time range.
source=*D:\\THY\\helper* source=*IH_Daily\\Debug* End earliest=-30h@h
| eval time=strftime(round(strptime(file_Time, "%I:%M:%S %P")), "%H:%M:%S")
| rex field=source "importhelpers\\\+(?<ClientID>[^\\\]+)"
| where ((like(source,"%"."client1"."%")) AND time>"05:00:00" AND time<"05:15:00")
OR ((like(source,"%"."client2"."%")) AND time>"09:30:00" AND time<"09:45:00")
OR ((like(source,"%"."client3"."%")) AND time>"07:30:00" AND time<"07:42:00")
OR ((like(source,"%"."client4"."%")) AND time>"07:00:00" AND time<"07:25:00")
OR ((like(source,"%"."client5"."%")) AND time>"05:00:00" AND time<"05:30:00")
OR ((like(source,"%"."client6"."%")) AND time>"00:30:00" AND time<"01:15:00")
| table ClientID, timerange, source
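In other words, I'm picturing something like the following, where the expected window is attached per client and the original condition is negated (only client1 and client6 shown; the others follow the same pattern):
| eval window=case(like(source,"%client1%"), "05:00:00,05:15:00", like(source,"%client6%"), "00:30:00,01:15:00")
| eval win_start=mvindex(split(window,","),0), win_end=mvindex(split(window,","),1)
| where isnotnull(window) AND NOT (time>win_start AND time<win_end)
| eval timerange=win_start." - ".win_end
| table ClientID, timerange, source
Is that the right direction?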
↧
How to create a table of eval fields along with stats
I have a query where I eval 3 fields by subtracting different timestamps:
| eval Field1 = TS1 - TS2
| eval Field2 = TS3 - TS4
| eval Field3 = TS5 - TS6
| eval Date = strftime(_time, "%m-%d-%Y")
Next, I use the stats command to calculate count, min, max, and average for these 3 evaluated fields by date.
If I use stats count(Field1), count(Field2), count(Field3) by Date, I end up with all the values in the same row.
How can I get the stats for each field on a different line?
i.e., my output should look like:
Date,Fields,Min,Max,Avg
10/2/2017, Field1,5,10,8
10/2/2017, Field2,15,110,30
10/2/2017, Field3,11,102,58
10/3/2017, Field1,15,110,28
10/3/2017, Field2,25,210,100
10/3/2017, Field3,12,110,60
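From reading the docs, it looks like untable might do this, something like the following (untested) appended after the evals:
| table Date Field1 Field2 Field3
| untable Date Fields value
| stats count min(value) as Min max(value) as Max avg(value) as Avg by Date, Fields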
↧
Adding xauthuser to datamodel
I tried to add the xauthuser field to the ftnt_fos data model, and after that I get no results any more. Did I break it?
The xauthuser field carries the username that connected to the firewall over an IPsec tunnel; it's a critical field for the VPN dashboard.
↧
DBConnect 3.x not working with rising columns the way 2.x did...
I have the following SQL statement, which works with other database inputs that were created with DB Connect 2.x, but 3.x fails with an error. This is the first input I've tried to create using the new 3.x version, and I'm not having luck with any new ones. I know TIMESTAMP isn't the ideal rising column, but it is all I have to work with. I've tried extended_timestamp as well as plain timestamp, and I get an error and cannot get past it. In 2.x I created over 20 inputs without any problem whatsoever; with 3.x I can't create any. Any help would be greatly appreciated.
SELECT * FROM "SYS"."DBA_AUDIT_TRAIL"
WHERE TIMESTAMP > ?
ORDER BY TIMESTAMP ASC
java.sql.SQLDataException: ORA-01861: literal does not match format string
No results found.
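Would wrapping the checkpoint in an explicit conversion be the right workaround, something like this? (The format mask is a guess at how DB Connect stores the checkpoint string.)
SELECT * FROM "SYS"."DBA_AUDIT_TRAIL"
WHERE "TIMESTAMP" > TO_DATE(?, 'YYYY-MM-DD HH24:MI:SS')
ORDER BY "TIMESTAMP" ASC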
↧
Splunk 7 and DBConnect 3.1.1 not working new install
Brand new CentOS 7 system with Splunk 7, DB Connect 3.1.1, and Java JDK 1.8.0_144. Splunk starts fine and DB Connect installs fine, but when I go to access the app, I just get the message: Unable to initialize modular input "server" defined inside the app "splunk_app_db_connect": Introspecting scheme=server: script running failed (exited with code 127).
I can't even access the controls to configure the DB Connect app; I just get the little spinning spokes with the Splunk controls at the top of the window.
I've tried wiping Splunk and reinstalling to see whether it was an issue with the Java install, but I get the same results. Has anyone else run into this with the new version of Splunk?
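Since exit code 127 usually means the shell could not find a command, is the first thing to check whether the JVM is visible to the user running splunkd? For example (the "splunk" user name is an assumption):
sudo -u splunk sh -c 'java -version; echo "JAVA_HOME=$JAVA_HOME"'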
↧
"File Integrity checks found 1 files that did not match the system-provided manifest. See splunkd.log for details."
I have no idea where this message is coming from. I see it in the web UI, but when I restart Splunk it tells me all is OK. Here is the output from a restart:
[dev]root@ip-10-94-18-55:/opt/splunk/etc/users:#/opt/splunk/bin/splunk restart
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
............. [ OK ]
Stopping splunk helpers...
[ OK ]
Done.
Splunk> Needle. Haystack. Found.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _telemetry _thefishbucket aws_anomaly_detection aws_topology_daily_snapshot aws_topology_history aws_topology_monthly_snapshot aws_topology_playback aws_vpc_flow_logs history main summary
Done
Bypassing local license checks since this instance is configured with a remote license master.
Checking filesystem compatibility... Done
Checking conf files for problems...
Invalid key in stanza [ui] in /opt/splunk/etc/apps/SA-ge_splunk_health/local/app.conf, line 12: version (value: 1.0).
Invalid key in stanza [calendar_heatmap] in /opt/splunk/etc/apps/calendar_heatmap_app/default/visualizations.conf, line 6: supports_drilldown (value: True).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-6.5.2-67571ef4b87d-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done
[ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available................. Done
If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com
The Splunk web interface is at https://ip-10-94-18-55:8000
I ran the REST API call to https://10.94.18.55:8089/services/server/status/installed-file-integrity and it tells me that the file /opt/splunk/etc/users/users.ini has been modified. What am I missing here?
Any help is much appreciated, as this is very annoying.
↧
How do you use a custom field as a token for a drilldown?
I have a dashboard that contains a line chart. The query for this is something like:
search ....... | rex field=_raw " (ERROR|E|SEVERE) (?<method>[a-zA-Z0-9\. \-]*)[:\. ]" | timechart count by method limit=10 usenull=f useother=f
The custom field is "method". I would like a drilldown configured so that when the user clicks either a data point on the chart or the label name, it takes the method value and adds it to the query of the "Search" dashboard. So something as simple as:
search $method$
I have tried using $method$, but it literally adds "$method$" to the query. The reason I don't want to use the "Auto" feature of the drilldown is that I don't want all the search arguments from the original query added to the drilldown (it's rather long and complicated). I just want the drilldown to have a simple query.
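From what I can tell, $click.name2$ holds the clicked series name for charts, so I was expecting something like this in the panel's Simple XML to work (the |u filter just URL-encodes the value):
<drilldown>
  <link target="_blank">search?q=search $click.name2|u$</link>
</drilldown>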
↧
How to move a diag to the desktop folder?
I SSHed into our server and created a diag, but how can I move it to my desktop so I can email it to someone else? What are the necessary steps to get it off the server from the CLI? When I do this, it has to keep the diag name as well as the timestamp. Any help would be great, thank you!
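For example, from my laptop, would something like this be all that's needed? (The diag is created in $SPLUNK_HOME by default; the file name below is a placeholder.)
scp user@splunkserver:/opt/splunk/diag-<servername>-<date>.tar.gz ~/Desktop/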
↧
Comparing results from three separate events
Forgive my ignorance if this has been answered elsewhere; I did my best to search for an answer but have not found one.
I am trying to compare the results of three searches for three separate events in specific time periods. Here are the strings I'm searching for:
1. user=BeerNFries OR ComputerName=xyz.local OR srcip="123.123.123.123"
2. user=Id10T OR ComputerName=123.local OR srcip="111.111.111.111"
3. user=PhishMe OR ComputerName=456.local OR srcip="222.222.222.222"
Where:
Event 1 occurred 9/17/2017 between 11:45 - 11:48
Event 2 occurred 8/19/2017 between 14:15 - 14:20
Event 3 occurred 9/12/2017 between 15:21 - 15:39
How would I be able to compare what happened during these times to look for similarities?
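My first thought was to bolt the three windows together with append and tag each one, something like this (the final grouping field is a placeholder for whatever turns out to be common across the events):
(user=BeerNFries OR ComputerName=xyz.local OR srcip="123.123.123.123") earliest="09/17/2017:11:45:00" latest="09/17/2017:11:48:00"
| eval event=1
| append [ search (user=Id10T OR ComputerName=123.local OR srcip="111.111.111.111") earliest="08/19/2017:14:15:00" latest="08/19/2017:14:20:00" | eval event=2 ]
| append [ search (user=PhishMe OR ComputerName=456.local OR srcip="222.222.222.222") earliest="09/12/2017:15:21:00" latest="09/12/2017:15:39:00" | eval event=3 ]
| stats values(event) as events dc(event) as event_count by some_common_field
Is that a reasonable starting point?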
↧
Splunk DB Connect Inputs not working. What do I specify for source and sourcetype?
I connected a database through the configuration. When I try to set an input's source and sourcetype, I do not get any results. I even tried creating my own sourcetype.
Here is what the documentation specified:
Source: Optional. The input name will be used if you leave it blank.
Source type: Enter a sourcetype field value for Splunk Enterprise to assign to queried data as it is indexed. Click the field and enter a value, or choose an existing value from the menu that appears.
↧
How to turn off splunkd during certain hours
I have a customer who wants the Splunk forwarder turned off during certain critical processing times.
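As far as I know there is no built-in scheduler for this, so I'm assuming an OS-level job is the usual approach, e.g. a crontab on Linux (paths assume a Universal Forwarder installed in /opt/splunkforwarder):
# stop the forwarder at 02:00 and start it again at 04:00
0 2 * * * /opt/splunkforwarder/bin/splunk stop
0 4 * * * /opt/splunkforwarder/bin/splunk start
Is there a better way?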
↧
Regular expressions
This is the event:
02OCT2017_16:46:47.212 130880:140149567481600 INFO event.py:177 root event = {"hopTrace": {"hops": [{"machine": {"nodeId": 569}, "application": {"processId": 19295, "processName": "udrqssvc.tsk", "appName": "DRQS"}, "authenticatedUser": {"uuid": 10095155}}]}, "event": {"eventType": "DRQS UPDATED", "drqsNumber": **107516809(FIELD5)**, "newHeader": {"status": "Q", "function": "N539", "billToId": 5028, "yellowKey": "", "billToType": "HIER", "lastUpdateTime": "2017-10-02T20:46:47.000+00:00", "type": "IW", "creatorUuid": 1603009, "slaCategory": -1, "summary": "MM/DD n539 hardware failure IBM PMR: 24465.L6Q.000", "queue": "", "timeClosed": "1899-12-31T05:00:00.000+00:00", "ouTypeCode": 0, "routeToGroup": **270(FIELD4)**, "ouTypeDescription": "", "tsCustomerNumber": 0, "closedUuid": 0, "lastUpdateUuid": 10095155, "createTime": "2017-09-29T12:00:48.000+00:00", "ownerUuid": 2984495}, "logNotes": [{"logNoteId": "1049598095", "timestamp": "2017-10-02T20:46:47.141+00:00", "authorUuid": 10095155, "logText": [{"text": "Note added from offline, remote machine 208\n", "textType": "DEEMPHASIZED"}, {"text": "{FIFW PRQS **160269881(FIELD6)**} submitted to take **N539(FIELD1)** (N539) offline on **Tue Oct 03 2017 19:00:00 GMT-0400 (EDT)(FIELD2)** for **HARDWARE REPAIRS(FIELD3)**\n", "textType": "NORMAL"}], "isAutomated": true}]}, "metadata": {"publishId": "121785005", "publishTime": "2017-10-02T16:46:47.189-04:00"}}
From the above event, I want to create a statistics table with Field1 through Field6.
I have highlighted the needed fields in bold. I get some of them, but not all 6.
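For clarity, this chain of rex commands is the kind of thing I'm after; the patterns are written against the sample above and the field names are my own:
| rex "\"drqsNumber\": (?<field5>\d+)"
| rex "\"routeToGroup\": (?<field4>\d+)"
| rex "PRQS (?<field6>\d+)"
| rex "submitted to take (?<field1>\S+)\s"
| rex "offline on (?<field2>[^(]+\([A-Z]{2,4}\))"
| rex "\) for (?<field3>[A-Z][A-Z ]+)"
| table field1 field2 field3 field4 field5 field6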
↧
Cylance Protect data integration with Enterprise Security ES
Hi,
I need to use the Cylance Protect syslog data in Enterprise Security.
Has anyone used this data in an ES context? Which data models does the data map to, and are any additional field extractions required?
Just an FYI: I'm receiving the following Cylance Protect sourcetypes. The Cylance TA and App are able to parse the data and display information, respectively.
syslog_audit_log
syslog_device
syslog_script_control
Any pointers/directions are appreciated!
Best Regards,
Shreedeep Mitra.
↧