Hi,
We have a Glass Table that I'd like to move to another Splunk instance. Unlike dashboards, I do not see any "edit source" option for Glass Tables, and the Edit drop-down only allows cloning it locally.
Is there any way to find the source for the glass table directly on the server? And can it be deployed on another instance?
OS - CentOS 6.9
Splunk Version - 6.6.2
ES Version - 4.5.2
Thanks,
~ Abhi
↧
Is it possible to copy glass table to another Splunk instance?
↧
Detecting endpoint change in a specific event with an alert
Looking for assistance with creating an email alert for when an endpoint changes in the logs.
We want to avoid multiple emails going out every 15 minutes and only send the email alert when the switch actually happens.
The alert would run every 15 minutes. I'm thinking the best way to do this is to come up with a search that only returns the specific event in question: if we find two different endpoints (field values) within the 15-minute window, we know a switch has occurred.
From here I'm looking for assistance on how to write the query to detect which endpoint we started with and which one we switched to. I'm thinking we can do something like the following to get timestamps for the endpointA and endpointB events, then see which one is greater, and then use a conditional to determine the source and destination endpoints.
... | eval time_a = case(expression to determine if it's endpointA, _time) | eval time_b = case(expression to determine if it's endpointB, _time)
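Something along these lines might also work as a starting point (a rough sketch only; it assumes the field is literally called endpoint and that the alert searches the last 15 minutes):

    index=your_index sourcetype=your_sourcetype earliest=-15m
    | stats dc(endpoint) AS endpoint_count, earliest(endpoint) AS switched_from, latest(endpoint) AS switched_to
    | where endpoint_count > 1

The where clause means the search only returns a row (and therefore only triggers the alert) in the window where the switch happened.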
Any help would be greatly appreciated.
↧
Tour Creation App for Splunk -- How to work with a default view that has many variables?
For [our app][1], the default view isn't simply:
> tc_view_main
It's actually more like this:
> tc_view_main?form.start=-30d&form.span=1d&form.indicator_type=*&form.rating=*&form.confidence=*&form.state=New&form.indicator=*&form.victim=*&earliest=0&latest=
By default, the Splunk Tour App appends to tc_view_main as such:
> tc_view_main?tour=welcome_to_threatconnect_for_splunk
But then our app appends the rest of the query string above for the view of that landing dashboard. I've tried adding ?tour=welcome_to_threatconnect_for_splunk to the end, and used an **&** instead of a **?**, but still no dice. I'll try via the conf file, but I was hoping to use the app for it since it takes the guesswork out of the DOM elements. My guess is that our app loads the default view, then a page refresh appends its own parameters, and we lose the tour capability.
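For clarity, the combined URL being attempted looks like this (the tour parameter appended with an & since the view's query string already begins with a ?):

    tc_view_main?form.start=-30d&form.span=1d&form.indicator_type=*&form.rating=*&form.confidence=*&form.state=New&form.indicator=*&form.victim=*&earliest=0&latest=&tour=welcome_to_threatconnect_for_splunk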
[1]: https://splunkbase.splunk.com/app/3075/
↧
Splunk not starting after upgrade (6.6.1 > 7.0)
Hi, I just updated from 6.6.1 to the latest version (7.0) and now I'm stuck with Splunk not starting the web interface:
# ./splunk restart
Stopping splunkd...
Shutting down. Please wait, as this may take a few minutes.
..................................... [ OK ]
Stopping splunk helpers...
[ OK ]
Done.
Splunk> Map. Reduce. Recycle.
Checking prerequisites...
Checking http port [10.244.161.7:8000]: open
Checking mgmt port [10.244.161.7:8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [10.244.161.7:8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _internal _introspection _telemetry _thefishbucket checkfwd eqalis_network_sample firewall history itau main mwg_audit os ossec perfmon snort_cardholder snort_servidores sos sos_summary_daily summary summary_forwarders summary_hosts summary_indexers summary_pools summary_sources summary_sourcetypes syslog tp_win_sec tp_win_servers windows wineventlog
Done
Bypassing local license checks since this instance is configured with a remote license master.
Checking filesystem compatibility... Done
Checking conf files for problems...
Improper stanza [dhcpd_server_dhcprelease] in /opt/splunk/etc/apps/unix/default/tags.conf, line 30
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-7.0.0-c8a78efdd40f-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done
[ OK ]
Waiting for web server at https://10.244.161.7:8000 to be available............................................................................................................................................................................................................................................................................................................
WARNING: web interface does not seem to be available!
What could be causing this?
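A couple of places worth checking, in case it helps (paths assume the default /opt/splunk install shown in the output above):

    # the startup output itself suggests this for the indexes/inputs inconsistency
    /opt/splunk/bin/splunk btool check --debug
    # splunkweb-specific errors
    tail -100 /opt/splunk/var/log/splunk/web_service.log
    # general startup errors
    tail -100 /opt/splunk/var/log/splunk/splunkd.log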
↧
Archive data to S3, understanding the options.
I have an indexer cluster with a minimum replication factor of 2 to prevent data loss. I would like to set up Splunk to archive frozen data, after the retention period has passed, to an S3 bucket (this will eventually be an S3 Glacier bucket for minimal cost and reliable storage). This data DOES NOT need to be searchable; it just needs to be available for thawing in the future.
It seems that Splunk provides a few options, each with advantages and disadvantages, so I am trying to understand what would be best in such a scenario.
#### Using cold to frozen script
This seems to fit most of the criteria, but it requires a separate disk area to move the frozen data to first. There are also some questions on this method (see the sketch after this list):
* What is the API of such a script? I cannot find any information. By that I mean: what arguments, if any, does Splunk supply to the script?
* Instead of having the coldToFrozen script move the data and then a separate script move it to S3, as per the recommendation, couldn't one set automatic archiving (coldToFrozenDir) in Splunk and then have a second script move the data from there to S3, thus saving one script?
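If it helps, this is the rough shape I had in mind for the script half, under the assumption that Splunk invokes the configured coldToFrozenScript with the path of the bucket it is about to freeze as its first argument (the bundled coldToFrozenExample.py is worth checking to confirm the exact contract for your version). The destination bucket name and the default <index>/colddb/<bucket> layout are assumptions:

    #!/bin/bash
    # Sketch of a coldToFrozen script that ships the bucket to S3.
    set -euo pipefail

    BUCKET_DIR="$1"                       # path Splunk passes in, e.g. .../myindex/colddb/db_<newest>_<oldest>_<id>
    BUCKET_NAME="$(basename "$BUCKET_DIR")"
    INDEX_NAME="$(basename "$(dirname "$(dirname "$BUCKET_DIR")")")"   # assumes the default <index>/colddb/<bucket> layout
    DEST="s3://my-frozen-archive/${INDEX_NAME}/${BUCKET_NAME}/"        # hypothetical archive bucket

    # Copy the bucket to S3. A non-zero exit makes Splunk keep the local bucket
    # and retry later; a zero exit tells Splunk it is safe to delete it.
    aws s3 cp --recursive "$BUCKET_DIR" "$DEST"

and the corresponding indexes.conf setting would be along the lines of:

    [my_index]
    coldToFrozenScript = "/opt/splunk/bin/coldToFrozenS3.sh"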
#### Hadoop Data Roll
This one seems to be a bit of an oddball. The information on how it works is spread everywhere, and one might think a Hadoop cluster is required, but some information seems to suggest you can just have a Hadoop client on the Splunk side to write to S3. Is this correct? Also some other things:
* This is definitely more complicated to set up. Is there a definitive step-by-step guide on how to go about it, with examples?
* It is a bit unclear how this works. Does it archive warm/cold buckets too? Does it archive frozen data at all?
* I want Splunk to search warm/cold data from the local disks, not from S3, but it is unclear whether that is the case here.
So I am a bit confused about the best way to go here. It feels simpler to set up the coldToFrozen script (if I can figure out the API of the script call), but I am willing to get my hands dirty with the Hadoop Data Roll process if that means the data in S3 is not only archived but also searchable, provided Splunk still searches hot/warm/cold buckets from the local disks and only frozen data from S3 (due to the obvious performance differences).
It is a bit of a long post, but any comments and suggestions to help clarify the issue are more than welcome.
Source:
https://answers.splunk.com/answers/578596/archive-splunk-buckets-to-aws-s3.html
https://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/HowHadoopDataRollworks
https://docs.splunk.com/Documentation/Splunk/7.0.0/Indexer/Automatearchiving
↧
↧
Best way to add multiple (30+) panels to a Splunk dashboard
What is the best way to add multiple panels to a Splunk dashboard?
I currently have a dashboard where I want to add 30+ panels which are just very simple timecharts for the last 24 hours.
I have all the searches and titles I want, like this:

    search          title
    index=*...      title1
    ...
    index=*...      title30
The best ways I can think of are to just add them individually:
1 - Edit dashboard - Add Panel (copy-paste the search and title), Add Panel ... repeat.
2 - Use XML - create one panel, copy and paste this panel X times, and then edit each copy accordingly (roughly the snippet below).
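The per-panel Simple XML I'd be duplicating looks roughly like this (the search and title are placeholders):

    <row>
      <panel>
        <title>title1</title>
        <chart>
          <search>
            <query>index=* ... | timechart count</query>
            <earliest>-24h@h</earliest>
            <latest>now</latest>
          </search>
          <option name="charting.chart">line</option>
        </chart>
      </panel>
    </row>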
Is there a better way than this very repetitive task?
↧
Timechart function and graphing specific field?
I would like to capture the value of used_memory_peak_human => __"26.28M"__ as it increases or decreases, from all servers, in a timechart or bar graph.
I have servers from app0-app7.
__639 <14>1 2017-09-28T20:39:01.000+00:00 app0-test.labs.local test_ecp/traffic_worker 10837 - - sec=631141.354 sev=INFO pid=10837 tid=21127380 rid=0 GraphsRedisClient redis shard [3] info: {"tcp_port"=>"6203", "uptime_in_days"=>"9", "config_file"=>"/var/illumio_pce_data/runtime/config/redis-cache-3.conf", "connected_clients"=>"25", "used_memory_human"=>"23.26M", "used_memory_peak_human"=>"26.28M", "mem_fragmentation_ratio"=>"1.35", "expired_keys"=>"81780", "evicted_keys"=>"0", "keyspace_hits"=>"5955043", "keyspace_misses"=>"164053", "used_cpu_sys"=>"514.45", "used_cpu_user"=>"163.96", "db0"=>"keys=452,expires=413,avg_ttl=76297901"}__
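A rough starting point might be something like this (the index is a placeholder; the rex just grabs the number before the trailing M so it can be charted, and host is assumed to distinguish app0 through app7):

    index=your_index "used_memory_peak_human"
    | rex "\"used_memory_peak_human\"=>\"(?<used_memory_peak_MB>[\d.]+)M\""
    | timechart span=5m max(used_memory_peak_MB) AS used_memory_peak_MB by host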
Thank you.
Peter
↧
Using the transforms.conf file to only forward events that match a regex.
I've got a log file that gets two different event formats depending on whether debugging is turned on. When debugging is turned on, I don't want the debug events forwarded, but I do want the normal events forwarded as usual.
I have a regular expression that will only include my normal events that looks like this: `[0-9]*:.*[%].* `
I know that I can create a transforms.conf file in `$SPLUNK_HOME/etc/apps/appName/local` to filter events.
In `inputs.conf` I have the following:
    [monitor:///var/log/boot.log]
    disabled = false
    followTail = 0
    index = zod-os
    sourcetype = linux_bootlog
I think if I add the following to `transforms.conf` it will do what I want:
    [linux_bootlog]
    REGEX = [0-9]*:.*[%].*
What I'm not 100% sure of is whether I need to create a `props.conf` file to point to the transform, like I've seen in other answers. I don't want to extract any additional fields beyond what Splunk already appears to be doing automatically. Also, the debug events are multiline, but since they don't match the regex I think they will be dropped automatically.
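For reference, the pattern usually shown for "keep only what matches, drop the rest" pairs a props.conf stanza with two transforms (the same shape as the filtering question further down this digest). A sketch, assuming the filtering happens where parsing happens, i.e. on an indexer or heavy forwarder rather than a universal forwarder:

    # props.conf -- ties the transforms to the sourcetype; order matters
    [linux_bootlog]
    TRANSFORMS-filter = bootlog_setnull, bootlog_keep

    # transforms.conf -- send everything to the nullQueue, then route matches back
    [bootlog_setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [bootlog_keep]
    REGEX = [0-9]*:.*[%].*
    DEST_KEY = queue
    FORMAT = indexQueue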
Does all of that sound like it will work?
↧
Why aren't my logs being forwarded for indexing by my forwarders?
**Summary**
Not all logs are being forwarded for indexing by my splunkforwarders.
**Situation**
I have 4 instances that run 3 processes I am interested in.
Each process outputs logs that I am forwarding to Splunk via a splunkforwarder on the instance.
These logs are rotated by logrotate.d.
On some instances all logs are being forwarded, on some instances only some logs are being forwarded.
**Problems**
I believe the relevant error from the logs is this one (others below):
splunkd.log:09-22-2017 01:30:04.522 +0000 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/home/ubuntu/logs/json-bowman-1-bowman-worker_search-1.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
**Possible Solutions?**
- Increase the initCrcLen, or add a crcSalt? (sketch below)
- What else should I try?
- Do the other errors in the log matter (ERROR JsonLineBreaker or ERROR TcpOutputProc)?
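For reference, these are the two settings the error message itself points at, shown added to the existing monitor stanza (a sketch; values are illustrative):

    [monitor:///home/ubuntu/logs/json-bowman-1*.log]
    disabled = 0
    sourcetype = boeinglogjson
    index = prod-boeing
    # either distinguish rotated files by their full path...
    crcSalt = <SOURCE>
    # ...or hash more of the file head before deciding it has been seen (default is 256)
    # initCrcLen = 1024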
**Reference**
**Splunk Forwarder Config**
Env=prodb|Role=bowman|root@bowman-1:/opt/splunkforwarder/etc/system/local# cat inputs.conf
[default]
host = bowman-1
[monitor:///home/ubuntu/logs/json-bowman-1*.log]
disabled = 0
sourcetype = boeinglogjson
index = prod-boeing
Env=prodb|Role=bowman|root@bowman-1:/opt/splunkforwarder/etc/system/local# cat outputs.conf
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = splunk.myotherserver.com:9997
[tcpout-server://splunk.myotherserver.com:9997]
Env=prodb|Role=bowman|root@bowman-1:/opt/splunkforwarder/etc/system/local# cat props.conf
TRUNCATE = 2000000
[boeinglogjson]
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = info.created
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3Q
category = Custom
disabled = false
**Other errors from Splunk Logs**
Env=prodb|Role=bowman|root@bowman-1:/opt/splunkforwarder/var/log/splunk# grep ERROR *.log
splunkd.log:09-21-2017 23:44:53.585 +0000 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.
splunkd.log:09-22-2017 01:30:04.522 +0000 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/home/ubuntu/logs/json-bowman-1-bowman-worker_booking-1.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
splunkd.log:09-22-2017 01:30:04.522 +0000 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/home/ubuntu/logs/json-bowman-1-bowman-worker_search-1.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
splunkd.log:09-26-2017 08:51:46.621 +0000 ERROR JsonLineBreaker - JSON StreamId:11681658046189288813 had parsing error:String value too long - data_source="/home/ubuntu/logs/json-bowman-1-bowman-worker_default-1.log", data_host="bowman-1", data_sourcetype="boeinglogjson"
**Example logrotate.d conf..**
//
{
size 250M
missingok
rotate 3
compress
delaycompress
notifempty
create 664 ubuntu ubuntu
su ubuntu ubuntu
sharedscripts
postrotate
service bowman-worker_booking-1 restart
endscript
}
↧
How can I run a search that will use data from buckets from a specific time interval?
Given a time interval provided by the user, I would like to output those buckets that contain more elements than the average of the 50 non-empty buckets before them.
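Something like this is the shape I had in mind (a sketch; it assumes "elements" means event count, that the buckets are hourly time bins, and that the user's interval comes in via the time range picker):

    index=your_index
    | bin _time span=1h
    | stats count by _time
    | where count > 0
    | streamstats current=f window=50 avg(count) AS avg_prev_50
    | where count > avg_prev_50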
Is there an easy way of doing this?
↧
How to convert distinguishedName to canonical name using Regex?
Hi,
I have distinguishedName values from an LDAP query. How can I convert them to canonical names using regex?
For example:
CN=test,OU=test service,OU=Special Accounts,DC=test,DC=com
CN=test1,OU=users,DC=test,DC=com
Canonical names:
test.com/Special Accounts/test service/test
test.com/users/test1
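A sketch that handles this particular shape (it assumes exactly two DC components and at most two OUs; deeper DNs would need more capture groups, or a scripted lookup instead):

    | rex field=distinguishedName "^CN=(?<cn>[^,]+)(?:,OU=(?<ou1>[^,]+))?(?:,OU=(?<ou2>[^,]+))?,DC=(?<dc1>[^,]+),DC=(?<dc2>[^,]+)$"
    | eval canonicalName=dc1.".".dc2."/".coalesce(ou2."/", "").coalesce(ou1."/", "").cn

For the two examples above this produces test.com/Special Accounts/test service/test and test.com/users/test1.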
↧
Error messages when I try to connect the universal forwarder
Hi, I'm brand new to Splunk and have been given an existing Splunk environment to manage. I need to get a universal forwarder installed on a couple of servers; this environment already has several universal forwarders in place. I installed the forwarders and selected the Windows Application, Security, and System logs. The deployment is set up to listen on port 9997.
In splunkd.log on the forwarder server, I see these lines repeated and I'm not sure what they mean. I'd appreciate any help, and keep in mind I'm still very new to this. Thanks!
09-28-2017 18:45:47.694 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-28-2017 18:45:59.695 -0400 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-28-2017 18:46:02.913 -0400 WARN HttpPubSubConnection - HTTP client error in http pubsub Connection closed by peer uri=https://team-splunk01:9997/services/broker/connect/A917C286-95F0-4285-9F0C-8FDE5F9C5596/TEAM-SV-FILE01/c8a78efdd40f/windows-x64/8089/7.0.0/A917C286-95F0-4285-9F0C-8FDE5F9C5596/universal_forwarder/TEAM-SV-FILE01
09-28-2017 18:46:02.913 -0400 WARN HttpPubSubConnection - Unable to parse message from PubSubSvr:
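If it helps for comparison, these are the forwarder-side files I'd diff against one of the already-working forwarders (a sketch; paths are relative to the forwarder's install directory):

    # where the forwarder is told to phone home for deployment (the handshake messages above)
    etc/system/local/deploymentclient.conf
    # where the forwarder is told to send data (the indexers' receiving port, 9997 here)
    etc/system/local/outputs.conf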
↧
Bluecoat × universal forwarder
http://docs.splunk.com/Documentation/AddOns/released/BlueCoatProxySG/Releasenotes
I am using the Splunk Add-on for Blue Coat ProxySG.
I can successfully import the data using the GUI.
However, importing via a universal forwarder does not work.
Does anyone know anything about this?
I suspect the commented-out part of the configuration is what isn't working.
↧
What is the best approach to implementing the KV store to replace lookups?
Hi!
I have two search heads in a cluster and multiple lookups in Splunk, but I've recently started facing issues with replication of knowledge bundles. After investigating, I've observed that a few of the lookups are not getting replicated between the search heads. I've read that it's better to use the KV store than lookup files, but I don't have a clear idea of how, and when, using the KV store is best suited.
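For concreteness, the minimal KV-store-backed lookup definition looks roughly like this (the collection and field names here are made up):

    # collections.conf -- defines the KV store collection on the search heads
    [my_assets_collection]
    field.ip = string
    field.owner = string

    # transforms.conf -- exposes the collection as a lookup usable in searches
    [my_assets_lookup]
    external_type = kvstore
    collection = my_assets_collection
    fields_list = _key, ip, owner

Once defined, it can be used like any other lookup, e.g. `| inputlookup my_assets_lookup` or `| lookup my_assets_lookup ip OUTPUT owner`, and the collection data is replicated by the KV store itself rather than via the knowledge bundle.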
Would really appreciate your suggestions and help.
Thanks!
↧
About daylight savings time
I am thinking about building an environment in a country that observes daylight saving time. The server is configured to switch between summer time and winter time automatically.
Will the displayed time also change automatically with respect to the time zone set for each Splunk user?
If it does not change automatically, what should be configured?
↧
Not extracting all fullgc events
I'm not able to pull all of the Full GC events. Is there any tweak required in the regex?
| makeresults
| eval _raw="28820.220: [Full GC (System.gc()) 8832K->8624K(37888K), 0.0261704 secs]
29372.500: [GC (Allocation Failure) 23984K->8816K(37888K), 0.0013546 secs]
29932.500: [GC (Allocation Failure) 24176K->8808K(37888K), 0.0017082 secs]
30492.500: [GC (Allocation Failure) 24168K->8960K(37888K), 0.0017122 secs]
31047.500: [GC (Allocation Failure) 24320K->8944K(37888K), 0.0020634 secs]
31602.500: [GC (Allocation Failure) 24304K->8992K(37888K), 0.0017542 secs]
32157.500: [GC (Allocation Failure) 24352K->8968K(37888K), 0.0018971 secs]
32420.247: [GC (System.gc()) 16160K->8944K(37888K), 0.0012816 secs]
8186.000: [GC (Allocation Failure) 91332K->36212K(246272K), 0.0081127 secs]
8347.676: [GC (System.gc()) 42225K->35996K(246272K), 0.0040077 secs]
8347.678: [Full GC (System.gc()) 35996K->21313K(246272K), 0.1147433 secs]
8929.342: [GC (Allocation Failure) 76609K->24356K(246784K), 0.0047687 secs]
8952.577: [GC (Allocation Failure) 80164K->29098K(246272K), 0.0053928 secs]
9921.694: [GC (Allocation Failure) 84906K->27626K(247808K), 0.0053474 secs]
11567.840: [GC (Allocation Failure) 85994K->27730K(247808K), 0.0030062 secs]
11947.795: [GC (System.gc()) 41757K->27562K(248320K), 0.0035917 secs]
11947.797: [Full GC (System.gc()) 27562K->22923K(248320K), 0.1237187 secs]
13602.721: [GC (Allocation Failure) 81803K->23467K(247808K), 0.0029760 secs]
15283.208: [GC (Allocation Failure) 82347K->23363K(249344K), 0.0035369 secs]
15547.924: [GC (System.gc()) 33663K->23283K(248832K), 0.0142619 secs]
15547.937: [Full GC (System.gc()) 23283K->22914K(248832K), 0.0788277 secs]
17283.683: [GC (Allocation Failure) 83842K->23298K(250368K), 0.0077597 secs]
19069.372: [GC (Allocation Failure) 86274K->23354K(249856K), 0.0027577 secs]
| rex max_match=0 field=_raw "^(?<gc_time>[^:]+):\s+\[Full GC\s\(([^\)]+)\)\)\s+(?<heap_before>\d+)K-\>(?<heap_after>\d+)K\((?<heap_total>\d+)K\),\s+(?<duration>[^\s]+)\ssecs\]"
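One tweak worth trying (a sketch; the capture group names are placeholders): add the multiline flag so that ^ can match at the start of every line in _raw rather than only at the very beginning, which is why only the first Full GC event comes back from the sample above:

    | rex max_match=0 field=_raw "(?m)^(?<gc_time>[^:]+):\s+\[Full GC\s\(([^\)]+)\)\)\s+(?<heap_before>\d+)K->(?<heap_after>\d+)K\((?<heap_total>\d+)K\),\s+(?<duration>[^\s]+)\ssecs\]"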
↧
How to rex out and substitute it with *
I would like to mask email addresses like the one below with *
Original :- john.trava@gmail.com
Expected :- Jo**.***va@gmail.com
The first two characters of the first name and the last two characters before the @ should stay visible, and the domain name (e.g. gmail.com) should also stay visible.
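A sketch that produces this masking for the example above (the field name email is an assumption; the sed expression keeps the first two characters, the last two before the @, any dots, and everything from the @ onward, and turns each remaining character into a *):

    | makeresults
    | eval email="john.trava@gmail.com"
    | rex mode=sed field=email "s/(?<=..)[^.@](?=[^@]{2,}@)/*/g"

This assumes sed-mode rex accepts PCRE lookarounds, which is worth verifying on your version.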
Thanks in advance.
↧
iplocation
I am not getting iplocation to work in this query:
tag= web | stats count by IP, sessionId | stats dc(IP) as count, values(IP) as clientIP by sessionId | where count> 5 | iplocation clientIP
I can see the Country, City, and Region fields appear, but they are not populated.
But when I run the following search, iplocation works, with the Country, Region, etc. fields populated:
tag= web | iplocation IP | table IP, Country
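One variation worth trying, since iplocation clearly works when it runs against the raw IP field: run it before the aggregation and carry the location values through stats (a sketch built from the two searches above):

    tag=web
    | iplocation IP
    | stats count, values(Country) AS Country by IP, sessionId
    | stats dc(IP) AS count, values(IP) AS clientIP, values(Country) AS Country by sessionId
    | where count > 5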
↧
Event data filtering working in one environment but not in other.
I have two clustered environments, each consisting of 3 SHs, 3 indexers, and 1 HWF, all running Splunk 6.4.1.
I need to filter out certain unwanted events coming from JMS queues and send them to the nullQueue.
I added the below to props.conf on the HWF:
[my_sourcetype]
TRANSFORMS-set= setnull,setparsing
and this in transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = (?<=mbody=.{51}TQ-123|mbody=.{51}TQ-145)
DEST_KEY = queue
FORMAT = indexQueue
This works perfectly in one cluster environment but not in the other. Since the conf files are the same, and so are the versions of the Splunk forwarders, indexers, and servers, why does the filtering fail in the second environment?
Any suggestions on how to debug this, or what the reason might be?
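One comparison that might narrow it down: check what configuration is actually in effect on each HWF and diff the two (a sketch, run from the HWF in both environments):

    # effective props for the sourcetype, with the file each setting comes from
    $SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug
    # effective transforms stanzas referenced above
    $SPLUNK_HOME/bin/splunk btool transforms list setnull --debug
    $SPLUNK_HOME/bin/splunk btool transforms list setparsing --debug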
Thanks !
↧