Is there any guideline or best practice for which .conf files belong at the GUI/search head, indexer, and forwarder level?
I mean, each conf file has its purpose and a lot of settings, but maybe in practice we can somehow isolate that complexity by grouping the .conf files for each level?
Or at least minimize the complexity.
For example, a forwarder usually boils down to, at minimum, app.conf, inputs.conf, and outputs.conf.
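As a hedged illustration of that grouping idea (the app names and layout here are hypothetical, not an official guideline), a forwarder's minimum set is often packaged as small deployment apps:

    # $SPLUNK_HOME/etc/apps/org_all_forwarder_outputs/local/outputs.conf
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

    # $SPLUNK_HOME/etc/apps/org_all_forwarder_inputs/local/inputs.conf
    [monitor:///var/log/messages]
    index = os
    sourcetype = syslog

Keeping outputs and inputs in separate apps lets each level's complexity be deployed and versioned independently; search heads would similarly carry the UI-facing confs (savedsearches.conf, search-time props.conf), and indexers the indexes.conf and parse-time props/transforms.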
↧
Group configuration files to simplify each app in Splunk (search head, indexer, forwarder)
↧
Can the Splunk Add-on for Oracle Database work without DB Connect?
Hi All,
Can this add-on be used without DB Connect if I just want to monitor some local Oracle log files? e.g. alert_SID.log and SID_ora_*.aud
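If plain file monitoring is all that's needed, here is a hedged sketch of the inputs.conf stanzas on the Oracle host (the paths, index, and sourcetype names are assumptions; check which sourcetypes the add-on actually ships props for):

    [monitor:///u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log]
    sourcetype = oracle:alert
    index = oracle

    [monitor:///u01/app/oracle/admin/orcl/adump/orcl_ora_*.aud]
    sourcetype = oracle:audit
    index = oracle

File monitoring itself needs no DB Connect at all; DB Connect only comes into play for inputs that query the database over JDBC.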
↧
↧
Transferring logs between different network segments - which forwarders to use where?
Hi,
our network counts ~9000 servers, most of them running in separate IP network segments. I would like to ask you about log forwarding from those machines. Between the indexer and some servers we have to build several hops (forwarders). How do we build this properly?
Please take a look on that example:
Example:
Linux server GROUP LAN1 (Splunk forwarder - which one?) ----> Splunk forwarder (which one?) LAN1 ----> Splunk indexer
Linux server GROUP LAN4 ----> Splunk forwarder LAN3 ---....---> Splunk forwarder LAN1 ----> Splunk indexer?
If I understand the Splunk architecture correctly, on each machine I have to install a Splunk universal forwarder (or light forwarder?) to transfer logs from the locally running applications. Each universal forwarder installed on the app servers will push its logs to a heavy forwarder, which is connected to the next hop (another heavy forwarder, or in the final step the indexer). Is this the proper solution?
universal forwarder ---> heavy forwarder ---> heavy forwarder ---> indexer?
What about load balancing? We need it. If we want to push logs from ~500 heavily loaded systems, I suspect we need at least two machines. Is it possible to load balance such traffic?
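On the load-balancing point, a hedged sketch (hostnames hypothetical): forwarders balance automatically across every server listed in a tcpout group, so two intermediate heavy forwarders per segment can share the load with no extra components:

    # outputs.conf on each universal forwarder in LAN1
    [tcpout:lan1_heavy_forwarders]
    server = hf1.lan1.example.com:9997, hf2.lan1.example.com:9997
    autoLBFrequency = 30

Each forwarder switches between the listed targets every autoLBFrequency seconds (30 is the default), and the same pattern repeats at each hop toward the indexer.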
Thanks in advance for any hints.
With kind regards
Mike
↧
Stats sum(kb), subtotal output based on grouping
I have a query below that produces the sum of bandwidth used by remote intermediate forwarders. The output gives me a simple linear output with a sum by host:
index=_internal metrics thruput site-hub 11001 host=server0* | stats sum(kb) by host
What I am trying to get, without success, is to aggregate/subtotal the output by location (not currently an indexed field) so that I can produce a graph by location rather than a graph by host.
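A hedged approach (the lookup file and column names are hypothetical): map each host to its location with a CSV lookup, then re-aggregate:

    index=_internal metrics thruput site-hub 11001 host=server0*
    | stats sum(kb) AS kb by host
    | lookup host_locations.csv host OUTPUT location
    | stats sum(kb) AS total_kb by location

Here host_locations.csv would have two columns, host and location, uploaded as a lookup table file. If the location is encoded in the hostname itself, an eval/rex deriving a location field inline would avoid the lookup entirely.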
↧
How to extract fields with JSON values while creating a DB input in Splunk DB Connect?
- I am creating a DB input in DB Connect v3.
- My DB columns contain JSON values.
- I am getting correct raw data in Splunk, but when selecting Table mode, the field does not have the correct values.
- For example, if the column name is status and the JSON it contains is "[{"datasetId":1,"refreshStatus":16}]", then the field created in Splunk is status, but it only contains the value '[{'.
- All the rows in Splunk contain this same value '[{'.
- Maybe Splunk is using the double quote as a delimiter to separate fields, but the JSON itself contains quotes.
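A hedged search-time workaround, assuming the full JSON really is intact in _raw (the rex pattern is a guess at the event layout): re-extract the whole JSON array yourself and hand it to spath:

    index=mydb sourcetype=mydb_input
    | rex field=_raw "status=\"?(?<status_json>\[.+?\])"
    | spath input=status_json
    | rename "{}.datasetId" AS datasetId, "{}.refreshStatus" AS refreshStatus

spath parses the array properly, producing fields like {}.datasetId, which sidesteps whatever quote-based delimiting mangled the automatic extraction.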
↧
↧
Unable to forward syslog to third-party syslog server
I have an all-in-one environment which indexes VPN logs. I also want to forward the raw VPN logs to third-party syslog servers.
I have configured outputs, transforms, and props as in the attached snapshot; however, it does not forward any logs out.
09-18-2017 17:45:02.632 +0800 INFO Metrics - group=syslog_connections, vpnsyslog:172.18.165.144:514:172.18.165.144:514, sourcePort=8089, destIp=172.18.165.144, destPort=514, _tcp_Bps=0.00, _tcp_KBps=0.00, _tcp_avg_thruput=0.00, _tcp_Kprocessed=0, _tcp_eps=0.00
Anything wrong with my configuration?
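For comparison, here is a hedged sketch of the usual syslog-routing wiring (the stanza/group names are hypothetical, since the snapshot isn't reproduced here):

    # outputs.conf
    [syslog:vpnsyslog]
    server = 172.18.165.144:514
    type = udp

    # props.conf
    [your_vpn_sourcetype]
    TRANSFORMS-route_vpn = route_to_vpnsyslog

    # transforms.conf
    [route_to_vpnsyslog]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = vpnsyslog

Common gotchas worth checking: the props stanza must match the events' actual sourcetype, the change needs a restart, and _SYSLOG_ROUTING only applies at parse time, so already-indexed events are never re-forwarded.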
↧
Unable to start SPLUNKD on Search Head
It looks like my Linux devices were restarted sometime yesterday. I was able to restart my license server; however, when I tried to restart my search head, I got a message indicating that HTTP port 8000 is already bound, asking whether I want another port.
I did a ps -auf | grep splunk, and all I see is this:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
n2e033 2018 0.0 0.0 121512 2280 pts/0 Ss 06:27 0:00 -bash
root 9730 0.0 0.0 170296 3116 pts/0 S 06:34 0:00 \_ su root
root 9735 0.0 0.0 108496 1916 pts/0 S 06:34 0:00 \_ bash
root 11103 2.0 0.0 123240 1404 pts/0 R+ 06:35 0:00 \_ ps -auf
root 4734 0.0 0.0 4068 588 tty6 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty6
root 4732 0.0 0.0 4068 592 tty5 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty5
root 4730 0.0 0.0 4068 588 tty4 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty4
root 4728 0.0 0.0 4068 592 tty3 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty3
root 4726 0.0 0.0 4068 588 tty2 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty2
root 4724 0.0 0.0 4068 588 tty1 Ss+ Sep17 0:00 /sbin/mingetty /dev/tty1
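Since no splunkd process appears in that output, something else (or a leftover socket/process) may be holding port 8000. A hedged next step using standard Linux tooling (paths assume a default /opt/splunk install):

    # show which process is bound to port 8000
    netstat -tlnp | grep :8000
    # or, where lsof is available
    lsof -i :8000

    # once the offender is stopped (or if nothing is bound), try
    /opt/splunk/bin/splunk start

If nothing is listening, also look for stale pid files under $SPLUNK_HOME/var/run/splunk left by the unclean shutdown.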
↧
Execute a stored procedure with parameters using data inputs
Hi,
I want to execute a stored procedure with parameters, but it gives me an error: "com.microsoft.sqlserver.jdbc.SQLServerException: The statement did not return a result set."
DB input:
exec "StaffDeployment"."dbo"."procedurename" @FROMDATE = '9/14/2017', @TODATE = '9/14/2017 23:59:00', @STEV = '1', @Counter = 5, @FieldPosition = '1,2,3,4'
Please help me with this.
Also, please contact me, as I have a small requirement on Splunk which needs to be finished as a task; I will pay further as required.
Regards
Suhail
↧
Command for consecutive events
Hi All,
I need a command for consecutive events that are triggered one after another out of multiple events (e.g. 3 consecutive events from 100 events).
For example, hits from an external IP towards our web server arriving as accept, accept, deny or deny, deny, accept; or, in Windows, receiving "account successfully logged in", "account created", "change password attempt", etc.
The goal is to find three or more consecutive events generated one after another out of ~100 logs in order to identify a specific pattern.
Can anyone please help with a Splunk command to achieve this?
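A hedged sketch using a sliding streamstats window (the field names src_ip and action are assumptions about your data):

    sourcetype=web_access
    | sort 0 _time
    | streamstats window=3 list(action) AS last3 by src_ip
    | where mvcount(last3)=3
    | eval pattern = mvjoin(last3, ",")
    | where pattern="accept,accept,deny" OR pattern="deny,deny,accept"

streamstats window=3 carries the last three actions per source IP down the event stream; mvjoin flattens them into one string so a specific 3-event sequence can be matched, and the window size can be raised for longer patterns.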
↧
↧
Using _time as a discriminator without time span?
I want to use the _time field as one of my discriminator (split-by) fields in a tstats command. I wasn't able to figure out how to do this without the time values being rounded/grouped into some time bucket.
For other fields used as discriminators, every existing value is displayed as a separate row, but with _time, even when I'm not using any span= in my command, the values are grouped somehow.
In this case I have really rare events, which is why I want the exact time values here.
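A hedged workaround: tstats always bins _time, but the bins can be made as fine as the data requires (span=1s here is an assumption about the needed precision; with rare events, 1-second buckets are usually the exact timestamps):

    | tstats count where index=myindex by _time span=1s, host

If sub-second precision matters, grouping by a raw time field stored in the underlying accelerated data model (where one exists) is the usual fallback.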
↧
Heavy Forwarder using only one CPU
I would like to understand whether it is possible to use multiple CPUs on a heavy forwarder.
In my current architecture, I have two heavy forwarders, and both are using only one CPU for processing events.
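A hedged pointer: a single ingestion pipeline is essentially serial, which is why one core saturates. server.conf supports multiple pipeline sets (raise this only if spare cores and I/O headroom exist):

    # $SPLUNK_HOME/etc/system/local/server.conf
    [general]
    parallelIngestionPipelines = 2

After a restart, each pipeline set gets its own input/parsing/output queues, so event processing can spread across roughly one core per set.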
Thanks,
Nardi
↧
Doing math on results of sum(duration) of transaction?
I have a search that shows the time a phone was in a call, in seconds, by using sum(duration) over the events:
| transaction Tag | chart count(Tag) as NumberOfCalls sum(duration) as Time_in_call(seconds) by codec | sort sum(duration) | reverse
I'd like Time_in_call(seconds) to be in minutes. I thought it might be as simple as:
| transaction Tag | chart count(Tag) as NumberOfCalls sum(duration) as **Time_in_call(seconds) / 60** by codec | sort sum(duration) | reverse
or something like: | transaction Tag | **eval Time_in_call(minutes) as sum(duration) / 60** | chart count(Tag) as NumberOfCalls **Time_in_call(minutes)** by codec | sort sum(duration) | reverse
It looks like eval doesn't allow sum() of anything. This seems straightforward, but I'm at a loss even after spending 30 minutes searching around here.
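A hedged fix: do the division with eval after the aggregation, and avoid parentheses in field names, since they collide with eval's function syntax:

    | transaction Tag
    | chart count(Tag) AS NumberOfCalls sum(duration) AS Time_in_call_seconds by codec
    | eval Time_in_call_minutes = round(Time_in_call_seconds / 60, 2)
    | sort - Time_in_call_seconds

eval works row by row on fields that already exist, which is why sum(duration) can't appear inside it; the aggregation has to happen first in chart/stats, after which the summed field is an ordinary column.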
↧
Multiple CSS files in a dashboard - can one CSS override the other?
Hi Splunkers,
I am using 3 CSS files in multiple dashboards. Now my use case is that I need to consolidate all 3 into one CSS file, but that needs panel IDs added, which will take long effort hours.
That said, is it possible to define the precedence of the CSS files and use them as they are?
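A hedged pointer: in Simple XML the stylesheets are listed on the root element, and normal CSS cascade rules apply, so among rules of equal specificity the file loaded last wins (file names hypothetical):

    <dashboard stylesheet="base.css, overrides.css">
      <label>My Dashboard</label>
    </dashboard>

Placing the overriding file last in the comma-separated list, or giving its selectors higher specificity (with !important as a last resort), lets one CSS file override the others without merging them.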
↧
↧
Renewing my developer license taking really long?
I have a Splunk developer license that I have renewed a total of 3 times now. It is set to expire on the 23rd (in 5 days), and I just wanted to get it renewed before it runs out, because I am bringing in data. I fear I might not be able to search once the license expires and I violate the 500 MB/day limit. I applied over a week ago, and I have gotten no response.
Does anyone know why the delay might be happening? Is there anything else I need to do to renew my developer license?
↧
Configure selective indexing to send all logs to a dev indexer
I am a bit lost on selective indexing. I want to configure one of my prod indexers to send logs to a dev indexer, and after reading up on some documents I feel I am missing something. Below is the config I would apply; does anyone have tips on what I am missing?
**Prod indexer**
outputs.conf:
[indexAndForward]
index=true
selectiveIndexing=true
[tcpout:send_to_dev]
server = dev_indexer:9997
inputs.conf:
Add _INDEX_AND_FORWARD_ROUTING=send_to_dev to all inputs.conf stanzas on the prod indexer.
**Dev indexer inputs.conf**
Add an inputs.conf stanza that listens on 9997 for the prod indexer.
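Two hedged observations based on my reading of the outputs.conf/inputs.conf specs (suggestions to verify, not a definitive fix). First, _INDEX_AND_FORWARD_ROUTING only marks an input for local indexing and its value is an arbitrary string, not a tcpout group name; routing to a specific group is done with _TCP_ROUTING. Second, if the goal is to forward all logs while still indexing everything locally, selectiveIndexing isn't needed at all:

    # prod indexer: outputs.conf
    [indexAndForward]
    index = true

    [tcpout]
    defaultGroup = send_to_dev

    [tcpout:send_to_dev]
    server = dev_indexer:9997

    # dev indexer: inputs.conf
    [splunktcp://9997]

With index=true plus a defaultGroup, the prod indexer keeps indexing as before and streams a copy of everything to the dev box; selectiveIndexing=true would instead restrict local indexing to only the inputs explicitly marked with _INDEX_AND_FORWARD_ROUTING.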
↧
Streamstat reset_after resets for all users
I found this search from the user [woodcock][1]; it basically searches for successful logins after several failed attempts:
index=* sourcetype=linux_secure tag=authentication action="failure" OR action="success"
| reverse
| streamstats sum(eval(match(action,"failure"))) AS action_count reset_after="("match(action,\"success\")")" BY user
| reverse
| where match(action,"success") AND action_count>=3
(in this case the query searches for 3 failed logins followed by one successful login)
**action_count** counts all the failed attempts, and this works quite well.
For example, if root has 5 failed login attempts, it counts this as 5, and when one successful attempt occurs, it just resets the count and starts again at one with the next failed login.
However, this only works if you search for a particular user (in this case user=root);
if you run the query unfiltered, it will still count the failed logins per user, but after a reset for one user, it also resets the count for all users.
So would it be possible to reset the count per user instead of for all users?
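A hedged workaround, derived from the behavior described above rather than from documented reset_after semantics: make each user's events contiguous before streamstats runs, so a reset can only land inside that user's own run of events:

    index=* sourcetype=linux_secure tag=authentication action="failure" OR action="success"
    | sort 0 user, _time
    | streamstats sum(eval(match(action,"failure"))) AS action_count reset_after="("match(action,\"success\")")" BY user
    | where match(action,"success") AND action_count>=3

Because all of a given user's events now sit in one block in chronological order, a reset triggered by that user's success has no other user's running count left to clobber.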
[1]: https://answers.splunk.com/users/1493/woodcock.html?utm_source=answers&utm_medium=email&utm_term=woodcock&utm_content=&utm_campaign=mention
↧
Any tool to encrypt passwords based on splunk.secret?
We have multiple splunk.secret values for the different tiers (forwarders, search heads, etc.). Some of the apps, like IPS ones, need the UI to encrypt passwords :( which is not possible on all tiers.
Is there a tool/API which can encrypt a password based on splunk.secret?
E.g., what I'm looking for is:
=> supply passwords.conf and splunk.secret as inputs to the tool
=> run the API/tool so that it takes the passwords.conf and splunk.secret of the relevant tier/server and encrypts the password with it
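A hedged alternative that avoids reimplementing the crypto: the storage/passwords REST endpoint on each tier encrypts with that server's own splunk.secret, so posting the cleartext once per tier yields a correctly encrypted passwords.conf entry there (host, app, and credential names hypothetical):

    # run against the target tier's management port
    curl -k -u admin:changeme \
        https://target-host:8089/servicesNS/nobody/myapp/storage/passwords \
        -d name=svc_account -d realm=myrealm -d password='S3cret!'

This writes the encrypted stanza into myapp's passwords.conf on that server. For truly offline encryption against a copied splunk.secret, community tools exist (e.g. the splunksecrets Python package), though I can't vouch for their exact interfaces here.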
Thanks in advance
↧
↧
Simulating 100 concurrent searches
I would like to check if there is any way to simulate 100 concurrent searches.
Also, if I log in to 5 different accounts on a single PC and run a search in every session, does that equate to 5 concurrent searches?
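A hedged sketch for driving concurrent searches from the CLI via the REST API (credentials and the search string are placeholders). On the second question: concurrency is counted per running search job on the search head, regardless of which PC the sessions come from, so 5 simultaneous searches from 5 accounts on one PC do count as 5:

    # fire 100 search jobs in parallel against the search head
    for i in $(seq 1 100); do
        curl -k -u admin:changeme \
            https://localhost:8089/services/search/jobs \
            -d search="search index=_internal earliest=-15m | stats count" &
    done
    wait

Watching the Activity > Jobs page while this runs shows how many searches actually execute concurrently versus getting queued by the concurrency limits.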
Please advise.
↧
Splunk Add-on for Microsoft Cloud Services: REST ERROR[1021]: Fail to decrypt the encrypted credential information - not well-formed (invalid token)
Hello Splunkers,
I am stuck on an error when trying to install the Microsoft Cloud Services add-on on my search head.
First, I must mention that I work in a distributed environment with:
1 search head
2 indexers with 1 cluster master node
2 forwarders
1 deployment server
As stated in the documentation, I currently can't install the add-on and collect Office data from my forwarders, as they are not heavy forwarders.
From what I understand, the only way I can go further with this is to install it on my SH.
OK now I try to create an input and get the following error:
REST ERROR[1021]: Fail to decrypt the encrypted credential information - not well-formed (invalid token) : line 33, column 42
Here is the full trace:
09-18-2017 17:23:34.118 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': BaseException: REST ERROR[1021]: Fail to decrypt the encrypted credential information - not well-formed (invalid token): line 33, column 42
09-18-2017 17:23:34.128 +0200 ERROR AdminManagerExternal - External handler failed with code '1' and output: 'REST ERROR[1021]: Fail to decrypt the encrypted credential information - not well-formed (invalid token): line 33, column 42'. See splunkd.log for stderr output.
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': Traceback (most recent call last):
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/bin/runScript.py", line 78, in
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': execfile(REAL_SCRIPT_NAME)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_ms_o365_rh_server_accounts.py", line 31, in
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': admin.init(base.ResourceHandler(Account), admin.CONTEXT_APP_AND_USER)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 129, in init
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': hand.execute(info)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 589, in execute
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/base.py", line 285, in handleCreate
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self.encode(self.callerArgs.data)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/base.py", line 348, in encode
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self._cred_mgmt.encrypt(tanzaName, args)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/cred_mgmt.py", line 75, in encrypt
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': cred_data = self.decrypt(stanzaName, {})
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/cred_mgmt.py", line 123, in decrypt
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': shouldRaise=True)
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/error_ctl.py", line 149, in ctl
09-18-2017 17:23:48.975 +0200 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': raise BaseException(err)
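A hedged troubleshooting step (a common suggestion for this family of errors, not from the add-on's documentation): the traceback fails while decrypting a previously stored credential, which often indicates a stale or corrupted encrypted entry, e.g. one copied over from a host with a different splunk.secret. Clearing the add-on's stored credential and re-entering it through the UI may help:

    # on the search head, after backing up the file
    $SPLUNK_HOME/bin/splunk stop
    # remove the stored credential stanzas for the add-on, e.g. in
    #   $SPLUNK_HOME/etc/apps/Splunk_TA_microsoft-cloudservices/local/passwords.conf
    # then restart and re-create the account in the add-on UI
    $SPLUNK_HOME/bin/splunk start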
Another thing: when going to the Troubleshooting page of the add-on, it shows warning icons from my indexers saying:
REST Processor: Failed to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-splunk_ta_ms_o365_server_management_api_inputs?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
As well as:
Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_microsoft-cloudservices/configs/conf-splunk_ta_ms_o365_server_management_api_inputs?count=0 from server=https://127.0.0.1:8089 - Not Found
I clearly don't understand any of these three error messages (the one when trying to create an input, as well as the two from my indexers). Why am I getting errors from my indexers when they aren't involved in this installation?
Any help would be really appreciated, guys!
Thanks a lot
Cheers
↧
CSV Fields Imported
Hi!
I imported a CSV file with 97 fields, and after doing some searches, some fields are missing for some records. For example, I have this 'close_notes' field, and it's present in some of the records, while there are a few records where it does not exist.
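A hedged way to isolate the affected rows (the source name is a placeholder): pull the events where the field is null and inspect their raw text, since a common cause is unescaped quotes or embedded newlines shifting the columns:

    source="myfile.csv"
    | where isnull(close_notes)
    | table _raw

If the raw rows show misaligned columns, check those CSV lines for embedded commas, quotes, or newlines in earlier fields; if the rows look fine, close_notes is probably just empty there, and empty values don't create the field at search time.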
Thank you.
↧