Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

How can I watch a CSV file?

All, I have a CSV that is written to the file system by a database. A basic monitor stanza brought the file in perfectly with sourcetype=csv. However, when a new file is loaded with the same name, Splunk does not bring in the new contents. Any idea how to get Splunk to reread the file?
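One possible explanation (an assumption, since it depends on how the database rewrites the file): Splunk identifies a file by a CRC of its first 256 bytes, and CSV exports that share an identical header row can look like a file that was already indexed. A hedged inputs.conf sketch, with a placeholder path:

```
# inputs.conf -- /data/export/report.csv is a hypothetical path
[monitor:///data/export/report.csv]
sourcetype = csv
# Widen the initial CRC window so files that share only a header
# are not mistaken for the same file.
initCrcLength = 1024
```

If the database truly rewrites the whole file each time, a `[batch:///data/export/report.csv]` stanza with `move_policy = sinkhole` (which indexes the file and then deletes it) may be the more reliable option.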

Enabling Duo in Splunk breaks local admin login. Is there a way around that?

I'm on the 6.5.2 release and have Duo turned on in the Splunk configs. It has been working great, but I just found out that I cannot log in as user **admin** in Splunk Web. I get this message: `Access Denied. The username you have entered cannot authenticate with Duo Security. Please contact your system administrator.` That's rather inconvenient! Surely there is a way around this?

How do you make multiple cumulative time series?

I can make multiple summed time series: `source="splunk-source" | timechart sum(figure) as figure by category`. I can also make a single cumulative summed time series: `source="splunk-source" | timechart sum(figure) as figure | streamstats sum(figure) as cumulative_figure | timechart last(cumulative_figure)`. But I can't make multiple cumulative summed time series. I would appreciate some help with that.
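One approach that is sometimes suggested (a sketch, untested here) is to let `timechart ... by category` split the data into one column per category first, and then run `streamstats` over every column at once with a wildcard:

```
source="splunk-source"
| timechart sum(figure) AS figure BY category
| streamstats sum(*) AS *
```

`sum(*) AS *` accumulates each category column in place, giving one cumulative series per category.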

Why am I getting a "File in use" error when trying to upgrade our forwarder to version 6.6.6?

I'm trying to upgrade our forwarder to splunkforwarder-6.6.6-ff5e72edc7c4-x64-release.msi, but it is failing with a "File in use" error. This is the command I used: `msiexec.exe /i splunkforwarder-6.6.6-ff5e72edc7c4-x64-release.msi /log C:\Windows\Install\Install_SplunkForwarder_6.6.6_MSI.log /quiet /norestart LAUNCHSPLUNK=0 AGREETOLICENSE=Yes` It looks like it fails because the Splunk service is running, but the MSI usually takes care of that. Any idea what's going on?
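A common workaround (a sketch, assuming the service carries the default name SplunkForwarder) is to stop the service explicitly before running the MSI, so none of the forwarder's files are locked during the upgrade:

```
net stop SplunkForwarder
msiexec.exe /i splunkforwarder-6.6.6-ff5e72edc7c4-x64-release.msi /log C:\Windows\Install\Install_SplunkForwarder_6.6.6_MSI.log /quiet /norestart LAUNCHSPLUNK=0 AGREETOLICENSE=Yes
net start SplunkForwarder
```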



Can I use an average in Maps+ instead of a count?

When using Maps+, the clusters it creates show the count of events in each cluster. How can I use the average of the values for a particular KPI instead? ![alt text][1] [1]: /storage/temp/256000-capture.png As the picture shows counts like 273; I want the average of a percentage displayed there instead. Currently I am doing the same thing with custom cluster maps: `basesearch | eval kpi=A+B | geostats latfield=latitude longfield=longitude avg(kpi)` This gives me the desired result: geographical clusters with the average KPI of all the items in the cluster displayed on top. But Maps+ has better detail, so I wanted to use that. Is there a way to get a similar average there instead of a count of the items in the cluster?

How do you bucket two events using a timespan that starts with the first event?

My question is a mix of using the transaction command with the bin command. What I would like to achieve is capturing when 2 consecutive POST requests are made in proxy logs within two seconds of each other. Straight up using `| bin span=2s` misses pairs of events that straddle a bucket boundary. Essentially, I want the two-second timer to start when the first event occurs, and then look for the next event (another POST request) within two seconds. Is there a feasible way to achieve what I'm asking for? Or am I not making much sense?
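One way to anchor the window at the first event (a sketch; the source name and the `method` field are assumptions about the proxy logs) is `transaction`, whose `maxspan` is measured from the first event of each group rather than from fixed clock boundaries:

```
source="proxy-logs" method=POST
| transaction maxspan=2s maxevents=2
| where eventcount=2
```

`eventcount` is added by `transaction`, so the final `where` keeps only groups where a second POST actually arrived within two seconds of the first.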

Memory Tracker not working as expected.

Hi Splunkers, we have set search_process_memory_usage_threshold to 3GB, but noticed that searches are terminated only when usage reaches much higher values; example below. Is this expected behaviour, or is there another parameter to enable to make it work better?

`09-13-2018 10:59:00.013 +1000 WARN SearchProcessMemoryTracker - Dispatch Command: The search process with sid=XXXX was forcefully terminated because its physical memory usage (5626.160000 MB) has exceeded the 'search_process_memory_usage_threshold' (3000.000000 MB) setting in limits.conf.`
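This may actually be expected: as I understand it (worth verifying against the limits.conf documentation for your version), the absolute threshold works together with `search_process_memory_usage_percentage_threshold` (by default a percentage of total system memory), and the tracker only checks periodically, so a fast-growing search can overshoot before both conditions are met. A hedged limits.conf sketch:

```
# limits.conf -- the percentage value below is illustrative only
[search]
# absolute limit, in MB
search_process_memory_usage_threshold = 3000
# lower this too; the absolute limit alone may not trigger termination
search_process_memory_usage_percentage_threshold = 10
```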

Which index volume should be larger?

I have upgraded my indexer storage from 450GB to 2TB to increase my data retention. Below is my current indexer volume configuration: hot volume: 70GB, cold volume: 35GB. Should I increase the hot volume or the cold volume? Please suggest.
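As a general rule (a sketch with hypothetical paths and sizes, not a sizing recommendation for your environment): hot/warm holds the most recent data and benefits from fast disk, while retention is mostly bought with cold storage, so a large capacity increase usually goes mainly to the cold volume. In indexes.conf terms:

```
# indexes.conf -- paths and sizes are placeholders
[volume:hot]
path = /opt/splunk/hotwarm
maxVolumeDataSizeMB = 300000     # ~300GB on fast storage

[volume:cold]
path = /opt/splunk/cold
maxVolumeDataSizeMB = 1500000    # ~1.5TB for long retention
```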

Display the last 8 hours from now()?

Hi Splunkers, I want to display the last 8 hours of data at 1-hour intervals without using any index or KV store table, i.e. with something like `makeresults` or `gentimes`. For example, suppose the time now is "2018-09-14 13:31:42" and I start from `| makeresults | eval current=now() | timechart span=1h count as duration`. I want to display the following: **time** 13:30, 12:30, 11:30, 10:30, 09:30, 08:30, 07:30, 06:30. Thanks in advance.
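One way to build such a series from nothing (a sketch) is `makeresults count=8` plus `streamstats` to number the rows, then derive each row's time by subtracting whole hours from `now()`:

```
| makeresults count=8
| streamstats count AS hours_ago
| eval _time = now() - (hours_ago - 1) * 3600
| eval time = strftime(_time, "%H:%M")
| table time
```

The first row lands on the current hour and each subsequent row steps back one hour, producing eight rows like the list above.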

Help on table count

Hello, I use the search below:

`index="wineventlog" sourcetype="wineventlog:*" SourceName="*" Type="Critique" | dedup host | table _time SourceName host | stats count by host | sort - count limit=10 | join host [search index=windows sourcetype=winregistry key_path="\\registry\\machine\\software\\wow6432node\\x\\master\\WindowsVersion" | stats values(data) as OS by host] | table OS count`

But I don't want a count per host; I want a global count per OS. For example, currently I get:

OS | Count
W10 | 1
W10 | 1

But I need instead:

OS | Count
W10 | 2

Could you help me please?
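Since each host appears only once after `dedup host`, one sketch (untested) is to keep the per-host join but move the final count so it groups by OS instead of by host:

```
index="wineventlog" sourcetype="wineventlog:*" SourceName="*" Type="Critique"
| dedup host
| join host
    [ search index=windows sourcetype=winregistry key_path="\\registry\\machine\\software\\wow6432node\\x\\master\\WindowsVersion"
      | stats values(data) AS OS BY host ]
| stats count AS Count BY OS
| sort - Count
```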

Splunk TA for Linux

I installed the Linux TA and app, but the received logs show up as raw events and are not parsed by the TA. The Linux servers send logs to a universal forwarder via syslog, and when I search the related index the logs appear as raw events with no field extraction. The TA is the most downloaded one on Splunkbase. What is the solution?
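A few things worth checking (stated as assumptions, since the exact TA and data path matter): add-on field extractions are search-time and keyed to specific sourcetypes, so the TA must be installed on the search head, and the incoming syslog data must carry a sourcetype the TA recognises rather than an auto-generated one. A hedged inputs.conf sketch for a syslog listener on the forwarder, with placeholder port and index:

```
# inputs.conf on the forwarder -- port and index are hypothetical
[udp://514]
sourcetype = syslog
index = os_linux
```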

How to display multiple column headers

Hello everyone, I'd like to display multiple column headers on a table, as in the image below. I can create the table; the problem is the column header. The colors don't matter; I'd just like two rows of column headers, with three groups on the first header row. Please refer to the attached image. Thank you in advance. ![alt text][1] [1]: /storage/temp/254934-multiple-column-headers.png

How to search for one pattern while excluding hosts that match a second pattern

I am searching a log file for the pattern **END ABCD234** **hour>00**, but the search should exclude certain **hosts** (servers). The hosts to be ignored can be identified by the pattern **"DISABLE" "END" hour>00**. Here, hour is a field extracted from the timestamp (example: **01**:15:38 gives hour=01). Please let me know if more info is needed.
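One sketch (using the field names as described above) is to exclude the unwanted hosts with `NOT` plus a subsearch that returns only the `host` field:

```
"END ABCD234" hour>00 NOT
    [ search "DISABLE" "END" hour>00
      | dedup host
      | fields host ]
```

The subsearch expands to a list of `host=` clauses, which the outer `NOT` then excludes from the main search.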

Is it possible to make Monitoring Console app display on the Apps list on the left side on the Home page?

Is it possible to make Monitoring Console app display on the Apps list on the left side on the Home page? Thanks.
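One thing that may work (an untested assumption: the Monitoring Console ships as the `splunk_monitoring_console` app and is hidden from the app list by default) is to override its visibility in a local app.conf:

```
# etc/apps/splunk_monitoring_console/local/app.conf
[ui]
is_visible = true
```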

Webhook when a background search job completes

Hi, I am trying to automate a Splunk search and export the results to our database. Is it possible to run the search as a background job and have Splunk call a webhook on my API when it completes?
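One option that avoids polling (a sketch; the stanza name, search, schedule, and URL are all placeholders) is to schedule the search as a saved search and attach Splunk's built-in webhook alert action, which POSTs the job's result metadata to your endpoint when it completes:

```
# savedsearches.conf -- every name and the URL are hypothetical
[export_to_db]
search = index=main sourcetype=app_logs | stats count BY status
cron_schedule = */15 * * * *
enableSched = 1
action.webhook = 1
action.webhook.param.url = https://example.com/api/splunk-callback
```

Your API can then fetch the full result set via the REST endpoint referenced in the webhook payload.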

Finding and removing strings in logs from the Forwarder

Hello, I'm trying to send some antivirus logs from the forwarder into Splunk. The logs I'm sending have a tendency to spam, for example:

13/09/2018 16:06:53 No usable rule found Blocked 192.168.0.40:53354 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:54 No usable rule found Blocked 192.168.0.40:52091 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:54 No usable rule found Blocked 192.168.0.40:49467 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:54 No usable rule found Blocked 192.168.0.40:53354 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:55 No usable rule found Blocked 192.168.0.40:52091 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:56 No usable rule found Blocked 192.168.0.40:53354 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:57 No usable rule found Blocked 192.168.0.40:52091 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:58 No usable rule found Blocked 192.168.0.40:56694 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM
13/09/2018 16:06:59 No usable rule found Blocked 192.168.0.40:56694 192.168.0.30:53 UDP C:\Windows\System32\dns.exe NT AUTHORITY\SYSTEM

I want to filter out lines in the log that say "No usable rule found". I've tried adding the following to props.conf, which I copied into the `C:\Program Files\SplunkUniversalForwarder\etc\system\local` directory:

`[source:\path\to\log\log.txt]`
`SEDCMD-strip-detail-msg=^.*(listening on the port|[Nn]o usable rule found)*$`

I have also tried messing with transforms.conf too, but to no avail. Any ideas?
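Two likely issues (hedged, but both are standard Splunk behaviour): `SEDCMD` rewrites event text rather than discarding whole events, and a universal forwarder does not parse data, so props/transforms placed on the UF are ignored for this purpose. Event filtering is normally done on the indexer (or a heavy forwarder) by routing matching events to the nullQueue. A sketch, keeping the path placeholder from the question:

```
# props.conf -- on the indexer, not the universal forwarder
[source::\path\to\log\log.txt]
TRANSFORMS-drop-noise = drop_no_usable_rule

# transforms.conf
[drop_no_usable_rule]
REGEX = No usable rule found
DEST_KEY = queue
FORMAT = nullQueue
```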

Changing UI in Enterprise version

Hi guys, I may sound stupid, but since I am new here I wanted to know: does a Splunk Enterprise license allow us to change the UI (look and feel)? Thanks

Microsoft Azure Active Directory Reporting Add-on for Splunk - Traceback Error when saving Client ID and Client Secret

Hi everybody, I installed the Microsoft Azure Active Directory Reporting Add-on for Splunk. When I enter the Client ID and the Client Secret and click "save", I get the following error:

    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/splunktaucclib/rest_handler/handler.py", line 113, in wrapper
        for name, data, acl in meth(self, *args, **kwargs):
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/splunktaucclib/rest_handler/handler.py", line 86, in wrapper
        return meth(self, name, data)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/splunktaucclib/rest_handler/handler.py", line 197, in update
        self.rest_credentials.encrypt_for_update(name, data)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/splunktaucclib/rest_handler/credentials.py", line 165, in encrypt_for_update
        self._set(name, encrypting)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/splunktaucclib/rest_handler/credentials.py", line 382, in _set
        password=context.dump(credentials)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/utils.py", line 154, in wrapper
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/credentials.py", line 150, in set_password
        self._update_password(partial_user, curr_str)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/utils.py", line 154, in wrapper
        return func(*args, **kwargs)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/credentials.py", line 176, in _update_password
        self._storage_passwords.create(password, user, self._realm)
      File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/solnlib/packages/splunklib/client.py", line 1826, in create
        state = _parse_atom_entry(entries[0])
    IndexError: list index out of range

Does anyone know why this happens? I installed the add-on on my search head and my two indexers, and tried to configure the setup on my search head, as it was not possible to configure anything on the indexers.