Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Picking the right HDD, SSD

Hello everyone, could anyone post a typical HDD profile detailing what a medium-range and a high-end HDD would look like for Splunk? The same for SSDs. (Links to more details about picking the right disk for a Splunk deployment are also welcome!) Thank you! Kind regards, David

Retrieve lookup data with JS

Hello Splunkers, I'll try to be brief. I'm trying to create an HTML homepage for a Splunk app, and I've been trying to read data from a lookup without SplunkJS or any Splunk code (I'm doing this because Splunk is spoiling my CSS pages, and I need to customize both this page view and the way I handle the lookup data). Note: I have my JS and the lookup in the same app. I tried to do something like this: `$.ajax({ url: '../../../../lookups/ccm_links.csv', dataType: 'text' }).done(successFunction);`
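For context, lookup files are not served as static assets at a relative path, so a common workaround is to run `| inputlookup` through Splunk's REST search endpoint and parse the returned CSV. A minimal sketch, where the app name `my_app`, the proxy path, and the lookup name are assumptions to adapt:

```javascript
// Minimal CSV parser (no quoted-field handling) for illustration only.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n").map(l => l.split(","));
  return rows.map(r => Object.fromEntries(header.map((h, i) => [h, r[i]])));
}

// Usage sketch (browser, inside Splunk Web where the session cookie applies;
// the /splunkd/__raw proxy path and app name "my_app" are assumptions):
// fetch("/en-US/splunkd/__raw/servicesNS/-/my_app/search/jobs/export" +
//       "?search=" + encodeURIComponent("| inputlookup ccm_links.csv") +
//       "&output_mode=csv")
//   .then(r => r.text())
//   .then(t => console.log(parseCsv(t)));
```

The parser is deliberately naive; if the lookup contains quoted fields with embedded commas, a real CSV library is needed.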

Change a column name to a specified field value in Splunk

Hi, I have the correct value in the field `current` and want to use that value as the column name, which is currently showing as A. For example, if current is '06-24-2018', then the table header row should have the column name '06-24-2018'. My search:

| base search
| eval current = strftime(currentTime,"%m-%d-%Y")
| eval A = if(P1C>0 OR P2C>0,"R",if(P3C>0,"Y","G"))
| table "Project",A

Please help me solve this issue. Let me know if you need any other information.
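For illustration, SPL's eval can use one field's value as another field's name via the curly-brace form, `eval {fieldname}=...`; a sketch built on the search above:

```
| base search
| eval current = strftime(currentTime, "%m-%d-%Y")
| eval A = if(P1C>0 OR P2C>0, "R", if(P3C>0, "Y", "G"))
| eval {current} = A
| fields - A
| table "Project", *
```

Here `{current}` expands to the date string, so the column appears as e.g. '06-24-2018' in the table.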

How to make a web service call from the dashboard?

I want to call a web service from the dashboard using the POST method from the UI. How do I define the web service in the app?

Graylog whitelist\blacklist?

I am using Graylog (winlogbeats) to forward Windows events to a Linux-based UF. I have a props.conf on my indexer and SH to set field aliases, since Graylog forwards fields with a winlogbeat prefix. I have 2 questions:

1. If I want to whitelist\blacklist on the UF, would I look for the fields with the winlogbeat prefix? So instead of this:

blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"

would I replace it with this:

blacklist1 = Winlogbeat_EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"

2. Should I or should I not put the props.conf on the Linux UF? It looks like this:

[graylog:windows]
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%b-%d %H:%M:%S
TZ = UTC
FIELDALIAS-winlogbeat_as_host = winlogbeat_fields_collector_node_id as host
FIELDALIAS-winlogbeat_as_eventid = winlogbeat_event_id as EventCode
FIELDALIAS-winlogbeat_as_processname = winlogbeat_event_data_ProcessName as Process_Name
FIELDALIAS-winlogbeat_as_logonid = winlogbeat_event_data_SubjectLogonId as Logon_ID
FIELDALIAS-winlogbeat_as_user = winlogbeat_user_data_SubjectDomainName as user
FIELDALIAS-winlogbeat_as_src_user = winlogbeat_user_data_subjectDomainName as src_user
FIELDALIAS-winlogbeat_as_action = winlogbeat_keywords as action
FIELDALIAS-winlogbeat_as_security_id = winlogbeat_user_data_SubjectUserSid as Security_ID
FIELDALIAS-winlogbeat_as_account_domain = winlogbeat_user_data_SubjectDomainName as account_domain

Thanks!
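For context, the `blacklist` keys shown above are specific to the native WinEventLog input; for data arriving through another pipeline like this, event filtering is usually done with props/transforms routing to nullQueue on the parsing tier instead. A minimal sketch, assuming the `graylog:windows` sourcetype above and a regex to adapt:

```
# props.conf (indexer or heavy forwarder — the parsing tier, not the UF)
[graylog:windows]
TRANSFORMS-drop_4662 = drop_graylog_4662

# transforms.conf
[drop_graylog_4662]
REGEX = 4662.*Object Type:\s+(?!groupPolicyContainer)
DEST_KEY = queue
FORMAT = nullQueue
```

Matched events are discarded before indexing; everything else passes through unchanged.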

Data loss after a week for an Index in Splunk

I have around 700 forwarders sending data to Splunk, and no index keeps data longer than 90 days. My indexed data seems to be fine for the last week. However, if I search further back than a week, the number of returned events is significantly lower. I have been observing this issue for the past month: the volume of data and the number of hosts returned by a search drop sharply beyond one week. Can anyone suggest what is happening and why Splunk is behaving like this? Thanks, Ramu Chittiprolu
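For anyone checking the same symptom, `dbinspect` shows the actual age of the buckets in an index, which reveals whether data is being frozen earlier than the configured retention; a sketch (replace `your_index`):

```
| dbinspect index=your_index
| stats min(startEpoch) as oldest, count as buckets by state
| eval oldest = strftime(oldest, "%Y-%m-%d %H:%M:%S")
```

If the oldest warm/cold bucket is only about a week old, the cause is likely retention or volume limits (frozenTimePeriodInSecs, maxTotalDataSizeMB) rather than the search itself.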

Splunk and Task Manager

Hello, I am new to Splunk and I have something complex that I need: I need to index the data coming from the Windows Task Manager, "Details" tab. In fact, I want to index the processor and memory usage for a specific service. How can I do this, please? Thanks
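For reference, per-process CPU and memory (the same numbers Task Manager shows) are usually collected with a Windows performance-monitor input on a universal forwarder; a minimal sketch, where the index name is a placeholder:

```
# inputs.conf on the Windows host
[perfmon://Process]
object = Process
counters = % Processor Time; Working Set
instances = *
interval = 60
index = perfmon
```

Setting `instances = *` collects every process; it can be narrowed to the specific service's process name once known.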

Not able to override default font colour of single-value label field

Hi Team, I am using dark.css in my dashboard, and everything is becoming black, including the label font of a single-value visualization. How can I change the label font colour so that the font colour of the single-value label field is white? Regards, smdasim

Parameter passing between 2 searches as input as well as output

Hi all, I need to feed input from search 1 into search 2 and then get a single result from search 2 using the values from search 1. For example, in the tables below, the correct Main_Ticket for Z4563A/B/C/* is C2995A. To find it, I first need just the first 5 characters of the Sourcetype_B ticket (Z4563). Then I need to pass that to another query, where I search for Z4563 in the Sourcetype_A linked tickets. If found, I need to return the Sourcetype_A ticket as output (here, C2995A).

Sourcetype_A
Ticket | Main_Ticket | Value | Line | LinkedTicket
A2345A | A2345A | DES | L1 |
C2995B001 | C2995B | DTS | X2 |
C2995A | C2995A | DPU | L1 | Z4563A, C2995A001, C2995B001
C2995A001 | C2995A | DTS | X2 |

Sourcetype_B
Ticket | Main_Ticket | Value | Line | LinkedTicket
A2345A002 | A2345A | DES | L1 |
C2995B002 | C2995B | DTS | X2 |
C2995A003 | C2995A | DPU | L1 |
Z4563B | Z4563A | SUB | S1 | Z4563A Z4563C
Z4563A | Z4563A | SUB | S1 | Z4563B Z4563C
Z4563C | Z4563A | SUB | S1 | Z4563A Z4563B

First I tried eval with a subsearch:

index="Index_Source" sourcetype="Sourcetype_B" SUB
| rename Ticket as B_Ticket
| eval Main_Ticket_5=substr(B_Ticket,1,5)
| table Main_Ticket_5
| eval B_MAIN_Ticket = [ search sourcetype="Sourcetype_A"
    | rename Ticket as A_Ticket
    | rename LinkedTicket as A_LinkedTicket
    | search A_LinkedTicket=*$Main_Ticket_5$*
    | eval B_SUB_MAINTICKET="\"$A_Ticket$\""
    | return $B_SUB_MAINTICKET ]
| table B_Ticket, B_SUB_MAINTICKET

However, it is not working. I read online that it is not possible to pass variables into a subsearch from eval this way. Is there any other possible way to do it? Thanks a lot in advance for your help.
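For illustration, one SPL command that does substitute fields from each outer row into an inner search is `map` (note: it runs one subsearch per row and is slow on large result sets, and the subsearch output replaces the outer row); a sketch using the names above:

```
index="Index_Source" sourcetype="Sourcetype_B" SUB
| eval key=substr(Ticket, 1, 5)
| dedup key
| map maxsearches=100 search="search index=Index_Source sourcetype=Sourcetype_A LinkedTicket=*$key$* | head 1 | table Ticket"
```

Here `$key$` is expanded per row, so the inner search finds the Sourcetype_A ticket whose LinkedTicket list contains the 5-character prefix. To keep the originating B ticket in the output, it would have to be echoed back through a token inside the mapped search.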

What is the endpoint for Splunk to export user sessions from Dynatrace?

I have attached a snapshot of what I did for Elasticsearch; I want to do the same with Splunk Enterprise to export user sessions from Dynatrace. Please suggest. ![alt text][1] [1]: /storage/temp/252065-integration.jpg
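For reference, the usual endpoint for pushing events into Splunk from an external system is the HTTP Event Collector (HEC), which listens on port 8088 by default; a minimal sketch, where the hostname, token, and sourcetype are placeholders:

```
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": {"userSession": "..."}, "sourcetype": "dynatrace:usersession"}'
```

HEC has to be enabled and a token created under Settings > Data Inputs > HTTP Event Collector first.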

Search for average data indexed over 30, 60, 90 days by index

Splunkers, I'm looking for a search string that will allow me to use the time picker to see how much data has been indexed over 30, 60, 90, etc. days, by index. I tried a few searches but had no luck. Any help would be greatly appreciated. Thanks
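For illustration, daily indexed volume per index is available in the internal license usage log and respects the time picker; a sketch:

```
index=_internal source=*license_usage.log* type="Usage"
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) by idx
```

Here `b` is bytes indexed and `idx` is the index name; wrapping the result in `stats avg(...)` would give the average per index over the picked range.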

Want to combine all the sourcetypes in a single search result

I have almost 19 different indexes, which are already mentioned in my inputs.conf file. But today I learned that the sourcetypes are not the same for the log files being indexed daily in real time. I had always run my search against a single sourcetype and created an email alert notification with it. Because different sourcetypes exist in my log files, a lot of errors are not appearing in my search results, and I missed those errors. Can anyone help me with this problem: how can I combine all sourcetypes in a single search, extract my important fields that are present in all sourcetypes, and build a complete search result? Please mention a link as well if you have one.
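For illustration, multiple sourcetypes can be combined with OR clauses (or the more compact IN operator on recent Splunk versions), with shared fields extracted once across all of them; the index, sourcetype, and field names here are placeholders:

```
index=my_index sourcetype IN (app_log_a, app_log_b, app_log_c) ("ERROR" OR "FATAL")
| rex field=_raw "(?<error_msg>ERROR.*)"
| stats count by sourcetype, error_msg
```

Omitting the sourcetype clause entirely (`index=my_index "ERROR"`) also searches every sourcetype in the index, at the cost of scanning more events.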

Dashboard form to create a new event type

Hello all, I have a dashboard that contains a panel with 'Statistics Table' visualization of search results. I use that type of visualization to have a list of 10 single-line records per page. I don't like the 'Events' view due to its size, my events contain large fields so it would result in huge rows which is not very convenient for users to view. I also have a couple of panels with the selected event details. What I want is to have an option to create an event type based on some fields from my search results right in the dashboard or in a separate window opened from the dashboard. I know I can add a panel with the 'Events' view which will have a button with workflow actions under the event row but it will not look suitable there and besides, I think I don't have control over the displayed fields. If only I could have a button which would collect data from input components and create a new event type, or at least a drilldown action for any visual object which would result into opening an event type builder window, then it would be great. Does anyone have any suggestions on this? Thank you.

Why is my scheduled search so much quicker than my adhoc search?

Hi, I have a number of scheduled searches which run significantly faster than the same search run from the search bar. I have no idea why this would happen — are there some settings that might cause this? The scheduled search takes about 30-40 seconds (it runs every 30 minutes throughout the day). The same ad-hoc search runs for minutes. Search:

index=raytheon_proxy dstBytes>0
| eval totbytes=bytes/1024/1024/1024
| chart sum(totbytes)

PowerShell Logging- Blacklist everything except Event Code 4104 & Level: Warning

We are attempting to ingest server PowerShell logging into Splunk. We found that ingesting all the data was noisy and want to reduce the volume to what we really care about. Our goal is to ingest only Event Code 4104 with the level Warning. Is there a way to blacklist everything and then whitelist only Event Code 4104 with level Warning? We are ingesting via:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
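For reference, WinEventLog inputs support regex-based whitelists in the `key=%regex%` format, and a whitelist implicitly drops every event it does not match, so no separate blacklist is needed; a sketch, where using the `Type` key for the Warning level is an assumption to verify against the actual events:

```
# inputs.conf on the forwarder
[WinEventLog://Microsoft-Windows-PowerShell/Operational]
whitelist = EventCode=%^4104$% Type=%^Warning$%
```

When multiple key=regex pairs appear in one whitelist line, an event must match all of them to be ingested.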

Splunk on local machine fails to install apps from file

I'm trying to install [Splunk Security Essentials for Fraud Detection](https://splunkbase.splunk.com/app/3693/ "SSE for Fraud Detection") on the local machine that I use for practicing with Splunk. I can't find the app in the Browse More Apps section, so I downloaded the .tgz file, unzipped it to get the .tar file, and tried installing it both ways. In the past, app installs would throw an error, but the app would still be installed. This time I'm getting either ERR_CONNECTION_RESET or ERR_CONNECTION_ABORTED, depending on whether I use the .tgz or the .tar respectively. Is there an easier way to do this, or some other app I need to install prior to the SSE for Fraud Detection app? I already have SSE installed. Thanks!
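For what it's worth, apps can also be installed from a file via the CLI, which sidesteps the browser upload entirely; a sketch (the path and credentials are placeholders):

```
$SPLUNK_HOME/bin/splunk install app /path/to/app.tgz -update 1 -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart
```

The original .tgz works as-is here; there is no need to unpack it first.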

Installing the Tripwire Enterprise Add-on

Can anyone give fairly detailed instructions on how to install the Tripwire Enterprise add-on? Our Splunk configuration is 5 servers: a search head, 2 indexers, a heavy forwarder, and a deployment server. We have a single instance of Tripwire Enterprise and a specific user, currently with admin privileges to the console until I can get this working.

I installed the add-on on my search head as directed by the TE installation instructions. I went through the setup screen, although I did not choose to use the API — is it necessary to do that? It didn't seem like it was during setup. I copied the TA folder to my heavy forwarder and created the input locations as designated. I copied the SA folder to my indexers, and also copied the two indexes from the app into the indexes.conf on my deployment server, to be distributed to all my Splunk boxes so they all know about the TE indexes.

I can run a tcpdump on my heavy forwarder and see logs coming from my TE console server, although not on port 514 as I would expect. I cannot see anything for my TE server going from my heavy forwarder to my indexers, nor do I see anything when searching the te index on my search head.

I'm fairly new to Splunk and just starting to get a handle on how to configure things. This is my first attempt at configuring an app that wasn't configured by PS, so I'm sure I have something set up incorrectly, but I'm hoping someone can give a little more detail on how this needs to be configured, as the TE installation document seems to be lacking a bit in detail. Thanks.

Scheduled reports: jobs are running fine, but the reports aren't refreshed with the results.

Hi, I'm having a bit of a struggle with a few of my scheduled reports. The reports aren't being updated, even though the jobs are finishing and producing results.

Example scenario: my reports are scheduled to run every *n*th hour with the cron schedule 0 \*/*n* \* \* \*. All the reports start their respective jobs just fine and at their scheduled time. They finish correctly and without errors. I can even see the results of each job if I click on the search job in question. Everything is fine and dandy so far. Problem is, the jobs do not update the report! Every time I click on the report in the app, it shows me old results most of the time. Sometimes the reports are updated correctly, but most of the time I click on them to have 1-4 days old results blaring at my face with the message "The following results were generated X days ago." I then open the recent scheduled runs to see what's up and am presented with the most recent results. What gives, man? Is this the first real bug I've ever encountered with Splunk? Am I missing something obvious?

What I've done, to no avail:
- Cloned the reports to see if they run correctly as new reports
- Searched for internal errors (absolutely none are found)
- Searched for skipped searches in the logs; there are only succeeded searches there
- Increased the base searches allowed (even though there are no errors in the internal logs suggesting this may help)
- Yelled at my summer intern
- Rescheduled the reports to different times
- Allowed skew
- Added, removed, and tinkered with schedule windows
- Changed the owner of the reports to different users with different roles

What I'm planning to do:
- Swing a dead chicken over my head three times at midnight during a blood moon to summon the god of fire and destruction.
Some basic troubleshooting info: Splunk 7.1.1 (recently upgraded from 7.0.2), searches run as an administrator, distributed environment at 1 TB per day, no SHC. It's a bummer really, since this is vital to certain areas of the business. Anyone have any ideas?
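For anyone debugging the same symptom, the scheduler's own record of each run lives in the internal index and shows status, runtime, and result counts per dispatch; a sketch (the report name is a placeholder):

```
index=_internal sourcetype=scheduler savedsearch_name="My Report"
| table _time, status, run_time, result_count, sid
| sort - _time
```

Comparing the `sid` of the most recent successful run against the job the report page actually loads can show whether the UI is pinned to a stale artifact.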

Splunk Add-on for Box and multiple Box tenants

We have a customer that has two Box tenants for legal separation but would like to use a single Splunk instance for event tracking. A previous question from 2016 hinted that this might be a future addition. Has it been added, and if not, is there a way to have two Box tenants feed one Splunk instance? What would the Box add-on deployment look like? Thanks in advance.


