Channel Description: The latest questions for the topic "splunk-enterprise"
  • 11/11/19--02:34: double fields If Statements
  • Hello guys, I am trying to generate different fields using if statements. I would like to write a query that does roughly the following: if sender==x then eval field_a=time_a and field_b=time_b, else if sender==y then eval field_x=time_x and field_y=time_y. The general scenario: I want to calculate the duration of log file processing. Log files are sent from server_a to server_b, where they are processed and then sent back to server_a. I want to write a query that calculates how long a file takes from server_a to server_b, how long from server_b back to server_a, and the total duration; that is, server_a -> server_b -> server_a.
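    A minimal SPL sketch of this pattern, assuming hypothetical names (index=logs, a sender field, a file_id to correlate the hops) and using each event's _time as the hop timestamp; it also assumes exactly one send and one return event from server_a per file_id:

        index=logs (sender="server_a" OR sender="server_b")
        | eval time_a=if(sender=="server_a", _time, null())
        | eval time_b=if(sender=="server_b", _time, null())
        | stats min(time_a) as sent_a, min(time_b) as arrived_b, max(time_a) as returned_a by file_id
        | eval a_to_b = arrived_b - sent_a
        | eval b_to_a = returned_a - arrived_b
        | eval total = returned_a - sent_a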


    Hi, I have user names in the field ContextUsername in index=otcs sourcetype=OtcsSummarytimings. To analyze how users are working with the system, I need the following two counts: 1. light users = those who access the system only **once per week, i.e. only 52 times per year**; 2. normal users = those who access the system **more than once a week, i.e. more than 52 times per year**. I know I can use timechart with span=1w and dc(ContextUsername), but I don't know how to realize the "once a week" / "more than 52 times a year" part. Any help would be much appreciated. Thanks!
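    A sketch of one approach, bucketing per-user access counts by week and then classifying on the weekly average (this assumes each event represents one access; the threshold is illustrative):

        index=otcs sourcetype=OtcsSummarytimings
        | bin _time span=1w
        | stats count as accesses by ContextUsername, _time
        | stats avg(accesses) as avg_weekly by ContextUsername
        | eval user_type=if(avg_weekly<=1, "light", "normal")
        | stats count as users by user_type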


    Hello, I am trying to import SEP Cloud events into Splunk. I am reading the documentation but I can't install this on Windows: the script (wrapped.sh) can't run on a Windows system. Here is the manual that Symantec provides, https://support.symantec.com/us/en/article.HOWTO128173.html, but it doesn't explain how to configure it on a Windows system. Can anyone help me?


    I'm hoping someone can help. I would like to create a dashboard for one of our hosts similar to the Splunk App for Infrastructure overview page (as in the screenshot below). We are indexing as metrics (index=em_metrics) and would like to create the panels shown below. (screenshot: /storage/temp/275124-capture.png) Thanks
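    A minimal sketch of a metrics panel query, assuming a hypothetical metric name cpu.system.pct (run | mcatalog values(metric_name) WHERE index=em_metrics to list what your environment actually collects):

        | mstats avg(_value) as avg_cpu WHERE index=em_metrics AND metric_name="cpu.system.pct" AND host="myhost" span=1m

    Put one such query behind each panel, swapping in the relevant metric_name (memory, disk, network, and so on).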


    Good morning. We are working with ITSI and need to be able to view the dashboards on mobile devices. For this, the Splunk Cloud Gateway app has been installed, but apparently this is not possible with that app. Does anyone have suggestions, or know how the dashboards could be viewed on mobile devices? Regards


    Hello, I am having a hard time understanding how to deploy an app that needs a virtual environment in a distributed environment. Currently I have installed the app on the heavy forwarder manually, but the HF is a client of the deployment server (DS), so this will not work: I think that if the DS pushes its configuration again, the manually installed app will disappear from the HF. So how can I place the app on the DS and then deploy it to the HF, when the app requires a virtual environment that is currently installed in the bin folder of the app? Thank you in advance
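    For reference, a minimal serverclass.conf sketch on the deployment server, with hypothetical names (the app, including the virtual environment in its bin folder, would be placed under $SPLUNK_HOME/etc/deployment-apps/my_venv_app and pushed as a unit):

        [serverClass:heavy_forwarders]
        whitelist.0 = my-hf-host*

        [serverClass:heavy_forwarders:app:my_venv_app]
        restartSplunkd = true
        stateOnClient = enabled

    Whether the virtual environment survives being copied depends on whether it embeds absolute paths; a venv built with absolute interpreter paths may need to be recreated on the client.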


    Recently upgraded from Splunk 7.1.x to Splunk 8.0. I had installed the Splunk Platform Upgrade Readiness App version 1 before this upgrade, and it was loading; however, for obvious reasons it was skipping all the tests, as the Splunk version was 7.1. Hence the upgrade to the latest available version, Splunk 8. Now, post upgrade, the Splunk Platform Upgrade Readiness App is not loading the test-run panel, so I am unable to run tests and check the instance's compatibility with the upcoming Python 3 switch. I then upgraded to the new version (version 2) of the Upgrade Readiness app, but that didn't help either. Any pointers on how I should proceed to get this app loading?


    Hi all, I am trying to create a drilldown for my timechart; the idea is to drill down to the events that happened 30 minutes before and after the clicked time (click.value in my case). I referred to this answer: https://answers.splunk.com/answers/215176/subtracting-30-minutes-from-passed-drilldown-param.html That helps me pass the earliest time, but when I use the same concept for the latest time I am not getting the desired result. Below is my current query:

        index="tutorial" categoryId=strategy earliest=[|gentimes start=-1|eval new = relative_time($click.value$,"-1800")| return $$new] latest=[|gentimes start=-1|eval latest1 =($click.value$ + 1800)| return $$latest1] | top clientip

    After passing through the drilldown pipeline, the query changes to:

        index="tutorial" categoryId=strategy earliest=[|gentimes start=-1|eval new = relative_time(1572728400.000,"-1800")| return $new] latest=[|gentimes start=-1|eval latest1 =(1572728400.000 1800)| return $latest1] | top clientip

    If you look closely, the + sign just vanished. I tried many ways to find a workaround but wasn't able to. Please take a look and try to run the query yourself. @Raghav2384 @RVDowning @somesoni2 @woodcock @gcusello @mayurr98 please help, guys!
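    One hedged workaround: the drilldown URL-encodes the search, and a literal + in a URL query decodes to a space, which is why it disappears. Avoiding the + character entirely, for example by subtracting a negative offset, sidesteps the problem (a sketch, not tested against your dashboard):

        index="tutorial" categoryId=strategy
            earliest=[| gentimes start=-1 | eval new = $click.value$ - 1800 | return $$new]
            latest=[| gentimes start=-1 | eval latest1 = $click.value$ - (-1800) | return $$latest1]
        | top clientip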


    Hello wonderful people of the internet, I'm still quite new when it comes to using splunk, so could use a bit of advice with this one. I have 2 CSV files, both containing a list of IP addresses. One of these is called IOC1.csv, and is a file of known malicious addresses. The second CSV, called ignore.csv, contains all of the IP addresses I wish to exclude from the results (Basically, stuff we want to discount/tune out). I'd like a search which could check all of the FW logs for any hits which have an IP from IOC1.csv located in there, but discount the event if an IP from ignore.csv is also present. Could somebody advise on how this could be done? Thank you all so much.
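    A minimal sketch of the lookup-based match-and-exclude pattern, assuming the firewall events carry a src_ip field and both CSVs have a column named ip (adjust the field and column names to your data):

        index=firewall
        | lookup IOC1.csv ip as src_ip OUTPUT ip as ioc_hit
        | lookup ignore.csv ip as src_ip OUTPUT ip as ignore_hit
        | where isnotnull(ioc_hit) AND isnull(ignore_hit)

    If the malicious address can appear as either source or destination, repeat the lookups against dest_ip and OR the conditions together.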


    Hello, I have the following little CSV file:

        time,interface,utilization
        2019-11-03,int_a,100
        2019-11-04,int_b,200

    As you can see, it contains a header and two rows of data. I want to perform index-time extraction of the fields, and I also want to use the timestamp from the time column. This is my props.conf configuration:

        DATETIME_CONFIG =
        INDEXED_EXTRACTIONS = csv
        LINE_BREAKER = ([\r\n]+)
        NO_BINARY_CHECK = true
        TIMESTAMP_FIELDS = time
        TIME_FORMAT = %Y-%m-%d
        category = Custom
        pulldown_type = 1
        HEADER_FIELD_LINE_NUMBER = 1
        disabled = false
        FIELD_HEADER_REGEX =
        PREAMBLE_REGEX =

    No matter what I do, Splunk always indexes the header as well, and I don't want that. I have tried the following settings:

    1. PREAMBLE_REGEX - this ignores the header, but then index-time field extractions are not performed, probably because the header is ignored (a chicken-and-egg situation). I can work around this by listing the comma-separated field names manually, but I want schema-on-write support, which Splunk doesn't seem to provide here.
    2. HEADER_FIELD_LINE_NUMBER = 1 - tried this setting; it made no difference.

    Does anyone know if it is possible to index CSV file fields without indexing the header and without defining column names manually in props.conf? Thank you, Kiril


    Hi, I have implemented ignoreOlderThan for 7 days and I want to verify whether it is working. Is there any query, or any place in the DMC, where I can validate this?
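    Two hedged ways to check (the exact log wording varies by version, so treat these as starting points rather than definitive signatures). First, search splunkd's internal logs for mentions of the setting:

        index=_internal sourcetype=splunkd ignoreOlderThan

    Second, the file-input status endpoint lists each watched file along with the reason it is or is not being read:

        | rest /services/admin/inputstatus/TailingProcessor:FileStatus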


    Hello, I am having difficulty getting the strptime function to properly convert my date string into a usable and accurate timestamp. Here is an example of the string and the strptime call I have tried. Can you help with the proper conversion, please?

        string = 05-NOV-19 10.53.49.287000 AM AMERICA/CHICAGO

    This did not work:

        | eval first_res_time = strptime(previous_resolution_time, "%d/%B/%y %H/%M/%S/%N")

    Thank you, -MD
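    For reference, a sketch of a format string that matches the sample: the separators are dashes and dots rather than slashes, the clock is 12-hour with an AM/PM marker, and there are six subsecond digits. The trailing AMERICA/CHICAGO zone name is not matched by the usual strptime specifiers, so one hedged approach is to strip it first:

        | eval ts = replace(previous_resolution_time, "\s+\S+/\S+$", "")
        | eval first_res_time = strptime(ts, "%d-%b-%y %I.%M.%S.%6N %p")

    Note that stripping the zone means the result is interpreted in your own timezone rather than America/Chicago.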


    I am wondering if anyone has experience or suggestions for using Splunk as a tool for capacity and performance management (in addition to using it as an IT ops and security tool). Ultimately I would like to be able to report capacity and performance stats for different domains such as VMs, network, telephony, storage, etc. The way I see it right now, I'll have three types of data sources:

    1. Systems that Splunk has apps for and logs to monitor (vSphere, Cisco, etc.) - this one should be straightforward.
    2. Systems that can be scripted to produce a daily, weekly, or monthly report (storage systems, etc.) - I think I should be able to monitor the report directory and index data sources such as CSV files?
    3. Systems that don't log or have the ability to report capacity/performance stats - someone will collect a couple of KPIs once a month. What is the best place to store these "manual" data inputs? A CSV file that gets ingested into Splunk? (See the inputs.conf sketch below.)
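    For cases 2 and 3, a minimal inputs.conf sketch for monitoring a report drop directory; the path, index, and sourcetype are placeholders:

        [monitor:///opt/reports/capacity]
        index = capacity
        sourcetype = csv
        whitelist = \.csv$
        disabled = 0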


    Hi, I am using this search to find out which Bluecoat filter categories cause the most bandwidth utilization:

        index=bluecoat mysearch
        | fields sc_filter_category sc_bytes
        | eventstats sum(sc_bytes) as allbytes
        | stats sum(sc_bytes) as "totalbytes" by sc_filter_category, allbytes
        | eval "Bandwidth(MB)" = round(totalbytes/(1024*1024),2)
        | eval Percentage = (totalbytes/allbytes)*100
        | sort 10 -"Bandwidth(MB)"

    This seems to work fine. My result is, for example, a table like this:

        sc_filter_category  allbytes  Bandwidth(MB)  Percentage
        category1           100       20             20
        category2           100       11             11
        category3           100       10             10
        category4           100       5              5

    What I would like to do next is a second search that lists the top two URLs causing the most bandwidth for each category. The output would look like this:

        Category    Top-URLs
        category1   www.abc.com, www.def.com, www.ghi.com
        category2   www.abc1.com, www.def1.com, www.ghi1.com

    I am not able to find out how to search dynamically using the result of the first search... any help appreciated.
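    A sketch of one way to get the top URLs per category in a single search, assuming the Bluecoat URL/host field is called cs_host (swap in whatever your sourcetype actually uses):

        index=bluecoat mysearch
        | stats sum(sc_bytes) as bytes by sc_filter_category, cs_host
        | sort 0 sc_filter_category, -bytes
        | streamstats count as rank by sc_filter_category
        | where rank <= 2
        | stats list(cs_host) as "Top-URLs" by sc_filter_category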


    Hi experts, I am having an issue indexing a log file that gets rotated every hour. The log file error.log is rotated at the top of each hour and a new file is created with the same name (error.log); the old file is renamed and zipped to error.log._timestamp.gz. Sometimes Splunk does not index the file for an hour and resumes indexing once the file is rotated again, so a complete hour of logs gets skipped. Before Splunk resumes indexing, the following error message is logged:

        WatchedFile - Checksum for seekptr didn't match, will re-read entire file

    Every file has different content because each event has a timestamp, so the first 256 characters should not match the fishbucket.

        [monitor:///data/logs/]
        _TCP_ROUTING = indexers123
        index = indexname
        sourcetype = sourcetypename
        initCrcLength = 1024
        whitelist = \.log$
        disabled = 0

    Splunk version is 7.0.3.


    What are the minimum requirements for a Splunk VM for the Splunk Fundamentals 1 course? I can see the reference hardware at https://docs.splunk.com/Documentation/Splunk/6.5.0/Capacity/Referencehardware but that's a bit much just for a VM for the course.

  • 11/11/19--10:20: Storing spl in lookup
  • Is it possible to store a search string in a lookup column, retrieve the content, and run it as a search? For example:

        index=some_index
        | lookup test.csv lookup_key_field as event_code OUTPUT spl_field as search_string
        | ... some command to actually run the search_string ...
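    One hedged way to execute a search string stored in a lookup is the map command, which runs its search argument once per input row, substituting row fields into $...$ tokens (a sketch; note map's default maxsearches limit of 10 and its token-quoting quirks):

        | inputlookup test.csv
        | fields spl_field
        | rename spl_field as search_string
        | map search="$search_string$"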

  • 11/11/19--10:29: Frozen Index Management
  • Is there an app/script/mechanism out there that would let you list your available frozen indexes by human-readable date/timestamps, move them back to a thawed state, and rebuild them? Thanks!


    With a configuration like the one below, it appears the Splunk forwarder is essentially load balancing telemetry logs, i.e., randomly sending telemetry entries to either Splunk server, but not both (the servers are independent). Is it possible to have a configuration like the one below send telemetry to both servers? If not, what would be the best way to end up with a full set of telemetry on both servers?

        splunk-forwarder
           splunk-server 1.1.1.1 port 9997
           splunk-server 2.2.2.2 port 9997
           index inventory
           index sflow
           index topology
           index interface-counters
           index syslog
           vrf MGMT
           http-commands protocol socket
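    For comparison, on a standard Splunk Universal Forwarder this behavior is controlled in outputs.conf: servers listed inside one target group are load balanced, while separate target groups each receive a full copy of the data. A sketch, assuming a standard forwarder rather than the device-native config above:

        [tcpout]
        defaultGroup = indexer_a, indexer_b

        [tcpout:indexer_a]
        server = 1.1.1.1:9997

        [tcpout:indexer_b]
        server = 2.2.2.2:9997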

  • 11/11/19--11:59: Black listed software alerts
  • Hi, I work in an enterprise environment. I'm trying to figure out a way to create a list of blacklisted software and have Splunk send an alert to the scorecard whenever any blacklisted software is installed on a user's host. Any idea how I can get this done?
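    A sketch of one common pattern: keep the blacklist in a lookup file (say blacklisted_software.csv with a software_name column) and match installation events against it. The index, sourcetype, and field names here are placeholders that depend on how your endpoint/inventory data is ingested:

        index=endpoint sourcetype=software_inventory
        | lookup blacklisted_software.csv software_name as Product OUTPUT software_name as blacklisted
        | where isnotnull(blacklisted)
        | stats latest(_time) as last_seen by host, Product

    Saved as a scheduled alert, this would fire whenever a blacklisted title shows up on any host.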