Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Sort by time in a chart with time header names

Hi, I have a search table that aims to show the inflow of tickets for a time range. Here is what it looks like:

Hour | Apr-18 | Apr-19 | Aug-18 | Dec-18
0:00 |      2 |      3 |      5 |      3
1:00 |      2 |     13 |      2 |      1

Here is the search for this table:

index=_internal
| bin _time span=1h
| eval hour = strftime(_time, "%H:%M")
| eval monthYear = strftime(_time, "%b-%y")
| stats count(ticketNumber) as inflow values(hour) as hour values(monthYear) as monthYear by _time
| chart limit=0 sum(inflow) as inflow over hour by monthYear

I want to sort my columns by date (Apr-18, Aug-18, Dec-18, Apr-19). I cannot use "fields ..." because the user is free to choose the time range that the table will display. Any help would be appreciated. Thank you.
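One common workaround, sketched here and untested against your data: build the split-by value from a sortable %Y-%m key, so chart orders the columns chronologically no matter which time range the user picks:

```
index=_internal
| bin _time span=1h
| eval hour = strftime(_time, "%H:%M")
| eval monthYear = strftime(_time, "%Y-%m")
| chart limit=0 count as inflow over hour by monthYear
```

The columns then come out as 2018-04, 2018-08, 2018-12, 2019-04; if you need the Apr-18 display form, you can rename the columns afterwards, for example with foreach or rename.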

How to merge multiple lines into a single event?

Hi guys, I am trying to merge these lines into a single event. So far I have tried:

[cycledata]
EVENT_BREAKER = (CycleDataTask finished)
SHOULD_LINEMERGE = false

and MUST_BREAK_AFTER with the same regex. Here is an example:

2019-05-09 13:29:02.3975 INFO CycleData - CycleDataTask started
________________________________________________________
2019-05-09 13:29:06.3746 INFO CycleData - Pool has NEW TICKETS:-> =
2019-05-09 13:29:06.3746 INFO CycleData - Pool has NEW TICKETS: ->
2019-05-09 13:29:06.3746 INFO CycleData - Pool has NEW TICKETS: ->
2019-05-09 13:29:06.3746 INFO CycleData - Pool has NEW TICKETS: ->
2019-05-09 13:29:06.8166 INFO CycleData - Pool has been updated succesfully.
2019-05-09 13:29:06.8166 INFO CycleData - Pool has been updated succesfully.
2019-05-09 13:29:06.8166 INFO CycleData - Pool has been updated succesfully.
2019-05-09 13:29:06.8166 INFO CycleData - Pool has been updated succesfully.
2019-05-09 13:29:06.8166 INFO CycleData - CycleDataTask finished
_______________________________________________________

Thank you
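For comparison, a props.conf sketch (assuming the sourcetype is cycledata and every cycle begins with a "CycleDataTask started" line). Note that EVENT_BREAKER only affects how a universal forwarder distributes chunks across indexers; the actual event boundaries come from LINE_BREAKER on the parsing tier:

```
[cycledata]
SHOULD_LINEMERGE = false
# break only where a new "CycleDataTask started" line begins
LINE_BREAKER = ([\r\n_]+)(?=\d{4}-\d{2}-\d{2} [0-9:.]+ INFO CycleData - CycleDataTask started)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
```

The lookahead keeps the timestamp with the new event, and the capture group also swallows the underscore separator lines between cycles.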

List process count, with 0 when not found

I'm trying to do a search for some processes on a server, but for the processes that are not running I would like the result to come back as 0, because when a process is not running, Splunk doesn't return any information for it. Example:

index=os sourcetype=ps host IN (wmwl5000, wmwl5001, wmwl5002)
| search process="launch.sh" OR process="WebLogic.sh"
| stats count(process) by host

What I wish to see is something like this:

wmwl5000 launch.sh   1
wmwl5000 weblogic.sh 0
wmwl5001 launch.sh   1
wmwl5001 weblogic.sh 1

Cheers
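One way to get explicit zeros, sketched and untested: chart creates a cell for every host/process pair, fillnull fills the missing cells with 0, and untable turns the matrix back into rows:

```
index=os sourcetype=ps host IN (wmwl5000, wmwl5001, wmwl5002)
    process IN ("launch.sh", "WebLogic.sh")
| chart count over host by process
| fillnull value=0
| untable host process count
```

Caveat: a host that produced no events at all still won't appear; to guarantee every host shows up you would have to append the host list from a lookup.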

How can I create a planned alert?

Hello, how can I create an alert scheduled between 8am and 5am (by earliest and latest)?
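A sketch of one way to read this: the alert's cron schedule controls when it runs, while earliest/latest control the data window it searches. For example, in savedsearches.conf (stanza name hypothetical), to run every hour on the hour from 08:00 through 17:00 over the previous hour of data (adjust the hour range to your intended window):

```
[my_planned_alert]
cron_schedule = 0 8-17 * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
```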

Help with an inputlookup issue

Hello, I use the search below, which runs perfectly:

(index="X" sourcetype=XmlWinEventLog source="XmlWinEventLog:System" EventCode=* (Level=1 OR Level=2 OR Level=3)) OR (index=master-data-lookups sourcetype="itop:view_splunk_assets")
| eval host=coalesce(HOSTNAME,host)
| eval time=if(EventCode="*",_time,null())
| stats values(sourcetype) as sts max(time) as _time values(SITE) as SITE values(ROOM) as ROOM values(TOWN) as TOWN values(CLIENT_USER) as CLIENT_USER values(COUNTRY) as COUNTRY values(OS) as OS by host
| where NOT (mvcount(sts)=1 AND sts="X:view_splunk_assets")
| table _time host COUNTRY TOWN SITE ROOM CLIENT_USER OS
| sort -_time - COUNTRY

But I need to run the search against a list of hosts in a CSV file, so I am doing this, but it doesn't work:

[|inputlookup host.csv | table host] (index="ai-wkst-wineventlog-fr" sourcetype=XmlWinEventLog source="XmlWinEventLog:System" EventCode=* (Level=1 OR Level=2 OR Level=3)) OR (index=master-data-lookups sourcetype="itop:view_splunk_assets")
| eval host=coalesce(HOSTNAME,host)
| eval time=if(EventCode="*",_time,null())
| stats values(sourcetype) as sts max(time) as _time values(SITE) as SITE values(ROOM) as ROOM values(TOWN) as TOWN values(CLIENT_USER) as CLIENT_USER values(COUNTRY) as COUNTRY values(OS) as OS by host
| where NOT (mvcount(sts)=1 AND sts="itop:view_splunk_assets")
| table _time host COUNTRY TOWN SITE ROOM CLIENT_USER OS
| sort -_time - COUNTRY

What is strange is that when I specify a hostname that also exists in my CSV file, I do get results:

index=master-data-lookups sourcetype="x:view_splunk_assets" HOSTNAME=3020026296
| table _time HOSTNAME SITE ROOM TOWN CLIENT_USER COUNTRY OS

So I don't understand why there is a matching issue when I call the hostname from my CSV file. Thanks for your help!
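One likely cause, offered as an assumption to verify rather than a confirmed answer: the subsearch expands to host=<value> filters that are applied to the raw events, but in the asset events the hostname lives in HOSTNAME, not host, so those events never match the filter. Moving the filter to after the coalesce sidesteps this:

```
( ...your two index clauses, unchanged... )
| eval host=coalesce(HOSTNAME,host)
| search [| inputlookup host.csv | fields host]
| ...rest of the pipeline...
```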

I have a field TimeReceived showing a date and time; I want to use it as _time. How would this work in props.conf, or is there another way? Please suggest.

The field TimeReceived looks like: 2019-05-09T05:29:03.000Z

This is my props.conf:

[xyz]
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
LINE_BREAKER = ([\r\n]+)
CHARSET = UTF-8
KV_MODE = json
TRUNCATE = 999999
DATETIME_CONFIG =
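A sketch of the usual approach (the stanza name and exact JSON layout are assumptions): point timestamp extraction at the TimeReceived value with TIME_PREFIX/TIME_FORMAT instead of leaving DATETIME_CONFIG empty:

```
[xyz]
TIME_PREFIX = "TimeReceived"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 32
TZ = UTC
```

TZ = UTC assumes the trailing Z in the value always means UTC.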

Help on different eval arguments which give the same result

Hi, I use the search below to count the number of machines which are online, and it works. BUT when I count the machines which are offline (I replace

| stats dc(host) AS OnlineCount by Code | fields OnlineCount

with

| stats dc(host) AS OfflineCount by Code | fields OfflineCount

) I get the same list of machines as the online list!! Could you help me please??

[| inputlookup host.csv | table host] index="x" sourcetype="winhostmon" Type=Service Name=SplunkForwarder
| eval timenow=now()
| eval EventCreatedTime=_time
| eval DiffInSeconds=(timenow - EventCreatedTime)
| eval Status=if(DiffInSeconds<900, "Online", "Offline")
| convert timeformat="%d-%b-%Y %H:%M:%S %p %Z" ctime(EventCreatedTime)
| table host EventCreatedTime DiffInMinutes Status
| sort +EventCreatedTime
| dedup host
| eval Code=if(like(Status,"Online"), "Online", "Offline")
| stats dc(host) AS OnlineCount by Code
| fields OnlineCount
| appendpipe [ stats count | where count=0]
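The likely explanation, sketched as an assumption: stats dc(host) ... by Code emits one row per Code value regardless of what the column is named, so renaming OnlineCount to OfflineCount changes nothing but the header. To count a single status, filter first:

```
...
| eval Code=if(Status="Online", "Online", "Offline")
| where Code="Offline"
| stats dc(host) AS OfflineCount
```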

Invalid template path Errors after upgrading to 7.2.6

Hi, I've upgraded to Splunk Enterprise 7.2.6 from 7.1. The install, on Windows via the GUI, completed successfully, and the migration logs show no errors. However, when I log in to Splunk Web and go to Deployment Monitor, I get the following errors:

ServerSideInclude Module Error!
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\upgrade_popup.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_idle_indexer.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_backed_up_indexer.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_missing_forwarder.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_quiet_forwarder.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_low_data_forwarder.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_high_data_forwarder.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_missing_sourcetype.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_low_data_sourcetype.html
Invalid template path. C:\APP\splunk_deployment_monitor\appserver\static\text\description_high_data_sourcetype.html

On my system these files are located in the directory C:\Program Files\Splunk\etc\apps\splunk_deployment_monitor\appserver\static\text, so clearly there is a config problem. I'm just not sure how to fix it. Any help would be greatly appreciated... Thanks, John

Monitor applications installed on VMware VMs

Hello all, I have a VMware environment set up and have created 10 VMs using the vSphere client. I need help monitoring the applications installed on those VMs, for example any DB application installed on a VM. Note: I can't install a Splunk UF on the VMs, but I can install a UF on vSphere. So is there any way, from vSphere itself, to monitor the applications installed on the VMs? What to monitor: whether the application is running or not. Thanks in advance, Manish Kumar

Rapid7 search error

Hello, I'm trying to use the search below but I only get 0 events. What am I doing wrong?

index=rapid7 sourcetype=*
| eval site=coalesce(site, "")
| eval asset=coalesce(asset_id, "")
| search site=* status=Approved reason="Acceptable risk"
| search [search index=rapid7 sourcetype="rapid7:nexpose:asset"
    | fields *
    | eval tag=coalesce(split(nexpose_tags,";"), "")
    | search tag="*" * vendor_product="*" site_id="*" pci_status="*" (hostname=* OR ip=* OR mac=*)
    | fields *
    | table asset_id hostname ip mac os site_name nexpose_tags os]
| dedup site asset vulnerability_id
| sort "Status" DESC
| table status vulnerability_id title asset_id severity_score severity reason additional_comments submitted_by review_date review_comment expiration_date port key

join is skipping rows

The CSV file has 70k entries, and when I join it with an index which has 30k rows, it fails to join random records even when the data is available. When I specify the record before and after the join, it picks up the value. Any idea if this is due to some kind of limitation?
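For what it's worth, join's subsearch is subject to result-count and runtime limits (by default on the order of 50,000 rows and about 60 seconds before it is finalized), so large joins can silently drop records. A sketch of the usual workaround, with hypothetical field and file names: use lookup, which has no such row cap on the main search:

```
index=my_index
| lookup my_file.csv id OUTPUT status owner
```

Here id is the common key and status/owner are the CSV columns to pull in; all three names are placeholders for your own.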


Handling a big JSON response from a REST API when using REST API Modular Input

Hi all, our REST API response in JSON is quite big, about 6000 lines (more than 1000 items, about 200 KB). I need to use this endpoint to get data in. In the Data Input REST settings, I have specified json as the source type. The problem is that only a small portion of the JSON response is present as an event, and the fields are not extracted correctly. Therefore I cannot use the JSON data, spath is not working, etc. Is there a way to get around this, or do I need a smaller JSON response? Please help. Thanks.
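One thing worth checking, offered as an assumption rather than a confirmed diagnosis: Splunk truncates events at 10,000 bytes by default, so a ~200 KB single-event JSON response would be cut off mid-document, which matches both symptoms. A props.conf sketch for the sourcetype (stanza name hypothetical):

```
[my_rest_json]
SHOULD_LINEMERGE = false
TRUNCATE = 500000
KV_MODE = json
```

spath also has its own size limit (extraction_cutoff in limits.conf, 5,000 characters by default) that may need raising for very large events.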

Splunk HTTP Event Collector

The Splunk HTTP Event Collector is not sending data to an index. I have the HTTP Event Collector configured on a heavy forwarder, and it sends data to the indexers, but I am not able to see the data on the indexers. Has anyone faced this issue?
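A troubleshooting sketch (assumes you can search _internal on the heavy forwarder): HEC failures such as a disabled token or a non-existent target index are usually logged by splunkd, so this often surfaces the reason:

```
index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by log_level
```

It is also worth confirming that the token's target index actually exists on the indexers, since events sent to a missing index are dropped.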

How do I filter data with props and transforms, and how can I only index a specific string?

I have a directory which is full of .html webpages. I'd like Splunk to index those HTML files, but only a specific string of text (if the file contains it). I got as far as having Splunk index the entire file if it contains the string, but now, how can I get Splunk to index only a portion of that file? I've done this in the past but can't seem to remember how it was done. I remember using SEDCMD to remove everything but the specific portion. What am I missing? I'm trying to parse out the "Total time" entries.

props.conf:

[html]
TRANSFORMS-set = setnull, keepBuildFiles
SEDCMD-keepThis = s/Total\stime\:\s.*?\
/g

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepBuildFiles]
REGEX = (Total\stime\:\s.*?\
)
DEST_KEY = queue
FORMAT = indexQueue

Sample data (answers.splunk.com is decoding the HTML; raw version: https://pastebin.com/ee3srkPM):

======================================================================<br/>
2019-05-07_02:58:29 --- Makefile@abcbuild2 (): src(for_release)<br/>
make[1]: Entering directory `/abc/builds/accrued/zzz/accu_ws/ZZZ_19.4.0_CM_CI/src'<br/>
2019-05-07_02:58:29 --- src/Makefile@abcbuild2 (src): idl(for_release)<br/>
make[2]: Entering directory `/abc/builds/accrued/zzz/accu_ws/ZZZ_19.4.0_CM_CI/src/idl'<br/>
2019-05-07_02:58:29 --- src/idl/Makefile@abcbuild2 (src/idl): idl(for_release)<br/>
Buildfile: /abc/builds/accrued/zzz/accu_ws/ZZZ_19.4.0_CM_CI/src/java/build.xml<br/><br/>
build_idl:<br/><br/>
find_modified_idl:<br/>
[exec] New/Updated IDL:<br/>
[echo] No Modified IDL detected... skipping code generation<br/><br/>
BUILD SUCCESSFUL<br/>
Total time: 2 seconds<br/>

Sparkline not working in 7.2.6

Hi, I am on version 7.1.6 and want to move to 7.2.6, but I have noticed that sparklines don't work in the new version. Or am I missing something? This is the SPL:

index=mlc_live
| table host _time
| chart sparkline count by host
| fields - count

When I change from Fast Mode to Verbose Mode it works, but you can't save a search that way. Any help would be great.
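A sketch of the more common invocation, in case the difference matters in 7.2.6: sparkline is normally called as a function inside stats or chart, with _time still available in the pipeline (no preceding table needed):

```
index=mlc_live
| stats sparkline(count) as trend, count by host
| fields - count
```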

How do I get Azure Sign-In data into Splunk?

I'm using the Splunk Add-On for Microsoft Cloud Services, and after properly configuring it, I am unable to see the Azure Sign-In Audit Data. Am I doing something wrong or how do I see that data?

How do I filter a search on hostA and hostB when they are identified as IP addresses?

I want to only pull stats for hostA and hostB when they are identified as IP addresses (and other specific names).

index=stuff* blah OR blahblah
| fillnull value=NULL hostA, hostB
| where match(hosA,"(\d{1,3}\.}{3}\d{1,3})")
| where match(hostB,"(\d{1,3}\.}{3}\d{1,3})")
| stats count by hostA, hostB

and

index=stuff* blah OR blahblah
| fillnull value=NULL hostA, hostB
| regex hostA="^(\d{1,3}\.}{3}\d{1,3}).*"
| regex hostB="^(\d{1,3}\.}{3}\d{1,3}).*"
| stats count by hostA, hostB

I have tried both and neither turns up results. Am I taking the right approach?
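Two slips worth checking in the searches above, sketched as a suggestion: match(hosA, ...) references a misspelled field name, and the repetition group looks inverted: (\d{1,3}\.}{3} is presumably meant to be (\d{1,3}\.){3}. A corrected sketch:

```
index=stuff* blah OR blahblah
| where match(hostA,"^\d{1,3}(\.\d{1,3}){3}$") AND match(hostB,"^\d{1,3}(\.\d{1,3}){3}$")
| stats count by hostA, hostB
```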

Version of Highcharts in the latest Splunk Enterprise

Hi, could you tell me which version of the Highcharts component is present in the latest version of Splunk, please? Currently we are using Splunk Enterprise 7.2.3, and it uses Highcharts 4.0.4, which cannot be loaded with AMD/Require.js. Regards, fjp2485

Microsoft Office 365 Reporting Add-on for Splunk - static start and end dates not updating with continuous Input Mode

I am continually getting a 400 Client Error (Bad Request) because the start and end dates are more than 7 days in the past. The app works when I Index Once, but as soon as I enable continuous monitoring and add a date within 7 days of today, I get the following error:

ExecProcessor - message from "python /opt/splunk/etc/apps/TA-MS_O365_Reporting/bin/ms_o365_message_trace.py" HTTP Request error: 400 Client Error: Bad Request for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2018-06-12T14:39:16Z'%20and%20EndDate%20eq%20datetime'2018-06-12T15:09:16Z'

I have the following settings for the app: Interval = 180, Query Window Size = 30, Delay Throttle = 32, and Start Date/Time = 2019-05-10T09:00:00. I checked the local directory as well to verify these settings are in the inputs.conf file. Why does the error show a date of 2018? Is there another place I should check for the start/end date? Thanks!