Channel: Questions in topic: "splunk-enterprise"
Viewing all 47296 articles

Converting from MB to GB not working

Hi, I have a query that doesn't seem to convert from MB to GB. What am I doing wrong? Can anyone help me?

index=* | eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2) | eval TotalGB=round(TotalMB/1024,2) | stats sum(sentbyte) AS TotalSent, sum(rcvdbyte) AS TotalRcvd by app | addtotals | dedup app | sort limit=30 - total
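A likely fix, sketched as a suggestion rather than a confirmed answer: the `eval` conversions run before `stats` has created `TotalSent` and `TotalRcvd`, so they operate on fields that don't exist yet. Reordering the pipeline so `stats` comes first should make the conversion work (field names are taken from the question; `addtotals` and `dedup` are dropped here as they appear unnecessary once `stats` groups by app):

```
index=*
| stats sum(sentbyte) AS TotalSent, sum(rcvdbyte) AS TotalRcvd by app
| eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2)
| eval TotalGB=round(TotalMB/1024,2)
| sort limit=30 - TotalGB
```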

How to send events to nullQueue?

Hi, how do I exclude internal-source-IP events (src_ip=10.0.0.0/8) for a sourcetype (web_logs) before indexing?
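A sketch of the usual approach, assuming the events pass through an indexer or heavy forwarder where parse-time transforms run. The stanza name is a placeholder, and the regex is an approximation of 10.0.0.0/8 matched against the raw event text (CIDR matching is not available at parse time), so it assumes `src_ip=` appears literally in the event.

props.conf:

```
[web_logs]
TRANSFORMS-drop_internal = drop_internal_src
```

transforms.conf:

```
[drop_internal_src]
REGEX = src_ip=10\.\d{1,3}\.\d{1,3}\.\d{1,3}
DEST_KEY = queue
FORMAT = nullQueue
```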

Can we send the XML dashboard (with multiple panels) link in an email scheduled every hour?

Hi all, we want to add the dashboard link (not the PDF) in the email, so that whenever the user clicks the link they can access the dashboard showing data for the last 1 hour from the time the email was generated. The idea is to fetch the email generation time as the latest time of the dashboard's base search, through the link present in the email message. For example:

Email message: link to dashboard: "//localhost:port/dashboard"

Dashboard base search:

<query>.....</query>
<earliest>-1h</earliest>
<latest>$tok$</latest>

This $tok$ in the dashboard should be set to the email generation time in a supported format. Is this possible? Please correct my approach and help me achieve this.
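One possible approach, sketched with placeholder names (dashboard `mydash`, token `time_tok`): Simple XML form tokens can be set through URL query parameters, so the scheduled email could link to the dashboard with the time range pinned to the email's generation time, for instance via the `$job.latestTime$` token if it is available in the scheduled-email message for your Splunk version.

```
<form>
  <fieldset>
    <input type="time" token="time_tok">
      <default>
        <earliest>-1h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main ...</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The email would then link to something like `https://host:8000/en-US/app/search/mydash?form.time_tok.earliest=EPOCH-3600&form.time_tok.latest=EPOCH`, with both epoch values computed from the generation time (passing a relative `-1h` earliest would be evaluated against the click time, not the pinned latest).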

Do we require an OS and Splunk restart if we hot-add memory/vCPUs?

Hi folks, we have planned to extend resources across the entire cluster deployment (SH, Idxr, IdxMaster...). These are running RHEL 6 on VMs. Do we really require a restart of the OS/splunkd if we hot-add the resources? Any advice is appreciated. Pramodh B, Splunker Jr.

Replace all newlines anywhere (beginning, middle, end) in a field

Hello all, I have a field with data that looks like this:

The process has failed. Please review blah: Dear Team Please open a new Incident and assign to Team blah Submitted from 1928389112828 blah Please review attached logs. Sincerely Support

There are also lots of newlines before the first line `The process has failed. Please review blah:`. I don't know why the site isn't formatting the spaces correctly. I want to remove all line breaks, like so:

The process has failed. Please review blah: Dear Team Please open a new Incident and assign to Team blah Submitted from 1928389112828 blah. Please review attached logs. Sincerely, Support.

I've tried sed to do it: `| rex mode=sed field=description "s/(\n+)//g"`, but the output still has extra spaces at the beginning. I've also tried `trim(description)`, but it gives me the same result. Any help would be appreciated. Thanks.
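A possible fix, offered as a sketch: `trim` only removes leading and trailing whitespace, and deleting the newlines outright can leave the surrounding spaces behind. Replacing every run of whitespace (including `\r`, `\n`, and tabs) with a single space, then trimming, should give a clean single-line value:

```
| rex mode=sed field=description "s/[\r\n\t ]+/ /g"
| eval description=trim(description)
```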

Do we require a splunkd restart if we hot-add memory/vCPUs?

Hi folks, we have planned to extend resources across the entire cluster deployment (SH, Idxr, IdxMaster...). These are running RHEL 6 on VMs. Do we really require a restart of splunkd after the OS picks up the hardware changes on the fly? Any advice is appreciated. Pramodh B, Splunker Jr.

How to get single row output with fields from multiple events from multiple log files

Hi team, my scenario is this: I have multiple request and response XMLs, which are the events in my index for one circuit ID. Whenever I make a request with the circuit ID from the UI, it creates a new transaction ID for that particular hit, which means the logs will have multiple request IDs for the same circuit ID over one day. What I need: when I search with the circuit ID, it should give me a table output showing all the different request IDs along with their specific response fields in a single row. My challenge is that I am trying to show the fields from the request and response XMLs from multiple source files in a single row, but it returns multiple rows. Please help if there is any way to get this done.
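A sketch of one common approach, with hypothetical index and field names since the question doesn't give them: aggregate all events for the circuit and collapse each field of interest into a single row with `stats values()` keyed by the circuit ID.

```
index=circuit_logs circuit_id="ABC123"
| stats values(request_id) AS request_ids,
        values(response_status) AS response_statuses
        by circuit_id
```

If duplicates or event order matter, `list()` can be used in place of `values()`, which deduplicates and sorts.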

Hard disk failure on one indexer in a cluster

Hi all, our environment consists of, amongst other things, a multisite (3) clustered environment. Each site has three indexers, making a total of nine. We also have a replication factor of 3. On each indexer the hot/warm and cold buckets are on separate filesystems. On one of the indexers, the filesystem containing the cold buckets suffered a hard disk failure which destroyed the entire FS. My question: when the disk/filesystem is repaired, will Splunk automatically rebuild the cold buckets from the replicas? If so, will it do this when I start Splunk, or are there maintenance commands I will need to issue? Many thanks, Mark.

How to group multiple lines into a single event, breaking only when a date appears at the beginning of a line

Hi, I have the following log format. How can I break this multiline log into events whenever a timestamp like "2020-01-23 03:50:49,063" appears at the start of a line? Note that the log needs to be indexed with Local Time.

//******************************************************************************************************
// Module : teste 6.15.0001.77
// Local Time : 23/01/2020 03:50:48.985 (Daylight Saving Time=Off)
// System Time (UTC) : 23/01/2020 06:50:48.985
//
// Domain Name : itau.corp.ihf
//
// 32/64 Bit : 64 Bit
//
// Module Name, File Version, Modification Date:
// ----------------------------------------------------------------------------------------------------
// teste.exe, 6.15.0001.77, 05/08/2019 19:58:36
//
//******************************************************************************************************
2020-01-23 03:50:49,063 | INFO | 4 | testeService.OnStart | | teste | testeService.OnStart: Log Client initialized successfully.
2020-01-23 03:50:49,094 | INFO | 4 | testeService.OnStart | | teste | testeService.OnStart: Trying to load teste modules...
2020-01-23 03:50:49,610 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Going to register WCF teste
2020-01-23 03:50:53,391 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Config file already defines ServiceModel configuration, for service teste. Trying to load updated configuration and combine (for Accessible mode only!)...
2020-01-23 03:50:53,485 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: Finished writing updated ServiceModel configuration to config file, for service teste.
2020-01-23 03:50:53,813 | INFO | 15 | ServiceHost | | teste | testeService.HandleServiceHostLogEvent: << All WCF services succeeded to publish. took: 00:00:00.3281398

In this example, the log should be broken into 6 events, treating the timestamp "2020-01-23 03:50:49,063" as the beginning of the first.
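A sketch of typical line-breaking settings in props.conf for this format; the sourcetype name is a placeholder. The lookahead regex starts a new event whenever a line begins with the `YYYY-MM-DD HH:MM:SS,mmm` timestamp, which leaves the `// Module ...` header block as its own event. Since those timestamps already appear to be local time per the header, no offset is applied here, but `TZ` can be added to the stanza if needed.

```
[teste_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
```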

Splunk Get Earliest Data by Index and Sourcetype

Hi all, is it possible to get the earliest available date per index and sourcetype? I tried tstats and metadata, but they depend on the search time range. I need to get the earliest time that I can still search on Splunk, by index and sourcetype; a good example would be data from 8 months ago, without using too many resources. Just let me know if it's possible.
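One approach, as a sketch: run tstats over All Time. Because tstats reads index-time metadata rather than raw events, it stays comparatively cheap even across months of data.

```
| tstats min(_time) AS earliest_event where index=* by index, sourcetype
| eval earliest_event=strftime(earliest_event, "%Y-%m-%d %H:%M:%S")
```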

Deployment Server Upgrade

Dear all, we have a deployment server with around 1,900+ clients reporting to it. Currently it is v7.0 and we are planning to upgrade it to v7.3.3. The documentation says to disable the deployment server and then upgrade, but if I disable it, what would be the behavior of the clients? What would be the safest way to upgrade the deployment server without losing any data? Also, will the deployment server (v7.3.3) work well with an indexer cluster (v7.0)? Could I potentially face any compatibility issues? Regards, Abhi

Help on coloring a threshold number with a unit value

Hi, I use a search which adds a unit (GB) to the end of the result:

| eval FreeSpace=FreeSpace." GB", TotalSpace=TotalSpace." GB"

I need to apply threshold coloring to this value, but it doesn't work due to the unit at the end:

[#DC4E41,#F1813F,#53A051] 10,80

What do I have to do, please?
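A possible workaround, sketched against Simple XML table formatting: keep FreeSpace numeric so the threshold coloring can compare values, and attach the unit through the table's number formatting instead of appending " GB" in the eval. The `unit` option on `<format type="number">` exists in recent Splunk versions; treat that as an assumption to verify against yours.

```
<format type="color" field="FreeSpace">
  <colorPalette type="list">[#DC4E41,#F1813F,#53A051]</colorPalette>
  <scale type="threshold">10,80</scale>
</format>
<format type="number" field="FreeSpace">
  <option name="unit">GB</option>
</format>
```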

Syntax error

I'm wondering how I can write a simple SQL command to join two tables in the SQL editor on Splunk. For example, when I run the query below, it gives me a syntax error:

SELECT * FROM "sysmaster":"sysadtinfo"."sysbufpool"

(sysmaster: database name; sysadtinfo: table 1; sysbufpool: table 2)

Is this the right syntax?
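A sketch of the likely intent: in Informix the `database:table` form uses a colon, and two tables are combined with an explicit JOIN; the dotted `"sysadtinfo"."sysbufpool"` form makes the parser read the second name as part of a qualified table reference, hence the syntax error. The join key below is a placeholder; substitute real matching columns from the two tables.

```sql
SELECT a.*, b.*
FROM sysmaster:sysadtinfo AS a
JOIN sysmaster:sysbufpool AS b
  ON a.shared_key = b.shared_key;
```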

Unable to retrieve data from Splunk

Hi, in our environment Nagios and Splunk are integrated. We configured an alert in the Nagios monitoring tool which fetches data from Splunk, but Nagios shows "UNKNOWN - Error in Application name "wms"". The alert uses the script check_splunk_savedsearch_value.sh with three arguments:

check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1

[root@nagios server]# ./check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1
UNKNOWN - no output returned from splunk.ce.corp|"wms:WMOS - EW - Number of Allocation records"=ERROR

When we ran the script in debug mode, the following command returned no output:

[root@nagios server]# /usr/bin/curl -s -k -u username:password https://splunk.ce.corp:8089/servicesNS/monitor/wms/search/jobs/export -d 'search=savedsearch %22WMOS%20%2d%20EW%20%2d%20Number%20of%20Allocation%20records%22' -d output_mode=csv|sed 1d
[root@nagios server]#

What could be the reason? We see that the Splunk forwarder is not installed on the Nagios production server. Does a Splunk forwarder need to be installed on the Nagios production server?

Splunk ODBC driver issue: "error code 126: The specified module could not be found (C:\Program Files\Splunk ODBC Driver\lib\SplunkDSII.dll)."

I have installed Splunk 8.0 with the Splunk ODBC driver (splunk-odbc_211) on Windows Server 2012 R2 Std x64 and Windows Server 2016 R2 Std x64. However, I'm getting the error below:

The setup routine for the Splunk ODBC Driver could not be loaded due to system error code 126: The specified module could not be found (C:\Program Files\Splunk ODBC Driver\lib\SplunkDSII.dll).

What should I install or configure next, or do you have any other advice? Thank you.

Extract field values by eliminating random strings

I have field values as below:

field1=value1 field2=server1
field1=service/value2/a1 field2=server2
field1=value3 field2=server3
field1=service/value4/a2 field2=server4
field1=value5 field2=server5
field1=service/value6/a2 field2=server4
field1=value7 field2=server6
field1=service/value8/a2 field2=server2

I am getting a few extra strings on field1 from server2 and server4. Now I want to check: if the log is from server2 or server4, truncate the pre and post random values and save only the actual value. My final output field should look like:

field1=value1; value2; value3; value4; value5; value6... etc.
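A sketch, assuming the pattern on those hosts is always `service/<value>/<suffix>`: strip the prefix and suffix with `replace`, then collapse the cleaned values into one multivalue result.

```
| eval field1=if(match(field1, "^service/"),
                 replace(field1, "^service/([^/]+)/.*$", "\1"),
                 field1)
| stats values(field1) AS field1
```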

HTTP Event Collector vs monitor directory

I have the Splunk Universal Forwarder installed on a Raspberry Pi and a couple of apps from which I want to send logs to the forwarder. What is the best and most efficient way to do this? I was thinking of:

1. HTTP Event Collector
2. Monitoring the local directories where the apps store their logs in JSON format (large files)
3. I cannot use TCP because there is no .NET Core library for this purpose

Also, the target Splunk instance the forwarder sends data to is often offline, so the forwarder needs to buffer a large amount of logs. That's why I thought monitoring files would be the best approach here, but I'm not sure.

Splunk Integration with Power BI

Hi, I'm looking at possibly integrating some of my Splunk dashboards with Power BI, hopefully using a REST API. Has anyone had any success with this? Thanks

File monitoring on a server for size and modified date

We have folder directories on the application server and collect data through a forwarder. I need to calculate the file size and last-modified time for certain files in different directories. Can anyone help me with how to do this?
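One common pattern, sketched here with placeholder paths: a scripted input on the forwarder that emits one key=value event per file with its size and modification time, which Splunk then indexes like any other input. This assumes GNU `find` with `-printf`, as found on typical Linux servers.

```shell
#!/bin/sh
# Emit one event per file with path, size in bytes, and last-modified time.
# DIRS is a placeholder list of the directories to watch.
DIRS="/opt/app/logs /opt/app/archive"
for d in $DIRS; do
  find "$d" -type f \
    -printf 'path="%p" size_bytes=%s mtime="%TY-%Tm-%Td %TH:%TM"\n' 2>/dev/null
done
```

Saved under an app's bin/ directory, this could then be referenced from an inputs.conf `[script://...]` stanza with a suitable interval.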

ArtifactReplicator - Connection failed

I see the errors below in the search head cluster. Can someone help resolve the issue?

02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - Replication connection to ip=10.164.196.166:8999 timed out
02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - Connection failed
02-11-2020 13:59:26.997 +0000 WARN ArtifactReplicator - event=artifactReplicationFailed type=ReplicationFiles files="/opt/splunk/var/run/splunk/dispatch/_splunktemps/send/s2s/scheduler__pbasav_ZWVfc2VhcmNoX3NwbHVua19zdXBwb3J0__RMD59b3a79690728a412_at_1581429480_498_638683B3-25D9-4D2A-AF2E-4E43362FDBFA-644D578C-F001-4711-B459-2338E22DF399.tar" guid=644D578C-F001-4711-B459-2338E22DF399 host=xx.xx.xxx.166 s2sport=8999 aid=746. Connection failed

We also see that some reports are generated without data, and only sometimes; not sure what is causing it.

