Channel: Questions in topic: "splunk-enterprise"

How do I pull a stats table where there are blank fields in event data?

This is the event data:

    ls1=INFO ls1Label=Severity ls2=MS SQL SERVER ls2Label=ServerType ls3=Command List ls3Label= cat=Audit sproc=ubuntu user=billy uid=DOMAIN\\billybob dest= lhost=abrokenserver ohost=serverconnectedto CMD=su apt install *

And this is my search:

    index=rootCMDs
    | rex field=_raw "^[^ \n]* (?P[^ ]+)"
    | rex field=_raw "^(?:[^\|\n]*\|){5}(?P[^\|]+)"
    | rex field=_raw "ls3label=(?<dst>.*)\scat="
    | eval ls3label=case(isnull(ls3label),"NULL",1=1,dst)
    | where isnotnull(ls3label)
    | search dst=" "
    | stats count by lhost, ls3label, sproc, user, uid
    | sort 0 count desc

When I pull the stats count I get no data, but the event data lists everything and has hundreds of events where *="no data". How do I specifically search for the blank data only? Or is my search improperly formatted?
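For reference, one common way to isolate events where a field is blank is to normalize the empty values with fillnull and then filter on the placeholder. A minimal sketch, assuming the field of interest is ls3Label extracted into ls3label (the index name is taken from the question; the rex pattern and everything else are illustrative):

    index=rootCMDs
    | rex field=_raw "ls3Label=(?<ls3label>[^=]*?)\s*cat="
    | fillnull value="BLANK" ls3label
    | where ls3label="BLANK" OR trim(ls3label)=""
    | stats count by lhost, sproc, user, uid
    | sort 0 - count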

How to search for specific text in field without additional text?

Sorry for the strange title... couldn't think of anything better. I'm doing a search on a command field in Splunk with values like:

    sudo su - somename
    sudo su - another_name
    sudo su -

And I'm only looking for the records "sudo su -". I don't want the records that match those characters and more; just records that ONLY contain "sudo su -". When I write the search Command="sudo su -" I still get the other records too. Struggling to figure this out. Thanks!
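For reference, a field=value search should match the whole field value, but extractions with trailing whitespace or overlapping patterns often cause surprises; an eval-based comparison or an anchored regex sidesteps that. A minimal sketch (index and sourcetype are placeholders):

    index=os sourcetype=linux_secure
    | where Command=="sudo su -"

Or, anchoring explicitly while tolerating trailing whitespace:

    index=os sourcetype=linux_secure
    | regex Command="^sudo su -\s*$"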

Will there be any issue when upgrading splunk from 6.6.2 to 7.1.3?

I see that it is a pretty straightforward procedure to upgrade from a 6.6 version to the next 6.x version. My question is: should I follow the same steps when upgrading from 6.6.x versions to 7.x versions, or should any extra precautions be taken?
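As a point of reference, a minimal sketch of a typical *nix tarball upgrade; the paths and backup location are assumptions, and the version-specific upgrade notes in the official docs should still be checked first:

    /opt/splunk/bin/splunk stop
    cp -rp /opt/splunk/etc /opt/splunk-etc-backup           # back up configuration first
    tar -xzf splunk-7.1.3-<build>-Linux-x86_64.tgz -C /opt  # overlay the new version
    /opt/splunk/bin/splunk start --accept-license           # migration runs on first start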

For a Splunk Enterprise non-clustered, distributed environment, can someone point me to documentation that covers server role assignment?

I have inherited a Splunk non-clustered, distributed Enterprise environment. I believe that my Splunk instances have too many server roles assigned to them. Is there documentation stating:

1. What role(s) should a Heavy Forwarder have?
2. What role(s) should a Search Head have? (Search Head role only, or KV store as well?)
3. What role(s) should an Indexer have? (Indexer role only, or KV store as well?)
4. What role(s) should a Deployment Server have?
5. What role(s) should the DMC Server have?

Right now, 4 server roles (Indexer, KV Store, License Master, and Search Head) are assigned to my DMC. It is the License Master for my infrastructure, so I know that role is required. I am having a hard time finding documentation online that explicitly states how the server role assignment should be. Thanks in advance.

How do I extend/increase bucket size in Splunk by time period (days)?

Hi Everyone, I have gone through some Splunk documents about buckets, but most of them discuss how to increase/extend the size of a bucket in MB/GB. My concern is that I want to extend my buckets by days (example: I want to store my last 60 days of data in my hot bucket). I know that I have to convert the days to a seconds value and then use that in the bucket configuration, but I didn't find a proper example in the Splunk docs. Can anyone help me with this, or point me to good documentation with a proper example? It'll be very helpful for me. Thanks, Saibal6
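For reference, the conversion is days to seconds (days × 86,400), so 60 days is 60 × 86,400 = 5,184,000 seconds. A minimal indexes.conf sketch; the index name is a placeholder, and note that frozenTimePeriodInSecs governs total retention while maxHotSpanSecs limits the time span of a single hot bucket:

    [myindex]
    # freeze (by default, delete) buckets whose newest event is older than 60 days
    frozenTimePeriodInSecs = 5184000
    # optional: cap the time span of any one hot bucket at 1 day
    maxHotSpanSecs = 86400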

Why can't I find logs under the _internal index when I'm able to see them in the actual index?

Hi friends, I am using the below search query to see the usage of a specific index. When I run the search over 30 days, it shows usage for a few days and nothing for the others. But when I search the index directly (QUERY: index=ship), I can see data for all days, and for other indexes I am able to see complete usage for all 30 days. Can you help me debug this issue?

QUERY:

    index=_internal source=*metrics.log
    | eval GB=kb/(1024*1024)
    | search group="per_index_thruput" series=ship
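For reference, metrics.log only records the top series (by volume) for each sampling interval, so a low-volume index can drop out of per_index_thruput on quiet days even though its data is indexed normally; that would explain the gaps. A sketch of an alternative based on license_usage.log, which records every index (the field names type, idx, and b are standard in that log):

    index=_internal source=*license_usage.log type=Usage idx=ship
    | eval GB=b/1024/1024/1024
    | timechart span=1d sum(GB) AS GB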

Can maxTotalDataSizeMB & frozenTimePeriodInSecs be combined in an index config?

Hi Friends, I am using the below config for creating indexes in both my QA & Production clusters. At this point, I am only using a retention period for indexes, but it is not helping with capacity management. Can I add **maxTotalDataSizeMB** to this config so that, if the index reaches its capacity limit, it will take care of it?

    [ship]
    homePath = volume:primary/ship/db
    coldPath = volume:primary/ship/colddb
    thawedPath = $SPLUNK_DB/ship/thaweddb
    frozenTimePeriodInSecs = 10368000

When **maxTotalDataSizeMB** reaches the capacity limit, does it remove the old logs and continue indexing new logs, just like **frozenTimePeriodInSecs**? Or does it just stop indexing when it reaches the limit? Thanks, -Prashanth
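For reference, the two settings can coexist: a bucket is frozen when it crosses frozenTimePeriodInSecs or when the index grows past maxTotalDataSizeMB, whichever happens first, and the oldest buckets are frozen first while indexing continues. A sketch; the 500000 value is only a placeholder:

    [ship]
    homePath   = volume:primary/ship/db
    coldPath   = volume:primary/ship/colddb
    thawedPath = $SPLUNK_DB/ship/thaweddb
    frozenTimePeriodInSecs = 10368000   # 120 days
    maxTotalDataSizeMB = 500000         # size cap; oldest buckets freeze first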

How can I get my Splunk Enterprise instance to monitor audit log files on a remote Linux host?

I have been having a difficult time finding any examples of this specific scenario. I need my Splunk Enterprise 7.0.3 instance, which is being executed by an MSA (residing on a Windows host), to continuously monitor the audit log files on a remote Linux host. How I access the log files manually: from the Windows host, I have set up an NFS (using Open Text NFS Solo) that can access the file using either of these 2 UNC paths:

***1. \\ remote_ip_addr\var\log\audit\audit.log 2. \\ remote_ip_addr\/var/log/audit/audit.log***

I also have S: mapped to the UNC path \\ remote_ip_addr\/var/log (***S:\audit\audit.log***). *(Please note that I have purposely added a whitespace after "\\" in the paths above because I do not have enough karma points to post links and I did not want the paths to be censored by answers.splunk. But no whitespace exists on my real system.)* Attempts with Splunk Web to Add Data > Upload are successful if I use any of the above 3 options. Every attempt to continuously monitor this file has been unsuccessful, resulting in one of the following:

1. No data exists in the index and splunkd.log reports the following error: **WARN FilesystemChangeWatcher - error getting attributes of path "*full_path_to_audit.log*": The network path was not found.**
2. No data exists in the index but splunkd.log reports no errors/warnings.

I have also tried to add continuous monitoring via a stanza in $SPLUNK_HOME/etc/system/local/inputs.conf. What is the proper way to have Splunk monitor this file?
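For reference, mapped drive letters are per-logon-session on Windows, so a service running under an MSA generally cannot see S:; the UNC path, readable by the service account, is the safer target. A minimal inputs.conf sketch (the index and sourcetype names are placeholders), though installing a universal forwarder directly on the Linux host would avoid the NFS layer entirely:

    [monitor://\\remote_ip_addr\var\log\audit\audit.log]
    index = linux_audit
    sourcetype = linux:audit
    disabled = 0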

How can I correlate events when one field is calculated and the other field is not present in all the events?

A requirement is to get a list of domains (src_host) with the count of their actions (blocked, delivered) associated with them. The action field is calculated from the event below:

    2018-09-26T16:00:09+00:00 x.x.com mail_logs_mail*_push: Info: MID 1966 ICID 2657 To: Rejected by Receiving Control

But the src_host field is in this event:

    2018-09-26T16:00:08+00:00 x.x.com mail_logs_mail*_push: Info: Info: New SMTP ICID 2657 interface Data_1 (1.1.1.1) address 1.1.1.151 reverse dns host abc.net verified yes

I would like to know how I can correlate the 2 fields without the 'transaction' command and get the results.
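For reference, since both events carry the same ICID, stats can stitch them together without transaction. A sketch, assuming action and src_host are already extracted from their respective events (the index and sourcetype are placeholders):

    index=mail sourcetype=esa ("New SMTP ICID" OR "MID")
    | stats values(src_host) AS src_host, values(action) AS action by ICID
    | stats count by src_host, action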

Transactions and reporting multivalue event types

Hi all. I'm having trouble expanding a multivalued transaction into separate fields by their corresponding values. I'm conducting a quality study to determine the number of incomplete first inspections. For example, inspector "a" initially approved a product, but when the product was returned for further service, inspector "b" discovered issues that should've been captured originally. I've tried using the stats command to accomplish this, but the results were unreliable from a sequence-of-events perspective, so I've decided to use the transaction command, which has yielded much better results. What I would like to do is keep the transaction grouped by *productId*, but separate the *eventType* values by *inspectorName*, so I can create reporting. This is the query I'm trying to expand:

    | transaction productId startswith=(eventType=approve AND new=true OR new=false) endswith=(eventType=reject AND new=true) mvlist=true

I've tried combining fields with **mvzip**, but it's hard for me to manipulate the data to create a nice tabular report.
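For reference, a common pattern for pulling paired multivalue fields apart after transaction ... mvlist=true is mvzip followed by mvexpand. A sketch built on the query from the question, assuming inspectorName and eventType line up positionally within each transaction:

    | transaction productId startswith=(eventType=approve AND new=true OR new=false) endswith=(eventType=reject AND new=true) mvlist=true
    | eval pair=mvzip(inspectorName, eventType, "|")
    | mvexpand pair
    | eval inspectorName=mvindex(split(pair, "|"), 0), eventType=mvindex(split(pair, "|"), 1)
    | stats count by productId, inspectorName, eventType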

How do I search for Chinese characters in Splunk?

Hi, I need to create a report that looks for certain terms in Chinese. Is there anything special that I need to do for that? I have a translation table, but I'm not sure if there's something else that needs to be done.
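For reference, searching Chinese terms directly works as long as the events were indexed with the correct character encoding, since Splunk stores events as UTF-8 internally. A sketch; the sourcetype name and the GB18030 encoding are assumptions for source data that is not already UTF-8:

    # props.conf, only needed if the source files are not UTF-8
    [my_chinese_sourcetype]
    CHARSET = GB18030

    # the terms can then be searched directly
    index=cn_app "错误" OR "失败"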

Can you help me migrate from Windows running 6.2.3 to CentOS 7 running Splunk 7.1.3?

Like the title says, I'm moving from an old Windows install to a CentOS build. I have set up and tested the new build, but have run into a few issues while testing:

1. If I stop Splunk and attempt to move the files in the DB, SOME of the files move, but others give some sort of file handler error. I have tried stopping the services, as well as splunk stop. The files that DO copy show up in the destination.
2. I don't get any alerts, reports, etc. in the copy.

Sure would be cool if there were an app where you could enter server1IP/creds and server2IP/creds and it migrates the data from server 1 to server 2. Does such a tool exist?
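For reference on issue 2: alerts, reports, and dashboards live in configuration files, not in the index data, so they have to be copied separately. A sketch of what to carry over from the old Windows $SPLUNK_HOME while both instances are stopped (default install paths are assumed; the file-handler errors on index files usually mean splunkd has not fully exited, which is worth verifying in Task Manager):

    C:\Program Files\Splunk\etc\apps\          ->  /opt/splunk/etc/apps/
    C:\Program Files\Splunk\etc\users\         ->  /opt/splunk/etc/users/
    C:\Program Files\Splunk\etc\system\local\  ->  /opt/splunk/etc/system/local/  (review host-specific settings first)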

Converting Windows Event Log to rsyslog format

I am trying to aggregate our Windows and Linux logs from universal forwarders to a heavy forwarder and, finally, to our internal Splunk indexer as well as to a third-party syslog server. I was able to split up the routing and ports so logs go where they need to (_TCP_ROUTING and _SYSLOG_ROUTING), but the syslog server receives the Windows event logs as multiple events (each key-value pair seems to get its own line). How can I use props.conf/transforms.conf to parse only the data being forwarded from the Windows port on the heavy forwarder to the Windows syslog port?

The Windows universal forwarder is using [WinEventLog://Security] in inputs.conf and [tcpout://server:].

The heavy forwarder is using:

inputs.conf

    [splunktcp://]
    configure_host = dns
    _TCP_ROUTING = WindowsTCP
    _SYSLOG_ROUTING = WindowsSyslog

outputs.conf

    [syslog]
    defaultGroup = WindowsSyslog

    [syslog:WindowsSyslog]
    server = syslogserver:514
    type = tcp

The receiving Linux server is using rsyslog on port 514/tcp.
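For reference, Windows event logs span many lines, and the syslog output emits each line as its own syslog message; one common workaround is flattening the newlines with a SEDCMD on the heavy forwarder before the events are routed out. A minimal props.conf sketch, assuming the events arrive with sourcetype WinEventLog:Security:

    # props.conf on the heavy forwarder
    [WinEventLog:Security]
    SEDCMD-flatten_newlines = s/[\r\n\t]+/ /g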

Calculating ratios of counts to field totals

Here's what I have:

    base search | stats count as spamtotal by spam

This gives me (13 events):

    spam     | spamtotal
    -------- | ---------
    original | 5
    crispy   | 8

What I want is (13 events):

    spam     | eggs | count | spamtotal | ratio
    -------- | ---- | ----- | --------- | -----
    original | AAA  | 2     | 5         | 0.4
    original | BBB  | 1     | 5         | 0.2
    crispy   | CCC  | 2     | 8         | 0.25
    crispy   | DDD  | 2     | 8         | 0.25
    etc...

Basically it's a ratio of count to the spamtotal, or a dynamic impact percentage. I feel like this should be easy, but stats and eventstats aren't working for me so far. Thank you.
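For reference, one common pattern is to let stats produce the per-spam/eggs counts and then use eventstats to attach the per-spam total alongside them. A sketch against the fields described above ("base search" stands in for the real search):

    base search
    | stats count by spam, eggs
    | eventstats sum(count) AS spamtotal by spam
    | eval ratio=round(count/spamtotal, 2)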

Deleted RADIUS user is still visible in the Splunk console. How do I remove it?

A user called "seguridad", created in RADIUS but already deleted, is still visible in the Splunk web console (see image below). Initially the customer was on Splunk 6.3. We upgraded, first to 6.5 and finally to Splunk 7.1.2, but the issue persists. Also, the folder of this user was removed from the path $HOME/splunk/etc/users. Any idea what else is needed to remove this user? ![alt text][1] [1]: /storage/temp/256077-3.png
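For reference, if the account was created as a native Splunk account, it is recorded in $SPLUNK_HOME/etc/passwd rather than only under etc/users, so that file is worth checking. A sketch of removing the user through the REST API instead of the filesystem (the admin credentials are placeholders):

    curl -k -u admin:changeme -X DELETE https://localhost:8089/services/authentication/users/seguridad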