Channel: Questions in topic: "splunk-enterprise"

How to stream Azure VM performance counters with the Azure Monitor add-on for Splunk?

I'm using the **Azure Monitor add-on for Splunk** to collect Azure diagnostic logs, activity logs, and metrics, but as I understand it, this add-on does not collect performance counter data from VMs. Does anyone know how I can collect performance counters? Thanks

eval token in dashboard for correcting hostname does not work

Hi everybody, I have the following problem: the first dashboard contains a lot of panels, each of which should link to a more detailed view of a particular host. On most of the charts the link works, but on one panel the `$click.name2$` value is not "host" but "send: host" or "received: host". I haven't found a way to correct it on this panel itself, so I thought I could fix the host token in the detail view instead: `replace($form.tok_host$, ".*?([^\s]+)$", "\1")`. I even tried `'form.tok.host'` instead of `$form.tok_host$`, but it seems this just sets the token to blank. Does anybody know an answer to this problem? Greetings, Christoph
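One approach worth trying (a sketch, not a confirmed fix: the `<eval>` drilldown token is standard Simple XML, but the link target and panel wiring here are placeholders for whatever the dashboard actually uses) is to clean the value at drilldown time, before the detail view ever sees it:

```
<drilldown>
  <!-- $click.name2$ arrives as "send: host" or "received: host";
       keep only the last whitespace-delimited token -->
  <eval token="form.tok_host">replace("$click.name2$", "^.*?([^\s]+)$", "\1")</eval>
  <link target="_blank">detail_view?form.tok_host=$form.tok_host$</link>
</drilldown>
```

Doing the cleanup on the sending panel means the detail view's token logic stays untouched and keeps working for the panels that already pass a clean host.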

Compare 2 results from different time periods.

I'm trying to compare 2 results from different time periods using the search below, but I'm getting a zero result where I should be seeing 2 events. I've recreated the search so that we can all run it: I use one existing field (`sourcetype`) and `eval` to create another (`output`), then do the math and write the answer to a `result` field.

```
index=_internal earliest=-4d@d latest=-3d@d splunkd_stderr
    [ search index=_internal earliest=-3d@d latest=-2d@d splunkd_stderr
    | eval output=sourcetype ]
| eval result=sourcetype-output
| stats count(result)
```
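Two things are likely going wrong here (a sketch of an alternative follows; the index and source filter are taken from the search above): `sourcetype-output` is arithmetic subtraction, which evaluates to null on string values, and the subsearch's `output` field becomes a filter on the outer search rather than a field on its events. One common pattern is to search both periods at once and split them with `eval`:

```
index=_internal splunkd_stderr earliest=-4d@d latest=-2d@d
| eval period=if(_time < relative_time(now(), "-3d@d"), "day1", "day2")
| stats count(eval(period="day1")) AS day1 count(eval(period="day2")) AS day2 BY sourcetype
| eval diff=day2-day1
```

This keeps both periods as numeric counts on the same row, so the final subtraction is arithmetic on numbers rather than on strings.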

How do I normalize fields to join two searches?

I have run across an issue I have been banging my head against, and it will not give. I have a search that joins to another search, easy enough. However, I seem to have some funky character at the end of some of my data that is breaking the join.

For example, one search produces this output:

```
name       | value
vwilliams1 | 10000
```

A second search LOOKS to be returning this output:

```
name       | here
vwilliams1 | true
```

When I do a join on the name field, it cannot join the two records. I have tried trim, rex sed commands, everything I can think of. I have run the len command and compared field lengths, which showed they are the same. I have even used substr(name,1,10) for both searches and it does not work. If I use substr(name,1,9) for both searches it will join, so it has to be something with that last character.

The only way I have been able to get the join to work is to do this in the second search:

```
| eval name=substr(name,1,9) | strcat name "1" name
```

This obviously will not work for any fields that don't have exactly 10 characters and end in a 1, so I cannot really use this solution. So I am back to trying to normalize the data in the second search to match the first. I have never come across a situation like this before and have used every trick I can think of to normalize this data. Does anyone have suggestions on how to get this to work? Thanks to all!
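If the stray character is a non-printing byte (a trailing `\r`, a NUL, or a non-breaking space are common culprits when the two datasets come from different systems), one sketch is to strip everything outside printable ASCII from the key on both sides before joining; the field name below matches the searches above, and the `...` placeholders stand for the real base searches:

```
... second search ...
| eval name=replace(name, "[^\x20-\x7e]+", "")
| join type=inner name
    [ search ... first search ...
    | eval name=replace(name, "[^\x20-\x7e]+", "") ]
```

Cleaning both sides with the same expression is safer than cleaning only the "dirty" one, since it makes the join key canonical regardless of which source misbehaves.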

Change bar color based on y axis value in timechart

Hi there, I have already found several answers to the question of how to apply color ranges to a column chart, but I didn't manage to get them to work with a timechart. My search looks like this:

```
index="index" startupTime=* | timechart span=1hour count(startupTime) by host limit=0
```

I have about 100 hosts and want to mark hosts green when they have only one restart an hour, yellow for 2-4 restarts, and red for 4 or more. Is this somehow possible using a timechart? Thanks in advance
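As far as I know, timechart columns can't be colored by their y-value per series, but one workaround (a sketch; the thresholds come from the question and the index/field names from the search above) is to classify each host-hour into a severity bucket and chart by severity instead of by host, then pin the severity colors in the panel's XML:

```
index="index" startupTime=*
| bin _time span=1h
| stats count AS restarts BY _time host
| eval severity=case(restarts<=1, "green", restarts<=4, "yellow", true(), "red")
| timechart span=1h count BY severity
```

```
<option name="charting.fieldColors">{"green": 0x65A637, "yellow": 0xF7BC38, "red": 0xD93F3C}</option>
```

This loses the per-host breakdown in the chart itself, but a drilldown search can recover which hosts fall into each bucket for a clicked hour.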

Splunk 8.0 release date?

Hello guys, does anyone know when Splunk 8.0 will be released (and therefore when Splunk 6 reaches end of life)? Thanks.

Can I upgrade my CentOS to Python 3?

Hello all! I got a security alert from my company about my CentOS server using Python 2.7. As it stands, I know that my version of Splunk uses 2.7.5 in its own /bin/python directory; however, if I were to upgrade the OS's version of Python to 3.0, would this affect my on-premise heavy forwarder (which uses 2.7.5)? Or is everything contained within Splunk's own /bin/python? Thanks!

Why is only specific data on my UF not being sent to my indexers?

Hi, here is my situation: my Splunk environment is all Linux and consists of 3 indexers, 2 search heads, and 2 log collectors running the UF client. On one of my log collectors I have a stanza for collecting Websense logs:

```
[monitor:///iscsi/rsyslog_custom/logs/websense/hostname/*/]
index=main
sourcetype=websense:cg:kv
host=HOSTNAME
```

For some odd reason these Websense logs are not being sent to the indexers. However, all other logs collected on that UF are being sent over, so it is clearly not a network communication issue. When I run a search for `sourcetype=websense:cg:kv` I get diddly squat, and when I search `index=main` the hostname does not appear in the results. I have been looking through the splunkd.log file but nothing jumps out at me. Any help is appreciated. Thanks
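One thing worth checking (a guess, not a confirmed diagnosis): a `monitor` path ending in `/*/` asks Splunk to wildcard-match subdirectories, which is easy to get subtly wrong, while a plain directory monitor already descends into subdirectories by default. A sketch of a simpler stanza, keeping the settings from the question:

```
[monitor:///iscsi/rsyslog_custom/logs/websense/hostname]
index = main
sourcetype = websense:cg:kv
host = HOSTNAME
```

After restarting the forwarder, `$SPLUNK_HOME/bin/splunk list inputstatus` shows whether the files under that path are being matched and how far each one has been read.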

How do I get my "splunk-select" dropdown menu to not be clipped off the bottom of the frame?

To illustrate what I mean, here is the splunk-select item in the triggered-alerts config UI: ![alt text][1] and here is the one I've created in the UI for my own action: ![alt text][2] [1]: /storage/temp/254997-triggered-alerts.png [2]: /storage/temp/254998-mine.png As you can see, the bottom of the menu is cut off at the edge of the frame. It's especially annoying in Safari, which doesn't let you scroll down to see the bottom (well, it lets you, but instantly scrolls back up). Is there a parameter I should be adding to the splunk-select, or perhaps a particular class I should give it? I can't find anything about it in the docs.

How do I change a legend label for a graph?

I am running a search like this:

```
sourcetype="pan:threat" earliest=-1d | timechart span=5m count by threat_name limit=8
```

One of the legend labels for `threat_name` only comes up with the ID number, not the actual name, i.e. 9999 in the legend where "URL filtering" should be. The other labels come up correctly. Where can I change the 9999 to something like URL filtering (9999)? Is this a case where I would use an eval/case statement? I tried, and was unsuccessful in forming a correct one. Thank you in advance. -Sam
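An `eval` before the `timechart` is indeed the usual way (a sketch; the literal `9999` and the replacement label come from the question), since the split-by values are whatever `threat_name` holds when `timechart` runs:

```
sourcetype="pan:threat" earliest=-1d
| eval threat_name=if(threat_name="9999", "URL filtering (9999)", threat_name)
| timechart span=5m count by threat_name limit=8
```

If several IDs need renaming, `case()` extends the same idea: `eval threat_name=case(threat_name="9999", "URL filtering (9999)", true(), threat_name)`.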

Querying Databases without Java

With the upcoming changes to the cost of support for Java version 8, is there any other way that people are connecting to databases (mainly Microsoft SQL Server, though other types of databases may be needed in the future) without using DB Connect and Java? According to the docs for DB Connect, JRE 8 is required, and that is the version whose support pricing is changing. This would be for both executing ad hoc queries to show data and pulling data in for indexing/storing in Splunk. Thanks.
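One Java-free pattern for the indexing side (a sketch under assumptions: the script name is a placeholder, and the script itself would use a native client such as Microsoft's `sqlcmd` or FreeTDS to run the query and print rows to stdout) is a scripted input, which indexes whatever the script writes out on each interval:

```
[script://./bin/mssql_poll.sh]
interval = 300
index = main
sourcetype = mssql:query
disabled = false
```

This covers pulling data in on a schedule; it does not replace DB Connect's ad hoc query-from-the-search-bar workflow, which would still need some query tool outside Splunk.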

Where can I open a chat session with a Splunk rep from splunk.com for certification questions?

I have been told by some Splunk support lines that there is no chat, but I guess I need to clarify: I once opened a chat session while in the training area of the site. Regardless, someone besides me has got to know what I am talking about?

What type of user can add data?

This has been asked before, but not answered. 1. Do you have to be an admin to add data? 2. The [roles and capabilities][1] section of the documentation doesn't seem to describe a specific capability for "add data". Previous answers to this question have pointed to this section of the doc. [1]: http://docs.splunk.com/Documentation/Splunk/7.1.3/Security/Rolesandcapabilities

Python SDK index.submit

Hi! I have a cluster of search heads (SH) and indexers. My SHs are configured to forward data to the indexers without keeping a local copy. I've created the index IndexName on the indexers and want to upload some data there via the Python SDK (index.submit) from my custom alert action. When I first got a service/client from Splunk on the SH, I got an exception along the lines of client.indexes['IndexName'] failing because there was no IndexName on the SH. I created IndexName on the SH and the exception disappeared. So I expected that I could submit an event via client.indexes['IndexName'].submit() and the SH would forward it to the indexers. Submitting raised no errors, but I can't find the event on the SHs or on the indexers. Who knows where my event is?
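One quick check (a sketch; `IndexName` is the index from the question) is to ask every server the search head can reach for its per-index event counts, which shows whether the event landed on an indexer, stayed on the SH, or went nowhere:

```
| eventcount summarize=false index=IndexName
```

If the count only ever rises on the search head itself, the SH's outputs.conf forwarding settings (for example `indexAndForward` and selective indexing) are worth a look, since they control whether locally submitted events are kept, forwarded, or both.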

Is it possible to group notable events in Incident Review for Enterprise Security?

I have a couple of searches that trigger notables in Incident Review, and I want to group them by the users performing the actions and let the drilldown show me the detailed information. Does anyone know how to group them?

Parsing - Chrome Log

Hi Community, we have an issue with one of our cloud products and we need to collect our Chrome browser log. We have a log file like this:

```
[2368:3448:0911/104129.306:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.331:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.353:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104129.366:INFO:CONSOLE(9006)] "Handling message type", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9006)
[2368:3448:0911/104140.068:INFO:CONSOLE(9013)] "STASH-LOGGER: sendLogTraces", source: https://dhqbrvplips7x.cloudfront.net/directory/5039/assets/web-directory-7ffa5c07600e16c16ba367872647fa6c.js (9013)
[2368:3448:0911/104150.489:INFO:CONSOLE(372)] "Other topic: { "topicName": "channel.metadata", "eventBody": { "message": "WebSocket Heartbeat" } }", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (372)
[2368:3448:0911/104150.499:INFO:CONSOLE(312)] "Sending ping WS healthcheck", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (312)
[2368:3448:0911/104150.519:INFO:CONSOLE(372)] "Other topic: { "topicName": "channel.metadata", "eventBody": { "message": "pong" } }", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (372)
[2368:3448:0911/104150.519:INFO:CONSOLE(374)] "Pong WS healthcheck received.", source: chrome-extension://bkmeadpinckobiapkihcoenmipobdaio/background/background.js (374)
```

I made this props.conf, but it's not correct in my search. My props.conf:

```
[chrome:log]
LINE_BREAKER=^\[\d+.\d+:\d+\/\d+.
CHARSET=latin-1
SHOULD_LINEMERGE=true
TIME_FORMAT=%m%d/%H%M%S.%3N
category=Miscellaneous
description=A common log format with a predefined timestamp. Customize timestamp in "Timestamp" options
disabled=false
pulldown_type=true
TIME_PREFIX=\[\d+.\d+
```

But in my search I still get 2 or more lines per event (like this): ![alt text][1] [1]: /storage/temp/254995-recherche-splunk-663-google-chrome-2018-09-20-0827.png Can you help me :) Many thanks
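Two issues stand out in the stanza above: `LINE_BREAKER` must contain a capture group whose match is consumed as the break text, and `SHOULD_LINEMERGE = true` re-merges the lines the breaker just split. A sketch of a corrected stanza (the regexes assume every event starts with the `[pid:tid:MMDD/HHMMSS.mmm:` prefix shown in the sample, so verify against real data):

```
[chrome:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d+:\d+:\d{4}/\d{6}\.\d{3}:)
TIME_PREFIX = ^\[\d+:\d+:
TIME_FORMAT = %m%d/%H%M%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 16
CHARSET = latin-1
```

Line-breaking settings apply at parsing time, so they need to land on the indexer (or heavy forwarder) doing the parsing, and they only affect newly indexed data.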



Can you help me with an inputs.conf wildcard issue?

Hi, I have a forwarder set up with this inputs.conf:

```
[monitor:///home/mqm/mqstatistics/splunk/*_QM_Q_*]
disabled = false
index = mq
sourcetype = qstats
crcSalt =

[monitor:///home/mqm/mqstatistics/splunk/*_QM_CHL_*]
disabled = false
index = mq
sourcetype = chlstats
crcSalt =
```

The location /home/mqm/mqstatistics/splunk/ has many files; here is a sample directory listing:

```
-rw-r--r--- 1 mqm mqm  30335 Sep 19 12:24 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-06.splunk
-rw-r--r--- 1 mqm mqm  29468 Sep 19 12:25 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-07.splunk
-rw-r--r--- 1 mqm mqm   5325 Sep 19 12:26 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-08.splunk
-rw-r--r--- 1 mqm mqm  10626 Sep 19 12:26 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-09.splunk
-rw-r--r--- 1 mqm mqm      0 Sep 19 13:18 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-10.splunk
-rw-r--r--- 1 mqm mqm  32233 Sep 19 13:19 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-11.splunk
-rw-r--r--- 1 mqm mqm  39100 Sep 19 13:20 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-12.splunk
-rw-r--r--- 1 mqm mqm  32861 Sep 19 13:20 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-13.splunk
-rw-r--r--- 1 mqm mqm  32758 Sep 19 13:21 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-14.splunk
-rw-r--r--- 1 mqm mqm   9269 Sep 19 13:21 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-15.splunk
-rw-r--r--- 1 mqm mqm  11222 Sep 19 13:22 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-16.splunk
-rw-r--r--- 1 mqm mqm  31818 Sep 19 13:23 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-17.splunk
-rw-r--r--- 1 mqm mqm  32847 Sep 19 13:23 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_CHL_statistics_2018-09-18.splunk
-rw-r--r--- 1 mqm mqm 178561 Sep 19 12:24 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-06.splunk
-rw-r--r--- 1 mqm mqm 177300 Sep 19 12:25 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-07.splunk
-rw-r--r--- 1 mqm mqm 128417 Sep 19 12:26 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-08.splunk
-rw-r--r--- 1 mqm mqm 140852 Sep 19 12:26 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-09.splunk
-rw-r--r--- 1 mqm mqm      0 Sep 19 13:18 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-10.splunk
-rw-r--r--- 1 mqm mqm 181606 Sep 19 13:19 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-11.splunk
-rw-r--r--- 1 mqm mqm 195047 Sep 19 13:20 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-12.splunk
-rw-r--r--- 1 mqm mqm 183082 Sep 19 13:20 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-13.splunk
-rw-r--r--- 1 mqm mqm 181658 Sep 19 13:21 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-14.splunk
-rw-r--r--- 1 mqm mqm 136505 Sep 19 13:21 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-15.splunk
-rw-r--r--- 1 mqm mqm 140286 Sep 19 13:22 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-16.splunk
-rw-r--r--- 1 mqm mqm 181603 Sep 19 13:23 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-17.splunk
-rw-r--r--- 1 mqm mqm 181470 Sep 19 13:23 /home/mqm/mqstatistics/splunk/BRT5TS01_QM_Q_statistics_2018-09-18.splunk
```

I confirm that I can read those files as the splunk ID. I also manually loaded a couple of those files into Splunk Enterprise and they look good. The issue is: I'm not receiving any data, even though everything I check says I should be. The mq index exists. There are no warnings or errors in the logs. The forwarder reports this:

```
09-20-2018 12:46:49.014 -0400 INFO TailingProcessor - Adding watch on path: /home/mqm/mqstatistics/splunk.
09-20-2018 12:46:49.014 -0400 INFO TailingProcessor - Adding watch on path: /home/mqm/mqstatistics/splunk.
09-20-2018 12:46:49.013 -0400 INFO TailingProcessor - Parsing configuration stanza: monitor:///home/mqm/mqstatistics/splunk/*_QM_Q_*.
09-20-2018 12:46:49.013 -0400 INFO TailingProcessor - Parsing configuration stanza: monitor:///home/mqm/mqstatistics/splunk/*_QM_CHL_*.
```

I am receiving data from other sources on this forwarder, just not this one. Why doesn't this inputs.conf work? Thanks.
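One detail that stands out (a guess, not a confirmed diagnosis): both stanzas set `crcSalt =` with an empty value. `crcSalt` changes how Splunk fingerprints files to decide whether they have already been indexed, and the documented way to salt with the full path is the literal string `<SOURCE>`; an empty value is at best a no-op. A sketch of the first stanza with that change:

```
[monitor:///home/mqm/mqstatistics/splunk/*_QM_Q_*]
disabled = false
index = mq
sourcetype = qstats
crcSalt = <SOURCE>
```

Running `$SPLUNK_HOME/bin/splunk list inputstatus` on the forwarder shows, per file, whether it was matched and how far it has been read; files reported as fully read with nothing indexed usually point at a CRC clash with an already-seen file.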


