Channel: Questions in topic: "splunk-enterprise"

Lookup command returning incorrect (and inconsistent) null values

I encountered some very weird behaviour. I think I have found a way around it, but I want to make sure that I didn't misunderstand anything, and I want to isolate/define the issue as well as possible. Maybe this is already known to some of you. I have a lookup which gives inconsistent results. If I feed a lot into it via | lookup, I don't always get output, even when the entry exists in the lookup. One search might return a result, the next might not. My search is something like this (very simplified):

    index=myindex sourcetype=mysourcetype someparameters=myparameters [| inputlookup listofnumbers.csv | fields number]
    | dedup number
    | lookup numberToText number output text as text1
    | search number < 1000
    | lookup numberToText number output text as text2
    | table number, text1, text2

The first lookup has to look up about 10,000 values. Sometimes they get a text1, sometimes they don't, even though they are in the lookup numberToText. The second lookup, now dealing with a smaller number of rows, always seems to give the correct output. Has anyone ever experienced this? I know that a subsearch at the top of a search can only return 10k results to the outer search, but I am not aware of any such restriction on the lookup command itself. The lookup is a definition which points to a CSV; it makes no difference if the CSV is addressed directly.
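
A minimal sketch for narrowing this down, assuming numberToText is small enough to read with inputlookup and that number is the lookup's key field: run the lookup against the same list of numbers on its own, outside the main search, and count how many rows come back with a value. If total and matched diverge between runs, the lookup itself is behaving inconsistently; if they stay equal, the problem is more likely in the surrounding search.

    | inputlookup listofnumbers.csv
    | fields number
    | dedup number
    | lookup numberToText number OUTPUT text AS text1
    | stats count AS total, count(text1) AS matched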

Cleaning up orphaned searches and reports

We migrated search heads, and the user directories contained content from users who have since left, so no matching usernames were created on the new search head. I now get a message that there are orphaned searches. Any advice?
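
A sketch for listing saved searches whose owner no longer exists as a user, assuming the orphaned content is saved searches/reports and that you can run the rest command on the search head; treat the results as candidates to review rather than a definitive list:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | rename eai:acl.owner AS owner, eai:acl.app AS app
    | table title, app, owner
    | search NOT [| rest /services/authentication/users splunk_server=local | fields title | rename title AS owner]

From there the objects can be reassigned to an existing user or removed.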

Double spaces are suppressed in search results

    | makeresults
    | eval owner_realname="Andrew  Gerber"
    | where match(owner_realname, "\s{2}")

The search above generates output, but in the browser display the double space in owner_realname is collapsed to a single space (it is present if you download the results via CSV). ![alt text][1] [1]: /storage/temp/267609-screen-shot-2019-02-09-at-103304-am.png
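
This looks like standard HTML whitespace collapsing in the results display rather than the field value being changed; the CSV export suggests the underlying data still has both spaces. A small sketch to confirm this from within the search itself (field name as in the question; the underscore replacement is just to make each space visible):

    | makeresults
    | eval owner_realname="Andrew  Gerber"
    | eval name_length=len(owner_realname)
    | eval spaces_marked=replace(owner_realname, " ", "_")

If name_length counts both spaces and spaces_marked shows two underscores, the value is intact and only the rendering differs.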

How can I check the CPU utilization of the SH / indexer from a search?

Hello, I do not have access to the OS of the Splunk machines, but I suspect a CPU bottleneck because my alert jobs have a 3 minute lag between scheduling and dispatching. I would like to investigate this further. Is there any way to query an internal index for the CPU utilization of the SH or indexer? Kind Regards, Kamil
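
A sketch using the _introspection index, assuming introspection data is being collected on (or forwarded from) the instances in question; the Hostwide component carries per-host CPU percentages:

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide
    | eval cpu_pct='data.cpu_system_pct' + 'data.cpu_user_pct'
    | timechart span=5m avg(cpu_pct) by host

The scheduler's own logging in index=_internal sourcetype=scheduler may also be worth a look for the 3 minute delay itself.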

How to calculate starttime and endtime duration

How do I calculate the duration between a start time and an end time? Example values: 08-feb-2019 01:30:18 | 08-feb-2019 01:30:28

How to calculate the duration between starttime and endtime?

Actually I am new to Splunk. My logs contain a start time and an end time, and I need to calculate the duration between them. Example values: 01-feb-2019 01:30:18 | 01-feb-2019 01:30:28. The field names are starttime and endtime.
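
A sketch, assuming the fields starttime and endtime are already extracted and always use the format shown (two-digit day, abbreviated month, four-digit year, 24-hour time):

    ... your base search ...
    | eval start_epoch=strptime(starttime, "%d-%b-%Y %H:%M:%S")
    | eval end_epoch=strptime(endtime, "%d-%b-%Y %H:%M:%S")
    | eval duration_seconds=end_epoch - start_epoch
    | eval duration_readable=tostring(duration_seconds, "duration")
    | table starttime, endtime, duration_seconds, duration_readable

For the sample values above this gives a duration of 10 seconds.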

Lookup command returning incorrect null values

I encountered some very weird behaviour. I think I have found a way around it, but I want to make sure that I didn't misunderstand anything, and I want to isolate/define the issue as well as possible. Maybe this is already known to some of you. Update: I did some more testing, and while I still have this issue I have not been able to recreate it with fake data. However, I minimized my query to a vast degree, and pretty much every element is essential now:

    | inputlookup faketestlogs.csv
    | eval test.number=mvindex('test.number',0,0)
    | lookup fakedictionary.csv test.number output color
    | eval mydump='test.number'
    | eval mydump2=color
    | eventstats dc(test.id) as #ids by test.number
    | lookup fakedictionary.csv test.number output color as color2
    | search test.number=500

In the real world (even with real data saved to CSVs) I get the following result: color is null (incorrect), color2 is correct, mydump is correct, mydump2 is null (incorrect). Removing the mvindex fixes the issue. Removing the eventstats also fixes the issue. I still have no idea why. Maybe it's some kind of weird formatting issue, with Splunk interpreting the data in an unintended way?
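
One variable that might be worth eliminating is the dot in the field names, since eval, lookup and eventstats can behave surprisingly around names like test.number. A sketch of the same pipeline with the fields renamed to dot-free names first; this is purely an assumption to test, not a confirmed explanation or fix (field and file names as in the question, #ids renamed to id_count):

    | inputlookup faketestlogs.csv
    | rename test.number AS test_number, test.id AS test_id
    | eval test_number=mvindex(test_number,0,0)
    | lookup fakedictionary.csv test.number AS test_number OUTPUT color
    | eval mydump=test_number
    | eval mydump2=color
    | eventstats dc(test_id) AS id_count by test_number
    | lookup fakedictionary.csv test.number AS test_number OUTPUT color AS color2
    | search test_number=500

If color and mydump2 come back populated with the renamed fields, the dotted names are at least part of the story.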

How to get the upcoming Friday date

I have a date field in my feed, formatted like "2/15/2019", and I want to compare it against the date of the upcoming Friday in a search. Please help me work out how to do this.
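
A sketch, assuming the feed date is month/day/year and that "upcoming Friday" means the next Friday after today (feed_date is a placeholder for your actual field):

    | makeresults
    | eval feed_date="2/15/2019"
    | eval feed_day=strftime(strptime(feed_date, "%m/%d/%Y"), "%Y-%m-%d")
    | eval upcoming_friday=strftime(relative_time(now(), "+7d@w5"), "%Y-%m-%d")
    | eval is_upcoming_friday=if(feed_day=upcoming_friday, "yes", "no")

The "+7d@w5" modifier jumps a week ahead and then snaps back to Friday, which lands on the next Friday relative to today; replace the makeresults/eval lines with your real search and field.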

Default indexes in Splunk Enterprise

My Splunk Enterprise instance has been running for a few months. I'm sending all my logs (HEC and UDP) to the index "main". However, I see some other indexes defined; mainly I'm concerned about the top-consuming ones: `_audit`, `_internal` and `_introspection`. ![indexes][1] What processes are sending data to them? What value do they have for me? Are they consuming my license quota? And where can I configure/disable them? Thanks [1]: https://i.imgur.com/UUBJFf2.png
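
These underscore-prefixed indexes hold Splunk's own operational data (audit trail, internal logs, resource-usage introspection), and data in the internal indexes does not count against the daily license quota. A sketch for seeing exactly which sources and sourcetypes are writing to them, so you can judge their value for yourself:

    | tstats count where index=_internal OR index=_audit OR index=_introspection by index, sourcetype, source
    | sort - count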

_thefishbucket empty

Hi. We are migrating our Splunk instance to a new server, and we do not want it to re-index a directory that we have configured as a monitor input. It was recommended that I copy over our fishbucket. I'm looking in /opt/splunk/var/lib/splunk and I see _thefishbucket.dat, but it's empty. How can I prevent a re-index if my fishbucket is empty? Thanks!
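
A sketch for checking, from a search on the instance doing the monitoring, which files the tailing processor is tracking and how far it has read into them. This is an assumption about where to look rather than a migration procedure, but if read positions show up here, they are being recorded somewhere and the empty _thefishbucket.dat file is probably not the whole story:

    | rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local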

Need help getting a number value and averaging it

I am trying to get a value, in this case the number of seconds to respond, so that I can graph it or set alerts on it. Below is the kind of log entry I am dealing with:

    STATUS | wrapper | main | 2019/02/10 10:38:08.885 | Pinging the JVM took 5 seconds to respond.

So I need help pulling out the number, and with a search that lets me graph it per host.
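
A sketch, assuming the wording "Pinging the JVM took N seconds to respond" is consistent; the index, sourcetype and extracted field name are placeholders:

    index=your_index sourcetype=your_sourcetype "Pinging the JVM took"
    | rex "Pinging the JVM took (?<jvm_ping_seconds>\d+) seconds to respond"
    | timechart span=5m avg(jvm_ping_seconds) by host

The same extracted field can then drive an alert search, for example with | stats max(jvm_ping_seconds) by host and a threshold condition.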

Using transaction or stats to filter different parts of a query

Hi Experts! I'm looking for a way to show where I get booking responses with the SAME (duplicate) platformid but different reactorids. Example:

    2019/02/03 12:02:14.458 [server1] event="Received booking response" platformid=12345 reactorid=72E1X9785
    2019/02/04 18:02:14.458 [server2] event="Received booking response" platformid=12345 reactorid=92D3X1865

I tried a mix of dedup and transaction but can't seem to filter down to only what I want. Thanks in advance, Paul
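
A sketch using stats instead of transaction, assuming platformid and reactorid are already extracted (the index is a placeholder): group by platformid, count distinct reactorids, and keep only the groups with more than one.

    index=your_index event="Received booking response"
    | stats dc(reactorid) AS reactor_count, values(reactorid) AS reactorids, min(_time) AS first_seen by platformid
    | where reactor_count > 1
    | convert ctime(first_seen)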

How to capture individual loading times of URIs in a URL using Splunk?

Hi all, I know that in Splunk I can capture the end-to-end response time of a URL. But is there any option to capture metrics like the ones in Google developer tools? I want to capture metrics like DNS lookup time, initial connection time, etc. for every URI within a URL. Please let me know if there is an app or any other solution for this. ![alt text][1] [1]: /storage/temp/267615-2019-02-10-17-54-46.png

Background image for any chart!!

Can we have a background image on a line chart in Splunk? I have a line chart that does its job (with a plain background), but I want to use a different image altogether (not just a background color) and display my line chart on top of it, instead of that plain background. Please suggest. Thanks

Restrict access to Savedsearches for specific roles

Hi, I have many saved searches running in my environment that regularly write data to summary indexes and the metric store, and some saved searches that just perform a basic search function. I have restricted read access to saved searches for all users in the environment except those who belong to the admin role. However, I would now like to grant read access to some specific saved searches to a specific role/group. I tried the below, however it does not work: the users given access to the search Summary_Find can't see any saved searches.

    [savedsearches]
    access = read : [ dev, admin, power ], write : [ admin, power, dev ]
    export = none

    [savedsearches/Summary_Find]
    access = read : [ admin, business_admin, dev, support, power ], write : [ admin, dev, power ]
    export = none
    owner = nobody

Please let me know if there is a way to do this in Splunk.

How to troubleshoot why a Universal Forwarder is not sending data to the Deployment Server?

Hi all, I have read and tried numerous if not all of the answers on subjects similar to mine. I installed a Deployment Server on my Splunk Enterprise server. I followed the tutorial and built the "sendtoindexer" app following the [Splunk App for Windows Infrastructure 1.4][1] documentation. I put "*Splunk_TA_Windows*" in the correct folders on the Deployment Server. In fact everything works perfectly, except that the Universal Forwarder on the deployment client doesn't use the outputs.conf from the "sendtoindexer" app... The [outputs.conf][2] file is in the folder. ![alt text][3]

When I look at splunkd.log on the UF I see this:

    02-11-2019 15:16:04.947 +1100 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.
    02-11-2019 15:16:16.497 +1100 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
    02-11-2019 15:16:22.364 +1100 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
    02-11-2019 15:16:28.497 +1100 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

I admit the message "Please configure outputs.conf" is pretty obvious, but it doesn't solve my problem: when I troubleshoot with "splunk btool outputs list --debug" there is no sign of the file being used. ![alt text][4]

I have restarted and uninstalled/reinstalled the UF multiple times, but it never works; I can't see any logs in my Splunk Enterprise instance. **But** when I just copy the outputs.conf file from *C:\Program Files\SplunkUniversalForwarder\etc\apps\sendtoindexer* to *C:\Program Files\SplunkUniversalForwarder\etc\system\local* and restart the UF, everything works fine and the logs are sent to my Splunk instance, so there is no network problem, and the btool command then shows the stanzas from the conf file. So the conf file itself is OK... I'm pretty lost right now; I've run so many tests. Help please.

[1]: https://docs.splunk.com/Documentation/MSApp/1.5.1/MSInfra/Createthesendtoindexerapp
[2]: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Outputsconf?utm_source=answers&utm_medium=in-answer&utm_term=outputs.conf&utm_campaign=refdoc
[3]: /storage/temp/267616-screenshot-1.jpg
[4]: /storage/temp/267617-btool-outputs-list-debug.jpg

Integrate Microsoft Cloud App Security with Splunk

Hi, I want to integrate Microsoft Cloud App Security with Splunk. Is there any add-on available for this? Which fields are required for the integration, and how is it done? Thanks,

Splunk Enterprise Software Installer - deb and tar.gz files

Hi, I just wanted to ask about the Splunk software installer files, namely the tar.gz and deb packages. We currently have Splunk Enterprise v6.5.2 and we want to upgrade to v6.6.5. Splunk Enterprise v6.5.2 was originally installed using the deb package. Is it okay to upgrade to Splunk Enterprise v6.6.5 using the tar.gz file, or do we need to upgrade/install using the deb package only? I hope you can enlighten us. The reason we ask is that we are more familiar with installing/upgrading using the tar.gz file.

How to resolve the below SNMP error when trying to convert MIB files to .py? Can someone please help with this?

    build-pysnmp-mib -o IMAP_NORTHBOUND_MIB-V2.py IMAP_NORTHBOUND_MIB-V2.mib
    Empty input
    smidump -k -f python IMAP_NORTHBOUND_MIB-V2.mib | /bin/libsmi2pysnmp fails
    make sure you are using libsmi version > 0.4.5 (or svn)

Conditional alerts in Splunk

I want to generate an alert on a specific condition: if an alert is generated for an id for the first time, an email needs to be sent immediately. If the next alert for the same id is received within 30 minutes, then the email notification should be sent only after 30 minutes, with the number of alerts received for that id within the last 30 minutes.
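
A sketch of one possible approach, assuming the events carry a field called id: let the immediate per-id email come from a per-result alert with throttling configured to suppress further triggers for the same id value for 30 minutes, and let a second scheduled search like the one below produce the follow-up summary with the count of alerts in the last 30 minutes (index and sourcetype are placeholders):

    index=your_index sourcetype=your_sourcetype earliest=-30m
    | stats count AS alerts_last_30m, min(_time) AS first_seen, max(_time) AS last_seen by id
    | where alerts_last_30m > 1
    | convert ctime(first_seen) ctime(last_seen)

Whether this matches the exact timing you need depends on when the first alert fires, so treat it as a starting point rather than a finished alert design.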