Channel: Questions in topic: "splunk-enterprise"

How to join two searches that both have subsearches and transactions

I have an index with email data. With it, I have two separate searches that use subsearches to put together a set of logs, logs_A and logs_B. All the relevant logs are logs_A + logs_B, but I haven't figured out how to combine them. logs_A has a dcid but no message ID; logs_B has a message ID but no dcid.

**query A --> logs_A**

    index=email
        [ search index=email sender=jsmith@aol.com | dedup dcid server | table dcid server ]
    | transaction dcid server

**query B --> logs_B**

    index=email
        [ search index=email sender=jsmith@aol.com | dedup message_id server | table message_id server ]
    | transaction message_id server

According to the [docs][1], *"If there is a transitive relationship between the fields in the fields list and if the related events appear in the correct sequence, each with a different timestamp, transaction command will try to use it."* I tried combining my two queries into one and changing the order of server, dcid, and message_id, but I could not get it to work no matter what I tried. logs_A tends to come before logs_B, but not always, so I think that's why the transitive property isn't working. Typically A and B are within 2-3 seconds (at most) of each other; more often than not they share timestamps. I also tried running both searches (each with its subsearch) and then using `append`, but that didn't seem to work either. Feel like I'm going in circles on this one. Any ideas would be appreciated.

[1]: http://docs.splunk.com/Documentation/Splunk/7.1.2/SearchReference/Transaction
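
Since both log sets share the `server` field and A/B land within a few seconds of each other, here is one pattern that may be worth trying - a sketch only, untested against this data, with `maxspan=5s` an assumption based on the 2-3 second gap described above: `append` the two result sets and let `transaction` group on `server` alone with a tight `maxspan`:

    index=email
        [ search index=email sender=jsmith@aol.com | dedup dcid server | table dcid server ]
    | append
        [ search index=email
            [ search index=email sender=jsmith@aol.com | dedup message_id server | table message_id server ] ]
    | transaction server maxspan=5s

Note that grouping on `server` alone will over-group if several unrelated messages hit the same server inside the window, so the dcid/message_id values preserved in each grouped event may be needed to split those cases apart afterwards.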

Why doesn't "Wrap results" fit to the screen (or is there a way?)

After I perform a search and click the "Format" icon above the search results, there is an option for "Wrap results". I check this and it attempts to wrap the results, but I still have to scroll incredibly far to the right to see each result. It seems to just pick some arbitrary length and wrap there. I should note that the events contain plenty of spaces that could be used as clean wrap points, but the system just doesn't seem to use them. Is there some sort of config setting that controls wrap behavior that I could tweak? I'm on Splunk 7, for what it's worth.

Why is table row highlighting not working when using text comparison for cell value?

I have gone through all the answers here and cannot find one that was actually answered with enough detail to make this work. All the examples from the dashboard app refer to int values; I have been unsuccessful in every attempt to make it work with string comparisons. Does anyone have a working example with rows highlighted based on text values?
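
For comparison, Splunk's built-in Simple XML color format does match string cell values directly, though it colors the single cell rather than the whole row (full-row highlighting generally requires a JS table renderer extension). A minimal sketch - the field name `log_level` and the color values are assumptions:

    <table>
      <search>
        <query>index=_internal | stats count by log_level</query>
      </search>
      <!-- exact string match on the cell value -->
      <format type="color" field="log_level">
        <colorPalette type="map">{"ERROR":#DC4E41,"WARN":#F8BE34,"INFO":#53A051}</colorPalette>
      </format>
    </table>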

How can I use rex to dedup and extract certain data from a field?

Hello, I am new to using `rex` and `extract`. I am trying to come up with a regex to extract certain data from a field, but only if that field exists. For example, in this event:

[[0;37m2018-08-28 22:40:32.999[0m] [32mINFO [0m [pid:27567] [request_id:xxxxxxxx] [host:xxxxxxx] [remote_ip:xxxxx] [session_id:xxxxxxxx] [auth_id:] method=GET path=/questions/2044288 format=html controller=questions action=show status=410 duration=130.55 view=118.49 db=1.78 **params={"id"=>"2044288"}**

I am trying to extract the id number from the params field and export it as an article_number field. Can somebody help me with how to remove duplicates and use rex with extract? So far I came up with:

    index="cto-lc-app-prdidx" status=410 path="*" params="*" | dedup path,params | rex field=params ""

Thanks, -Ameya
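
One way to fill in that empty `rex` - a sketch that assumes `params` always has the shape `{"id"=>"<digits>"}`:

    index="cto-lc-app-prdidx" status=410 path="*" params="*"
    | rex field=params "\"id\"=>\"(?<article_number>\d+)\""
    | dedup article_number

The `dedup` is moved after the extraction so duplicates are removed on the extracted article number itself rather than on the raw path/params strings.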

SAML Configuration: What does "you must use the same signing certificate on all search head members" mean?

Today, I use CA signed certificates on all search head cluster members. These members are behind a load balancer. The load balancer DNS name and the unique host name (per server) are present in the subject alternative name. The server certificate is part of a chain along with Intermediate and Root certificates. This all seems to work fine. I need to configure SAML, and have, for a single member. I can't make heads or tails of the "configuring SAML in a search head cluster" doc. According to the doc, there is a common "signing certificate" I need to copy to the other members. What is this? Has anyone had experience with SAML configuration in a search head cluster? Your thoughts are appreciated.

Index Processor: The index processor has paused data flow. Too many tsidx files in....

Hello, I have recently inherited a Splunk Enterprise (v6.6) instance with some serious issues. The architecture is a distributed one, with the search head, indexer, and heavy forwarder all residing on different hosts. The primary problem I am facing is that after a short period of time the queues (parsing, aggregator, typing, and index) reach 100%, resulting in the error mentioned in the title. Upon investigating the index file directory where the errors are reported, there are 100+ .lock files that seem to replicate as file.lock, file.lock.lock, file.lock.lock.lock, etc. The machines running Splunk have more than enough RAM, CPU, and IOPS. I have manually run splunk-optimize with no effect. I am lost on what to do next and am almost considering deleting the index (not preferred) to resolve this issue. Any help would be much appreciated.
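
For watching how the blockage propagates back through the pipeline, one diagnostic sketch (the host filter is a placeholder) charts queue fill percentage from the indexer's own metrics:

    index=_internal host=my-indexer source=*metrics.log* group=queue
    | eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=5m max(fill_pct) by name

If the index queue fills first and the others back up behind it, that points at the tsidx/.lock problem on disk rather than at parsing-time load.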

Upgrade to 7.1.2 from 6.5.1 - Universal Forwarder Upgrade

Hello Team, we are planning to upgrade Splunk Enterprise v6.5.1 to v7.1.2. I understand that we need to upgrade or make changes to the SSL/TLS config as per http://docs.splunk.com/Documentation/Forwarder/7.1.2/Forwarder/Compatibilitybetweenforwardersandindexers

Current UF versions deployed and connecting to heavy forwarders: 6.2.6, 6.3.0, 6.3.7, 6.4.3, 6.5.1, 6.5.2.

I am confused because the link says to change the cipher suite on the forwarder, but when I clicked through to the known issue list it was not clear where to make the changes. From the known issues:

- SPL-141964 applies to splunktcp-ssl - we are not using it.
- SPL-141961 seems to be applicable, but it states: "Upgrade your older instances to the latest maintenance releases or on your 6.6.x Splunk instances. Add the following stanza to server.conf:"

        [sslConfig]
        sslVersions = *,-ssl2
        sslVersionsForClient = *,-ssl2
        cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

Can you advise what changes need to be made? I believe it is SPL-141961, but where does this change need to be made - IDX/HF/UF?

Do I need to download an old version of the app?

I want to use the horseshoe meter to visualize my data, but my Splunk version is 6.4. The dashboard examples app shows that it supports Splunk 6.4: ![alt text][1] But on splunkbase (splunkbase.splunk.com/app/3166/) it says it only supports versions 7.1, 7.0, and 6.6: ![alt text][2] Do I need to download an old version of the app to make the horseshoe meter work on Splunk 6.4?

[1]: /storage/temp/255839-wepng.png
[2]: /storage/temp/255840-we1.png

Search with different MAC formats in dashboard

Hi Splunkers, I've created a dashboard that searches for a MAC address and displays L1-L3 information. My only problem is that the search box only accepts the aa:aa:aa:aa format. What is the best way to allow different input formats? I want to be able to search with the `:`, `-`, and `.` separator formats.
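
One approach is to normalize both sides before comparing: strip every separator from the indexed field and from the user's input, then match on the bare hex. A sketch - `mac` is an assumed field name, `$mac_tok$` an assumed dashboard token, and the index/sourcetype are placeholders:

    index=network sourcetype=arp
    | eval mac_norm=lower(replace(mac, "[:\.-]", ""))
    | eval input_norm=lower(replace("$mac_tok$", "[:\.-]", ""))
    | where mac_norm=input_norm

This way aa:bb:cc:dd:ee:ff, aa-bb-cc-dd-ee-ff, and aabb.ccdd.eeff all reduce to the same 12-character string.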

Charting: custom primary axis and secondary axis

I have 5 series and I would like to create a chart with all of them: 3 series on the primary axis and 2 on a secondary axis. I am trying to see the behavior of the series over time. Is there a component/chart in Splunk that can provide a primary axis and a secondary axis?
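
Splunk's built-in charts support this via chart overlays, which draw the named series against a second Y axis. A minimal Simple XML sketch - the search and the series names `avg(s4)`/`avg(s5)` are placeholders for whatever your data produces:

    <chart>
      <search>
        <query>index=main | timechart span=1h avg(s1) avg(s2) avg(s3) avg(s4) avg(s5)</query>
      </search>
      <option name="charting.chart">line</option>
      <!-- series listed here are drawn against the second Y axis -->
      <option name="charting.chart.overlayFields">"avg(s4)","avg(s5)"</option>
      <option name="charting.axisY2.enabled">1</option>
    </chart>

The same options can also be set interactively from the chart's Format menu (Chart Overlay).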

Index Detail: Instance page on the DMC not showing any data

As the title says, there is no data on the Index Detail: Instance page. The search results say "Search is waiting for input..." for all the panels. Please advise a solution to this.

JSON data parsing

{ "results": [ { "statement_id": 0, "series": [ { "name": "sqlserver_server_properties", "columns": [ "time", "last" ], "values": [ [ "2018-08-07T00:00:00Z", 144 ]} This is my json data . I extracted the time and last using index=..|spath output=time path=results{}.series{}.values{}{0}|spath output=count1 path=results{}.series{}.values{}{1} I want to expand the values as separate events . Help would be appreciated.

Script in custom alert action app is not working properly

I have created a custom alert action app to restart Splunk. Here is the restart_splunk.bat file which I use in the custom alert action app:

    :start
    cd "C:\Program Files\Splunk\bin\"
    break>"C:\Program Files\Splunk\etc\apps\restart_splunk\bin\data.dat"
    splunk search "| rest /services/search/jobs | search dispatchState=Running OR dispatchState=Finalizing OR dispatchState=Backgrounded | table author" -auth admin:changeme >> "C:\Program Files\Splunk\etc\apps\restart_splunk\bin\data.dat"
    for /f %%i in ('find /v /c "" ^<"C:\Program Files\Splunk\etc\apps\restart_splunk\bin\data.dat"') do set myint=%%i
    IF %myint%==3 (
        cd "C:\Program Files\Splunk\bin\"
        splunk restart
    )
    IF NOT %myint%==3 (
        timeout 60
        goto start
    )

When I run this script manually, it works fine. But when I schedule the custom alert, it just stops Splunk instead of restarting it. I tried using "splunk stop" and "splunk start" instead of "splunk restart", but the result is the same. Has anyone else faced a similar situation?
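
One guess worth testing, as a sketch only: when the alert action's script runs as a child of splunkd, the stop phase of `splunk restart` may take the script (and therefore the pending start) down with it. Launching the restart as a detached process might survive that; whether it actually does depends on how splunkd launches alert scripts on your system:

    REM sketch: run the restart in a separate, detached process
    start "" "C:\Program Files\Splunk\bin\splunk.exe" restart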

Multivalue field values to single-value rows

![alt text][1]

This is the output of my JSON data. I want to see it as separate rows, not in a single row. When I do mvexpand on time, it takes all the count1 values for each time. My output should be separate rows, each with one time and one count1 value.

[1]: /storage/temp/255841-jsonop.png

The precise sourcetype setting when importing ESET logs

I currently use ESET Remote Administrator, but I cannot get the log fields split with a sourcetype. Please tell me the precise sourcetype setting for importing ESET logs. Sample events:

    2018-08-28T10:59:14+09:00 eset.user.info {"message":"1 2018-08-28T01:59:14.307Z iptpeset01 ERAServer 5360 - - {\"event_type\":\"Audit_Event\",\"ipv4\":\"172.18.1.30\",\"hostname\":\"eset01\",\"source_uuid\":\"014b605e-aede-40a3-b15e-c2bc1b3509a5\",\"occured\":\"28-Aug-2018 01:59:14\",\"severity\":\"Information\",\"domain\":\"Native user\",\"action\":\"Logout\",\"target\":\"Administrator\",\"detail\":\"Logging out native user 'Administrator'.\",\"user\":\"00000000-0000-0000-7002-000000000002\",\"result\":\"Success\"}"}
    2018-08-28T11:34:16+09:00 eset.user.warn {"message":"1 2018-08-28T02:34:16.220Z iptpeset01 ERAServer 5360 - - {\"event_type\":\"Threat_Event\",\"ipv4\":\"172.17.18.249\",\"hostname\":\"local\",\"source_uuid\":\"e2b5397c-c61b-43e0-9ae6-f53acf0cae7b\",\"occured\":\"28-Aug-2018 02:33:47\",\"severity\":\"Warning\",\"threat_type\":\"test file\",\"threat_name\":\"Eicar\",\"scanner_id\":\"HTTP filter\",\"scan_id\":\"virlog.dat\",\"engine_version\":\"17954 (20180827)\",\"object_type\":\"file\",\"object_uri\":\"http://www.eicar.org/download/eicar.com.txt\",\"action_taken\":\"connection terminated\",\"threat_handled\":true,\"need_restart\":false,\"username\":\"yamada\",\"processname\":\"C:\\\\Program Files\\\\Mozilla Firefox\\\\firefox.exe\",\"circumstances\":\"Threat was detected upon access to web.\",\"hash\":\"3395856CE81F2B7382DEE72602F798B642F14140\"}"}
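
These events are a syslog-style timestamp followed by JSON whose `message` field itself wraps another escaped JSON document, so no stock sourcetype will unpack them in one pass. A search-time sketch, assuming the sourcetype name `eset` and that the outer JSON always starts at the first `{`:

    sourcetype=eset
    | rex field=_raw "(?<outer>\{.*\})$"
    | spath input=outer path=message output=inner
    | rex field=inner "(?<payload>\{.*\})"
    | spath input=payload

The first `spath` un-escapes the message string; the second `rex` trims the leading "1 <timestamp> <host> ERAServer ..." header so the inner JSON (event_type, ipv4, severity, and so on) can be parsed by the final `spath`.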

Search query to fetch saved searches with configured email IDs

We have around 700+ searches and reports (saved searches) configured on our search head, and for most of them email actions are enabled. We now have a requirement to replace a few email IDs that no longer exist and to add some additional email DLs. Checking every saved search by hand for the old address is a tedious process. Is it possible to build a search that, given an email ID or DL, reports the names of all the saved searches configured with it, so that I can go ahead and replace them quickly? If yes, kindly help with the request.
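
A sketch using the saved-searches REST endpoint (the address `old.user@example.com` is a placeholder):

    | rest /servicesNS/-/-/saved/searches
    | search action.email=1 action.email.to="*old.user@example.com*"
    | table title eai:acl.app action.email.to

This lists every saved search, in any app, whose email action includes the given recipient, together with the app it lives in and the full recipient list.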

Search Head Field Extractions: Are They Superficial (or is there something I need to understand better)?

Newbie here. So, I have learned that there are two types of field extractions: those that happen during the input phase and those that are created on search heads, such as calculated fields, field aliases, and other field extractions. I'd like to know whether these field extractions happen only at search time (computed in memory) or are permanently written to the index on disk. For example, if I made a calculated field called "Status" today and it works as expected, then I turn it off (disable or delete it) 5 days from now, will the events from today through those 5 days permanently carry the "Status" calculated field even though it's been deleted? Thanks in advance.
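
For reference, a calculated field lives entirely in configuration - a props.conf stanza evaluated per search - not in the indexed data, so disabling it removes the field from all past and future searches alike. A sketch (the sourcetype and the logic are assumptions):

    # props.conf - evaluated at search time; nothing is written to indexed events
    [access_combined]
    EVAL-Status = if(status >= 500, "error", "ok")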

Number of members in a Search Head Cluster

Hi at all, for a customer I need to replicate knowledge objects between two search heads and provide high availability. The best solution would be a Search Head Cluster, but the problem is that I have only two search heads, and Splunk best practices require at least three members. From your experience, could I use a Search Head Cluster with only two members without great problems? If I cannot use a cluster, as a workaround I thought of using a script to replicate all the knowledge objects from SH1 to SH2. Can anyone suggest a different workaround? Bye. Giuseppe
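
For what it's worth, Splunk's documentation does describe running a cluster without dynamic captain election by designating a static captain, which is one way a two-member cluster is made workable (at the cost of automatic failover). A CLI sketch - the URI is a placeholder:

    # on the member that should act as captain
    splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false
    # on the other member
    splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false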

Search Head not indexing _internal and summarization errors

Our Splunk search head is no longer indexing _internal logs (splunkd.log etc.); the searches still run but are really slow. We see the following errors:

    08-29-2018 11:08:56.126 +0200 WARN AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item=""). This may be a bug.
    08-29-2018 11:08:58.109 +0200 WARN AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item=""). This may be a bug.
    08-29-2018 11:08:59.044 +0200 WARN AdminManager - Handler 'summarization' has not performed any capability checks for this operation (requestedAction=list, customAction="", item=""). This may be a bug.
    08-29-2018 11:09:00.047 +0200 WARN TcpOutputProc - Forwarding to indexer group cluster blocked for 5560 seconds.

Does anyone know what might be causing this?