Questions in topic: "splunk-enterprise"

change splunk app logo

I am new to Splunk. How can I change a Splunk `app logo`? Can anyone give me detailed instructions/steps?
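A minimal sketch of the usual approach, assuming a standard install layout (the app name is a placeholder): Splunk looks for specifically named PNG files in the app's `appserver/static` directory, and the app's display name comes from `app.conf`.

```
# $SPLUNK_HOME/etc/apps/<your_app>/appserver/static/
#   appIcon.png       36x36  - shown in the app menu and launcher
#   appIcon_2x.png    72x72  - high-DPI variant
#   appLogo.png      160x40  - shown in the app bar (optional)
#   appLogo_2x.png   320x80  - high-DPI variant

# $SPLUNK_HOME/etc/apps/<your_app>/local/app.conf
[ui]
is_visible = true
label = My App Name
```

Restart Splunk afterwards so the new static files are picked up; the exact icon sizes vary slightly between Splunk versions, so treat the dimensions above as a guideline rather than gospel.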

bins for count values

Hi all, I have a problem and I don't know if it's solvable. I have a search with a `stats` command using the `values` option (e.g. `| stats values(prog) AS prog BY key`); `prog` can have few or many values. I need to use `key` and `prog` in a drilldown to another dashboard, so I created a hidden field that passes the progs joined with "OR" separators (`value1 OR value2 OR value3 OR ...`). The drilldown runs correctly when there are not too many progs (up to around 150), but beyond that I hit the URL length limit (error message "Request-URI Too Long"). How can I solve this? I thought of showing one row in my main dashboard for every 150 progs and using those rows in the drilldown, as in the sketch below, but I don't know if that's possible or how to do it. Has anyone any idea? Thank you. Bye. Giuseppe
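A hedged sketch of the "one row per 150 progs" idea in SPL; the field names come from the question, the chunk size of 150 is taken from the observed URL limit, and this is untested against real data:

```
| stats values(prog) AS prog BY key
| mvexpand prog
| streamstats count AS n BY key
| eval chunk = ceiling(n / 150)
| stats values(prog) AS prog BY key chunk
| eval prog_filter = mvjoin(prog, " OR ")
```

Each resulting row then carries at most 150 progs in `prog_filter`, which keeps any single drilldown URL under the length limit. An alternative that avoids the limit entirely is to pass only `key` to the target dashboard and let that dashboard recompute the progs itself.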

How to update the host name in 200 alerts?

If I have 200 alerts and want to change the host name in all of them, how do I do that?
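One hedged approach, assuming the alerts live in `savedsearches.conf` and the host name appears literally in each search string; back up the file first, and note that the app name, host names, and credentials below are placeholders:

```
# bulk-replace the host name across an app's saved searches
sed -i.bak 's/host=oldhost/host=newhost/g' \
    $SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf

# reload the savedsearches config without a full restart
curl -k -u admin:changeme \
    "https://localhost:8089/servicesNS/nobody/<app>/configs/conf-savedsearches/_reload"
```

Privately saved alerts may also live under `$SPLUNK_HOME/etc/users/<user>/<app>/local/savedsearches.conf`, so it is worth checking there as well.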

Split a column in the search data into multiple columns

Hi All, I have a file of tickets to analyse. I want to arrange the data as shown in the following image. What can I do to achieve this? ![alt text][1] [1]: /storage/temp/226683-expected.png

Splunk Stream: Finding NTLM V1 and LM Usage

Hi, this article describes how NTLM v1 and LM usage can be detected: https://blogs.technet.microsoft.com/askds/2012/02/02/purging-old-nt-security-protocols/

Based on the article I came up with the following Wireshark filter:

```
(ntlmssp.auth.ntresponse) || ( !(ntlmssp.auth.lmresponse == 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00) && (ntlmssp.auth.lmresponse))
```

Is there a way I could configure (or abuse) the Splunk App for Stream to log events based on that filter? It will probably be difficult or impossible to configure a regex-based field using "src_content" or "dest_content". In `Splunk_TA_stream/default/vocabularies/smb.xml` and `Splunk_TA_stream/default/streams/smb` I do not see any fields that correspond to the LAN Manager Response or the NTLMv1 Response. Running strings on streamfwd and grepping for smb shows that an SMBProtocolHandler is implemented, so I suspect the binary would have to be modified. Is this assumption correct? Regards, Chris

Not getting all the files from forwarders

Hi, I know there are a lot of questions on this topic, but I am stuck. I have an application server that forwards its logs to Splunk. The logs are written in an unusual way: when a process runs, it picks one of the existing log files at random and appends to it. So even if a log's modified date is today, opening it may show an entry from three months ago at the top and today's process at the bottom; the next process then appends to some other log, and that is the cycle.

Here is my inputs.conf:

```
[default]
host = xxxxxx

[monitor://D:\y\Log Files\]
disabled = 0
index = z
followTail = 0
sourcetype = Data Import
ignoreOlderThan = 30d
```

Here are the screenshots. (I could not post the last screenshot, but it shows the end of the same log with today's date.)

My question: I am not getting all the log files from that location. I am not sure how long this has been happening; I only found out a couple of days ago. Say there are 15 log files from yesterday: I only got 3 of them. To troubleshoot I looked at splunkd.log, but that did not give me much. This is the latest entry in splunkd.log:

```
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-wmi.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 10000000000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-MonitorNoHandle.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-admon.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-netmon.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-perfmon.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: run once
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-powershell.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-powershell.exe --ps2
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-regmon.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-winevtlog.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - New scheduled exec process: D:\splunk\bin\splunk-winprintmon.exe
01-09-2018 12:21:38.010 -0500 INFO ExecProcessor - interval: 60000 ms
01-09-2018 12:21:38.041 -0500 INFO PipelineComponent - Launching the pipelines for set 0.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - TailWatcher initializing...
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: batch://$SPLUNK_HOME\var\spool\splunk\...stash_new.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\etc\splunk.version.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\license_usage_summary.log.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\metrics.log.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://$SPLUNK_HOME\var\log\splunk\splunkd.log.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Parsing configuration stanza: monitor://D:\y\Log Files\.
01-09-2018 12:21:38.088 -0500 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
01-09-2018 12:21:38.088 -0500 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Adding watch on path: D:\y\Log Files.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Adding watch on path: D:\splunk\etc\splunk.version.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Adding watch on path: D:\splunk\var\log\splunk.
01-09-2018 12:21:38.088 -0500 INFO TailingProcessor - Adding watch on path: D:\splunk\var\spool\splunk.
01-09-2018 12:21:38.088 -0500 INFO TailReader - Registering metrics callback for: tailreader0
01-09-2018 12:21:38.088 -0500 INFO TailReader - Starting tailreader0 thread
01-09-2018 12:21:38.088 -0500 INFO TailReader - Registering metrics callback for: batchreader0
01-09-2018 12:21:38.088 -0500 INFO TailReader - Starting batchreader0 thread
01-09-2018 12:21:38.088 -0500 INFO loader - Limiting REST HTTP server to 3333 sockets
01-09-2018 12:21:38.088 -0500 INFO loader - Limiting REST HTTP server to 1365 threads
01-09-2018 12:21:39.710 -0500 INFO WatchedFile - Will begin reading at offset=988394 for file='D:\y\Log Files\DataImport-62-[2384].log'.
01-09-2018 12:21:39.726 -0500 INFO WatchedFile - Will begin reading at offset=3402522 for file='D:\y\Log Files\DataImport-62-[2364].log'.
01-09-2018 12:21:39.804 -0500 INFO TcpOutputProc - Connected to idx=10.14.0.246:9997, pset=0, reuse=0.
01-09-2018 12:21:52.876 -0500 INFO WatchedFile - Will begin reading at offset=344718 for file='D:\y\Log Files\DataImport-62-[5712].log'.
01-09-2018 12:22:12.220 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\splunkd_ui_access.log'.
01-09-2018 12:22:12.220 -0500 INFO WatchedFile - Will begin reading at offset=50885 for file='D:\splunk\var\log\splunk\splunkd-utility.log'.
01-09-2018 12:22:12.220 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\searchhistory.log'.
01-09-2018 12:22:12.220 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\scheduler.log'.
01-09-2018 12:22:12.236 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\remote_searches.log'.
01-09-2018 12:22:12.236 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\mongod.log'.
01-09-2018 12:22:12.314 -0500 INFO WatchedFile - Will begin reading at offset=12261005 for file='D:\splunk\var\log\splunk\metrics.log'.
01-09-2018 12:22:12.314 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\license_usage_summary.log'.
01-09-2018 12:22:12.314 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\splunk\var\log\splunk\license_usage.log'.
01-09-2018 12:22:12.314 -0500 INFO WatchedFile - Will begin reading at offset=11480 for file='D:\splunk\var\log\splunk\conf.log'.
01-09-2018 12:22:12.314 -0500 INFO WatchedFile - Will begin reading at offset=77366 for file='D:\splunk\var\log\splunk\audit.log'.
01-09-2018 12:50:02.481 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\y\Log Files\DataImport-62-[2384].log'.
01-09-2018 12:50:02.481 -0500 INFO WatchedFile - Will begin reading at offset=0 for file='D:\y\Log Files\DataImport-62-[2384].log'.
01-09-2018 12:50:03.495 -0500 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='D:\y\Log Files\DataImport-62-[2364].log'.
01-09-2018 12:50:03.495 -0500 INFO WatchedFile - Will begin reading at offset=0 for file='D:\y\Log Files\DataImport-62-[2364].log'.
01-10-2018 03:29:25.021 -0500 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='D:\splunk\var\log\splunk\metrics.log'.
01-10-2018 03:29:25.021 -0500 INFO WatchedFile - Will begin reading at offset=0 for file='D:\splunk\var\log\splunk\metrics.log'.
01-10-2018 03:29:25.099 -0500 INFO WatchedFile - Will begin reading at offset=24999075 for file='D:\splunk\var\log\splunk\metrics.log.1'.
```

I deleted splunkd.log and restarted the Splunk service, then checked whether I was getting the missing logs, and that worked for a day: whenever I made a change to a log, it was captured and sent to the indexer. But today it is the same behavior again; I am missing log files in Splunk. I hope this is not too complicated. I am kind of stuck and need a second set of eyes to tell me what I am missing. Any help is appreciated. Thanks
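For what it's worth, two hedged diagnostics on the forwarder may help here: `splunk list inputstatus` reports, per file, whether the tailing processor has read, skipped, or ignored it, and btool shows the effective monitor stanza after all config layering. One thing worth ruling out is `ignoreOlderThan = 30d`: if memory serves, a file skipped by that setting stays on an ignore list until the next restart even if it is appended to later, which would fit the pattern of a restart fixing things for a day.

```
REM run on the forwarder, from the Splunk bin directory
D:\splunk\bin> splunk list inputstatus
D:\splunk\bin> splunk btool inputs list "monitor://D:\y\Log Files\" --debug
```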

Get Data into Splunk from Elasticsearch

Hi, what is the best way to get data into Splunk from Elasticsearch, so I can put data models on top of it? Thanks, Robert Lynch
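In the absence of a dedicated app, one hedged do-it-yourself sketch is to pull documents from Elasticsearch's standard `_search` API and push them to a Splunk HTTP Event Collector (HEC) token. The host names, index name, and token below are all placeholders, and `jq` reshapes each hit into the `{"event": ...}` envelope that HEC expects:

```
curl -s 'http://es-host:9200/myindex/_search?size=100' \
  | jq -c '.hits.hits[] | {event: ._source}' \
  | curl -sk 'https://splunk-host:8088/services/collector/event' \
      -H 'Authorization: Splunk <hec-token>' \
      --data-binary @-
```

For anything beyond a one-off pull you would want a scripted or modular input that pages through results (e.g. with the Elasticsearch scroll API) and tracks a checkpoint so events are not duplicated; once the data is in an index, data models can be built on it as usual.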

Splunk to uCMDB Integration

I have been doing some research and need to know whether there is any Splunk-certified, secure app on Splunkbase for integrating Splunk with uCMDB, or, failing that, what the correct and tested procedure is for getting data from uCMDB into Splunk (e.g. CIs, CI attributes, etc.). Going through Splunkbase I found only one app, named CMDB-to-ITSI, and it is not just indexing the uCMDB log files via Splunk. Appreciate your assistance in advance. Thanks.

How to set up an HP ProCurve switch to send only security logs to a Splunk server

How can I set up an HP ProCurve switch to send only its security logs to a Splunk server?
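A hedged sketch of both ends, with two caveats: ProCurve CLI syntax varies by model and firmware (check the switch's Management and Configuration Guide), and syslog filters by severity/facility rather than by a literal "security" category, so severity is the usual lever. The IP, severity, and sourcetype below are placeholders:

```
logging 10.1.2.3
logging severity warning
```

```
# inputs.conf on the receiving Splunk instance
[udp://514]
sourcetype = hp:procurve
connection_host = ip
```

In production, many people put a dedicated syslog server (rsyslog/syslog-ng) in front and have a Universal Forwarder monitor its files, which avoids dropping UDP packets during Splunk restarts.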

Blank Login Page

Hi, I have just installed Splunk for the first time on my Ubuntu Linux machine. The only change I made during configuration is the HTTPS port in web.conf, so that it uses port 12300 instead of 8000 (port 8000 is being used by another program). I start Splunk as usual; the output says the ports are open and available, and links me to the Splunk web interface. When I click the link it opens in Firefox and goes to the Splunk web login page, but nothing is displayed on the page. I know the page is otherwise working: the URL changes to the login page path, and I can view the page source, which is populated with what should be shown on the page. I have also gone through the log files and there are no errors there either. Everything I have looked at suggests it should be running smoothly, but it isn't. I have tried solutions from the internet, like stopping iptables, but I don't have iptables set up, so that is not the cause. If anyone has had and fixed this problem, please enlighten me; I feel like I have hit a brick wall trying to install this. Cheers
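A couple of hedged checks that help split this into a server-side vs. browser-side problem; the paths assume a default install under /opt/splunk, the port is the one from the question, and you should drop the `https`/`-k` if SSL is not enabled on splunkweb:

```
# watch the web server's log while reloading the page
tail -f /opt/splunk/var/log/splunk/web_service.log

# fetch the login page outside the browser; a fully populated HTML body
# here points at a browser-side issue (cache, extensions, proxy)
curl -vk https://localhost:12300/en-US/account/login
```

If curl returns the complete page, clearing the browser cache or trying a different browser or a private window is the next cheap step.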

Using Timewrap to compare yesterday to today per hour

I have the following search, as I'm trying to compare yesterday's count to today's count per hour. I am seeing events per hour for latest_day, but no events per hour for today:

```
index=foo | timechart count span=1h | timewrap 1d
```

Is the fact that I have the span set to 1h and timewrap set to 1d an issue? Thx
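The differing spans are generally fine in themselves; what `timewrap` needs is a search time range that actually covers both days. A hedged variant that pins the range explicitly (index name from the question):

```
index=foo earliest=-1d@d latest=now
| timechart span=1h count
| timewrap 1d
```

With `earliest` snapped to midnight yesterday, timewrap can overlay yesterday's complete hourly series on today's partial one; if the original search ran over a shorter or unanchored range, the "today" series can come out empty.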

Combine RegEx with a condition

Assume the following squid log samples:

```
(squid-1): 1515606581.001 100 1.2.3.4 TCP_TUNNEL/200 500 CONNECT some.fqdn.com:443 - DIRECT/1.2.3.4
(squid-1): 1515606582.002 200 1.2.3.4 TCP_TUNNEL/200 2000 CONNECT some.fqdn.com:443 - DIRECT/1.2.3.4
(squid-1): 1515606583.003 200 1.2.3.4 TCP_TUNNEL/200 5000 CONNECT some.fqdn.com:443 - DIRECT/1.2.3.4
```

Example search with a regular expression to filter for TIME, SIZE and URL:

```
squid-1 |rex field=_raw "squid-1\):\s+\d+\.\d+\s+(?
```
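Since the `rex` in the post is cut off, here is a hypothetical reconstruction against the samples above, combined with a `where` condition; all field names are mine and the threshold is arbitrary:

```
squid-1
| rex field=_raw "squid-1\):\s+(?<req_time>\d+\.\d+)\s+(?<duration>\d+)\s+(?<clientip>\S+)\s+\S+\s+(?<size>\d+)\s+CONNECT\s+(?<url>\S+)"
| where tonumber(size) > 1000 AND url="some.fqdn.com:443"
```

`rex` extracts strings, hence the `tonumber()` before the numeric comparison.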

Where is the first part of the index home path defined?

I've sort of taken on Splunk administration for my company, so I'm trying to make sense of this as quickly as I can. Under Indexes I see you can define a "Home Path", and here is what I currently see: ![alt text][1] [1]: /storage/temp/226692-homepath.png When I hover over the row it displays "/opt/splunkdata/smt_tableau/db", but notice how the Home Path is titled "volume:summit_ps/smt_tableau/db" in the picture. So I'm guessing there's a place where "volume:summit_ps" is defined to point at /opt/splunkdata, but I can't find it anywhere. Sorry if this is confusing.
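The missing piece is most likely a `[volume:...]` stanza in some indexes.conf: volumes map a name to a filesystem path, and a homePath can then reference them. A hedged sketch of what is probably defined somewhere in your configs (the size cap is purely illustrative), plus a btool command that shows exactly which file defines it:

```
# indexes.conf
[volume:summit_ps]
path = /opt/splunkdata
maxVolumeDataSizeMB = 500000

[smt_tableau]
homePath = volume:summit_ps/smt_tableau/db
```

```
# prints effective settings with the defining file next to each line
$SPLUNK_HOME/bin/splunk btool indexes list volume:summit_ps --debug
```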

Modifying an input for dashboard. (Change a time format to fit the _time format)

(Sorry if this is confusing.) I want to create a dashboard to find like events that happen at a certain time. It will search a data model so I can see all the events that happen at that time. I want to enter the time in one format for the token and have the search use another, but I'm running into problems figuring this out. I want to convert the "1/1/18 2:00:20.000 PM" format to fit the _time field format "2018-01-01T14:00:20.000-06:00" so I can search with it. Any ideas on how to achieve this? I'm looking at the XML code but I'm confused about how I'd do it there. Much thanks,
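One hedged approach is to skip reformatting the string altogether: `_time` is epoch seconds under the hood, so converting the token with `strptime` and comparing numerically is usually simpler. The token name and format string below are assumptions, and subsecond directives like `%3N` may vary by Splunk version:

```
| eval target = strptime("$time_tok$", "%m/%d/%y %I:%M:%S.%3N %p")
| where _time >= target AND _time < target + 1
```

In Simple XML the conversion can also happen when the input changes, e.g. `<change><eval token="time_epoch">strptime($value$, "%m/%d/%y %I:%M:%S.%3N %p")</eval></change>` on the relevant input, so the search only ever sees the epoch value.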

Transformation to index events to different index not working

**Goal** I wish to place some events into a longer-living index "staging-boeing-audit" for audit purposes. All other events should continue to be indexed as before.

**What I have tried** I fabricated a simple example to prove this method will work:

- I added a TRANSFORMS-"name" line to my props.conf for a test sourcetype
- I added a transforms.conf to regex some events into a new index
- I used oneshot to place a test file into Splunk

**What happened** All data was placed into the original index "marktransform1". No events were matched by the transform, and thus the target index "staging-boeing-audit" is empty.

**My props.conf**

```
root@myhost:/opt/splunk/etc/system/local# cat props.conf
[mectest]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = info.created
TIME_FORMAT = %Y-%d-%m %H:%M:%S.%3Q
KV_MODE = none
AUTO_KV_JSON = true
category = Custom
description = added via ui
disabled = false
pulldown_type = 1
TRANSFORMS-routing = route_boeing
```

**My transforms.conf**

```
root@myhost:/opt/splunk/etc/system/local# cat tranforms.conf
[route_boeing]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = staging-boeing-audit
```

**My oneshot command**

```
root@myhost:/opt/splunk/bin# ./splunk add oneshot ulfs.log -sourcetype mectest -index marktransform1
```

**Sample from ulfs.log file**

```
root@myhost:/opt/splunk/bin# tail -2 ulfs.log
{"context": {}, "info": {"name": "starwood.core", "msg": "Got rate from Starwood", "levelname": "INFO", "levelno": 20, "pathname": "/home/ubuntu/envs/airborne/src/tune/tune/utils.py", "filename": "utils.py", "module": "utils", "exc_info": null, "exc_text": null, "stack_info": null, "lineno": 45, "funcName": "debug", "created": "2018-01-10 17:52:12.253", "msecs": 252.66528129577637, "relativeCreated": 33089947.247982025, "thread": 140501375943792, "threadName": "DummyThread-686", "processName": "MainProcess", "process": 21272, "currency": "DKK", "event_type": "get_rate", "message_type": "starwood", "content_type": "profiling", "time": 0.0021333694458007812, "message": "Got rate from Starwood", "asctime": "2018-01-10 09:52:12,252", "loggername": "starwood.core"}}
{"context": {}, "info": {"name": "tune.memory", "msg": "GC status", "levelname": "DEBUG", "levelno": 10, "pathname": "/home/ubuntu/envs/airborne/src/tune/tune/memory.py", "filename": "memory.py", "module": "memory", "exc_info": null, "exc_text": null, "stack_info": null, "lineno": 49, "funcName": "gc_monitor", "created": "2018-01-10 17:52:33.135", "msecs": 135.5295181274414, "relativeCreated": 33110830.112218857, "thread": 140501389641576, "threadName": "DummyThread-1081", "processName": "MainProcess", "process": 22337, "current_collections_count": [86, 1, 30], "current_frames": 5, "enabled": true, "garbage_count": 0, "gc_stats": [{"collections": 26842, "collected": 5718217, "uncollectable": 0}, {"collections": 2440, "collected": 2079011, "uncollectable": 0}, {"collections": 101, "collected": 1168870, "uncollectable": 0}], "max_rss": 351032, "total_objects": 326350, "message": "GC status", "asctime": "2018-01-10 09:52:33,135", "loggername": "tune.memory"}}
```
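A hedged sanity check: btool prints every setting Splunk has merged for a stanza along with the file it came from, which confirms whether the transform was loaded at all. Note that Splunk only reads a file named `transforms.conf`; a file saved as `tranforms.conf` (as shown in the `cat` prompt above) would be ignored entirely.

```
# shows merged settings and their source files; if route_boeing is absent,
# the transform never made it into the running config
/opt/splunk/bin/splunk btool props list mectest --debug
/opt/splunk/bin/splunk btool transforms list route_boeing --debug
```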

Splunk IA-sourcefire connector app is not reporting logs

Hi, I have an issue with the Splunk Sourcefire connector app, which is configured on one of the Splunk heavy forwarders. It was working up to 4 Jan. I have tried resetting the connector and also restarting Splunk's services in case that might help, but it didn't. Below is the local configuration from the app.

```
# estreamer.conf
[estreamer]
changed = 0
pkcs12_password = XXXXXX
client_disabled = 0
log_extra_data = 1
log_metadata = 1
pkcs12_file = /opt/splunk/etc/apps/XX-IA-sourcefire/local/XX.XX.XXX.pkcs12
server = XX.XX.XX.XXX
watch = 1
debug = 1

# app.conf
# Autogenerated file
[install]
state = enabled
is_configured = 1

# props.conf
[sourcefire:network:ids]
TZ = GMT
```


Uploading new release does not complete - "package validation in progress" endless spinner

Hi, I am trying to upload a new release to Splunkbase. After the file is uploaded, Splunkbase performs some type of package validation. Usually this validation is very quick, but today (Jan 10, 2018) it appears not to complete at all: I get a message with a spinner saying "Package validation is in progress", but it never returns. I waited over 10 minutes after the upload for it to resolve. Manual package extraction worked, so the package is not corrupted. Thanks, Sajjad Lateef

Splunk windows docker image

Dear Splunk team, I am trying to pull a Windows Docker image for Splunk, but I can find only the Linux image in the Docker store: https://store.docker.com/images/splunk. Where can I find the equivalent Windows Docker image? Thanks.

IP Reputation threatscore not working.

Hi, I have installed the application correctly, but I still don't get the threatscore displayed. I have added the key to the file ***scorelookup.py*** at ***/ipreputation/bin/scorelookup.py*** and restarted Splunk; it is still not working. Sample query tried:

```
index="test" dest_port=80
| stats count by src_ip dst_ip
| lookup threatscore clientip AS dst_ip
| sort -threatscore
```

I have even tried with the sample IPs given in scorelookup.py (14.139.155.194), for which I should be getting a score of 35, but it is displayed as 0. Please advise.
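A hedged way to exercise the lookup in isolation, taking the rest of the search out of the equation (the field and lookup names are from the question):

```
| makeresults
| eval clientip="14.139.155.194"
| lookup threatscore clientip
```

If the score is still 0 here, open the Job Inspector and check search.log for errors from the external lookup script; a bad API key or blocked outbound network access usually shows up there.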