How do I use Splunk with Wasabi?
Splunk Enterprise is certified to store indexed application logs in Wasabi Hot Cloud Storage. Any indexed logs that Splunk can store locally can be stored remotely with Wasabi using Splunk’s SmartStore. SmartStore creates a remote storage tier (Wasabi) on which most data resides, while a local cache holds recently accessed data.
To use Splunk Enterprise with Wasabi, please follow the instructions below.
Prerequisites
Splunk Enterprise deployed and licensed. This solution was most recently tested with Splunk Enterprise version 10.0.1.
Existing index(es) created within Splunk.
An active Wasabi Hot Cloud Storage account.
Wasabi access and secret keys.
A Wasabi bucket with Versioning enabled. Note that Object Lock must not be used because Splunk requires read/write/delete access to data in your bucket. See Bucket Versioning for further information on Versioning and how to create a bucket with Versioning enabled. Choose the Wasabi region closest to your Splunk deployment.
Procedure to Enable Wasabi Storage in Splunk
Refer to Configure SmartStore for further information.
The Cache Manager needs to be enabled on each Indexer on which SmartStore will be utilized. Too small a cache will result in premature eviction of buckets from the local cache, forcing repeated downloads from remote storage. Too large a cache will take up excess space in local storage. Using a reference of 100 GB per day of ingest, 1 day of hot storage, a 30 day cache retention, a replication factor of 2, and a compression factor of 50%, we came to a cache size of 1550 GB, or 1550000 MB.
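As a sanity check, the arithmetic behind that figure can be sketched as follows. The exact sizing formula here is an assumption (compressed daily ingest held for the cache retention window plus the hot window); verify any cache sizing with Splunk before deploying.

```python
# Hypothetical cache-size arithmetic for the reference workload above.
# The formula is an assumption; confirm sizing with Splunk before deploying.
daily_ingest_gb = 100      # raw ingest per day
compression = 0.5          # compression factor (50%)
cache_retention_days = 30  # days of warm data kept in the local cache
hot_days = 1               # days of hot data

# Compressed data per day, held for the retention window plus the hot window.
cache_size_gb = daily_ingest_gb * compression * (cache_retention_days + hot_days)
cache_size_mb = int(cache_size_gb * 1000)

print(cache_size_gb)  # 1550.0
print(cache_size_mb)  # 1550000
```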
These settings should be verified with Splunk in order to have a functional deployment.
See Reference Configs for more details.
For our testing, we assigned this stanza to server.conf in the /opt/splunk/etc/system/local directory.
[cachemanager]
# A non-zero max_cache_size is necessary to enable SmartStore. This can be adjusted to a lower value for testing.
max_cache_size = 1550000
# hotlist settings protect critical data from eviction; these settings will vary per deployment
hotlist_bloom_filter_recency_hours = 1
hotlist_recency_secs = 60
Next, we want to add our Wasabi volume information to indexes.conf (typically located in the /opt/splunk/etc/system/local directory) in order to connect to the remote storage. Volume stanzas usually go at the top of the conf file.
See the Splunk Admin Manual for additional information.
For this volume, provide a name (“wasabi” in our example); the name is only a reference within Splunk to the remote storage. Under path, you need to reference your Wasabi bucket.
[volume:wasabi]
storageType = remote
path = s3://<your_bucket_name>/
remote.s3.access_key = <your_access_key>
remote.s3.secret_key = <your_secret_key>
remote.s3.endpoint = https://s3.us-east-1.wasabisys.com
remote.s3.auth_region = us-east-1
# It is preferred to use Versioning. If versioning is not being utilized, add the following along with Wasabi Lifecycle policies:
# remote.s3.supports_versioning = false
Note that this config example uses Wasabi's us-east-1 storage region. To use other Wasabi storage regions, please use the appropriate Wasabi service URL as described in Service URLs for Wasabi's Storage Regions.
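Wasabi service URLs follow a predictable pattern (s3.&lt;region&gt;.wasabisys.com, as in the endpoint above). A small helper like the following can generate the two region-dependent lines for a given region; treat the pattern as an assumption and confirm each region's URL against Wasabi's service-URL documentation.

```python
# Build the region-dependent remote.s3 settings for a Wasabi region.
# The URL pattern is assumed from the us-east-1 example above; confirm
# each region against Wasabi's Service URLs documentation.
def wasabi_s3_settings(region: str) -> str:
    return (
        f"remote.s3.endpoint = https://s3.{region}.wasabisys.com\n"
        f"remote.s3.auth_region = {region}"
    )

print(wasabi_s3_settings("us-east-1"))
# remote.s3.endpoint = https://s3.us-east-1.wasabisys.com
# remote.s3.auth_region = us-east-1
```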
Restart Splunk:
/opt/splunk/bin/splunk restart
Create a sample txt file by typing:
echo "hello world" > test01.txt
Let’s use Splunk to attempt to push it into Wasabi using the volume we created:
/opt/splunk/bin/splunk cmd splunkd rfs -- putF test01.txt volume:wasabi
If no errors occur, we can also list from the CLI to verify:
/opt/splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:wasabi
As a result, we should see:
Size | Name |
12B | test01.txt |
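The 12-byte size matches what echo produced: "hello world" is 11 characters plus the trailing newline that echo appends. A quick check:

```python
# "hello world" (11 bytes) plus the newline appended by echo = 12 bytes,
# matching the 12B size reported by the rfs ls command.
content = "hello world\n"
print(len(content.encode()))  # 12
```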
We should also be able to see it listed within the Wasabi Console.

If something is not working, it will be logged in /opt/splunk/var/log/splunk/splunkd-utility.log under the S3Client heading. You can check from the CLI with:
grep S3Client /opt/splunk/var/log/splunk/splunkd-utility.log
Now that we have verified connectivity, we can add this remote storage to a provisioned index. In our example, the index is called “syslog”. We need to mount the volume under the syslog index stanza in indexes.conf.
These settings will vary per deployment and you should check with Splunk before rolling it into production.
The key part is the remote path, which references the volume we created, as well as the index name (“syslog” in our example).
[syslog]
coldPath = $SPLUNK_DB/syslog/colddb
homePath = $SPLUNK_DB/syslog/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/syslog/thaweddb
enableDataIntegrityControl = false
enableTsidxReduction = false
remotePath = volume:wasabi/syslog
hotlist_bloom_filter_recency_hours = 48
hotlist_recency_secs = 86400
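Splunk .conf files use an INI-like stanza format, so as a rough sanity check you can parse the stanza with Python's configparser and confirm that remotePath points at a defined volume. This is only an illustration, not a Splunk tool; Splunk's own btool is the authoritative way to validate configuration.

```python
# Rough sanity check of a SmartStore index stanza; Splunk's btool is the
# authoritative validator, this is just an illustration.
import configparser

conf = """
[volume:wasabi]
storageType = remote

[syslog]
remotePath = volume:wasabi/syslog
"""

parser = configparser.ConfigParser()
parser.optionxform = str  # Splunk keys are case-sensitive; keep their case
parser.read_string(conf)

remote_path = parser["syslog"]["remotePath"]  # volume:wasabi/syslog
volume = remote_path.split("/", 1)[0]         # volume:wasabi
assert volume in parser.sections(), f"{volume} is not defined"
print(f"{remote_path} references the defined volume [{volume}]")
```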
Once that is edited, restart Splunk.
/opt/splunk/bin/splunk restart
Now, the wasabi volume is linked to the specific index, and we can begin ingesting data into that index. The data will roll from hot -> warm after a certain time or size, which in this case is 1 day or 1550 GB. If we want to force a roll for test purposes, we can make an internal REST call. The format is:
/opt/splunk/bin/splunk _internal call /data/indexes/<index>/roll-hot-buckets
For our example:
/opt/splunk/bin/splunk _internal call /data/indexes/syslog/roll-hot-buckets
Then enter your Splunk admin username and password. Once the bucket is rolled to warm, we should see it populate in its own folder within our Wasabi bucket.

Now SmartStore is fully enabled for the specific index.
For troubleshooting information, refer to Troubleshoot SmartStore.
Splunk SmartStore Dashboards are available within the Splunk Monitoring Console to check the status and errors related to the SmartStore deployment.