Splunk With Wasabi
    How do I use Splunk with Wasabi?

    Splunk is now certified for use with Wasabi. To use this product with Wasabi, please follow the instructions below. 

    Minimum versions:

    • Splunk Enterprise 7.2.6 for Clustered Indexer Deployments

    • Splunk Enterprise 7.3.1.1 for Standalone Indexer Deployments

    Documentation Reference:

    https://docs.splunk.com/Documentation/Splunk/7.3.1/Indexer/ConfigureSmartStore

    1. First, a bucket must be created in Wasabi for SmartStore to connect to; in this case, we named it “smartstore”. Enable versioning on the bucket and choose the Wasabi region closest to the Splunk deployment.

    [Screenshot: the smartstore bucket created in the Wasabi console]
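      If you prefer the command line, the same bucket setup can be done with any S3-compatible client. Below is a minimal sketch using the AWS CLI, assuming your Wasabi access keys are already configured as an AWS CLI profile and the us-east-1 endpoint is used; the bucket name matches the example above.

      # Create the bucket on Wasabi's us-east-1 endpoint
      aws s3api create-bucket --bucket smartstore --endpoint-url=https://s3.wasabisys.com

      # Enable versioning on the new bucket
      aws s3api put-bucket-versioning --bucket smartstore \
          --versioning-configuration Status=Enabled \
          --endpoint-url=https://s3.wasabisys.com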
    2. The Cache Manager needs to be enabled on each indexer that will use SmartStore. These settings should be verified with Splunk in order to have a functional deployment: too small a cache will result in premature eviction of buckets to remote storage, while too large a cache will take up excess space on local storage. Using a reference ingest rate of 100GB per day, we came to a cache size of 1460MB, based on 1 day of hot storage, 30-day cache retention, 36-month full retention, a replication factor of 2, and a compression factor of 50%.

      Reference Configs: https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Serverconf#server.conf.spec

      For our testing, we added this stanza to server.conf in the /opt/splunk/etc/apps/search/local/ directory:

      [cachemanager]

      #max_cache_size is necessary to enable smart store

      max_cache_size = 1460

      #hotlist settings protect critical data from eviction, these settings will vary per deployment

      hotlist_bloom_filter_recency_hours = 1

      hotlist_recency_secs = 60
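      To confirm that Splunk picks up the stanza from that location, the btool utility can show the effective, merged settings (an optional check, not a required step):

      # List the merged cachemanager settings and the file each value comes from
      /opt/splunk/bin/splunk btool server list cachemanager --debug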

    3. Next, we want to add our Wasabi volume information to indexes.conf in order to connect to the remote storage. This usually goes at the top of the conf file.

      https://docs.splunk.com/Documentation/Splunk/7.3.1/Admin/Indexesconf#indexes.conf.spec

    4. For this volume, provide a name, wasabi in this example; the name itself is arbitrary and serves only as a reference within Splunk to the remote storage. Under path, reference the bucket you created.

      [volume:wasabi]

      storageType = remote

      path = s3://smartstore/

      remote.s3.access_key =

      remote.s3.secret_key =

      remote.s3.endpoint = https://s3.wasabisys.com

      remote.s3.auth_region = us-east-1

      # Versioning is preferred; if it is not being utilized, add:

      # remote.s3.supports_versioning = false

      Note that this config example uses Wasabi's us-east-1 storage region. To use other Wasabi storage regions, please use the appropriate Wasabi service URL as described in this article.
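      Before moving on, it can be worth sanity-checking the keys and endpoint outside of Splunk. A short sketch using the AWS CLI, with the same key pair exported as environment variables (the placeholders are yours to fill in):

      export AWS_ACCESS_KEY_ID=<wasabi_access_key>
      export AWS_SECRET_ACCESS_KEY=<wasabi_secret_key>
      # An error-free listing confirms the credentials and endpoint are valid
      aws s3 ls s3://smartstore --endpoint-url=https://s3.wasabisys.com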

    5. Restart Splunk:

      /opt/splunk/bin/splunk restart

      Create a sample txt file by typing:

      echo "hello world" > test01.txt

      Let's use Splunk to push it into Wasabi using the volume we created:

      /opt/splunk/bin/splunk cmd splunkd rfs -- putF test01.txt volume:wasabi

      If no errors occur, we can also list from the CLI to verify:

      /opt/splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:wasabi

      As a result, we should see:

    Size    Name

    12B     test01.txt

    We should also be able to see it listed in the Wasabi web console.

    [Screenshot: test01.txt listed in the Wasabi console]
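    Because versioning was enabled on the bucket in step 1, an S3 client can also show the object and its version ID from outside Splunk. This is an optional check and again assumes the AWS CLI is configured with the Wasabi keys:

      aws s3api list-object-versions --bucket smartstore --prefix test01.txt \
          --endpoint-url=https://s3.wasabisys.com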
    6. If something is not working, it will be logged in /opt/splunk/var/log/splunk/splunkd-utility.log under the S3Client heading. You can check from the CLI with:

      grep S3Client /opt/splunk/var/log/splunk/splunkd-utility.log

    7. Now that we have verified connectivity, we can add this remote storage to a provisioned index. In this case, the index is also called wasabi. We need to mount the volume under the wasabi index stanza in indexes.conf.

      Yet another disclaimer: these settings will vary per deployment, and you should check with Splunk before rolling them into production. The key part is remotePath, which references the volume we created as well as the index name.

      [wasabi]

      coldPath = $SPLUNK_DB/wasabi/colddb

      enableDataIntegrityControl = 0

      enableTsidxReduction = 0

      homePath = $SPLUNK_DB/wasabi/db

      maxTotalDataSizeMB = 512000

      thawedPath = $SPLUNK_DB/wasabi/thaweddb

      remotePath = volume:wasabi/wasabi

      hotlist_bloom_filter_recency_hours = 48

      hotlist_recency_secs = 86400

      Once that is edited, restart Splunk:

      /opt/splunk/bin/splunk restart
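      A quick way to confirm that the index picked up the remotePath is btool, which prints the merged indexes.conf settings for the wasabi stanza (an optional sanity check):

      /opt/splunk/bin/splunk btool indexes list wasabi --debug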

    8. Now the wasabi volume is linked to the specific index, and we can begin ingesting data into it. Data is uploaded to the remote storage once it rolls from hot to warm, which in this case happens after 1 day or 1460MB. If we want to force a roll for test purposes, we can perform an internal REST call to make it happen.

      The format is:

      ./splunk _internal call /data/indexes/(index_name)/roll-hot-buckets -auth (admin_username):(admin_password)

      (If you do not use -auth, it will prompt you for credentials.)

      /opt/splunk/bin/splunk _internal call /data/indexes/wasabi/roll-hot-buckets
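      Note that the roll only has an effect if the index actually contains data. If you need some test data in the wasabi index quickly, one option (a sketch; adjust the file path and credentials for your environment) is a one-shot upload:

      # Index a single file into the wasabi index; the file path and credentials are placeholders
      /opt/splunk/bin/splunk add oneshot /tmp/test01.txt -index wasabi -auth (admin_username):(admin_password)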

    9. Once the bucket is rolled to warm, we should see it populate in its own folder within our Wasabi bucket.

    [Screenshot: the rolled warm bucket objects inside the smartstore bucket in Wasabi]
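    The same rfs listing used earlier should now also show objects under the index's folder (the exact object layout is managed by Splunk, so treat the listing as informational):

      /opt/splunk/bin/splunk cmd splunkd rfs -- ls --starts-with volume:wasabi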

    SmartStore is now fully enabled for the specific index.

    Troubleshooting info: 

    https://docs.splunk.com/Documentation/Splunk/7.3.1/Indexer/TroubleshootSmartStore 

    Note: SmartStore dashboards are available within the Splunk Monitoring Console to check status and errors related to the SmartStore deployment.