Microsoft Sentinel is a Security Information and Event Management (SIEM) service that can ingest Wasabi bucket logs so you can see S3 events affecting your bucket's data, such as when an object is uploaded or deleted. This requires an on-premises server running Rclone (to retrieve the logs from your Wasabi bucket) and Logstash (to send them to Sentinel), both of which are open source.
Note that using Sentinel for analyzing and storing your bucket logs will incur additional charges from Microsoft.
This article details the procedure to configure your Wasabi buckets, Rclone, Logstash, and Sentinel (via Azure).
Prerequisites
An active Wasabi Cloud Storage account.
Wasabi access and secret keys. It is recommended to create a sub-user with their own set of keys for this purpose rather than using your root keys. See Creating a User for more details. You may also restrict what access the sub-user has, such as read-only access to a specific bucket, using IAM policies. See IAM and Bucket Policies for details.
Access to the Wasabi Console as the account's root user or a sub-user with WasabiFullAccess permissions.
A Linux server or virtual machine (VM). This solution was tested using Ubuntu Linux 24.04.3 LTS, Rclone v1.72.1, and Logstash 8.19.10.
An active Sentinel subscription that includes Log Analytics.
Access to the Azure portal with sufficient permissions.
High-Level Configuration Steps
Create a Wasabi “logging bucket” for storing logs from other buckets that store your data.
Create a test bucket and configure it to send logs to the new logging bucket.
Install and configure Rclone to run as a service.
Install and configure Logstash.
Upload, download, and delete test objects to/from your test bucket.
Observe Logstash creating a sample JSON file on your Linux server. Save this file.
Create a Data Collection Endpoint in Azure.
Create a Data Collection Rule (DCR) based table in Azure Log Analytics.
Register an Azure application and create a secret for it.
Give the application appropriate permissions.
Configure Logstash to run as a service.
Create more test uploads and downloads to generate Wasabi bucket logs.
Observe your bucket logs in Azure Log Analytics and/or Sentinel.
Configure your other buckets to log to your logging bucket.
Creating a Wasabi Logging Bucket
Log in to the Wasabi Console.
Create a Wasabi “logging bucket” for storing logs from other buckets. See Creating a Bucket for details on this procedure. Enable Object Lock and Versioning on this bucket during the creation process to make your logs immutable for a configurable period of time. Note the name of this bucket and the region it is located in.
Click Buckets and then click the name of your logging bucket.

Click the gear icon on the right to open the bucket’s settings.

Click the Object Lock tab. Enable Default Object Retention, select the Enable Compliance Mode radio button, and enter the number of days and time scale you wish logs to be immutable for (where they cannot be deleted). Click Apply.

It is recommended to create a Lifecycle Policy to delete older log files. Click Lifecycle, then click Create New Rule.

Give the rule a name and select the radio button next to Apply to all objects in the bucket. Scroll down.
Under Actions, check the boxes next to Expire current version of objects and Permanently delete noncurrent versions of objects. Enter the number of days after the object creation (for example, 91 days) and the days after the object becomes non-current (1 day). This will fully delete a log file 92 days after it was created, which is two days after a log file’s immutability period is up since we previously configured a 90-day Object Lock Retention Time. Scroll down.

Click Save.

Creating a Wasabi Test Bucket
Create a test bucket; it does not need Object Lock or versioning enabled. This bucket will be used for test object uploads, downloads, and deletions.
In the Wasabi console, click Buckets, then click the name of the bucket.

Click the settings gear wheel on the right.

Under Properties enable Bucket Logging. Select the previously created logging bucket, and give a logging prefix (the name of the bucket works well).

Click Save Settings.

Installing and Configuring Rclone
Log in to your Linux server via Secure Shell (SSH). The commands given here were executed on Ubuntu 24.04.3 LTS.
Install Rclone.
sudo apt install rclone

Run the Rclone configuration tool:

rclone config

Create a new remote by entering “n”.
$ rclone config
Current remotes:

Name                 Type
====                 ====
wasabi               s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n

Name the remote “wasabi”.
Enter name for new remote.
name> wasabi

Enter the number associated with Amazon S3 Compliant Storage Providers (“4” in our example).
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 1 / 1Fichier
   \ (fichier)
 2 / Akamai NetStorage
   \ (netstorage)
 3 / Alias for an existing remote
   \ (alias)
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other
...
Storage> 4

Enter the number associated with Wasabi. This is “44” in our example, but this number changes over time.
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
44 / Wasabi Object Storage
   \ (Wasabi)
provider> 44

Enter “1” to enter your Wasabi access and secret keys in the next step.
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1

Enter your Wasabi access and secret keys. It is recommended to use a sub-user’s keys, not the root user’s keys.
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> 8X9HK***************

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> vu6vZ3**********************************

Enter “1” to use v4 signatures.
Option region.
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
region> 1

This configuration example discusses the use of Wasabi's us-east-1 storage region. Use the region your bucket is located in. For a list of regions, see Available Storage Regions.
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Wasabi US East 1 (N. Virginia)
   \ (s3.wasabisys.com)
 2 / Wasabi US East 2 (N. Virginia)
   \ (s3.us-east-2.wasabisys.com)
 3 / Wasabi US Central 1 (Texas)
   \ (s3.us-central-1.wasabisys.com)
 4 / Wasabi US West 1 (Oregon)
   \ (s3.us-west-1.wasabisys.com)
 5 / Wasabi CA Central 1 (Toronto)
   \ (s3.ca-central-1.wasabisys.com)
 6 / Wasabi EU Central 1 (Amsterdam)
   \ (s3.eu-central-1.wasabisys.com)
 7 / Wasabi EU Central 2 (Frankfurt)
   \ (s3.eu-central-2.wasabisys.com)
 8 / Wasabi EU West 1 (London)
   \ (s3.eu-west-1.wasabisys.com)
 9 / Wasabi EU West 2 (Paris)
   \ (s3.eu-west-2.wasabisys.com)
10 / Wasabi EU South 1 (Milan)
   \ (s3.eu-south-1.wasabisys.com)
11 / Wasabi AP Northeast 1 (Tokyo) endpoint
   \ (s3.ap-northeast-1.wasabisys.com)
12 / Wasabi AP Northeast 2 (Osaka) endpoint
   \ (s3.ap-northeast-2.wasabisys.com)
13 / Wasabi AP Southeast 1 (Singapore)
   \ (s3.ap-southeast-1.wasabisys.com)
14 / Wasabi AP Southeast 2 (Sydney)
   \ (s3.ap-southeast-2.wasabisys.com)
endpoint> 1

Press Enter to leave the location constraint empty.
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>

Enter “1” in the Option acl step.
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl> 1

Enter “n” for the Advanced configuration.
Edit advanced config?
y) Yes
n) No (default)
y/n> n

Enter “y” to keep the remote configuration.
Configuration complete.
Options:
- type: s3
- provider: Wasabi
- access_key_id: 8X9HK***************
- secret_access_key: vu6vZ3**********************************
- endpoint: s3.wasabisys.com
- acl: private
Keep this "wasabi" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Enter “q” to quit the configuration.
Current remotes:

Name                 Type
====                 ====
wasabi               s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q

Start Rclone with the following commands to create a local directory and test connectivity to your Wasabi bucket. Replace YOUR_USER and YOUR_GROUP with your Linux user and group and YOUR_LOGGING_BUCKET with the name of your logging bucket.
sudo mkdir /mnt/wasabi-logs/
sudo chown YOUR_USER:YOUR_GROUP /mnt/wasabi-logs/
rclone mount wasabi:/YOUR_LOGGING_BUCKET/ /mnt/wasabi-logs/

Perform some test uploads, downloads, and deletes on your test bucket (not the logging bucket). After a short period of time (approximately 30 minutes or so), a log file should be generated in your logging bucket.
Log in to your Linux server with another SSH session and issue the following command. You should see a bucket log listed.
ls -la /mnt/wasabi-logs/

Go back to your original SSH session and press Ctrl+C to stop Rclone.
Create an “rclone.service” file in /etc/systemd/system with the following contents. You can use

sudo vi /etc/systemd/system/rclone.service

to create the file, or use your preferred Linux text editor (for example, vi, vim, nano). Replace YOUR_USER and YOUR_GROUP with your Linux user and group and YOUR_LOGGING_BUCKET with the name of your logging bucket.

[Unit]
Description=Startup script for Rclone to mount Wasabi logs bucket as /mnt/wasabi-logs
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/rclone mount wasabi:/YOUR_LOGGING_BUCKET/ /mnt/wasabi-logs/
ExecStop=/bin/fusermount -u /mnt/wasabi-logs
Restart=always
RestartSec=10
User=YOUR_USER
Group=YOUR_GROUP
Type=simple

[Install]
WantedBy=multi-user.target

Issue the following commands to run Rclone as a service that persists across reboots.
sudo systemctl daemon-reload
sudo systemctl enable rclone.service
sudo systemctl start rclone.service

Test that Rclone is running by issuing the following command. You should see your bucket log file(s).
ls -la /mnt/wasabi-logs/
Installing and Configuring Logstash
As of the writing of this document, the latest supported version of Logstash with the microsoft-sentinel-logstash-output-plugin is 8.15. We ran Logstash version 8.19.10 in our tests and did not encounter any issues.
Log in to your Linux server via SSH.
Issue the following commands to install Logstash:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
sudo apt install logstash

To install the version used during our testing, issue the following command in place of the last one:

sudo apt install logstash=1:8.19.10-1

If you do not want Logstash to update automatically going forward, issue the following command:
sudo apt-mark hold logstash

Issue the following command to install the Microsoft Sentinel Logstash Output plugin:
sudo /usr/share/logstash/bin/logstash-plugin install microsoft-sentinel-logstash-output-plugin

In /etc/logstash/logstash.yml, uncomment (remove the preceding # mark from the line) and set the following. This affects all pipelines, but it may instead be set for individual pipelines if desired:

pipeline.ecs_compatibility: disabled

Use the command

sudo vi /etc/logstash/logstash.yml

to edit the file, or use your favorite text editor.

Create a sample config file for testing purposes using your favorite Linux text editor. For example:
sudo vi /etc/logstash/conf.d/wasabi-to-sentinel-sample.conf
Insert the following text into the file and save it.

input {
  file {
    path => "/mnt/wasabi-logs/*"
  }
}
output {
  microsoft-sentinel-logstash-output-plugin {
    create_sample_file => true
    sample_file_path => "/tmp/logstash/"
  }
}

Issue the following command to create the temporary directory for testing:
mkdir /tmp/logstash

To test with a log file from your bucket, execute the following command.
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wasabi-to-sentinel-sample.conf

Upload, download, and delete files from your test bucket several times, then wait for a log file to appear in your logging bucket and in /mnt/wasabi-logs/. It can take approximately 30 minutes or so for a log file to show up.
Save the .json file in /tmp/logstash/ to your computer for use later.
- If you see a log file appear in your /mnt/wasabi-logs/ directory but do not see any output file in /tmp/logstash, modify the ExecStart line in /etc/systemd/system/rclone.service to be the following, replacing YOUR_LOGGING_BUCKET with the name of your logging bucket:
ExecStart=/usr/bin/rclone mount wasabi:/YOUR_LOGGING_BUCKET/ /mnt/wasabi-logs/ --dir-cache-time 10s --poll-interval 10s --allow-other
- Uncomment (remove the preceding # character from the line) the following line in /etc/fuse.conf:

user_allow_other
- Reboot your Linux server for all changes to take effect, then repeat steps 9-10.
Creating a Data Collection Endpoint in Azure
Log in to the Azure Portal.
Go to the Azure Monitor.

Under Settings on the left hand side, click Data Collection Endpoints.

Click Create.

Give the endpoint a name of “Logstash-Wasabi”, select your subscription and resource group, along with the appropriate region. Click Review + Create.
Click Create.
Click the name of your Data Collection Endpoint, Logstash-Wasabi.

Copy the Logs Ingestion URL and save it to a file on your computer.
Creating a Data Collection Rule (DCR) Based Table in Azure Log Analytics
Go to your Log Analytics workspace and under Settings click Tables. Click Create.

Click New custom log (Direct Ingest).

Give the table a name of “Logstash_Wasabi”, select Basic as the table plan, and click Create a new data collection rule.

Select the appropriate Subscription and Resource group. Give the rule a name of “Logstash_Wasabi_DCR”. Click Done.
Select the previously created DCR from the drop-down menu along with the previously created “Logstash-Wasabi” Data Collection Endpoint. Click Next.

Under Schema and transformation click Browse for files.

Upload the sample .json file previously saved from /tmp/logstash on your Linux server.
Click Transformation editor.
Download and open the attached “Log-Analytics-Transformation.txt” file and copy and paste the contents into the textbox.
Click Run.

You will see the contents of your sample .json file from your Linux server in the output. Click Apply.

Click Next.

Click Create.

Registering Application and Creating Secret
Go to your Azure Directory Overview page and click Add, then App registration.

Give the application a name of “Logstash_Wasabi”. Select the radio button next to Accounts in this organizational directory only. Click Register.

Copy the application (client) ID and save it to your computer for use later. Click Add a certificate or secret.
Click + New client secret.

Give a description of “Logstash_Wasabi” and select the appropriate expiration value for your organization (we selected the recommended value of 180 days in our testing). This will need to be changed periodically so as not to have an interruption in Wasabi log delivery to Sentinel. Click Add.

Copy the Secret Value and save it to a secure location. Note the expiration date.

Under Microsoft Azure Monitor, search for “data collection rules”. Click Data collection rules.

Click Logstash_Wasabi_DCR.

Click JSON View.
Copy the “immutableId” value and save it to your computer.
Scroll down and copy the “streams” value under “dataFlows”. In our testing it is “Custom-Logstash_Wasabi_CL”.

Giving Application Permissions
Under your Logstash_Wasabi Data Collection Rule, click Access control (IAM). Click Add then click Add role assignment.

Search for “monitoring metrics publisher”. Select Monitoring Metrics Publisher and click Next.

Click + Select members.

Search for “Logstash” and select the “Logstash_Wasabi” application. Click Select.

Click Review + assign.

Click Review + assign again on the next screen.
Creating New Logstash Configuration File
Remove the previously created temporary Logstash configuration file:
sudo rm /etc/logstash/conf.d/wasabi-to-sentinel-sample.conf

Create a new /etc/logstash/conf.d/wasabi-logs-to-sentinel.conf file by issuing the following command, or by using your favorite text editor.
sudo vi /etc/logstash/conf.d/wasabi-logs-to-sentinel.conf

Insert the following text into the file, replacing all the values in CAPS with your own values. For your tenant_id, search for “tenant properties” in your Azure portal and copy the Tenant ID.
input {
  file {
    path => "/mnt/wasabi-logs/*" # Use absolute paths; wildcards are supported
  }
}
output {
  microsoft-sentinel-logstash-output-plugin {
    client_app_Id => "YOUR_CLIENT_APP_ID"
    client_app_secret => "YOUR_CLIENT_APP_SECRET"
    tenant_id => "YOUR_AZURE_TENANT_ID"
    data_collection_endpoint => "YOUR_DATA_COLLECTION_ENDPOINT_URL"
    dcr_immutable_id => "YOUR_DCR_IMMUTABLE_ID"
    dcr_stream_name => "YOUR_DCR_STREAM_NAME"
  }
}
Below is an example file.

input {
  file {
    path => "/mnt/wasabi-logs/*" # Use absolute paths; wildcards are supported
  }
}
output {
  microsoft-sentinel-logstash-output-plugin {
    client_app_Id => "ffaada90-****************************"
    client_app_secret => "tjG8Q~***********************************"
    tenant_id => "aa9d7384-****************************"
    data_collection_endpoint => "https://logstash-wasabi-ufnq.eastus-1.ingest.monitor.azure.com"
    dcr_immutable_id => "dcr-d034fa6b4***********************"
    dcr_stream_name => "Custom-Logstash_Wasabi_CL"
  }
}

Start Logstash as a service and make it persist across reboots by issuing the following commands. Logstash will automatically use the new configuration file. The status command will show if the service is running.
sudo systemctl start logstash
sudo systemctl enable logstash
sudo systemctl status logstash
Generating New Bucket Logs and Observing in Azure Log Analytics and/or Sentinel
Generate new bucket logs by performing test uploads, downloads, and deletes in your test bucket. It may take 30 minutes or so for logs to show up in your logging bucket and Azure.
Observe the new log files in Azure Log Analytics and/or Sentinel by going to your Log Analytics workspace, clicking on Logs, and running a query. Here is an example screenshot of logs in Azure Log Analytics.
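The exact query depends on the table name you chose. As a starting point, assuming you used the table name “Logstash_Wasabi” from earlier (Azure appends the _CL suffix to custom Log Analytics tables, matching the “Custom-Logstash_Wasabi_CL” stream name noted previously), a simple KQL query to view recent log entries might look like this:

```kusto
Logstash_Wasabi_CL
| where TimeGenerated > ago(1h)
| take 20
```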
If you see log files in /mnt/wasabi-logs/ on your Linux server but do not see them in Azure Log Analytics or Sentinel, see the note in step 10 of the Installing and Configuring Logstash section above.
Configuring Other Buckets to Log to Your Logging Bucket
Repeat steps 2-5 of the Creating Wasabi Test Bucket section on your other existing buckets to log to your logging bucket.
Observe new bucket log entries as they appear in Log Analytics and/or Sentinel.
Updating Azure Secret Periodically
Your Azure secret used by Logstash will need to be updated periodically (for example, every 180 days, or whatever expiration value was configured earlier).
Log in to the Microsoft Entra Admin Center.
Click App registrations and click the Logstash_Wasabi name.
Click the link under Client credentials.

Click + New client secret.

Give the secret a name, an expiry time, and click Add.

Copy the secret Value and save it in a safe location.

Log in to your Linux server running Logstash.
Edit the /etc/logstash/conf.d/wasabi-logs-to-sentinel.conf file using your favorite text editor, such as:
sudo vi /etc/logstash/conf.d/wasabi-logs-to-sentinel.conf

Change the client_app_secret value to the new secret value and save the file.
Restart Logstash.
sudo systemctl restart logstash

Verify you are seeing new logs in Azure Log Analytics and/or Sentinel. It may take 30 minutes or so for new logs to be available after any S3 action is performed on the bucket (an object is uploaded, downloaded, deleted, and so on).
Delete the old secret in the Microsoft Entra Admin Center.
Appendix A - Example Bucket Log
Below is an example bucket log as it appears in a logging bucket before it is modified by Logstash and Azure Log Analytics.
Record format: [BucketOwner Bucket Time RemoteIP Requester RequestId Operation Key Request-URI HttpStatus ErrorCode BytesSent ObjectSize TotalTime Turn-AroundTime Referrer User-Agent VersionId]
=========================================================================================================================================================
49147859625EDC969DDAB60F1069DFA737576CF81F72FD97CCCDA1F9F8867C7E mt-test-bucket [20/Jan/2026:01:36:59 +0000] 121.200.4.63 49147859625EDC969DDAB60F1069DFA737576CF81F72FD97CCCDA1F9F8867C7E CF7DB7BEDE0BBA67:A REST.HEAD.OBJECT test.txt "HEAD /test.txt" 404 NoSuchKey - 0 4 0 "" "rclone/v1.72.0" -
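If you ever need to work with these raw records outside of Logstash and Azure, the record format above can be parsed with a short script. Below is a minimal Python sketch (not part of the Wasabi or Logstash tooling; the field names are taken directly from the record format shown above):

```python
import re

# Regex for the Wasabi bucket-log record format shown above:
# the Time field is bracketed, Request-URI/Referrer/User-Agent are quoted,
# and all other fields are single space-delimited tokens.
LOG_PATTERN = re.compile(
    r'(?P<bucket_owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] '
    r'(?P<remote_ip>\S+) (?P<requester>\S+) (?P<request_id>\S+) '
    r'(?P<operation>\S+) (?P<key>\S+) "(?P<request_uri>[^"]*)" '
    r'(?P<http_status>\S+) (?P<error_code>\S+) (?P<bytes_sent>\S+) '
    r'(?P<object_size>\S+) (?P<total_time>\S+) (?P<turnaround_time>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)" (?P<version_id>\S+)'
)

def parse_record(line):
    """Parse one bucket-log record into a dict, or return None if it doesn't match."""
    m = LOG_PATTERN.match(line.strip())
    return m.groupdict() if m else None
```

For example, running parse_record() on the sample record above yields a dictionary with "mt-test-bucket" as the bucket, "REST.HEAD.OBJECT" as the operation, and "404" as the HTTP status.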