Wasabi AiR (AI Recognition) is where AI meets storage: it turns passive data into intelligent, actionable content. By embedding AI directly into your storage, you reduce the time, cost, and complexity of metadata generation, enabling faster discovery, better governance, and real-time operational insights, all without leaving your cloud storage environment.
Wasabi AiR is an intelligent data layer built into Wasabi’s cloud storage platform. Designed to transform how organizations manage and extract value from unstructured data (including video, images, audio, and documents), Wasabi AiR automates the process of making content searchable, discoverable, and usable by AI systems. Wasabi AiR:
Uses built-in machine learning (ML) services to extract rich metadata (objects, faces, speech, text, logos, and more),
Applies semantic context to every file you store, and
Generates descriptive JSON outputs.
Your media can be searched and filtered in real time, without manual tagging, external pipelines, or data preparation.
Wasabi AiR is storage that sees, hears, and understands your data. Whether you are organizing a media archive, analyzing visual content, or processing large document libraries, Wasabi AiR helps you make sense of your data at scale and in real time.
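To make the descriptive JSON output concrete, here is a minimal Python sketch that filters one hypothetical AiR-style metadata record. The field names and values are illustrative assumptions for this example, not Wasabi AiR's actual schema.

```python
import json

# Hypothetical example of the kind of descriptive JSON metadata an
# AI-recognition layer might attach to a stored video file.
# Field names here are invented for illustration.
air_metadata = json.loads("""
{
  "object_key": "broadcast/match-final.mp4",
  "labels": ["stadium", "crowd", "soccer ball"],
  "transcript": "and the striker scores the winning goal",
  "logos": ["ACME Sports"],
  "ocr_text": ["HOME 2", "AWAY 1"]
}
""")

def matches(meta, term):
    """Return True if a search term appears in any metadata field."""
    term = term.lower()
    haystack = meta["labels"] + meta["logos"] + meta["ocr_text"] + [meta["transcript"]]
    return any(term in field.lower() for field in haystack)

print(matches(air_metadata, "goal"))    # True: found in the transcript
print(matches(air_metadata, "sunset"))  # False: no matching field
```

Because the metadata travels with the object as plain JSON, this kind of filtering needs no external pipeline: any tool that can read JSON can query it.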
Key Capabilities
AI-Powered Metadata Extraction: Detect and label objects, text, people, logos, speech, and scenes from video, image, and audio content.
Semantic Tagging: Understand context and apply human-like descriptions (for example, "a player scores a goal" or "a beach scene at sunset").
Natural Language Search: Search files using everyday language queries instead of relying on filenames or folders.
Embedded Metadata Storage: Store AI-generated metadata natively alongside your data.
AI/ML Pipeline Integration: Accelerate training, inference, and retrieval by connecting AiR-tagged data directly to RAG pipelines, labeling workflows, and LLM fine-tuning processes.
Built-In Machine Learning Services
| Service | Description | Sample Use Case |
|---|---|---|
| Optical Character Recognition (OCR) | Extracts printed or handwritten text from images, documents, or video frames. | Tag text from signage, whiteboards, or scoreboards in media content. |
| Speech-to-Text | Converts spoken audio into transcribed, searchable text. | Extract dialogue or narration from interviews, meetings, or broadcast commentary. |
| Logo Detection | Detects and labels brand logos within visual media. | Track sponsor appearances, advertisement placements, or branded content during events. |
| Natural Language Description | Generates human-readable descriptions of visual scenes using computer vision. | Identify moments such as “player scoring a goal” in sports videos, or summarize a scene as “a person running through a park” or “a car crash during a race.” |
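As a sketch of how timed transcripts from a speech-to-text service enable moment-level search, the Python snippet below scans invented transcript segments for a query term. The segment structure and timecodes are assumptions for illustration, not actual AiR output.

```python
# Invented transcript segments with start/end timecodes in seconds,
# the general shape of output a speech-to-text service produces.
segments = [
    {"start": 12.0, "end": 15.5, "text": "welcome back to the broadcast"},
    {"start": 47.2, "end": 51.0, "text": "what a save by the goalkeeper"},
    {"start": 88.9, "end": 93.4, "text": "and the striker scores a goal"},
]

def find_moments(segments, term):
    """Return (start, end) timecodes of segments mentioning the term."""
    term = term.lower()
    return [(s["start"], s["end"]) for s in segments if term in s["text"].lower()]

print(find_moments(segments, "goal"))
# Matches both "goalkeeper" and "goal", so two timecode ranges are returned.
```

The same pattern applies to OCR text or scene descriptions: once content is transcribed into searchable text with timecodes, jumping to the right moment becomes a simple lookup.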
Use Cases by Industry
Digital Media and Entertainment
Automatically tag scenes, props, and environments (for example, “aerial shots,” “cityscapes,” or “car chases”) for content libraries.
Enhance search, recommendation, and advertisement targeting in streaming platforms.
Sports and Live Events
Detect key plays, athlete appearances, sponsor visibility, or injury events using real-time video analysis.
Accelerate highlight reels, track player statistics, and verify advertisement placement return on investment (ROI).
Compliance and Content Moderation
Identify sensitive or regulated content such as license plates, faces, or prohibited terms.
Redact private data before public release, or flag violations in media assets.
Document Management
Use OCR to extract searchable text from scanned PDFs and documents for digital archives, compliance monitoring, or enterprise search.
Downstream Integration Options
Retrieval-Augmented Generation (RAG) pipelines for AI enrichment
Media Asset Managers (MAMs) for enhanced metadata context
Enterprise search tools to index and query multimedia content
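To illustrate the retrieval step of a RAG pipeline over AiR-tagged content, here is a minimal Python sketch that ranks stored objects by word overlap between a plain-language query and their extracted descriptions. Production pipelines would use vector embeddings; the object keys and descriptions below are invented for the example.

```python
# Invented mapping of object keys to AI-extracted scene descriptions.
corpus = {
    "clips/beach.mp4":   "a beach scene at sunset with palm trees",
    "clips/goal.mp4":    "a player scores a goal in a packed stadium",
    "docs/contract.pdf": "scanned contract text extracted via OCR",
}

def rank(query, corpus):
    """Rank object keys by how many query words appear in their description."""
    words = set(query.lower().split())
    scores = {key: len(words & set(text.split())) for key, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("player scores a goal", corpus)[0])  # clips/goal.mp4
```

The top-ranked objects can then be fetched from storage and passed to an LLM as retrieval context, or fed into a MAM or enterprise search index using the same metadata.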