Solutions
This page describes the technical shape of the service: how a slug-scoped environment is deployed, where the trust boundaries sit, how runtime state persists, how observability is wired, and which optional Azure dependencies can be attached around the core platform so that data privacy and residency requirements can be met. Telegram is currently supported; additional chat interfaces are being vetted before they are included in the standard operating model.
Azure Architecture Pattern
Each OpenClaw environment is deployed as its own Azure footprint under a dedicated resource group named claw-<slug>. Identity, secrets, runtime, persistence, observability, and Azure AI Foundry integration are kept explicit so the environment can meet data privacy, residency, and audit requirements without collapsing into a shared tenant model.
Isolation model
claw-<slug> isolates each customer environment as its own Azure footprint.
The Container Apps managed environment is the shared boundary for the slug deployment, with logging and storage wiring attached at the environment level.
The public gateway app runs the OpenClaw container plus an NGINX sidecar that separates the Teams bot webhook path and proxies all other traffic internally.
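As a rough illustration of the naming model, the per-slug scope can be sketched as follows; only the claw-&lt;slug&gt; resource group and the shire-* groups come from this page, and the validation rule is an illustrative assumption:

```python
# Sketch: deriving a per-slug Azure footprint from the customer slug.
# Only claw-<slug> and the shire-* group names come from the architecture
# described here; the validation rule is an illustrative assumption.

def slug_footprint(slug: str) -> dict:
    """Return the names that scope one isolated customer environment."""
    if not (slug.isalnum() and slug.islower()):
        raise ValueError("slug should be short, lowercase, and alphanumeric")
    return {
        "resource_group": f"claw-{slug}",                   # isolation boundary
        "entra_groups": ["shire-admins", f"shire-{slug}"],  # authorization groups
    }

print(slug_footprint("acme")["resource_group"])  # claw-acme
```

Because every name is derived from the slug, teardown or migration of one customer never touches another environment's resources.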
Security boundary
The install step creates or updates the slug's app registration, then applies the shire-admins and shire-<slug> groups as the primary authorization boundary.
Key Vault stores the Entra secret, the OpenClaw gateway token, the Teams and Telegram secrets, the optional Foundry key, and the Azure Files storage key.
The gateway's managed identity receives AcrPull on the registry and Key Vault Secrets User on the vault, so the gateway can pull its image and load secrets without embedded credentials.
Unauthenticated traffic is redirected to Microsoft login, except on the bot webhook path, and access is constrained to the configured Entra groups and optional explicit identities.
Runtime and persistence
Azure Container Registry holds the OpenClaw runtime image consumed by the gateway container app.
The public Container App hosts the OpenClaw app container and NGINX sidecar and exposes the environment to users and bot callbacks.
The Azure Files share is registered with the managed environment and mounted into the gateway so pairing state and runtime configuration survive revisions and restarts.
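A minimal sketch of the persistence property described above: state written under the mounted share survives a restart or revision change, while container-local state does not. The mount path and file name here are hypothetical stand-ins:

```python
# Simulate the persistence property of the Azure Files mount: pairing state
# written under the mount survives a "restart" (a fresh process or revision),
# whereas container-local state would be lost. The temp directory stands in
# for a hypothetical mount path such as /data.
import json
import os
import tempfile

mount = tempfile.mkdtemp()  # stand-in for the mounted Azure Files share

def save_pairing_state(state: dict) -> None:
    with open(os.path.join(mount, "pairing.json"), "w") as f:
        json.dump(state, f)

def load_pairing_state() -> dict:
    with open(os.path.join(mount, "pairing.json")) as f:
        return json.load(f)

save_pairing_state({"paired": True})
# ...the container restarts; only the mounted share is still there...
print(load_pairing_state())  # {'paired': True}
```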
Observability
Log Analytics collects the platform log stream for the managed environment and supports operational analysis.
Application Insights receives OpenTelemetry traces and logs so the OpenClaw runtime can be monitored as a first-class production service.
Operational visibility is surfaced through the Azure monitoring stack rather than through app-local diagnostics alone.
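Telemetry wiring of this kind is typically driven by the workspace-based Application Insights connection string. As a hedged sketch, splitting that string into its fields (the key and endpoint shown are dummy values) might look like this:

```python
# Sketch: splitting an Application Insights connection string into fields so
# the runtime can point its OpenTelemetry exporter at the right ingestion
# endpoint. The key and endpoint below are dummy values, not real ones.

def parse_conn_string(cs: str) -> dict:
    """Parse 'Key=Value;Key=Value' pairs into a dict."""
    return dict(part.split("=", 1) for part in cs.split(";") if part)

cs = ("InstrumentationKey=00000000-0000-0000-0000-000000000000;"
      "IngestionEndpoint=https://example.in.applicationinsights.azure.com/")
fields = parse_conn_string(cs)
print(fields["IngestionEndpoint"])  # https://example.in.applicationinsights.azure.com/
```

In practice the connection string reaches the container as configuration, so rotating the telemetry target is an environment change rather than a code change.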
Optional branches
A Container Apps job can be added for brokered lifecycle operations without changing the main gateway runtime path.
A small separate storage account can host the static Teams tab configuration page.
LLM access is wired from an existing Azure AI Foundry or Azure OpenAI account into Key Vault and the Container App, so LLM traffic stays inside Azure and privacy and residency requirements can still be met.
Each deployment is a separate Azure environment, not an in-app tenant. That makes identity scope, secret handling, operational ownership, and future teardown or migration clearer.
The runtime, secrets, storage, and telemetry remain inside the slug-specific Azure environment, while Azure AI Foundry stays as a separate optional Azure service rather than being folded into the core deployment footprint.
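As an illustration of the optional LLM branch, the runtime only needs the account endpoint, a deployment name, and the key held in Key Vault. Assuming the standard Azure OpenAI REST path, composing the chat-completions URL could look like the following; the endpoint, deployment name, and API version are illustrative:

```python
# Sketch: building an Azure OpenAI chat-completions URL from the pieces the
# environment holds (endpoint from config, key from Key Vault). The endpoint,
# deployment name, and api-version below are illustrative assumptions.

def chat_completions_url(endpoint: str, deployment: str, api_version: str) -> str:
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

url = chat_completions_url(
    "https://claw-acme-oai.openai.azure.com", "gpt-4o", "2024-06-01")
print(url)
```

Because the endpoint belongs to the customer's own Azure account and the key never leaves Key Vault, the model call stays inside the Azure trust boundary.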
Official Microsoft Azure architecture icons are included here in a labeled architecture-diagram context.
Service components
Managed OpenClaw
OpenClaw is deployed into a dedicated resource group per slug, with the core runtime hosted on Azure Container Apps rather than as an in-app tenant inside a shared footprint.
Security operations
Microsoft Entra ID is enforced at the edge, Key Vault is used as the central secret store in RBAC mode, and the managed identity model removes embedded pull and secret credentials from runtime configuration.
Continuous improvement
Azure Files preserves state across restarts and revisions, while Log Analytics and workspace-based Application Insights keep the environment observable as an operated production service.
Delivery around the platform
Broker jobs, Teams static website storage, and an existing Azure AI Foundry or Azure OpenAI account can be attached around the core environment without changing the main hosting boundary. Telegram is currently supported, and additional chat interfaces are under evaluation.
Delivery sequence
01
We start with the target subscription, identity boundary, security requirements, residency needs, and the operational workflows the environment has to support.
02
We provision the resource group, runtime, identity wiring, secret handling, persistence, and observability model through a known Azure deployment path.
03
From there the environment is upgraded, monitored, and reviewed, while optional integrations, AI services, and workflow-specific extensions can be added around the core path.
Delivery context
Experience includes cloud architecture, analytics, NLP automation, reinforcement learning prototypes, and secure, scalable production delivery for operational environments.
Co-founded and helped shape technical direction for a company focused on optimization and ML-driven systems, including threat detection work that informed later AI and agent research.
Delivery history includes automation, integration, databases, web services, and analytics across multiple industries and operational contexts.
Client feedback
“Rajesh is that rare mix of highly technical and a fantastic communicator. He is mature, capable of owning work without supervision, and consistently raises the right flags when appropriate.”
A. Brooks Hollar, Director of Engineering, Ad Adapted
“Rajesh and Theresa demonstrated a high level of competency in the technical aspects of UNIX, X-Windows, and C language design and development. I would retain their services again without hesitation.”
Frank Kistner, Director of Software Development, Alcatel
Next step
The fastest next step is a technical conversation about the target Azure environment, identity and security requirements, and whether the deployment needs only the managed core platform or additional Azure services around it.