We built PrivateClaw because the hosted OpenClaw platforms on the market today require you to trust them with plaintext. PrivateClaw removes that requirement at the hardware layer.
PrivateClaw runs AI agents inside Trusted Execution Environments (TEEs), backed by AMD SEV-SNP: each agent's memory is encrypted at the hardware level, with keys the host OS and hypervisor never see.
Inference also runs inside TEEs, so your prompts and completions stay private end to end.
How it works:
Each user gets a dedicated CVM (Confidential VM) — no shared tenancy. SEV-SNP provides hardware-enforced memory encryption with a per-VM key managed by the AMD Secure Processor, outside the host OS trust boundary. The hypervisor cannot read guest memory.
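For the curious, the attestation report SEV-SNP produces is a fixed-layout binary blob. A minimal sketch of pulling the interesting fields out of one (offsets per AMD's SEV-SNP ABI spec; actually fetching the report from the guest, e.g. via /dev/sev-guest, is out of scope here):

```python
import struct

# Field offsets from the AMD SEV-SNP ABI spec (ATTESTATION_REPORT table).
REPORT_SIZE = 0x4A0      # 1184 bytes total
OFF_VERSION = 0x00       # u32
OFF_REPORT_DATA = 0x50   # 64 bytes, caller-supplied nonce for freshness
OFF_MEASUREMENT = 0x90   # 48 bytes, SHA-384 launch digest

def parse_report(blob: bytes) -> dict:
    """Extract the fields needed to check report freshness and boot state."""
    if len(blob) != REPORT_SIZE:
        raise ValueError(f"expected {REPORT_SIZE}-byte report, got {len(blob)}")
    (version,) = struct.unpack_from("<I", blob, OFF_VERSION)
    return {
        "version": version,
        "report_data": blob[OFF_REPORT_DATA:OFF_REPORT_DATA + 64],
        "measurement": blob[OFF_MEASUREMENT:OFF_MEASUREMENT + 48],
    }
```

The measurement is what you compare against a known-good launch digest; report_data is where a verifier's nonce lands, so a replayed report can be detected.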
Onboard now by running ssh privateclaw.dev in your terminal of choice.
How you verify it:
Our open-source CLI https://github.com/lunal-dev/privateclaw-cli is installed by default on all user CVMs and enables users to perform a 5-step verification:
1. SEV-SNP attestation — fetches a signed attestation report from the AMD PSP and validates it against AMD's root of trust
2. vTPM verification — confirms the virtual TPM's endorsement key is bound to the CVM's attestation
3. Host key binding — verifies the SSH host key you're connecting to is the one measured in the attestation report
4. Inference endpoint check — confirms the inference and inference-proxy certificates are bound to their respective TEE measurements
5. Access control audit — validates that only your SSH key is authorized and the cloud provider's guest agent is disabled
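The host-key-binding step above can be sketched as a digest comparison. This assumes a hypothetical binding scheme in which the report's 64-byte report_data field embeds the SHA-256 digest of the SSH host public key in its first 32 bytes; the real CLI may bind the key differently:

```python
import base64
import hashlib

def host_key_bound(report_data: bytes, host_key_b64: str) -> bool:
    """Check whether the attestation report's report_data field embeds the
    SHA-256 digest of the SSH host public key (hypothetical binding scheme).

    host_key_b64 is the base64 key blob from a line like
    'privateclaw.dev ssh-ed25519 AAAA...' in known_hosts.
    """
    key_blob = base64.b64decode(host_key_b64)
    digest = hashlib.sha256(key_blob).digest()
    # report_data is 64 bytes; assume the digest occupies the first 32.
    return report_data[:32] == digest
```

The point of the check: because report_data is signed by the AMD PSP, a matching digest proves the machine you SSH'd into is the same machine that produced the attestation, ruling out a man-in-the-middle terminating your SSH session outside the TEE.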
Every step is transparent and auditable, and the CLI that does this for you is open source.
Today, you can verify that your agent is running inside a TEE. Attestable builds are on our roadmap; those will let you verify what software is running inside the TEE, not just that a TEE is present.
Architecture:
PrivateClaw runs the user CVM and inference gateway on Azure Confidential Compute, and inference itself is powered by Confidential AI's TEE-backed vLLM deployment. The launch digest for each CVM is in the attestation report, so you can verify the boot state. Binding specific userland binaries to published source is on our reproducible build roadmap.
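Checking the boot state then reduces to comparing the report's 48-byte SHA-384 launch measurement against a digest obtained out of band (the expected-digest value here is illustrative, not a real PrivateClaw measurement):

```python
import hmac

def verify_launch_digest(measurement: bytes, expected_hex: str) -> bool:
    """Compare the launch measurement from an SEV-SNP attestation report
    against a known-good digest published out of band.

    hmac.compare_digest gives a constant-time comparison; not strictly
    needed for a public digest, but a harmless habit for verifiers.
    """
    try:
        expected = bytes.fromhex(expected_hex)
    except ValueError:
        return False
    if len(measurement) != 48 or len(expected) != 48:
        return False
    return hmac.compare_digest(measurement, expected)
```

Until reproducible builds land, this proves the CVM booted the expected image, while trusting the publisher that the image matches the published source.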
Pricing:
Free tier available. Pro, with greater limits, is $69/mo.
Try it: ssh privateclaw.dev
Comments URL: https://news.ycombinator.com/item?id=47891569