February 7, 2026 · Guide, News · 10 min read

What Is OpenClaw? The Self-Hosted AI Assistant That “Actually Does Things” (2026 Guide)

OpenClaw is a self-hosted AI assistant you run on your own device or server. Unlike a standard chatbot, it can use tools and integrations to take real actions – with permissions you control.

OpenClaw 🦞 (formerly Clawdbot, later Moltbot) is a self-hosted AI assistant you run on your own device (or a server you control) and talk to from chat apps you already use.

The key difference from a normal chatbot is that OpenClaw can take actions through tools and integrations, not just answer questions.

If you have ever thought “AI is impressive, but I still end up doing the work manually,” OpenClaw is designed for that gap. It aims to turn chat instructions into real outcomes, with permissions and guardrails you fully control.

Quick take: OpenClaw is best understood as an LLM connected to real tools (email, calendar, files, messaging, and more) with a permissions model. Self-hosting can improve control and privacy, but it also means you own the security decisions and operational hygiene.

Key takeaways:

  • OpenClaw is a personal AI assistant you host that can work through common chat channels such as Telegram, Slack, Discord, WhatsApp, and iMessage. The full supported list is maintained in the project repository. (GitHub)
  • “Actually does things” means tool use and actions – not just text generation.
  • Self-hosting can improve control, but shifts security responsibility to your setup and policies. (OpenClaw security docs)
  • Extensions (“skills”) are powerful, but also a supply-chain risk if you install untrusted code. (VirusTotal analysis)
  • The best way to start is small: one channel (for example, Telegram), two tools, tight allowlists, sandboxing, and one measurable workflow.

What “AI that actually does things” means

Most people’s first AI experience is a chatbot: you ask a question, it answers, and you copy-paste the output into email, a document, or a spreadsheet. That is useful, but it is still manual work.

An “AI assistant that actually does things” is different because it can:

  • Use tools – messaging, calendars, email, files, a browser, internal APIs
  • Take actions – draft and send messages, schedule meetings, update files, run approved commands
  • Live where work happens – inside chat channels, not just a web tab

OpenClaw is built around this action layer. What it can do depends on which tools you connect and what permissions you allow.

What OpenClaw is and is not

OpenClaw is

  • A self-hosted AI assistant platform you run on your own hardware (laptop, homelab, or VPS).
  • Multi-channel: it can operate through chat platforms listed in its repo (for example WhatsApp, Telegram, Slack, Discord, and others).
  • Tool- and extension-driven: you extend it by enabling tools and installing skills.
  • Permissioned by design: it documents pairing, allowlists, and policies to limit who can talk to the assistant and what it can do.

OpenClaw is not

  • Just a chatbot: the point is outcomes via actions, not only better answers.
  • Only a copilot inside one app: it is designed to work across tools and channels, not just a single product surface.
  • Only a developer framework: it includes onboarding and a dashboard so non-devs can operate it, even if advanced setups still benefit from technical help.

How OpenClaw works: high-level overview

You do not need to memorize OpenClaw’s architecture to use it well. You do need a simple mental model of what is talking to what, and where risk lives.

The model (LLM)

OpenClaw can be configured to use different model providers and supports model selection and failover behavior. The docs describe a model order (primary, then fallbacks) to improve reliability. (Model concepts)

It can also use self-hosted models if you expose an OpenAI-compatible API endpoint, and it explicitly mentions Ollama as a straightforward way to run local models. (FAQ)
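
If you run local models through Ollama, for example, any OpenAI-compatible client can reach them at Ollama’s local endpoint. The sketch below uses the openai npm package pointed at that endpoint; the model name is just an example, and how you wire this into OpenClaw’s own model settings is defined by its docs, not this snippet.

```typescript
// Minimal sketch: talk to a local Ollama server through its OpenAI-compatible API.
// Assumes Ollama is running locally ("ollama serve") and a model has been pulled,
// e.g. `ollama pull llama3.1`. The model name below is an example, not a requirement.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:11434/v1", // Ollama's OpenAI-compatible endpoint
  apiKey: "ollama",                     // Ollama ignores the key, but the client requires one
});

async function main() {
  const response = await client.chat.completions.create({
    model: "llama3.1",
    messages: [{ role: "user", content: "Summarize today's unread email in 5 bullets." }],
  });
  console.log(response.choices[0]?.message?.content);
}

main().catch(console.error);
```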

Tools and actions

Tools are how the assistant moves from text to work. Depending on what you enable, that can include messaging actions, calendar/email actions, file read/write, browser control, and command execution. This is also where permissions matter most.
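
To make the permissions point concrete, the sketch below shows the shape of a deny-by-default tool policy. Every key name here is invented for illustration; OpenClaw’s real configuration format lives in its docs and repo, so treat this as the idea, not the syntax.

```typescript
// Illustrative only: a deny-by-default tool policy expressed as a plain object.
// None of these keys come from OpenClaw's documentation; the point is the pattern:
// enumerate what is allowed, scope it, and require approval for anything that mutates state.
const toolPolicy = {
  defaults: { allowed: false },                                      // deny by default
  tools: {
    calendar: { allowed: true, mode: "read-write", requireApproval: ["create_event"] },
    email:    { allowed: true, mode: "read-only" },                  // drafts only, no sending yet
    files:    { allowed: true, paths: ["/srv/openclaw/workspace"] }, // scoped workspace only
    browser:  { allowed: false },                                    // enable later, in a dedicated profile
    exec:     { allowed: false },                                    // keep command execution off until sandboxed
  },
};

export default toolPolicy;
```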

Skills (extensions)

Skills are add-ons that can bundle instructions and scripts.

They can speed up setup, but they also introduce supply-chain risk if you install third-party code without review. Security researchers have documented malicious patterns in skills and social-engineering installation flows.

Permissions and the Gateway

OpenClaw runs a Gateway service and provides a Control UI / dashboard for management. The docs cover onboarding and basic operation (gateway status, dashboard, etc.).

Security-wise, OpenClaw documents DM policies such as pairing, allowlists, and “open” modes – and provides a security audit command to flag common risky configurations.
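
As a mental model (not OpenClaw’s actual schema), a DM policy answers two questions: who may talk to the bot, and what happens to unknown senders. The type below is a hypothetical illustration of those three documented modes.

```typescript
// Illustrative sketch of the DM-policy idea; field names and ID formats are invented.
// "pairing" = unknown senders must be explicitly approved before the bot will respond.
type DmPolicy =
  | { mode: "pairing" }                               // safest default for personal use
  | { mode: "allowlist"; allowedSenders: string[] }   // explicit list of sender IDs
  | { mode: "open" };                                 // public bot: avoid unless intentional

const dmPolicy: DmPolicy = {
  mode: "allowlist",
  allowedSenders: ["telegram:123456789"],             // example ID format, hypothetical
};

export { dmPolicy };
```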

Hosted vs self-hosted: quick comparison

| Topic | Hosted AI assistants | Self-hosted (OpenClaw style) |
| --- | --- | --- |
| Setup | Fastest start | More setup and maintenance |
| Data/control | Provider-controlled storage and policies | You control runtime, storage, and access decisions |
| Customization | Often limited to what the product exposes | Deeper customization via tools, skills, and config |
| Security | Shared responsibility, provider guardrails | More control – but more ways to misconfigure |
| Cost model | Subscription-based | Infrastructure + model usage (API) or hardware (local models) |
| Best fit | Convenience-first workflows | Sensitive workflows, automation, “my keys, my rules” |

Use cases for OpenClaw

OpenClaw tends to shine when you want repeatable workflows with tight control over permissions and data handling.

Here are self-hosted-friendly examples that map well to real work.

Inbox triage and follow-ups (with guardrails)

  • Summarize new messages into 5 bullets
  • Draft replies for approval
  • Create a “needs my input” list
  • Escalate anything from VIP senders

Tip: start with a dedicated mailbox or limited-scope credentials so mistakes do not hit your primary account.
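
To make this concrete, a standing instruction for that kind of triage might look like the sketch below. The wording, schedule, VIP list, and five-bullet format are placeholders to adapt, not OpenClaw conventions.

```typescript
// Illustrative standing instruction for an inbox-triage workflow.
// The prompt text and sender addresses are placeholders; adapt them to your own mailbox.
const VIP_SENDERS = ["ceo@example.com", "biggest-client@example.com"];

export const TRIAGE_INSTRUCTION = `
Every weekday at 08:00:
1. Summarize all unread messages into at most 5 bullets.
2. Draft (but do NOT send) replies for anything that needs a response.
3. Produce a "needs my input" list, one line per item.
4. Flag anything from these senders immediately: ${VIP_SENDERS.join(", ")}.
Never send email without my explicit approval.
`;
```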

Calendar and meeting coordination

  • Propose meeting times based on availability
  • Generate agendas from recent context
  • Send confirmations to participants

Marketing ops: content pipelines

  • Turn rough notes into publish-ready drafts
  • Create variant copy for ads and landing pages
  • Maintain a brand glossary and banned phrases
  • Draft weekly changelogs from internal updates

Lightweight sales and client support automation

  • Draft outreach emails based on a prospect’s website
  • Generate call prep briefs
  • Route urgent requests to the right channel
  • Create follow-up tasks and reminders

Ops and IT runbooks (handle with care)

  • Summarize incidents from logs and propose next steps
  • Run read-only diagnostic commands
  • Prepare remediation steps for a human to execute

This is where you want the tightest permissions and the strongest sandboxing.

Self-hosting: what you need

OpenClaw documents support for macOS, Linux, and Windows (with WSL2 recommended on Windows), and it lists Node.js 22+ as a requirement.

Deployment options

  • Your laptop/desktop: simplest, but be careful mixing personal browsing and agent credentials.
  • Dedicated mini PC / homelab: a strong “always-on” option without public exposure.
  • VPS: convenient uptime, but higher exposure risk if you open ports incorrectly. Prefer VPN/tailnet patterns where possible.

Tip: If you want a straightforward VPS that’s powerful and affordable, Hetzner is a solid option. New accounts get a ~$20 credit via our referral link.

Model options – hosted vs local

Hosted models (API) – the easiest and usually most reliable for tool-use.

  • Best choice if you want consistent quality, fast responses, and minimal setup.
  • Good default for most workflows: Gemini 3 Flash. It’s fast, cost-efficient, and strong enough for summaries, drafts, routing, light analysis, and everyday agent tasks.
  • “Smart but expensive” option: Claude Opus 4.5 / 4.6. These models are better for deep reasoning, long multi-step workflows, complex coding, and difficult edge cases. Because they are costly, it usually makes sense to use them only for the hardest 10–20% of tasks.
  • Practical setup: use a fast, cheaper model as your default, and escalate to Opus only when the task clearly needs more reasoning power (a minimal routing sketch follows below).
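
A minimal sketch of that escalation pattern, where the model IDs and the “hard task” heuristic are chosen purely for illustration rather than taken from OpenClaw’s configuration:

```typescript
// Illustrative routing heuristic: cheap, fast model by default; expensive model on demand.
// The model IDs and keyword check are placeholders, not recommendations baked into OpenClaw.
const DEFAULT_MODEL = "google/gemini-3-flash";          // hypothetical router-style model ID
const ESCALATION_MODEL = "anthropic/claude-opus-4.5";   // hypothetical router-style model ID

function pickModel(task: string): string {
  const needsDeepReasoning =
    task.length > 2000 ||                               // long, multi-step instructions
    /\b(refactor|debug|architecture|migration|legal)\b/i.test(task);
  return needsDeepReasoning ? ESCALATION_MODEL : DEFAULT_MODEL;
}

console.log(pickModel("Summarize my unread email"));                            // default model
console.log(pickModel("Refactor this service and plan the schema migration"));  // escalates to Opus
```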

Tip: use a model router

  • A model router like OpenRouter lets you access many models through one API.
  • This makes it easy to switch models, add fallbacks, or route “easy” vs “hard” tasks to different models without changing your whole setup.
  • This pairs well with OpenClaw’s model selection and fallback logic; a short client sketch follows below.
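
For illustration, here is how a generic OpenAI-compatible client points at OpenRouter; switching models then becomes a one-string change. The model ID is an example, and whether you call OpenRouter directly like this or through OpenClaw’s provider settings depends on your setup.

```typescript
// Minimal sketch: one OpenAI-compatible client, many models, via OpenRouter.
// Requires an OpenRouter API key in OPENROUTER_API_KEY; the model ID below is only an example.
import OpenAI from "openai";

const router = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",   // OpenRouter's OpenAI-compatible endpoint
  apiKey: process.env.OPENROUTER_API_KEY,
});

async function draft(prompt: string): Promise<string> {
  const reply = await router.chat.completions.create({
    model: "google/gemini-3-flash",          // swap this string to change models
    messages: [{ role: "user", content: prompt }],
  });
  return reply.choices[0]?.message?.content ?? "";
}

draft("Draft a 3-line follow-up email.").then(console.log).catch(console.error);
```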

Local models – mainly for advanced users.

  • Best when maximum data control matters and you are willing to trade speed and simplicity for privacy.
  • Requires adequate CPU/GPU resources and more tuning.
  • Tool-use reliability varies a lot with local models, so it’s safer to keep tools restricted at first and expand gradually once behavior is predictable.

Storage and backups

Plan for where configs, logs, transcripts, and any cached files live.

Back up what you must (configs/workflows) and avoid backing up raw secrets unless encrypted and access-controlled.

Simple sizing guide

These are practical starting points for a single-user setup.

They are estimates, not official requirements, and real needs depend on concurrency, enabled tools, and whether you run local inference.

  • Orchestration only (hosted LLM): 2-4 vCPU, 4-8 GB RAM, 10-30 GB SSD
  • Heavier tool usage (more logs/files): 4-8 vCPU, 8-16 GB RAM, 30-100 GB SSD
  • Local model inference (rough ranges): small models often need ~8-12 GB VRAM; smoother tool-use often benefits from ~16-24+ GB VRAM; CPU-only is possible but slow

OpenClaw’s docs note that weaker/over-quantized models can be more vulnerable to prompt injection and unsafe behavior when tools are enabled.

Security and privacy considerations

OpenClaw is powerful because it can do real work. That also makes it a higher-risk system than a typical chatbot tab. The safety strategy is not “trust the prompt” – it is permissions, isolation, and explicit allowlists.

The two big risk buckets

  • Prompt injection: an attacker crafts a message that tries to trick the model into unsafe actions. The docs recommend enforcing safety via tool policies, approvals, sandboxing, and allowlists.
  • Skill supply chain: skills are third-party code. Researchers have documented malicious patterns in skills and “paste this into your terminal” social engineering flows.

There has also been mainstream coverage warning about deployment and misconfiguration risks with AI agent systems, including OpenClaw.

A sane baseline (practical hardening)

  • Run openclaw security audit regularly, especially after configuration changes.
  • Use pairing or allowlist DM policies – avoid “open” modes unless you truly want a public bot.
  • Enable DM session isolation if more than one person can message the bot.
  • Keep browser control and command execution tightly gated, sandboxed, and explicitly allowlisted.
  • Use separate credentials (or dedicated accounts) for email/calendar/chat where possible.

Checklist (copy-paste):

  • [ ] Run openclaw security audit after setup changes
  • [ ] DM policy set to pairing or allowlist (not open)
  • [ ] Group chats require mention or are allowlist-only
  • [ ] DM sessions isolated if multiple people can message the bot
  • [ ] Sandbox enabled for any agent with file or shell access
  • [ ] Tool allowlist is explicit (deny exec/browser unless needed)
  • [ ] Dedicated browser profile for the agent (avoid personal profiles)
  • [ ] Separate credentials for critical tools where possible
  • [ ] Backups exclude raw secrets or are encrypted and access-controlled
  • [ ] Skills only from trusted sources; review before installing

Getting started: a “first weekend” plan

The goal is to get something useful fast without creating a security incident. Start with one channel, two tools, tight permissions, and one workflow you can measure.

Day 1: install and lock down access

  • Install + onboard using the recommended flow, then confirm the gateway is running and open the dashboard. (Getting started)
  • Pick your model approach: hosted models are usually the easiest starting point for reliable tool-use.
  • Connect one chat channel and set DM access to pairing/allowlist.
  • Run the security audit and fix obvious issues before enabling extra tools.

Day 2: connect 2 tools and build 1 measurable workflow

  • Connect two “boring” tools that create immediate leverage (email + calendar, or chat + files).
  • Set permissions: default to read-only and “ask before acting”; widen later.
  • Build one workflow (example: “Monday inbox triage” – summarize, draft replies, produce a short action list).
  • Measure outcomes: minutes saved, reduced back-and-forth, error rate, and how often you had to step in (a tiny scorecard sketch follows below).
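
A deliberately tiny scorecard keeps “is this worth it?” a data question instead of a feeling. Everything below (file name, fields) is an arbitrary choice for the example, unrelated to OpenClaw itself.

```typescript
// Illustrative workflow scorecard: append one record per run, then summarize the totals.
import { appendFileSync, readFileSync } from "node:fs";

type RunRecord = {
  date: string;          // ISO date of the run
  minutesSaved: number;  // honest estimate vs. doing the task manually
  interventions: number; // times you had to step in or correct the assistant
  errors: number;        // outcomes that were simply wrong
};

const LOG_FILE = "triage-scorecard.jsonl";

export function logRun(record: RunRecord): void {
  appendFileSync(LOG_FILE, JSON.stringify(record) + "\n");
}

export function summarize() {
  const runs = readFileSync(LOG_FILE, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line) as RunRecord);
  const minutesSaved = runs.reduce((sum, r) => sum + r.minutesSaved, 0);
  const interventionRate =
    runs.filter((r) => r.interventions > 0).length / Math.max(runs.length, 1);
  return { runs: runs.length, minutesSaved, interventionRate };
}
```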

GPTBot.io tip: document your allowed actions policy and your first successful workflow. Those two artifacts make scaling safer and easier.

FAQ

What is OpenClaw?

OpenClaw is a self-hosted AI assistant you run on your own device or server, designed to operate through chat apps and take tool-based actions under your control.

Do I need a GPU to use OpenClaw?

Not if you use hosted model providers. A GPU mainly matters if you want local model inference.

Can OpenClaw run fully offline?

Potentially, if you use local models and avoid online tools. In practice, many workflows (email, calendar, chat) require internet connectivity.

Is it safe to expose OpenClaw to the public internet?

It can be risky. OpenClaw’s docs emphasize allowlists, pairing, sandboxing, and careful network exposure. Prefer VPN/tailnet access first.

What is the difference between tools and skills?

Tools are action interfaces (messaging, files, exec, browser). Skills are extension packages that can include scripts and instructions, and should be treated like installing code.

How do I reduce the risk of malicious skills?

Install only from trusted sources, review skill contents, avoid opaque terminal commands, keep tools sandboxed, and re-run security audits after changes.

What should a company do before deploying OpenClaw?

Start in a non-production environment, use dedicated credentials, lock down inbound access, define allowed actions, and plan incident response (rotation, logs, audit trails).

Glossary

  • Agent – a configured assistant profile (rules, tools, workspace access).
  • Gateway – the always-on service that powers the assistant and UI.
  • Control UI / Dashboard – the web interface to manage and chat with the assistant.
  • Tool – an action capability (read files, send messages, run commands).
  • Skill – an extension package that can include instructions/scripts; treat like code.
  • Sandboxing – isolating tools and workspace access to reduce blast radius.
  • Pairing – a DM safety mode where unknown senders must be approved.
  • Allowlist – explicit “who can use the assistant” permissions.
  • Prompt injection – messages designed to manipulate a model into unsafe actions.
  • Model failover – switching between models/providers when the primary fails.