Understanding the Gateway
The Gateway is the heart of OpenClaw. It is a single long-lived Node.js process that handles everything—channel connections, session state, the agent loop, model API calls, tool execution, and memory persistence. In this lesson you will learn how the Gateway processes messages, how the agentic loop works, and what makes autonomous execution possible.
What the Gateway Does
Think of the Gateway as a switchboard operator that sits between you and your AI model. It:
- Receives messages from any connected channel (WhatsApp, Discord, Slack, etc.)
- Standardizes them into a common format regardless of the source
- Assembles context by injecting memory, skills, and configuration
- Sends the prompt to your chosen AI model
- Executes tool calls the model requests
- Loops until the task is resolved or a limit is reached
- Streams the response back to the original channel
- Persists everything to JSONL transcripts for auditing and replay
The Gateway runs as a background daemon (launchd on macOS, systemd on Linux), which means it stays active even when you close your terminal.
The Message Pipeline
Every message your agent receives passes through a strictly defined pipeline with six stages:
Channel Adapter → Gateway Server → Lane Queue → Agent Runner → Agentic Loop → Response Path
1. Channel Adapter
Each platform has its own adapter that converts incoming messages into a standard internal format. A WhatsApp voice note, a Discord slash command, and a Slack message all end up as the same type of object by the time they reach the Gateway Server.
This is why you can switch platforms without changing anything about your agent's behavior.
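A minimal sketch of the normalization step might look like the following. The `InboundMessage` shape and the adapter functions are illustrative assumptions, not OpenClaw's actual internal types:

```typescript
// Illustrative common format; OpenClaw's real internal type will differ.
interface InboundMessage {
  channel: string;         // source platform
  userId: string;          // platform-specific user identifier
  conversationId: string;  // which conversation this belongs to
  text: string;            // message body (transcribed if it was audio)
  receivedAt: number;      // epoch milliseconds
}

// Hypothetical Discord payload -> common format.
function fromDiscord(payload: {
  author: { id: string };
  channel_id: string;
  content: string;
}): InboundMessage {
  return {
    channel: "discord",
    userId: payload.author.id,
    conversationId: payload.channel_id,
    text: payload.content,
    receivedAt: Date.now(),
  };
}

// Hypothetical Slack payload -> the same common format.
function fromSlack(payload: {
  user: string;
  channel: string;
  text: string;
}): InboundMessage {
  return {
    channel: "slack",
    userId: payload.user,
    conversationId: payload.channel,
    text: payload.text,
    receivedAt: Date.now(),
  };
}
```

Because both functions return the same `InboundMessage`, everything downstream of the adapter stays platform-agnostic.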
2. Gateway Server
The Gateway Server is the session coordinator. It:
- Identifies the user and conversation
- Checks access controls (is this user allowed?)
- Assigns the message to the correct processing queue
3. Lane Queue
The Lane Queue enforces serial execution by default. Messages are processed one at a time to prevent conflicts—if your agent is booking a flight, you do not want a second message to trigger a conflicting action mid-booking.
Parallelism is only allowed for tasks explicitly marked as low-risk.
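Serial-by-default can be sketched with a per-lane promise chain. This is an illustrative implementation pattern, not OpenClaw's actual queue code:

```typescript
// Minimal serial lane queue sketch. Each lane keeps a "tail" promise;
// new tasks chain onto it, so tasks in the same lane run strictly one
// after another, while different lanes proceed independently.
class LaneQueue {
  private tails = new Map<string, Promise<void>>();

  enqueue<T>(lane: string, task: () => Promise<T>): Promise<T> {
    const tail = this.tails.get(lane) ?? Promise.resolve();
    const run = tail.then(task);
    // Keep the chain alive even if this task rejects.
    this.tails.set(lane, run.then(() => undefined, () => undefined));
    return run;
  }
}
```

A second message for the same conversation waits until the first finishes; a task explicitly flagged low-risk could instead be dispatched outside the queue.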
4. Agent Runner
The Agent Runner prepares the prompt for the AI model. It:
- Selects the model based on your configuration
- Manages API key rotation (cooling overloaded keys)
- Assembles the system prompt from SOUL.md, USER.md, AGENTS.md, and relevant skills
- Manages the context window (compacting old messages when approaching token limits)
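Prompt assembly can be pictured as concatenating those sources in a fixed order. The function and section headings below are illustrative assumptions; the file names come from the lesson:

```typescript
// Sketch of system-prompt assembly; OpenClaw's real logic will differ.
function assembleSystemPrompt(parts: {
  soul: string;     // contents of SOUL.md: the agent's persona
  user: string;     // contents of USER.md: facts about the user
  agents: string;   // contents of AGENTS.md: operating rules
  skills: string[]; // only the skills relevant to this turn
}): string {
  return [
    "## Persona\n" + parts.soul,
    "## User\n" + parts.user,
    "## Rules\n" + parts.agents,
    ...parts.skills.map((s, i) => `## Skill ${i + 1}\n` + s),
  ].join("\n\n");
}
```

Injecting only the relevant skills (rather than every installed one) is what keeps this assembled prompt within budget, as covered under Context Window Management below.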
5. Agentic Loop
This is where the autonomous behavior happens. The loop follows a cycle:
Input → Context → Model → Tools → Repeat → Reply
More specifically:
- Perceive: The model receives the user's message plus context (memory, skills, conversation history)
- Plan: The model formulates a reasoning chain (e.g., "book a meeting" becomes: check calendars → pick a slot → send invite → create event)
- Execute: The model calls tools (skills) to carry out each step
- Evaluate: Results are fed back to the model to decide the next action
- Loop or Respond: If more steps are needed, return to the Plan step. Otherwise, compose the final reply.
The loop continues until the model decides the task is complete, an error occurs, or a configured limit is reached (max iterations, max tokens, or timeout).
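The cycle above can be sketched as a loop around a model call. The `ModelReply` shape, the tool registry, and the limit handling here are assumptions for illustration, not OpenClaw's actual interfaces:

```typescript
// Illustrative agentic loop: the model either requests a tool call or
// returns a final reply; maxIterations bounds the loop as described above.
type ModelReply =
  | { kind: "tool"; name: string; args: string }
  | { kind: "final"; text: string };

async function runAgentLoop(
  callModel: (history: string[]) => Promise<ModelReply>,
  tools: Record<string, (args: string) => Promise<string>>,
  userMessage: string,
  maxIterations = 10,
): Promise<string> {
  const history = [userMessage]; // perceive: message plus accumulating context
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callModel(history);            // plan
    if (reply.kind === "final") return reply.text;     // respond
    const tool = tools[reply.name];
    if (!tool) throw new Error(`unknown tool: ${reply.name}`);
    const result = await tool(reply.args);             // execute
    history.push(`tool ${reply.name} returned: ${result}`); // evaluate
  }
  throw new Error("max iterations reached");
}
```

The key property is that tool results are appended to the history and fed back into the next model call, which is what lets the model evaluate its own progress.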
6. Response Path
The final response is streamed back through the channel adapter to the original platform. Simultaneously, the entire exchange—user message, model reasoning, tool calls, tool results, and final response—is written to a JSONL transcript file for auditing and replay.
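JSONL is simply one JSON object per line, which makes both appending and replay trivial. The event fields below are an illustrative guess, not OpenClaw's exact transcript schema:

```typescript
// One JSON object per line, appended as events happen.
interface TranscriptEvent {
  ts: string; // ISO timestamp
  type: "user" | "assistant" | "tool_call" | "tool_result";
  payload: unknown;
}

function toJsonlLine(event: TranscriptEvent): string {
  return JSON.stringify(event) + "\n";
}

// Replay is the reverse: split on newlines, parse each non-empty line.
function parseTranscript(jsonl: string): TranscriptEvent[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as TranscriptEvent);
}
```

Append-only JSONL is a good fit for auditing because each event is durable as soon as it is written, and a crash mid-write corrupts at most the final line.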
Plugin Hooks
OpenClaw provides hooks at key points in the pipeline where you can inject custom logic:
| Hook | When It Runs |
|---|---|
| message_received | A new message arrives from any channel |
| before_agent_start | Before the agentic loop begins |
| before_tool_call | Before a skill/tool is executed |
| after_tool_call | After a skill/tool returns a result |
| before_compaction | Before old messages are compacted to save context |
| after_compaction | After compaction completes |
| agent_end | After the agentic loop finishes |
| message_sending | Before the response is sent to the channel |
| message_sent | After the response is delivered |
Hooks let you add logging, filter content, trigger notifications, or enforce custom policies without modifying the core Gateway code.
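A hook system like this usually boils down to a registry of named handler lists. The hook names come from the table above; the registry API itself is an illustrative sketch:

```typescript
// Minimal hook registry sketch (not OpenClaw's actual plugin API).
type HookHandler = (context: Record<string, unknown>) => void;

class HookRegistry {
  private handlers = new Map<string, HookHandler[]>();

  on(hook: string, handler: HookHandler): void {
    const list = this.handlers.get(hook) ?? [];
    list.push(handler);
    this.handlers.set(hook, list);
  }

  emit(hook: string, context: Record<string, unknown>): void {
    for (const handler of this.handlers.get(hook) ?? []) handler(context);
  }
}

// Example: log every tool call without touching core Gateway code.
const hooks = new HookRegistry();
hooks.on("before_tool_call", (ctx) => console.log("calling tool:", ctx.tool));
```

The Gateway would call `emit` at each pipeline point; plugins only ever call `on`, which is why they compose without modifying the core.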
The Heartbeat
The heartbeat is a timer that fires every 30 minutes by default. On each heartbeat the agent:
- Reads HEARTBEAT.md from your workspace
- Evaluates each item on the checklist
- Decides whether any item requires action
- Either messages you with an update or responds HEARTBEAT_OK
Example HEARTBEAT.md:
- Check if any GitHub PRs need review
- Remind me about today's meetings 15 minutes before they start
- Monitor the staging server health endpoint
The heartbeat turns your agent from a reactive chatbot into a proactive assistant that works even when you are not sending messages.
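One heartbeat tick reduces to "check each item, report or stay quiet." In the sketch below the function name and the `needsAction` callback are illustrative; in practice the model itself makes that judgment:

```typescript
// Sketch of a single heartbeat tick (illustrative, not OpenClaw's code).
function runHeartbeat(
  checklist: string[],                    // lines parsed from HEARTBEAT.md
  needsAction: (item: string) => boolean, // stand-in for the model's judgment
): string {
  const actionable = checklist.filter(needsAction);
  return actionable.length > 0
    ? `Update: ${actionable.join("; ")}`
    : "HEARTBEAT_OK";
}
```

The HEARTBEAT_OK sentinel matters: it lets the Gateway distinguish "nothing to report" from a real message, so quiet ticks never ping you.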
Context Window Management
AI models have a maximum number of tokens they can process at once (the context window). OpenClaw manages this automatically:
- Compaction: When the conversation history approaches the model's context limit, older messages are summarized and compressed
- Selective skill injection: Only the skills relevant to the current turn are included in the prompt, rather than injecting every installed skill
- Memory layering: Long-term facts go into MEMORY.md, daily logs go into date-stamped files, and the full raw record goes into JSONL transcripts
This means your agent can maintain coherent behavior across long conversations without hitting token limits.
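The compaction strategy can be sketched as folding the oldest messages into a summary once a token budget is exceeded. The token estimator and summarizer below are crude stand-ins for what is, in the real system, a model-driven step:

```typescript
// Compaction sketch: peel messages off the front of the history until
// the estimated token count fits the budget, then replace the peeled
// messages with a single summary entry. Newest messages stay intact.
function compact(
  history: string[],
  tokenBudget: number,
  summarize: (msgs: string[]) => string,
  estimateTokens: (msg: string) => number = (m) => Math.ceil(m.length / 4),
): string[] {
  let total = history.reduce((n, m) => n + estimateTokens(m), 0);
  const recent = [...history];
  const old: string[] = [];
  while (total > tokenBudget && recent.length > 1) {
    const msg = recent.shift()!;
    old.push(msg);
    total -= estimateTokens(msg);
  }
  return old.length > 0 ? [summarize(old), ...recent] : recent;
}
```

Keeping the newest turns verbatim while summarizing the oldest is what preserves coherent behavior: recent detail stays exact, and older context survives in compressed form.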
Key Takeaway
The Gateway is a single Node.js daemon that orchestrates the entire agent lifecycle—from receiving a WhatsApp message to executing multi-step tool chains and streaming back a response. The agentic loop (input → plan → execute → evaluate → repeat) is what makes OpenClaw autonomous rather than reactive. In the next lesson, you will learn about skills and memory—the systems that give your agent capabilities and persistence.

