Core Concepts
This document explains the fundamental abstractions and terminology used throughout the Monstrum platform. Understanding these concepts is essential before diving into the architecture or plugin development.
Design Philosophy
Bots as Managed Agents
In Monstrum, AI bots are not just wrappers around LLMs. A Bot is a managed agent — an independent entity with identity, permissions, credentials, memory, and a budget. Managing AI agents and managing employees in an organization turn out to be structurally the same problem.
| What an Employee Gets | What a Bot Gets |
|---|---|
| Identity & role — name, title, department | Bot profile — name, description, model, personality prompt |
| Capability boundaries — “you may use these tools” | Tool visibility — only authorized tools exist in the bot’s world |
| Credentials — badge, keys (can’t see the safe code) | Encrypted credentials — used at execution time, never visible to AI |
| Policy constraints — “read-only access to production” | Declarative scope rules — parameter-level checks enforced by code |
| Memory — experience, context, team knowledge | Partitioned memory — global, channel, task, and resource scopes |
| Standard procedures — SOPs, checklists | Workflows — visual DAG editor with branching, parallelism, approval gates |
| Teamwork — delegate tasks, share context | Bot-to-bot delegation — permissions can only narrow, never widen |
| Budget — spending limits, expense tracking | Token budget — real-time tracking with automatic cutoff |
| Activity records — timesheets, audit trails | Audit log — every tool call, every LLM request, every token recorded |
| Communication channels — email, Slack, phone | Multi-channel gateway — Slack, Feishu, Telegram, Discord, Webhook |
Platform Enforcement, Not AI Self-Discipline
LLMs are capable but untrustworthy — they hallucinate, can be prompt-injected, and may exceed their authority. Monstrum’s core stance: security is enforced by the platform architecture, not by relying on the AI’s self-restraint.
Three governance principles follow:
- Least Privilege — Unauthorized tools are completely invisible to the LLM. Not “visible but forbidden” — the LLM doesn’t know they exist.
- Credential Isolation — Credentials are AES-256 encrypted. The LLM never touches plaintext credentials at any point in its lifecycle.
- Fail-Closed — If anything goes wrong, the bot can do nothing; it never falls back to unrestricted access.
ResourceType
A ResourceType declares the capabilities of an integration — what tools it provides, what credentials it needs, and what permission dimensions it supports. Think of it as the “class definition” for a type of external system.
Every ResourceType includes:
- Tools (`ToolDef[]`) — The LLM-callable tool definitions with names, descriptions, parameter schemas, and operation categories.
- Scope Dimensions (`ScopeDimension[]`) — Declarative rules that define which parameters are subject to permission checking (e.g., “check the `repo` parameter against the allowed repos list using fnmatch”).
- Auth Methods (`AuthMethodDef[]`) — Supported ways to authenticate (OAuth, API Key, SSH Key, etc.).
- Credential Schema — Fields required for credentials (encrypted at rest).
- Config Schema — Non-sensitive configuration fields (API base URL, region, etc.).
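Concretely, a ResourceType's shape can be sketched as a few Python dataclasses. The class names below mirror the terms above (`ToolDef`, `ScopeDimension`), but the exact fields are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ToolDef:
    name: str
    description: str
    parameters: dict   # JSON-schema-style parameter definitions
    operation: str     # operation category, e.g. "issue.list"

@dataclass
class ScopeDimension:
    parameter: str           # which tool parameter to check, e.g. "repo"
    constraint_key: str      # key in the Bot's scope constraints, e.g. "repos"
    match: str = "fnmatch"   # matching strategy

@dataclass
class ResourceType:
    name: str
    tools: list = field(default_factory=list)
    scope_dimensions: list = field(default_factory=list)
    auth_methods: list = field(default_factory=list)
    credential_schema: dict = field(default_factory=dict)  # encrypted at rest
    config_schema: dict = field(default_factory=dict)      # non-sensitive config

# Illustrative instance: a GitHub-like integration with one tool and
# one scope rule ("check `repo` against the allowed repos list").
github = ResourceType(
    name="github",
    tools=[ToolDef("github_list_issues", "List issues in a repo",
                   {"repo": {"type": "string"}}, "issue.list")],
    scope_dimensions=[ScopeDimension(parameter="repo", constraint_key="repos")],
)
```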
Monstrum ships with 8 built-in ResourceTypes: SSH, MCP, Bot (inter-bot delegation), Local (sandboxed command execution), Web (search + fetch), Browser (headless automation), RMCP (reverse MCP via WebSocket), and Docker (container sandbox). External plugins (like GitHub) add more.
Resource
A Resource is a concrete instance of a ResourceType — the actual connection to an external system. For example, “My Company’s GitHub” is a Resource of type github, with a specific API URL and encrypted credentials.
One ResourceType can have many Resources. A workspace might have two GitHub Resources: one for the company organization and one for a personal account, each with different credentials.
Bot
A Bot is the platform’s managed AI entity. It combines:
- An LLM configuration (which model, which provider)
- Resource bindings (which external systems it can access)
- Permission policies (what operations and parameter ranges are allowed)
- Runtime configuration (system prompt, memory settings, agent mode)
- Budget (monthly token limit)
A Bot is not the same as the LLM itself. The Bot is the “shell” that gives the LLM identity, tools, and constraints.
BotResource
A BotResource is the binding between a Bot and a Resource. This is the core carrier of permissions. Each binding specifies:
- Which Resource the Bot can access
- Which credential to use
- RolePermissions: allowed operations (glob patterns like `issue.*`), allowed tools (for dynamic types), scope constraints (parameter-level restrictions like `repos: ["myorg/*"]`), and delegate constraints (for Bot-to-Bot calls).
When a Bot has multiple bindings of the same ResourceType, the platform automatically prefixes tool names with the resource name to disambiguate (e.g., Work__github_list_issues vs Personal__github_list_issues).
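A minimal sketch of the disambiguation rule (the helper name is hypothetical; only the `Name__tool` pattern comes from the example above):

```python
def prefixed_tool_name(resource_name: str, tool_name: str) -> str:
    # Prefix with the resource name so two bindings of the same
    # ResourceType expose distinct tool names to the LLM.
    return f"{resource_name}__{tool_name}"

print(prefixed_tool_name("Work", "github_list_issues"))      # Work__github_list_issues
print(prefixed_tool_name("Personal", "github_list_issues"))  # Personal__github_list_issues
```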
Permission Engine (Guardian)
Monstrum uses a two-layer permission system. Both layers are independent — either one alone is sufficient to prevent privilege escalation.
Layer 1: ToolResolver (Pre-LLM)
Before the LLM sees any tools, ToolResolver filters the available tools based on BotResource bindings. The LLM only receives tools that the Bot is explicitly authorized to use. If an error occurs, an empty tool list is returned (fail-closed).
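The filtering step can be sketched in a few lines of Python. The catalog and binding shapes here are assumptions for illustration; the key behavior is that only explicitly authorized operations survive, and any error collapses to an empty tool list:

```python
from fnmatch import fnmatch

def resolve_tools(bindings, catalog):
    """Return only the tools a Bot is explicitly authorized to use.

    `bindings` maps resource type -> allowed operation globs,
    e.g. {"github": ["issue.*"]}; `catalog` maps tool name ->
    (resource_type, operation).
    """
    try:
        visible = []
        for tool_name, (rtype, operation) in catalog.items():
            patterns = bindings.get(rtype, [])
            if any(fnmatch(operation, p) for p in patterns):
                visible.append(tool_name)
        return visible
    except Exception:
        return []  # fail-closed: on any error the LLM sees no tools

catalog = {
    "github_list_issues": ("github", "issue.list"),
    "github_delete_repo": ("github", "repo.delete"),
}
print(resolve_tools({"github": ["issue.*"]}, catalog))  # ['github_list_issues']
```

Note that `github_delete_repo` is not "visible but forbidden": it is simply absent from the list the LLM receives.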
Layer 2: Guardian (Post-LLM)
After the LLM selects a tool and provides parameters, Guardian validates the parameters against the ScopeDimension declarations. For example, it checks whether the repo parameter matches the allowed patterns in scope_constraints. If validation fails, the call is rejected before reaching the executor.
This means even prompt injection attacks cannot bypass permission enforcement — the LLM might try to call unauthorized tools or use unauthorized parameters, but Guardian blocks them at the code level.
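A Guardian-style scope check can be sketched with `fnmatch`, matching the declarative rule quoted earlier (“check the `repo` parameter against the allowed repos list”). The data shapes below are illustrative:

```python
from fnmatch import fnmatch

def guardian_check(params, scope_constraints, scope_dimensions):
    # scope_dimensions: (parameter, constraint_key) pairs declared by the
    # ResourceType; scope_constraints: the Bot's binding, e.g. {"repos": ["myorg/*"]}.
    for parameter, constraint_key in scope_dimensions:
        value = params.get(parameter)
        allowed = scope_constraints.get(constraint_key, [])
        # Fail-closed: a missing value or no matching pattern rejects the call.
        if value is None or not any(fnmatch(value, p) for p in allowed):
            return False
    return True

dims = [("repo", "repos")]
constraints = {"repos": ["myorg/*"]}
print(guardian_check({"repo": "myorg/api"}, constraints, dims))   # True
print(guardian_check({"repo": "evil/exfil"}, constraints, dims))  # False
```

Even if prompt injection convinces the LLM to emit `repo="evil/exfil"`, the check fails in code before the executor runs.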
Plugin System
A plugin adds a new ResourceType to the platform. Custom integrations are defined through declarative YAML manifests that specify tools, credentials, permission dimensions, and auth methods. The platform auto-renders dashboard forms, indexes tools, and enforces permissions based on the manifest alone.
You can upload custom integration manifests through the dashboard or the Python SDK. The platform handles credential injection, permission enforcement, and audit logging automatically.
Tool Catalog
The ToolCatalog is a data-driven global index of all available tools. At startup, it loads tool definitions from the database (both built-in types and plugins). It maps tool names to their ResourceType and operation, enabling ToolResolver to quickly determine which tools a Bot can see.
For dynamic ResourceTypes (MCP, RMCP), tools are registered and unregistered at runtime as external tool servers connect and disconnect.
Session and Task
Bots operate in two modes:
- Session — Triggered by user messages from IM channels. Supports multi-turn conversations with persistent context. Sessions are automatically recycled after 30 minutes of inactivity, triggering memory extraction before cleanup.
- Task — Triggered by API calls, scheduled jobs, Bot-to-Bot delegation, or events. Single execution with independent context. Completes when done.
Bot Memory
Monstrum’s memory system gives bots long-term context that persists across sessions. Memories are stored in the database and partitioned by scope:
- Global — Shared across all contexts
- Channel — Specific to an IM conversation
- Task — Exists for a single task execution
- Resource — Related to a specific resource
Memories are automatically extracted from conversations by an LLM-based extractor when sessions expire or tasks complete. Bots can also manage their own memories with built-in tools (write, delete, load/unload cross-scope memories).
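As a rough illustration of scope partitioning, a toy store keyed by (scope, key) might look like the following. The real platform persists memories in the database; the API shown here is hypothetical:

```python
from collections import defaultdict

class MemoryStore:
    """Toy in-memory sketch of scope-partitioned bot memory."""

    SCOPES = ("global", "channel", "task", "resource")

    def __init__(self):
        self._store = defaultdict(list)  # (scope, key) -> list of memories

    def write(self, scope, key, text):
        if scope not in self.SCOPES:
            raise ValueError(f"unknown scope: {scope}")
        self._store[(scope, key)].append(text)

    def load(self, scope, key):
        # Returns a copy so callers cannot mutate stored memories.
        return list(self._store[(scope, key)])

mem = MemoryStore()
mem.write("global", "shared", "User prefers concise answers")
mem.write("channel", "slack:C123", "This channel is for deploy alerts")
print(mem.load("channel", "slack:C123"))
```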
Workflow Orchestration
Monstrum provides a visual DAG-based workflow editor for multi-step task coordination. Workflows support:
- Sequential steps, parallel branches, conditional routing
- Human approval gates
- Variable piping between steps
- Safe AST-based expression evaluation (no `eval()`)
- Three trigger methods: API, cron schedule, or platform event
- Timeout budgets at step and workflow level
- Fail-fast parallel execution
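The no-`eval()` point can be illustrated with a small whitelist-based evaluator built on Python's `ast` module: only constants, variable names, comparisons, and boolean operators are allowed, and anything else (function calls, attribute access) is rejected. This is a sketch, not the platform's actual expression grammar:

```python
import ast
import operator

_OPS = {
    ast.Eq: operator.eq, ast.NotEq: operator.ne,
    ast.Gt: operator.gt, ast.GtE: operator.ge,
    ast.Lt: operator.lt, ast.LtE: operator.le,
}

def safe_eval(expr: str, variables: dict):
    """Evaluate a condition without eval(), via a node whitelist."""
    return _eval(ast.parse(expr, mode="eval").body, variables)

def _eval(node, variables):
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.Name):
        return variables[node.id]
    if isinstance(node, ast.Compare) and len(node.ops) == 1:
        left = _eval(node.left, variables)
        right = _eval(node.comparators[0], variables)
        return _OPS[type(node.ops[0])](left, right)
    if isinstance(node, ast.BoolOp):
        vals = [_eval(v, variables) for v in node.values]
        return all(vals) if isinstance(node.op, ast.And) else any(vals)
    # Anything else (Call, Attribute, Subscript, ...) is refused.
    raise ValueError(f"Disallowed expression node: {type(node).__name__}")

print(safe_eval("status == 'ok' and retries < 3",
                {"status": "ok", "retries": 1}))  # True
```

An injected payload such as `__import__('os')` parses to an `ast.Call` node and raises `ValueError` instead of executing.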
Event System
Platform events cover the full lifecycle: task completion/failure, workflow completion/failure, schedule triggers, session creation/expiration, tool execution results, and custom events. Bots can subscribe to event patterns and react automatically. Events can also trigger workflows directly.
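Pattern subscription can be illustrated with glob matching; the event names and pattern syntax below are assumptions for illustration, not the platform's documented schema:

```python
from fnmatch import fnmatch

# Hypothetical subscriptions: bot name -> event patterns it reacts to.
subscriptions = {
    "ops-bot": ["task.failed", "workflow.*"],
    "report-bot": ["task.completed"],
}

def subscribers(event: str):
    """Return the bots whose patterns match this event name."""
    return sorted(bot for bot, patterns in subscriptions.items()
                  if any(fnmatch(event, p) for p in patterns))

print(subscribers("workflow.completed"))  # ['ops-bot']
print(subscribers("task.completed"))      # ['report-bot']
```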
Audit and Cost Control
Every tool call, LLM request, and token consumption is recorded. The audit system provides:
- Full-chain operation logs with request IDs, parameters, and results
- Token consumption tracking per bot
- Cost estimation in USD
- Budget enforcement (automatic execution termination when budget is exceeded)
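Budget cutoff can be sketched as a counter that raises once the monthly limit is crossed, so execution terminates rather than overspends. Class and method names here are illustrative:

```python
class BudgetExceeded(Exception):
    pass

class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def record(self, tokens: int):
        # Count usage first, then enforce: crossing the limit stops the bot.
        self.used += tokens
        if self.used > self.monthly_limit:
            raise BudgetExceeded(f"{self.used}/{self.monthly_limit} tokens used")

budget = TokenBudget(monthly_limit=1000)
budget.record(600)       # within budget
try:
    budget.record(600)   # crosses the limit -> execution terminated
except BudgetExceeded as exc:
    print("cut off:", exc)
```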
Multi-Tenancy
All entities are isolated by Workspace. Each workspace has its own bots, resources, credentials, and audit logs. Workspace members can be assigned roles: Owner, Admin, Member, or Viewer.