Monstrum Plugin Development Guide

Complete reference for building, testing, and distributing Monstrum plugins.


Architecture Overview

Monstrum is an AI Agent control platform. Every tool call an AI bot makes passes through a strict permission pipeline:

User → Gateway → Session → LLM → ToolResolver → Guardian → Executor → Auditor
                                   (pre-LLM)     (post-LLM)  (your code)

A plugin adds a new ResourceType to the platform. A ResourceType is the complete declarative contract:

| Declaration | Purpose | Consumed By |
|---|---|---|
| tools[] | LLM-callable tool definitions | ToolCatalog → LLM |
| scope_dimensions[] | Permission check rules | Guardian (auto-checked) |
| auth_methods[] | Supported credential flows | Frontend (auto-rendered UI) |
| credential_schema[] | Credential field definitions | Frontend (auto-rendered forms) |
| config_schema[] | Resource config field definitions | Frontend (auto-rendered forms) |

The platform drives all behavior from these declarations. You write an Executor class to implement the actual API calls; everything else — permission enforcement, UI rendering, audit logging, credential encryption — is handled by the platform.

Three-Layer Resource Model

ResourceType  →  Resource (+Credential)  →  Bot
 (your plugin)    (admin configures)        (granted access via BotResource)
  • ResourceType: What your plugin is — tool definitions, permissions, auth methods.
  • Resource: A concrete instance — e.g., “My Company’s GitHub” with API URL and credentials.
  • Bot: An AI agent bound to Resources via BotResource with permission constraints.

Design Philosophy

Why This Architecture?

Monstrum’s plugin model is built on one core belief: the platform enforces permissions, not AI self-discipline. LLMs cannot be trusted to self-police their tool usage. The platform must guarantee that permission checks, audit trails, and scope constraints happen regardless of what the LLM generates.

This leads to three design principles:

1. Declarative over Imperative

Plugins declare what they need — tools, permissions, auth methods — and the platform decides how to enforce them. A scope_dimensions entry in your manifest is all it takes to get parameter-level permission checking; you write zero authorization code. This eliminates an entire class of bugs where a plugin author forgets to check permissions or implements the check incorrectly.

2. Separation of Data Plane and Control Plane

Your executor handles the data plane: making API calls, parsing responses, returning results. The platform handles the control plane: which tools the Bot can see, whether the parameters are within scope, which credentials to inject, and what to log. Your code never sees credentials directly — they arrive pre-injected in ExecuteRequest.credential_fields. Your code never enforces permissions — Guardian does that before your handler is called.

3. Convention-Driven Integration

A plugin is a directory with a monstrum.yaml and an executor.py. The manifest drives everything: the frontend auto-renders credential forms from credential_schema, the ToolCatalog indexes tools from tools[], Guardian evaluates permissions from scope_dimensions[]. This means adding a new integration doesn’t require touching platform code — the manifest is the complete contract.

What Plugins Are (and Aren’t)

A plugin is:

  • A thin adapter between an external API and the platform’s execution model
  • A declarative manifest that describes the integration’s capabilities and constraints
  • Stateless request handlers that transform ExecuteRequest into API calls and return ExecuteResult

A plugin is not:

  • A general-purpose Python application (no background threads, no startup hooks, no global state)
  • Responsible for security or audit (the platform handles both)
  • Aware of the LLM, the user, or the session (your handler sees only the current tool call)

Permission Model

Understanding the permission model is essential before building a plugin, because it dictates what you declare in your manifest and what you can omit from your executor code.

RBAC + Declarative ABAC Hybrid

Monstrum uses a hybrid authorization model:

  • RBAC (Role-Based Access Control) governs which operations a Bot can perform. An admin assigns a Role to a BotResource binding, and the Role’s allowed_operations and allowed_tools determine which tools the Bot can see. This is a coarse-grained gate — the Bot either has access to issue.read or it doesn’t.

  • Declarative ABAC (Attribute-Based Access Control) governs which parameter values are allowed within an authorized operation. This is where scope_dimensions come in. Even if a Bot is authorized for issue.read, its scope might be constrained to repos: ["myorg/*"], meaning it can only read issues from repos matching that pattern.

The key insight for plugin developers: you declare the ABAC rules, the platform enforces them. Your scope_dimensions entries define the attributes, match modes, and error templates. Guardian evaluates them automatically — you never call check_scope() yourself.

Three-Layer Tool Permission

Layer 1: ToolResolver (Pre-LLM)

Before the LLM sees any tools, ToolResolver filters the tool list based on the Bot’s BotResource bindings. The LLM only sees tools the Bot is authorized to use.

Controlled by: RolePermissions.allowed_operations (glob patterns like issue.*) and RolePermissions.allowed_tools (glob patterns like github_*).
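The pre-LLM filter can be sketched with fnmatch globs. This is an illustrative stand-in, not the actual ToolResolver code; the Tool dataclass and visible_tools helper are assumptions for the sketch:

```python
from dataclasses import dataclass
from fnmatch import fnmatch


@dataclass
class Tool:
    name: str
    operation: str


def visible_tools(tools, allowed_operations, allowed_tools):
    """Keep only tools whose operation AND name match at least one glob."""
    return [
        t for t in tools
        if any(fnmatch(t.operation, p) for p in allowed_operations)
        and any(fnmatch(t.name, p) for p in allowed_tools)
    ]


tools = [
    Tool("github_list_issues", "issue.read"),
    Tool("github_create_issue", "issue.write"),
    Tool("github_merge_pr", "pr.write"),
]
# Role allows any issue operation, restricted to github_* tools:
filtered = visible_tools(tools, ["issue.*"], ["github_*"])
# filtered contains the two issue tools; github_merge_pr is hidden from the LLM
```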

Layer 2: Guardian (Post-LLM)

After the LLM selects a tool and provides parameters, Guardian validates the parameters against scope_dimensions declarations and RolePermissions.scope_constraints.

This is where your scope_dimensions come into play. Guardian calls check_scope_declarative() which:

  1. Iterates your scope_dimensions
  2. For each dimension, extracts the parameter value via param_paths
  3. Checks if the value matches any entry in scope_constraints[key]
  4. Returns scope violation if no match
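The four steps above amount to roughly the following. This is a minimal sketch of the algorithm, not the platform's actual check_scope_declarative() implementation:

```python
from fnmatch import fnmatch


def check_scope_declarative(dimensions, params, scope_constraints):
    """Return an error string on the first scope violation, None if all pass."""
    for dim in dimensions:
        # Steps 1-2: extract the parameter value via param_paths (first hit wins)
        value = next((params[p] for p in dim["param_paths"] if p in params), None)
        if value is None:
            continue
        # Step 3: check the value against the configured constraint patterns
        allowed = scope_constraints.get(dim["key"], [])
        if not any(fnmatch(value, pattern) for pattern in allowed):
            # Step 4: no match means scope violation
            return dim["error_template"].format(value=value)
    return None


dims = [{
    "key": "repos",
    "param_paths": ["repo", "owner_repo"],
    "error_template": "Repository {value} is not authorized",
}]
ok = check_scope_declarative(dims, {"repo": "myorg/app"}, {"repos": ["myorg/*"]})
bad = check_scope_declarative(dims, {"repo": "evil/app"}, {"repos": ["myorg/*"]})
```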

Layer 3: Delegate Scope (Bot-to-Bot)

When Bot A calls Bot B (via BotExecutor), the BotResource binding can carry DelegateConstraints that further restrict what Bot B can do:

class DelegateConstraints:
    allowed_tools: list[str] | None    # Pre-LLM: fnmatch tool filter
    scope_constraints: dict[str, list[str]] | None  # Post-LLM: intersected with own scope

This prevents Confused Deputy attacks: even if Bot B has broad GitHub access, the delegate constraints from Bot A’s binding can restrict Bot B to only github_list_* tools on public-org/* repos.

Declarative Scope Checking

For plugins, scope checking is fully declarative — declare scope_dimensions in your manifest and the platform handles everything:

# Example: restrict by project and issue type
scope_dimensions:
  - key: projects
    param_paths: [project, project_key]
    match_mode: pattern
    error_template: "Project {value} is not authorized"

  - key: issue_types
    param_paths: [issue_type]
    match_mode: exact
    operation_filter: "issue.write"
    error_template: "Issue type {value} is not allowed"

An admin then configures the BotResource with:

{
  "scope_constraints": {
    "projects": ["PROJ-*", "INFRA"],
    "issue_types": ["Bug", "Task"]
  }
}

The Bot can only access projects matching PROJ-* or INFRA, and can only create Bug or Task issues.

Delegate Scope (Bot-to-Bot)

When your plugin allows Bot-to-Bot communication, delegate scope prevents Confused Deputy attacks:

{
  "delegate": {
    "allowed_tools": ["github_list_*"],
    "scope_constraints": {
      "repos": ["public-org/*"]
    }
  }
}

This means: when this Bot calls another Bot, the called Bot can only use GitHub list tools and only on public-org/* repos, regardless of its own permissions.
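The intersection semantics can be pictured as checking each parameter value against both pattern sets; a delegated call passes only if both sides allow it. This is a simplified sketch, and the platform's actual intersection logic may differ:

```python
from fnmatch import fnmatch


def allowed(value, own_patterns, delegate_patterns):
    """A delegated call is allowed only if the value satisfies BOTH the
    called Bot's own scope and the caller's delegate constraints."""
    return (
        any(fnmatch(value, p) for p in own_patterns)
        and any(fnmatch(value, p) for p in delegate_patterns)
    )


own = ["*"]                  # Bot B: broad GitHub access of its own
delegate = ["public-org/*"]  # Bot A's binding restricts delegation
```

Even though Bot B's own scope allows everything, the delegate constraints narrow the effective scope to public-org/* for this call chain.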


Quick Start

Create a Jira integration plugin with a manifest, an executor, and optional locale files:

1. Directory Structure

plugins/
└── jira/
    ├── monstrum.yaml      # Plugin manifest
    ├── executor.py         # Executor implementation
    └── locales/
        ├── en-US.json      # English translations
        └── zh-CN.json      # Chinese translations

2. monstrum.yaml

name: jira
version: 1.0.0
description: Jira integration plugin — issues, projects, and transitions
author: Your Name
license: MIT

resource_type:
  id: jira
  name: Jira
  mode: plugin
  tool_discovery: static
  auth_flow: manual

  credential_schema:
    - field: api_token
      type: secret
      required: true
      description: "Jira API Token"
    - field: email
      type: string
      required: true
      description: "Jira account email"

  config_schema:
    - field: api_base
      type: url
      required: true
      description: "Jira instance URL (e.g., https://yourcompany.atlassian.net)"

  auth_methods:
    - method: api_key
      label: API Token
      description: "Authenticate with email + API token"
      credential_schema:
        - field: api_token
          type: secret
          required: true
        - field: email
          type: string
          required: true

  tools:
    - name: jira_list_issues
      description: "List issues from a Jira project."
      operation: issue.read
      input_schema:
        type: object
        properties:
          project:
            type: string
            description: "Project key (e.g., PROJ)"
          status:
            type: string
            description: "Filter by status"
          max_results:
            type: integer
            default: 50
        required: [project]

    - name: jira_create_issue
      description: "Create a new Jira issue."
      operation: issue.write
      input_schema:
        type: object
        properties:
          project:
            type: string
            description: "Project key"
          summary:
            type: string
            description: "Issue summary"
          description:
            type: string
            description: "Issue description"
          issue_type:
            type: string
            default: Task
            description: "Issue type (Task, Bug, Story, etc.)"
        required: [project, summary]

  scope_dimensions:
    - key: projects
      param_paths: [project]
      match_mode: pattern
      error_template: "Project {value} is not authorized"

executor:
  module: executor
  class_name: JiraExecutor

3. executor.py

from __future__ import annotations

import logging

import httpx

from monstrum_sdk import ExecuteRequest, ExecuteResult, HttpExecutorBase

logger = logging.getLogger(__name__)


class JiraExecutor(HttpExecutorBase):
    resource_type = "jira"
    default_api_base = ""  # Set per-resource via config_schema
    default_headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    supported_operations = ["issue.read", "issue.write"]

    OPERATION_HANDLERS = {
        "issue.read": "_handle_issue_read",
        "issue.write": "_handle_issue_write",
    }

    # ── Auth override (Jira uses Basic Auth, not Bearer) ──

    def _build_auth_headers(self, request: ExecuteRequest) -> dict[str, str]:
        import base64
        headers = dict(self.default_headers)
        if request.credential_fields:
            email = request.credential_fields.get("email", "")
            token = request.credential_fields.get("api_token", "")
            if email and token:
                encoded = base64.b64encode(f"{email}:{token}".encode()).decode()
                headers["Authorization"] = f"Basic {encoded}"
        return headers

    # ── Error handling ──

    async def handle_execute_error(
        self, request: ExecuteRequest, error: Exception
    ) -> ExecuteResult:
        if isinstance(error, httpx.HTTPStatusError):
            logger.error(f"Jira API error: {error}")
            return ExecuteResult.error_result(
                f"Jira API error: {error.response.status_code}"
            )
        return await super().handle_execute_error(request, error)

    # ── Handlers ──

    async def _handle_issue_read(
        self, request: ExecuteRequest
    ) -> ExecuteResult:
        project = request.params.get("project", "")
        status = request.params.get("status")
        max_results = request.params.get("max_results", 50)

        jql = f"project = {project}"
        if status:
            jql += f" AND status = \"{status}\""

        data = await self._http_get(
            request,
            "/rest/api/3/search",
            params={"jql": jql, "maxResults": max_results},
        )
        return ExecuteResult.success_result(data)

    async def _handle_issue_write(
        self, request: ExecuteRequest
    ) -> ExecuteResult:
        project = request.params.get("project", "")
        summary = request.params.get("summary", "")
        description = request.params.get("description", "")
        issue_type = request.params.get("issue_type", "Task")

        data = await self._http_post(
            request,
            "/rest/api/3/issue",
            json={
                "fields": {
                    "project": {"key": project},
                    "summary": summary,
                    "description": {
                        "type": "doc",
                        "version": 1,
                        "content": [{
                            "type": "paragraph",
                            "content": [{"type": "text", "text": description}],
                        }],
                    },
                    "issuetype": {"name": issue_type},
                },
            },
        )
        return ExecuteResult.success_result(data)

That’s it. Place the jira/ directory under plugins/, and the platform auto-discovers and registers it at startup.


Plugin Structure

plugins/{plugin_name}/
├── monstrum.yaml          # Required: Plugin manifest
├── executor.py            # Required: Executor implementation
├── __init__.py            # Optional: Package init
├── locales/               # Optional: i18n translations
│   ├── en-US.json
│   └── zh-CN.json
└── requirements.txt       # Optional: Extra pip dependencies

Auto-Discovery

At startup, PluginManager.scan_and_load_all() scans plugins/ for directories containing monstrum.yaml. For each plugin it:

  1. Parses the manifest and validates it
  2. Upserts the ResourceType into the database (tools, scopes, auth methods, schemas)
  3. Loads the executor class via importlib
  4. Instantiates the executor and registers it with ExecutorRegistry
  5. Reloads the ToolCatalog so tools become available to LLMs

Hot-reloading is supported via PluginManager.reload_plugin(name).
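The manifest-scan step can be sketched as a directory walk. discover_plugins is a hypothetical helper for illustration, not the actual PluginManager API, which also validates each manifest:

```python
from pathlib import Path


def discover_plugins(plugins_dir):
    """Yield names of plugin directories that contain a monstrum.yaml manifest."""
    root = Path(plugins_dir)
    for entry in sorted(root.iterdir()):
        if entry.is_dir() and (entry / "monstrum.yaml").is_file():
            yield entry.name
```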


Plugin Manifest (monstrum.yaml)

Top-level Fields

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Unique plugin name (lowercase, alphanumeric + hyphens) |
| version | string | Yes | Semantic version (e.g., 1.0.0) |
| description | string | Yes | Human-readable description |
| author | string | Yes | Author name |
| license | string | No | License identifier (default: MIT) |
| tags | list[string] | No | Searchable tags |
| homepage | string | No | Project homepage URL |
| repository | string | No | Source code repository URL |
| locales_dir | string | No | Translation files directory (default: locales) |
| resource_type | object | Yes | ResourceType declaration (see below) |
| executor | object | Yes | Executor loading configuration (see below) |

resource_type — ResourceType Declaration

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | — | Unique type identifier (e.g., github, jira). Must match executor.resource_type. |
| name | string | Yes | — | Display name shown in UI |
| mode | string | No | plugin | plugin, endpoint, or system |
| description | string | No | "" | Description |
| icon | string | No | "" | Icon identifier for frontend |
| auth_flow | string | No | manual | oauth, manual, or none |
| tool_discovery | string | No | static | static, dynamic, or configured (see tool_discovery Modes) |
| tools | list[ToolDef] | No | [] | Tool definitions |
| scope_dimensions | list[ScopeDimension] | No | [] | Permission dimensions |
| auth_methods | list[AuthMethodDef] | No | [] | Supported authentication methods |
| config_schema | list[FieldDef] | No | [] | Resource configuration fields |
| credential_schema | list[FieldDef] | No | [] | Credential fields |

tools — Tool Definitions

Each tool in the tools list defines one LLM-callable tool:

tools:
  - name: github_list_issues          # Globally unique tool name
    description: "List issues..."      # Description shown to LLM
    operation: issue.read              # Maps to OPERATION_HANDLERS key
    input_schema:                      # JSON Schema for parameters
      type: object
      properties:
        repo:
          type: string
          description: "owner/repo format"
      required: [repo]
    output_schema: null                # Optional: JSON Schema for output
    cost:                              # Optional: billing information
      tokens: 0
      credits: 0.0

Key rules:

  • name must be globally unique. Convention: {resource_type}_{action} (e.g., jira_list_issues).
  • operation maps to a key in your executor’s OPERATION_HANDLERS dict.
  • Multiple tools can share the same operation — use request.tool_name in your handler to distinguish them (see tool_name Routing).
  • input_schema follows JSON Schema and is passed to the LLM for function calling.
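For example, two read tools sharing one operation can be told apart inside a single handler by branching on the original tool name. This sketch uses a stand-in request object and a hypothetical jira_get_issue tool; a real handler receives an ExecuteRequest:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class FakeRequest:  # Stand-in for ExecuteRequest, for illustration only
    tool_name: str
    params: dict[str, Any] = field(default_factory=dict)


def handle_issue_read(request):
    """One operation ("issue.read"), two tools: branch on request.tool_name."""
    if request.tool_name == "jira_get_issue":
        # Hypothetical single-issue tool routed to a per-issue endpoint
        return {"endpoint": f"/rest/api/3/issue/{request.params['key']}"}
    # Default: jira_list_issues uses the search endpoint
    return {"endpoint": "/rest/api/3/search"}
```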

scope_dimensions — Permission Dimensions

Scope dimensions define what parameters are subject to permission checking. Guardian evaluates them automatically after the LLM selects a tool — you don’t write any permission-checking code.

scope_dimensions:
  - key: repos                         # Key in the scope dict
    param_paths: [repo, owner_repo]    # Parameter names to extract value from
    match_mode: pattern                # Matching strategy
    operation_filter: "issue.*"        # Only apply to these operations (glob)
    error_template: "Repository {value} is not authorized"

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| key | string | Yes | — | The key in the scope constraints dict (e.g., repos, projects, domains) |
| param_paths | list[string] | Yes | — | Parameter names to extract the value from (tried in order) |
| match_mode | string | No | pattern | pattern (fnmatch glob), path (filesystem path), exact (string equality) |
| operation_filter | string | No | null | Glob pattern to limit which operations this dimension applies to |
| error_template | string | No | "" | Error message with {value} placeholder |

Match modes:

  • pattern — fnmatch glob matching. "myorg/*" matches "myorg/repo", "*" matches everything.
  • path — Filesystem path matching. Supports ** (recursive), /* (single level), prefix matching.
  • exact — String equality only.
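The three modes differ only in the comparison function. A minimal sketch, assuming simplified semantics (the platform's path mode supports more, such as single-level /* and prefix matching):

```python
from fnmatch import fnmatch


def matches(value, pattern, mode):
    """Compare a parameter value against one constraint pattern."""
    if mode == "exact":
        return value == pattern  # string equality only
    if mode == "path":
        # Simplified: a trailing "/**" matches any depth under the prefix
        if pattern.endswith("/**"):
            return value.startswith(pattern[:-2])
        return fnmatch(value, pattern)
    # Default mode "pattern": fnmatch glob matching
    return fnmatch(value, pattern)
```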

How it works at runtime:

  1. Admin configures a BotResource with scope: {"repos": ["myorg/*", "otherorg/public-*"]}
  2. Bot calls github_list_issues(repo="myorg/myrepo")
  3. Guardian extracts repo from params (via param_paths)
  4. Guardian matches "myorg/myrepo" against ["myorg/*", "otherorg/public-*"] (via match_mode)
  5. Match succeeds → tool call proceeds. No match → scope violation returned to LLM.

auth_methods — Authentication Methods

auth_methods:
  - method: oauth2_auth_code
    label: OAuth Login
    description: "Authorize via GitHub OAuth"
    credential_schema:
      - field: access_token
        type: secret
        required: true
    oauth_config:
      authorization_url: "https://github.com/login/oauth/authorize"
      token_url: "https://github.com/login/oauth/access_token"
      scopes: [repo, "read:org"]
      pkce_required: false

  - method: token
    label: Personal Access Token
    description: "Enter a PAT manually"
    credential_schema:
      - field: access_token
        type: secret
        required: true

Supported method values:

| Method | Description |
|---|---|
| oauth2_auth_code | OAuth 2.0 Authorization Code (browser redirect) |
| oauth2_client_creds | OAuth 2.0 Client Credentials (M2M) |
| oauth2_device_code | OAuth 2.0 Device Code (CLI/IoT) |
| api_key | API key authentication |
| token | Bearer token |
| ssh_key | SSH key pair |
| basic | HTTP Basic Authentication |
| none | No authentication required |

The frontend auto-renders the appropriate credential form based on your auth_methods declaration. For OAuth methods, the platform handles the full flow (redirect, token exchange, refresh).

oauth_config fields:

| Field | Type | Description |
|---|---|---|
| authorization_url | string | OAuth /authorize endpoint |
| token_url | string | OAuth /token endpoint |
| scopes | list[string] | Default scopes to request |
| pkce_required | bool | Whether PKCE is mandatory |
| device_authorization_url | string | Device code flow endpoint |

credential_schema / config_schema — Field Definitions

Both use the same FieldDef structure:

credential_schema:
  - field: access_token
    type: secret
    required: true
    description: "API access token"

config_schema:
  - field: api_base
    type: url
    required: false
    default: "https://api.example.com"
    description: "API base URL"
  - field: region
    type: enum
    required: true
    enum_values: [us, eu, ap]
    description: "API region"

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| field | string | Yes | — | Field name (used as key in credential_fields / resource_config) |
| type | string | Yes | — | string, integer, secret, url, enum |
| required | bool | No | true | Whether the field is mandatory |
| default | any | No | null | Default value |
| enum_values | list[string] | No | null | Valid values (when type: enum) |
| description | string | No | "" | Help text shown in UI |

  • credential_schema fields are encrypted in the database and never exposed to Bots.
  • config_schema fields are stored in plain text (for non-sensitive configuration like API URLs).

executor — Executor Loading Configuration

executor:
  module: executor           # Python module name (relative to plugin dir)
  class_name: GitHubExecutor # Optional: class to load (auto-detected if omitted)

If class_name is omitted, the loader scans the module for the first ExecutorBase subclass.
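The auto-detection can be sketched as scanning the module's attributes for an ExecutorBase subclass. This is illustrative only, using local stand-in classes rather than the real SDK, and glosses over details such as member ordering:

```python
import inspect
import types


class ExecutorBase:  # Stand-in for the SDK base class
    pass


def find_executor_class(module, base=ExecutorBase):
    """Return the first class in `module` that subclasses `base`."""
    for _, obj in inspect.getmembers(module, inspect.isclass):
        if issubclass(obj, base) and obj is not base:
            return obj
    return None


# Simulate a loaded plugin module with one executor class
plugin = types.ModuleType("executor")


class JiraExecutor(ExecutorBase):
    pass


plugin.JiraExecutor = JiraExecutor
```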

tool_discovery Modes

The tool_discovery field controls how tools are registered for your resource type:

static (default) — Tools are defined in the manifest and loaded at startup. This is correct for the vast majority of plugins. The ToolCatalog indexes your tools[] once at load time.

dynamic — Tools are registered at runtime, per resource instance. The manifest declares no tools[]; instead, tools are registered and unregistered via ToolCatalog.register_dynamic_tools(resource_id, tools).

The execution flow for dynamic tools:

  1. An external entity connects and registers tools for a specific resource_id
  2. ToolCatalog.register_dynamic_tools(resource_id, tool_defs) stores the tools
  3. ToolResolver detects tool_discovery: dynamic and calls catalog.get_resource_tools(resource_id) instead of catalog.get_type_tools(type_id)
  4. Each resource instance has its own independent tool list
  5. When the entity disconnects, ToolCatalog.unregister_dynamic_tools(resource_id) removes the tools
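The per-resource registry behavior described above can be modeled with a minimal in-memory stand-in (not the actual ToolCatalog implementation):

```python
class DynamicToolRegistry:
    """Per-resource tool storage, keyed by resource_id (illustrative stand-in)."""

    def __init__(self):
        self._tools = {}

    def register_dynamic_tools(self, resource_id, tool_defs):
        # Replace this resource instance's tool list wholesale
        self._tools[resource_id] = list(tool_defs)

    def get_resource_tools(self, resource_id):
        return self._tools.get(resource_id, [])

    def unregister_dynamic_tools(self, resource_id):
        # Called when the external entity disconnects
        self._tools.pop(resource_id, None)


catalog = DynamicToolRegistry()
catalog.register_dynamic_tools("agent-1", [{"name": "agent_run_task"}])
```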

Two built-in executors use dynamic discovery:

  • monstrum-agent (Monstrum Agent) — external agents connect via WebSocket (/ws/agent), authenticate with an API key, and register their tool definitions. The platform makes those tools available to Bots bound to that specific agent resource. When the agent disconnects, tools are unregistered.
  • mcp — The platform auto-discovers tools from MCP servers at startup (and when resources/credentials are created or updated). Each MCP server exposes its own set of tools, which are registered per resource instance. Discovered tools are persisted to the credential record (discovered_tools_json, discovery_status, last_discovery_at), so they survive restarts without reconnecting.

For both dynamic types, Bot bindings use allowed_tools (glob patterns) for per-tool permission control. The frontend shows discovered tool checkboxes instead of operation checkboxes when binding.

When to use dynamic: Only when each resource instance exposes a different set of tools discovered at runtime. If your tools are fixed, use static.

configured — Tools are defined in per-resource configuration (stored in the resource's config). Not commonly used by plugins.


Executor Implementation

ExecutorBase — Abstract Base Class

Every executor extends ExecutorBase (or HttpExecutorBase). Import from the SDK:

from monstrum_sdk import ExecutorBase, ExecuteRequest, ExecuteResult, ExecuteStatus

Class variables to set:

class MyExecutor(ExecutorBase):
    resource_type = "my_plugin"      # Must match resource_type.id in manifest
    supported_operations = [          # Operations this executor handles
        "data.read",
        "data.write",
    ]
    OPERATION_HANDLERS = {            # operation → handler method name
        "data.read": "_handle_read",
        "data.write": "_handle_write",
    }

HttpExecutorBase — HTTP API Base Class

For plugins that call REST APIs (the vast majority), extend HttpExecutorBase instead. It provides:

  • Automatic auth headers — Bearer token from credentials
  • 401 auto-refresh — If the token expires, calls credential_refresh() and retries
  • HTTP convenience methods — _http_get, _http_post, _http_patch, _http_delete
  • Pagination — GitHub-style Link header pagination via _paginate()

from monstrum_sdk import HttpExecutorBase

class MyExecutor(HttpExecutorBase):
    resource_type = "my_plugin"
    default_api_base = "https://api.example.com"   # Base URL for all requests
    default_headers = {"Accept": "application/json"}  # Merged into every request
    default_timeout = 30.0                          # httpx timeout in seconds

HTTP Methods:

# GET — returns parsed JSON
data = await self._http_get(request, "/endpoint", params={"key": "val"})

# POST — returns parsed JSON
data = await self._http_post(request, "/endpoint", json={"key": "val"})

# PATCH — returns parsed JSON
data = await self._http_patch(request, "/endpoint", json={"key": "val"})

# DELETE — returns raw httpx.Response
resp = await self._http_delete(request, "/endpoint")

# Paginate (GitHub-style Link headers) — returns all items as flat list
all_items = await self._paginate(request, "/items", per_page=100)

# Low-level (for full control)
result = await self._http_request(
    request,
    method="PUT",
    path="/endpoint",
    json={"key": "val"},
    raw_response=False,  # True → return httpx.Response instead of JSON
)

Overriding auth headers:

If your API doesn’t use Bearer tokens (e.g., Basic Auth, API key in header), override _build_auth_headers():

def _build_auth_headers(self, request: ExecuteRequest) -> dict[str, str]:
    headers = dict(self.default_headers)
    if request.credential_fields:
        api_key = request.credential_fields.get("api_key", "")
        headers["X-API-Key"] = api_key
    return headers

Overriding API base URL:

The API base is resolved from request.resource_config["api_base"] if present, otherwise default_api_base. Override _get_api_base() for custom logic:

def _get_api_base(self, request: ExecuteRequest) -> str:
    if request.resource_config:
        region = request.resource_config.get("region", "us")
        return f"https://api.{region}.example.com"
    return self.default_api_base

Web3ExecutorBase — EVM Blockchain Base Class

For plugins that interact with EVM-compatible blockchains (Ethereum, Polygon, Base, Arbitrum, etc.), extend Web3ExecutorBase. It provides:

  • Web3 instance management — Cached per RPC URL, auto-configured from resource config
  • Account management — Private key handling (never exposed to LLM)
  • ERC20 standard ABI — Built-in ABI for common token operations
  • Gas price guard — Configurable max gas price limit
  • Async wrappers — All synchronous web3.py calls wrapped in asyncio.to_thread()

from monstrum_sdk import Web3ExecutorBase

class MyDeFiExecutor(Web3ExecutorBase):
    resource_type = "my_defi"
    supported_operations = ["swap", "provide_liquidity"]

    async def _handle_swap(self, request):
        w3 = self._w3(request)           # Cached Web3 instance
        account = self._get_account(request)  # From credential private_key
        # Use w3 and account for DeFi operations...

Primitive methods (all async, use asyncio.to_thread internally):

| Method | Description |
|---|---|
| _get_balance(request, address, token_address?) | Native or ERC20 token balance |
| _transfer(request, to, value) | Native token transfer (value in ether) |
| _call_contract(request, contract, abi, function, args?) | Read-only contract call |
| _send_transaction(request, contract, abi, function, args?, value?) | Write contract call |
| _get_transaction(request, tx_hash) | Transaction details + receipt |
| _read_events(request, contract, abi, event, from_block, to_block) | Event log reading |
| _estimate_gas(request, to, value?, data?) | Gas estimation |
| _wait_for_receipt(request, tx_hash, timeout?) | Wait for tx confirmation |

ExecuteRequest — Request Object

Every handler receives an ExecuteRequest with all context:

@dataclass
class ExecuteRequest:
    request_id: str           # Unique request ID (for audit trail)
    bot_id: str               # Bot performing the action
    task_id: str              # Task ID (for grouping related calls)
    operation: str            # Operation name (e.g., "issue.read")
    params: dict[str, Any]    # Tool parameters from LLM

    # Credentials (never visible to Bot — injected by platform)
    credential_value: str | None       # Legacy: plain string credential
    credential_fields: dict[str, str] | None  # Structured credential fields

    # Scope & config
    scope: dict[str, Any] | None          # Permission scope constraints
    resource_config: dict[str, Any] | None  # Resource configuration

    # Advanced
    credential_refresh: Any        # async () -> dict | None (OAuth refresh)
    resource_id: str | None        # Resource ID (for multi-resource routing)
    tool_name: str                 # Original tool name (for same-operation dispatch)
    delegate: Any                  # DelegateConstraints (Bot-to-Bot delegation)

ExecuteResult — Result Object

Handlers return ExecuteResult. Use the factory methods:

# Success — data is returned to the LLM
return ExecuteResult.success_result({"issues": [...]})

# Error — error message is returned to the LLM
return ExecuteResult.error_result("Repository not found")

# Scope violation — treated as permission denial
return ExecuteResult.scope_violation("Domain not in allowed list")

Error Semantics

The platform distinguishes three result types, and the LLM sees different feedback for each:

| Result Type | Factory Method | LLM Feedback | Audit Status | When to Use |
|---|---|---|---|---|
| Success | ExecuteResult.success_result(data) | Tool result data | SUCCESS | Normal successful execution |
| Execution Error | ExecuteResult.error_result(msg) | Error message string | FAILURE | API failures, invalid input, runtime errors |
| Scope Violation | ExecuteResult.scope_violation(reason) | "Scope violation: {reason}" | FAILURE | Parameter exceeds authorized scope |

There is also a fourth type that happens before your executor is called:

| Result Type | Source | LLM Feedback | When |
|---|---|---|---|
| Permission Denied | Guardian (pre-execution) | "Permission denied: {reason}" | Operation/tool not authorized by role |

Choosing between error_result and scope_violation:

Use scope_violation() only for cases where the parameters violate the scope constraints configured by the admin — it signals an authorization problem. Use error_result() for everything else: API errors, invalid inputs, missing resources, network failures.

In practice, most plugins only use error_result() because declarative scope_dimensions handle scope violations automatically via Guardian. You’d use scope_violation() in a custom validate_scope() override for logic too complex to express declaratively.

LLM behavior on errors: When the LLM receives an error or scope violation, it typically adjusts its approach — retrying with different parameters, informing the user of the limitation, or choosing an alternative tool. The platform does not retry tool calls automatically; the LLM decides what to do next.

Template Method: execute()

The base class provides a default execute() that dispatches to your handlers via OPERATION_HANDLERS. You typically never override execute() — just define the handler map and methods:

class MyExecutor(HttpExecutorBase):
    OPERATION_HANDLERS = {
        "data.read": "_handle_read",
        "data.write": "_handle_write",
    }

    async def _handle_read(self, request: ExecuteRequest) -> ExecuteResult:
        data = await self._http_get(request, "/data")
        return ExecuteResult.success_result(data)

    async def _handle_write(self, request: ExecuteRequest) -> ExecuteResult:
        data = await self._http_post(request, "/data", json=request.params)
        return ExecuteResult.success_result(data)

The default execute() flow:

execute(request)
  ├── 1. Lookup handler from OPERATION_HANDLERS
  │      → "Unknown operation" error if not found
  ├── 2. pre_execute(request)
  │      → Short-circuit if returns ExecuteResult
  ├── 3. validate_scope(operation, params, scope)
  │      → Scope violation if returns error string
  ├── 4. handler(request)
  │      → Your handler method
  └── 5. On exception: handle_execute_error(request, error)

Lifecycle Hooks

Override these hooks to customize behavior without replacing execute():

pre_execute(request) → ExecuteResult | None

Called before scope validation. Return an ExecuteResult to short-circuit (e.g., for dependency checks), or None to proceed normally.

async def pre_execute(self, request: ExecuteRequest) -> ExecuteResult | None:
    if not self._api_client:
        return ExecuteResult.error_result("API client not configured")
    return None

handle_execute_error(request, error) → ExecuteResult

Called when a handler raises an exception. Override for API-specific error mapping:

async def handle_execute_error(
    self, request: ExecuteRequest, error: Exception
) -> ExecuteResult:
    if isinstance(error, httpx.HTTPStatusError):
        status = error.response.status_code
        body = error.response.text[:200]
        return ExecuteResult.error_result(f"API error {status}: {body}")
    return await super().handle_execute_error(request, error)

Scope Validation

For most plugins, you don’t need to implement scope validation at all. Just declare scope_dimensions in your manifest, and Guardian handles everything declaratively.

Override validate_scope() only for complex validation logic that can’t be expressed declaratively:

def validate_scope(
    self,
    operation: str,
    params: dict[str, Any],
    scope: dict[str, Any] | None,
) -> str | None:
    """Return error message if scope validation fails, None if valid."""
    if not scope:
        return None

    # Custom: check URL scheme
    url = params.get("url", "")
    if url:
        from urllib.parse import urlparse
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https"):
            return f"Unsupported URL scheme: {parsed.scheme}"

    return None

Credential Access

The platform injects credentials into ExecuteRequest — the Bot and LLM never see them.

Using _get_token() (recommended for Bearer-style auth):

token = self._get_token(request)  # Reads credential_fields["access_token"]
token = self._get_token(request, field="api_key")  # Custom field name

Tries credential_fields[field] first, falls back to credential_value.

Accessing multiple credential fields:

if request.credential_fields:
    email = request.credential_fields.get("email", "")
    api_key = request.credential_fields.get("api_key", "")

OAuth token refresh:

If request.credential_refresh is set, the platform handles automatic token refresh. HttpExecutorBase calls it automatically on 401 responses. For custom executors:

if response.status_code == 401 and request.credential_refresh:
    new_fields = await request.credential_refresh()
    if new_fields:
        request.credential_fields = new_fields
        # Retry with new credentials
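The elided retry step can be written as a generic single-retry helper. This is a sketch with stand-in callables (do_request and refresh are hypothetical), not the HttpExecutorBase implementation:

```python
async def call_with_refresh(do_request, refresh=None):
    """On a 401, refresh credentials once, then retry the request once.

    `do_request` returns an object with `.status_code`; `refresh` returns
    new credential fields (truthy) on success or None on failure.
    """
    response = await do_request()
    if getattr(response, "status_code", None) == 401 and refresh is not None:
        new_fields = await refresh()
        if new_fields:
            # The caller is expected to re-read credential fields before this call
            response = await do_request()
    return response
```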

tool_name Routing

When multiple tools share the same operation, use request.tool_name to distinguish them:

# In monstrum.yaml
tools:
  - name: github_add_labels
    operation: issue.label.write    # Same operation
    ...
  - name: github_remove_labels
    operation: issue.label.write    # Same operation
    ...
# In executor.py
OPERATION_HANDLERS = {
    "issue.label.write": "_handle_label_write",
}

async def _handle_label_write(self, request: ExecuteRequest) -> ExecuteResult:
    if request.tool_name == "github_remove_labels":
        # Remove logic
        ...
    else:
        # Add logic (default)
        ...

Concurrency and Statelessness

Executors are singletons. The platform creates one instance of your executor class at plugin load time, and that single instance handles all concurrent requests for the lifetime of the process. This has important implications:

Do not store request state on self. Every request arrives as an isolated ExecuteRequest object. All per-call data — credentials, parameters, scope, resource config — lives on the request, not the executor instance.

# WRONG — shared state across concurrent requests
class BadExecutor(HttpExecutorBase):
    async def _handle_read(self, request: ExecuteRequest) -> ExecuteResult:
        self.current_token = request.credential_fields["access_token"]  # Race condition!
        data = await self._http_get(request, "/data")
        return ExecuteResult.success_result(data)

# RIGHT — all state is request-scoped
class GoodExecutor(HttpExecutorBase):
    async def _handle_read(self, request: ExecuteRequest) -> ExecuteResult:
        data = await self._http_get(request, "/data")  # credentials flow through request
        return ExecuteResult.success_result(data)
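The race in the WRONG version is easy to demonstrate with plain asyncio — a toy model with no SDK involved. The await inside the handler is where another request's write sneaks in:

```python
import asyncio

class RacyExecutor:
    async def handle(self, token: str) -> str:
        self.current_token = token   # shared instance attribute — the bug
        await asyncio.sleep(0)       # yield point; stands in for an HTTP call
        return self.current_token    # may now hold another request's token

async def main() -> list[str]:
    ex = RacyExecutor()  # one instance, like a real executor singleton
    return await asyncio.gather(ex.handle("alice-token"), ex.handle("bob-token"))
```

Running `asyncio.run(main())` returns `["bob-token", "bob-token"]`: the first request observed the second request's token.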

Instance state is for initialization only. Constants, configuration, and reusable clients (like an httpx connection pool) can live on self. Per-request data must not.

Handlers are async. All handler methods use async def and should use await for I/O. Never use blocking calls (requests.get, time.sleep) — they block the event loop and stall all concurrent requests.

SDK Functions

Expose your executor’s capabilities as standalone functions for programmatic use (not just LLM tool calls):

def get_sdk_functions(self) -> dict[str, Any]:
    return {
        "search": self._sdk_search,
        "create": self._sdk_create,
    }

async def _sdk_search(self, *, query: str, max_results: int = 10) -> dict:
    """Search items. Callable via platform.my_plugin.search(...)"""
    # Implementation...
    return {"results": [...]}

These functions become accessible via the Platform SDK:

from monstrum_sdk import platform

results = await platform.my_plugin.search(query="bug", max_results=5)

Important: SDK functions bypass the Guardian permission pipeline. They are direct calls to your executor methods, without scope checks or audit logging. See PluginClient vs Platform SDK for when to use each.


Internationalization (i18n)

Create JSON files in the locales/ directory (configurable via locales_dir in manifest):

locales/en-US.json:

{
  "description": "Jira integration plugin",
  "tools.jira_list_issues.description": "List issues from a Jira project.",
  "tools.jira_create_issue.description": "Create a new Jira issue.",
  "scope_dimensions.projects.error_template": "Project {value} is not authorized",
  "auth_methods.0.label": "API Token",
  "auth_methods.0.description": "Authenticate with email + API token"
}

locales/zh-CN.json:

{
  "description": "Jira 集成插件",
  "tools.jira_list_issues.description": "列出 Jira 项目的 Issue。",
  "tools.jira_create_issue.description": "创建新的 Jira Issue。",
  "scope_dimensions.projects.error_template": "项目 {value} 不在授权范围内",
  "auth_methods.0.label": "API 令牌",
  "auth_methods.0.description": "使用邮箱 + API 令牌认证"
}

Key naming convention:

| Key pattern | Overrides |
| --- | --- |
| description | Plugin description |
| tools.{tool_name}.description | Tool description |
| scope_dimensions.{key}.error_template | Scope error message |
| auth_methods.{index}.label | Auth method display name |
| auth_methods.{index}.description | Auth method help text |

The platform applies translations based on the user’s language preference.

Note on auth_methods keys: Auth method translations use index-based keys (auth_methods.0.label, auth_methods.1.label) rather than method-name-based keys. This means reordering auth methods in the manifest will break translation mappings. Keep the order of auth_methods stable once translations are published, or update the locale files to match.


PluginClient — Cross-Plugin Composition

PluginClient calls tools through the full permission pipeline (ToolExecutor → Guardian → Executor → Auditor). This is the correct way for plugins, workflows, and skills to call other plugins’ tools when permission enforcement and audit logging are required.

from monstrum_sdk import get_plugin_client, PluginError

# Create a client bound to a specific bot and task
github = get_plugin_client(
    "github",
    bot_id="bot-123",
    task_id="task-456",
    workspace_id="ws-789",
)

# Call tools by short name (prefix added automatically)
try:
    issues = await github.list_issues(repo="myorg/myrepo", state="open")
    # Internally calls tool "github_list_issues"
    # Routes through: Guardian scope check → GitHubExecutor → Auditor

    await github.create_issue(
        repo="myorg/myrepo",
        title="Bug: login fails",
        body="Steps to reproduce...",
    )
except PluginError as e:
    print(f"Failed: {e.message} (status: {e.status})")

PluginClient vs Platform SDK: Governance Boundary

This distinction is critical to understand:

| Aspect | PluginClient | Platform SDK |
| --- | --- | --- |
| Permission enforcement | Full Guardian check | None |
| Audit logging | Yes | No |
| Credential resolution | Bot-specific bindings | Explicit or none |
| Scope constraints | Evaluated and enforced | Bypassed |
| Use case | Cross-plugin tool calls in Bot context | Direct executor access for infrastructure code |
| Import | get_plugin_client() | platform.{type}.{fn}() |

Rule of thumb: If the call originates from a Bot’s execution context (a tool handler, a workflow step, a skill), use PluginClient. The Bot’s permissions, scope constraints, and delegate limits all apply. If the call is platform infrastructure code running outside any Bot context (a scheduler, a system maintenance task), use Platform SDK.

Using Platform SDK when you should use PluginClient creates a governance hole: the call bypasses all permission checks, scope constraints, and audit logging. In a multi-tenant environment, this means a Bot could access resources it’s not authorized for.


Platform SDK

The platform singleton provides access to built-in executor capabilities and cross-cutting infrastructure. These calls bypass the Guardian permission pipeline — they call executor methods directly, without scope checks or audit logging.

from monstrum_sdk import platform

platform.oauth — OAuth Token Management

# List OAuth providers configured for a resource type
providers = await platform.oauth.list_providers(
    resource_type_id="github",
    workspace_id="ws-123",
)
# Returns: [{"id", "name", "resource_type_id", "client_id", "is_active"}, ...]

# Get current valid OAuth token
token_info = await platform.oauth.get_token(credential_id="cred-456")
# Returns: {"access_token", "token_type", "expires_at", "scope"}

platform.events — Event System

The event system allows plugins to emit events and subscribe to platform-wide events.

Emit a custom event:

result = await platform.events.emit(
    "deploy.completed",                    # Event name
    data={"version": "2.1.0", "env": "prod"},  # Payload
    workspace_id="ws-123",
    bot_id="bot-456",
)
# Returns: {"event_id": "...", "event_type": "custom.deploy.completed"}

Event name rules: alphanumeric characters, dots, underscores, colons, and hyphens; maximum 128 characters. The platform automatically prefixes event names with custom., so "deploy.completed" is stored as "custom.deploy.completed".
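The naming rule can be captured with a simple validator (an illustrative regex, not the SDK's actual check):

```python
import re

_EVENT_NAME = re.compile(r"^[A-Za-z0-9._:\-]{1,128}$")

def is_valid_event_name(name: str) -> bool:
    """Alphanumeric, dots, underscores, colons, hyphens; 1-128 characters."""
    return bool(_EVENT_NAME.fullmatch(name))
```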

Subscribe a Bot to events:

sub = await platform.events.subscribe(
    "task.*",                              # fnmatch pattern
    bot_id="bot-456",
    workspace_id="ws-123",
    instruction="A task event occurred: {event_type}. Data: {data}",
)
# Returns: {"subscription_id": "...", "pattern": "task.*"}

The instruction field is a template sent to the Bot when a matching event fires. Supported placeholders: {event_type}, {source_type}, {source_id}, {data}, {metadata}.
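The exact rendering is platform-internal, but the placeholders behave like standard str.format substitution; an illustrative preview of what the Bot would receive (the payload values here are hypothetical):

```python
instruction = "A task event occurred: {event_type}. Data: {data}"

# Fill the placeholders the way the platform plausibly does when an event fires
rendered = instruction.format(
    event_type="task.completed",
    data={"task_id": "task-456", "status": "ok"},
)
```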

Unsubscribe:

result = await platform.events.unsubscribe(
    "sub-789",
    bot_id="bot-456",  # Ownership verification
)
# Returns: {"subscription_id": "sub-789", "removed": true}

List subscriptions:

subs = await platform.events.get_subscriptions(bot_id="bot-456")
# Returns: [{"subscription_id", "pattern", "instruction", "active", "created_at"}, ...]

Event→Workflow Triggers:

Events can also trigger workflows directly (without going through a Bot). Use the Workflow Trigger REST API:

POST   /api/workflows/{workflow_id}/triggers              — Create trigger (event_pattern + instruction)
GET    /api/workflows/{workflow_id}/triggers              — List triggers
DELETE /api/workflows/{workflow_id}/triggers/{trigger_id} — Delete trigger

When a matching event fires, the platform automatically executes the linked workflow with the event data as input. Triggers are persisted in the workflow_triggers table and loaded into EventDispatcher at startup.
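A trigger-creation body presumably pairs the two fields named above; the exact schema may include more (illustrative fragment):

```json
{
  "event_pattern": "deploy.*",
  "instruction": "A deployment event {event_type} fired. Data: {data}"
}
```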

Built-in event types:

| Pattern | Source | Description |
| --- | --- | --- |
| task.completed | AgentRuntime | Task finished successfully |
| task.failed | AgentRuntime | Task failed |
| task.cancelled | AgentRuntime | Task was cancelled |
| workflow.completed | WorkflowExecutor | Workflow finished |
| workflow.failed | WorkflowExecutor | Workflow failed |
| schedule.fired | SchedulerService | Scheduled event triggered |
| session.created | SessionManager | New session started |
| session.expired | SessionManager | Session timed out |
| custom.* | Bots via emit | Custom events |
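Subscription patterns use fnmatch semantics (as noted for subscribe()), so matching against these built-in types can be previewed locally:

```python
from fnmatch import fnmatch

def matches(event_type: str, pattern: str) -> bool:
    """Glob-style matching, as described for event subscription patterns."""
    return fnmatch(event_type, pattern)
```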

Built-in Executor Namespaces

Access built-in executor capabilities directly. Remember: these bypass Guardian and are not audit-logged.

# SSH
result = await platform.ssh.run(
    host="prod-01",
    command="df -h",
    credential="ssh-key-content",
    timeout=30,
)

# MCP (Model Context Protocol — HTTP transport only)
tools = await platform.mcp.list_tools(
    server="calculator",
    url="https://mcp.example.com/sse",
)
result = await platform.mcp.call_tool(
    server="calculator",
    tool="add",
    arguments={"a": 1, "b": 2},
)

# Bot (cross-bot invocation)
task = await platform.bot.execute_task(
    target_bot_id="bot-789",
    instruction="Summarize today's issues",
    params={"project": "PROJ"},
)
answer = await platform.bot.query(
    target_bot_id="bot-789",
    question="What is the current sprint velocity?",
)
status = await platform.bot.status(target_bot_id="bot-789")

# Web
results = await platform.web.search(query="Monstrum docs", max_results=5)
page = await platform.web.fetch(url="https://example.com", extract_mode="markdown")

# Web3 (EVM blockchain)
balance = await platform.web3.get_balance(
    resource_id="res-123",
    config={"rpc_url": "https://mainnet.infura.io/v3/KEY", "chain_id": 1},
    address="0x742d35Cc6634C0532925a3b844Bc9e7595f2bD18",
)
tx = await platform.web3.transfer(
    resource_id="res-123",
    config={"rpc_url": "https://mainnet.infura.io/v3/KEY"},
    credential_fields={"private_key": "0x..."},
    to="0x...",
    value="0.1",
)

Plugin Trust & Security Model

Understanding the trust boundary between plugins and the platform is important for both plugin developers and platform administrators.

Trust Assumptions

Plugins run in-process with the platform. There is no sandbox, no code signing, and no runtime restriction on what Python code a plugin can execute. The platform trusts that:

  1. Plugins come from trusted sources. The administrator controls what gets installed in the plugins/ directory or imported via .mst packages.
  2. Plugins follow the ExecutorBase contract. The loader validates that the executor class is a subclass of ExecutorBase, but it does not restrict what code runs inside handler methods.
  3. Plugins do not tamper with platform internals. A plugin can import and call platform services directly, but doing so bypasses all security guarantees.

What the Platform Enforces

Despite running in-process, the platform provides these guarantees around your plugin code:

  • Credential isolation: Your handlers receive credentials via ExecuteRequest.credential_fields. The platform injects them; the Bot and LLM never see them. Credentials are encrypted at rest.
  • Scope enforcement: Guardian evaluates scope_dimensions before your handler is called. If the scope check fails, your handler never executes.
  • Audit trail: Every tool call — including failures — is logged by the Auditor with request ID, Bot ID, operation, parameters, and result status.
  • Built-in type protection: The platform prevents plugins from overwriting built-in types (ssh, mcp, bot).

What the Platform Does Not Enforce

  • No code sandboxing: Plugin code has full access to the Python runtime, filesystem, and network.
  • No import restrictions: Plugins can import any Python module, including platform internals.
  • No runtime resource limits: No CPU, memory, or network quotas on plugin execution.
  • No code review or signing: The .mst import validates manifest structure, not code safety.

Implications for Plugin Developers

  • Your executor runs in the same process as every other plugin and the platform itself. A crash in your handler can affect the entire platform.
  • Do not access platform databases or internal state directly. Use PluginClient or Platform SDK.
  • Do not spawn background threads or long-running processes. Executors handle individual requests; the platform manages lifecycle.
  • Treat credential fields as sensitive — do not log them, cache them, or transmit them outside the intended API call.

Implications for Administrators

  • Only install plugins from sources you trust. Review the executor code before deployment.
  • Use scope constraints to limit what any Bot can do through a plugin, regardless of what the plugin’s code allows.
  • Monitor the audit log for unexpected tool call patterns that might indicate a misbehaving plugin.

Testing

Unit Testing Your Executor

import pytest
from monstrum_sdk import ExecuteRequest, ExecuteResult


def _make_request(operation, params=None, **kwargs):
    return ExecuteRequest(
        request_id="test-req",
        bot_id="test-bot",
        task_id="test-task",
        operation=operation,
        params=params or {},
        **kwargs,
    )


class TestMyExecutor:
    @pytest.fixture
    def executor(self):
        from plugins.my_plugin.executor import MyExecutor
        return MyExecutor()

    async def test_read_success(self, executor, httpx_mock):
        httpx_mock.add_response(
            url="https://api.example.com/data",
            json={"items": [1, 2, 3]},
        )
        request = _make_request(
            "data.read",
            params={"query": "test"},
            credential_fields={"access_token": "test-token"},
        )
        result = await executor.execute(request)
        assert result.success
        assert result.data["items"] == [1, 2, 3]

    async def test_unknown_operation(self, executor):
        request = _make_request("invalid.op")
        result = await executor.execute(request)
        assert not result.success
        assert "Unknown operation" in result.error

    async def test_scope_validation(self, executor):
        error = executor.validate_scope(
            "data.read",
            {"project": "SECRET"},
            {"projects": ["PUBLIC-*"]},
        )
        assert error is not None

Testing with the Platform

from unittest.mock import AsyncMock, MagicMock, patch


async def test_plugin_client_integration():
    mock_tool_executor = AsyncMock()
    mock_tool_executor.execute.return_value = MagicMock(
        success=True,
        result={"issues": []},
    )

    with patch("services.runner.state.get_runner_state") as mock_state:
        mock_state.return_value.tool_executor = mock_tool_executor
        from monstrum_sdk import get_plugin_client

        client = get_plugin_client("github", bot_id="b1", task_id="t1")
        result = await client.list_issues(repo="org/repo")
        assert result == {"issues": []}

Running Tests

# Run your plugin's tests
pytest tests/plugins/my_plugin/ -v

# Run with the full test suite to catch regressions
pytest tests/ -x -q

# Lint
ruff check plugins/my_plugin/

Common Pitfalls

1. Storing request state on self

Executors are singletons — the same instance handles all concurrent requests. Storing per-request data on self causes race conditions:

# WRONG
self.current_user = request.credential_fields["email"]
data = await self._http_get(request, "/data")  # another request overwrites self.current_user

# RIGHT — use request-scoped data
data = await self._http_get(request, "/data")  # credentials flow through the request object

2. Using blocking I/O

All handlers are async. Blocking calls stall the entire event loop:

# WRONG — blocks the event loop
import requests
response = requests.get("https://api.example.com/data")

# RIGHT — use async HTTP
data = await self._http_get(request, "/data")

3. Implementing permission checks in the executor

Scope checking belongs in scope_dimensions, not in handler code:

# WRONG — manual permission check in handler
async def _handle_read(self, request: ExecuteRequest) -> ExecuteResult:
    repo = request.params["repo"]
    if not self._is_repo_allowed(repo, request.scope):  # Reinventing Guardian
        return ExecuteResult.error_result("Not allowed")
    ...

# RIGHT — declare in manifest, Guardian handles it
# scope_dimensions:
#   - key: repos
#     param_paths: [repo]
#     match_mode: pattern

4. Using scope_violation() for non-scope errors

scope_violation() signals an authorization problem. Don’t use it for API errors:

# WRONG — API 404 is not a scope violation
if response.status_code == 404:
    return ExecuteResult.scope_violation("Repository not found")

# RIGHT — it's an execution error
if response.status_code == 404:
    return ExecuteResult.error_result("Repository not found")

5. Forgetting resource_type must match manifest id

The resource_type class variable in your executor must exactly match resource_type.id in monstrum.yaml:

# monstrum.yaml
resource_type:
  id: my-plugin  # This string...
# executor.py
class MyExecutor(HttpExecutorBase):
    resource_type = "my-plugin"  # ...must match this string

6. Reordering auth_methods after publishing translations

Translation keys for auth methods are index-based (auth_methods.0.label, auth_methods.1.label). Reordering the auth_methods array in your manifest breaks the translation mapping. If you need to reorder, update the locale files to match.

7. Using Platform SDK when you need governance

If your code runs in a Bot’s context and calls another plugin, use PluginClient — not Platform SDK. Platform SDK bypasses all permission checks and audit logging:

# WRONG — bypasses governance in a Bot handler
async def _handle_deploy(self, request: ExecuteRequest) -> ExecuteResult:
    await platform.ssh.run(host="prod", command="deploy.sh", ...)  # No scope check!

# RIGHT — uses PluginClient for governed access
async def _handle_deploy(self, request: ExecuteRequest) -> ExecuteResult:
    ssh = get_plugin_client("ssh", bot_id=request.bot_id, task_id=request.task_id)
    await ssh.run(host="prod", command="deploy.sh")  # Guardian enforces scope

8. Logging credential values

Never log credential fields. They contain secrets (API keys, tokens, passwords):

# WRONG
logger.info(f"Calling API with token: {request.credential_fields}")

# RIGHT
logger.info(f"Calling API for bot={request.bot_id}, operation={request.operation}")

Packaging and Distribution

.mst File Format

Plugins can be packaged as .mst files (ZIP format) for distribution:

cd plugins/
zip -r my_plugin.mst my_plugin/
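Based on the files referenced throughout this guide, a minimal package layout looks like this (locales/ is optional):

```
my_plugin/
├── monstrum.yaml     # manifest: resource_type, tools, scope_dimensions, executor
├── executor.py       # executor class referenced by executor.module / class_name
└── locales/          # optional translations
    ├── en-US.json
    └── zh-CN.json
```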

Install via CLI:

monstrum plugin install my_plugin
monstrum plugin install my_plugin@1.0.0  # specific version

Plugin Lifecycle

# Install
monstrum plugin install <package>[@version]

# Uninstall
monstrum plugin uninstall <package>

# List installed plugins
monstrum plugin list

# Search plugins
monstrum plugin search <query>

# View plugin details
monstrum plugin info <package>

# Update plugins
monstrum plugin update [package]

Complete Reference: GitHub Plugin

The GitHub plugin is the canonical reference implementation. Study it to understand best practices.

File: plugins/github/monstrum.yaml

name: github
version: 1.0.0
description: GitHub integration plugin — issues, comments, labels, and repository info
author: Monstrum
license: MIT
tags: [scm, github, issues]
repository: https://github.com/MonstrumAI/monstrum

resource_type:
  id: github
  name: GitHub
  mode: plugin
  tool_discovery: static
  description: "GitHub code hosting platform"
  icon: github          # Semantic name (mapped to Ant Design icon) or filename (e.g. icon.svg)
  auth_flow: oauth

  # Credential fields (encrypted, never exposed to Bot)
  credential_schema:
    - field: access_token
      type: secret
      required: true
      description: "GitHub Access Token"

  # Resource configuration (plain text)
  config_schema:
    - field: api_base
      type: url
      required: false
      default: "https://api.github.com"
      description: "GitHub API base URL (customize for GitHub Enterprise)"

  # Authentication methods (frontend auto-renders UI)
  auth_methods:
    - method: oauth2_auth_code
      label: OAuth Login
      description: "Authorize via GitHub OAuth"
      credential_schema:
        - field: access_token
          type: secret
          required: true
          description: "OAuth Access Token (obtained automatically)"
      oauth_config:
        authorization_url: "https://github.com/login/oauth/authorize"
        token_url: "https://github.com/login/oauth/access_token"
        scopes: [repo, "read:org"]
    - method: token
      label: Personal Access Token
      description: "Configure manually with a GitHub PAT"
      credential_schema:
        - field: access_token
          type: secret
          required: true
          description: "GitHub Personal Access Token"

  # Tool definitions (visible to LLM)
  tools:
    - name: github_list_issues
      description: "List issues from a GitHub repository."
      operation: issue.read
      input_schema:
        type: object
        properties:
          repo: { type: string, description: "owner/repo format" }
          state: { type: string, enum: [open, closed, all], default: open }
          labels: { type: array, items: { type: string } }
          since: { type: string, description: "ISO 8601 timestamp" }
          per_page: { type: integer, default: 30 }
        required: [repo]

    - name: github_create_issue
      description: "Create a new issue in a GitHub repository."
      operation: issue.write
      input_schema:
        type: object
        properties:
          repo: { type: string }
          title: { type: string }
          body: { type: string }
          labels: { type: array, items: { type: string } }
          assignees: { type: array, items: { type: string } }
        required: [repo, title]

    - name: github_update_issue
      description: "Update an existing GitHub issue."
      operation: issue.write
      input_schema:
        type: object
        properties:
          repo: { type: string }
          issue_number: { type: integer }
          title: { type: string }
          body: { type: string }
          state: { type: string, enum: [open, closed] }
          labels: { type: array, items: { type: string } }
          assignees: { type: array, items: { type: string } }
        required: [repo, issue_number]

    - name: github_add_labels
      description: "Add labels to a GitHub issue."
      operation: issue.label.write
      input_schema:
        type: object
        properties:
          repo: { type: string }
          issue_number: { type: integer }
          labels: { type: array, items: { type: string } }
        required: [repo, issue_number, labels]

    - name: github_remove_labels
      description: "Remove labels from a GitHub issue."
      operation: issue.label.write
      input_schema:
        type: object
        properties:
          repo: { type: string }
          issue_number: { type: integer }
          labels: { type: array, items: { type: string } }
        required: [repo, issue_number, labels]

    - name: github_add_comment
      description: "Add a comment to a GitHub issue."
      operation: issue.comment.write
      input_schema:
        type: object
        properties:
          repo: { type: string }
          issue_number: { type: integer }
          body: { type: string }
        required: [repo, issue_number, body]

    - name: github_list_comments
      description: "List comments on a GitHub issue."
      operation: issue.comment.read
      input_schema:
        type: object
        properties:
          repo: { type: string }
          issue_number: { type: integer }
        required: [repo, issue_number]

    - name: github_get_repo
      description: "Get information about a GitHub repository."
      operation: repo.read
      input_schema:
        type: object
        properties:
          repo: { type: string }
        required: [repo]

  # Permission dimensions (Guardian auto-enforces)
  scope_dimensions:
    - key: repos
      param_paths: [repo, owner_repo]
      match_mode: pattern
      error_template: "Repository {value} is not authorized"

executor:
  module: executor
  class_name: GitHubExecutor

File: plugins/github/executor.py

from __future__ import annotations

import logging

import httpx

from monstrum_sdk import ExecuteRequest, ExecuteResult, HttpExecutorBase

logger = logging.getLogger(__name__)


class GitHubExecutor(HttpExecutorBase):
    resource_type = "github"
    default_api_base = "https://api.github.com"
    default_headers = {
        "Accept": "application/vnd.github+json",
        "X-GitHub-Api-Version": "2022-11-28",
    }
    supported_operations = [
        "repo.read",
        "issue.read",
        "issue.write",
        "issue.comment.read",
        "issue.comment.write",
        "issue.label.write",
    ]

    OPERATION_HANDLERS = {
        "repo.read": "_handle_repo_read",
        "issue.read": "_handle_issue_read",
        "issue.write": "_handle_issue_write",
        "issue.comment.read": "_handle_comment_read",
        "issue.comment.write": "_handle_comment_write",
        "issue.label.write": "_handle_label_write",
    }

    async def handle_execute_error(
        self, request: ExecuteRequest, error: Exception
    ) -> ExecuteResult:
        if isinstance(error, httpx.HTTPStatusError):
            logger.error(f"GitHub API error: {error}")
            return ExecuteResult.error_result(
                f"GitHub API error: {error.response.status_code} "
                f"- {error.response.text[:200]}"
            )
        if isinstance(error, httpx.RequestError):
            logger.error(f"GitHub request error: {error}")
            return ExecuteResult.error_result(
                f"GitHub request error: {str(error)}"
            )
        logger.exception(f"GitHub execution error: {error}")
        return ExecuteResult.error_result(f"Execution error: {str(error)}")

    # ── Helpers ──

    @staticmethod
    def _parse_repo(params: dict) -> tuple[str, str] | None:
        repo = params.get("repo", "")
        if not repo or "/" not in repo:
            return None
        return tuple(repo.split("/", 1))

    # ── Handlers ──

    async def _handle_repo_read(self, request: ExecuteRequest) -> ExecuteResult:
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format, expected 'owner/repo'")
        owner, name = parsed
        data = await self._http_get(request, f"/repos/{owner}/{name}")
        return ExecuteResult.success_result(data)

    async def _handle_issue_read(self, request: ExecuteRequest) -> ExecuteResult:
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format")
        owner, name = parsed
        issue_number = request.params.get("issue_number")

        if issue_number:
            data = await self._http_get(request, f"/repos/{owner}/{name}/issues/{issue_number}")
            return ExecuteResult.success_result(data)

        params = {}
        if state := request.params.get("state"):
            params["state"] = state
        if labels := request.params.get("labels"):
            params["labels"] = ",".join(labels) if isinstance(labels, list) else labels
        if since := request.params.get("since"):
            params["since"] = since
        if per_page := request.params.get("per_page"):
            params["per_page"] = per_page

        data = await self._http_get(request, f"/repos/{owner}/{name}/issues", params=params or None)
        return ExecuteResult.success_result(data)

    async def _handle_issue_write(self, request: ExecuteRequest) -> ExecuteResult:
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format")
        owner, name = parsed
        issue_number = request.params.get("issue_number")

        body = {}
        for key in ("title", "body", "labels", "assignees", "state"):
            if (val := request.params.get(key)) is not None:
                body[key] = val

        if issue_number:
            data = await self._http_patch(
                request, f"/repos/{owner}/{name}/issues/{issue_number}", json=body,
            )
        else:
            data = await self._http_post(
                request, f"/repos/{owner}/{name}/issues", json=body,
            )
        return ExecuteResult.success_result(data)

    async def _handle_comment_read(self, request: ExecuteRequest) -> ExecuteResult:
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format")
        issue_number = request.params.get("issue_number")
        if not issue_number:
            return ExecuteResult.error_result("issue_number is required")
        owner, name = parsed
        data = await self._http_get(
            request, f"/repos/{owner}/{name}/issues/{issue_number}/comments",
        )
        return ExecuteResult.success_result(data)

    async def _handle_comment_write(self, request: ExecuteRequest) -> ExecuteResult:
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format")
        issue_number = request.params.get("issue_number")
        body = request.params.get("body")
        if not issue_number:
            return ExecuteResult.error_result("issue_number is required")
        if not body:
            return ExecuteResult.error_result("Comment body is required")
        owner, name = parsed
        data = await self._http_post(
            request, f"/repos/{owner}/{name}/issues/{issue_number}/comments",
            json={"body": body},
        )
        return ExecuteResult.success_result(data)

    async def _handle_label_write(self, request: ExecuteRequest) -> ExecuteResult:
        """Add or remove labels. Distinguishes by tool_name."""
        parsed = self._parse_repo(request.params)
        if not parsed:
            return ExecuteResult.error_result("Invalid repo format")
        issue_number = request.params.get("issue_number")
        labels = request.params.get("labels", [])
        if not issue_number:
            return ExecuteResult.error_result("issue_number is required")
        if not labels:
            return ExecuteResult.error_result("labels are required")
        owner, name = parsed
        is_remove = request.tool_name == "github_remove_labels"

        if is_remove:
            results = []
            from urllib.parse import quote  # label names may contain spaces or slashes
            for label in labels:
                resp = await self._http_delete(
                    request,
                    f"/repos/{owner}/{name}/issues/{issue_number}/labels/{quote(label, safe='')}",
                )
                results.append({"label": label, "removed": resp.status_code == 200})
            return ExecuteResult.success_result(results)

        data = await self._http_post(
            request, f"/repos/{owner}/{name}/issues/{issue_number}/labels",
            json={"labels": labels},
        )
        return ExecuteResult.success_result(data)

API Reference

monstrum_sdk Exports

from monstrum_sdk import (
    # Executor bases
    ExecutorBase,         # Abstract base class for all executors
    HttpExecutorBase,     # Base class for HTTP API executors
    Web3ExecutorBase,     # Base class for EVM blockchain executors
    ExecuteRequest,       # Request dataclass
    ExecuteResult,        # Result dataclass
    ExecuteStatus,        # Enum: SUCCESS, ERROR, SCOPE_VIOLATION

    # Resource models
    ToolDef,              # Tool definition
    ScopeDimension,       # Permission dimension
    FieldDef,             # Field definition (credential/config)
    AuthMethod,           # Enum: OAUTH2_AUTH_CODE, API_KEY, TOKEN, ...
    AuthMethodDef,        # Auth method declaration
    OAuthProviderConfig,  # OAuth endpoint configuration

    # Plugin manifest
    PluginManifest,       # Complete plugin manifest
    PluginResourceType,   # ResourceType within manifest
    PluginExecutorDef,    # Executor loading config

    # PluginClient (tool-level invocation through Guardian)
    PluginClient,         # High-level tool caller
    PluginError,          # Tool call failure exception
    get_plugin_client,    # Factory function

    # Platform SDK (built-in executor capabilities)
    Platform,             # Capability entry point
    PlatformError,        # Capability failure exception
    platform,             # Global singleton
)

ExecutorBase Methods

| Method | Signature | Description |
| --- | --- | --- |
| `supports_operation` | `(operation: str) → bool` | Check if operation is supported (glob wildcards ok) |
| `validate_scope` | `(operation, params, scope) → str \| None` | Custom scope validation; return error or None |
| `_get_token` | `(request, field="access_token") → str` | Get token from credentials |
| `get_sdk_functions` | `() → dict[str, Callable]` | Expose SDK functions for Platform SDK |
| `execute` | `(request) → ExecuteResult` | Main entry point (Template Method) |
| `pre_execute` | `(request) → ExecuteResult \| None` | Hook before dispatch |
| `handle_execute_error` | `(request, error) → ExecuteResult` | Hook for error handling |
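`supports_operation` matches the requested operation against the patterns an executor declares, with glob wildcards allowed. A minimal stand-alone sketch of those semantics, using stdlib `fnmatch` (the class and its `operations` list here are hypothetical illustrations, not SDK code):

```python
from fnmatch import fnmatch

class IssueExecutorSketch:
    """Hypothetical stub mirroring supports_operation glob semantics."""
    operations = ["issue.*", "repo.read"]  # patterns this executor claims to handle

    def supports_operation(self, operation: str) -> bool:
        # glob wildcards in declared operations match concrete operation names
        return any(fnmatch(operation, pattern) for pattern in self.operations)
```

With this stub, `issue.write` and `repo.read` are supported while `pr.read` is not.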

HttpExecutorBase Methods

| Method | Signature | Description |
| --- | --- | --- |
| `_build_auth_headers` | `(request) → dict[str, str]` | Build headers with auth |
| `_get_api_base` | `(request) → str` | Resolve API base URL |
| `_http_get` | `(request, path, params=None) → Any` | HTTP GET → JSON |
| `_http_post` | `(request, path, json=None) → Any` | HTTP POST → JSON |
| `_http_patch` | `(request, path, json=None) → Any` | HTTP PATCH → JSON |
| `_http_delete` | `(request, path) → httpx.Response` | HTTP DELETE → Response |
| `_http_request` | `(request, method, path, ...) → Any` | Low-level HTTP (supports `raw_response`) |
| `_paginate` | `(request, path, params=None, per_page=30) → list` | Paginate via Link headers |

Web3ExecutorBase Methods

| Method | Signature | Description |
| --- | --- | --- |
| `_w3` | `(request) → Web3` | Get/create cached Web3 instance from resource config |
| `_get_account` | `(request) → Account` | Build account from credential `private_key` |
| `_get_balance` | `(request, address, token_address?) → dict` | Native or ERC20 balance |
| `_transfer` | `(request, to, value) → dict` | Native token transfer |
| `_call_contract` | `(request, contract, abi, function, args?) → dict` | Read-only contract call |
| `_send_transaction` | `(request, contract, abi, function, args?, value?) → dict` | Write contract call |
| `_get_transaction` | `(request, tx_hash) → dict` | Transaction details + receipt |
| `_read_events` | `(request, contract, abi, event, from_block, to_block) → dict` | Event logs |
| `_estimate_gas` | `(request, to, value?, data?) → dict` | Gas estimation |
| `_wait_for_receipt` | `(request, tx_hash, timeout?) → dict` | Wait for tx confirmation |
| `_check_gas_price` | `(request, w3) → None` | Check gas price against max limit |
| `_native_symbol` | `(request) → str` | Get native token symbol from config |
| `_tx_link` | `(request, tx_hash) → str \| None` | Build block explorer URL |

Platform SDK Namespaces

| Namespace | Methods |
| --- | --- |
| `platform.oauth` | `list_providers(resource_type_id, workspace_id)`, `get_token(credential_id)` |
| `platform.events` | `emit(name, data, ...)`, `subscribe(pattern, bot_id, ...)`, `unsubscribe(sub_id, ...)`, `get_subscriptions(bot_id)` |
| `platform.ssh` | `run(host, command, credential, ...)` |
| `platform.mcp` | `list_tools(server, ...)`, `call_tool(server, tool, arguments, ...)` |
| `platform.bot` | `execute_task(...)`, `query(...)`, `status(...)` |
| `platform.web` | `search(query, ...)`, `fetch(url, ...)` |
| `platform.web3` | `get_balance(...)`, `transfer(...)`, `call_contract(...)`, `send_transaction(...)`, `get_transaction(...)`, `read_events(...)` |
| `platform.{your_plugin}` | Functions from your `get_sdk_functions()` |
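The `platform.{your_plugin}` namespace is populated from your executor's `get_sdk_functions()`. The wiring plausibly works like this sketch (the `register_sdk_functions` helper and the stand-in platform object are hypothetical illustrations, not platform internals):

```python
from types import SimpleNamespace

def register_sdk_functions(platform_obj, plugin_name: str, funcs: dict) -> None:
    """Attach a plugin's SDK functions as platform.<plugin_name>.<func>."""
    setattr(platform_obj, plugin_name, SimpleNamespace(**funcs))

# hypothetical usage with a stand-in platform object
platform_stub = SimpleNamespace()
register_sdk_functions(
    platform_stub,
    "github",
    {"parse_repo": lambda repo: tuple(repo.split("/", 1))},
)
```

After registration, other executors could call `platform_stub.github.parse_repo("owner/name")` just like any built-in namespace.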

Pattern Matching Utilities

The shared.utils.matching module provides standardized pattern matching functions used throughout the platform. Plugin developers can use these for custom scope validation:

from shared.utils.matching import match_glob, match_any, match_path, match_any_path

| Function | Signature | Description |
| --- | --- | --- |
| `match_glob` | `(value, pattern) → bool` | fnmatch glob matching |
| `match_any` | `(value, patterns, *, allow_regex=False) → bool` | Match against any pattern |
| `match_path` | `(path, pattern) → bool` | Filesystem path matching (`**`, `/*`) |
| `match_any_path` | `(path, patterns) → bool` | Match path against any pattern |

Examples:

match_glob("issue.read", "issue.*")    # True
match_glob("anything", "*")            # True

match_any("issue.read", ["issue.*", "pr.*"])          # True
match_any("ls -la", ["^ls.*"], allow_regex=True)      # True

match_path("/home/user/docs/file.txt", "/home/user/**")  # True
match_path("/tmp/file.txt", "/tmp/*")                     # True

match_any_path("/tmp/file", ["/home/**", "/tmp/*"])    # True