# run-rrm-monitoring-level
Railway Reference Model monitoring level classifier. Maps a monitoring system to ETCS-equivalent levels: L1 (spot/periodic), L2 (continuous/streaming), L3 (moving block/predictive). Higher levels subsume lower. Level determines capacity and detection latency.
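The mapping described above can be sketched as a small decision function. This is a hypothetical illustration only: the parameter names come from the input schema below, but the thresholds (e.g. 60 updates/hour, 1-hour latency) and the exact precedence rules are assumptions, not the tool's actual logic.

```python
def classify_monitoring_level(update_frequency_per_hour: int,
                              is_predictive: bool,
                              detection_latency_hours: int,
                              has_position_reporting: bool,
                              is_continuous: bool) -> str:
    """Hypothetical RRM level classifier; higher levels subsume lower."""
    # L3: moving block / predictive — assumed to require predictive
    # capability plus position reporting.
    if is_predictive and has_position_reporting:
        return "L3"
    # L2: continuous / streaming — assumed to require a continuous feed,
    # or a high update rate with low detection latency.
    if is_continuous or (update_frequency_per_hour >= 60
                         and detection_latency_hours <= 1):
        return "L2"
    # L1: spot / periodic — the default.
    return "L1"

# A system with no continuous feed and zero update frequency is L1:
classify_monitoring_level(0, False, 0, False, False)  # → "L1"
```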
## Taxonomy
Linnaean classification joined from the algovigilance taxonomy index via the parent config's rank.
| Rank | Value |
|---|---|
| domain | Substrata |
| kingdom | Constructa |
| phylum | Configa |
| class | station-config |
| order | microgram |
| family | mcp-tool-config |
Characteristics:
- substrate: config
- domain: pv
- lifecycle: continuous
- authority: read
- compounding: producer
- io: agent-request→tool-response
## Input schema
- `update_frequency_per_hour` (integer, required)
- `is_predictive` (boolean, required)
- `detection_latency_hours` (integer, required)
- `has_position_reporting` (boolean, required)
- `is_continuous` (boolean, required)
## Example call
```
POST /api/mcp
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "station__microgram__run-rrm-monitoring-level",
    "arguments": {
      "update_frequency_per_hour": 0,
      "is_predictive": false,
      "detection_latency_hours": 0,
      "has_position_reporting": false,
      "is_continuous": false
    }
  }
}
```

## How to invoke from a client
From any MCP-aware client, add https://algovigilance.com/api/mcp as an MCP server, then call this tool by name. From a raw HTTP client, send the JSON-RPC body above to /api/mcp.
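For a raw HTTP client, the call can be sketched with the Python standard library. The endpoint and JSON-RPC body are taken verbatim from the example above; the function name and argument values here are just illustrative wrappers.

```python
import json
import urllib.request

ENDPOINT = "https://algovigilance.com/api/mcp"

def run_rrm_monitoring_level(arguments: dict) -> dict:
    """POST a tools/call JSON-RPC request for this tool and return the reply."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "station__microgram__run-rrm-monitoring-level",
            "arguments": arguments,
        },
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Arguments matching the input schema (values from the example call):
example_args = {
    "update_frequency_per_hour": 0,
    "is_predictive": False,
    "detection_latency_hours": 0,
    "has_position_reporting": False,
    "is_continuous": False,
}
# run_rrm_monitoring_level(example_args)  # uncomment to perform a live call
```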
## Agent-friendly formats
Working inside an AI assistant? Use the Copy for AI button at the top of this page (or view the raw Markdown) to paste a clean, token-budgeted version of this tool's contract into your conversation.
## Related
- All tools (3062 live)
- /api/mcp — endpoint
- /AGENTS.md — agent guide
- /tools/microgram__run-rrm-monitoring-level/raw.md — this page's Markdown twin