run-rechallenge-evaluator
Rechallenge-rate evaluator using cross-variable comparison: compares baseline_rate, first_exposure_rate, and rechallenge_rate to assess the Bradford Hill "experiment" criterion. Cross-variable template resolution lets the three rates be compared directly against one another.
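The comparison this evaluator performs can be sketched as a simple ordering check over the three rates. The actual scoring rules are not documented on this page, so the thresholds and labels below are assumptions for illustration only:

```python
# Hypothetical sketch of the cross-variable rate comparison.
# Assumption: a positive rechallenge signal means the event rate rose
# above baseline on first exposure AND recurred above baseline on
# rechallenge; the real evaluator's logic may differ.

def evaluate_rechallenge(baseline_rate: float,
                         first_exposure_rate: float,
                         rechallenge_rate: float) -> str:
    """Classify the rate pattern against the Bradford Hill 'experiment' criterion."""
    if first_exposure_rate > baseline_rate and rechallenge_rate > baseline_rate:
        return "positive"       # event recurs on re-exposure: supports causality
    if first_exposure_rate > baseline_rate and rechallenge_rate <= baseline_rate:
        return "negative"       # event does not recur on re-exposure
    return "inconclusive"       # no clear exposure signal to begin with

print(evaluate_rechallenge(0.01, 0.05, 0.06))  # positive
```

A positive dechallenge/rechallenge pattern is one of the stronger single-case causality signals in pharmacovigilance, which is why the three rates are compared jointly rather than in isolation.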
Taxonomy
Linnaean classification joined from the algovigilance taxonomy index via the parent config's rank.
| Rank | Value |
|---|---|
| domain | Substrata |
| kingdom | Constructa |
| phylum | Configa |
| class | station-config |
| order | microgram |
| family | mcp-tool-config |
Characteristics:
- substrate: config
- domain: pv
- lifecycle: continuous
- authority: read
- compounding: producer
- io: agent-request→tool-response
Input schema
| Parameter | Type | Required | Description |
|---|---|---|---|
| rechallenge_rate | number | yes | Input parameter: rechallenge_rate |
| baseline_rate | number | yes | Input parameter: baseline_rate |
| first_exposure_rate | number | yes | Input parameter: first_exposure_rate |
Example call
POST /api/mcp
Content-Type: application/json
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "station__microgram__run-rechallenge-evaluator",
"arguments": {
"rechallenge_rate": 0,
"baseline_rate": 0,
"first_exposure_rate": 0
}
}
}

How to invoke from a client
From any MCP-aware client, add https://algovigilance.com/api/mcp as an MCP server, then call this tool by name. From a raw HTTP client, send the JSON-RPC body above to /api/mcp.
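For a raw HTTP client, the JSON-RPC call above can be sketched in Python using only the standard library. The endpoint and tool name are taken from this page; the `build_tool_call` helper name and argument values are illustrative, not part of the tool's contract:

```python
import json
import urllib.request

def build_tool_call(rechallenge_rate, baseline_rate, first_exposure_rate):
    """Build the JSON-RPC 2.0 tools/call body for this tool."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "station__microgram__run-rechallenge-evaluator",
            "arguments": {
                "rechallenge_rate": rechallenge_rate,
                "baseline_rate": baseline_rate,
                "first_exposure_rate": first_exposure_rate,
            },
        },
    }

def call_tool(body, url="https://algovigilance.com/api/mcp"):
    """POST the JSON-RPC body to the MCP endpoint (live network call)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_tool_call(0.06, 0.01, 0.05)
# result = call_tool(body)  # uncomment to send the request
```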
Agent-friendly formats
Working inside an AI assistant? Use the Copy for AI button at the top of this page (or view the raw Markdown) to paste a clean, token-budgeted version of this tool's contract into your conversation.
Related
- All tools (3062 live)
- /api/mcp — endpoint
- /AGENTS.md — agent guide
- /tools/microgram__run-rechallenge-evaluator/raw.md — this page's Markdown twin