developerpod
The machine that runs kcups.
Developerpod is the machine. Kcups are the pods — tiny *.kcup.toml files declaring what to gather, what to ask a model, and what shape the answer should take. The machine handles the rest: context, API calls, structured output. Write new dev tools by writing TOML, not code.
Links
- Source: github.com/DevRelopers/developerpod
- Crate: crates.io/crates/developerpod
- Site: developerpod.com
Getting Started
Install
cargo install developerpod
This puts a developerpod binary on your $PATH.
Set an API key
Developerpod auto-detects which provider to use by scanning your environment. Set any one of the supported keys — for example:
export ANTHROPIC_API_KEY=sk-ant-...
# or OPENAI_API_KEY, GEMINI_API_KEY, GROQ_API_KEY, MISTRAL_API_KEY,
# COHERE_API_KEY, DEEPSEEK_API_KEY, XAI_API_KEY, OPENROUTER_API_KEY,
# or any of ~50 other accepted aliases — see the Providers chapter.
The first key found wins, in priority order. If none are set, developerpod prints the full list of every variable it scanned, grouped by provider, and exits.
Run a kcup
The repo ships with a couple of example kcups. The simplest is repo-mood:
git clone https://github.com/DevRelopers/developerpod
cd developerpod
developerpod repo-mood
The machine looks for ./<name>.kcup.toml first, then ./examples/<name>.kcup.toml, so this picks up examples/repo-mood.kcup.toml. You'll see something like:
▶ brewing with Anthropic (claude-sonnet-4-6) — key from ANTHROPIC_API_KEY
▶ loading examples/repo-mood.kcup.toml
▶ pod repo-mood — Read the current vibe of a git repo (2 gatherers)
▣ repo-mood
evidence: …
mood: …
one_liner: …
That's the full loop: gather → interpolate → call model → validate → print.
Next steps
- Browse the example kcups to see what's possible.
- Read The Kcup Format for the TOML schema.
- Write your own — see Writing Your Own Kcup.
The Kcup Format
A kcup is a single TOML file named *.kcup.toml. Four parts: identity, gatherers, prompt, output schema.
Top-level fields
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Identifier shown in logs and as the output title. |
| description | string | no | One-line summary printed on each run. |
[[gather]] blocks
Zero or more gather blocks run before the model call. Each captures one piece of context and binds it to an id you can interpolate into the prompt.
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | yes | The handle used in the prompt as {{id}}. |
| shell | string | one of | Shell command run via sh -c. Captures stdout, trimmed. |
| file | string | one of | File path read as UTF-8. |
| optional | boolean | no | If true and file cannot be read, the value is empty instead of failing. |
Exactly one of shell or file must be set per block. A shell gatherer fails the run if the command exits non-zero (its stderr is shown). A file gatherer fails if the file is missing, unless optional = true.
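Those semantics can be sketched in Python (the real implementation is Rust; `run_gatherer` and the dict shape here are illustrative, not the shipped API):

```python
import subprocess
from pathlib import Path

def run_gatherer(gather: dict) -> str:
    """Sketch of one [[gather]] block's behavior."""
    if "shell" in gather:
        # Shell gatherers run via `sh -c`; a non-zero exit fails the run
        # and the command's stderr is surfaced in the error.
        proc = subprocess.run(["sh", "-c", gather["shell"]],
                              capture_output=True, text=True)
        if proc.returncode != 0:
            raise RuntimeError(f"{gather['id']}: {proc.stderr.strip()}")
        return proc.stdout.strip()          # stdout, trimmed
    try:
        # File gatherers read the path as UTF-8.
        return Path(gather["file"]).read_text(encoding="utf-8")
    except OSError:
        if gather.get("optional"):
            return ""                       # optional = true: empty, not fatal
        raise
```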
[prompt] block
| Field | Type | Required | Description |
|---|---|---|---|
| system | string | yes | System message — sets behavior, voice, output rules. |
| user | string | yes | User message template. {{id}} placeholders are replaced with gathered values before the call. |
Interpolation is plain string replace. Unknown placeholders are left as-is.
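That substitution amounts to (a sketch, not the Rust source):

```python
def interpolate(template: str, values: dict) -> str:
    # Plain string replacement: each {{id}} becomes its gathered value.
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    # Placeholders with no matching gatherer are left in place untouched.
    return template

interpolate("Diff:\n{{diff}}\nSee {{missing}}", {"diff": "+1 -1"})
# → "Diff:\n+1 -1\nSee {{missing}}"
```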
[output] block
| Field | Type | Required | Description |
|---|---|---|---|
| schema | table | yes | Map of field name → JSON-schema type. Drives structured output across all providers and post-call validation. |
Supported schema types
The validator recognizes these strings (from src/output.rs):
| Type | Validator check |
|---|---|
| string | JSON string |
| number | JSON number (int or float) |
| integer | JSON integer (signed or unsigned 64-bit) |
| boolean | JSON boolean |
| array | JSON array (element types are not constrained) |
| object | JSON object (nested keys are not constrained) |
Any other type string is passed through to the provider untouched, but the local validator will accept any value for it. Stick to the six above unless you know what you're doing.
Every field in schema is treated as required. The model is asked to return all of them.
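The validation pass can be pictured like this (the actual checks live in src/output.rs; this Python mirror is a sketch of the table above, not the real code):

```python
import json

# One predicate per recognized type string.
CHECKS = {
    "string":  lambda v: isinstance(v, str),
    "number":  lambda v: isinstance(v, (int, float)) and not isinstance(v, bool),
    "integer": lambda v: isinstance(v, int) and not isinstance(v, bool),
    "boolean": lambda v: isinstance(v, bool),
    "array":   lambda v: isinstance(v, list),
    "object":  lambda v: isinstance(v, dict),
}

def validate(schema: dict, raw: str) -> dict:
    data = json.loads(raw)
    for field, type_name in schema.items():
        if field not in data:
            # Every declared field is required.
            raise ValueError(f"missing required field: {field}")
        check = CHECKS.get(type_name)
        # Unrecognized type strings accept any value, as noted above.
        if check and not check(data[field]):
            raise ValueError(f"{field}: expected {type_name}")
    return data
```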
Minimal example
name = "echo"
description = "Round-trip a single string through the model."
[prompt]
system = "You repeat the user's message back, lowercased."
user = "Hello, world!"
[output]
schema = { result = "string" }
No gatherers, one input, one output field.
Maximal example
name = "release-notes"
description = "Draft release notes from the diff and merged PRs since the last tag."
[[gather]]
id = "last_tag"
shell = "git describe --tags --abbrev=0"
[[gather]]
id = "commits"
shell = "git log $(git describe --tags --abbrev=0)..HEAD --pretty=format:'%h %s'"
[[gather]]
id = "diff_stat"
shell = "git diff --stat $(git describe --tags --abbrev=0)..HEAD"
[[gather]]
id = "changelog"
file = "CHANGELOG.md"
optional = true
[prompt]
system = """
You write release notes in the style of crates.io changelogs.
Group changes into: Features, Fixes, Internal. Be specific. Cite commit shas.
"""
user = """
Last tag: {{last_tag}}
Commits since:
{{commits}}
Diff stat:
{{diff_stat}}
Existing CHANGELOG (for tone reference):
{{changelog}}
"""
[output]
schema = { version = "string", features = "array", fixes = "array", internal = "array", summary = "string" }
Four gatherers (one optional file), a multi-line prompt, and a richer schema.
Providers
Developerpod supports nine AI providers out of the box. At startup it scans your environment for an API key in priority order; the first key found wins. Across all nine providers, 53 env var names are recognized.
The table below mirrors src/provider.rs. Pushes that touch that file rebuild this site automatically, but if you ever spot drift, the source is authoritative.
Supported providers
| # | Provider | --provider id | Default model | Endpoint |
|---|---|---|---|---|
| 1 | Anthropic | anthropic | claude-sonnet-4-6 | api.anthropic.com/v1/messages |
| 2 | OpenAI | openai | gpt-5.4 | api.openai.com/v1/chat/completions |
| 3 | Google | google | gemini-2.5-flash | generativelanguage.googleapis.com/v1beta/models |
| 4 | Groq | groq | llama-3.3-70b-versatile | api.groq.com/openai/v1/chat/completions |
| 5 | Mistral | mistral | mistral-large-latest | api.mistral.ai/v1/chat/completions |
| 6 | Cohere | cohere | command-a-03-2025 | api.cohere.com/v2/chat |
| 7 | DeepSeek | deepseek | deepseek-chat | api.deepseek.com/v1/chat/completions |
| 8 | xAI | xai | grok-4-1-fast-non-reasoning | api.x.ai/v1/chat/completions |
| 9 | OpenRouter | openrouter | anthropic/claude-sonnet-4.6 | openrouter.ai/api/v1/chat/completions |
How structured output is requested
Each provider has its own way to ask for JSON-schema-shaped output. Developerpod dispatches per provider and translates your [output].schema into that provider's idiom:
| Provider | Mechanism |
|---|---|
| Anthropic | Forced tool_use with a tool named emit_result |
| OpenAI / Groq / DeepSeek / xAI / OpenRouter | response_format: { type: "json_schema", json_schema: { strict: true, … } } |
| Google | generationConfig.responseMimeType = "application/json" + responseSchema |
| Mistral | response_format: { type: "json_object" } with the schema inlined into the system prompt |
| Cohere v2 | response_format: { type: "json_object", json_schema: { … } } |
Detection priority
- Provider order is fixed (the table above). If both Anthropic and OpenAI keys are set, Anthropic wins.
- Within a provider, env var names are tried in the order shown on the Environment Variables page. Canonical names come first, then community conventions, then short forms, then *_API_TOKEN variants.
- Empty values (whitespace-only) are treated as unset.
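First-key-wins scanning is simple enough to sketch (the priority table here is abridged and illustrative; the full 53-name list lives in src/provider.rs):

```python
# Abridged, illustrative priority table — not the full list.
PRIORITY = [
    ("anthropic", ["ANTHROPIC_API_KEY", "CLAUDE_API_KEY"]),
    ("openai",    ["OPENAI_API_KEY", "CHATGPT_API_KEY"]),
]

def detect(env: dict):
    for provider, names in PRIORITY:       # fixed provider order
        for name in names:                 # canonical name first
            if env.get(name, "").strip():  # whitespace-only counts as unset
                return provider, name
    return None                            # caller prints the scanned list
```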
Override flags
Force a specific provider, model, or both — useful for switching between providers you have keys for, or for trying a non-default model.
developerpod repo-mood --provider openai
developerpod repo-mood --provider google --model gemini-2.5-pro
developerpod repo-mood --model claude-opus-4-7 # uses your detected provider
If --provider is set, only that provider's env vars are scanned; the run errors out if none are present.
When no key is found
Developerpod exits with a list of every variable it scanned, grouped by provider, and a hint about --provider:
Error: no AI provider API key found in environment. Scanned:
Anthropic: ANTHROPIC_API_KEY, CLAUDE_API_KEY, ANTHROPIC_KEY, …
OpenAI: OPENAI_API_KEY, CHATGPT_API_KEY, OPENAI_KEY, …
…
Set one of these env vars, or pass --provider <name> if your key uses a non-standard name.
The full list lives in Environment Variables.
Writing Your Own Kcup
The whole point of developerpod is that a useful dev tool can be a single TOML file. Here's how to write one.
1. Pick the smallest useful question
A good kcup answers one specific question that needs a model in the loop — interpretation, summarization, judgment, classification — using context that's awkward to assemble by hand. If you can answer it with a regex, you don't need a kcup.
Some prompts that work well:
- "What's the actual change in this diff, ignoring the rename noise?"
- "Which of these failing tests are flaky vs. real?"
- "Draft a release-notes paragraph from these commits."
- "Read this README and tell me what the project actually does."
2. Start with one shell gatherer
Resist the urge to gather everything up front. Get one signal flowing first.
name = "scratch"
description = "Scratch pad — replace me"
[[gather]]
id = "input"
shell = "git log --oneline -10"
[prompt]
system = "You summarize git logs in one sentence."
user = "{{input}}"
[output]
schema = { summary = "string" }
Save as scratch.kcup.toml and run developerpod scratch. If that works, you have the loop end-to-end.
3. Add the gatherers you actually need
Each gatherer is either a shell command or a file read. Add them one at a time and reference them in the prompt with {{id}}:
[[gather]]
id = "diff"
shell = "git diff HEAD~1"
[[gather]]
id = "readme"
file = "README.md"
optional = true # don't fail the run if it's missing
Mark anything that might not exist as optional = true.
4. Tighten the prompt
Once context is flowing, tune the prompt:
- System message: behavior, voice, format constraints. Keep it short.
- User template: the actual data, with labels so the model knows what each chunk is.
[prompt]
system = "You read git diffs and identify the smallest accurate description of the change."
user = """
Diff:
{{diff}}
README (for project context):
{{readme}}
"""
5. Declare a real schema
The schema does double duty: it tells the provider what shape to return (so you don't get prose when you wanted a list), and it's checked locally before printing.
[output]
schema = { headline = "string", details = "string", risk_level = "string" }
Use array if you genuinely want a list, object for nested data, boolean for yes/no questions. The full type list is in The Kcup Format.
6. Iterate
Run, read the output, adjust the prompt. The fastest improvements usually come from:
- Adding a missing piece of context as another gatherer.
- Renaming schema fields so the model knows what each one means.
- Tightening the system message — "be terse", "cite shas", "avoid hedging", whatever you actually want.
You're done when the output is the thing you would have written yourself.
Kcups vs. agent skills
Kcups look superficially similar to agent skills (Claude Code skills, OpenAI Assistants instructions, etc.) — both are file-based ways to package "how to use a model for a thing" without writing application code. The difference is in who reads the file and when the work happens.
| | Kcup | Agent skill |
|---|---|---|
| Read by | The developerpod CLI (deterministic). | The agent (an LLM) deciding what to do next. |
| Invoked | Explicitly: developerpod <name>. | Implicitly, when the agent judges the skill is relevant to the request. |
| Shape | TOML with declared sections and a typed output schema. | Markdown prose describing capability, triggers, and guidance. |
| Control flow | One pass: gather → prompt → structured output → exit. | Multi-turn loop: the agent picks tools, branches, asks follow-ups. |
| Side effects | None. Kcups gather and print; they don't take actions. | Whatever the agent's tools allow — file edits, shell, API calls, etc. |
| State across calls | None. | The agent carries conversation state. |
| Runtime requirement | The developerpod binary + an API key. | An agent harness (Claude Code, Claude API agent loop, Cursor, etc.). |
| Output | A JSON object matching a declared schema, pretty-printed. | Whatever the agent decides to say or do next. |
Reach for an agent skill when the task needs judgment about what to do next — choosing among tools, deciding when something's done, recovering from errors, handling a back-and-forth.
Reach for a kcup when the task is "look at this specific context, give me back this specific shape" and you want to call it from a Makefile, a git hook, CI, or a one-line shell alias. Kcups are scripts, not assistants.
The two compose: a kcup makes a great pre-step that hands a structured result to an agent, and an agent can absolutely shell out to developerpod <name> mid-loop when it needs an opinion on something concrete.
When not to write a kcup
- The answer doesn't need a model. Use a shell script.
- You need to take an action based on the output, not just read it. Kcups print; they don't act. Pipe the JSON elsewhere if you need to.
- The context exceeds what you want to send to a model. Kcups gather everything before the call — they don't paginate or retrieve.
- The right tool is an agent skill (see above).
repo-mood
Read the current vibe of a git repo. Two gatherers — recent commits and the README — feed a single prompt that returns a mood, supporting evidence, and a one-liner.
The kcup
name = "repo-mood"
description = "Read the current vibe of a git repo"
[[gather]]
id = "commits"
shell = "git log --oneline -20"
[[gather]]
id = "readme"
file = "README.md"
optional = true
[prompt]
system = "You read repo signals and return the current mood."
user = """
Recent commits:
{{commits}}
README:
{{readme}}
"""
[output]
schema = { mood = "string", evidence = "string", one_liner = "string" }
How it breaks down
Identity
name = "repo-mood"
description = "Read the current vibe of a git repo"
name matches the lookup path: developerpod repo-mood resolves to ./repo-mood.kcup.toml or ./examples/repo-mood.kcup.toml. description is printed on each run.
Gatherers
[[gather]]
id = "commits"
shell = "git log --oneline -20"
[[gather]]
id = "readme"
file = "README.md"
optional = true
The commits gatherer runs the git log command via sh -c and captures stdout. The readme gatherer reads README.md as text; optional = true means a missing README produces an empty string instead of an error — which is what you want for a tool you might point at any repo.
Prompt
[prompt]
system = "You read repo signals and return the current mood."
user = """
Recent commits:
{{commits}}
README:
{{readme}}
"""
{{commits}} and {{readme}} are replaced with the gathered values before the call. The system message sets the framing; the user message hands over the labeled context.
Output schema
[output]
schema = { mood = "string", evidence = "string", one_liner = "string" }
Three string fields. Whichever provider is detected, developerpod converts this into that provider's structured-output mechanism (forced tool use, response_format, responseSchema, etc.) so the model returns exactly these fields.
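For the OpenAI-style providers, that translation roughly amounts to the following (a hedged sketch; the exact payload shape varies per provider and per API version):

```python
def to_json_schema(schema: dict) -> dict:
    """Illustrative: flat `schema = { field = "type" }` → JSON-schema body."""
    return {
        "type": "object",
        # One property per declared field, typed as written in the kcup.
        "properties": {name: {"type": t} for name, t in schema.items()},
        # Every declared field is required.
        "required": list(schema),
        "additionalProperties": False,
    }

to_json_schema({"mood": "string", "evidence": "string", "one_liner": "string"})
```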
Run it
developerpod repo-mood
Sample output
▶ brewing with Anthropic (claude-sonnet-4-6) — key from ANTHROPIC_API_KEY
▶ loading examples/repo-mood.kcup.toml
▶ pod repo-mood — Read the current vibe of a git repo (2 gatherers)
▣ repo-mood
evidence: Fresh scaffold landed in rapid succession — initial commit,
provider auto-detection across 9 providers, model refresh, env-var
expansion. README is thorough and self-described as v0.1 with a
clear missing-features list.
mood: energized early-stage
one_liner: Fresh scaffold with real ambition — the foundation is solid and the roadmap is already visible in what's missing.
standup
Generate a standup report from your recent git activity. Five shell gatherers (yesterday's commits, today's commits, current branch, uncommitted changes, recently active branches) feed a prompt that returns the three classic standup fields: yesterday, today, blockers.
The kcup
name = "standup"
description = "Generate a standup report from recent git activity"
[[gather]]
id = "yesterday_commits"
shell = "git log --since='yesterday.midnight' --until='today.midnight' --author=\"$(git config user.email)\" --pretty=format:'%h %s' --all"
[[gather]]
id = "today_commits"
shell = "git log --since='today.midnight' --author=\"$(git config user.email)\" --pretty=format:'%h %s' --all"
[[gather]]
id = "current_branch"
shell = "git rev-parse --abbrev-ref HEAD"
[[gather]]
id = "status"
shell = "git status --short"
[[gather]]
id = "recent_branches"
shell = "git for-each-ref --sort=-committerdate --count=5 --format='%(refname:short) %(committerdate:relative)' refs/heads/"
[prompt]
system = """
You generate concise standup reports from git activity. Be factual and specific — reference actual commit messages and files. Do not invent work that isn't evidenced in the data. If yesterday had no commits, say so. Keep each field to 1-3 short sentences.
"""
user = """
Yesterday's commits (by me):
{{yesterday_commits}}
Today's commits so far (by me):
{{today_commits}}
Current branch: {{current_branch}}
Uncommitted changes:
{{status}}
Recently active branches:
{{recent_branches}}
Generate a standup report.
"""
[output]
schema = { yesterday = "string", today = "string", blockers = "string" }
How it breaks down
Gatherers
[[gather]]
id = "yesterday_commits"
shell = "git log --since='yesterday.midnight' --until='today.midnight' --author=\"$(git config user.email)\" --pretty=format:'%h %s' --all"
--all picks up commits across every branch (so work-in-progress branches you never merged still show up). --author="$(git config user.email)" filters to your own commits using whatever email your local git is configured with — no hardcoding. Note the escaped double quotes: in the TOML string they produce literal double quotes, so the $(...) command substitution reaches the shell intact.
[[gather]]
id = "today_commits"
shell = "git log --since='today.midnight' --author=\"$(git config user.email)\" --pretty=format:'%h %s' --all"
Same pattern, narrowed to today.
[[gather]]
id = "current_branch"
shell = "git rev-parse --abbrev-ref HEAD"
[[gather]]
id = "status"
shell = "git status --short"
[[gather]]
id = "recent_branches"
shell = "git for-each-ref --sort=-committerdate --count=5 --format='%(refname:short) %(committerdate:relative)' refs/heads/"
current_branch and status together describe what you're sitting on right now — useful for the "today / blockers" framing. recent_branches lists the five most recently touched branches so the model can spot context-switches you might want to mention.
Prompt + schema
[prompt]
system = """
You generate concise standup reports from git activity. Be factual and specific — reference actual commit messages and files. Do not invent work that isn't evidenced in the data. If yesterday had no commits, say so. Keep each field to 1-3 short sentences.
"""
[output]
schema = { yesterday = "string", today = "string", blockers = "string" }
The system message is doing real work here: it explicitly forbids hallucination ("Do not invent work that isn't evidenced in the data") and constrains length (1–3 short sentences per field). Without those constraints the model tends to pad.
Run it
developerpod standup
Sample output
▶ brewing with Anthropic (claude-sonnet-4-6) — key from ANTHROPIC_API_KEY
▶ loading examples/standup.kcup.toml
▶ pod standup — Generate a standup report from recent git activity (5 gatherers)
▣ standup
blockers: None — current branch is clean and pushed; no uncommitted work in flight.
today: Started on the docs site scaffold (mdBook + GitHub Actions deploy).
yesterday: Shipped 0.2.0 to crates.io. Auto-detect rewrite landed (9 providers,
53 env var names) and per-provider default models refreshed to
current April 2026 IDs.
Variations to try
- Add merged PRs: gh pr list --author @me --state merged --limit 5 --json title,mergedAt — feed it as a sixth gatherer if you have gh installed.
- Cross-repo: replace the git log calls with a script that loops over a list of repo paths.
- Slack-ready: change the system message to "format the result as a single Slack-ready message under 200 words" and collapse the schema to { message = "string" }.
CLI Reference
developerpod [OPTIONS] <KCUP>
Run a kcup. Resolves <KCUP> to ./<KCUP>.kcup.toml, then ./examples/<KCUP>.kcup.toml.
Arguments
| Name | Required | Description |
|---|---|---|
| <KCUP> | yes | Name of the kcup to run (without the .kcup.toml suffix). Resolution order described above. |
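The resolution order can be sketched as (illustrative Python, not the shipped Rust):

```python
from pathlib import Path

def resolve_kcup(name: str) -> Path:
    # ./<name>.kcup.toml wins over ./examples/<name>.kcup.toml.
    for candidate in (Path(f"{name}.kcup.toml"),
                      Path("examples") / f"{name}.kcup.toml"):
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(f"no kcup named {name!r}")
```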
Options
| Flag | Description |
|---|---|
| --provider <ID> | Force a specific provider. Valid IDs: anthropic, openai, google, groq, mistral, cohere, deepseek, xai, openrouter. Skips auto-detection and only scans that provider's env vars. |
| --model <NAME> | Override the model name for whichever provider is selected. Useful when a provider's default has been deprecated or when you want to try a different tier (e.g. Opus instead of Sonnet on Anthropic). |
| -h, --help | Print help and exit. |
| -V, --version | Print version and exit. |
Examples
# Auto-detect provider, run the bundled example
developerpod repo-mood
# Force OpenAI even if Anthropic is also set
developerpod repo-mood --provider openai
# Force a specific model on the detected provider
developerpod repo-mood --model claude-opus-4-7
# Force both
developerpod repo-mood --provider google --model gemini-2.5-pro
Exit codes
| Code | Meaning |
|---|---|
| 0 | Success — the model returned a valid response matching the declared schema and it was printed. |
| 1 | Any failure — kcup not found, parse error, gatherer failure, missing API key, API error, schema validation failure. The full error chain (via anyhow) is printed to stderr. |
There are currently no distinct exit codes per failure type. If you need to programmatically distinguish between (for example) a missing API key and a network error, parse the stderr message — and open an issue, since that's a reasonable thing to want.
Output streams
- stdout: only the final pretty-printed result.
- stderr: the progress lines (▶ brewing with …, ▶ loading …, ▶ pod …) and any error chain.
This means you can pipe developerpod <name> into other tools and only get the result, while the progress output stays visible in your terminal.
Environment Variables
The full list of environment variables developerpod scans, grouped by provider, in priority order. The first non-empty value wins.
Mirrored from src/provider.rs. Pushes that touch that file rebuild this site.
Anthropic
ANTHROPIC_API_KEY
CLAUDE_API_KEY
ANTHROPIC_KEY
CLAUDE_KEY
ANTHROPIC_API_TOKEN
CLAUDE_API_TOKEN
OpenAI
OPENAI_API_KEY
CHATGPT_API_KEY
OPENAI_KEY
CHATGPT_KEY
GPT_API_KEY
GPT_KEY
OPENAI_API_TOKEN
OPENAI_TOKEN
CHATGPT_API_TOKEN
Google
GEMINI_API_KEY
GOOGLE_API_KEY
GOOGLE_GENERATIVE_AI_API_KEY # Vercel AI SDK default
GOOGLE_AI_API_KEY
GOOGLE_GENAI_API_KEY
GOOGLE_GEMINI_API_KEY
GEMINI_KEY
Groq
GROQ_API_KEY
GROQ_KEY
GROQ_API_TOKEN
Mistral
MISTRAL_API_KEY
MISTRALAI_API_KEY
MISTRAL_KEY
MISTRAL_API_TOKEN
Cohere
COHERE_API_KEY
CO_API_KEY # Cohere SDK default
COHERE_KEY
CO_KEY
COHERE_API_TOKEN
DeepSeek
DEEPSEEK_API_KEY
DEEPSEEK_KEY
DEEPSEEK_API_TOKEN
DEEPSEEK_TOKEN
xAI
XAI_API_KEY
GROK_API_KEY
XAI_KEY
GROK_KEY
XAI_API_TOKEN
GROK_API_TOKEN
OpenRouter
OPENROUTER_API_KEY
OPEN_ROUTER_API_KEY
OPENROUTER_KEY
OPENROUTER_API_TOKEN
Notes
- Empty values count as unset. Whitespace-only values are skipped, so export OPENAI_API_KEY="" won't accidentally select OpenAI.
- Provider order is fixed. If both Anthropic and OpenAI keys are set, Anthropic wins. To override, pass --provider.
- Within a provider, the canonical name is first, then community conventions, then short forms, then *_API_TOKEN / *_TOKEN variants.
- Missing your name? Open an issue or PR. The list is meant to be generous — if a sensible convention exists in the wild and we don't catch it, that's a bug.