Your schema already has the logic. Your inference provider just ignores it.
Your agent drafts a plan: roll out auth-api to production, run a smoke test, and update the incident channel. It calls the deploy_service tool with:
{ "service": "auth-api", "version": "2.1.0", "environment": "production" }
The tool rejects it: production deploys also require an approval ticket, a rollback version, and a health check endpoint. The agent retries, adds the ticket, retries again for the rollback version, then has to re-plan because the deploy step is blocked.
This failure pattern is common, and it is avoidable. The business rules the agent is tripping over are already expressible in JSON Schema. The problem is that many inference providers' structured outputs don't support conditional keywords. Worse, some don't return an error; they silently ignore them (we've documented the full quality gap between us and other providers). So your conditional requirements show up as tool-call failures.
JSON that parses is not enough
In agentic workflows, either the arguments satisfy the tool's contract, or the step fails. A failure forces the agent to recover by retrying the call, and sometimes revisiting the plan. That's where latency and token spend pile up.
The catch is that many tool contracts are conditional:
- If environment is "production", require approval_ticket, rollback_version, and health_check_endpoint.
- If sort_by is present, require sort_order.
- If page_token is present, require page_size.
JSON Schema already expresses these rules with keywords like if/then/else and dependentRequired. But many structured output implementations enforce only the easy subset (types, required, enums) and ignore conditional and dependency constraints.
What Teams Do Today
When conditional structure isn’t enforced at generation time, teams bolt on guardrails around the agent loop. They work, but move complexity into the most expensive layer: runtime.
Make everything optional and validate after. You flatten the schema, make every conditionally-required field optional, let the LLM generate, then validate in application code. When it fails, especially with multiple interacting conditions, you retry. Each retry is another API call: more latency, more tokens, more money. You’ve turned schema logic into a rejection-sampling loop.
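That loop can be sketched in a few lines. Everything here is hypothetical scaffolding (the `call_llm` stub stands in for a real completion call, and `validate_deploy` mirrors only part of the deploy contract); the point is that every failed validation round-trips through the model:

```python
import re

def validate_deploy(args: dict) -> list[str]:
    """Post-hoc checks that mirror part of the (unenforced) conditional schema."""
    errors = []
    if args.get("environment") == "production":
        if "approval_ticket" not in args:
            errors.append("production deploys require approval_ticket")
        elif not re.fullmatch(r"PROD-\d{4,}", args["approval_ticket"]):
            errors.append("approval_ticket must match PROD-XXXX")
        if "rollback_version" not in args:
            errors.append("production deploys require rollback_version")
    return errors

def call_llm(prompt: str, feedback: list[str]) -> dict:
    # Stand-in for a real API call: each retry is another network
    # round trip and another batch of billed tokens.
    ...

def get_tool_args(prompt: str, max_retries: int = 3) -> dict:
    feedback: list[str] = []
    for _ in range(max_retries):
        args = call_llm(prompt, feedback)  # one paid call per attempt
        feedback = validate_deploy(args)
        if not feedback:
            return args
    raise RuntimeError(f"still invalid after {max_retries} tries: {feedback}")
```

The validator only sees the output after you have paid for it; with several interacting conditions, the loop can take multiple passes to converge.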
Split one tool into many. Instead of one deployment tool with conditional branches (dev vs. production), you define multiple tools: deploy_to_dev, deploy_to_staging, and deploy_to_production. This reduces schema complexity, but it increases the agent's action space and prompt footprint. Agents need to choose the right tool, and tool explosion makes that choice harder, slower, and less reliable.
Push rules into application code. You accept flat output and enforce conditions in your code. Now your schema says one thing and your code says another. As business rules change, schemas and code end up drifting.
Encode constraints in the prompt. You add instructions like "if the environment is production, include an approval_ticket matching PROD-XXXX." This is probabilistic and brittle, and the failure mode is silent: output that looks valid but violates a constraint the model forgot about.
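The silent failure is easy to reproduce: the arguments parse cleanly and type-check, so nothing downstream flags them until the tool itself rejects the call. A hypothetical output illustrating it:

```python
import json

# Output that "looks valid": parses fine, right types, wrong branch.
raw = '{"service": "auth-api", "version": "2.1.0", "environment": "production"}'
args = json.loads(raw)                 # no error here...
assert "approval_ticket" not in args   # ...but the production rule was forgotten
```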
These are all different ways of saying: "the agent can’t rely on the schema as a contract."
What JSON Schema Can Already Express
The frustrating part is that the rules you’re reimplementing in retries and orchestration are already first-class in JSON Schema.
Take the deploy tool. A dev deploy needs only the basics; a production deploy also requires an approval ticket (matching PROD-XXXX), a rollback version, and a health check endpoint. One tool definition, two different sets of requirements:
```json
{
  "type": "object",
  "properties": {
    "service": { "type": "string" },
    "version": { "type": "string" },
    "environment": { "enum": ["dev", "staging", "production"] },
    "approval_ticket": { "type": "string" },
    "rollback_version": { "type": "string" },
    "health_check_endpoint": { "type": "string" }
  },
  "required": ["service", "version", "environment"],
  "if": {
    "properties": { "environment": { "const": "production" } }
  },
  "then": {
    "required": ["approval_ticket", "rollback_version", "health_check_endpoint"],
    "properties": {
      "approval_ticket": { "pattern": "^PROD-[0-9]{4,}$" }
    }
  }
}
```
The pairing rules from the search example map directly to dependentRequired:

```json
{
  "type": "object",
  "properties": {
    "query": { "type": "string" },
    "filter": { "type": "string" },
    "sort_by": { "type": "string" },
    "sort_order": { "enum": ["asc", "desc"] },
    "page_token": { "type": "string" },
    "page_size": { "type": "integer" }
  },
  "required": ["query"],
  "dependentRequired": {
    "sort_by": ["sort_order"],
    "page_token": ["page_size"]
  }
}
```
Why Generation-Time Enforcement Matters
Validation after generation tells you what went wrong. Enforcement during generation prevents the wrong branch from being produced in the first place. In agent loops, that difference matters: it keeps steps from failing, so the agent spends less time recovering and more time progressing.
Try it on your schema
We implemented generation-time enforcement for these conditional features.
If your agent relies on retries, prompt glue, or tool-splitting to handle conditional arguments, send us your tool schemas. We’ll point out where conditional JSON Schema can replace orchestration logic, and what it’s likely to do to failure rate, token spend, and end-to-end latency.