Use Textual Gradient Descent to optimize a math word problems prompt on Claude Haiku 4 against p95 latency without regressing safety.
Use Textual Gradient Descent to optimize a multi-hop QA prompt on GPT-4o against accuracy without regressing safety.
Use Textual Gradient Descent to optimize a research synthesis prompt on Gemini 2.5 Pro against F1 score without regressing safety.
Use Textual Gradient Descent to optimize a legal brief summarization prompt on GPT-4.1 against F1 score without regressing safety.
Use Textual Gradient Descent to optimize an incident post-mortem prompt on Gemini 2.0 Flash against factuality without regressing safety.
Use Textual Gradient Descent to optimize a data pipeline debugging prompt on o1-mini against refusal rate without regressing safety.
Use Textual Gradient Descent to optimize a resume screening prompt on Claude 3.7 Sonnet against tool-call precision without regressing safety.
Use Textual Gradient Descent to optimize a math word problems prompt on o3-mini against tool-call precision without regressing safety.
Use Textual Gradient Descent to optimize a multi-hop QA prompt on Llama 3.3 70B against format-compliance rate without regressing safety.
Use Textual Gradient Descent to optimize a legal brief summarization prompt on Llama 3.1 405B against hallucination rate without regressing safety.
Use Textual Gradient Descent to optimize an incident post-mortem prompt on Command R+ against user satisfaction (CSAT) without regressing safety.
Use Textual Gradient Descent to optimize a data pipeline debugging prompt on Claude Haiku 4 against inter-judge agreement without regressing safety.
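Every line above instantiates the same recipe: critique the current prompt with a model (the "textual gradient"), rewrite the prompt to apply the critique (the "update"), and accept the revision only if the target metric improves without a safety regression. A minimal sketch of that loop, with `call_llm` as a hypothetical stand-in for whichever model API (Claude, GPT, Gemini, etc.) the task targets; all names and the stubbed responses are illustrative assumptions, not any library's actual API:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real model API client.
    Stubbed here with canned responses so the sketch runs standalone."""
    if prompt.startswith("Critique"):
        return "Add explicit step-by-step reasoning instructions."
    # "Update" stub: return the current prompt with the critique applied.
    return prompt.split("CURRENT PROMPT:\n", 1)[-1] + "\nThink step by step."

def textual_gradient_step(current_prompt: str, failures: str, metric: str) -> str:
    # "Backward pass": ask the model to critique the prompt against observed failures.
    critique = call_llm(
        f"Critique this prompt given these failures on {metric}:\n"
        f"{failures}\nPROMPT:\n{current_prompt}"
    )
    # "Update": ask the model to rewrite the prompt applying the critique.
    return call_llm(
        f"Apply this critique: {critique}\nCURRENT PROMPT:\n{current_prompt}"
    )

def optimize(prompt, eval_fn, safety_fn, metric="accuracy", steps=3):
    """Greedy TGD loop: keep a candidate only if the target metric improves
    AND the safety score does not regress (the constraint in every task above)."""
    best, best_score = prompt, eval_fn(prompt)
    for _ in range(steps):
        candidate = textual_gradient_step(best, failures="(eval transcripts)", metric=metric)
        if eval_fn(candidate) > best_score and safety_fn(candidate) >= safety_fn(best):
            best, best_score = candidate, eval_fn(candidate)
    return best
```

In practice `eval_fn` would score the candidate prompt on a held-out task set (accuracy, F1, p95 latency, etc.) and `safety_fn` would run a safety eval; the accept-only-if-no-regression check is what makes the "without regressing safety" constraint explicit rather than implicit in the critique text.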