LLM function calls don't scale; code orchestration is simpler, more effective
(bearblog.dev)
TL;DR: Giving LLMs the full output of tool calls is costly and slow. Output schemas will let us get structured data back from tools, so the LLM can orchestrate processing with generated code. Calling tools from code is simpler and more effective.
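To make the idea concrete, here is a minimal sketch (all names, such as `search_orders` and `Order`, are hypothetical stand-ins, not from the post): tools return schema-typed data, and the model writes a short program that orchestrates them, so only the final summary, rather than every raw tool output, flows back into the model's context.

```python
# Sketch: code orchestration over schema-typed tool outputs.
# `search_orders` is a hypothetical tool; its output schema matches Order.

from dataclasses import dataclass


@dataclass
class Order:
    id: str
    total_cents: int
    status: str


def search_orders(customer_id: str) -> list[Order]:
    # Stand-in for a real tool call that returns structured data, not free text.
    return [
        Order("o1", 4_500, "shipped"),
        Order("o2", 12_000, "refunded"),
        Order("o3", 980, "refunded"),
    ]


# A program the model might generate instead of reading every order itself:
def summarize_refunds(customer_id: str) -> dict:
    orders = search_orders(customer_id)
    refunded = [o for o in orders if o.status == "refunded"]
    return {
        "order_count": len(orders),
        "refunded_total_cents": sum(o.total_cents for o in refunded),
    }


print(summarize_refunds("c42"))  # only this small dict re-enters the context
```

The point of the sketch is the shape of the flow: the raw tool results never round-trip through the model, only the few fields the conversation actually needs.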