Researchers from Soochow University in China have introduced Chain-of-Tools (CoTools), a new framework designed to enhance how large language models (LLMs) use external tools. CoTools aims to offer a more efficient and flexible approach than current methods, allowing LLMs to leverage vast toolsets directly within their reasoning process, including tools they have never been explicitly trained on.
For enterprises looking to build sophisticated AI agents, this capability could unlock more powerful and adaptable applications without the typical drawbacks of current tool-integration techniques.
While modern LLMs excel at generating text, understanding language and even complex reasoning, many practical tasks require them to interact with external resources and tools such as databases or applications. Equipping LLMs with external tools (essentially APIs or functions they can call) is crucial for extending their capabilities into real-world applications.
However, current methods for enabling tool use face significant trade-offs. One common approach involves fine-tuning the LLM on examples of tool usage. While this can make the model proficient at calling the specific tools seen during training, it often restricts the model to only those tools. Moreover, the fine-tuning process itself can sometimes degrade the LLM's general reasoning abilities, such as chain-of-thought (CoT) reasoning, diminishing the core strengths of the foundation model.
The alternative approach relies on in-context learning (ICL), where the LLM is given descriptions of available tools and examples of their use directly within the prompt. This method offers flexibility, letting the model use tools it has not seen before. However, constructing these complex prompts is cumbersome, and the model's efficiency degrades significantly as the number of available tools grows, making ICL impractical for scenarios with large, dynamic toolsets.
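To make the ICL bottleneck concrete, here is a minimal sketch of how such a prompt might be assembled. All tool names, signatures, and descriptions below are hypothetical illustrations, not tools from the paper; the point is simply that every description and demonstration must be packed into the prompt, so its size grows linearly with the toolset:

```python
# Hypothetical tools; under the ICL approach, every description and demo
# must be carried inside the prompt itself.
tools = {
    "get_weather(city)": "Returns the current weather for a city.",
    "currency_convert(amount, src, dst)": "Converts between currencies.",
    "wiki_lookup(entity)": "Fetches a short summary of an entity.",
}

def build_icl_prompt(question: str, tools: dict) -> str:
    lines = ["You can call the following tools:"]
    for signature, description in tools.items():
        lines.append(f"- {signature}: {description}")
        lines.append(f"  Example: ... <call>{signature}</call> ...")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

prompt = build_icl_prompt("What is the weather in Paris?", tools)
print(prompt)
# With three tools the prompt is short, but with thousands of tools most
# of the context window would be spent on descriptions, not reasoning.
```

This is the scaling problem CoTools is designed to sidestep: rather than enumerating every tool up front, it defers tool descriptions until a specific tool has already been chosen.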
As the researchers note in the paper introducing Chain-of-Tools, an LLM agent "should be able to manage a large amount of tools efficiently and make full use of unseen tools during CoT reasoning, as many new tools may emerge daily in real-world application scenarios."
CoTools offers a compelling alternative by combining aspects of fine-tuning and semantic understanding while keeping the core LLM "frozen": its original weights stay intact, preserving its strong reasoning capabilities. Rather than fine-tuning the entire model, CoTools trains lightweight, specialized modules that work alongside the LLM during generation.
"The core idea of CoTools is to leverage the semantic representation capabilities of frozen foundation models for judging where to call tools and selecting which tools to call," the researchers wrote.
In essence, CoTools taps into the rich understanding embedded in the LLM's internal representations, often called "hidden states," which are computed as the model processes the input text and generates response tokens.

The CoTools framework comprises three main components that operate sequentially during the LLM's reasoning process:
Tool Judge: As the LLM generates its response token by token, the Tool Judge analyzes the hidden state associated with the next potential token and decides whether calling a tool is appropriate at that specific point in the reasoning chain.
Tool Retriever: If the Judge determines a tool is needed, the Retriever selects the most suitable tool for the task. It is trained to create an embedding of the query and compare it against the embeddings of the available tools' descriptions. This lets it efficiently identify the most relevant tool from the available pool, including "unseen" tools (i.e., tools that were not part of the CoTools modules' training data).
Tool Calling: Once the best tool is identified, CoTools uses an ICL prompt that demonstrates how to fill in the tool's parameters based on the context. This targeted use of ICL avoids the inefficiency of stuffing thousands of demonstrations into the initial prompt. The selected tool is then executed, and its result is inserted back into the LLM's response generation.
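The three stages above can be sketched in miniature. This is an illustrative mock-up only, using NumPy with random vectors standing in for real LLM hidden states and learned weights; the tool names, dimensions, and threshold are assumptions for demonstration, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 64  # illustrative hidden-state dimensionality

# Stand-in for the frozen LLM's hidden state at the current decoding step.
hidden_state = rng.normal(size=HIDDEN)

# 1) Tool Judge: a small learned probe over the hidden state that scores
#    whether a tool call is appropriate at this point in the chain.
judge_weights = rng.normal(size=HIDDEN)

def tool_judge(h: np.ndarray, threshold: float = 0.5) -> bool:
    score = 1.0 / (1.0 + np.exp(-(judge_weights @ h)))  # sigmoid probe
    return bool(score > threshold)

# 2) Tool Retriever: embed the query and every tool description, then pick
#    the tool with the highest cosine similarity. Because this is a simple
#    vector comparison, it scales to thousands of tools, including tools
#    never seen in training, as long as they have a description to embed.
tool_names = ["add", "multiply", "square_root"]  # hypothetical tools
tool_embeddings = rng.normal(size=(len(tool_names), HIDDEN))
# Construct a query embedding close to "multiply" for demonstration.
query_embedding = tool_embeddings[1] + 0.01 * rng.normal(size=HIDDEN)

def retrieve(query: np.ndarray, tools: np.ndarray) -> int:
    sims = (tools @ query) / (
        np.linalg.norm(tools, axis=1) * np.linalg.norm(query)
    )
    return int(np.argmax(sims))

best = tool_names[retrieve(query_embedding, tool_embeddings)]

# 3) Tool Calling: only now is a focused ICL prompt built, demonstrating
#    parameter filling for the one selected tool rather than all of them.
call_prompt = f"Fill in the arguments for {best}(a, b) given the context: ..."
print(best)  # the retriever selects "multiply" in this constructed example
```

Note how the expensive per-tool content (descriptions, demonstrations) only enters the prompt in step 3, and only for the single retrieved tool, which is what keeps the approach efficient at scale.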
By separating decision-making (the Judge) and selection (the Retriever), both grounded in semantic understanding, from parameter filling (handled via focused ICL), CoTools achieves efficiency even with massive toolsets while preserving the LLM's core capabilities and enabling flexible use of new tools. However, because CoTools requires access to the model's hidden states, it can only be applied to open-weight models such as Llama and Mistral, not proprietary models like GPT-4o and Claude.

The researchers evaluated CoTools in two distinct application scenarios: numerical reasoning using arithmetic tools, and knowledge-based question answering (KBQA), which requires retrieving facts from a knowledge base.
On math benchmarks such as GSM8K-XL (using basic arithmetic operations) and FuncQA (using more complex functions), CoTools applied to LLaMA2-7B achieved performance comparable to ChatGPT on GSM8K-XL and slightly outperformed or matched another tool-learning method, ToolkenGPT, on the FuncQA variants. The results highlight that CoTools effectively enhances the underlying foundation model's capabilities.
For KBQA tasks, tested on the KAMEL dataset and the newly created SimpleToolQuestions (STQuestions) dataset (1,836 tools, including 837 unseen in the test set), CoTools demonstrated superior tool-selection accuracy. It particularly excelled in scenarios with massive toolsets and when dealing with unseen tools, leveraging their descriptive information for effective retrieval where methods relying on trained tool representations stumbled. Experiments also indicated that CoTools maintained strong performance even with lower-quality training data.
Implications for the enterprise
Chain-of-Tools offers a promising direction for building more practical and powerful LLM-based agents in the enterprise. This is especially relevant as emerging standards such as the Model Context Protocol (MCP) make it easier for developers to integrate external tools and resources into their applications. Enterprises could deploy agents that adapt to new internal or external APIs and functions with minimal retraining overhead.
The framework's reliance on semantic understanding via hidden states also allows for precise and accurate tool selection, which could lead to more reliable AI assistants for tasks that require interacting with diverse information sources and systems.
"CoTools explores a way of equipping LLMs with massive new tools in a simple manner," Mengsong Wu, lead author of the CoTools paper and machine learning researcher at Soochow University, told VentureBeat. "It could be used to build a personal AI agent with MCP and to perform complex reasoning with scientific tools."
However, Wu also noted that the work so far is preliminary and exploratory: "To apply it in a real-world setting, we still need to find a balance between the cost of fine-tuning and the efficiency of generalized tool calling," Wu said.
The researchers have released the code for training the Judge and Retriever modules on GitHub.
"We believe that an ideal tool-learning agent framework, one that relies on frozen LLMs with semantic perception, can be useful in realistic applications and even spur further development of tool learning," the researchers write.