MirrorYuChen
Published on 2025-03-23

Qwen_Agent Source Code Analysis (2): The FnCallAgent Class

1. Initialization Interface

  • (1) Parameter overview
Parameter       Description
function_list   List of tools the agent can call
llm             LLM configuration dict or LLM instance
system_message  System prompt
name            Agent name
description     Agent description
files           List of knowledge-base files
  • (2) Source code analysis: first call the parent class Agent's initializer, then create a Memory object to manage the knowledge-base files.
def __init__(self,
                 function_list: Optional[List[Union[str, Dict, BaseTool]]] = None,
                 llm: Optional[Union[Dict, BaseChatModel]] = None,
                 system_message: Optional[str] = DEFAULT_SYSTEM_MESSAGE,
                 name: Optional[str] = None,
                 description: Optional[str] = None,
                 files: Optional[List[str]] = None,
                 **kwargs):
        """Initialize the agent.

        Args:
            function_list: One list of tool name, tool configuration or Tool object,
              such as 'code_interpreter', {'name': 'code_interpreter', 'timeout': 10}, or CodeInterpreter().
            llm: The LLM model configuration or LLM model object.
              Set the configuration as {'model': '', 'api_key': '', 'model_server': ''}.
            system_message: The specified system message for LLM chat.
            name: The name of this agent.
            description: The description of this agent, which will be used for multi_agent.
            files: A file url list. The initialized files for the agent.
        """
        # 1. Initialize the parent class Agent
        super().__init__(function_list=function_list,
                         llm=llm,
                         system_message=system_message,
                         name=name,
                         description=description)

        # 2. Initialize Memory, which manages the knowledge-base files
        if not hasattr(self, 'mem'):
            # Default to use Memory to manage files
            if 'qwq' in self.llm.model or 'qvq' in self.llm.model:
                mem_llm = {
                    'model': 'qwen-turbo-latest',
                    'model_type': 'qwen_dashscope',
                    'generate_cfg': {
                        'max_input_tokens': 30000
                    }
                }
            else:
                mem_llm = self.llm
            self.mem = Memory(llm=mem_llm, files=files, **kwargs)
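The QwQ/QVQ branch above swaps in a lighter model for retrieval: reasoning models are a poor fit for Memory's file handling, so the code falls back to qwen-turbo-latest. The logic can be isolated as a standalone sketch; `select_mem_llm` is a hypothetical helper name, not part of qwen_agent's API, and the config dicts mirror those in the source:

```python
def select_mem_llm(model_name: str, llm_cfg: dict) -> dict:
    """Pick the LLM config that Memory should use (hypothetical helper).

    Mirrors FnCallAgent.__init__: reasoning models (QwQ/QVQ) are
    replaced with qwen-turbo-latest for knowledge-base management.
    """
    if 'qwq' in model_name or 'qvq' in model_name:
        return {
            'model': 'qwen-turbo-latest',
            'model_type': 'qwen_dashscope',
            'generate_cfg': {'max_input_tokens': 30000},
        }
    # Any other model manages its own files directly.
    return llm_cfg

print(select_mem_llm('qwq-32b', {'model': 'qwq-32b'})['model'])    # qwen-turbo-latest
print(select_mem_llm('qwen-max', {'model': 'qwen-max'})['model'])  # qwen-max
```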

2. Overriding the Parent Class's Internal Run Method

  • (1) Parameter overview
Parameter  Description
messages   Incoming message list
lang       Language of the incoming messages
  • (2) Source code analysis: the method calls the LLM in a loop, capped at MAX_LLM_CALL_PER_RUN calls per run. Streamed output is yielded as it arrives; whenever the output contains a tool call, the tool is executed and its result is appended to messages before the next LLM call, until a turn uses no tool.
def _run(self, messages: List[Message], lang: Literal['en', 'zh'] = 'en', **kwargs) -> Iterator[List[Message]]:
        messages = copy.deepcopy(messages)
        num_llm_calls_available = MAX_LLM_CALL_PER_RUN
        response = []
        # Call the LLM repeatedly, with at most MAX_LLM_CALL_PER_RUN calls per run
        while num_llm_calls_available > 0:
            num_llm_calls_available -= 1
            # 1. Call the LLM
            extra_generate_cfg = {'lang': lang}
            if kwargs.get('seed') is not None:
                extra_generate_cfg['seed'] = kwargs['seed']
            output_stream = self._call_llm(messages=messages,
                                           functions=[func.function for func in self.function_map.values()],
                                           extra_generate_cfg=extra_generate_cfg)
            # 2. Stream the LLM output as it arrives
            output: List[Message] = []
            for output in output_stream:
                if output:
                    yield response + output
            # 3. Check whether the output requests a tool call; if so, execute the tool
            # 4. Append the tool response to messages and call the LLM again, until no tool is needed
            if output:
                response.extend(output)
                messages.extend(output)
                used_any_tool = False
                for out in output:
                    use_tool, tool_name, tool_args, _ = self._detect_tool(out)
                    if use_tool:
                        tool_result = self._call_tool(tool_name, tool_args, messages=messages, **kwargs)
                        fn_msg = Message(
                            role=FUNCTION,
                            name=tool_name,
                            content=tool_result,
                        )
                        messages.append(fn_msg)
                        response.append(fn_msg)
                        yield response
                        used_any_tool = True
                if not used_any_tool:
                    break
        yield response
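The control flow above reduces to a simple pattern: each LLM turn either requests a tool (whose result is fed back) or ends the loop. The following is a minimal sketch of that pattern with stubbed-in LLM turns; `run_loop`, `llm_outputs`, and `call_tool` are hypothetical names for illustration, not qwen_agent APIs:

```python
from typing import Callable, List, Optional, Tuple

MAX_LLM_CALL_PER_RUN = 8  # same cap as in the source

def run_loop(llm_outputs: List[Tuple[str, Optional[str]]],
             call_tool: Callable[[str], str]) -> List[str]:
    """Sketch of _run's control flow (hypothetical helper).

    llm_outputs[i] is (assistant_text, tool_name_or_None) for turn i:
    a tool name means the turn requested that tool, None means no tool.
    """
    messages: List[str] = []   # stands in for the growing message list
    response: List[str] = []   # stands in for the accumulated response
    calls_left = MAX_LLM_CALL_PER_RUN
    turn = 0
    while calls_left > 0:
        calls_left -= 1
        text, tool = llm_outputs[turn]  # one "LLM call"
        turn += 1
        messages.append(text)
        response.append(text)
        if tool is None:
            break                       # no tool requested -> loop ends
        result = call_tool(tool)        # execute tool, feed result back
        messages.append(result)
        response.append(result)
    return response

# Two tool-using turns, then a plain answer:
outputs = [('call weather', 'weather'), ('call news', 'news'), ('done', None)]
trace = run_loop(outputs, lambda name: f'{name}-result')
print(trace)  # ['call weather', 'weather-result', 'call news', 'news-result', 'done']
```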

3. Internal Tool-Call Method

  • (1) Parameter overview
Parameter  Description
tool_name  Name of the tool to call
tool_args  Arguments passed to the tool
  • (2) Source code analysis: the method first checks that the tool exists, then dispatches on the tool's file_access flag.
def _call_tool(self, tool_name: str, tool_args: Union[str, dict] = '{}', **kwargs) -> str:
        # 1. Check whether the requested tool exists
        if tool_name not in self.function_map:
            return f'Tool {tool_name} does not exist.'
        # Temporary plan: Check if it is necessary to transfer files to the tool
        # Todo: This should be changed to parameter passing, and the file URL should be determined by the model
        # 2. If the tool needs file access, extract the files from messages via
        #    extract_files_from_messages and pass them to the tool
        if self.function_map[tool_name].file_access:
            assert 'messages' in kwargs
            files = extract_files_from_messages(kwargs['messages'], include_images=True) + self.mem.system_files
            return super()._call_tool(tool_name, tool_args, files=files, **kwargs)
        else:
            # 3. The tool does not need file access; call it directly
            return super()._call_tool(tool_name, tool_args, **kwargs)
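The file_access dispatch can be sketched without the qwen_agent classes. Below, `FakeTool`, `extract_files`, and `call_tool` are hypothetical stand-ins for BaseTool, extract_files_from_messages, and _call_tool, assuming messages carry file URLs under a 'file' key:

```python
class FakeTool:
    """Stand-in for BaseTool, carrying only the file_access flag."""

    def __init__(self, file_access: bool):
        self.file_access = file_access

    def call(self, args, files=None):
        return f'files={files}' if self.file_access else 'no-files'

def extract_files(messages):
    # Stand-in for extract_files_from_messages: collect file URLs.
    return [m['file'] for m in messages if 'file' in m]

def call_tool(function_map, system_files, name, args, messages):
    """Sketch of _call_tool's dispatch (hypothetical helper)."""
    if name not in function_map:
        return f'Tool {name} does not exist.'
    tool = function_map[name]
    if tool.file_access:
        # File-aware tools get files from the dialogue plus Memory's system files.
        files = extract_files(messages) + system_files
        return tool.call(args, files=files)
    return tool.call(args)

fmap = {'doc_parser': FakeTool(True), 'calc': FakeTool(False)}
msgs = [{'role': 'user', 'file': 'a.pdf'}]
print(call_tool(fmap, ['kb.txt'], 'doc_parser', '{}', msgs))
# files=['a.pdf', 'kb.txt']
print(call_tool(fmap, [], 'calc', '{}', msgs))  # no-files
print(call_tool(fmap, [], 'web', '{}', msgs))   # Tool web does not exist.
```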
