LangChain is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle.
Summary: LangChain is a framework for building LLM-powered applications. It provides standardized, rich module abstractions that define input/output conventions for LLMs, and its core concept, chains, lets you flexibly link those modules into a complete development workflow. (Each module abstraction distills standardized flows and solutions contributed by many developers out of deep experience with large models; composing these modules flexibly is what LangChain is.)
Imagine building an AI application: what does a developer typically need?
The points above lead to the core modules that LangChain abstracts:
LangChain uses a modular approach to provide high-level abstractions of LLM capabilities across different scenarios. Its most important core modules are:
LangChain's key characteristics:
Some common pain points when working with large models:
pip install langchain
The origin of ReAct, the core idea behind Agents: before ReAct, reasoning (Reason) and acting (Act) were treated as separate steps. As shown in the figure above:
Pseudocode for the execution order:
next_action = agent.get_action(...)
while next_action != AgentFinish:
    observation = run(next_action)
    next_action = agent.get_action(..., next_action, observation)
return next_action
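The loop above can be sketched as runnable Python with a stub agent and tool. `StubAgent`, `run_tool`, and the scripted action list are illustrative stand-ins, not LangChain APIs; a real agent would call an LLM at each `get_action` step.

```python
# Minimal sketch of the agent loop; the "agent" scripts its decisions
# instead of reasoning with an LLM, purely for illustration.
AGENT_FINISH = "AgentFinish"

class StubAgent:
    """Returns a fixed sequence of actions, then finishes."""
    def __init__(self, plan):
        self.plan = list(plan)

    def get_action(self, observation=None):
        # A real agent would reason over the observation with an LLM here.
        return self.plan.pop(0) if self.plan else AGENT_FINISH

def run_tool(action):
    # A real implementation would dispatch to search, math, etc.
    return f"observation for {action}"

def agent_loop(agent):
    trace = []
    next_action = agent.get_action()
    while next_action != AGENT_FINISH:
        observation = run_tool(next_action)
        trace.append((next_action, observation))
        next_action = agent.get_action(observation)
    return trace

trace = agent_loop(StubAgent(["search", "calculate"]))
print(trace)
```

Each iteration interleaves one reasoning step (choosing the action) with one acting step (running the tool), which is exactly what ReAct adds over doing all the reasoning up front.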
Process overview:
Problems it solves:
The core idea of an Agent is to use an LLM to choose a sequence of actions to execute. The relevant parts of the figure are described below.
1. Left side:
2. Right side:
Self-ask with search: the Agent answers complex questions by asking itself follow-up questions and answering them with a search tool. Here we use Tavily search. Tavily search engine API key page. (*No further introduction here; this was my first time using it too. It's not that I refused to use Google Search; I fiddled for ages and just couldn't get the verification code.* 🤷♂️)
Install Tavily:
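The self-ask pattern can be sketched in plain Python: the model repeatedly emits a follow-up question, each one is answered by a search tool, and the intermediate answers feed the final answer. The `scripted_model` function and the `FAKE_SEARCH` lookup table below are illustrative stand-ins for the LLM and for Tavily.

```python
# Sketch of the self-ask-with-search loop; the "model" is scripted
# and the search tool is a lookup table, purely for illustration.
FAKE_SEARCH = {
    "Which edition of the Games was held in Chengdu?": "the 31st",
}

def scripted_model(question, intermediate):
    """Stand-in for the LLM: decide on the next follow-up, or finish."""
    if not intermediate:
        return ("follow_up", "Which edition of the Games was held in Chengdu?")
    return ("final", f"It was {intermediate[-1]}.")

def self_ask(question):
    intermediate = []
    while True:
        kind, text = scripted_model(question, intermediate)
        if kind == "final":
            return text
        # Answer the follow-up with the search tool.
        intermediate.append(FAKE_SEARCH[text])

answer = self_ask("Which edition of the Universiade was held in Chengdu?")
print(answer)
```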
pip install -U langchain-community tavily-python
TavilySearchResults parameters
Demo:
import os
os.environ["TAVILY_API_KEY"] = ""

from langchain_community.tools import TavilySearchResults

tool = TavilySearchResults(
    max_results=5,
    include_answer=True,
    include_raw_content=True,
    include_images=True,
    # search_depth="advanced",
    # include_domains=[],
    # exclude_domains=[],
)
tool.invoke({'query': 'Who is the most beautiful woman in the world?'})
Output:
Building the Agent:
import os

from langchain_community.chat_models import ChatZhipuAI
from langchain_community.tools import TavilySearchResults
from langchain.agents import initialize_agent, AgentType

os.environ["TAVILY_API_KEY"] = ""
# The self-ask agent expects exactly one tool named 'Intermediate Answer'
tools = [TavilySearchResults(name='Intermediate Answer', max_results=5)]

os.environ["ZHIPUAI_API_KEY"] = ""
llm = ChatZhipuAI(
    model="glm-4",
    temperature=0,
)

# Instantiate the SELF_ASK_WITH_SEARCH agent
self_ask_with_search = initialize_agent(
    tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True
)

# Run the agent on a question (it answers correctly)
self_ask_with_search.invoke(
    "Which edition of the Universiade was hosted in Chengdu?"
)
Output:
OpenAIFunctionsAgent is an agent designed to work with OpenAI's function-calling capability. It lets you define custom functions that the language model can call in order to perform specific tasks or operations.
Some key features of OpenAIFunctionsAgent:
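The function-calling mechanism can be sketched without any framework: the model returns the name and JSON arguments of a function to call, and the application dispatches it from a registry. The `model_response` dict below is a hand-written stand-in for an actual model reply; in LangChain the registry would hold `@tool`-decorated functions whose schemas are sent to the model.

```python
import json

# Registry of functions the "model" may call.
def get_word_length(word: str) -> int:
    """Return the length of a word."""
    return len(word)

TOOLS = {"get_word_length": get_word_length}

# Hand-written stand-in for a function-call reply from the model.
model_response = {
    "name": "get_word_length",
    "arguments": json.dumps({"word": "educaasdfasdf"}),
}

def dispatch(response):
    """Look up the named function and call it with the model's arguments."""
    func = TOOLS[response["name"]]
    kwargs = json.loads(response["arguments"])
    return func(**kwargs)

result = dispatch(model_response)
print(result)  # 13
```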
import os

from langchain_community.chat_models import ChatZhipuAI
from langchain.agents import load_tools, tool
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor
from langchain.schema import SystemMessage

os.environ["ZHIPUAI_API_KEY"] = ""
llm = ChatZhipuAI(
    model="glm-4",
    temperature=0,
)

# Custom function
# @tool
# def get_word_length(word: str) -> int:
#     """Returns the length of a word."""
#     return len(word)
# tools = [get_word_length]

# Use a built-in tool instead
tools = load_tools(["llm-math"], llm=llm)

system_message = SystemMessage(content="You are a very powerful AI assistant")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)

# Instantiate the executor for the OpenAIFunctionsAgent
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
    {"input": "How many letters are in the word 'educaasdfasdf'? "
              "What is that count cubed? Take the square root of that; "
              "what is the final result?"}
)
Output:
Get the list of all built-in tool names:
1. Get the prompt template:
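The arithmetic the agent is asked to carry out can be checked directly:

```python
import math

n = len("educaasdfasdf")   # 13 letters
cubed = n ** 3             # 13^3 = 2197
root = math.sqrt(cubed)    # sqrt(2197) is roughly 46.87
print(n, cubed, round(root, 2))
```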
from langchain import hub
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")
prompt
Output: the ReAct prompt template, shown below.
2. Build the ReAct agent
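For reference, the `hwchase17/react` template is, at the time of writing, roughly the following (the text you pull may differ slightly by version):

```text
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question

Begin!

Question: {input}
Thought:{agent_scratchpad}
```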
import os

from langchain_community.chat_models import ChatZhipuAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.agents import AgentExecutor, create_react_agent

os.environ["ZHIPUAI_API_KEY"] = ""
llm = ChatZhipuAI(
    model="glm-4",
    temperature=0,
)

os.environ["TAVILY_API_KEY"] = ""
tools = [TavilySearchResults(max_results=1)]

# Build the ReAct agent
agent = create_react_agent(llm, tools, prompt)
# Fix for output-parsing errors: add handle_parsing_errors=True
agent_executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
agent_executor.invoke({"input": "What major events happened in 2024?"})
Output:
3. Use with chat history:
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react-chat")
# Construct the ReAct agent
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke(
    {
        "input": "what's my name? Only use a tool if needed, otherwise respond with Final Answer",
        # Notice that chat_history is a string, since this prompt is aimed at LLMs, not chat models
        "chat_history": "Human: Hi! My name is Bob\nAI: Hello Bob! Nice to meet you",
    }
)
Output:
Prompt:
# Error: cannot import name 'tarfile' from 'backports' (C:\ProgramData\anaconda3\Lib\site-packages\backports\__init__.py)
# Fix: delete the backports folder under C:\ProgramData\anaconda3\Lib\site-packages
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.embeddings import ModelScopeEmbeddings
from langchain.embeddings import SentenceTransformerEmbeddings

# embeddings = HuggingFaceEmbeddings()
# model_id = "damo/nlp_corom_sentence-embedding_english-base"
# embeddings = ModelScopeEmbeddings(model_id=model_id)
embeddings = ModelScopeEmbeddings()
# embeddings = SentenceTransformerEmbeddings()

loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
# loader = WebBaseLoader("https://baijiahao.baidu.com/s?id=1723160948933478116")
docs = loader.load()
documents = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
retriever = vector.as_retriever()
from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever=retriever,
    name="langsmith_search",
    description="Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
from langchain.tools.file_management import (
    ReadFileTool,
    CopyFileTool,
    DeleteFileTool,
    MoveFileTool,
    WriteFileTool,
    ListDirectoryTool,
)
from langchain.agents.agent_toolkits import FileManagementToolkit
from tempfile import TemporaryDirectory

# We'll make a temporary directory to avoid clutter
working_directory = TemporaryDirectory()
toolkit = FileManagementToolkit(
    root_dir=str(working_directory.name)
)  # If you don't provide a root_dir, operations will default to the current working directory
toolkit.get_tools()

tools = FileManagementToolkit(
    root_dir=str(working_directory.name),
    selected_tools=["read_file", "write_file", "list_directory"],
).get_tools()
read_tool, write_tool, list_tool = tools
write_tool.run({"file_path": "example.txt", "text": "2024-11-25"})
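The same write/read/list pattern, sandboxed to one root directory, can be reproduced with only the standard library. The `safe_path` helper below is illustrative, not a LangChain API; it mimics the toolkit's refusal to touch files outside `root_dir`.

```python
from pathlib import Path
from tempfile import TemporaryDirectory

def safe_path(root: Path, file_path: str) -> Path:
    """Resolve file_path inside root, refusing paths that escape it."""
    candidate = (root / file_path).resolve()
    if root.resolve() not in candidate.parents and candidate != root.resolve():
        raise ValueError(f"{file_path} escapes the working directory")
    return candidate

with TemporaryDirectory() as tmp:
    root = Path(tmp)
    # write_file
    safe_path(root, "example.txt").write_text("2024-11-25")
    # read_file
    content = safe_path(root, "example.txt").read_text()
    # list_directory
    listing = [p.name for p in root.iterdir()]
    print(content, listing)
```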
from langchain.agents import Tool
from langchain_experimental.tools.python.tool import PythonAstREPLTool, PythonREPLTool

# Create a PythonREPLTool instance
python_repl_tool = PythonREPLTool()
# The query to run (a simple print statement as an example)
query_value = "print('1+1')"
# # Call the _run method with the query
# result = python_repl_tool._run(
#     query=query_value,
# )
# result

# You can wrap it as a Tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description=("A Python shell. Use this to execute python commands. "
                 "Input should be a valid python command. If you want to see "
                 "the output of a value, you should print it out with `print(...)`."),
    func=python_repl_tool.run,
)
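Under the hood, a Python REPL tool amounts to executing the command and capturing its stdout. A minimal, and equally unsandboxed, sketch:

```python
import io
from contextlib import redirect_stdout

def run_python(command: str) -> str:
    """Execute a Python command and return whatever it printed.
    Like PythonREPLTool, this runs arbitrary code - never expose it
    to untrusted input.
    """
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        exec(command, {})  # fresh globals for each call
    return buffer.getvalue()

print(run_python("print('1+1')"))   # the literal string 1+1
print(run_python("print(1 + 1)"))   # 2
```

Note the difference between the two calls: `print('1+1')` outputs the quoted string, while `print(1 + 1)` evaluates the expression, which is exactly why the blog's `query_value` example prints `1+1` rather than `2`.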
Originality statement: this article is published on the Tencent Cloud Developer Community with the author's authorization; reproduction without permission is prohibited.
For infringement concerns, contact cloudcommunity@tencent.com for removal.