{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"%pip install --quiet -U langchain langchain_community tiktoken langchain-nomic \"nomic[local]\" langchain-ollama scikit-learn langgraph tavily-python bs4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Local RAG agent with LLaMA3\n",
"\n",
"> 원문: https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag_local/\n",
"\n",
"우리는 RAG 논문에서 아이디어를 모아 RAG 에이전트를 만들 거예요:\n",
"\n",
"- **Routing**: Adaptive RAG ([논문](https://arxiv.org/abs/2403.14403)). 질문을 다양한 검색 방식으로 라우팅해요\n",
"- **Fallback**: Corrective RAG ([논문](https://arxiv.org/pdf/2401.15884.pdf)). 문서가 쿼리와 관련이 없으면 웹 검색으로 대체해요\n",
"- **Self-correction**: Self-RAG ([논문](https://arxiv.org/abs/2310.11511)). 헛소리로 정답을 수정하거나 질문을 해결하지 않아요\n",
"\n",
"<img src=\"https://i.imgur.com/l47fEVV.png\" width=800>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local models[¶](https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag_local/#local-models)\n",
"\n",
"### Embedding\n",
"\n",
"[GPT4All Embeddings](https://blog.nomic.ai/posts/nomic-embed-text-v1):\n",
"\n",
"`pip install langchain-nomic`"
]
},
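{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check before building the vectorstore below, you can embed a test string locally. This is a minimal sketch (not part of the original tutorial) that assumes `langchain-nomic` and its local GPT4All backend are installed; it uses the same model name as the vectorstore cell further down.\n",
"\n",
"```python\n",
"from langchain_nomic.embeddings import NomicEmbeddings\n",
"\n",
"# Embed a test query locally and inspect the vector dimensionality\n",
"embedder = NomicEmbeddings(model=\"nomic-embed-text-v1.5\", inference_mode=\"local\")\n",
"vector = embedder.embed_query(\"What is an autonomous agent?\")\n",
"print(len(vector))\n",
"```"
]
},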
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### LLM\n",
"\n",
"[Ollama](https://x.com/ollama/status/1839007158865899651)와 [llama3.2](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)를 사용:\n",
"\n",
"`ollama pull llama3.2:3b-instruct-fp16`"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"### LLM\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"# 올라마 모델명\n",
"local_llm = 'llama3.2:3b-instruct-fp16'\n",
"\n",
"# 일반적인 응답 모델\n",
"llm = ChatOllama(model=local_llm, temperature=0)\n",
"\n",
"# json으로 출력 모델\n",
"llm_json_mode = ChatOllama(model=local_llm, temperature=0, format='json')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Tavuky API Key 설정\n",
"\n",
"웹검색을 위해 우리는 LLM과 RAG에 최적화된 검색 엔진인 [Tavily](https://tavily.com/)를 사용합니다."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"\n",
"def _set_env(var: str):\n",
" if not os.environ.get(var):\n",
" os.environ[var] = getpass.getpass(f\"Enter {var}: \")\n",
"\n",
"_set_env(\"TAVILY_API_KEY\")\n",
"os.environ['TOKENIZERS_PARALLELISM'] = 'true'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 트레이싱(Tracing)\n",
"\n",
"선택적으로 트레이싱에는 [LangSmith](https://www.langchain.com/langsmith)를 사용합니다."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"_set_env(\"LANGCHAIN_API_KEY\")\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = 'true'\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = 'local-llama32-rag'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 웹검색 도구(Web Search Tool)"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"### Search\n",
"from langchain_community.tools.tavily_search import TavilySearchResults\n",
"\n",
"web_search_tool = TavilySearchResults(k=3);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 벡터스토어(Vectorstore)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Downloading: 100%|██████████| 274M/274M [00:05<00:00, 51.3MiB/s] \n",
"Verifying: 100%|██████████| 274M/274M [00:00<00:00, 625MiB/s] \n"
]
}
],
"source": [
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import SKLearnVectorStore\n",
"from langchain_nomic.embeddings import NomicEmbeddings\n",
"\n",
"urls = [\n",
" \"https://lilianweng.github.io/posts/2023-06-23-agent/\",\n",
" \"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/\",\n",
" \"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/\",\n",
"]\n",
"\n",
"# 문서 로더\n",
"docs = WebBaseLoader(urls).load()\n",
"\n",
"# 문서 분할\n",
"text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n",
" chunk_size = 1000,\n",
" chunk_overlap=200,\n",
")\n",
"docs_splits = text_splitter.split_documents(docs)\n",
"\n",
"# 벡터스토어 생성\n",
"vectorstore = SKLearnVectorStore.from_documents(\n",
" embedding = NomicEmbeddings(model=\"nomic-embed-text-v1.5\", inference_mode=\"local\"), \n",
" documents = docs_splits,\n",
")\n",
"\n",
"# 리트리버 생성\n",
"retriever = vectorstore.as_retriever(k=3)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'id': 'fa10df4b-5401-4ec0-a1c4-eb57454eeee0', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\\n\\nPrompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\\n\\n\\nPlanning & Reacting: translate the reflections and the environment information into actions\\n\\nPlanning is essentially in order to optimize believability at the moment vs in time.\\nPrompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1)\\nRelationships between agents and observations of one agent by another are all taken into consideration for planning and reacting.\\nEnvironment information is present in a tree structure.\\n\\n\\n\\n\\nFig. 13. The generative agent architecture. (Image source: Park et al. 2023)\\nThis fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\nProof-of-Concept Examples#\\nAutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\nYou are {{ai-name}}, {{user-provided AI bot description}}.\\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\\n\\nGOALS:\\n\\n1. {{user-provided goal 1}}\\n2. {{user-provided goal 2}}\\n3. ...\\n4. ...\\n5. ...\\n\\nConstraints:\\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\\n3. No user assistance\\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\\n5. Use subprocesses for commands that will not terminate within a few minutes'),\n",
" Document(metadata={'id': '7222226d-772f-4696-b694-25fed7f3df27', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\\n\\n\\nComponent Three: Tool Use\\n\\nCase Studies\\n\\nScientific Discovery Agent\\n\\nGenerative Agents Simulation\\n\\nProof-of-Concept Examples\\n\\n\\nChallenges\\n\\nCitation\\n\\nReferences\\n\\n\\n\\n\\n\\nBuilding agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview#\\nIn a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\n\\nPlanning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\"),\n",
" Document(metadata={'id': '58a746d5-0cd7-4b13-8b6e-278534c50163', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Planning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\\n\\n\\n\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. 
Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#\\nSelf-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\nThought: ...\\nAction: ...\\nObservation: ...\\n... (Repeated many times)'),\n",
" Document(metadata={'id': '080b35eb-5a3c-40a0-bc5f-a444086474f5', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:\\n\\nExplicit / declarative memory: This is memory of facts and events, and refers to those memories that can be consciously recalled, including episodic memory (events and experiences) and semantic memory (facts and concepts).\\nImplicit / procedural memory: This type of memory is unconscious and involves skills and routines that are performed automatically, like riding a bike or typing on a keyboard.\\n\\n\\n\\n\\nFig. 8. Categorization of human memory.\\nWe can roughly consider the following mappings:\\n\\nSensory memory as learning embedding representations for raw inputs, including text, image or other modalities;\\nShort-term memory as in-context learning. It is short and finite, as it is restricted by the finite context window length of Transformer.\\nLong-term memory as the external vector store that the agent can attend to at query time, accessible via fast retrieval.\\n\\nMaximum Inner Product Search (MIPS)#\\nThe external memory can alleviate the restriction of finite attention span. A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN)\\u200b algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.\\nA couple common choices of ANN algorithms for fast MIPS:\\n\\nLSH (Locality-Sensitive Hashing): It introduces a hashing function such that similar input items are mapped to the same buckets with high probability, where the number of buckets is much smaller than the number of inputs.\\nANNOY (Approximate Nearest Neighbors Oh Yeah): The core data structure are random projection trees, a set of binary trees where each non-leaf node represents a hyperplane splitting the input space into half and each leaf stores one data point. Trees are built independently and at random, so to some extent, it mimics a hashing function. ANNOY search happens in all the trees to iteratively search through the half that is closest to the query and then aggregates the results. The idea is quite related to KD tree but a lot more scalable.\\nHNSW (Hierarchical Navigable Small World): It is inspired by the idea of small world networks where most nodes can be reached by any other nodes within a small number of steps; e.g. “six degrees of separation” feature of social networks. HNSW builds hierarchical layers of these small-world graphs, where the bottom layers contain the actual data points. 
The layers in the middle create shortcuts to speed up search. When performing a search, HNSW starts from a random node in the top layer and navigates towards the target. When it can’t get any closer, it moves down to the next layer, until it reaches the bottom layer. Each move in the upper layers can potentially cover a large distance in the data space, and each move in the lower layers refines the search quality.\\nFAISS (Facebook AI Similarity Search): It operates on the assumption that in high dimensional space, distances between nodes follow a Gaussian distribution and thus there should exist clustering of data points. FAISS applies vector quantization by partitioning the vector space into clusters and then refining the quantization within clusters. Search first looks for cluster candidates with coarse quantization and then further looks into each cluster with finer quantization.\\nScaNN (Scalable Nearest Neighbors): The main innovation in ScaNN is anisotropic vector quantization. It quantizes a data point $x_i$ to $\\\\tilde{x}_i$ such that the inner product $\\\\langle q, x_i \\\\rangle$ is as similar to the original distance of $\\\\angle q, \\\\tilde{x}_i$ as possible, instead of picking the closet quantization centroid points.\\n\\n\\nFig. 9. Comparison of MIPS algorithms, measured in recall@10. (Image source: Google Blog, 2020)\\nCheck more MIPS algorithms and performance comparison in ann-benchmarks.com.\\nComponent Three: Tool Use#\\nTool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.')]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"agent memory\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 컴포넌트\n",
"\n",
"웹검색 또는 벡터스토어 검색으로 라우터하는 프롬프트를 작성합니다.\n",
"\n",
"``` \n",
"당신은 사용자 질문을 벡터 스토어 또는 웹 검색으로 라우팅하는 전문가입니다.\n",
"\n",
"벡터스토어에는 에이전트, 프롬프트 엔지니어링 및 적대적 공격과 관련된 문서가 포함되어 있습니다. \n",
"\n",
"이러한 주제에 대한 질문은 벡터스토어를 사용하세요. 그 외의 모든 질문, 특히 최신 이슈에 대해서는 웹 검색을 사용하세요.\n",
"\n",
"질문에 따라 single key, datasource 즉 'websearch' 또는 'vectorstore'가 포함된 JSON을 반환합니다.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"### Router\n",
"import json\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"# Prompt \n",
"router_instructions = \"\"\"You are an expert at routing a user question to a vectorstore or web search.\n",
"\n",
"The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.\n",
" \n",
"Use the vectorstore for questions on these topics. For all else, and especially for current events, use web-search.\n",
"\n",
"Return JSON with single key, datasource, that is 'websearch' or 'vectorstore' depending on the question.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'datasource': 'vectorstore'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Test router\n",
"question = HumanMessage(content=\"What are the types of agent memory?\")\n",
"test_vector_store = llm_json_mode.invoke([SystemMessage(router_instructions), question])\n",
"json.loads(test_vector_store.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"평가자를 위한 프롬프트를 작성합니다.\n",
"\n",
"**문서 평가를 위한 프롬프트**\n",
"\n",
"```\n",
"평가자는 검색된 문서와 사용자 질문의 관련성을 평가합니다.\n",
"\n",
"문서에 질문과 관련된 키워드나 의미론적 의미가 포함되어 있으면 관련성이 있는 것으로 평가합니다.\n",
"```\n",
"\n",
"\n",
"**최종 평가자 프롬프트**\n",
"\n",
"```\n",
"검색된 문서는 다음과 같습니다: \\n\\n {document} \\n\\n 다음은 사용자 질문입니다: \\n\\n {question}. \n",
"\n",
"이렇게 하면 문서에 질문과 관련된 정보가 최소한 일부라도 포함되어 있는지 여부를 신중하고 객관적으로 평가합니다.\n",
"\n",
"문서에 질문과 관련된 정보가 적어도 일부 포함되어 있는지 여부를 나타내는 'yes' 또는 'no' 결과인 single key, binary_score가 포함된 JSON을 반환합니다.\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"### Retrieval Grader \n",
"\n",
"# Doc grader instructions \n",
"doc_grader_instructions = \"\"\"You are a grader assessing relevance of a retrieved document to a user question.\n",
"\n",
"If the document contains keyword(s) or semantic meaning related to the question, grade it as relevant.\"\"\"\n",
"\n",
"# Grader prompt\n",
"doc_grader_prompt = \"\"\"Here is the retrieved document: \\n\\n {document} \\n\\n Here is the user question: \\n\\n {question}. \n",
"\n",
"This carefully and objectively assess whether the document contains at least some information that is relevant to the question.\n",
"\n",
"Return JSON with single key, binary_score, that is 'yes' or 'no' score to indicate whether the document contains at least some information that is relevant to the question.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'binary_score': 'yes'}"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Test\n",
"question = \"What is Chain of thought prompting?\" # 생각의 연결 고리란 무엇인가요?\n",
"\n",
"docs = retriever.invoke(question) # 벡터스토어에서 관련 문서 검색\n",
"doc_txt = docs[1].page_content\n",
"doc_grader_prompt_formatted = doc_grader_prompt.format(document=doc_txt, question=question)\n",
"\n",
"result = llm_json_mode.invoke([\n",
" SystemMessage(content=doc_grader_instructions), \n",
" HumanMessage(content=doc_grader_prompt_formatted)\n",
"])\n",
"json.loads(result.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Chain of Thought (CoT) prompting is a technique used in natural language processing to generate human-like responses by iteratively asking questions and refining the search space through external search queries, such as Wikipedia APIs. CoT prompting involves decomposing problems into multiple thought steps, generating multiple thoughts per step, and evaluating each state using a classifier or majority vote. The goal is to find an optimal instruction that leads to the desired output, which can be achieved by optimizing prompt parameters directly on the embedding space via gradient descent or searching over a pool of model-generated instruction candidates.\n"
]
}
],
"source": [
"### Generate\n",
"\n",
"# Prompt\n",
"rag_prompt = \"\"\"You are an assistant for question-answering tasks. \n",
"\n",
"Here is the context to use to answer the question:\n",
"\n",
"{context} \n",
"\n",
"Think carefully about the above context. \n",
"\n",
"Now, review the user question:\n",
"\n",
"{question}\n",
"\n",
"Provide an answer to this questions using only the above context. \n",
"\n",
"Use three sentences maximum and keep the answer concise.\n",
"\n",
"Answer:\"\"\"\n",
"\n",
"# Post-processing\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"# Test\n",
"docs = retriever.invoke(question)\n",
"docs_txt = format_docs(docs)\n",
"rag_prompt_formatted = rag_prompt.format(context=docs_txt, question=question)\n",
"generation = llm.invoke([HumanMessage(content=rag_prompt_formatted)])\n",
"print(generation.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Hallucination Grader "
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'binary_score': 'yes',\n",
" 'explanation': 'The student answer provides a clear and accurate description of Chain of Thought (CoT) prompting, its components, and its goals. It also mentions various techniques used in CoT prompting, such as external search queries, prompt tuning, and automatic prompt engineering. The answer demonstrates an understanding of the concept and its applications in natural language processing.'}"
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"### Hallucination Grader \n",
"\n",
"# Hallucination grader instructions \n",
"hallucination_grader_instructions = \"\"\"\n",
"\n",
"You are a teacher grading a quiz. \n",
"\n",
"You will be given FACTS and a STUDENT ANSWER. \n",
"\n",
"Here is the grade criteria to follow:\n",
"\n",
"(1) Ensure the STUDENT ANSWER is grounded in the FACTS. \n",
"\n",
"(2) Ensure the STUDENT ANSWER does not contain \"hallucinated\" information outside the scope of the FACTS.\n",
"\n",
"Score:\n",
"\n",
"A score of yes means that the student's answer meets all of the criteria. This is the highest (best) score. \n",
"\n",
"A score of no means that the student's answer does not meet all of the criteria. This is the lowest possible score you can give.\n",
"\n",
"Explain your reasoning in a step-by-step manner to ensure your reasoning and conclusion are correct. \n",
"\n",
"Avoid simply stating the correct answer at the outset.\"\"\"\n",
"\n",
"# Grader prompt\n",
"hallucination_grader_prompt = \"\"\"FACTS: \\n\\n {documents} \\n\\n STUDENT ANSWER: {generation}. \n",
"\n",
"Return JSON with two two keys, binary_score is 'yes' or 'no' score to indicate whether the STUDENT ANSWER is grounded in the FACTS. And a key, explanation, that contains an explanation of the score.\"\"\"\n",
"\n",
"# Test using documents and generation from above \n",
"hallucination_grader_prompt_formatted = hallucination_grader_prompt.format(documents=docs_txt, generation=generation.content)\n",
"result = llm_json_mode.invoke([SystemMessage(content=hallucination_grader_instructions)] + [HumanMessage(content=hallucination_grader_prompt_formatted)])\n",
"json.loads(result.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Answer Grader "
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'binary_score': 'yes',\n",
" 'explanation': \"The student's answer helps to answer the question by providing specific details about the vision models released as part of Llama 3.2. The answer mentions two vision models (Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct) and their availability on Azure AI Model Catalog via managed compute. Additionally, the student provides context about Meta's first foray into multimodal AI and compares these models to other visual reasoning models like Claude 3 Haiku and GPT-4o mini. This extra information is not explicitly asked for in the question, but it demonstrates a thorough understanding of the topic. The answer also correctly states that these models replace the older text-only Llama 3.1 models, which meets all the criteria specified in the question.\"}"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"### Answer Grader \n",
"\n",
"# Answer grader instructions \n",
"answer_grader_instructions = \"\"\"You are a teacher grading a quiz. \n",
"\n",
"You will be given a QUESTION and a STUDENT ANSWER. \n",
"\n",
"Here is the grade criteria to follow:\n",
"\n",
"(1) The STUDENT ANSWER helps to answer the QUESTION\n",
"\n",
"Score:\n",
"\n",
"A score of yes means that the student's answer meets all of the criteria. This is the highest (best) score. \n",
"\n",
"The student can receive a score of yes if the answer contains extra information that is not explicitly asked for in the question.\n",
"\n",
"A score of no means that the student's answer does not meet all of the criteria. This is the lowest possible score you can give.\n",
"\n",
"Explain your reasoning in a step-by-step manner to ensure your reasoning and conclusion are correct. \n",
"\n",
"Avoid simply stating the correct answer at the outset.\"\"\"\n",
"\n",
"# Grader prompt\n",
"answer_grader_prompt = \"\"\"QUESTION: \\n\\n {question} \\n\\n STUDENT ANSWER: {generation}. \n",
"\n",
"Return JSON with two two keys, binary_score is 'yes' or 'no' score to indicate whether the STUDENT ANSWER meets the criteria. And a key, explanation, that contains an explanation of the score.\"\"\"\n",
"\n",
"# Test \n",
"question = \"What are the vision models released today as part of Llama 3.2?\"\n",
"answer = \"The Llama 3.2 models released today include two vision models: Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct, which are available on Azure AI Model Catalog via managed compute. These models are part of Meta's first foray into multimodal AI and rival closed models like Anthropic's Claude 3 Haiku and OpenAI's GPT-4o mini in visual reasoning. They replace the older text-only Llama 3.1 models.\"\n",
"\n",
"# Test using question and generation from above \n",
"answer_grader_prompt_formatted = answer_grader_prompt.format(question=question, generation=answer)\n",
"result = llm_json_mode.invoke([SystemMessage(content=answer_grader_instructions)] + [HumanMessage(content=answer_grader_prompt_formatted)])\n",
"json.loads(result.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Graph\n",
"\n",
"We build the above workflow as a graph using [LangGraph](https://langchain-ai.github.io/langgraph/).\n",
"\n",
"### Graph state\n",
"\n",
"The graph `state` schema contains keys that we want to:\n",
"\n",
"- Pass to each node in our graph\n",
"- Optionally, modify in each node of our graph\n",
"\n",
"See conceptual docs [here](https://langchain-ai.github.io/langgraph/concepts/low_level/#state)."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"import operator\n",
"from typing_extensions import TypedDict\n",
"from typing import List, Annotated\n",
"\n",
"class GraphState(TypedDict):\n",
" \"\"\"\n",
" Graph state is a dictionary that contains information we want to propagate to, and modify in, each graph node.\n",
" \"\"\"\n",
" question : str # User question\n",
" generation : str # LLM generation\n",
" web_search : str # Binary decision to run web search\n",
" max_retries : int # Max number of retries for answer generation \n",
" answers : int # Number of answers generated\n",
" loop_step: Annotated[int, operator.add] \n",
" documents : List[str] # List of retrieved documents"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each node in our graph is simply a function that:\n",
"\n",
"(1) Take `state` as an input\n",
"\n",
"(2) Modifies `state`\n",
"\n",
"(3) Write the modified `state` to the state schema (dict)\n",
"\n",
"See conceptual docs [here](https://langchain-ai.github.io/langgraph/concepts/low_level/#nodes).\n",
"\n",
"Each edge routes between nodes in the graph.\n",
"\n",
"See conceptual docs [here](https://langchain-ai.github.io/langgraph/concepts/low_level/#edges)."
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langgraph.graph import END\n",
"\n",
"### Nodes\n",
"def retrieve(state):\n",
" \"\"\"\n",
" Retrieve documents from vectorstore\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" state (dict): New key added to state, documents, that contains retrieved documents\n",
" \"\"\"\n",
" print(\"---RETRIEVE---\")\n",
" question = state[\"question\"]\n",
"\n",
" # Write retrieved documents to documents key in state\n",
" documents = retriever.invoke(question)\n",
" return {\"documents\": documents}\n",
"\n",
"def generate(state):\n",
" \"\"\"\n",
" Generate answer using RAG on retrieved documents\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" state (dict): New key added to state, generation, that contains LLM generation\n",
" \"\"\"\n",
" print(\"---GENERATE---\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
" loop_step = state.get(\"loop_step\", 0)\n",
" \n",
" # RAG generation\n",
" docs_txt = format_docs(documents)\n",
" rag_prompt_formatted = rag_prompt.format(context=docs_txt, question=question)\n",
" generation = llm.invoke([HumanMessage(content=rag_prompt_formatted)])\n",
" return {\"generation\": generation, \"loop_step\": loop_step+1}\n",
"\n",
"def grade_documents(state):\n",
" \"\"\"\n",
" Determines whether the retrieved documents are relevant to the question\n",
" If any document is not relevant, we will set a flag to run web search\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" state (dict): Filtered out irrelevant documents and updated web_search state\n",
" \"\"\"\n",
"\n",
" print(\"---CHECK DOCUMENT RELEVANCE TO QUESTION---\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
" \n",
" # Score each doc\n",
" filtered_docs = []\n",
" web_search = \"No\" \n",
" for d in documents:\n",
" doc_grader_prompt_formatted = doc_grader_prompt.format(document=d.page_content, question=question)\n",
" result = llm_json_mode.invoke([SystemMessage(content=doc_grader_instructions)] + [HumanMessage(content=doc_grader_prompt_formatted)])\n",
" grade = json.loads(result.content)['binary_score']\n",
" # Document relevant\n",
" if grade.lower() == \"yes\":\n",
" print(\"---GRADE: DOCUMENT RELEVANT---\")\n",
" filtered_docs.append(d)\n",
" # Document not relevant\n",
" else:\n",
" print(\"---GRADE: DOCUMENT NOT RELEVANT---\")\n",
" # We do not include the document in filtered_docs\n",
" # We set a flag to indicate that we want to run web search\n",
" web_search = \"Yes\"\n",
" continue\n",
" return {\"documents\": filtered_docs, \"web_search\": web_search}\n",
" \n",
"def web_search(state):\n",
" \"\"\"\n",
" Web search based based on the question\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" state (dict): Appended web results to documents\n",
" \"\"\"\n",
"\n",
" print(\"---WEB SEARCH---\")\n",
" question = state[\"question\"]\n",
" documents = state.get(\"documents\", [])\n",
"\n",
" # Web search\n",
" docs = web_search_tool.invoke({\"query\": question})\n",
" web_results = \"\\n\".join([d[\"content\"] for d in docs])\n",
" web_results = Document(page_content=web_results)\n",
" documents.append(web_results)\n",
" return {\"documents\": documents}\n",
"\n",
"### Edges\n",
"\n",
"def route_question(state):\n",
" \"\"\"\n",
" Route question to web search or RAG \n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" str: Next node to call\n",
" \"\"\"\n",
"\n",
" print(\"---ROUTE QUESTION---\")\n",
" route_question = llm_json_mode.invoke([SystemMessage(content=router_instructions)] + [HumanMessage(content=state[\"question\"])])\n",
" source = json.loads(route_question.content)['datasource']\n",
" if source == 'websearch':\n",
" print(\"---ROUTE QUESTION TO WEB SEARCH---\")\n",
" return \"websearch\"\n",
" elif source == 'vectorstore':\n",
" print(\"---ROUTE QUESTION TO RAG---\")\n",
" return \"vectorstore\"\n",
"\n",
"def decide_to_generate(state):\n",
" \"\"\"\n",
" Determines whether to generate an answer, or add web search\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" str: Binary decision for next node to call\n",
" \"\"\"\n",
"\n",
" print(\"---ASSESS GRADED DOCUMENTS---\")\n",
" question = state[\"question\"]\n",
" web_search = state[\"web_search\"]\n",
" filtered_documents = state[\"documents\"]\n",
"\n",
" if web_search == \"Yes\":\n",
" # All documents have been filtered check_relevance\n",
" # We will re-generate a new query\n",
" print(\"---DECISION: NOT ALL DOCUMENTS ARE RELEVANT TO QUESTION, INCLUDE WEB SEARCH---\")\n",
" return \"websearch\"\n",
" else:\n",
" # We have relevant documents, so generate answer\n",
" print(\"---DECISION: GENERATE---\")\n",
" return \"generate\"\n",
"\n",
"def grade_generation_v_documents_and_question(state):\n",
" \"\"\"\n",
" Determines whether the generation is grounded in the document and answers question\n",
"\n",
" Args:\n",
" state (dict): The current graph state\n",
"\n",
" Returns:\n",
" str: Decision for next node to call\n",
" \"\"\"\n",
"\n",
" print(\"---CHECK HALLUCINATIONS---\")\n",
" question = state[\"question\"]\n",
" documents = state[\"documents\"]\n",
" generation = state[\"generation\"]\n",
" max_retries = state.get(\"max_retries\", 3) # Default to 3 if not provided\n",
"\n",
" hallucination_grader_prompt_formatted = hallucination_grader_prompt.format(documents=format_docs(documents), generation=generation.content)\n",
" result = llm_json_mode.invoke([SystemMessage(content=hallucination_grader_instructions)] + [HumanMessage(content=hallucination_grader_prompt_formatted)])\n",
" grade = json.loads(result.content)['binary_score']\n",
"\n",
" # Check hallucination\n",
" if grade == \"yes\":\n",
" print(\"---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\")\n",
" # Check question-answering\n",
" print(\"---GRADE GENERATION vs QUESTION---\")\n",
" # Test using question and generation from above \n",
" answer_grader_prompt_formatted = answer_grader_prompt.format(question=question, generation=generation.content)\n",
" result = llm_json_mode.invoke([SystemMessage(content=answer_grader_instructions)] + [HumanMessage(content=answer_grader_prompt_formatted)])\n",
" grade = json.loads(result.content)['binary_score']\n",
" if grade == \"yes\":\n",
" print(\"---DECISION: GENERATION ADDRESSES QUESTION---\")\n",
" return \"useful\"\n",
" elif state[\"loop_step\"] <= max_retries:\n",
" print(\"---DECISION: GENERATION DOES NOT ADDRESS QUESTION---\")\n",
" return \"not useful\"\n",
" else:\n",
" print(\"---DECISION: MAX RETRIES REACHED---\")\n",
" return \"max retries\" \n",
" elif state[\"loop_step\"] <= max_retries:\n",
" print(\"---DECISION: GENERATION IS NOT GROUNDED IN DOCUMENTS, RE-TRY---\")\n",
" return \"not supported\"\n",
" else:\n",
" print(\"---DECISION: MAX RETRIES REACHED---\")\n",
" return \"max retries\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" ## Control Flow"
]
},
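{
"cell_type": "markdown",
"metadata": {},
"source": [
"The next cell compiles the graph. As a reference, here is a sketch of how the nodes and edges defined above are typically wired together with LangGraph's `StateGraph`; treat it as an illustration of the control flow shown in the rendered graph, not necessarily the exact code of the following cell.\n",
"\n",
"```python\n",
"from langgraph.graph import StateGraph, END\n",
"\n",
"workflow = StateGraph(GraphState)\n",
"\n",
"# Nodes\n",
"workflow.add_node(\"websearch\", web_search)\n",
"workflow.add_node(\"retrieve\", retrieve)\n",
"workflow.add_node(\"grade_documents\", grade_documents)\n",
"workflow.add_node(\"generate\", generate)\n",
"\n",
"# Edges\n",
"workflow.set_conditional_entry_point(\n",
"    route_question,\n",
"    {\"websearch\": \"websearch\", \"vectorstore\": \"retrieve\"},\n",
")\n",
"workflow.add_edge(\"websearch\", \"generate\")\n",
"workflow.add_edge(\"retrieve\", \"grade_documents\")\n",
"workflow.add_conditional_edges(\n",
"    \"grade_documents\",\n",
"    decide_to_generate,\n",
"    {\"websearch\": \"websearch\", \"generate\": \"generate\"},\n",
")\n",
"workflow.add_conditional_edges(\n",
"    \"generate\",\n",
"    grade_generation_v_documents_and_question,\n",
"    {\"not supported\": \"generate\", \"useful\": END, \"not useful\": \"websearch\", \"max retries\": END},\n",
")\n",
"\n",
"graph = workflow.compile()\n",
"```"
]
},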
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"data": {
"image/jpeg": "/9j/4AAQSkZJRgABAQAAAQABAAD/4gHYSUNDX1BST0ZJTEUAAQEAAAHIAAAAAAQwAABtbnRyUkdCIFhZWiAH4AABAAEAAAAAAABhY3NwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAA9tYAAQAAAADTLQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAlkZXNjAAAA8AAAACRyWFlaAAABFAAAABRnWFlaAAABKAAAABRiWFlaAAABPAAAABR3dHB0AAABUAAAABRyVFJDAAABZAAAAChnVFJDAAABZAAAAChiVFJDAAABZAAAAChjcHJ0AAABjAAAADxtbHVjAAAAAAAAAAEAAAAMZW5VUwAAAAgAAAAcAHMAUgBHAEJYWVogAAAAAAAAb6IAADj1AAADkFhZWiAAAAAAAABimQAAt4UAABjaWFlaIAAAAAAAACSgAAAPhAAAts9YWVogAAAAAAAA9tYAAQAAAADTLXBhcmEAAAAAAAQAAAACZmYAAPKnAAANWQAAE9AAAApbAAAAAAAAAABtbHVjAAAAAAAAAAEAAAAMZW5VUwAAACAAAAAcAEcAbwBvAGcAbABlACAASQBuAGMALgAgADIAMAAxADb/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBT/wAARCAKlATsDASIAAhEBAxEB/8QAHQABAAIDAQEBAQAAAAAAAAAAAAUGBAcIAwIBCf/EAGAQAAEEAQIDAwQJCw8JBwMFAAEAAgMEBQYRBxIhEyIxFBUWQQgXMkJRVmGU0SNTZHGBkaGj0tPhJCczNDU2N1JUVXWClbGzQ2JzkpOytMHUCSVjcnSDohgmRURGR6Tw/8QAGwEBAQADAQEBAAAAAAAAAAAAAAECAwQFBgf/xAA4EQEAAQIBCQYDBwUBAQAAAAAAAQIRAxIhMVFSYZGh0QQTFEFxwSMzsQVCYmOBotIVMrLh8CLx/9oADAMBAAIRAxEAPwD+qaIiAiIgIiICIiAiIgIiICIiAiIgLxs3IKUfaWJ44I/40rw0ffKg72Qu5vITYzEzOpxQd23k2ta4xu2/Y4g4Fpf4ElwLW7gbOJIar8P9PxSdtPjIcjbO3NbyLfKZnf137kfaGw+Rb4opp+ZP6R/2Zba2d6VYUf8A5ih86Z9K/PSrCfzxQ+dM+lfp0vhif3IofNmfQnothf5oofNmfQr8HfyXM/PSrCfzxQ+dM+lPSrCfzxQ+dM+lfvothf5oofNmfQnothf5oofNmfQnwd/IzPz0qwn88UPnTPpT0qwn88UPnTPpX76LYX+aKHzZn0J6LYX+aKHzZn0J8HfyMz89KsJ/PFD50z6VmU8lUyDSatqGyB4mGQP2+8Viei2F/mih82Z9CxbWg9O23B7sNTjmaeZs9eIRStPwh7NnD7hT4M+c8v8ARmTyKsNsXNHywx3bMuSwkjhGLs5BnqOJ7okIA54z0HP7pp25uYEubZ1rroyc+mJSYERFrQREQEREBERAREQEREBERAREQEREBERAREQEREBERAUXqjMej2m8rlOUPNOrJOGn3xa0kD7pAClFA69x8uV0VnKtdpdYkpy9k0Dfd4aS0bfbAW3CimcSmKtF4WNLL03hxgcHUpcwfKxpdNKP8rM4l0kh+Vz3OcflKk1j4+9FlKFa5AS6CxE2aMkbbtcAR+Aqv6o4p6L0PkI6Go9X4HAXpIhOytlMnBWldGSQHhr3AlpLXDfw3afgWNczNUzVpJWhUfiXxax3DKbBVJ8Zlc7l85Ykr4/FYaBkticxxmSQjnexga1jSSS4fJusf/6hOFmwPtlaQ2PTfz9V/OKm8WMvp7jLpmtBpXC0eLMdS1zyTaa1JWrW8RNyHsp4phIOR++46PB236OG4WCGqOP2dxPFfQeAo6IztzF5/DWMnPEK8EdyN7XQhrSJLDAzsxITK0jfvs5ebZwFj1dx8x2h9TnG5nTGp6mKbbgpSamOPb5rjlmLGx7yc/Pyl0jWl4YWhx2JGxWu4dG8U9MTcIdV5DEjXepcHhr2KzlavfhgmLrHYujlEkpayTl7ENedwSTzAFUjizwK1xrCxrztdBR6p1Ddy8eQwuqLuXgbHToRyRSMpwROdzRSAMfGdmtY4vLnPQb+k460JeJmY0NjtNahzOXw8tRl+elXh8mrx2GNeyV0j5W90B3UAc/ddytcASon2O/GjOcXaOckzOlcjhTTyl6tFblZA2uWRWXRMh7s8jzM1rRznbk5g7lJGyk+HWkMxh+MPFPUF+gamNzsmLfQldLG4yiKoI5AQ1xLeV+467b+I3HVVbhddyPA+TVeK1tTo4DSkmdyOTpatu5etFUnFqyZo4Sx7w9km0jwdxt3OhO6DfCLX/8A9QvCw/8A8l6P/t6r+cUjp/jFoHVmWhxeD1vpzM5OYOMVLH5avPNJytLncrGPJOwBJ2HQAlBabtODI056lmJs9aeN0UsTxu17HDYg/IQSoTQlyazp9sFmUzWaM81GSUkkv7KRzA4k+stDSflKsKrHD5va4i5dG/Z38hasx7jbeMyuDD91rWn7q6KflVX1x7r5LOiIudBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERBVK8zNByyVrO0enpJHSV7Z9zTc5xc6KU+9ZuSWO9yAeQ8uzOeyPrVrfLI6KKbcDleWh24+Q/AvZzQ9pa4BzSNiCOhCrUnD7Gxuc7HT38LzHcx462+OL7kW5YPuNC6MqjEz1zaeN/wDv1ZZp0p3zbT/ksH+zH0L1hrxVwRFEyIHx5GgbqmZrT0+LqSyRag1Fet8pdDSrzwdpKeZrdhvGAAC5u7j0aDuSAvaloK/CyXynWGcnkdK9zSx8TGtZv3G7ch6huwJ36nc7AENDu8Pb5SWjWuKKrehE/wAac9/t4vzSehE/xpz3+3i/NJ3eHt8pLRrWlfEsMc7OWRjZG+OzhuFWfQif4057/bxfmlj5DQV+xSnjq6wzta0Wnspnvie1j/UXN5BzDfbcbjcbjceKd3h7fKS0a1o82U/5JB/sx9C+o6VeF4fHXiY8eDmsAIVOx+nbFyxYrz6h1FSsRSyMZHNYg3mY3lPas2j7zdns3PqJLT1CkRoGpMdr+Sy2UZvv2Vm89sZ+2yPla4fIQR8iZGHGmvl/8S0azK5I6ndNhsRKXRu3jv5GInkrs8HRscPGY9QAPce6d15WvsVWrDRqw1q8bYYIWCOONg2a1oGwAHwAL8qU4MfWjrVYI61eJvKyKFgYxg+AAdAF7LCuuJjJp0AiItSCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAsLLZI4us2VtS1ee+RsbYKkYc8knbc7kNa
B4kuIA28d9gWWzFTB1W2LkpijfIyFgaxz3Pe9wa1rWtBJJJ9Q+E+AKwsJhZY5hlMtDSkz8kboX2KjHBscJeXNhYXknYDlDnDlD3N5uVu4a0PbFYU1Z33Lkrb2SeZGiyYw0xxOeXNiaPU0DlBPi4tBPXwlERAREQEREEdmsM3LVyY5PI8hEyQVL7I2vkrPcwt52hwIPjvyncHYbhfuMyUlqxbq2K0sFiq5jXSOZtFOCwO7SI7nu7lzdjsQWHptyudILAyuFq5Y1pZYozbqPM1Sy5gL68haW87T4jdrnNOx6tc4HoSgz0UThMpNM52Ov8AXLVYo3WJYq0kVeYkHvxFxI2JB3aHuLNwHE7gulkBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERARFW9V8mblg0204+yy81xyVO1O5svkBa5r3MYzq4ueY2dS1oD3Hcloa4PfEx2cplpsrZZeoMh7WnWpSztMUsfOD5QWM98/lHLzElrNujC94U6viGGOvEyKJjY4mNDWMYNmtA6AAeoL7QEREBERB8Sv7OJ7h1LQSq/6TT/Wo/wqfsfteX/yn+5aO4ra4yPD2DTuVhhrS4KTLQUcy+ZjuevXmPZMmY4OAHLK6Lm3B7pPhtug2l6TT/Wo/wAKek0/1qP8K5l1X7I/LYc5V9KhUmr3NRO09gJDUs2C8wRF1yzKyAPfKxsjJGNbG0ElnU7bkY9L2SGpqmEysF7Tzb+cN+hjMLbbjruLpZCe29zGtcy0ztGdmWFz9i4EFuxG/QOlcnf86eTOkjMctaUTQyQyvjIcPh5XDmad+rTuD6wstmqJyNnRRc4HeAJ6LmXjre4n4TgJrG1k8vgKl+IVzDbwUNqFwjdIGyM70u7XbloDg4gguBaNwrJqLW+vMbntM6JoP0/d1nkKtrJXMnJUnjoV6sT2tHLB2pkc9zpWN27UDcE+HRBvj0mn+tR/hT0mn+tR/hXMHt+6zvnT2Go4rCx6pl1Rb0xlBOZXVWPhrOnFiLZwdylhjfyu3O3M3cHvCVtcU9bYzixjdF5K1pXF81erKLd+tZhbmnPe4TtpHtC1j42gDs3l7nEjwB3AdFek0/1qP8Kek0/1qP8ACuauHupNW4ziVxiyObzVG1pfC5DtJaoqzGeONtGKVjYXOmLWANI5m8p5n8zhy82w9tO8ZNcRR6C1BqXGYOHS2tLUNWrVx5m8tx7rEbpKxle53JLzBoa7lazlLhtvsg6P9Jp/rUf4U9Jp/rUf4VzBguPmr6/CHKcTNRU8K3BVPKq9fGY+Cfyq1O24a0BLy9wY1zu6Whjz74HryiW4X8ZtV6i1zTwWbxkd2ldrSzDI47A5THR0pWbERym5GA8OBPK9pB3bsWjcIOnsTffkIHve1rSHcuzftLOUPpr9py/6T/kFMICIiAiIgIiICIiAiIgIiICIiAiIgIiICrmj7MOdN/PRWKV6G7KYqlqrDyu8mjJa1jnkbv2k7ZwPufqnd+E5Gtc1Fp/SuSuyZFmJLYjHFdkgdO2GV5DI3GNvV/fc3ujx8FJY6q+jj6teSXt5Iomxul5Gs5yAAXcreg38dh0CDJREQEREBERB52P2vL/5T/ctXa00pS11pLMaeyLd6WTqyVZSB1aHtI5h8o33B+EBbTewSMc0+BGyivRqt9cl++PoQaFyfAKpLw80jgMVmbOHy+lZI7WMzkcTZJBZa1zZJJI3HaQS88he0nrzeK98xwhzGstEWcRqnWMmRzDb8OSx2XoY+OmcdPCWuidHHzP5tnNJPO478zhuOm28/Rqt9cl++PoT0arfXJfvj6EGi8jwm1BrLQWptM6z1n58Zl67IIZ6OKZSFMtJIkDQ95e4u5Sd3bdwAAblY2R4Q6qyM2Czj9dQx63xDZ67MxHhWivPVmDOeGWt2vXvRtcHB42I+Dot++jVb65L98fQno1W+uS/fH0IOe9Pex9Zgr+mrz89NeyWOztvUWRuz1mg5K1YryQv2a1wELWhzNgA7YMA+VZ/FHg/mOKN5tS3q5tXSbp6tmXENxcb52vhkEm8VkuBjLi0bnlcQN9ttytw5DCy181iYq9N9mpOZW2bRsMYawDOZpDCN38zgB08PFSno1W+uS/fH0INJRcI7lPiHqPNVc+wac1Jyvy+AsUBJ2z21+w3jnDwYwWtYSOV3VvQjdQWm/Y/ZHG2tJ1MzrOfO6W0lM2xhsS6gyGRr42OjgM84ce17JjiG7NZuQCd9l0V6NVvrkv3x9CejVb65L98fQg0XjeA+ObwRs8NstflyFGwbJfdgj7CRrpbL7DHMG7tnMc5ux3O5YDt12U1oLSmsdP2nu1Lrhuqa7YBDDCzEx0yDuPqkjmvcXv2G3Tlb1PdW2vRqt9cl++PoT0arfXJfvj6EH5pr9py/wCk/wCQUwsajQjx8TmRuc4E83eWSgIiICIiAiIgIiICIiAiIgIiICIiAiIgrusMj5LNp+lHlziLORykUEPLU8oNrkY+eSDqNmc0UEu7z7kA7ddlYlXczkuz1npzHszDqUk0dqy7HNq84uxxtY07ybfUwx0sbvhcSB4AqxICIiAiIgIiICIiAiIgIiIK7qDHi1qnS1g4YZA1p53C+bXZ+Qc0D28/Z/5Xn35NvVzc3qViXEHHr2eLeE/GZ+m81wpkv5LT9p78feGcMfaxyxljJWMEJHfjfsWnfYkjfcbrszTOSuZnTeJyGRxzsRkLdSKexj3ydoasrmBz4i7YcxaSW77DfbwCCTREQEREBERAREQEREBERAREQEREBERAREQEREBERBXbWR24h42g3MOi3xdqd+IFXds/1au1s5m27pj3c0M992xPvArEq47JD2w48eMw4EYp05w/k3dd9Wa0T9tt4jq3k39e/qVjQEREBERAREQFVMlqvIT3bFXB0q1ptZ/ZTWrk7o4xJt1YwNa4vI6AnoATsCSHAWta90WebGXyfHzxkx4fBenH9wXZgUUzFVdUXtbnfosa2X581h/IcH86m/Np581h/IcH86m/NqURdN6NiOfUui/PmsP5Dg/nU35tPPmsP5Dg/nU35tSiJejYjn1LtG8UfY/zcWOKeitc5ehhhkNNPLuwbPKWXWg88TJN4/Bkne+UEg+K29581h/IcH86m/NqURL0bEc+pdF+fNYfyHB/OpvzaefNYfyHB/OpvzalES9GxHPqXRfnzWH8hwfzqb82vtmptS0vq13E4+zWb1kbQsyGYN9Za10YDz49Nx4dNz0UiifDnTRHPqXTlG7BkqVe3VkE1aeNssUjfBzXDcEfcK91VuFx34e4L5KwA+QblWlefi0Rh4lVEeUzBOaRERakEREBERAREQEREBERAREQEREBERBXRkv1wzj/ADyemLE/mbyXp+zcvlHbbf1OTf5VYlXRkx7YJx/nnveaxP5m8m8Pqpb5R23/AMOT7qsSAiIgIiICIiAte6J/cq//AExlP+PnWwlr3RP7lX/6Yyn/AB867+z/AC6vWPdfJPoi5000Mrc1dxl1Nbz2eyDNLZiV2KwceSmjqgsoQyljo2uHO1znDuO3aDuQN3E
nKZsjotYOKzmPzgtnH3YLoqWH1JzXkDxFMw7Pjdt4OaehHiD0K5g4OYnizqitoTW8OV7aDKOrX8rPa1XLarWqsrd5o46HkjY4HtBPKGPBaWbFzupWBo/TvoZwU9kFnsVms9DlKV3UtaCR+ZsyCIxlzmTBrnkCbdrT2u3Of43VY5Q6m1PqrF6Oxjchl7JqU3Tw1hIInyfVJZGxxt2aCernNG+2w33OwUsuZ9e6ayGj+EOlM/Dq/VNnOWMvgn2rUubsBkxlsQxyt7IPDGxubK4GMANPTcEjdQMzuKPF/U3EO7p+9NRnwmctYbGGPVUuPho9gGiN8tJtSRk4fuJCZHHmD+UcoCZQ62Rcu671drTQefz+jJcrYfqDX1Kk7T00c8j46F5/JWyAgc47tZEC2y0Dbbdx2CxcvBr3XvFDWemcJdyDqGj46OPphur58VOC+q2TyqbkrTGy57iesjuXue5JJJuUOrEUHoWDO1dF4OHU9iC3qKOlCzI2Kv7FLYDAJHt6DoXbnwHj4BTiyHhwt/g9wX/px/eValVeFv8AB7gv/Tj+8q1Ll7T8+v1n6rOmRERc6CIiAiIgIiICIiAiIgIiICIiAiIgroyJ9sJ1DzvHt5rE/mjyfvj6qW9v2vwe95PuqxKuDID2xHUfO0e/moT+avJu/wDsxHbdr8HveT7qsaAiIgIiICIiAte6J/cq/wD0xlP+PnWwlr3RP7lX/wCmMp/x867+z/Lq9Y918k+ofBaRxOmruat42p5PYzNvy68/tHv7abs2R82ziQ3uRsGzdh08NyVMIs0a/wBNcA9BaP1KzPYbANoZCKSSaERWp/J4HyAh7o4C8xRkhzgS1g8Svq9wI0PkMpqLIS4Vwsahry1so2G7YiitMlZySF0TZAwPc0bF4aHfLur8iWgQOb0Lg9R4CnhMjR8oxlOWtPBB2r28j4HtfCeZrg48rmNPU9duu/VVzUvALQWrtTS6gymAbLlZ+QWJYbU8DbPJ7jto43tZLtsAOcO6ABbBRLQIzI6ZxeWzGJytylFYyOJfI+jYeO9A6RhjeW/baSCqvrfgbojiLmWZbPYQWck2HyZ1qvanrPlh337OQxPb2jOp7r9x1PRXtEtEjyqVYqNWGtXjEUELBHHG3wa0DYAfaAXqiKjw4W/we4L/ANOP7yrUqrwt/g9wX/px/eValydp+fX6z9VnTIiIudBERAREQEREBERAREQEREBERAREQVwZIe2Gcf53O/moT+aPJPD6sW9v2239Xk3+VWNVwZP9cN2P89dfNQseZfJfD6sW+Udtt/U5N/lVjQEREBERAREQFSLOKy2nLlzzfj3ZjHWZpLTGRTMjmhe9xfI0iRwa5pcSQQQRzEEdNzd0W7DxZw72i8SsSoPnbP8AxNyfzqn+fTztn/ibk/nVP8+r8i6PFflx+7qt9yg+ds/8Tcn86p/n087Z/wCJuT+dU/z6vyJ4r8uP3dS+5q/Ia3yGKyuKxtrSmUjuZSSSKpH29U9o5kbpHDcTbDZrSeu3h8KkvO2f+JuT+dU/z6+Ndjfibwz6b7Xb3q32/UUvyfQtgp4r8uP3dS+5QfO2f+JuT+dU/wA+nnbP/E3J/Oqf59X5E8V+XH7upfcoPnbP/E3J/Oqf59fbLGpMh9Rg07JjJH9PKchZhdHH/ncsT3ucR1Ib03I2Lm77i9op4rVRHPql9zBweIhwGGpY2u574asLYWvkO7nbDbcn1k+J+2s5EXHMzVMzOlBERQEREBERAREQEREBERAREQEREBERBXRk/wBcI4/z1/8AixY8zeSeH1Yt8o7bb+pyb/KrEq8MgfbAdR88HbzWJ/M/kw2H1Ujt+1/+PJ91WFAREQEREBERAREQEREBERBr3Xrd+J/DE7E7Xb3UDfb9Qy+PwLYS15r4gcUOGG56+XXtum//AOhm+8thoCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiIK6Mj+uG6h54d+5Yn80eS90fVi3t+228fe8n3VYlXRkf1w3UPPDv3LE/mjyXuj6sW9v223j73k+6rEgIiICIiAiIgIiICIiAiIg19rzm9s7hlsXbeW3t+UdP2jL4rYK0fxH4vaCpcVdBw2ta6drz4zIX2XY5crA11R3kkrCJQXjkPN3dnevp4rdGPyNXL4+teo2YbtG1E2eCzXkEkcsbgC17HDcOaQQQR0IKDIREQEREBERAREQEREBERAREQEREBERAREQEREBERBXRkf1w3UPPDv3LE/mjyXuj6sW9v223j73k+6rEq6Mj+uG6h54d+5Yn80eS90fVi3t+228fe8n3VYkBERAVfyPEHTGItyVLuocZVtRnZ8MttjXsPwEb7j7q/OIORnxWjcpYrSOhnEYYyVnumFzg3mHyjm3WLQoV8ZUjrVYWwQRjZrG/3n4SfWT1K7MLCpmjLr12zf8ATrXfL69tTR3xoxPzyP6U9tTR3xoxPzyP6V7It3dYOqeMdFzPH21NHfGjE/PI/pT21NHfGjE/PI/pXsid1g6p4x0Mzx9tTR3xoxPzyP6U9tTR3xoxPzyP6V7IndYOqeMdDM8fbU0d8aMT88j+lPbU0d8aMT88j+leyJ3WDqnjHQzP57+yx9jvguIvsl9N5jTmaxzcBqiZvny1BYYWUXx7drK7rsOdg6bnq/cetd64XXmgtPYehisfqLD1qFGCOtXhbcZtHGxoa1o6+AAAUmid1g6p4x0Mzx9tTR3xoxPzyP6U9tTR3xoxPzyP6V7IndYOqeMdDM8fbU0d8aMT88j+lPbU0d8aMT88j+leyJ3WDqnjHQzPH21NHfGjE/PI/pT21NHfGjE/PI/pXsid1g6p4x0Mzx9tTR3xoxPzyP6V+t4o6Pe7YaoxG/j1uRj/AJr1QjcbHwU7rB1TxjomZYIZo7MLJYntlikaHMew7tcD1BBHiF9qmaKLaOpdQYqAdnTjjr3WQtGzY3ymUP5R6gTEHEDYcznHxcSrmuTFw+6ryfTnFyYsIiLSgiIgIiICIiAiIgIiICIiCujI/rhuoeeHfuWJ/NHkvdH1Yt7fttvH3vJ91WJV0ZH9cN1Dzw79yxP5o8l7o+rFvb9tt4+95PuqxICIiCqcUv3iZP8A9r/FYshY/FL94mT/APa/xWLIXpYXyI9Z+lLLyEUHrjWFDh/o7Nalynaeb8VUkuTCIbvc1jSdmj1k7bD5StSR+yK1Hj8nk6eodBxYOSrpS3qqBgzIsGxFDybREthAY/d2zvEN3btz9dpMxDFvdFqPG8X9W5Dhs7WL9C0qNWetWuUYLuoY4C6KQEvfYe6IMga0cp6F5Id4NIIUTgPZU4nK8Os7qGbFc+UxWTjwwxOJvxX2Xbc3Z+Tsr2GbMkbJ2re9sOXZ247qZUDeSLmaXjdntG8WNU5niBjJtL4nE6MgunDVcsL0Ekjrr2NkZ0YwSuJbFuQPAd7l2KtXDD2TtLXuuaelrlTDVr+QrS2qb8HqOtmGHs9i+ObsgDE/lduOjmnlds47KZUDd6LxuW4cfUntWZGw14GOlkkedgxoG5J+QALmrUHGrVut7nCzKU
dN3dM6OzWq6gqZTzqGz5Cq6KctbNWa0FscgAeAXOGzRuB0VmbDptFo2l7JazbrUdSHR00fDe9lG4uvqQ5BhmJdP5Oyw6rybthdLs0O5+bYg8uxU3juMeoNWaoydXSeiDmtO4rJuxN3NWMrHVJmY4NnMEJY4yNjJIJLmblrg3fZMqBtdFqDI+yB8g4W601l5h7T0czVrD+ReWbeUdjbFbtOfs+5zb83Lynbw3PionWnskMxpebX1ipobzphdE2o4snd87MhkfE6CKYvhiMZ5nNEp3a5zRsBs4kkNZUDeqLTsPsg5MDk8zU1vpw6WbRwD9SxSwX23RNTY8Mka4BjOWVrnMHIOYHm6OKgdD+y2x+qtVYrC26GHrvzEUz6BxOpauTla6OJ0vZ2Y4hvC4sa7qC9u4233ITKgdAItH6G9kZlNTM0BkMtoo4LT+tNocfeGUZYkjsGB8oZJEIxsxwjkDXhxJ2HM1m+wheHPG7WlHR/FDU2scHBaxWm8lluR1DIdtO015ABUbH2DAWNaD9WLtztuW9SplQOikVO4XayzOutOtyuWwVTCxzhklR1HLMyMNiJzQQ8SNY3bbfYjbxHQkdVcVlpEbpX+EHUn9H4/wD37SuqpWlf4QdSf0fj/wDftK6rT2v5v6U/4wyq0iIi42IiIgIiICIiAiIgIiICIiCuDJfrhux/nl37lCfzP5L0H1Yt7ftv/hyfJurGq6MkPbDOP88u381CfzN5N0/ZuXyjttv6nJv8qsSAiIgqnFL94mT/APa/xWLIWPxS/eJk/wD2v8ViyF6WF8iPWfpSy8lX4p04Mhwz1ZWsx1pYJcVaY9l2KSWEgxO92yL6o5vwhneI9z12XJnBmR+sZNQaSpy0dW5jK6Rt4mLVMGauX2YyMMDY60/a1YhC17382zQXkx94HYbdtosZpvN2LTuvOD2a1Fwv0HhKcmKtZPTFmhblo5N0nm/IGCExuikLWlwbzO52ksPVjd2/BUpPY76xylTWVq3ksBjs7kc3jNTYeTHxyurVLtVrGiKVjgC6PaJg5wQXc7zyN2APR6JkwOctS8Ada8Wctqe5rW5gMQcrpuvh65wEs85r2IbhtRykSsZzND+U7dPDb/OV+0vd1tpStbyevqmm48fTrNaHaUqXLdqaUua3n7MR8wbsT3GNeRvvzbArZ6K5Nhrp/FDSmuYZdOmDUQblY30ndtpnJV2bSNLTvJJXDGdCeriAFrjD8GOJsWP4b6cyl/S9vT2icvUtQX4X2GXbdWvFJFGHRlhY2QMeN9nEEjfcevoxEtfSOc6nsftaM0tjOG02Twfta4/Kx3G22dt5zmqx2vKYqro+XswQ8NaZQ/q1vud1ZNNcPeInDbO5mjpe1pq7pHKZibLtdljYZcpdvJ2k8TWxtLJW7l5YS5pHN132W6EUyYHN2seA/EG9pHXejcJd02MBqLNzZqK7elsNtR9tYZYkgMbYy0d8O2k5j06cm/UWDVXA7PZzS/G7GwW8cyfW8wkxzpJJA2IeRwwfVtmEt70bj3Q7oR6+i3iiZMDTXEbgLNxJ1XZlvXIK+DuaMt6amMbnGxHPLPBIyVreXlLW9kT1cDvsNtiSJHQOnOI9OJuP1W3SMlKvQfWZexLZ/KrUuwayV7XNDYgRzFzQX7k9CANjtRFbRe40hhuB+dx3D7gtgpLeOdb0VkKtvIPZJJ2crI6s8LhCeTdx5pWkcwb0B8PA5uldAa+0HkNdVMRLpq5hszkL2Zx8990/bxWbGzuymia3ldEH83ea/cgjotxImTA1NwK4V5nh3f1dfyseFxUecsw2IcDpx0hoU3MjLZJGc7WbPlJBcAwDujxO5W2URWItmEbpX+EHUn9H4/8A37SuqpWlf4QdSf0fj/8AftK6rT2v5v6U/wCMMqtIiIuNiIiICIiAiIgIiICIiAiIgrvnI+2EMf55PKcX2/mfyTp+y8vb9tt/U5PuqxKuvyRbxDix/noAOxb5xhvJertpmt8o7bb1cwZyb++39SsSAiIgr2v8bPltHZStWjM05jD2RN8Xlrg7lHynl2WJj8jWytVlipM2eJ3vm+o+sEeII8CD1B8VbFA5PQOmM1bfayGncVesv91NYpRSPd9txbuV2YWLTTTkV6NOb/ty5tEvJF4+1Xov4o4P+zofyU9qvRfxRwf9nQ/krb3uDrnhHVcz2RePtV6L+KOD/s6H8lPar0X8UcH/AGdD+Sne4OueEdTM9kUFntBaTrObj8do7BSZi1BLJWM2JYa8fJyjnlc1vRoc9nd3Dnbnl8HFudR4O6Koib/7XxMz5pDK901KN3XYDZoI2a3YDut2HifEkl3uDrnhHUzM9F4+1Xov4o4P+zofyU9qvRfxRwf9nQ/kp3uDrnhHUzPZF4+1Xov4o4P+zofyVCYXhPpKrns/E/RVEV5ZorUVmzXhlieXRNY5kLdt42tMQJb4czy4e6Kd7g654R1MywovH2q9F/FHB/2dD+SntV6L+KOD/s6H8lO9wdc8I6mZ7IvH2q9F/FHB/wBnQ/kp7Vei/ijg/wCzofyU73B1zwjqZnsi8far0X8UcH/Z0P5Ke1Xov4o4P+zofyU73B1zwjqZnsvxzg1pJIAHUk+peXtV6L+KOD/s6H8lfUfC/R0Tw5mk8I1w8CMdCD/up3uDrnhHUzMPRQF/Uefy0HfpSx16ccw9zI6Iyl5b8IBl5dxuN2uHiCrmviKJkETIomNjjY0Naxg2DQPAAeoL7XJi4ne15XpyiyTNxERaUEREBERAREQEREBERAREQV2e/wBnxDo0jmhH2uLsTNwvk25l5JoQbHa+rk7QM5PX2u/vVYlX8hkI6+usLVfl+wdZoXCzFGsHeVFr657btdt2dmCW8m+zu232PINrAgIiICIiAiIgKFy+ZmdZmxWIfWkzbYorBZa5+zhhfLydo/lB3OzZSxm7ecxlvM0bub928y+TIjH41sNq1DLF5aHvLRWheHHmPTq4hmwaOo5mk9PHKw2LZhcZXpMnnsiJuxntSc8srid3PcfWSSSdgB16ADYIP3GYmviI52V+1PbzyWJHzSule573Fx7ziTsN+VrfBrQ1rQGtAGYiICIiAq7NUdX4g1bcWLmkbbxksNjJtskRwmKVjoonResv7aZwePDsyD7oKxKu6jx4l1Dpa8zEPyM1e5LGbTLPZeQxvry80pbvtKC5scfL6u0DveoLEiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiCt6lv+btSaUdJmPN8Nu5LT8j8l7UXZHV5JGs7Tb6lyiFzuY7A8vL4uarIq7r/InC6Xs5Z2UlxFbGPiv27ENbygmtFI187Czx2dG17SR1bvzDfbY2JAREQEREBQ17LyWrrsdinQ2LDHGO7M2wzmobs5mlzOpLzzNLWkAEbkkdN/rM2rU7/N2P7WKxMxwffjEbhTG3Rxa49XH3o5XDcbuG3jn0aUdCuI4wCSeaSTka10jz7p7uUAcxPU7AdUHzjcfHi6UNaN8kvZsawzTvL5ZS1obzPcernENG5PUrKREBERAREQFXdZ41uQhxEnmg5
iSplK08bG2TCa55+V0+491yNe5xYfdAEKxKua+x3nPTzIvMzs86O/RstpNteTnmitwyNl5//CLBLy++7Pl98gsaIiAiIgIiICIiAiIgIiICIiAiIgIiICIqrqXM35cuzC4uYUpRALNm66MPdGxznNY2NpHKXEtcSXbgBvgebduzDw5xKrQq1IqE7EZxziRrPMNBO/KK9HYf/wBZfnmbOfHXM/N6P/TLq8L+OOfRbb1+RUHzNnPjrmfm9H/pk8zZz465n5vR/wCmTwv4459C29D+yd19rHhnwdzOodD4CDUGWqbOljne8CtX2JknDGbOkLQB3Q5uwJduQwtOk/8As9OOPEbjnQ1nkNbZhmUx9B9WtSApxQlkhEjpNnMaC7cdnvzb+rbbrv0J5mznx1zPzej/ANMqvoHgvS4XVMlV0rnMlha2RuyZCzDXgp8jpn7BzgDXIYOg2a3Zo9QCeF/HHPoW3tzIqD5mznx1zPzej/0yeZs58dcz83o/9Mnhfxxz6Ft6/LhLWH/aGT6Q9kxqjSF11WtoiB7MTDlBUdPJj7TCRJaexrgZmFznNcwEbNjYW7kOEnV/mbOfHXM/N6P/AEy1NpX2HGgdGZ2fNY0W35WeR00lvIRVbry9zi4uBnhfsdzvuNk8L+OOfQtvdB6br4tmKitYh9axVvgXPLavIW3C9oPblzByvLhynmHQjbbpspRUHzNnPjrmfm9H/pk8zZz465n5vR/6ZPC/jjn0Lb1+RU7FZfJYjNU8fkbhylXIOfHBZfE1k0UrWufyP5AGuaWNdsdmkFu3e5u7cVzYmHOHNpSYsIiLUgiIgKu6/wAc7KaUt124h2deZIXtoMteTGQtmY4HtPVy8vN8vLt61YlXOIWNGY0bk6hxD872jG/93R2fJ3TEPadhJuOXbbf7iCxoiICIiAiIgIiICIiAiIgIiICIiAiIgKi2P4Tsv/Q9D/HuK9Ki2P4Tsv8A0PQ/x7i7ezff9PeGUaJS6Ii3MRERARVvh3rzH8TNH0dSYuGzBRuOlbHHba1so7OV8btw1zh7ph26+G32lZFNIIiKgiKD11rCnw/0bm9TZGKeahiact2eOq1rpXMjaXENDiATsOm5A+VQTiLwoXGZGjXtRhzY542ytDvEBw3G/wAvVe6ohM3++TRv9Kv/AOCtK/Kg5v8AfJo3+lX/APBWlflq7Voo9PeVnyERFwoIiICrnETHDLaKy9Q4Z2oBLDy+bGWfJjY6juiTccvw7/IrGq7xDx5yuisvUGIfnjNAWjGx2vJnWOo7ol959tBYkREBERAREQEREBERAREQEREBERAREQFRbH8J2X/oeh/j3FelRbH8J2X/AKHof49xdvZvv+nvDKNEpdc5eyOtZ7h/xAwmpNMV5JslqzGy6LaWe5huPf2tGd3yMJskn4F0avGxSr23QOngimdBJ2sRkYHGN+xHM3fwOziNx6ifhWyYvDFyDpyvq9uWyWlxFYv5Pg7p7ItxtuSPfzhcniezGyAdQXCq0gjr3nlfvATh7DlbPDzVmL13pSHK2Qy5b8317AyuWBiJsV7TpLrxI7cku3j7rmAgNA2XXsdKvDZmsxwRMsTBollawB8gbvy8x8Ttudt/DdQ2M4f6XwucsZrHabxFDMWSTNkatCKOxKT480jWhx3+UrHIHF2ksNpXBcFNDap01Zr1uKrtSMrVTUuE2bfPk3MlryRh3ej7AvJaRsAN+m/W21BDwp15qeLFRYvV+r8/Wz1vBanxl0zXm2I43zGnch3IPI4CNrh03aG8rSV1Bi+G2kcHl2ZXG6WwuPyjGdk29Vx8MU7Wbbcoe1odtt023XtidCaawOZt5fGadxWOy1vfyi/UpRRTzbnc88jWhztz16lSKLDljgNw/gyc/D3V2L13pWLK2Gsu3PN1ewMtlh2RNmvZdJdeJXbkl28e7XMBAbtsoLRORpHiRwr15hI9P6YGrM7Ygfi6NqebJz1nxT83lj3ylr++2M8nZ/U3FgDumy7CxnD/AEvhc5YzWO03iKGYskmbI1aEUdiUnx5pGtDjv8pXlDw00hWuz3IdKYSK5PZZclsMx0IkknY7mZK5wbuXtd1Dj1B6gpkDmbAaWZheD/GXXeCx/lOu6Wb1KcdkSDJPTAsStcIN9+ToXu2aOrj13WfqrSPCnCex21nktGWcdbzd/Rtx77seQM1u7F2QMksoLyXu5i3dxG7S7bpvsuosZhsfhYposdRrUIpppLMrKsLYxJK9xc+RwaBu5ziSXHqSdyoKrwr0VR85eTaPwNfzlG6K92WMgb5Ux3umy7N74PrDt91ckaa0povEcL+N/DWLTVd+Oi1HpzIHLME8kguPgFR8UsnM480gMr++euziN9l0csF+Bxj71G67HVHXaMb4alkwNMldj+UPbG7bdjXcjNwNgeUb+AWcsoiwhM3++TRv9Kv/AOCtK/Kg5v8AfJo3+lX/APBWlflh2rRR6e8rPkIiLhQREQFXeIeObl9E5im7EvzrZoC042Ox5O6x1HdEnvftqxKucRMf510TmKnmmXO9tAW+bobPk77HUd0Se9+2gsaIiAiIgIiICIiAiIgIiICIiAiIgIiICo1lpHE3LOO2zsRRA69TtNb3/vCvKgtQ6aflbEN6jbGPykLHRNnfF2scjD1LJGczeYAgEEOa4HfYgOcHdOBXTRVMVZomLe/ssPFFGOwGr+Y7ZTCAerehN+eTzBrD+dMH8wm/PLs+Htxz6Lbek0UZ5g1h/OmD+YTfnk8waw/nTB/MJvzyfD2459C29JoqXjL2rsvrLM4OrbwskGIhg8qveRTcrbEoLxXA7bq5sXZyOO42E0e2+52sPmDWH86YP5hN+eT4e3HPoW3pNFGeYNYfzpg/mE355VniZldYcOeH2odUGxhciMRSkuGq2nNGZQxu5bzdqdvDx2T4e3HPoW3ryi0F7H32StP2RFNseF1DhcbqBjeabBX6MrbDdhuSz6vtI3oereo26gLdXmDWH86YP5hN+eT4e3HPoW3pNFGeYNYfzpg/mE355PMGsP50wfzCb88nw9uOfQtveGaHNqTR2224yrztv9h2VfVW8LpazBkGZHLXY79yJrm12V4TDDAHe6IaXOLnkbDmJ8N9g3mdvZFy9orpqmmKZvaPeZ90kREXIgiIgKucQ6Iyei8rVOLmzQliDTQr2Owkm7w6CT3vw7/IrGq7xAo+c9JXavmyfLiV0TTTrWOwe8dqzch/q5R3j8IaR60FiREQEREBERAREQEREBERAREQEREBERAREQEREBQus9UQ6M0tks1PDJabThL2VoRvJYkPSOJn+c95awfK4KaWvNVN9LuKOndPDZ+PwkXpDkW7+MvM6Kixw+AvbYlB9TqrEE3w40rPpLSsFe/Ky1mbT33snaY3YTW5Xc8rh8DQTytHqY1jfUrQiICgOIGlG670JqLTbrHkYy+OsUPKez7Tse1jcwP5dxzcvNvtuN9vEKfRBzF7Hj2Hmj/YxZzH5GxJQz2dnjkrxalyDzXnZYkOwgr1y5zGh0YI5gTLv2g3LJOVnTqw8tjm5XHT1S4R
PeN45jGyQxSA7skDXgtLmuDXDcHq0LE01lfOFOStNZFrJY9wqXpBWdXa6cMaXOaxxOzHBwc3Zzhs4d47FBLoiICIiAiIgIiICrmvqIyWAirOxc+YZJkKPNXrz9i5rRbiJlLv4sYHaOb74MLfWrGq9rCg7JOwUHm2fIRDKQzSPhs9iKwjDpGyv9b2h7GN5B4lw9QKCwoiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAtfcJP++ZtW6oe1vPl81YhheDufJqjvJIx4+BdDLIB/wCKfWVsFa89j2AOCmjncwdJJj2SSubv1ldu6Tffrvzl2/yoNhrEyGSjxwYZGudz77cu3qWWoLVHua323f8AJB7ektf63L94fSnpLX+ty/eH0rRGqfZC4fS+Z1Tj3af1Fkxpjs35e3j6cb4KsT4GTCUudI0uaGOO4aC8cjjy7bE5mpuOuGwWVjxuNxOb1bdNBmTmi0/UbP5NVfv2ckhc9g72zi1jd3kA7NQbr9Ja/wBbl+8PpUZPk4256DJRy3yzsXVZafajsCC4ObLyEHvtILdwW7te7m5uVu2gcr7ISxPxD0JR03gb2pdM6iw8+TFvHxw9pJs6IN5e0mZyhgeTIHDfvMA3IcBsriLrfH8N9EZjU2Vhs2MdjYDNPFUa10rm7gbNDnNG/X1kINj+ktf63L94fSnpLX+ty/eH0rUmseLmD0Blo8fmW2oCcTdzL7EbA+KOvV5O1368xcRI0gBp36+HrjKfHbGSaRm1BkNP6iwlfngiqVb9JvlGQfN+wsrsje/nc47dNwRv3thuUG7vSWv9bl+8PpT0lr/W5fvD6Vztqb2RcWM0VrK7X0vnaWpcBi3ZI4TK1o45TCQ/ksbiUsfC1zHc5Y8uAae7vsD6cKeI+QhxeisfqyfN3tQatbZs1zkKVSuKzYYmve0tgdsGEHmYSXuId3iEHQvpLX+ty/eH0rIo5eLITGNjHtIbzbu2/wD961zbrTjJNZyWFqaels0HVdeVtMZTyiGMidhrmV4ZvzbNIfH3u67cHwHjvzTX7ek/0Z/vCCyoiICruYoDI6z066TGzzQ0Y7VxmQbPyRV5uVkLWOZ4vc9k0xB8GiN2/UtViVewlUW9S5zLT4hlKwHR46vd8qErrdaMc4dyDpFtLNOzl90eQE+9DQsKIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgLXfCgejFzUWip+5Ji70t6gD/AJShalfNEW/JG90sH/sgn3QJ2IqtrXS1rLPpZjCyw1dTYsP8jmsbiKaN5aZaspAJEUnZs3IBLXMjeA4s5SFpUFqj3Nb7bv8AkvvS2qa2qqEksUclO7Wf2F7HWeXyilPyhxilDSRvs5rgQS1zXNe0ua5rjk5fGvyIiDHtZyb783y7IObchw81BPc49PZj+Zmp6kUWJPbR/qlwxggI913Pqg5e/wAvw+HVQemdJ684SZeTLYfSQ1Sc1p7FU7VZmRgryUbtSF0ezy9wa6JweO8wuILTs077rp30Zm+vR/hT0Zm+vR/hQcr4LhPrHhNW4U3sZiGawtafxmQx+Up1LcVZwfafHLzxOmLWuY17HN2JB2IIHqW1eN2irvEnhDqnTePdHDkclQfFB2ztmdp0c1riPAEgDf1brafozN9ej/CnozN9ej/Cg5ou4vXGsOJmO1JkeHTamOpabydKTHXspVk8rszdhtC/kc8CN/Zlod16c3M1vQGi2uAmstRabyNdmmWY3TuMy+Oy+F0Nn8pHeikMLZW2oBIC9sUMjZRyNJIDm77NBXYcuDtR5mtXDI3RTQSvdY7QDkc1zA1nL7o8we47gbDk67bjfO9GZvr0f4UHM8XDE3+FnEerh+EmN0Bm8pgrWNpQ1p6hnuOkgeAxzojyMbz8gHM/17nl2U1qvSGpcdb4U6hxeEdm7OmIJq17FQ2YopnNmqtiLo3yObGSxzBuC4bgnYrf3ozN9ej/AAp6MzfXo/woOSfa14gS4nMZt+mIWZmPiJFqqvhRkoSbNRtaKItbLvyNf7ro7YbsPqIJ6y0ySbryRseyPT7oXp6MzfXo/wAKzMVh5MfYdI+RrgWFuw+2PoQSyIsHL5WPE03ymN9qxyPMFOFzBNae1jn9nGHuaC8hp2BIHTckAEoMLVGXno1W0sZPj26gute3HwZCRwje5o3c4tb3nNYDzEDbfoOZvMCM7DYalp7F18djqzKlKu3kjhjHRo8T9skkkk9SSSepXjicdZgms3L1l1izZcHNiIbyVGcrR2MZDQXN3aXFztyXOPg0NY2TQEREBERAREQEREBERAREQEREBERAREQEREBERAREQVfVekp8hdhzeEstxupKsfZxzPBMFuLcnyew0e6ZuSWuHejJJadnPa/J0rq6DUrbNeSB+NzNEtbexc7gZa5dvyu6dHxv5XFkg6OAPg5rmtn1XdWaPZqF1a9TtOxOoKIcaOUibzGPm25o5GbgSwv2HNE47HZrmlj2MewLEioFrjDh9JYLMW9cWa+lLOFrm1fbPIXROhDg0TQHbeVjnFrQGjn5ntYWhxANywmZpajw1DK42w21j70DLNadgIEkb2hzXDfr1BB6oM1ERBXc1QMmsNN3WYmK26FtqF2QdY5H02PY0kNZ7/ndG0H4NgVYlXNSVBNqPSc/miLIOhuzbXH2RE6iDVmHaNYf2Uu6R8o8BIXeDSrGgIiICKt8QuI2m+FWl7GotV5WLD4au9kclmVrn957g1oDWgucST4AHpufAErNsZwz3hSxbYr00Vg17sjJWkUD2PaDtG778xDotmdDtI13RvVB6ZbN+QHsKtZ+TyBdF+o672Ne1j38vavLnANjbs5xPiQxwaHu2afnH4Jte2b1yRmQyLXTiG3JCxr68MjmkwsIG4ZtHFv17xjDj122+8Lg48PA3mmfevuijjs5KwxgntFgIDpCxrW+JceVrQ0cx5WgdFJICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiD8JDQSSAB1JKqUnEavI4uoYfLZWtv3bVWFjYpPlYZHtLm/A4DYjYgkEFe3E6R0egM5ykjnrmN2x23a4hpH3iV6ABoAA2A6ABd2Dh0ZGXVF89uFurLyu5L9mrwU1x7JebBOwMuRx2OxcbtsJka8LIXTuJ5p+1ZK4lxbytDS0hvKSCOdy2Z7Fh2teFPBrFaS1pp27cyOJkkgrT42SKVj62/NHuXyNIILnN228Gt+0N0ot2ThbHOepeNTC9sSX4q57/Ur/nk9sSX4q57/Ur/AJ5ZqJk4WxznqXjUq+f1XLk8tpu0NDZS2cdffZ7ad8LH1Qas8XaRASkPee15OV2w5ZHu33aAZv2xJfirnv8AUr/nlmomThbHOepeNTC9sSX4q57/AFK/55PbEl+Kue/1K/55ZqJk4WxznqXjU5P9m5w/4leyPo6dwGlcJJjtPUpHXLhyc8cbpp9uVmzWF+4a0u2O46vKvfsUsPrrgnwyi0rrGK9qJtR+2PFKrCDUi9cRldODK0H3O7AWjpuW8rWb1RMnC2Oc9S8amZgNT1NQdqyOOepbh2M
tS3EY5WA+DtvBzTse80kbgjfcECXVEmeYte6bLehkitxOI9beRjtvvsafuK9rkx8OKJiadExfnMeySIiLmQREQEREBERAREQEREBERAREQEREBERAREQEREBERBVeKX7wMz/oh/vBe68OKX7wMz/oh/vBe69LC+RHrP0hl5CKtcS9aM4c8PtR6okquusw9Ca75Mw8pl5GFwbv1232239XitO1eOmstHantxa6Gn58RX0bZ1YTp6CbtNo3xt7EuklI987Z22z9/BvL1kzEMXQ6Lm3RHsi9ZZnPYRmQwcVzG5hry+OhgMtWdij2TpI3S2LETYp2btDC5vJ1cCAQszQPHHXWRx/CvP6kp6ebgtcSMpivjI522ak760k0b+d7y1zHGF27OUFnMBzP23MyoHQyLlnFeyw1LqGStnsRgm5LTVm8IYcTXwGVfekq9t2ZnFwReTc228nJ4bDl5+ZbF0DxA1zrjWOsmOGnsbpfTees4p0ksEzrNljIWPB37UNjLTI3d5Dg4EgNby7uRVE6BuFFzpwy9kRqDU/EfH6XyFrTmYgzNK3NRymAo3o60M0HKSwyT7MssIce/C4dW9QNwoPhjxY1bw59jzmtZ6qv09SxRZK5VoVYa9gWpLTsnLAGySGSUuj5yOVrWczWADvkDdlQOp0WjuEvGbVWp9dMwGdxjLtOxSkssy2PwGTxkNaVjm/UZRcYA7mDiWva73hBaNwt4rKJuIa1+/zS/wBq1/hBXxUO1+/zS/2rX+EFfFq7V9z095WfIREXEgiIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiCq8Uv3gZn/RD/eC914cUv3gZn/RD/eC916WF8iPWfpDLyQHECrLf0JqOtC2Z802NsxMbWrR2ZC50TgA2KQhkjuvRjzyu8D0K5h4A6Av+X5PSM+lZ5NF5fCzUctkMnpifCWoQAGRQMkmszOmaQ+XozZrCAQeq69RSabzdi1foHhfq/SMEGMyXEN+f09TpupVKUuIiinLOUNjM04eTI5jRtu1rObxO6xsXwG826L4V6f8+dp6DXa9zyjyTby3sq80PLy8/wBT37bm33dty7bHfcbZRLQNRaG4Kah4b3YMdgNePraFguvtw4CbFRSzRMfIZH122S7cRcznbdwuAOwcpzA8Ia+LwvELFW8jJcqawyVy9N2UXYvrssQRwujaeZ3MQGEh3Tx8OnXYKJaBpbSXAPP4PUmhsrk9cty0WkK8tCjSiwzK0b6skIiIeRIT2nciPODy9zYMHMSvOP2NkkultUaPu6qlsaLyk892hQipNit4yzJZFlsjLPOecMl5i0Fg8epOy3aiZMCl6A0vrHAT2JNU63ZqthibFDFDiI6LWEHrI7le8ueegOxa34GhXREV0CGtfv8ANL/atf4QV8VDtfv80v8Aatf4QV8WrtX3PT3lZ8hERcSCIiAiIgIiICIiAiIgIiICIiAiIgIiICIiAiLysWYqsTpJpGRRtBJc87AAAk/gBP3EHqvG5dr46rLatzxVa0TS+SaZ4YxjR4kk9AFX49R5DUULTgaToqlrHvsVc1kYiIWzE8sbDXLmTO/jnfkBbygO3ceXIZpCraknly8j80+xHXbJBc71UOiIc18cB3Yx3OOfm6u3DevdbsERrOzd1dg8vhMHj3z2ROynNYviSpXjBHM6RjzGe2DBt0YC0uPKXN2cW4L9aUqX1LJQ3Mdcb0kgkqSu5T6+V7Wlrx8rSQthourCxoopyaovHrb2lb62uvT/AAf8pn+ZzfkJ6f4P+Uz/ADOb8hbFRbvEYWxPGP4mZrWfiVp2r2XbX3w9q8Rx9pVmbzuPg0bs6noei9fT/B/ymf5nN+QrJiu0z2Ymyj3SDHVi6tUqWqHZSMmY+SOWcPf3y142awgNBaHOBe2RpFhTxGFsTxj+Jma69P8AB/ymf5nN+Qnp/g/5TP8AM5vyFsVE8RhbE8Y/iZmtbHErTtQMM998Ie9sbDJVmbzOJ2DRuzqSfAL19P8AB/ymf5nN+Qr7foxZGpJXma1zH7e6Y1/K4HdrgHAjcEAjcHqAo7TF6zLUko35ZbOTx5ZXtW303Vo7L+za7tYxuQWu5veuIBDmnYtIDxGFsTxj+JmVP0/wf8pn+ZzfkJ6f4P8AlM/zOb8hbFRPEYWxPGP4mZpTUPGDRektW4S9qfP1tNVGwzmocqHQSW3u5GksY4B3I0b7uIAJcA3fY7bD0NxQ0lxMgtzaV1Fj88yo5rbHkM4e6Iu35eYeIB2dsT0PKdvAqleye9j5jPZGcMben7HY1sxATZxOQkaf1NYA2AJHXkcO64DfpsdiWhUb2IHseM77GzhOyKapRvapzM0NrL1O0Y10Hee0xssNYefs4nMIjdu0SCblftJuOXFxO8mJtaI0EulkUfjM7Uy0tuKAyslqzvrSRzwvidzM5SS0OA5m7OaQ5u7SHDYqQWlBERAREQEREBERAREQEREBERAREQERfE00daGSWWRsUUbS973nZrQOpJJ8Ag+1h5XL0cFTdbyNyCjVa5rDNYkDG8znBrW7n1lxAA8SSAOpUJJn7+pKjxppkbILNGOzTz9pjZqb3Pd3Q2JsjZJNmDnPuGnmYA47u5ZKrpupXyVy/I6e3ZtPie7ymZ0kcZjZyt7KMnlj8XE8gG5cSd0GL52y2Vn5MbQ8ihr5HyezNlWFvbQNbu+Su1p3du7ZjS/lHRztnAN5/wBx+j6dexRu33vzeWpOndXyV9jHTQ9se+2PlaAwbAN7oB5RsSdyTPIgIiICIiAq/qmObMtbgYG2Gw3mPZeuUrra81OAscA5pHfDnuHI0sAI7zg9rmjeQzecp6fpssXZ2QtklZXiDtyZJXuDWMAAJJLiB0B+H1LH03hpMdXfavQ0RnLoY/IWKMTmMlka0NG3MS4taAANz6t9hvsglmMbGxrGgNa0bAD1BfSIgIiICgNS07Fa1TzdGG3duUwYHUYLbYY54ZHx9oXNf3HOYG87SS0jZzQ4B7gZ9EHyx7ZGNexwcxw3DmncEfCvpVjTkcGl8i/TjW4yhRLXT4alWlcJTXaGdsDG7psySQbFh5Q2RjeVuw5rOgIiIMHJYPH5iajNdpQWpqM4s1JZYw59eXYt52O8Wnlc5pI8WucD0JCia8Oa03FVh7STUOPhhndPZsPaL/MCXRNa1rGskHL3NyWu7rSS4lxVkRBgYnN081XilrSOa98Mdh1aeN0M8TXglvaRPAfGTse64AggjbcFZ6iM1pinmm2JO/RyMtfyVuUpbR24mcwcA2TYnYOAdyndpPiCCQsefKZbC2Z3XafnHHSWYIasmOjLp4mvAa987CerWv680e/df1YAwuIT6Lwp3a+Qg7arPHZh5nM7SJ4c3ma4tcNx6w4EEeogj1L3QEREBERAREQFRD4q9rmj2UMja/DalblcI61XUOHnnlcdmxxtvw8znH1AesoNuIuYOKtXHcQeLuvsPW1XTwjX6Go1Zsn5QBFWkOQleI5XBw2Dw5jSNwS2Xp4hUbN6grO0di9H47HYXROHh1gMXqSSvPNcwssjqnPD3mSROE
EjuyDmczOVwAdvueYOwtWapx2iNM5PP5ec18Zjq77NiQNLiGNG52A6k+oAeJXxpHUcuqsNHkJsLk8A57iBTy8cbJwPU4tY94APwE7/AAgLk7iJwoo4DgBxYc7Oadz2LjhrzVsRga746mJuRg80jGvsTFj3slZuAWjoDt1K2xksThtAcf8AhzQowVcFhmacy9WpEzliha7tqshY3fpvtzO2+2g2fqXXdDS2f0ziLUNiW1qC2+lVMDWlsb2wvmJk3cCG8sbvAE77dPWrfg8bBeyUT7UYmfXD5I+pDQSCw7jfY917h1+FcYaQzWPOotH5cXq5xU3FTUBjvdqOxeJIbQjIf4Hm3G3w7jZds6a/b0n+jP8AeEFlREQUu/8At6z/AKR395XgtfeyhJHA7iYR0Pma9/huWoK/BvR03G3SOLlwcU2OymkrN7IVZZHvjvWGS12smnBd9VkAmk7z9zu7ffcAgOoEXGeFmp6z4c8NdGZSjiMjYec3JBktWWJnVK1apdfA1ojbIwzScnZgAvHK1hO6/MXmLMvsa9DZTEZDzrxFxWet0NKSVT2zrbm2pozAeZ+5rms3vFzujGMJJIG4dmotaex1iwY4U4qzhbM12S46Szk7VxvLamyDnfqkzt97IJA5pb70NAHQBat0jo2vM3jjq6jjI8nq/F6gyxwz5mmUwTtoxcgiYejXOc/ZxA3d3Qd9hsHTTd3O5jzAeAaf71FaX1di9ZUrVvE2DZr1rk9CR5jczaaGQxyNAcATs5pG/gdum4XLnAzh9XzE2gNUY7W+lY8lZa23cFGvYGUyoMR8ogsukuPEjtyS7ePuuYCA0DZRFDS+kdPexs42jF4/F0NQxWM7SnZXYxllteO0/s43Ad7ka0x7DwALUHXuos/6PVas3m3IZPt7UNXs8dB2r4+0eG9o8bjaNu+7neoAnYqVXN/FThXpbSGl+HFrG4evFkW6xwUj8g5vNYlkdZia+R8h7znOA6kncqg5LS0vEzWXE2bUOq9L6fzeNzc9KpPnILAyGMqgN8klrSNuRNjaWkPBDO88u5i7wQdnKe0v4Wf6v/Ncdy4nH8OOOeOzGoLOL13ey+VpYtuRiuGPK4e66u1gYYA4tNdxHaFvQt7TchwAK7E0v4Wf6v8AzQTqIiCF1VUnkoR3Kc0Na5RlZYbNLU8oPZhwMzGtHeBfHzsBb1BcDs7blOfiMtUz2KpZPH2GWqF2BlmvPH7mSN7Q5rh8hBB+6stV/TF5wu5rFT3rF61RtOkL7Ffs+WKbeSNjXeEjWBxYHDr3Nj1BJCwIiICIiAiIghLWnfJ7TbuJk832GummkrRBrK92SRgG845SSQ5rDzt2d3dtyCQcjDZtmTDq8zWVctBFFJcodpzurmRu4G+w5m7hwDx0JY4Dq0gSai83j5rDI7VOaeG3Wd2rY4HsaLIaHbQyFzSOU7+PQg9QQglEWNjbjshjq1p9aek+aNr3VrIAliJG5Y7lJG48DsSOnQkdVkoCIiAiIgLXuQx1bK056d2rFcqTtMctexGHxyNPi1zT0IPwFbCRBp+rww0fRqTVa2ksHXqzVxUkgixsLWPhDy8ROaG7FnMS7lPTck+Ky6uhdO0NPSYCtp7F18FICH4yKlG2s7fqd4g3lO/2ltVEGpqvD7TFHT0+BraaxNfB2N+2xkVCJtaTfbfmiDeU+A8R6l7Z3SOE1fXFfPYTH5mCOUSMiyFZlhjXAdHAPBAK2mqzLHBpfU8t3kxlDG5lzRbsyymKeW/9ShgGx7r+eNoZvuHAxxtAfzdwKgdC6cONkxx09izj5LBuPqeRR9k6cu5jKWcuxfzdebbffrurhpsEXpOn+TP94VlRAREQUPUOLqZpuRoZCpDfo2TJFPWsxCSKVhJBa5rgQ4EeIKwxgcc3I174x1UXq8DqsNoQN7WKElpdG1224aS1pLR07o+ALZCINR3OHGlMji62Mt6Xw1rG1pnWIKc2PifDFK5xc57WFuzXFznOJA3JcT61kVNEaeoZGPIVsBjK1+OSWVlqKnG2Vr5QBK4ODdwXhrQ4797lG++y2oq6+5Jq1nZY2zJBhpI45RmKM8ZM5E3fhj3Du6Wxua542PLIOzdzd5gVbG4THYt912NoVaJuzusW5KkTYzPMdg57y0d552ALj16DqvXH4ajiXW3UaNem63O6zYNeFrDNMQAZH7DvOIa0Fx69B8C2HDDHXjEcTGxsG5DWjYdTufwr7QanxugdNYbN2Mzj9OYmjl7BJmyFajFHYl38eaRrQ47/AClfFvh1pW/eyN2zpjD2LuRhNe7Ylx8TpLUR23ZK4t3e3oOjtx0C22iDXF/B4/Kw14buPrXIq00diCOxC17YpYyHRvaCO65pAII6gjoo7L6B01qDLV8plNOYnJZOsAILtyjFLNFsdxyvc0lvX4CtsIg1UNC6cbqP0gGnsWM9tt50FKPyrbbb9l5ebw6eKvGmBsLP9X/mp1EBERAVdmnNTiBVidayDm38bIWVhHzU2GGVm7i73srhOAB75rD/ABFYlXNQTCvqnSxNnIx9tPPAIKrOavKTA9/1c+9A7M8p/jED1oLGiIgIiICIiAiIgreNqR4DVt+tXqVatLLA5AyCye1ltgNZLtEegbyNiduzpzc5cN3busirmsI215MLkxBjXy0r8bfKMjJ2RhZL9ReY3fxyJAA09HE7eJCsaAiIgIiICIiAiIgLHyFCHJ1JK1iNkkT9uj2NeAQd2u2cCNwQCNweoCyEQQmlcpLbqzUL1jyrL4xzK16YVHVmTSdm1/axsJd3HBwILXOAPM3fmY4CbX86/wDtF9TccsddmZHBLheGZbJXbd09cnItxyNa0svEOAG/e2byBu0jm80m267n4R5efUHCnRmUszOsWbuFpWZZnuLnPe+BjnEk9SSSfFBbUREBfE0rK8T5ZXtjjY0uc952DQPEk+oKl8a+JFbhFwq1Nq6yWjzZSfJC1w3D5j3Ym/de5o+0SuMPYE+yO11xT1E3RGtYczqbDdnasRZd8RmjbztJMN2RwPPFsXhm56Oe1pDhydmHdEU1nUliGaCWajiYZYbENiCVh85MMRdttsSyIOfGd92ucY3Ajk/ZJqvXip14oIImQQRNDI4o2hrWNA2AAHQAD1L0RAREQEREBERAREQEREBV3Uspjz2kmixkYRJkZGmOkzmhl/Udg8tg+9jG3MD9cbGPWrEq5qewIc/pBhu3qplyUjBDUj5orP6jsu5Jz72Mbc4P8dkY9aCxoiICIiAiIgIiIK9xBgfPorMmKDF2J4a7rETM0dqfaR99jpT71oc0Eu9W2/qVga4PaHNIc0jcEHcEKN1NWbd03lq746czJaksZjyI3rOBYRtL/mH33ybr605P5Vp7Fzc9SXtKsT+fHv56zt2A7xO9bP4p9Y2QSKIiAiIgIiICr2tNaU9FYxtiw02LUxLK1SM7Pmft8PvWjxLj0HykgGwrnHW2bfqPWmWtOcTDWldQrt5tw1kTi15A+Eyc5J+ANHqC9X7N7JHa8a1X9sZ56Lveed1dn9Tyuffyk8EJJ2qY+R1eJo+Alp5n/
wBY7fIPBQLsZA87u7Vx+F0zyf71lIvv8PDowoyaItG5jlSwpMLTmjdHJE58bgWua6RxBB8QRuvyHB0q0McUUJiijaGMYyRwa0DoAAD0CyrVqGjWlsWJWQV4WGSSWRwa1jQNyST4AD1qp4Ti7pLUU80NHLc8sUD7XJLXlhMkTfdPj52DtGj4Wb+IWVWJTTMRVNrmVOtZfNNb+K//AGr/AKU801v4r/8Aav8ApVc07xa0pqvI1KOLyws2LkRmrc1eWNlhoALuze5oa8tB6hpJHXcDYqt61474fE36WKwl6vfyz81Uxk8b68roWiSZrJWiUAM7RrSTy8xII6joVqq7Rh005WVFvUvVrX+7prGZKDsbdRtqHcO7Ocl7dx4HYlZNPGQY5jWUzPSaz3Pk1iSLl+1yuGyykW+c+kyp1rVpXifm9MzRx3rEubxXg9k/esxj4WSe/wBv4r9yfU4eB3ljMnVzOPr3qUzbFSdgfHK3wcD/AHH5D1C5jWxOB+cfBlspgnvJhlj8vrtJ35SCGSgfANzGftucfWvmftXsGHOHOPhRaY02846rE3biREXxoIiICIiAiIgIiICruprPYZ3STPKchB22SkZ2dOPmim/Udl3LYPvYxy8wP1xsY9asSrmp3uZn9IAWcjAHZKQGOlGXQzDyOyeSwfexjbmB+uNiHrQWNERAREQEREBERB4XmdpSsMIicHRuG043jPT33yfD8ii9DvMmitPuLsW4ux9c82DO9A/U29a3/g/xP83ZS843gkGzT3T0f7nw9fyKF0C1zdC6cDmYuJwxtYFmD/aDT2Telb/wf4n+byoJ5ERAREQEREBcuXKz6WZzNWT9khyNoO38djM5zT91rmn7q6jWoOL+i5696TUtGJ80EjGtvxR9THyjYTgeJHLs123gGtO2wcV9B9jdopwsaaK82V9V0xZrhFF57TWH1dRjq5jG08vTa8TMitwtlYHbEBwBBG+ziN/lKgfaY0EP/wBm4Mb/AGBF+SvtKprif/MRx/01nGPTGQ1lwv1HhcUR5wt1SyFpdyh5BDuTf1cwBb91a/0/g8VqAyWmae1vWzFDG2XRO1DZuSwwSviMboo+1kcJHODjsWAghviDstnYThrpPTWQZexOm8XjbrAWtsVajI3gEbEBwG/UKyLRVgd5Vl16eOjR5ZlaTxencpHp/gWzzdchnxzI23PqDg6p/wB3SMPaDbud4hve267DxVWx0GYocPdJaHm0pnGZjD5+i63aioPfUkYy4HvsNmHRzXA8xPiNzv0BK6VQgEEHqCtc9kjyq3coj2BFTPaX0D8TMF/Z8X5K+n8GtByPc52jsG5zjuXGhEST95dN8TZjj/pFxVv4N1nz8RXzt/Y62LmbIR6jJLFyD7vZP/1VSalWHH1qtKnWDGNDK9apXZ8A2bGxo+QbAD4FvvhjouTSOGlkuAedLzmy2ACHCIAbNiBHiG9evwud6tl5v2p2inB7NVTP91WaI+rONa5IiL8/BERAREQEREBERAVX1bZZBqPRUbs3JinTZWVjabInPGSIo2ndg4jowNDTNu7pvAB4kK0KvalyLqWd0nA3JSURcyMkLq7KomFwCnYf2Tn/AOSALBJzjxMQZ79BYUREBERAREQEREHxN1hk6NPdPR3h4ev5FB8P4+x0HpuPscdW5MbWb2OHdzUo9om92A+uIeDT/F2U1bPLVmPc6Mcfqh2b4es/AofQcPk2htOxdhj63Z46s3sMS7mpx7RNHLAfXEPBp/i7IJ1ERAREQEREBERBQc/wYwOXnfYpusYOw9xc80C0RvcfEmNzS3r4kgAn4VAO4CTb93U8wH+dSYT/AHhbdRelh/aPa8OMmmvNvtP1W7UPtCWPjRJ8xZ+UntCWPjRJ8xZ+UtvItn9V7Zt8o6F2ofaEsfGiT5iz8pPaEsfGiT5iz8pbeRP6r2zb5R0LtQ+0JY+NEnzFn5S9IuAh5h2+prRb6+xqxMd988w/AttIk/avbJ+/yjoXVjSnDjB6Pk7epXdYvlpYb1t3aT7HxAPg0H1hoAOw6Kzoi83ExK8WrLxJvO9BERawREQEREBERAREQFXNU5A0s5pCIZh2MFvKSQGq2r2wyG1K0/sC/wDyQHZ9tz+swBnv1Y1XNWZEY/JaWac0cS2zlewMHk3becN605Ffm/yfUCXn/wDB5ffILGiIgIiICIiAiIgwc7J2ODyMgFclleR36rfyQ9Gn3bvU34T8G68dLVfItMYiv2FSr2NOGPsKH7Xj2YByxf5g8G/JssPiAXnQ+djjZjZZpqcsEcWYk5Kcj3tLGsmPjyOLgCB1IOw6qdhhZXiZFGwRxsaGtY0bBoHQAIPtERAREQEREBERAREQEREBERAREQEREBERAREQEXlan8mryS8vNyDfbfbdQ/pR9jfjP0IJ1FBelH2N+M/QnpR9jfjP0IJ1FBelH2N+M/QnpR9jfjP0IJ1V3WeS81MwspzIw0cmUrwP3q9uLXaEsbX8O5zuc3v+rb5V6+lH2N+M/Qo3UGevW6EbMfZGKnbZryunMbZg6JkzHSxcpHTtIw+Pm8W8/MOoCC3ooL0o+xvxn6E9KPsb8Z+hBOooL0o+xvxn6E9KPsb8Z+hBOooL0o+xvxn6E9KPsb8Z+hBOoo/GZXzk6Qdl2fIAfdb7/gUggrmtWtuw4rGlmJnN3IQh1bLO6Ssjd2z+yZ7+VrYy5o8By8x6NKsarccjM1reQsdirdfDQ9k4dmX3KtuQNdtzeEbTCWnYd5wkBOw25rIgIiICIiAiIgIiICIiAiIgIiICIiAiIgIiICIiDEyv7nWP/IVpDiPxLyGjdS6VwOK095/yWoXWmQNddbVjidDGJCXuLXd3lLtyASNujXbrd+V/c6x/5CtLav0Nfz/EzQGoq81ZlLT777rUcrnCR/b1+zZyANIOx8dyOnhugqTPZEvfp6MN0vNJrKTPy6aZp6O60sNyNvaPd5QWgdiItnl/JvsQOXdVnifx41XBw21dHj8IzTmttP5DHV79Z99s0cdezKzs5oZeyIkEgJj6taW7vPiwAytrgTqOveyedxl/Fx6hg1lPqbFCwZHV5IJarK0kE5DeZhc0P6tDttmnr1C88rwF1NrDTXES1nMpi6urtVuoGFlESSUaTaTw+vHzOAe/d4cXu5R7voOiDdOnbWUu4atNmsfXxeTcHdtUqWzajj7xA2lLGF242PuRsTt123Oq9LeyAyWqcBqvUUGko4tO4WDIPZYfl2Gy+Wq5wMM1cM5oHP5HEdXbDqR1G+ztJv1BJhYnamgxtfL8x7SPEzSSwAb9NnSNa7fbx6LTr+Bup9UcQMlm86dOYKvbxV/FWptNibt8qydoZE6y17Wt3iG5B3edz4gdEF3bxc5rHDCLzV+/Zjnb+U/tLak61/E+qe55Pe+O/wAipel/ZI5jMab0pqbMaJbhdK6jsxUoclBlm2pas0rzHEZYuyZtG5+w5g4kbjdoX7pvhVxCGoeFkmcsabZitEiWJ5x8s7p7jTSfWZJs6MNYe80lm5HUnm6AGhcC9A604kcGeGeOyNjBUtC0ZoMo
91Z0z8hb7CZ0kcLmuaGMbztbzODiSB0AQX6x7I/M1MXqXUL9DiTR+ncxaxl/IwZZrrIjgm7N1hlcxDmaB3i3nBHXbm23WfqH2Qd6pPqm5p/R8uo9MaVcY8vlo8gyF4c2Nssza8Jae2Mcbmk7uZueg3WttL6L1vxK09xM0njrWDxuksprHMQZC/KZnZBkJtHtWRRhvZkuAIDi4bbnodt1ds3wX1rjINcae0hkMFW0rq+V8882RE3leNdNCyGx2TGtLZQWs5m8zmcpJ33QWJvGrKZziHNpjS2l4s3BFjKOWOVnyXk0Hk9gv26dk53Ns0FoAPN3tyzYb1vhVxi1PHgeImf15Tp1cBgMplA67Xv9tLCIJABWbEIGBzWtBAkLuZx23aCd1ctA8K59DcQcxk4ZoHYSXBYvD04+dxnb5L2wJeOUDYiRmxBO+x3A6b1iPgjn7VDiNpDIWcXJonVdq/fjuxPlGQrS2dnFpjLezc1j9yDz9RsNkGLon2WOK1HqnF4nJV8Pj4srHNJVnxuo62SfF2cTpXNsxxbGE8jHHcF7dxtvvsoXP8ZdV61t8MMnR07c03pDM6pqirk/OgE9+s6KYtbNWaAWxyAB4Bc4bNG4HRX/AEbo/Xc9GTCa1ZpWbDnHSUZLmHEwuW3OaGCQh7Q2LdvNu0F/V3QgDY0/E8HeJUWP4dadyd/TNrT+jMtVsw3oX2GXLVaCKSKMOjLCxrw1432cQSN9x6w6g0x+yWPtD/mpHOZqrp7FzX7rntrxFoIiidI9znODWtaxoLnOc5wAABJJCjtMfslj7Q/5r9pibPZpmQcblOhRL46zGWWGDIc7GHt3NZuS1veawOcASXuLOkbgGXpzG2sXi2Mv2IbmRkc6WzZgrtgbI8n1MBOwA5Wjck7NG5J6qUREBERAREQEREBERAREQEREBERAREQEREBERAREQeVqDymvJFzcvONt9t9lDei/2T+L/Sp5EED6L/ZP4v8ASnov9k/i/wBKnkQQPov9k/i/0p6L/ZP4v9KnkQQPov8AZP4v9Kei/wBk/i/0qeRBUcziXYiGGwGXLzXTRwPjpwB7mB7w3tC0uBLW77u23IG52OykPRf7J/F/pU3NDHYifFKxssT2lr2PG7XA9CCPWFAabcMFP6OSivWhrx/90sN1009ipGyJr3uEhL943yBjju4bOjJcC/laHp6L/ZP4v9Kei/2T+L/Sp5EED6L/AGT+L/Snov8AZP4v9KnSdvFVosfrmqWyxiPTFqvNBPVswSR2bLu05Qd9xyRFjXHbYl4lad2BpDwxamJGdfdqc7ZMM2V9S9Dbp7tvM7MbtYS7Yx7vLXEg78rm9PFW2KJkEbI42NjjYA1rGjYNA8AAvoANAAGwHgAv1AREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQFFahxE2TrRyUpa9TKVniSrcnqtn7I7jnGxIOz28zDyuadnHYhSqIMHC5ivnsdHdrCVsTy5pZPE6KRjmuLXNc1wBBBBHUepeWaz9bCxlrmutXnxSy18dXLTYtcg3c2NriAT1aNyQ0Fw3I3WFkKmVoZtlvFsN2K9JDFbiuXXMhqRsJ5pYmcjiXOaeUtBALmsOw3e5ZuHwUeKYHyzy5C7vJzXbXKZSHv5ywEAcrAdgGjoA1vjtugxPMEuYtSTZ01rtRtiC1Rx/YbNqPjaCHPcXHtZBLu8O2aG8sfK0OYXunkRAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERAREQEREBERB//2Q==",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langgraph.graph import StateGraph\n",
"from IPython.display import Image, display\n",
"\n",
"workflow = StateGraph(GraphState)\n",
"\n",
"# Define the nodes\n",
"workflow.add_node(\"websearch\", web_search) # web search\n",
"workflow.add_node(\"retrieve\", retrieve) # retrieve\n",
"workflow.add_node(\"grade_documents\", grade_documents) # grade documents\n",
"workflow.add_node(\"generate\", generate) # generate\n",
"\n",
"# Build graph\n",
"workflow.set_conditional_entry_point(\n",
" route_question,\n",
" {\n",
" \"websearch\": \"websearch\",\n",
" \"vectorstore\": \"retrieve\",\n",
" },\n",
")\n",
"workflow.add_edge(\"websearch\", \"generate\")\n",
"workflow.add_edge(\"retrieve\", \"grade_documents\")\n",
"workflow.add_conditional_edges(\n",
" \"grade_documents\",\n",
" decide_to_generate,\n",
" {\n",
" \"websearch\": \"websearch\",\n",
" \"generate\": \"generate\",\n",
" },\n",
")\n",
"workflow.add_conditional_edges(\n",
" \"generate\",\n",
" grade_generation_v_documents_and_question,\n",
" {\n",
" \"not supported\": \"generate\",\n",
" \"useful\": END,\n",
" \"not useful\": \"websearch\",\n",
" \"max retries\": END,\n",
" },\n",
")\n",
"\n",
"# Compile\n",
"graph = workflow.compile()\n",
"display(Image(graph.get_graph().draw_mermaid_png()))"
]
},
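{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run the graph\n",
"\n",
"The next cell runs the compiled graph end-to-end and prints the state after each node, so you can follow routing, document grading, the web-search fallback, and the final grounding checks. As a minimal sketch (not part of the original tutorial), you can also invoke the graph once and read only the final answer; the `question` and `max_retries` values below are illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: a single invocation of the compiled graph (illustrative inputs).\n",
"# The final state stores the answer as an AIMessage under the \"generation\" key.\n",
"inputs = {\"question\": \"What are the types of agent memory?\", \"max_retries\": 3}\n",
"final_state = graph.invoke(inputs)\n",
"print(final_state[\"generation\"].content)"
]
},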
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---ROUTE QUESTION---\n",
"---ROUTE QUESTION TO RAG---\n",
"{'question': 'What are the types of agent memory?', 'max_retries': 3, 'loop_step': 0}\n",
"---RETRIEVE---\n",
"{'question': 'What are the types of agent memory?', 'max_retries': 3, 'loop_step': 0, 'documents': [Document(metadata={'id': '7222226d-772f-4696-b694-25fed7f3df27', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\\n\\n\\nComponent Three: Tool Use\\n\\nCase Studies\\n\\nScientific Discovery Agent\\n\\nGenerative Agents Simulation\\n\\nProof-of-Concept Examples\\n\\n\\nChallenges\\n\\nCitation\\n\\nReferences\\n\\n\\n\\n\\n\\nBuilding agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview#\\nIn a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\n\\nPlanning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\"), Document(metadata={'id': '080b35eb-5a3c-40a0-bc5f-a444086474f5', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:\\n\\nExplicit / declarative memory: This is memory of facts and events, and refers to those memories that can be consciously recalled, including episodic memory (events and experiences) and semantic memory (facts and concepts).\\nImplicit / procedural memory: This type of memory is unconscious and involves skills and routines that are performed automatically, like riding a bike or typing on a keyboard.\\n\\n\\n\\n\\nFig. 8. Categorization of human memory.\\nWe can roughly consider the following mappings:\\n\\nSensory memory as learning embedding representations for raw inputs, including text, image or other modalities;\\nShort-term memory as in-context learning. It is short and finite, as it is restricted by the finite context window length of Transformer.\\nLong-term memory as the external vector store that the agent can attend to at query time, accessible via fast retrieval.\\n\\nMaximum Inner Product Search (MIPS)#\\nThe external memory can alleviate the restriction of finite attention span. A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). 
To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN)\\u200b algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.\\nA couple common choices of ANN algorithms for fast MIPS:\\n\\nLSH (Locality-Sensitive Hashing): It introduces a hashing function such that similar input items are mapped to the same buckets with high probability, where the number of buckets is much smaller than the number of inputs.\\nANNOY (Approximate Nearest Neighbors Oh Yeah): The core data structure are random projection trees, a set of binary trees where each non-leaf node represents a hyperplane splitting the input space into half and each leaf stores one data point. Trees are built independently and at random, so to some extent, it mimics a hashing function. ANNOY search happens in all the trees to iteratively search through the half that is closest to the query and then aggregates the results. The idea is quite related to KD tree but a lot more scalable.\\nHNSW (Hierarchical Navigable Small World): It is inspired by the idea of small world networks where most nodes can be reached by any other nodes within a small number of steps; e.g. “six degrees of separation” feature of social networks. HNSW builds hierarchical layers of these small-world graphs, where the bottom layers contain the actual data points. The layers in the middle create shortcuts to speed up search. When performing a search, HNSW starts from a random node in the top layer and navigates towards the target. When it can’t get any closer, it moves down to the next layer, until it reaches the bottom layer. Each move in the upper layers can potentially cover a large distance in the data space, and each move in the lower layers refines the search quality.\\nFAISS (Facebook AI Similarity Search): It operates on the assumption that in high dimensional space, distances between nodes follow a Gaussian distribution and thus there should exist clustering of data points. FAISS applies vector quantization by partitioning the vector space into clusters and then refining the quantization within clusters. Search first looks for cluster candidates with coarse quantization and then further looks into each cluster with finer quantization.\\nScaNN (Scalable Nearest Neighbors): The main innovation in ScaNN is anisotropic vector quantization. It quantizes a data point $x_i$ to $\\\\tilde{x}_i$ such that the inner product $\\\\langle q, x_i \\\\rangle$ is as similar to the original distance of $\\\\angle q, \\\\tilde{x}_i$ as possible, instead of picking the closet quantization centroid points.\\n\\n\\nFig. 9. Comparison of MIPS algorithms, measured in recall@10. (Image source: Google Blog, 2020)\\nCheck more MIPS algorithms and performance comparison in ann-benchmarks.com.\\nComponent Three: Tool Use#\\nTool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.'), Document(metadata={'id': 'fa10df4b-5401-4ec0-a1c4-eb57454eeee0', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\\n\\nPrompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\\n\\n\\nPlanning & Reacting: translate the reflections and the environment information into actions\\n\\nPlanning is essentially in order to optimize believability at the moment vs in time.\\nPrompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1)\\nRelationships between agents and observations of one agent by another are all taken into consideration for planning and reacting.\\nEnvironment information is present in a tree structure.\\n\\n\\n\\n\\nFig. 13. The generative agent architecture. (Image source: Park et al. 2023)\\nThis fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\nProof-of-Concept Examples#\\nAutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\nYou are {{ai-name}}, {{user-provided AI bot description}}.\\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\\n\\nGOALS:\\n\\n1. {{user-provided goal 1}}\\n2. {{user-provided goal 2}}\\n3. ...\\n4. ...\\n5. ...\\n\\nConstraints:\\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\\n3. No user assistance\\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\\n5. Use subprocesses for commands that will not terminate within a few minutes'), Document(metadata={'id': '58a746d5-0cd7-4b13-8b6e-278534c50163', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Planning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\\n\\n\\n\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#\\nSelf-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. 
It plays a crucial role in real-world tasks where trial and error are inevitable.\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\nThought: ...\\nAction: ...\\nObservation: ...\\n... (Repeated many times)')]}\n",
"---CHECK DOCUMENT RELEVANCE TO QUESTION---\n",
"---GRADE: DOCUMENT RELEVANT---\n",
"---GRADE: DOCUMENT NOT RELEVANT---\n",
"---GRADE: DOCUMENT RELEVANT---\n",
"---GRADE: DOCUMENT RELEVANT---\n",
"---ASSESS GRADED DOCUMENTS---\n",
"---DECISION: NOT ALL DOCUMENTS ARE RELEVANT TO QUESTION, INCLUDE WEB SEARCH---\n",
"{'question': 'What are the types of agent memory?', 'web_search': 'Yes', 'max_retries': 3, 'loop_step': 0, 'documents': [Document(metadata={'id': '7222226d-772f-4696-b694-25fed7f3df27', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\\n\\n\\nComponent Three: Tool Use\\n\\nCase Studies\\n\\nScientific Discovery Agent\\n\\nGenerative Agents Simulation\\n\\nProof-of-Concept Examples\\n\\n\\nChallenges\\n\\nCitation\\n\\nReferences\\n\\n\\n\\n\\n\\nBuilding agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview#\\nIn a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\n\\nPlanning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\"), Document(metadata={'id': 'fa10df4b-5401-4ec0-a1c4-eb57454eeee0', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\\n\\nPrompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\\n\\n\\nPlanning & Reacting: translate the reflections and the environment information into actions\\n\\nPlanning is essentially in order to optimize believability at the moment vs in time.\\nPrompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1)\\nRelationships between agents and observations of one agent by another are all taken into consideration for planning and reacting.\\nEnvironment information is present in a tree structure.\\n\\n\\n\\n\\nFig. 13. The generative agent architecture. (Image source: Park et al. 2023)\\nThis fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\nProof-of-Concept Examples#\\nAutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. 
A lot of code in AutoGPT is about format parsing.\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\nYou are {{ai-name}}, {{user-provided AI bot description}}.\\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\\n\\nGOALS:\\n\\n1. {{user-provided goal 1}}\\n2. {{user-provided goal 2}}\\n3. ...\\n4. ...\\n5. ...\\n\\nConstraints:\\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\\n3. No user assistance\\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\\n5. Use subprocesses for commands that will not terminate within a few minutes'), Document(metadata={'id': '58a746d5-0cd7-4b13-8b6e-278534c50163', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Planning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\\n\\n\\n\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. 
The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#\\nSelf-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\nThought: ...\\nAction: ...\\nObservation: ...\\n... (Repeated many times)')]}\n",
"---WEB SEARCH---\n",
"{'question': 'What are the types of agent memory?', 'web_search': 'Yes', 'max_retries': 3, 'loop_step': 0, 'documents': [Document(metadata={'id': '7222226d-772f-4696-b694-25fed7f3df27', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\\n\\n\\nComponent Three: Tool Use\\n\\nCase Studies\\n\\nScientific Discovery Agent\\n\\nGenerative Agents Simulation\\n\\nProof-of-Concept Examples\\n\\n\\nChallenges\\n\\nCitation\\n\\nReferences\\n\\n\\n\\n\\n\\nBuilding agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview#\\nIn a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\n\\nPlanning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\"), Document(metadata={'id': 'fa10df4b-5401-4ec0-a1c4-eb57454eeee0', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\\n\\nPrompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\\n\\n\\nPlanning & Reacting: translate the reflections and the environment information into actions\\n\\nPlanning is essentially in order to optimize believability at the moment vs in time.\\nPrompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1)\\nRelationships between agents and observations of one agent by another are all taken into consideration for planning and reacting.\\nEnvironment information is present in a tree structure.\\n\\n\\n\\n\\nFig. 13. The generative agent architecture. (Image source: Park et al. 2023)\\nThis fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\nProof-of-Concept Examples#\\nAutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. 
A lot of code in AutoGPT is about format parsing.\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\nYou are {{ai-name}}, {{user-provided AI bot description}}.\\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\\n\\nGOALS:\\n\\n1. {{user-provided goal 1}}\\n2. {{user-provided goal 2}}\\n3. ...\\n4. ...\\n5. ...\\n\\nConstraints:\\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\\n3. No user assistance\\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\\n5. Use subprocesses for commands that will not terminate within a few minutes'), Document(metadata={'id': '58a746d5-0cd7-4b13-8b6e-278534c50163', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Planning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\\n\\n\\n\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. 
The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#\\nSelf-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\nThought: ...\\nAction: ...\\nObservation: ...\\n... (Repeated many times)'), Document(metadata={}, page_content='Sign up for Latest SuperAGI Updates\\n\"*\" indicates required fields\\nSuperAGI builds infrastructure components, tools, frameworks and models to enable opensource AGI\\ncommunity@superagi.com\\nFor Developers\\nDocs\\nGitHub\\nReleases\\nRoadmap\\nAPIs\\nCommunity\\nSupport Forum\\nMarketplace\\nSocial Mentions\\nReddit\\nCollectibles\\nResources\\nBlog\\nUse Cases\\nAGI Research Lab\\nTutorials\\nImportant Links\\nSuperAGI Cloud\\nApp Spotlight\\nSuperCoder\\nArchitecture\\n Check it out✨\\nFeatures\\nAction Console\\nResource Manager\\nTrajectory Fine-Tuning\\n\\u200c\\nMultiple Vector DBs\\nMulti-LLM Support\\nAgent Workflows\\nMarketplace\\nAgent Templates\\nDiscord\\nGitHub\\nTwitter\\nReddit\\nYoutube\\nTowards AGI (part 1): Agents with Memory\\nFebruary 6, 2024\\n7 mins read\\nAgents are an emerging class of artificial intelligence (AI) systems that use large language models (LLMs) to interact with the world. In a professional setup like this, the agent is responsible for extracting tasks from conversations and passing them to an employee and once the employee completes the task the agent will convey the output to the user. Then use the solutions of those basic tasks as in-context examples to solve the current task.\\nConclusion & Next Steps\\nIn this blog, we saw that design choices for Memory depend on the end use case. 
This analogy is better captured in the following table:\\nDeep dive into various types of Agent Memory\\nChoosing the right Memory design in Production\\nSince agents are powered by LLMs, they are inherently probabilistic.\\nA Survey on the Memory Mechanism of Large Language Model based Agents Zeyu Zhang1, Xiaohe Bo1, Chen Ma1, Rui Li1, Xu Chen1, Quanyu Dai2, Jieming Zhu2, Zhenhua Dong2, Ji-Rong Wen1 1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2Huawei Noah’s Ark Lab, China zeyuzhang@ruc.edu.cn, xu.chen@ruc.edu.cn Abstract Large language model (LLM) based agents have recently attracted much attention from the research and industry communities. Some previous works also apply LLM-based agents in the finance domain, whose memory can store financial knowledge [113], market information [154, 156], and successful experi-ences [157, 155]. Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory.\\nThe key component to support agent-environment interactions is the memory of the agents. While previous studies have proposed many promising memory mechanisms, they are scattered in different papers, and there lacks a systematical review to summarize and compare these works from a holistic perspective, failing to abstract common and effective ...\\nThe memory module can help the agent to gather experiences, self-learn, and act in a more reasonable and effective manner. Short-term memory keeps and preserves relevant information in the symbolic forms, guaranteeing its accessibility in the decision process.\\nMoreover we highlight open problems, such as the separation of different types of memories and the management of memory over the agent\\'s lifetime. Lastly, we propose several topics for future research to address these challenges and further enhance the capabilities of LLM agents, including the use of metadata in procedural and semantic memory ...')]}\n",
"---GENERATE---\n",
"---CHECK HALLUCINATIONS---\n",
"---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
"---GRADE GENERATION vs QUESTION---\n",
"---DECISION: GENERATION ADDRESSES QUESTION---\n",
"{'question': 'What are the types of agent memory?', 'generation': AIMessage(content='There are two main types of agent memory mentioned in the context: short-term memory, which keeps and preserves relevant information in symbolic forms for accessibility in decision-making, and long-term memory, which is not explicitly defined but implied to be a more general form of memory that can store experiences, knowledge, and successful examples. Additionally, procedural memory is also hinted at as a potential type of agent memory. These types of memories are essential for agents to gather experiences, self-learn, and act in a more reasonable and effective manner.', additional_kwargs={}, response_metadata={'model': 'llama3.2:3b-instruct-fp16', 'created_at': '2024-09-29T03:38:16.742724Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 5998996834, 'load_duration': 30563000, 'prompt_eval_count': 1026, 'prompt_eval_duration': 1370970000, 'eval_count': 106, 'eval_duration': 4595573000}, id='run-2472e908-197f-4ff3-bfc8-4d495d6e5c4e-0', usage_metadata={'input_tokens': 1026, 'output_tokens': 106, 'total_tokens': 1132}), 'web_search': 'Yes', 'max_retries': 3, 'loop_step': 1, 'documents': [Document(metadata={'id': '7222226d-772f-4696-b694-25fed7f3df27', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content=\"LLM Powered Autonomous Agents | Lil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nLil'Log\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nPosts\\n\\n\\n\\n\\nArchive\\n\\n\\n\\n\\nSearch\\n\\n\\n\\n\\nTags\\n\\n\\n\\n\\nFAQ\\n\\n\\n\\n\\nemojisearch.app\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n LLM Powered Autonomous Agents\\n \\nDate: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng\\n\\n\\n \\n\\n\\nTable of Contents\\n\\n\\n\\nAgent System Overview\\n\\nComponent One: Planning\\n\\nTask Decomposition\\n\\nSelf-Reflection\\n\\n\\nComponent Two: Memory\\n\\nTypes of Memory\\n\\nMaximum Inner Product Search (MIPS)\\n\\n\\nComponent Three: Tool Use\\n\\nCase Studies\\n\\nScientific Discovery Agent\\n\\nGenerative Agents Simulation\\n\\nProof-of-Concept Examples\\n\\n\\nChallenges\\n\\nCitation\\n\\nReferences\\n\\n\\n\\n\\n\\nBuilding agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. 
The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview#\\nIn a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:\\n\\nPlanning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\"), Document(metadata={'id': 'fa10df4b-5401-4ec0-a1c4-eb57454eeee0', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Reflection mechanism: synthesizes memories into higher level inferences over time and guides the agent’s future behavior. They are higher-level summaries of past events (<- note that this is a bit different from self-reflection above)\\n\\nPrompt LM with 100 most recent observations and to generate 3 most salient high-level questions given a set of observations/statements. Then ask LM to answer those questions.\\n\\n\\nPlanning & Reacting: translate the reflections and the environment information into actions\\n\\nPlanning is essentially in order to optimize believability at the moment vs in time.\\nPrompt template: {Intro of an agent X}. Here is X\\'s plan today in broad strokes: 1)\\nRelationships between agents and observations of one agent by another are all taken into consideration for planning and reacting.\\nEnvironment information is present in a tree structure.\\n\\n\\n\\n\\nFig. 13. The generative agent architecture. (Image source: Park et al. 2023)\\nThis fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).\\nProof-of-Concept Examples#\\nAutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. 
A lot of code in AutoGPT is about format parsing.\\nHere is the system message used by AutoGPT, where {{...}} are user inputs:\\nYou are {{ai-name}}, {{user-provided AI bot description}}.\\nYour decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications.\\n\\nGOALS:\\n\\n1. {{user-provided goal 1}}\\n2. {{user-provided goal 2}}\\n3. ...\\n4. ...\\n5. ...\\n\\nConstraints:\\n1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.\\n2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.\\n3. No user assistance\\n4. Exclusively use the commands listed in double quotes e.g. \"command name\"\\n5. Use subprocesses for commands that will not terminate within a few minutes'), Document(metadata={'id': '58a746d5-0cd7-4b13-8b6e-278534c50163', 'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'title': \"LLM Powered Autonomous Agents | Lil'Log\", 'description': 'Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.\\nAgent System Overview In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:', 'language': 'en'}, page_content='Planning\\n\\nSubgoal and decomposition: The agent breaks down large tasks into smaller, manageable subgoals, enabling efficient handling of complex tasks.\\nReflection and refinement: The agent can do self-criticism and self-reflection over past actions, learn from mistakes and refine them for future steps, thereby improving the quality of final results.\\n\\n\\nMemory\\n\\nShort-term memory: I would consider all the in-context learning (See Prompt Engineering) as utilizing short-term memory of the model to learn.\\nLong-term memory: This provides the agent with the capability to retain and recall (infinite) information over extended periods, often by leveraging an external vector store and fast retrieval.\\n\\n\\nTool use\\n\\nThe agent learns to call external APIs for extra information that is missing from the model weights (often hard to change after pre-training), including current information, code execution capability, access to proprietary information sources and more.\\n\\n\\n\\n\\nFig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. 
The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#\\nSelf-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.\\nReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.\\nThe ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:\\nThought: ...\\nAction: ...\\nObservation: ...\\n... (Repeated many times)'), Document(metadata={}, page_content='Sign up for Latest SuperAGI Updates\\n\"*\" indicates required fields\\nSuperAGI builds infrastructure components, tools, frameworks and models to enable opensource AGI\\ncommunity@superagi.com\\nFor Developers\\nDocs\\nGitHub\\nReleases\\nRoadmap\\nAPIs\\nCommunity\\nSupport Forum\\nMarketplace\\nSocial Mentions\\nReddit\\nCollectibles\\nResources\\nBlog\\nUse Cases\\nAGI Research Lab\\nTutorials\\nImportant Links\\nSuperAGI Cloud\\nApp Spotlight\\nSuperCoder\\nArchitecture\\n Check it out✨\\nFeatures\\nAction Console\\nResource Manager\\nTrajectory Fine-Tuning\\n\\u200c\\nMultiple Vector DBs\\nMulti-LLM Support\\nAgent Workflows\\nMarketplace\\nAgent Templates\\nDiscord\\nGitHub\\nTwitter\\nReddit\\nYoutube\\nTowards AGI (part 1): Agents with Memory\\nFebruary 6, 2024\\n7 mins read\\nAgents are an emerging class of artificial intelligence (AI) systems that use large language models (LLMs) to interact with the world. In a professional setup like this, the agent is responsible for extracting tasks from conversations and passing them to an employee and once the employee completes the task the agent will convey the output to the user. Then use the solutions of those basic tasks as in-context examples to solve the current task.\\nConclusion & Next Steps\\nIn this blog, we saw that design choices for Memory depend on the end use case. 
This analogy is better captured in the following table:\\nDeep dive into various types of Agent Memory\\nChoosing the right Memory design in Production\\nSince agents are powered by LLMs, they are inherently probabilistic.\\nA Survey on the Memory Mechanism of Large Language Model based Agents Zeyu Zhang1, Xiaohe Bo1, Chen Ma1, Rui Li1, Xu Chen1, Quanyu Dai2, Jieming Zhu2, Zhenhua Dong2, Ji-Rong Wen1 1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2Huawei Noah’s Ark Lab, China zeyuzhang@ruc.edu.cn, xu.chen@ruc.edu.cn Abstract Large language model (LLM) based agents have recently attracted much attention from the research and industry communities. Some previous works also apply LLM-based agents in the finance domain, whose memory can store financial knowledge [113], market information [154, 156], and successful experi-ences [157, 155]. Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory.\\nThe key component to support agent-environment interactions is the memory of the agents. While previous studies have proposed many promising memory mechanisms, they are scattered in different papers, and there lacks a systematical review to summarize and compare these works from a holistic perspective, failing to abstract common and effective ...\\nThe memory module can help the agent to gather experiences, self-learn, and act in a more reasonable and effective manner. Short-term memory keeps and preserves relevant information in the symbolic forms, guaranteeing its accessibility in the decision process.\\nMoreover we highlight open problems, such as the separation of different types of memories and the management of memory over the agent\\'s lifetime. Lastly, we propose several topics for future research to address these challenges and further enhance the capabilities of LLM agents, including the use of metadata in procedural and semantic memory ...')]}\n"
]
}
],
"source": [
"inputs = {\"question\": \"What are the types of agent memory?\", \"max_retries\": 3}\n",
"for event in graph.stream(inputs, stream_mode=\"values\"):\n",
" print(event)"
]
},
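{
"cell_type": "markdown",
"metadata": {},
"source": [
"The run above prints the full graph state after every node because `stream_mode=\"values\"` emits the whole state at each step, which is why the output is so verbose. If you only want the final answer, you can keep the last streamed event and read its `generation` field. The cell below is a minimal sketch, assuming the same compiled `graph` and the state keys visible in the output above (`question`, `generation`, `documents`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: keep only the final streamed state and print the answer\n",
"inputs = {\"question\": \"What are the types of agent memory?\", \"max_retries\": 3}\n",
"\n",
"final_state = None\n",
"for event in graph.stream(inputs, stream_mode=\"values\"):\n",
"    final_state = event  # each event is the full state after a node finishes\n",
"\n",
"# 'generation' is an AIMessage, so read .content for the answer text\n",
"print(final_state[\"generation\"].content)"
]
},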
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---ROUTE QUESTION---\n",
"---ROUTE QUESTION TO WEB SEARCH---\n",
"{'question': 'What are the models released today for llama3.2?', 'max_retries': 3, 'loop_step': 0}\n",
"---WEB SEARCH---\n",
"{'question': 'What are the models released today for llama3.2?', 'max_retries': 3, 'loop_step': 0, 'documents': [Document(metadata={}, page_content='Meta’s Llama 3.2 models now available on watsonx, including multimodal 11B and 90B models | IBM Meta Llama 3.2 models now available on watsonx, including multimodal 11B and 90B models IBM is announcing the availability of multiple Llama 3.2 models on watsonx.ai, IBM’s enterprise studio for AI developers, following the launch of the Llama 3.2 collection of pretrained and instruction tuned multilingual large language models (LLMs) at MetaConnect earlier today. Most notably, Llama 3.2 marks Meta’s first foray into multimodal AI: the release includes two models, in sizes of 11B and 90B, that can take images as input. The instruction-tuned Llama 3.2 90B Vision and 11B Vision models are immediately available in watsonx.ai through SaaS.\\nMeta Releases Llama 3.2—and Gives Its AI a Voice | WIRED Meta Releases Llama 3.2—and Gives Its AI a Voice Meta Releases Llama 3.2—and Gives Its AI a Voice Meta today also announced Llama 3.2, the first version of its free AI models to have visual abilities, broadening their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Powering Meta AI’s new capabilities is an upgraded version of Llama, Meta’s premier large language model. “With Llama 3.1, Meta showed that open models could finally close the gap with their proprietary counterparts,” says Nathan Benaich, founder and general partner of Air Street Capital, and the author of an influential yearly report on AI.\\nLlama 3.2 Meta\\'s New generation Models Vertex AI | Google Cloud Blog Today, we’re announcing that Llama 3.2, Meta’s new generation of multimodal models, is available on Vertex AI Model Garden. All four Llama 3.2 models are ready for self-service deployment through Vertex AI Model Garden, starting today. \"Using Llama 3.1 on Google Cloud Vertex AI has made high-quality data generation easier and more efficient for Shopify. \"We\\'re thrilled to partner with Google Cloud, bringing the power of Vertex AI and Llama 3.1 to our BMC Helix platform,\" said Margaret Lee, GM and SVP of the Digital Service and Operations Management Business Unit at BMC. Operate within your enterprise guardrails: Deploy with confidence with not only support for Meta’s Llama Guard for the models, but also Google Cloud\\'s built-in security, privacy, and compliance measures.\\nMeta Releases Llama 3.2 Models with Vision Capability For the First Time | Beebom Home > News > Meta Releases Llama 3.2 Models with Vision Capability For the First Time Meta Releases Llama 3.2 Models with Vision Capability For the First Time You can start using Llama 3.2 11B and 90B vision models through the Meta AI chatbot on the web, WhatsApp, Facebook, Instagram, and Messenger. Llama 3.2 Models Give Vision to Meta AI These new Llama 3.2 11B and 90B vision models will be available through the Meta AI chatbot on the web, WhatsApp, Instagram, Facebook, and Messenger. 
Meta Releases Llama 3.2 Models with Vision Capability For the First Time\\nMeta’s new Llama 3.2 models available on Azure AI Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog In collaboration with Meta, Microsoft is excited to announce that Meta’s new Llama 3.2 models are now available on the Azure AI Model Catalog. Developers using Meta Llama 3 models can work seamlessly with tools in Azure AI Studio, such as Azure AI Content Safety, Azure AI Search, and prompt flow to enhance ethical and effective AI practices.')]}\n",
"---GENERATE---\n",
"---CHECK HALLUCINATIONS---\n",
"---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---\n",
"---GRADE GENERATION vs QUESTION---\n",
"---DECISION: GENERATION ADDRESSES QUESTION---\n",
"{'question': 'What are the models released today for llama3.2?', 'generation': AIMessage(content='The Llama 3.2 models released today include multimodal models in sizes of 11B and 90B, which can take images as input. These instruction-tuned models are available on watsonx.ai through SaaS, including the Llama 3.2 90B Vision and 11B Vision models. Additionally, these models are also available on Vertex AI Model Garden and Azure AI Model Catalog for self-service deployment.', additional_kwargs={}, response_metadata={'model': 'llama3.2:3b-instruct-fp16', 'created_at': '2024-09-29T03:38:35.771051Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 5167497834, 'load_duration': 26749084, 'prompt_eval_count': 955, 'prompt_eval_duration': 1294945000, 'eval_count': 90, 'eval_duration': 3842945000}, id='run-5fb2be9c-8050-4f08-8510-182abb9feb3f-0', usage_metadata={'input_tokens': 955, 'output_tokens': 90, 'total_tokens': 1045}), 'max_retries': 3, 'loop_step': 1, 'documents': [Document(metadata={}, page_content='Meta’s Llama 3.2 models now available on watsonx, including multimodal 11B and 90B models | IBM Meta Llama 3.2 models now available on watsonx, including multimodal 11B and 90B models IBM is announcing the availability of multiple Llama 3.2 models on watsonx.ai, IBM’s enterprise studio for AI developers, following the launch of the Llama 3.2 collection of pretrained and instruction tuned multilingual large language models (LLMs) at MetaConnect earlier today. Most notably, Llama 3.2 marks Meta’s first foray into multimodal AI: the release includes two models, in sizes of 11B and 90B, that can take images as input. The instruction-tuned Llama 3.2 90B Vision and 11B Vision models are immediately available in watsonx.ai through SaaS.\\nMeta Releases Llama 3.2—and Gives Its AI a Voice | WIRED Meta Releases Llama 3.2—and Gives Its AI a Voice Meta Releases Llama 3.2—and Gives Its AI a Voice Meta today also announced Llama 3.2, the first version of its free AI models to have visual abilities, broadening their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Powering Meta AI’s new capabilities is an upgraded version of Llama, Meta’s premier large language model. “With Llama 3.1, Meta showed that open models could finally close the gap with their proprietary counterparts,” says Nathan Benaich, founder and general partner of Air Street Capital, and the author of an influential yearly report on AI.\\nLlama 3.2 Meta\\'s New generation Models Vertex AI | Google Cloud Blog Today, we’re announcing that Llama 3.2, Meta’s new generation of multimodal models, is available on Vertex AI Model Garden. All four Llama 3.2 models are ready for self-service deployment through Vertex AI Model Garden, starting today. \"Using Llama 3.1 on Google Cloud Vertex AI has made high-quality data generation easier and more efficient for Shopify. \"We\\'re thrilled to partner with Google Cloud, bringing the power of Vertex AI and Llama 3.1 to our BMC Helix platform,\" said Margaret Lee, GM and SVP of the Digital Service and Operations Management Business Unit at BMC. 
Operate within your enterprise guardrails: Deploy with confidence with not only support for Meta’s Llama Guard for the models, but also Google Cloud\\'s built-in security, privacy, and compliance measures.\\nMeta Releases Llama 3.2 Models with Vision Capability For the First Time | Beebom Home > News > Meta Releases Llama 3.2 Models with Vision Capability For the First Time Meta Releases Llama 3.2 Models with Vision Capability For the First Time You can start using Llama 3.2 11B and 90B vision models through the Meta AI chatbot on the web, WhatsApp, Facebook, Instagram, and Messenger. Llama 3.2 Models Give Vision to Meta AI These new Llama 3.2 11B and 90B vision models will be available through the Meta AI chatbot on the web, WhatsApp, Instagram, Facebook, and Messenger. Meta Releases Llama 3.2 Models with Vision Capability For the First Time\\nMeta’s new Llama 3.2 models available on Azure AI Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog In collaboration with Meta, Microsoft is excited to announce that Meta’s new Llama 3.2 models are now available on the Azure AI Model Catalog. Developers using Meta Llama 3 models can work seamlessly with tools in Azure AI Studio, such as Azure AI Content Safety, Azure AI Search, and prompt flow to enhance ethical and effective AI practices.')]}\n"
]
}
],
"source": [
"# Test on current events\n",
"inputs = {\"question\": \"What are the models released today for llama3.2?\", \"max_retries\": 3}\n",
"for event in graph.stream(inputs, stream_mode=\"values\"):\n",
" print(event)"
]
},
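{
"cell_type": "markdown",
"metadata": {},
"source": [
"The trace above shows the router sending this current-events question to web search instead of the vectorstore, since the indexed blog posts do not cover the Llama 3.2 release. If you prefer a single final result rather than streaming every intermediate state, you can call `graph.invoke`, which returns the final state dict. The cell below is a minimal sketch under the same assumptions as the streaming examples above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: run the graph once and read the final state directly\n",
"inputs = {\"question\": \"What are the models released today for llama3.2?\", \"max_retries\": 3}\n",
"\n",
"final_state = graph.invoke(inputs)\n",
"print(final_state[\"generation\"].content)"
]
}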
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}