


# Store and retrieve conversation history and context with the BedrockSessionSaver LangGraph library
<a name="sessions-opensource-library"></a>

Instead of calling the Amazon Bedrock session management APIs directly, you can use the `BedrockSessionSaver` library to store and retrieve conversation history and context in LangGraph. It is a custom implementation of the LangGraph CheckpointSaver that uses the Amazon Bedrock APIs behind a LangGraph-style interface. For more information, see [langgraph-checkpoint-aws](https://github.com/langchain-ai/langchain-aws/tree/main/libs/langgraph-checkpoint-aws) in the [LangChain](https://github.com/langchain-ai) GitHub repository.

The following code example shows how to use the BedrockSessionSaver LangGraph library to track state as a user interacts with Claude. To use this code example:
+ Install the required dependencies:
  + boto3
  + langgraph
  + langgraph-checkpoint-aws
  + langchain-core
+ Make sure that you have access to the Claude 3.5 Sonnet v2 model in your account. Alternatively, you can modify the code to use a different model.
+ Replace `REGION` with your Region.
  + The Region must be the same for both the runtime client and the BedrockSessionSaver.
  + The Region must also support Claude 3.5 Sonnet v2 (or the model that you are using).
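The dependencies listed above can be installed with pip. A typical install command (package names as listed, versions unpinned) looks like:

```shell
pip install boto3 langgraph langgraph-checkpoint-aws langchain-core
```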

```python
import boto3
from typing import Dict, TypedDict, Annotated, Sequence, Union
from langgraph.graph import StateGraph, END
from langgraph_checkpoint_aws.saver import BedrockSessionSaver
from langchain_core.messages import HumanMessage, AIMessage
import json


# Define state structure
class State(TypedDict):
    messages: Sequence[Union[HumanMessage, AIMessage]]
    current_question: str


# Function to get response from Claude
def get_response(messages):
    # Use the same Region as the session saver and the agent runtime client
    bedrock = boto3.client('bedrock-runtime', region_name="REGION")
    prompt = "\n".join([f"{'Human' if isinstance(m, HumanMessage) else 'Assistant'}: {m.content}"
                        for m in messages])

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1000,
            "messages": [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": prompt
                        }
                    ]
                }
            ],
            "temperature": 0.7
        })
    )

    response_body = json.loads(response['body'].read())
    return response_body['content'][0]['text']


# Node function to process user question
def process_question(state: State) -> Dict:
    messages = list(state["messages"])
    messages.append(HumanMessage(content=state["current_question"]))

    # Get response from Claude
    response = get_response(messages)
    messages.append(AIMessage(content=response))

    # Print assistant's response
    print("\nAssistant:", response)

    # Get next user input
    next_question = input("\nYou: ").strip()

    return {
        "messages": messages,
        "current_question": next_question
    }


# Node function to check if conversation should continue
def should_continue(state: State) -> bool:
    # End the conversation when the user types 'quit'
    if state["current_question"].lower() == 'quit':
        return False
    return True


# Create the graph
def create_graph(session_saver):
    # Initialize state graph
    workflow = StateGraph(State)

    # Add nodes
    workflow.add_node("process_question", process_question)

    # Add conditional edges
    workflow.add_conditional_edges(
        "process_question",
        should_continue,
        {
            True: "process_question",
            False: END
        }
    )

    # Set entry point
    workflow.set_entry_point("process_question")

    return workflow.compile(checkpointer=session_saver)


def main():
    # Create a runtime client
    agent_run_time_client = boto3.client("bedrock-agent-runtime",
                                         region_name="REGION")

    # Initialize the Bedrock session saver. The Region must match the Region used for agent_run_time_client.
    session_saver = BedrockSessionSaver(region_name="REGION")

    # Create graph
    graph = create_graph(session_saver)

    # Create session
    session_id = agent_run_time_client.create_session()["sessionId"]
    print("Session started. Type 'quit' to end.")

    # Configure graph
    config = {"configurable": {"thread_id": session_id}}

    # Initial state
    state = {
        "messages": [],
        "current_question": "Hello! How can I help you today? (Type 'quit' to end)"
    }

    # Print initial greeting
    print(f"\nAssistant: {state['current_question']}")

    state["current_question"] = input("\nYou: ").strip()

    # Process the question through the graph
    graph.invoke(state, config)
    print("\nSession contents:")
    for i in graph.get_state_history(config, limit=3):
        print(i)


if __name__ == "__main__":
    main()
```
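The `get_response` function above flattens the LangGraph message list into a single `Human:`/`Assistant:` transcript before sending it to the model as one prompt. That step can be sketched in isolation; the `HumanMsg` and `AIMsg` classes below are hypothetical stand-ins for `HumanMessage` and `AIMessage`, used only so the snippet runs without LangChain installed:

```python
# Minimal stand-ins for HumanMessage and AIMessage (illustration only)
class HumanMsg:
    def __init__(self, content):
        self.content = content


class AIMsg:
    def __init__(self, content):
        self.content = content


def flatten_messages(messages):
    """Join a message list into the Human:/Assistant: transcript
    that get_response sends to the model as a single prompt."""
    return "\n".join(
        f"{'Human' if isinstance(m, HumanMsg) else 'Assistant'}: {m.content}"
        for m in messages
    )


transcript = flatten_messages([
    HumanMsg("Hello!"),
    AIMsg("Hi, how can I help?"),
    HumanMsg("What is LangGraph?"),
])
print(transcript)
# Human: Hello!
# Assistant: Hi, how can I help?
# Human: What is LangGraph?
```

Because the whole history is re-sent on every turn, the prompt grows with the conversation; for long sessions you may want to truncate or summarize older messages before flattening.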