LangChain Guide: From Chain to Agent
LangChain is the most popular framework for LLM applications. This guide takes you from simple chains to production-ready agents.
TL;DR
LangChain abstracts LLM interactions into composable components: Models, Prompts, Chains, Retrievers, and Agents. LCEL (LangChain Expression Language) is the modern way to build pipelines.
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Explain {topic} briefly")
model = ChatAnthropic(model="claude-sonnet-4-20250514")
chain = prompt | model  # LCEL: pipe operator

result = chain.invoke({"topic": "microservices"})
print(result.content)
```

Prerequisites
```bash
pip install langchain langchain-anthropic langchain-community chromadb
```

- Python 3.11+
- Anthropic API key (or OpenAI)
- Familiarity with the Claude API
Why LangChain?
You can call the Claude API directly (and you should for simple use cases). But as your application grows, you will need:
- Composability - Build complex pipelines from simple components
- Swappable models - Switch between Claude, GPT-4, and Llama without code changes (see the sketch after this list)
- Built-in integrations - 700+ integrations for databases, APIs, and tools
- Observability - LangSmith for debugging and monitoring
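Swapping the underlying model really is a one-line change. A minimal sketch, assuming you also have langchain-openai installed (it is not in the prerequisites above); the prompt and parser are reused unchanged:

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI  # assumption: pip install langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} briefly")
parser = StrOutputParser()

# Same prompt and parser, different model providers
claude_chain = prompt | ChatAnthropic(model="claude-sonnet-4-20250514") | parser
gpt_chain = prompt | ChatOpenAI(model="gpt-4o") | parser
```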
LangChain is not always the right choice, though. For simple API calls it is overkill. Use it when you need chains, RAG, or agents.
LCEL: LangChain Expression Language
LCEL is the modern way to build LangChain pipelines. You compose components with the pipe operator (|):
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Components
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a technical writer. Be concise and precise."),
    ("user", "{input}")
])
model = ChatAnthropic(model="claude-sonnet-4-20250514")
parser = StrOutputParser()

# Chain them with the pipe operator
chain = prompt | model | parser

# Invoke
result = chain.invoke({"input": "Explain Docker in 3 sentences"})
print(result)

# Stream
for chunk in chain.stream({"input": "What is Kubernetes?"}):
    print(chunk, end="", flush=True)

# Batch (parallel execution)
results = chain.batch([
    {"input": "What is React?"},
    {"input": "What is Vue?"},
    {"input": "What is Svelte?"},
])
```

Custom LCEL Chains
You can build more advanced chains with RunnableLambda and RunnablePassthrough:
```python
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Custom processing step
def format_docs(docs: list[str]) -> str:
    return "\n".join(f"- {doc}" for doc in docs)

# Chain with custom logic
chain = (
    RunnablePassthrough.assign(
        formatted=lambda x: format_docs(x["docs"])
    )
    | ChatPromptTemplate.from_template(
        "Based on these documents:\n{formatted}\n\nAnswer: {question}"
    )
    | model
    | StrOutputParser()
)

result = chain.invoke({
    "docs": ["Python is dynamically typed", "Rust is statically typed"],
    "question": "What is the difference?"
})
```
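The example above only uses RunnablePassthrough. RunnableLambda is the complementary piece: it wraps an ordinary Python function so it composes with the pipe operator like any other component. A minimal sketch, reusing the model from the earlier examples:

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.output_parsers import StrOutputParser

# A plain Python function we want as a post-processing step
def shout(text: str) -> str:
    return text.upper()

shout_chain = (
    ChatPromptTemplate.from_template("Explain {topic} in one sentence")
    | model
    | StrOutputParser()
    | RunnableLambda(shout)  # wraps the function as a Runnable
)
print(shout_chain.invoke({"topic": "Docker"}))
```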
RAG with LangChain

LangChain makes RAG considerably simpler with built-in document loaders, text splitters, and retrievers:
```python
from langchain_anthropic import ChatAnthropic
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# 1. Load and split documents
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    separators=["\n\n", "\n", ". ", " "]
)

# From raw text (use document loaders for files/URLs)
docs = text_splitter.create_documents([your_text])

# 2. Create vector store
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(docs, embeddings, persist_directory="./db")
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

# 3. Build RAG chain
template = """Answer the question based on the context.
If the context does not contain the answer, say so honestly.

Context: {context}

Question: {question}"""

prompt = ChatPromptTemplate.from_template(template)
model = ChatAnthropic(model="claude-sonnet-4-20250514")

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Query
answer = rag_chain.invoke("What are the advantages of microservices?")
print(answer)
```
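The create_documents call above works on raw strings. If your sources are files or URLs, a document loader takes over that step; a sketch using langchain_community's TextLoader (the file path is illustrative):

```python
from langchain_community.document_loaders import TextLoader

# Load a local file instead of passing raw strings (path is illustrative)
raw_docs = TextLoader("docs/architecture.md").load()

# Split the loaded documents with the same splitter as above
docs = text_splitter.split_documents(raw_docs)
```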
Agents with LangChain

LangChain agents wrap the agent loop in a higher-level abstraction. You define tools and let the framework handle the orchestration:
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

# Define tools as decorated functions
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. The input must be a plain arithmetic expression."""
    # Validate the AST so only arithmetic is allowed (safer than a bare eval)
    import ast
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.FloorDiv,
               ast.Mod, ast.Pow, ast.USub, ast.UAdd)
    try:
        tree = ast.parse(expression, mode="eval")
        if not all(isinstance(node, allowed) for node in ast.walk(tree)):
            return "Error: only arithmetic expressions are supported"
        return str(eval(compile(tree, "<string>", "eval"), {"__builtins__": {}}))
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

@tool
def search_docs(query: str) -> str:
    """Search the documentation for information."""
    # Replace with an actual search implementation
    return f"Search results for '{query}': [Placeholder results]"

# Create agent
model = ChatAnthropic(model="claude-sonnet-4-20250514")
tools = [calculate, get_current_time, search_docs]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools when it makes sense."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(model, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = executor.invoke({
    "input": "What is 2^32, and what time is it right now?",
    "chat_history": []
})
print(result["output"])
```

Memory in LangChain
For chatbots and longer interactions, you use LangChain's memory system to keep the conversation history:
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

model = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | model

# Session-based memory
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Chat with memory
config = {"configurable": {"session_id": "user123"}}

print(chain_with_history.invoke({"input": "My name is Lars"}, config=config).content)
print(chain_with_history.invoke({"input": "What is my name?"}, config=config).content)
# Output: "Your name is Lars"
```
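InMemoryChatMessageHistory only lives as long as the process. If the history must survive restarts, a file-backed store is a drop-in replacement for get_session_history; a sketch assuming langchain_community's FileChatMessageHistory (the file path is illustrative):

```python
from langchain_community.chat_message_histories import FileChatMessageHistory

def get_session_history(session_id: str):
    # One JSON file per session instead of the in-memory dict
    return FileChatMessageHistory(f"./chat_history_{session_id}.json")
```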
LangChain vs Raw API

| Scenario | Raw API | LangChain |
|---|---|---|
| Simple chatbot | Use the raw API | Overkill |
| RAG pipeline | Lots of code | Built-in support |
| Multi-model | Separate SDKs | Uniform interface |
| Agent with tools | Custom loop | AgentExecutor |
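For reference, the raw-API column boils down to a direct call with the official anthropic SDK — a minimal sketch with no framework involved:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain Docker in 3 sentences"}],
)
print(message.content[0].text)
```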
Common Pitfalls
- Over-abstraction - Don't use LangChain for simple API calls. It adds complexity without value.
- Version hell - LangChain moves fast. Pin your versions in requirements.txt (see the snippet after this list).
- Debugging - Abstraction makes debugging harder. Use verbose=True and LangSmith.
- Deprecated patterns - Legacy chains (LLMChain, etc.) are deprecated. Use LCEL.
- Token costs - Agents can make many API calls. Always monitor your usage.
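The simplest guard against version hell is to freeze the exact versions you have verified and install from that file afterwards:

```bash
# After verifying your app works, snapshot exact versions into requirements.txt
pip freeze > requirements.txt

# Later installs reproduce the same environment
pip install -r requirements.txt
```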
Next steps
LangChain is a good starting point for LLM applications. Before you commit to a framework, also consider MCP (Model Context Protocol), Anthropic's standard for tool integration, or build your own agent from scratch for full control. For the best RAG pipeline, read our RAG implementation guide.