
LangChain Guide: From Chain to Agent

LangChain is the most popular framework for LLM applications. This guide takes you from simple chains to production-ready agents.

March 18, 2026 | 16 min read

TL;DR

LangChain abstracts LLM interactions into composable components: Models, Prompts, Chains, Retrievers, and Agents. LCEL (LangChain Expression Language) is the modern way to build pipelines.

langchain_minimal.py
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Explain {topic} briefly")
model = ChatAnthropic(model="claude-sonnet-4-20250514")
chain = prompt | model  # LCEL: pipe operator

result = chain.invoke({"topic": "microservices"})
print(result.content)

Prerequisites

terminal
pip install langchain langchain-anthropic langchain-community chromadb
  • Python 3.11+
  • Anthropic API key (or OpenAI)
  • Familiarity with the Claude API

Why LangChain?

You can call the Claude API directly (and you should for simple use cases). But as your application grows, you will need:

  • Composability - Build complex pipelines from simple components
  • Swappable models - Switch between Claude, GPT-4, and Llama without code changes
  • Built-in integrations - 700+ integrations for databases, APIs, and tools
  • Observability - LangSmith for debugging and monitoring

LangChain is not always the right choice, though. For simple API calls it is overkill. Reach for it when you need chains, RAG, or agents.
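
To make the swappable-models point concrete, here is a minimal sketch: the same prompt and parser run against two providers, and only the model line changes. This assumes langchain-openai is installed alongside langchain-anthropic, and the model names are illustrative.

swap_models.py
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI  # assumption: pip install langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} briefly")
parser = StrOutputParser()

# Identical pipeline shape — only the model component is swapped
chain_claude = prompt | ChatAnthropic(model="claude-sonnet-4-20250514") | parser
chain_gpt = prompt | ChatOpenAI(model="gpt-4o") | parser

print(chain_claude.invoke({"topic": "microservices"}))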

LCEL: LangChain Expression Language

LCEL is the modern way to build LangChain pipelines. You compose components with the pipe operator (|):

lcel_basics.py
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Components
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a technical writer. Write concisely and precisely."),
    ("user", "{input}")
])
model = ChatAnthropic(model="claude-sonnet-4-20250514")
parser = StrOutputParser()

# Chain them with the pipe operator
chain = prompt | model | parser

# Invoke
result = chain.invoke({"input": "Explain Docker in 3 sentences"})
print(result)

# Stream
for chunk in chain.stream({"input": "What is Kubernetes?"}):
    print(chunk, end="", flush=True)

# Batch (parallel execution)
results = chain.batch([
    {"input": "What is React?"},
    {"input": "What is Vue?"},
    {"input": "What is Svelte?"},
])
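
Every LCEL chain also exposes async counterparts (ainvoke, astream, abatch), which matters for web backends. A minimal sketch, assuming the same chain shape as above:

lcel_async.py
import asyncio
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Explain {input} briefly")
    | ChatAnthropic(model="claude-sonnet-4-20250514")
    | StrOutputParser()
)

async def main():
    # ainvoke mirrors invoke
    print(await chain.ainvoke({"input": "Docker"}))
    # astream mirrors stream
    async for chunk in chain.astream({"input": "Kubernetes"}):
        print(chunk, end="", flush=True)

asyncio.run(main())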

Custom LCEL Chains

You can build more advanced chains with RunnableLambda and RunnablePassthrough:

lcel_advanced.py
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

model = ChatAnthropic(model="claude-sonnet-4-20250514")

# Custom processing step
def format_docs(docs: list[str]) -> str:
    return "\n".join(f"- {doc}" for doc in docs)

# Chain with custom logic
chain = (
    RunnablePassthrough.assign(
        formatted=lambda x: format_docs(x["docs"])
    )
    | ChatPromptTemplate.from_template(
        "Based on these documents:\n{formatted}\n\nAnswer: {question}"
    )
    | model
    | StrOutputParser()
)

result = chain.invoke({
    "docs": ["Python is dynamically typed", "Rust is statically typed"],
    "question": "What is the difference?"
})
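
The chain above only uses RunnableLambda implicitly (a bare lambda inside assign gets wrapped automatically). You can also use it explicitly to turn any function into a pipeline step; a tiny sketch:

runnable_lambda.py
from langchain_core.runnables import RunnableLambda

# Wrap plain functions so they compose with the pipe operator
to_upper = RunnableLambda(lambda s: s.upper())
exclaim = RunnableLambda(lambda s: f"{s}!")

mini_chain = to_upper | exclaim
print(mini_chain.invoke("hello"))  # HELLO!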

RAG with LangChain

LangChain makes RAG significantly simpler with built-in document loaders, text splitters, and retrievers:

langchain_rag.py
from langchain_anthropic import ChatAnthropic
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# 1. Load and split documents
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    separators=["\n\n", "\n", ". ", " "]
)

# From raw text (use document loaders for files/URLs)
# your_text: the raw string you want to index
docs = text_splitter.create_documents([your_text])

# 2. Create the vector store
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Chroma.from_documents(docs, embeddings, persist_directory="./db")
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

# 3. Build the RAG chain
template = """Answer the question based on the context.
If the context does not contain the answer, say so honestly.

Context: {context}

Question: {question}"""

prompt = ChatPromptTemplate.from_template(template)
model = ChatAnthropic(model="claude-sonnet-4-20250514")

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Query
answer = rag_chain.invoke("What are the benefits of microservices?")
print(answer)
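
If you also need the retrieved documents back (for citations or debugging), one common pattern is to keep the context alongside the answer instead of discarding it. A sketch reusing retriever, prompt, and model from above:

rag_with_sources.py
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# Keep the retrieved documents in the output instead of discarding them
rag_with_sources = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | RunnablePassthrough.assign(answer=prompt | model | StrOutputParser())
)

result = rag_with_sources.invoke("What are the benefits of microservices?")
print(result["answer"])
print(result["context"])  # the Document objects that grounded the answer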

Agents with LangChain

LangChain agents wrap the agent loop in a higher-level abstraction. You define tools and let the framework handle the orchestration:

langchain_agent.py
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

# Define tools as decorated functions
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Input must be a plain arithmetic expression."""
    # Validate the AST so only literals and arithmetic operators are evaluated —
    # never pass model output to a bare eval in production
    import ast
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Pow, ast.Mod,
               ast.USub, ast.UAdd)
    try:
        tree = ast.parse(expression, mode="eval")
        if not all(isinstance(node, allowed) for node in ast.walk(tree)):
            return "Error: only arithmetic expressions are allowed"
        return str(eval(compile(tree, "<string>", "eval"), {"__builtins__": {}}))
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

@tool
def search_docs(query: str) -> str:
    """Search the documentation for information."""
    # Replace with an actual search implementation
    return f"Search results for '{query}': [Placeholder results]"

# Create the agent
model = ChatAnthropic(model="claude-sonnet-4-20250514")
tools = [calculate, get_current_time, search_docs]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use tools when it makes sense."),
    ("placeholder", "{chat_history}"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(model, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run
result = executor.invoke({
    "input": "What is 2^32, and what time is it now?",
    "chat_history": []
})
print(result["output"])
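
Agents can loop, and every iteration is an API call, so it is worth capping them. AgentExecutor accepts guardrail parameters for this; a sketch reusing agent and tools from above (the limit values are illustrative):

langchain_agent_limits.py
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,            # hard stop on runaway tool loops
    max_execution_time=30,       # wall-clock limit in seconds
    handle_parsing_errors=True,  # feed malformed tool calls back to the model
)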

Memory in LangChain

For chatbots and longer interactions, use LangChain's memory system to maintain conversation history:

langchain_memory.py
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

model = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | model

# Session-based memory
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

# Chat with memory
config = {"configurable": {"session_id": "user123"}}

print(chain_with_history.invoke({"input": "My name is Lars"}, config=config).content)
print(chain_with_history.invoke({"input": "What is my name?"}, config=config).content)
# Output: "Your name is Lars"
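
InMemoryChatMessageHistory disappears when the process restarts. For anything real, swap get_session_history for a persistent backend; a sketch using the file-backed history from langchain_community (Redis, Postgres, and others follow the same interface):

persistent_history.py
from pathlib import Path
from langchain_community.chat_message_histories import FileChatMessageHistory

def get_session_history(session_id: str):
    # One JSON file per session — survives restarts, same interface as in-memory
    Path("./chat_histories").mkdir(exist_ok=True)
    return FileChatMessageHistory(f"./chat_histories/{session_id}.json")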

LangChain vs Raw API

Scenario         | Raw API         | LangChain
Simple chatbot   | Use the raw API | Overkill
RAG pipeline     | Lots of code    | Built-in support
Multi-model      | Separate SDKs   | Uniform interface
Agent with tools | Custom loop     | AgentExecutor

Common Pitfalls

  • Overabstraction - Don't use LangChain for simple API calls. It adds complexity without value.
  • Version hell - LangChain moves fast. Pin your versions in requirements.txt (see the example after this list).
  • Debugging - Abstraction makes debugging harder. Use verbose=True and LangSmith.
  • Deprecated patterns - Legacy chains (LLMChain, etc.) are deprecated. Use LCEL.
  • Token costs - Agents can make many API calls. Always monitor your usage.
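
For the version-pinning point, a sketch of a pinned requirements.txt. The version numbers below are placeholders, not recommendations; pin to the versions you have actually tested:

requirements.txt
langchain==0.3.14            # placeholder versions — use your tested ones
langchain-anthropic==0.3.1
langchain-community==0.3.14
chromadb==0.5.23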

Next Steps

LangChain is a good starting point for LLM applications. Before committing to a framework, also consider MCP (Model Context Protocol), Anthropic's standard for tool integration, or build your own agent from scratch for full control. For the best RAG pipeline, read our RAG implementation guide.