
Build an AI Agent with Python

An AI agent is an LLM that can think, use tools, and act autonomously. Here we build one from scratch: no frameworks, just plain Python.

18 March 2026 | 18 min read
Visualization of a neural network: AI agent architecture with tool use and reasoning loops

Photo: Google DeepMind / Unsplash

TL;DR

An AI agent is a loop: the LLM receives input, decides which tools to call, executes them, and continues until the task is solved.

agent_minimal.py
while not done:
    response = llm.think(messages)       # LLM decides the next action
    if response.wants_tool:
        result = execute_tool(response)  # Call the tool
        messages.append(result)          # Feed the result back
    else:
        done = True                      # LLM is finished

Prerequisites

terminal
pip install anthropic

What is an AI Agent?

An AI agent is more than a chatbot. Where a chatbot answers questions, an agent can act autonomously: it can search databases, call APIs, read files, and make decisions based on the results.

The core of an agent is the tool use loop: the LLM receives a task, decides which tools to use, executes them, analyzes the results, and continues until the task is solved.

The difference from simple prompt engineering is that the agent has agency: it decides its own next steps.

The Agent Architecture

An agent consists of four components:

  1. LLM (the brain) - Reasoning and decision-making
  2. Tools - Functions the agent can call
  3. Memory - Conversation history and context
  4. Loop - Orchestration of the think-act-observe cycle

User Task --> [Agent Loop] --> Final Answer
                  |
            Think (LLM)
                  |
            Act (Tool Call)
                  |
            Observe (Tool Result)
                  |
            Repeat or Finish
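
The cycle above can also be written as runnable Python with a stubbed LLM, so the control flow is visible without an API key. StubLLM and the calculator tool are illustrative stand-ins, not part of any real SDK:

```python
# A runnable sketch of the think-act-observe cycle with a stubbed LLM.
# StubLLM and the calculator tool are illustrative stand-ins only.

def calculator(expression: str) -> str:
    """A trivial tool: evaluate an arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

class StubLLM:
    """Fakes the reasoning step: first requests the tool, then finishes."""
    def __init__(self):
        self.turn = 0

    def think(self, messages: list) -> dict:
        self.turn += 1
        if self.turn == 1:
            # Turn 1: decide to call the calculator tool
            return {"wants_tool": True, "tool": "calculator", "input": "6 * 7"}
        # Turn 2: the tool result is last in the history, so finish
        return {"wants_tool": False,
                "answer": f"The result is {messages[-1]['content']}"}

def run_agent(task: str) -> str:
    llm = StubLLM()
    messages = [{"role": "user", "content": task}]
    while True:
        decision = llm.think(messages)                            # Think
        if decision["wants_tool"]:
            result = calculator(decision["input"])                # Act
            messages.append({"role": "user", "content": result})  # Observe
        else:
            return decision["answer"]                             # Finish

print(run_agent("What is 6 * 7?"))  # → The result is 42
```

The real agent below follows the same shape; only the stub is replaced by an actual LLM call.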

Step 1: Define the Tools

First we define the tools the agent can use. Each tool has a name, a description (which the LLM reads), and an input schema:

tools.py
import json
import subprocess
import httpx

# Tool definitions for Claude
TOOLS = [
    {
        "name": "run_python",
        "description": "Run Python code and return the output. Use for calculations and data processing.",
        "input_schema": {
            "type": "object",
            "properties": {
                "code": {
                    "type": "string",
                    "description": "Python code to run"
                }
            },
            "required": ["code"]
        }
    },
    {
        "name": "web_search",
        "description": "Search the web for information. Returns relevant results.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search terms"
                }
            },
            "required": ["query"]
        }
    },
    {
        "name": "read_file",
        "description": "Read the contents of a file.",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Path to the file"
                }
            },
            "required": ["path"]
        }
    }
]

# Tool implementations
def run_python(code: str) -> str:
    try:
        result = subprocess.run(
            ["python3", "-c", code],
            capture_output=True, text=True, timeout=10
        )
        return result.stdout or result.stderr
    except subprocess.TimeoutExpired:
        return "Error: Code execution timed out (10s limit)"

def web_search(query: str) -> str:
    # Replace with your preferred search API
    response = httpx.get(
        "https://api.search.example/v1",
        params={"q": query, "limit": 3}
    )
    return json.dumps(response.json(), indent=2)

def read_file(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return f"Error: File not found: {path}"

TOOL_FUNCTIONS = {
    "run_python": run_python,
    "web_search": web_search,
    "read_file": read_file,
}

Step 2: The Agent Loop

The core of the agent: a loop that calls the LLM, executes tools, and continues until the LLM gives a final answer.

agent.py
from anthropic import Anthropic

from tools import TOOLS, TOOL_FUNCTIONS

client = Anthropic()

class Agent:
    def __init__(self, system_prompt: str, tools: list, max_iterations: int = 10):
        self.system = system_prompt
        self.tools = tools
        self.max_iterations = max_iterations
        self.messages = []

    def run(self, task: str) -> str:
        """Run the agent on a task."""
        self.messages = [{"role": "user", "content": task}]

        for i in range(self.max_iterations):
            # Think: Ask LLM what to do
            response = client.messages.create(
                model="claude-sonnet-4-20250514",
                max_tokens=4096,
                system=self.system,
                tools=self.tools,
                messages=self.messages
            )

            # Check if agent wants to use a tool
            if response.stop_reason == "tool_use":
                # Extract tool calls
                tool_calls = [
                    block for block in response.content
                    if block.type == "tool_use"
                ]

                # Add assistant response to history
                self.messages.append({
                    "role": "assistant",
                    "content": response.content
                })

                # Act: Execute each tool
                tool_results = []
                for tool_call in tool_calls:
                    print(f" Tool: {tool_call.name}({tool_call.input})")
                    result = TOOL_FUNCTIONS[tool_call.name](**tool_call.input)
                    print(f" Result: {result[:200]}")
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": tool_call.id,
                        "content": result
                    })

                # Observe: Feed results back
                self.messages.append({
                    "role": "user",
                    "content": tool_results
                })

            else:
                # Agent is done - return final answer
                return response.content[0].text

        return "Agent reached max iterations without completing task."

# Usage
agent = Agent(
    system_prompt="""You are a helpful AI assistant with access to tools.
Use tools when necessary to solve the task.
Think step by step before you act.""",
    tools=TOOLS
)

answer = agent.run("What is 2^100 in scientific notation?")
print(answer)

Step 3: Memory and Context

For longer conversations the agent needs memory. The simplest form is keeping the conversation history, but for large contexts you can use a vector database to store and retrieve relevant information:

agent_memory.py
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Simple memory with conversation history and facts."""
    conversation: list = field(default_factory=list)
    facts: dict = field(default_factory=dict)
    max_history: int = 50

    def add_message(self, role: str, content: str):
        self.conversation.append({"role": role, "content": content})
        # Trim old messages if needed
        if len(self.conversation) > self.max_history:
            # Keep system context + recent messages
            self.conversation = self.conversation[-self.max_history:]

    def store_fact(self, key: str, value: str):
        """Store a fact the agent has learned."""
        self.facts[key] = value

    def get_context(self) -> str:
        """Build context string from stored facts."""
        if not self.facts:
            return ""
        facts_str = "\n".join(f"- {k}: {v}" for k, v in self.facts.items())
        return f"Known facts:\n{facts_str}"

    def get_messages(self) -> list:
        return self.conversation.copy()
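
The vector-database idea mentioned above can be sketched without external dependencies. Here a bag-of-words cosine similarity stands in for real embeddings, purely to illustrate relevance-based retrieval over stored facts; the example facts are made up:

```python
# Retrieve the stored facts most relevant to a query. Real systems use
# embedding models; bag-of-words cosine similarity stands in here.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(facts: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Return the k fact values most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(
        facts.items(),
        key=lambda kv: cosine(Counter(kv[1].lower().split()), q),
        reverse=True,
    )
    return [value for _, value in scored[:k]]

facts = {
    "deploy": "the service is deployed with docker compose",
    "db": "postgres runs on port 5432",
    "lang": "the backend is written in python",
}
print(retrieve(facts, "which port does postgres use", k=1))
```

A real vector store replaces the word counts with embedding vectors, but the retrieve-by-similarity shape stays the same.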

Step 4: Guardrails

An agent that runs autonomously requires guardrails. Without them, you risk unexpected actions:

guardrails.py
import os

from agent import Agent
from tools import TOOL_FUNCTIONS

class SafeAgent(Agent):
    """Agent with safety guardrails."""

    BLOCKED_COMMANDS = ["rm -rf", "sudo", "DROP TABLE", "DELETE FROM"]
    ALLOWED_PATHS = ["/tmp/agent", os.path.expanduser("~/agent_workspace")]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.tool_call_count = 0

    def execute_tool(self, name: str, input_data: dict) -> str:
        # Block dangerous code execution
        if name == "run_python":
            code = input_data.get("code", "")
            for blocked in self.BLOCKED_COMMANDS:
                if blocked in code:
                    return f"BLOCKED: Code contains forbidden command: {blocked}"

        # Restrict file access
        if name == "read_file":
            path = os.path.abspath(input_data.get("path", ""))
            if not any(path.startswith(p) for p in self.ALLOWED_PATHS):
                return f"BLOCKED: Access denied for path: {path}"

        # Rate limit tool calls
        if self.tool_call_count > 20:
            return "BLOCKED: Too many tool calls. Agent may be in a loop."

        self.tool_call_count += 1
        return TOOL_FUNCTIONS[name](**input_data)

Step 5: Multi-Agent Systems

For complex tasks you can let several agents collaborate, each with its own specialization:

multi_agent.py
class AgentTeam:
    """Coordinator for multiple specialized agents."""

    def __init__(self):
        self.researcher = Agent(
            system_prompt="You are a research specialist. Find and summarize information.",
            tools=[search_tool, read_tool]
        )
        self.coder = Agent(
            system_prompt="You are a Python expert. Write and test code.",
            tools=[run_python_tool, read_tool]
        )
        self.reviewer = Agent(
            system_prompt="You review code and give feedback. Be thorough.",
            tools=[read_tool]
        )

    def solve(self, task: str) -> str:
        # Step 1: Research
        research = self.researcher.run(
            f"Research this topic thoroughly: {task}"
        )

        # Step 2: Code
        code = self.coder.run(
            f"Based on this research, write the code:\n{research}"
        )

        # Step 3: Review
        review = self.reviewer.run(
            f"Review this code and suggest improvements:\n{code}"
        )

        return f"Research:\n{research}\n\nCode:\n{code}\n\nReview:\n{review}"

team = AgentTeam()
result = team.solve("Build a web scraper that fetches news headlines")

Agent Frameworks

Don't want to build from scratch? These frameworks give you a head start:

Framework       | Strength                           | Complexity
Claude Tool Use | Simple, robust, native             | Low
LangChain       | Large ecosystem, many integrations | Medium
CrewAI          | Multi-agent, role-based            | Medium
AutoGen         | Conversational agents, Microsoft   | High

Common Pitfalls

  • No max iterations - The agent can loop forever. Always set a limit.
  • Missing error handling - Tools fail. Give the agent meaningful error messages.
  • Overly broad tools - The more specific the tool description, the better the LLM chooses.
  • No guardrails - An agent with shell access can do damage. Always restrict it.
  • Token explosion - Long tool results fill the context window. Trim output.
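
For the token-explosion pitfall, a small helper that trims tool results before they enter the history is often enough. The 2000-character default below is an illustrative choice, not a recommendation:

```python
# Trim long tool results, keeping the head and tail around a marker so
# the LLM still sees both the start and end of the output.
def truncate_result(result: str, max_chars: int = 2000) -> str:
    """Return the result unchanged if short, else head + marker + tail."""
    if len(result) <= max_chars:
        return result
    half = max_chars // 2
    return result[:half] + "\n...[truncated]...\n" + result[-half:]

print(truncate_result("x" * 10_000, max_chars=100))
```

In the agent loop, apply it to each tool result before appending it to the message history.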

Next steps

With this foundation you can build agents for everything from coding to data analysis. For more advanced patterns, check our LangChain guide, which shows how frameworks handle agent logic, or the MCP guide to learn about the Model Context Protocol. For better tool results, you can combine the agent with a RAG pipeline.