
Claude API: The Complete Guide for Danish Developers

Everything you need to know to integrate Claude into your application: the Messages API, streaming, tool use, vision, and production best practices.

25 January 2026 | 15 min read

TL;DR

quickstart.py
import anthropic

client = anthropic.Anthropic()  # Uses ANTHROPIC_API_KEY
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hej Claude!"}]
)
print(message.content[0].text)

Quick Reference

Base URL: https://api.anthropic.com
Latest models: claude-opus-4-20250514, claude-sonnet-4-20250514
Max context: 200K tokens
Auth header: x-api-key
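Under the hood the SDK sends a plain HTTPS POST. As a sketch (the request is built here but not sent; `2023-06-01` is the commonly used `anthropic-version` value), the raw shape looks roughly like this:

```python
import json
import os

# The raw request behind client.messages.create (built, not sent)
url = "https://api.anthropic.com/v1/messages"
headers = {
    "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "sk-ant-..."),
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
payload = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hej Claude!"}],
}
body = json.dumps(payload)
```

The SDK adds retries and typed responses on top, but nothing more magical than this.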

What is the Claude API?

Claude is Anthropic's family of large language models. Compared to OpenAI's GPT-4, Claude has some strengths:

  • Longer context - 200K tokens vs. GPT-4's 128K
  • Better tool use - more precise function calling
  • Constitutional AI - trained to be helpful, harmless, and honest
  • Competitive pricing - Sonnet is cheaper than GPT-4

Signup and API Key

To use the Claude API you need to:

  1. Create an account at console.anthropic.com
  2. Add a payment method (pay-as-you-go)
  3. Generate an API key under Settings → API Keys
  4. Store the key as an environment variable: ANTHROPIC_API_KEY
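The SDK picks up ANTHROPIC_API_KEY automatically. If you want to fail fast with a clear message when the key is missing, a small helper along these lines can run at startup (`require_api_key` is our own name, not part of the SDK):

```python
import os

def require_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    """Return the API key from the environment, or raise a clear error."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Missing {var} - create a key at console.anthropic.com")
    return key

# client = anthropic.Anthropic(api_key=require_api_key())
```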

Installation

Install the official SDK for your preferred language:

Python
pip install anthropic
TypeScript/JavaScript
npm install @anthropic-ai/sdk

Basic Message

The simplest possible API call: send a message and get an answer back:

basic_message.py
import anthropic

client = anthropic.Anthropic()  # Uses ANTHROPIC_API_KEY env var

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What is the difference between REST and GraphQL?"}
    ]
)

print(message.content[0].text)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

System Prompts

System prompts define Claude's persona and behavior. They are essential for consistent output:

system_prompt.py
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="""You are a senior software architect.
Always answer with:
1. A short explanation
2. A code example where relevant
3. Potential pitfalls

Keep answers concise and technical. Reply in Danish.""",
    messages=[
        {"role": "user", "content": "How do I implement dependency injection in Python?"}
    ]
)

Multi-turn Conversations

To keep context across multiple messages, send the full conversation history:

conversation.py
import anthropic
from typing import Dict, List, Optional

class Conversation:
    def __init__(self, system_prompt: Optional[str] = None):
        self.messages: List[Dict] = []
        self.system = system_prompt
        self.client = anthropic.Anthropic()

    def chat(self, user_message: str) -> str:
        self.messages.append({"role": "user", "content": user_message})

        # Only pass system when it is actually set
        kwargs = {"system": self.system} if self.system else {}
        response = self.client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=2048,
            messages=self.messages,
            **kwargs
        )

        assistant_message = response.content[0].text
        self.messages.append({"role": "assistant", "content": assistant_message})

        return assistant_message

    def clear(self):
        self.messages = []

# Usage
conv = Conversation(system_prompt="You are a Python expert.")
print(conv.chat("Explain list comprehensions"))
print(conv.chat("Give me a complex example"))  # Remembers the context
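A Conversation like the one above grows without bound, and every resent token is billed. One rough way to cap it is to keep only the most recent messages under a character budget (a sketch; the ~4-characters-per-token ratio is a heuristic, not the API's real tokenizer):

```python
def trim_history(messages: list[dict], max_chars: int = 400_000) -> list[dict]:
    """Keep the most recent messages whose combined length fits the budget.

    Heuristic: ~4 characters per token, so 400_000 chars is roughly 100K tokens.
    """
    kept: list[dict] = []
    total = 0
    for msg in reversed(messages):
        total += len(str(msg["content"]))
        if total > max_chars and kept:
            break
        kept.append(msg)
    return list(reversed(kept))
```

Call it on self.messages before each request; for precise budgets, count real tokens from response.usage instead.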

Streaming

For better UX, stream the response token by token. Users see the text as it is generated, which feels faster:

streaming.py
# Simple streaming
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short story about a robot"}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

# Streaming with events
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain quantum computing"}]
) as stream:
    for event in stream:
        if event.type == "content_block_delta":
            print(event.delta.text, end="", flush=True)
        elif event.type == "message_stop":
            print("\n--- Done ---")

    # Get the accumulated message while the stream is still open
    response = stream.get_final_message()

print(f"\nTotal tokens: {response.usage.input_tokens + response.usage.output_tokens}")

Tool Use (Function Calling)

Let Claude call functions in your code. Powerful for agents and automation. Claude decides when it makes sense to use a tool.

tools.py
import json

# Define tools
tools = [
    {
        "name": "get_weather",
        "description": "Get the weather for a given city. Use this when the user asks about the weather.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "The city name, e.g. 'Copenhagen'"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                    "description": "Temperature unit"
                }
            },
            "required": ["city"]
        }
    },
    {
        "name": "search_database",
        "description": "Search the product database for products",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
                "max_results": {"type": "integer", "description": "Max number of results"}
            },
            "required": ["query"]
        }
    }
]

# Your actual functions
def get_weather(city: str, unit: str = "celsius") -> dict:
    # In reality, call a weather API
    return {"city": city, "temp": 18, "unit": unit, "condition": "Overcast"}

def search_database(query: str, max_results: int = 5) -> list:
    # In reality, query your database
    return [{"name": f"Product {i}", "price": 100 + i*10} for i in range(max_results)]

# Process tool calls
def process_tool_call(tool_name: str, tool_input: dict) -> str:
    if tool_name == "get_weather":
        result = get_weather(**tool_input)
    elif tool_name == "search_database":
        result = search_database(**tool_input)
    else:
        result = {"error": f"Unknown tool: {tool_name}"}
    return json.dumps(result)

# Make request with tools
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Aarhus?"}]
)

# Handle tool use
if response.stop_reason == "tool_use":
    # Find the tool use block
    tool_use = next(block for block in response.content if block.type == "tool_use")

    # Execute the tool
    tool_result = process_tool_call(tool_use.name, tool_use.input)

    # Send the result back to Claude
    final_response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=tools,
        messages=[
            {"role": "user", "content": "What's the weather in Aarhus?"},
            {"role": "assistant", "content": response.content},
            {
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": tool_use.id,
                    "content": tool_result
                }]
            }
        ]
    )
    print(final_response.content[0].text)
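The snippet above handles exactly one round of tool use, but Claude may chain several calls in a row. A generic loop (our own sketch; `run_tool_loop` is not an SDK function) keeps executing tools until the model answers in plain text:

```python
def run_tool_loop(client, messages, tools, process_tool_call, max_rounds=5):
    """Call Claude repeatedly, executing tools until it answers in plain text."""
    messages = list(messages)  # don't mutate the caller's list
    for _ in range(max_rounds):
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=tools,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # Plain text answer - we're done
            return response.content[0].text
        messages.append({"role": "assistant", "content": response.content})
        # Execute every requested tool and send the results back
        results = [
            {
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": process_tool_call(block.name, block.input),
            }
            for block in response.content
            if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
    raise RuntimeError("Tool loop did not converge")
```

The max_rounds cap guards against the model looping on tools indefinitely, which also caps your cost per request.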

Vision (Image Input)

Claude can analyze images. Send them as base64 or via URL. JPEG, PNG, GIF, and WebP are supported.

vision.py
import base64
import httpx

# From a local file
def encode_image(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return base64.standard_b64encode(f.read()).decode("utf-8")

# From a URL
def fetch_image_base64(url: str) -> str:
    response = httpx.get(url)
    return base64.standard_b64encode(response.content).decode("utf-8")

# Analyze a local image
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": encode_image("diagram.jpg")
                    }
                },
                {
                    "type": "text",
                    "text": "Explain what this architecture diagram shows. List the components."
                }
            ]
        }
    ]
)

# Multiple images
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": img1_b64}},
                {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": img2_b64}},
                {"type": "text", "text": "Compare these two designs. What are the differences?"}
            ]
        }
    ]
)
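The media_type must match the actual image format, or the API rejects the request. A small lookup helper (our own) covers the four supported formats:

```python
from pathlib import Path

# The four image formats the API accepts
MEDIA_TYPES = {
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".gif": "image/gif",
    ".webp": "image/webp",
}

def media_type_for(image_path: str) -> str:
    """Look up the media_type for a file, or raise for unsupported formats."""
    suffix = Path(image_path).suffix.lower()
    try:
        return MEDIA_TYPES[suffix]
    except KeyError:
        raise ValueError(f"Unsupported image format: {suffix}") from None
```

Note that this trusts the file extension; for untrusted uploads, sniff the actual bytes instead.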

Error Handling

Robust error handling is critical in production. Implement retry logic with exponential backoff:

error_handling.py
from anthropic import (
    Anthropic,
    APIStatusError,
    RateLimitError,
    APIConnectionError,
    AuthenticationError
)
import time
from functools import wraps

client = Anthropic()

def retry_with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except RateLimitError as e:
                    last_exception = e
                    # Respect the retry-after header if present
                    delay = float(e.response.headers.get("retry-after", base_delay * (2 ** attempt)))
                    print(f"Rate limited. Waiting {delay:.1f}s...")
                    time.sleep(delay)
                except APIConnectionError as e:
                    last_exception = e
                    delay = base_delay * (2 ** attempt)
                    print(f"Connection error. Retrying in {delay:.1f}s...")
                    time.sleep(delay)
                except AuthenticationError:
                    # Don't retry auth errors
                    raise
                except APIStatusError as e:
                    if e.status_code >= 500:
                        # Server error, worth retrying
                        last_exception = e
                        time.sleep(base_delay * (2 ** attempt))
                    else:
                        raise
            raise last_exception
        return wrapper
    return decorator

@retry_with_backoff(max_retries=3)
def call_claude(messages: list, **kwargs):
    kwargs.setdefault("max_tokens", 1024)  # max_tokens is required
    return client.messages.create(
        model="claude-sonnet-4-20250514",
        messages=messages,
        **kwargs
    )

Pricing and Cost Optimization

Claude is priced per token. Here are strategies to keep costs down:

Prices (January 2026)

Model            | Input / 1M tokens | Output / 1M tokens
claude-opus-4    | $15               | $75
claude-sonnet-4  | $3                | $15
claude-haiku-3.5 | $0.80             | $4
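The price table maps directly to code. A per-model cost estimator might look like this (prices hard-coded from the January 2026 table above; update them when pricing changes):

```python
# USD per 1M tokens, from the January 2026 price table
PRICES = {
    "claude-opus-4": (15.00, 75.00),
    "claude-sonnet-4": (3.00, 15.00),
    "claude-haiku-3.5": (0.80, 4.00),
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request."""
    input_price, output_price = PRICES[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000
```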
cost_optimization.py
# 1. Use the right model for the task
def get_model_for_task(task_complexity: str) -> str:
    """Choose a model based on task complexity."""
    if task_complexity == "simple":
        return "claude-3-5-haiku-20241022"  # Cheapest
    elif task_complexity == "moderate":
        return "claude-sonnet-4-20250514"  # Best value
    else:
        return "claude-opus-4-20250514"  # Most capable

# 2. Limit output tokens
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=500,  # Don't use more than needed
    messages=[...]
)

# 3. Cache repeated prompts
# (strings are hashable, so lru_cache can key on the prompt directly)
from functools import lru_cache

@lru_cache(maxsize=1000)
def cached_claude_call(prompt: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.content[0].text

# 4. Track costs
def estimate_cost(response) -> float:
    """Estimate the cost in USD for a response."""
    # Sonnet 4 pricing: $3/1M input, $15/1M output
    input_cost = response.usage.input_tokens * 3 / 1_000_000
    output_cost = response.usage.output_tokens * 15 / 1_000_000
    return input_cost + output_cost

Claude vs GPT-4 Comparison

Feature     | Claude Sonnet 4         | GPT-4 Turbo
Max context | 200K tokens             | 128K tokens
Input price | $3/1M                   | $10/1M
Tool use    | Excellent               | Good
Vision      | Yes                     | Yes
Best for    | Long docs, code, agents | General purpose

Best Practices

  • Use environment variables - never hardcode API keys
  • Implement retry logic - rate limits and network errors happen
  • Stream for UX - users prefer seeing text as it is generated
  • Validate tool inputs - trust but verify Claude's tool calls
  • Log everything - store requests and responses for debugging
  • Set max_tokens - avoid unexpectedly long (and expensive) responses
  • Monitor costs - track token usage per endpoint
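"Log everything" can start as a thin wrapper around the client using the standard logging module (`logged_create` is our own name; adapt it to your logging setup):

```python
import logging

logger = logging.getLogger("claude")

def logged_create(client, **kwargs):
    """Call messages.create and log the model plus token usage."""
    response = client.messages.create(**kwargs)
    logger.info(
        "model=%s input_tokens=%d output_tokens=%d",
        kwargs.get("model"),
        response.usage.input_tokens,
        response.usage.output_tokens,
    )
    return response
```

In production you would also log a request ID and latency, and redact any sensitive message content before persisting it.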

Next Steps

With these fundamentals you can build powerful AI applications. Check out our RAG guide to learn how to combine Claude with your own data, or the prompt engineering guide to improve output quality.