
Anthropic Integration

Airtrain makes it easy to integrate with Anthropic's Claude models, providing powerful AI capabilities with strong safety features. This guide will walk you through setting up and using Claude models in your AI agents.

Setting Up Anthropic Credentials

First, set up your Anthropic credentials:

from airtrain.core.credentials import AnthropicCredentials

# Create credentials
anthropic_creds = AnthropicCredentials(
    api_key="your-anthropic-api-key"
)

# Load to environment
anthropic_creds.load_to_env()

# Save to file for later use
anthropic_creds.save_to_file("anthropic_credentials.env")

# Load from file when needed
loaded_creds = AnthropicCredentials.from_file("anthropic_credentials.env")
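In many setups the API key is already exported as an environment variable rather than written into the code. As a minimal sketch, assuming the key is available under a variable named ANTHROPIC_API_KEY (the variable name is a convention of this example, not something Airtrain requires), you can pass it straight to the same credentials constructor:

import os

from airtrain.core.credentials import AnthropicCredentials

# Assumes you exported the key beforehand, e.g. `export ANTHROPIC_API_KEY=...`
anthropic_creds = AnthropicCredentials(
    api_key=os.environ["ANTHROPIC_API_KEY"]
)
anthropic_creds.load_to_env()

This keeps the secret out of source control and makes it easier to rotate keys between environments.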

Creating a Claude-based Agent

Once you have your credentials set up, you can create an agent using Anthropic's Claude models:

from airtrain.core import Agent
from airtrain.models import AnthropicModel
from airtrain.core.skills import TextGenerationSkill

# Create the model
claude_model = AnthropicModel(
    model_name="claude-3-opus-20240229",
    temperature=0.7,
    max_tokens=2000
)

# Create an agent with the model
agent = Agent(
    name="research_assistant",
    model=claude_model,
    skills=[TextGenerationSkill()]
)

# Use the agent
response = agent.process("Write a summary of the latest research on quantum computing")
print(response.content)

Advanced Usage: Tool Calling with Claude

Claude models support tool calling, allowing you to create agents that can use external tools:

from airtrain.core import Agent, Tool
from airtrain.models import AnthropicModel
from airtrain.core.schemas import InputSchema, OutputSchema
from pydantic import BaseModel, Field
from typing import List

# Define schemas for search tool
class SearchInputSchema(InputSchema):
    query: str = Field(description="Search query")
    num_results: int = Field(default=3, ge=1, le=10, description="Number of results to return")

class SearchResultSchema(BaseModel):
    title: str
    url: str
    snippet: str

class SearchOutputSchema(OutputSchema):
    results: List[SearchResultSchema]
    total_found: int

# Create a mock search tool
def search_web(query, num_results=3):
    """Mock function to simulate web search"""
    # In a real application, this would call a search API
    mock_results = [
        SearchResultSchema(
            title="Quantum Computing Latest Research",
            url="https://example.com/quantum-research",
            snippet="Recent breakthroughs in quantum computing show promising results in error correction..."
        ),
        SearchResultSchema(
            title="AI and Quantum Computing Integration",
            url="https://example.com/ai-quantum",
            snippet="Researchers are exploring ways to combine AI with quantum computing for enhanced capabilities..."
        ),
        SearchResultSchema(
            title="Practical Applications of Quantum Computing",
            url="https://example.com/quantum-applications",
            snippet="From drug discovery to cryptography, quantum computing is finding practical applications..."
        ),
        SearchResultSchema(
            title="Quantum Computing Hardware Advances",
            url="https://example.com/quantum-hardware",
            snippet="New quantum processors are achieving better coherence times and reduced error rates..."
        ),
    ]

    return SearchOutputSchema(
        results=mock_results[:num_results],
        total_found=len(mock_results)
    )

# Create the tool
search_tool = Tool(
    name="search_web",
    function=search_web,
    input_schema=SearchInputSchema,
    output_schema=SearchOutputSchema,
    description="Search the web for information"
)

# Create the agent with the tool
agent = Agent(
    name="research_assistant",
    model=AnthropicModel("claude-3-opus-20240229"),
    tools=[search_tool]
)

# Use the agent
user_query = "What are the latest developments in quantum computing?"
response = agent.process(user_query)

# The agent will automatically call the search tool when appropriate
print(response.content)

Using Different Claude Models

Airtrain supports all Claude models. Here's how to use different models for different purposes:

from airtrain.models import AnthropicModel

# For high-quality outputs with Claude Opus
opus_model = AnthropicModel("claude-3-opus-20240229")

# For balanced performance with Claude Sonnet
sonnet_model = AnthropicModel("claude-3-sonnet-20240229")

# For faster responses with Claude Haiku
haiku_model = AnthropicModel("claude-3-haiku-20240307")

# With custom parameters
custom_model = AnthropicModel(
    model_name="claude-3-opus-20240229",
    temperature=0.2,  # More deterministic
    max_tokens=4096,  # Claude 3 models cap output at 4,096 tokens
    top_p=0.9,
    top_k=50
)

Multi-modal Capabilities

Claude models support multi-modal inputs, allowing you to process both text and images:

from airtrain.core import Agent
from airtrain.models import AnthropicModel
from airtrain.core.skills import MultiModalSkill
from airtrain.core.schemas import MultiModalInput
import base64

# Create a multi-modal agent
agent = Agent(
    name="image_analyzer",
    model=AnthropicModel("claude-3-opus-20240229"),
    skills=[MultiModalSkill()]
)

# Function to encode image as base64
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

# Create a multi-modal input
image_base64 = encode_image("path/to/your/image.jpg")
input_data = MultiModalInput(
    text="What can you tell me about this image?",
    images=[image_base64]
)

# Process the input
response = agent.process(input_data)
print(response.content)

Best Practices for Claude Models

When working with Claude models, consider these best practices:

  1. Clear Instructions: Claude responds well to clear, detailed instructions
  2. Prompt Engineering: Use system prompts to guide Claude's behavior and response style
  3. Temperature Setting: Use lower temperatures (0.1-0.3) for factual tasks, higher (0.7+) for creative tasks
  4. Batch Processing: For large workloads, implement batch processing to manage API usage efficiently (see the sketch after this list)
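
As a rough illustration of practices 3 and 4, the sketch below reuses only the AnthropicModel, Agent, and TextGenerationSkill constructors from the earlier examples: a low temperature for a factual summarization task, and a plain sequential loop as the simplest form of batching. Treat it as a starting point; a production setup would likely add rate limiting, retries, or concurrency.

from airtrain.core import Agent
from airtrain.models import AnthropicModel
from airtrain.core.skills import TextGenerationSkill

# Low temperature for factual, repeatable summaries (practice 3)
factual_model = AnthropicModel(
    model_name="claude-3-haiku-20240307",
    temperature=0.2,
    max_tokens=1000
)

agent = Agent(
    name="summarizer",
    model=factual_model,
    skills=[TextGenerationSkill()]
)

# Simplest possible batch loop (practice 4): process prompts one at a time
prompts = [
    "Summarize the key ideas behind quantum error correction.",
    "Summarize recent progress in superconducting qubits.",
]
for prompt in prompts:
    response = agent.process(prompt)
    print(response.content)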

Next Steps

Now that you've set up Anthropic integration, you can: