A comprehensive advanced course for building production-ready AI agents using the Strands Agents SDK. This repository contains 6 progressive labs that teach advanced capabilities including tool integration, memory persistence, Model Context Protocol (MCP), and comprehensive observability.
This course builds expertise in production-ready AI agents, focusing on the agent class, the agentic loop, and the complete ecosystem of components that power intelligent autonomous systems. You'll master core principles of agentic AI, learning how agents combine large language models, tools, and memory to create systems that can understand, plan, and execute actions autonomously.
Course Topics:
- Strands Agents SDK - Advanced agent architecture and lifecycle management
- Model Context Protocol (MCP) - Standardized tool and service integration
- Multi-Provider Configuration - Amazon Bedrock, Anthropic, OpenAI, and Ollama
- Advanced Processing - Hooks, session management, and conversation strategies
- Memory Systems - Long-term persistent memory with FAISS, OpenSearch, and Mem0
- Enterprise Features - Observability, metrics analysis, and performance optimization
Prerequisites: Basic understanding of AI/ML concepts. For beginners, we recommend starting with Getting Started with Strands Agents (Course 1).
Files: first_agent.py
Learn fundamental agentic AI concepts and build your first Strands agent:
- Basic agent creation with default configuration (no API keys required)
- Core agent components and execution flow
- Agent result examination (message, metrics, state, stop reasons)
- Dynamic model configuration and system prompt modification
- Conversation history management and message clearing
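The result fields listed above (message, metrics, state, stop reasons) can be pictured with a small standalone sketch. This is an illustrative stand-in, not the SDK's actual result class; the field names and values here are assumptions for teaching purposes:

```python
from dataclasses import dataclass, field

@dataclass
class AgentResult:
    """Illustrative stand-in for what an agent invocation returns:
    the final message, usage metrics, run state, and why the loop stopped."""
    message: str
    stop_reason: str            # e.g. "end_turn" or "tool_use"
    metrics: dict = field(default_factory=dict)
    state: dict = field(default_factory=dict)

result = AgentResult(
    message="Hello! How can I help?",
    stop_reason="end_turn",
    metrics={"input_tokens": 12, "output_tokens": 8},
)
print(result.stop_reason)  # end_turn
```

Examining these fields after each call is how you tell a normal completion apart from a tool-use pause or a truncated response.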
Files: anthropic_model.py, bedrock_model.py, ollama_model.py, openai_model.py
Configure agents across multiple LLM providers for flexibility and cost optimization:
- Model architecture overview and provider-specific parameters
- Bedrock model setup with structured output capabilities
- Anthropic model configuration with thinking mode
- Ollama local deployment and OpenAI integration
- Metrics analysis and performance monitoring
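Switching providers for cost or capability reasons usually comes down to a small configuration layer. A minimal sketch of that idea, with made-up preset names and model IDs (real model identifiers and parameter names vary by provider and SDK):

```python
import os
from dataclasses import dataclass

@dataclass
class ModelConfig:
    provider: str
    model_id: str
    temperature: float = 0.7

# Illustrative presets only; substitute real model IDs for your providers.
PRESETS = {
    "anthropic": ModelConfig("anthropic", "claude-example-model"),
    "bedrock": ModelConfig("bedrock", "bedrock-example-model"),
    "ollama": ModelConfig("ollama", "local-example-model", temperature=0.5),
}

def pick_model() -> ModelConfig:
    """Choose a provider preset via MODEL_PROVIDER, defaulting to anthropic."""
    return PRESETS[os.environ.get("MODEL_PROVIDER", "anthropic")]

print(pick_model().provider)
```

Keeping provider choice in an environment variable lets the same lab code run against a cheap local model during development and a hosted model in production.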
Files: async_example.py, hook_example_1.py, hook_example_2.py
Implement custom logic to intercept and modify agent behavior at lifecycle points:
- Event-driven hook system and lifecycle management
- Before/after event handling and agent modifications
- Async iterators, callback handlers, and retry logic
- Tool hook examples and precision parameter setup
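The before/after hook pattern above can be sketched in plain Python: callbacks registered per lifecycle event, each allowed to inspect or modify a shared payload. This mirrors the idea, not the SDK's actual hook API:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class HookRegistry:
    """Minimal event-driven hook registry: callbacks keyed by event name."""

    def __init__(self):
        self._hooks: Dict[str, List[Callable]] = defaultdict(list)

    def add(self, event: str, callback: Callable) -> None:
        self._hooks[event].append(callback)

    def fire(self, event: str, payload: dict) -> dict:
        # Each callback may inspect or mutate the payload in place.
        for cb in self._hooks[event]:
            cb(payload)
        return payload

registry = HookRegistry()
registry.add("before_invocation", lambda p: p.setdefault("started", True))
registry.add("after_invocation", lambda p: p.update(status="done"))

payload = registry.fire("before_invocation", {"prompt": "hi"})
payload = registry.fire("after_invocation", payload)
print(payload)  # {'prompt': 'hi', 'started': True, 'status': 'done'}
```

Retry logic and tool-call interception fit the same shape: a "before" hook can rewrite parameters, an "after" hook can decide whether to re-run.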
Files: mcp_integration.py, self_extending_example.py, tools/
Extend agent capabilities with custom tools and external service integration:
- Built-in tools from strands-agents-tools library
- Custom tool creation using @tool decorator
- MCP server configuration for AWS Documentation and Pricing
- Self-extending agents and meta tooling capabilities
- Proper error handling and security implementation
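The `@tool` decorator idea is essentially: register a plain function, capturing its name, docstring, and parameters as a tool spec the model can call. A self-contained sketch of that mechanism (the real decorator in strands-agents-tools does more, e.g. schema generation):

```python
import inspect

TOOL_REGISTRY = {}

def tool(fn):
    """Register a plain function as an agent tool, recording its name,
    docstring, and parameter names as a simple tool spec."""
    TOOL_REGISTRY[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "params": list(inspect.signature(fn).parameters),
        "fn": fn,
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"Sunny in {city}"

spec = TOOL_REGISTRY["get_weather"]
print(spec["params"])       # ['city']
print(spec["fn"]("Paris"))  # Sunny in Paris
```

The docstring matters: it is what the model reads when deciding whether and how to call the tool, so write it for the model, not just for humans.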
Files: session_example.py, verify_session.py
Manage conversation state and context effectively across interactions:
- Context window challenges and management strategies
- Three conversation manager approaches (Null, SlidingWindow, Summarizing)
- Session state persistence and user isolation
- File-based and Amazon S3 session storage options
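Of the three strategies, the sliding-window approach is the easiest to picture: keep only the most recent N messages so the context never grows unboundedly. A standalone sketch of the idea (not the SDK's actual `SlidingWindowConversationManager` implementation):

```python
class SlidingWindow:
    """Keep only the most recent `window_size` messages to bound context length."""

    def __init__(self, window_size: int = 4):
        self.window_size = window_size
        self.messages = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Trim the oldest messages once the window overflows.
        if len(self.messages) > self.window_size:
            self.messages = self.messages[-self.window_size:]

manager = SlidingWindow(window_size=2)
for i in range(4):
    manager.add("user", f"message {i}")
print([m["content"] for m in manager.messages])  # ['message 2', 'message 3']
```

The trade-off against the summarizing strategy is clear from the sketch: trimming is cheap but loses old information outright, while summarizing preserves a compressed form at the cost of an extra model call.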
Files: memory_example.py
Build agents with long-term memory capabilities across conversations:
- Memory backends integration (FAISS, OpenSearch, Mem0)
- Web search integration with DuckDuckGo
- Memory storage, retrieval, and relevance scoring
- Amazon Bedrock Knowledge Bases integration
- Retention policies and privacy controls
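Relevance scoring is the heart of any memory backend: given a query, rank stored memories and retrieve the best matches. Real backends like FAISS use vector embeddings; a toy token-overlap (Jaccard) version shows the retrieval shape without any dependencies:

```python
def score(query: str, memory: str) -> float:
    """Jaccard overlap between the query's and the memory's token sets."""
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m) / len(q | m) if q | m else 0.0

memories = [
    "user prefers vegetarian food",
    "user lives in Berlin",
    "user works as a data engineer",
]
query = "what food does the user like"
ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
print(ranked[0])  # user prefers vegetarian food
```

Swapping `score` for cosine similarity over embeddings is what turns this sketch into a semantic memory store; the store/rank/retrieve loop stays the same.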
- Start with Lab 1 - Learn agent fundamentals with no setup required
- Progress through Labs 2-3 - Configure multiple providers and implement hooks
- Master Lab 4 - Integrate tools and MCP servers for external capabilities
- Build with Lab 5 - Implement sophisticated conversation management
- Advanced Lab 6 - Create agents with persistent memory systems
This course primarily uses the Anthropic Claude API, but all examples can be configured to work with:
- Amazon Bedrock (Claude, Llama, Titan, and other models)
- OpenAI (GPT-4, GPT-3.5-turbo)
- Ollama (Local model deployment)
- Other providers supported by Strands SDK
- Python 3.8+
- Virtual environment (recommended)
- Anthropic Claude API key (primary requirement)
- Additional API keys for specific labs:
- Amazon Bedrock (for AWS integration labs)
- OpenAI (optional alternative)
- Mem0 (for memory persistence Lab 6)
- Visit Anthropic Console
- Sign up for an account or log in
- Navigate to API Keys section
- Click Create Key and give it a name
- Copy your API key (starts with `sk-ant-`)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd sample-Advanced-Strands-Agents-with-MCP
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure environment variables: copy `.env.example` to `.env` and set your API key:

  ```bash
  # Required - Get from https://console.anthropic.com/
  ANTHROPIC_API_KEY=sk-ant-your_key_here

  # Optional - for specific labs only
  AWS_ACCESS_KEY_ID=your_aws_key         # For Lab 4 MCP integration
  AWS_SECRET_ACCESS_KEY=your_aws_secret  # For Lab 4 MCP integration
  AWS_SESSION_TOKEN=your_aws_token       # For Lab 4 MCP integration
  OPENAI_API_KEY=your_openai_key         # For Lab 2 model alternatives
  MEM0_API_KEY=your_mem0_key             # For Lab 6 memory persistence
  ```
Each lab can be run independently. Start with Lab 1 for the fundamentals:
Lab 1 - Agent Fundamentals (no API key required):

```bash
cd Lab1
python first_agent.py
```

Lab 2 - Model Providers:

```bash
cd Lab2
python anthropic_model.py
python bedrock_model.py
```

Lab 4 - MCP Integration:

```bash
cd Lab4
python mcp_integration.py
```

Lab 6 - Memory Agents:

```bash
cd Lab6
python memory_example.py
```
- Strands Agents Documentation
- Model Context Protocol Specification
- Amazon Bedrock Documentation
- Getting Started with Strands Agents (Course 1)
Common issues and solutions:
- API Key Issues - Ensure `ANTHROPIC_API_KEY` is set correctly in your `.env` file
  - Key should start with `sk-ant-`
  - Get your key from Anthropic Console
- Import Errors - Run `pip install -r requirements.txt` if you encounter missing dependencies
- AWS Credentials - Only needed for Lab 4 MCP integration (configure AWS CLI or environment variables)
- MCP Servers - Allow time for MCP servers to initialize before agent connections in Lab 4
- Memory Backends - Mem0 API key only required for Lab 6 memory persistence
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.