Introducing AWS Strands Agents: A New Paradigm in AI Agent Development
Create a Perplexity-like web search agent using Strands Agents and Amazon Bedrock SoTA foundation models, including Anthropic Claude Sonnet 4 and Amazon Nova Premier.
Introduction
The rapid evolution of generative AI has ushered in a transformative era for autonomous systems, with AI agents emerging as critical tools for automating complex workflows. At the forefront of this shift, Amazon Web Services (AWS) recently introduced Strands Agents, an open-source SDK designed to simplify agent development through a model-driven approach. This framework empowers developers to build sophisticated AI agents with minimal code while retaining complete control over customization, tool integration, and deployment. In parallel, AWS continues to enhance its managed service, Amazon Bedrock Agents, which streamlines enterprise-grade AI automation. This post explores Strands Agents’ capabilities, contrasts them with Amazon Bedrock Agents, and provides a hands-on tutorial for building a Perplexity-like web search agent with access to multiple tools.
The future of AI lies not in isolated models, but in collaborative agent networks that mirror human problem-solving.
Understanding Strands Agents
Strands Agents reimagines AI agent development by leveraging the inherent reasoning and planning capabilities of modern large language models (LLMs). Unlike traditional workflow-based frameworks that require rigid step-by-step definitions, Strands Agents adopts a model-first philosophy where developers specify three core components:
- Model: Supports Amazon Bedrock, Anthropic Claude, Meta Llama, Ollama, and other providers via LiteLLM.
- Tools: Integrate pre-built tools (e.g., calculator, Python REPL) or custom Python functions using the @tool decorator.
- Prompt: Define the agent’s task and behavior through natural language instructions.
Key Features of Strands Agents
- Provider Agnosticism: Deploy agents across cloud, hybrid, or local environments with support for multiple LLM providers.
- Production Readiness: Built-in observability via OpenTelemetry, metrics, logs, distributed tracing, and production-ready reference architectures for AWS Lambda, AWS Fargate, and Amazon EC2.
- Multi-Agent Collaboration: Coordinate specialized agents (e.g., Researchers, Analysts, and Writers) to solve complex problems through peer-to-peer or supervisor-led workflows.
- Tool Ecosystem: Access thousands of Model Context Protocol (MCP) servers and 20+ pre-built tools, including http_request for API interactions and retrieve for RAG workflows.
Agent Loop
According to the documentation, Strands Agents’ Agent Loop is a core concept in the Strands Agents SDK that enables intelligent, autonomous behavior through a cycle of reasoning, tool use, and response generation. It is the process by which a Strands agent processes user input, makes decisions, executes tools, and generates responses. It’s designed to support complex, multi-step reasoning and actions with seamless integration of tools and language models.
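The cycle the documentation describes can be sketched in plain Python. The sketch below is an illustrative, framework-agnostic simplification of the pattern, not the Strands SDK’s actual implementation; the model, message, and tool shapes are assumptions made for demonstration:

```python
# Illustrative sketch of an agent loop: the model reasons, optionally
# requests a tool, the tool result is fed back into the conversation,
# and the cycle repeats until the model produces a final answer.
# This is NOT the Strands SDK's internals, just the general pattern.

def agent_loop(user_input, model, tools, max_cycles=10):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_cycles):
        reply = model(messages)  # model decides: final answer or tool call
        messages.append({"role": "assistant", "content": reply})
        if reply.get("tool_use"):
            tool_name = reply["tool_use"]["name"]
            tool_args = reply["tool_use"]["input"]
            result = tools[tool_name](**tool_args)  # execute the tool
            messages.append({"role": "tool", "content": result})
        else:
            return reply["text"]  # final response; the loop ends
    raise RuntimeError("Agent exceeded maximum reasoning cycles")
```

Each pass through the loop is one "cycle" in the metrics we will examine later: a model invocation, an optional tool execution, and a decision about whether to continue.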
Strands Agents vs. Amazon Bedrock Agents
While similar in many ways, Strands Agents and Amazon Bedrock Agents differ considerably in their architectural design. Amazon Bedrock Agents use OpenAPI schemas with Action Groups to define the structure and parameters of the APIs they can invoke, enabling the agent to understand and interact with external services in a standardized way. AWS Lambda functions serve as the backend implementations for these actions. When an agent receives a user request, it consults the schema to determine which Lambda function to call, passes validated parameters, and then returns the Lambda response to the user. This approach allows for seamless integration of business logic, automatic data validation, and a scalable, serverless execution environment for agent-driven workflows.
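To make the Bedrock Agents flow concrete, here is a minimal sketch of a hypothetical Lambda backend for an action group. The event and response field names reflect the Bedrock Agents Lambda format as I understand it; the /claims/status path and claimId parameter are invented for illustration, so verify the exact shapes against the current AWS documentation:

```python
import json

# Hypothetical Lambda backend for a Bedrock Agents action group.
# The agent validates parameters against the OpenAPI schema, invokes
# this function, and returns the response body to the user.

def lambda_handler(event, context):
    # Parameters arrive as a list of {"name", "type", "value"} dicts
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Hypothetical business logic for an illustrative /claims/status path
    body = {"claimId": params.get("claimId", "unknown"), "status": "approved"}

    # Echo the routing fields back, per the Bedrock Agents response contract
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```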
Let’s compare the features of Strands Agents with those of Amazon Bedrock Agents:
Use Case Guidance
- Strands Agents: Ideal for developers requiring granular control, multi-provider LLM support, or custom toolchains (e.g., research assistants with domain-specific APIs).
- Bedrock Agents: Suited for enterprises prioritizing rapid deployment, managed security, and seamless integration with AWS services (e.g., insurance claim processing bots).
Demonstration: Web Search Agent
The following tutorial demonstrates how to create a web search agent using Strands Agents and models available on Amazon Bedrock. We will access state-of-the-art (SoTA) models via Amazon Bedrock, including Anthropic’s Claude Sonnet 4 and Amazon Nova Premier. Both are SoTA foundation models that deliver frontier intelligence and industry-leading performance.
Source Code
All of the open-source code for this article is published on GitHub in a Jupyter Notebook, which can be run locally, such as in Microsoft’s Visual Studio Code, or in the cloud.
Inspiration
This demonstration was inspired by the September 2024 AWS Machine Learning Blog, Integrate dynamic web content in your generative AI application using a web search API and Amazon Bedrock Agents, by Philipp Kaindl and Markus Rollwagen. In the post, the authors utilize Amazon Bedrock Agents with web search capabilities, integrating free external search APIs from Serper and Tavily AI, alongside a Bedrock Agent. We will recreate the same functionality using Strands Agents, converting the primary functionality into Strands Agents’ custom tools. We will combine these tools with Strands Agents’ pre-built tools to allow the agent to make complex decisions.
Prerequisites
To follow along with this blog post’s demonstration, you must register for free Serper and Tavily API keys.
Serper.dev (Google Search API): Serper provides low-cost Google Search API access with 2,500 free monthly searches:
- Visit serper.dev/signup
- Create an account using Google SSO or email
- Navigate to Dashboard → API Keys
- Copy your SERPER_API_KEY
Tavily (AI-Optimized Search): Tavily offers 1,000 free monthly credits for its AI-enhanced search API:
- Go to app.tavily.com/sign-up
- Authenticate with Google, GitHub, or email
- Access API Keys in your dashboard
- Copy your TAVILY_API_KEY
Getting Started
To get started, follow along with the Jupyter Notebook on GitHub. First, install the necessary Python packages, including those for Strands Agents and the AWS Python SDK, boto3, using the built-in %pip magic command, which runs the pip package manager within the current kernel.
%pip install strands-agents strands-agents-tools boto3 botocore -Uqqq
Next, after restarting the Notebook’s kernel, import the necessary package libraries and set up logging:
# Built-in libraries
import http.client
import json
import logging
import os
import urllib.error
import urllib.request
# Third-party libraries
import boto3
from strands import Agent, tool
from strands.models import BedrockModel
from strands_tools import calculator
# Configure logging
# Set up logging to output to console
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
Next, establish your AWS credentials for Strands Agents to call Amazon Bedrock models and AWS Secrets Manager, where you will store your Serper and Tavily API keys.
session = boto3.Session(
region_name="us-east-1",
aws_access_key_id="<YOUR_ACCESS_KEY_ID>",
aws_secret_access_key="<YOUR_SECRET_ACCESS_KEY>",
aws_session_token="<YOUR_SESSION_TOKEN>",
)
Strands Agents Quick Start
Let’s start with a quick preview of Strands Agents, as found in the Strands Agents GitHub Quick Start documentation. We will utilize the Amazon Nova Micro text-only model, available on Amazon Bedrock, which delivers the lowest-latency responses at very low cost. We are making Strands Agents’ pre-built calculator tool available to the agent, which it should call to respond to the request:
%%time
bedrock_model = BedrockModel(
boto_session=session,
model_config={
"model_id": "us.amazon.nova-micro-v1:0",
"max_tokens": 128,
"temperature": 0.2,
},
)
agent = Agent(model=bedrock_model, tools=[calculator])
result = agent("What is the square root of 1764?")
We should receive a result similar to the following, indicating that the agent called the pre-built calculator tool to respond correctly to the request with the answer, 42:
I'll help you find the square root of 1764. I can use the calculator tool to compute this value.
Tool #1: calculator
╭────────────────────────────────────────────── Calculation Result ───────────────────────────────────────────────╮
│ │
│ ╭───────────┬─────────────────────╮ │
│ │ Operation │ Evaluate Expression │ │
│ │ Input │ sqrt(1764) │ │
│ │ Result │ 42 │ │
│ ╰───────────┴─────────────────────╯ │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
The square root of 1764 is 42.
CPU times: user 48.1 ms, sys: 36.3 ms, total: 84.5 ms
Wall time: 3.27 s
I’ve added the built-in %%time magic command to all examples, so we understand how long the agent takes to respond, based on the complexity of the request and the model selected.
Metrics
Observability is a native feature of Strands Agents, including traces, metrics, and logs. According to Strands Agents’ documentation, metrics are essential for understanding agent performance, optimizing behavior, and monitoring resource usage. The Strands Agents SDK provides comprehensive metrics tracking capabilities that give you visibility into your agent’s operations. We can access individual metrics, like totalTokens and accumulated latencyMs, or use the get_summary() method to get all metrics as a nested JSON object:
# Access individual metrics through the AgentResult
print(f"Total tokens: {result.metrics.accumulated_usage['totalTokens']}")
print(f"Execution time: {sum(result.metrics.cycle_durations):.2f} seconds")
print(f"Tools used: {list(result.metrics.tool_metrics.keys())}")
# Access all metrics
metrics = result.metrics.get_summary()
print(json.dumps(metrics, indent=2))
Example results of get_summary() for the above calculator example:
{
"total_cycles": 2,
"total_duration": 1.777599811553955,
"average_cycle_time": 0.8887999057769775,
"tool_usage": {
"calculator": {
"tool_info": {
"tool_use_id": "tooluse_QV1M_S9zTYO7SpjlRS0kgQ",
"name": "calculator",
"input_params": {
"expression": "sqrt(1764)",
"mode": "evaluate"
}
},
"execution_stats": {
"call_count": 1,
"success_count": 1,
"error_count": 0,
"total_time": 0.0060520172119140625,
"average_time": 0.0060520172119140625,
"success_rate": 1.0
}
}
},
"traces": [
{
"id": "de6087b3-8e73-4182-9f4e-df02cb435948",
"name": "Cycle 1",
"raw_name": null,
"parent_id": null,
"start_time": 1748308812.505973,
"end_time": null,
"duration": null,
"children": [
{
"id": "dd23589a-ad06-4f49-9c4e-0158cb2d5cb0",
"name": "stream_messages",
"raw_name": null,
"parent_id": "de6087b3-8e73-4182-9f4e-df02cb435948",
"start_time": 1748308812.506156,
"end_time": 1748308815.285909,
"duration": 2.7797529697418213,
"children": [],
"metadata": {},
"message": {
"role": "assistant",
"content": [
{
"text": "I can calculate the square root of 1764 for you using the calculator tool."
},
{
"toolUse": {
"toolUseId": "tooluse_QV1M_S9zTYO7SpjlRS0kgQ",
"name": "calculator",
"input": {
"expression": "sqrt(1764)",
"mode": "evaluate"
}
}
}
]
}
},
{
"id": "f5c19783-1934-411e-93c4-4edcc53b6e16",
"name": "Tool: calculator",
"raw_name": "calculator - tooluse_QV1M_S9zTYO7SpjlRS0kgQ",
"parent_id": "de6087b3-8e73-4182-9f4e-df02cb435948",
"start_time": 1748308815.286336,
"end_time": 1748308815.292402,
"duration": 0.006066083908081055,
"children": [],
"metadata": {
"toolUseId": "tooluse_QV1M_S9zTYO7SpjlRS0kgQ",
"tool_name": "calculator"
},
"message": {
"role": "user",
"content": [
{
"toolResult": {
"status": "success",
"content": [
{
"text": "Result: 42"
}
],
"toolUseId": "tooluse_QV1M_S9zTYO7SpjlRS0kgQ"
}
}
]
}
},
{
"id": "bd9f2dbc-73d6-4a0f-b3d7-f0fbaee5d470",
"name": "Recursive call",
"raw_name": null,
"parent_id": "de6087b3-8e73-4182-9f4e-df02cb435948",
"start_time": 1748308815.292643,
"end_time": 1748308817.070294,
"duration": 1.7776508331298828,
"children": [],
"metadata": {},
"message": null
}
],
"metadata": {},
"message": null
},
{
"id": "f32cd814-2487-4271-8a94-81c7257f4834",
"name": "Cycle 2",
"raw_name": null,
"parent_id": null,
"start_time": 1748308815.29269,
"end_time": 1748308817.0702899,
"duration": 1.777599811553955,
"children": [
{
"id": "d9f73eaf-984d-40cd-93c3-81820f887522",
"name": "stream_messages",
"raw_name": null,
"parent_id": "f32cd814-2487-4271-8a94-81c7257f4834",
"start_time": 1748308815.292861,
"end_time": 1748308817.070283,
"duration": 1.7774219512939453,
"children": [],
"metadata": {},
"message": {
"role": "assistant",
"content": [
{
"text": "\n\nThe square root of 1764 is 42."
}
]
}
}
],
"metadata": {},
"message": null
}
],
"accumulated_usage": {
"inputTokens": 3929,
"outputTokens": 77,
"totalTokens": 4006
},
"accumulated_metrics": {
"latencyMs": 4095
}
}
Custom Tool Calling
Before we build our web search agent, let’s look at one more example from the Strands Agents GitHub Features at a Glance documentation. In this simple example, we define the Python method, word_count(), as a tool by using Strands Agents’ @tool decorator. Similar to the previous example, we then make the word_count tool available to the agent, which it can use if necessary to respond to the request. We will continue to use the Amazon Nova Micro model to save costs:
%%time
@tool
def word_count(text: str) -> int:
"""Count words in text.
This docstring is used by the LLM to understand the tool's purpose.
"""
return len(text.split())
bedrock_model = BedrockModel(
boto_session=session,
model_config={
"model_id": "us.amazon.nova-micro-v1:0",
"max_tokens": 512,
"temperature": 0.2,
},
)
agent = Agent(model=bedrock_model, tools=[word_count])
result = agent("How many words are in this sentence?")
We should receive a result similar to the following, indicating the agent called the custom word_count tool to respond correctly to the request with the answer: seven words:
I can help you count the words in your sentence. Let me use the word_count tool to do this.
Tool #2: word_count
There are 7 words in the sentence "How many words are in this sentence?"
CPU times: user 25.6 ms, sys: 14.3 ms, total: 39.9 ms
Wall time: 3.64 s
Securing API Keys
To start building the web search agent, we will first use AWS Secrets Manager to securely store the Serper and Tavily API keys. AWS Secrets Manager is a fully managed service that securely stores, rotates, and controls access to sensitive information like database credentials, API keys, and tokens. Although this step is not required by Strands Agents, it follows AWS security best practices. Make sure to replace the two SecretString placeholders below with your API keys before running the code:
# Initialize the Secrets Manager client
secrets_manager = session.client("secretsmanager")
try:
# Create Serper API key secret
serper_response = secrets_manager.create_secret(
Name="SERPER_API_KEY",
Description="The API secret key for Serper.",
SecretString="<YOUR_SERPER_API_KEY>",
)
logger.info("Serper secret created: %s", serper_response["ARN"])
# Create Tavily API key secret
tavily_response = secrets_manager.create_secret(
Name="TAVILY_API_KEY",
Description="The API secret key for Tavily AI.",
SecretString="<YOUR_TAVILY_API_KEY>",
)
logger.info("Tavily secret created: %s", tavily_response["ARN"])
except secrets_manager.exceptions.ClientError as e:
if e.response["Error"]["Code"] == "ResourceExistsException":
logger.warning("Secret already exists.")
else:
logger.error("An unexpected error occurred: %s", e)
Retrieving API Keys
Once your API keys are securely stored in AWS Secrets Manager, we must retrieve them for use by the custom tools we will create:
def get_from_secretstore(key: str) -> str:
try:
secret_value = secrets_manager.get_secret_value(SecretId=key)
logger.info(f"Retrieved {key} from Secrets Manager.")
return secret_value["SecretString"]
except secrets_manager.exceptions.ClientError as e:
if e.response["Error"]["Code"] == "ResourceNotFoundException":
logger.warning(f"Secret {key} was not found in Secrets Manager.")
else:
logger.error(f"Could not get {key} from Secrets Manager: {e}")
# Retrieve API keys from Secrets Manager
SERPER_API_KEY = get_from_secretstore("SERPER_API_KEY")
TAVILY_API_KEY = get_from_secretstore("TAVILY_API_KEY")
Defining Our Custom Web Search Tools
Next, we will define two custom web search tools, one for using Serper, google_search, and one for using Tavily, tavily_ai_search. Similar to the previous word count example, we define our tools by annotating two Python methods with Strands Agents’ @tool decorator. The Python docstrings play a critical role here, describing each tool’s intended functionality to the agent.
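To see why the docstring matters, consider how a @tool-style decorator can introspect a function to build the specification the LLM reads when deciding which tool to call. The following is a framework-agnostic sketch of that mechanism, not the Strands SDK’s actual code; describe_tool is a hypothetical helper:

```python
import inspect

# Illustrative sketch (not the Strands SDK implementation) of how a
# @tool-style decorator can harvest a function's docstring and type
# hints to build a tool specification for the LLM.

def describe_tool(func):
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func),  # docstring becomes the tool description
        "parameters": {
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
            if param.annotation is not inspect.Parameter.empty
        },
    }

def word_count(text: str) -> int:
    """Count words in text."""
    return len(text.split())

spec = describe_tool(word_count)
# spec now carries the name, description, and typed parameters the
# model would use to decide whether and how to call the tool.
```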
@tool
def google_search(search_query: str, target_website: str = "") -> str:
"""
This tool performs a Google search using the Serper API.
For targeted news, like 'what are the latest news in Austria' or similar.
Args:
search_query (str): The query to search for.
target_website (str, optional): If provided, restricts the search to this website.
Returns:
str: The JSON response from the Serper API containing search results.
"""
if SERPER_API_KEY is None:
raise ValueError(
"SERPER_API_KEY is not set. Please set the environment variable."
)
if not search_query:
raise ValueError("Search query cannot be empty.")
if target_website:
search_query += f" site:{target_website}"
conn = http.client.HTTPSConnection("google.serper.dev")
payload = json.dumps({"q": search_query})
headers = {"X-API-KEY": SERPER_API_KEY, "Content-Type": "application/json"}
search_type = "news" # "news", "search",
try:
conn.request("POST", f"/{search_type}", payload, headers)
response = conn.getresponse()
response_data = response.read().decode("utf-8")
return response_data
except http.client.HTTPException as e:
logger.error(
f"Failed to retrieve search results from Serper API, error: {e}"
)
return ""
@tool
def tavily_ai_search(search_query: str, target_website: str = "") -> str:
"""
This tool performs a search using the Tavily AI Search API.
To retrieve information via the Internet or for topics that the LLM does not know about and intense research is needed.
Args:
search_query (str): The query to search for.
target_website (str, optional): If provided, restricts the search to this website.
Returns:
str: The JSON response from the Tavily AI Search API containing search results.
"""
if TAVILY_API_KEY is None:
raise ValueError(
"TAVILY_API_KEY is not set. Please set the environment variable."
)
if not search_query:
raise ValueError("Search query cannot be empty.")
base_url = "https://api.tavily.com/search"
headers = {"Content-Type": "application/json", "Accept": "application/json"}
payload = {
"api_key": TAVILY_API_KEY,
"query": search_query,
"search_depth": "advanced",
"include_images": False,
"include_answer": False,
"include_raw_content": False,
"max_results": 3,
"include_domains": [target_website] if target_website else [],
"exclude_domains": [],
}
data = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(base_url, data=data, headers=headers)
try:
response = urllib.request.urlopen(request)
response_data = response.read().decode("utf-8")
return response_data
except urllib.error.HTTPError as e:
logger.error(
f"Failed to retrieve search results from Tavily AI Search, error: {e.code}"
)
return ""
Testing Tavily Search Tool
First, let’s test Strands Agents’ use of our custom tavily_ai_search tool. Although both web search tools are available, the agent should have the intelligence to call the tavily_ai_search tool to respond to the request. This time, we will try the recently released Anthropic Claude Sonnet 4 model, available on Amazon Bedrock since May 22, 2025:
%%time
bedrock_model = BedrockModel(
boto_session=session,
model_config={
"model_id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"max_tokens": 256,
"temperature": 0.1,
},
)
agent = Agent(model=bedrock_model, tools=[google_search, tavily_ai_search])
result = agent("What is the latest Anthropic Claude model?")
Based on the request, the agent should respond as follows, indicating that it used the tavily_ai_search tool:
To answer your question about the latest Anthropic Claude model, I'll need to search for current information. Let me do that for you.
Tool #4: tavily_ai_search
Based on my search, I can provide you with information about the latest Anthropic Claude models:
The latest Anthropic Claude models are the Claude 4 series, which were released in May 2025. Specifically:
1. **Claude Opus 4** (model ID: claude-opus-4-20250514)
- This is Anthropic's most powerful model in the Claude 4 series
- According to Anthropic, they claim it's "the world's best" in certain capabilities
2. **Claude Sonnet 4** (model ID: claude-sonnet-4-20250514)
- This is another model in the Claude 4 series
These latest Claude 4 models reportedly have enhanced reasoning capabilities, particularly for multi-step reasoning tasks. The Claude 4 series follows earlier models including:
- Claude Sonnet 3.7 (released February 2025)
- Claude Sonnet 3.5 and Claude Haiku 3.5 (released October 2024)
- Claude Opus 3 (released February 2024)
Anthropic has also improved their Claude Code tool, which now integrates with IDEs and offers an SDK for connecting with third-party applications.
CPU times: user 109 ms, sys: 42.1 ms, total: 151 ms
Wall time: 12.7 s
After a few tests, I lowered the temperature to 0.1, since when I first ran the above code with a temperature of 0.2, the agent incorrectly indicated that Claude 4 was released in May 2024, not May 2025. No matter how good the model, always double-check the accuracy of the results.
Testing Serper Search Tool
Next, let’s test Strands Agents’ use of our custom google_search tool. Again, although both web search tools are available, the agent should have the intelligence to call the google_search tool this time to respond to the request. We will again use the recently released Anthropic Claude Sonnet 4 model:
%%time
bedrock_model = BedrockModel(
boto_session=session,
model_config={
"model_id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
"max_tokens": 512,
"temperature": 0.1,
},
)
agent = Agent(model=bedrock_model, tools=[google_search, tavily_ai_search])
result = agent("What are the latest top 5 news headlines?")
Based on the request, the agent should respond as follows, indicating that it used the google_search tool to locate five current news items, which it did:
I'll help you find the latest top 5 news headlines. Let me search for that information for you.
Tool #5: google_search
Based on my search, here are the latest top 5 news headlines:
1. **Trump speaks at Arlington National Cemetery to mark Memorial Day** - President Trump is participating in a wreath-laying ceremony at Arlington National Cemetery for Memorial Day. (NBC News, 4 hours ago)
2. **Kremlin calls Trump 'emotional' after US president says Putin is 'crazy'** - The Kremlin claimed Donald Trump was showing signs of "emotional overload" following his comments calling Vladimir Putin "absolutely crazy." (BBC, 4 hours ago)
3. **Deadly strikes hit Gaza as new aid delivery system remains in limbo** - A second official has resigned from the Gaza Humanitarian Foundation, a private aid organization backed by the U.S. and Israel aimed at improving humanitarian assistance. (The Washington Post, 50 minutes ago)
4. **Canada welcomes King Charles against a backdrop of tensions with Trump** - King Charles' visit to Canada is being viewed as an opportunity for the nation to strengthen its sovereignty amid threats from President Trump. (NPR, 4 hours ago)
5. **Palestinian official says Hamas agrees to Gaza proposal, Israel dismisses it** - A Palestinian official stated that Hamas has accepted a U.S. proposal for a hostage deal and ceasefire in Gaza, though U.S. special envoy Steve Witkoff and Israel have rejected this claim. (Reuters, 1 hour ago)
These headlines represent the most recent major news stories being reported by major news outlets today.
CPU times: user 183 ms, sys: 61.5 ms, total: 244 ms
Wall time: 13.4 s
Multi-tool Use
Let’s finish the demonstration by using Strands Agents to make multiple calls to multiple tools to respond to a slightly more complex request. To answer completely, the agent must call at least two of the available tools, most likely more than once. This time, I am using the Amazon Nova Premier model, the latest Nova model to be released on Amazon Bedrock. This model is the Nova family’s most capable model for complex tasks and a teacher for model distillation:
%%time
bedrock_model = BedrockModel(
boto_session=session,
model_config={
"model_id": "us.amazon.nova-premier-v1:0",
"max_tokens": 512,
"temperature": 0.2,
},
)
agent = Agent(model=bedrock_model, tools=[calculator, google_search, tavily_ai_search])
result = agent(
"What is the square root of the current US National Debt divided by the current US population in USD?"
)
print("\n")
Based on the request, the agent should respond as follows, indicating it used multiple tools to generate a complete answer. Based on the output, the agent first used the tavily_ai_search tool twice, once to find the current US National Debt ($35.5 trillion) and once to find the current US population (340.1 million). Then, the agent used the built-in calculator tool to find the square root of the debt divided by the population: sqrt(35500000000000 / 340100000) = 323. Finally, it returned the answer as requested, approximately $323 per person.
I'll help you calculate the square root of the US National Debt divided by the US population. To do this, I'll need to:
1. Find the current US National Debt
2. Find the current US population
3. Divide the debt by the population
4. Take the square root of that result
Let me search for the most recent figures for both values.
Tool #15: tavily_ai_search
Now I'll search for the current US population:
Tool #16: tavily_ai_search
Now I have the necessary information to perform the calculation. Let me use the calculator tool to solve this problem step by step:
Tool #17: calculator
╭────────────────────────────────────────────── Calculation Result ───────────────────────────────────────────────╮
│ │
│ ╭───────────┬──────────────────────────────────╮ │
│ │ Operation │ Evaluate Expression │ │
│ │ Input │ sqrt(35500000000000 / 340100000) │ │
│ │ Result │ 323 │ │
│ ╰───────────┴──────────────────────────────────╯ │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Let me walk through the calculation steps:
1. Current US National Debt: $35.5 trillion (as of September 30, 2024)
- Written numerically: $35,500,000,000,000
2. Current US Population: 340.1 million (as of July 2024 Census Bureau data)
- Written numerically: 340,100,000
3. Divide the National Debt by Population:
$35,500,000,000,000 ÷ 340,100,000 = $104,381.36
4. Take the square root of the result:
√$104,381.36 = $323
Therefore, the square root of the US National Debt divided by the US population is approximately $323 per person.
This represents a mathematical value that doesn't have a direct economic interpretation, but it's interesting to note that the result of this calculation is $323, which means that if we took the square root of the per-capita debt, each American would mathematically be associated with this amount.
CPU times: user 130 ms, sys: 54.1 ms, total: 184 ms
Wall time: 25.5 s
Although the Amazon Nova Premier model is considered the best-performing Nova model, I found that Amazon Nova Pro often performed better on moderately complex tasks using Strands Agents. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Use the best model to meet your requirements, which may not always be the largest or most expensive. For simple tasks, use smaller, faster, cheaper models.
Conclusion
As demonstrated in this post’s simple example, Strands Agents represents a significant shift in AI development, democratizing access to sophisticated agentic systems while maintaining enterprise-grade robustness. Its open-source nature fosters community innovation, as seen in its integrations with Anthropic, Meta, Ollama, and LiteLLM.
Organizations weighing Strands Agents against Bedrock Agents must decide between control and convenience. Strands excels in scenarios demanding custom toolchains, multi-LLM strategies, or hybrid deployments, while Bedrock Agents accelerate time-to-value for AWS-centric workflows.
If you are not yet a Medium member and want to support authors like me, please sign up here: https://garystafford.medium.com/membership.
This blog represents my viewpoints and not those of my employer, Amazon Web Services (AWS). All product names, images, logos, and brands are the property of their respective owners.
