
Amazon's Agent Suite Gets a Major Update, Unveiling Nine New Features to Secure the Title of Strongest Agent Platform

Zhidongxi (ZDXX), 2025-12-04 08:19
To build the most powerful intelligent agent platform, the largest cloud giant is piling on features.

According to Zhidongxi's December 3 report from Las Vegas, at AWS re:Invent, the company's annual cloud computing conference, Swami Sivasubramanian, Vice President of Agentic AI at AWS, delivered a keynote. He laid out why Amazon Web Services (AWS) is the best place to build and operate agents and announced several new tools for agent development.

The Strands Agents SDK agent framework has added support for TypeScript and for edge devices, making it easier to build agents and extending the framework into edge domains such as automotive, gaming, and robotics.

The Amazon Bedrock AgentCore agent platform has launched several innovations: a policy feature that lets teams set boundaries on an agent's tool usage, an evaluation feature that helps teams understand how agents perform in real-world scenarios, and an episodic memory feature that lets agents learn from experience and keep improving.

The fully managed Amazon Bedrock AI platform has added a reinforcement fine-tuning feature that provides automated fine-tuning. The Amazon SageMaker AI platform has added a model customization feature that supports deep, low-level adjustments and simplifies the process of building efficient AI.

The newly added checkpointless training feature in Amazon SageMaker HyperPod enables large-scale, low-cost training. The overall goal is to maximize the value and return on investment (ROI) that customers get from these workloads in production.

In addition, the Amazon Nova Act service, which focuses on agent reliability, is now generally available, enabling large-scale production deployment of agents.

01. Strands Agents SDK Adds Two New Features, Supporting TypeScript and Edge Devices

Strands Agents SDK is an open-source AI agent framework built around model-driven orchestration. Since its release, it has been downloaded 5.299 million times.
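To make the model-driven pattern concrete, here is a minimal sketch using the SDK's Python interface; the new TypeScript support described below follows the same agent-plus-tools shape. The word_count tool is purely illustrative, and the import and constructor names are taken from the open-source SDK as published, so they may differ across versions.

```python
# Minimal sketch of a model-driven Strands agent (Python interface).
# The word_count tool is illustrative; import and constructor names follow
# the open-source SDK as published and may differ across versions.
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Orchestration is model-driven: the model decides when to call the tool.
agent = Agent(
    system_prompt="You are a concise writing assistant.",
    tools=[word_count],
)

result = agent("How many words are in: 'Build and operate agents on AWS'?")
print(result)
```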

Today, Amazon Web Services announced two new features:

Firstly, it supports TypeScript (in preview). TypeScript is one of the most popular programming languages in the world, and this support will make it easier to build full-stack agent applications.

Strands Agents provides comprehensive support for core TypeScript features, including type safety, async/await syntax, and modern JavaScript/TypeScript programming paradigms. Developers can use the AWS CDK (Cloud Development Kit) to build a complete agent stack in TypeScript end to end.

Secondly, it supports edge devices. Customers can use the Strands Agents SDK to build autonomous AI agents that can run on small devices, enabling agent application scenarios in fields such as automotive, gaming, and robotics and delivering intelligent services in the real world.

02. Amazon Bedrock AgentCore Adds Policy, Evaluation, and Episodic Memory Features to Aid Next-Generation Agent Development

Taking agents into production is fraught with difficulties. It requires deploying agents rapidly at scale, enabling them to remember past interactions and learn, putting identity and access control on every agent and tool, governing how agents use tools to execute complex workflows, and finally, observing and debugging problems.

Complexity can slow down innovation. How can we help customers build and deploy secure, production-grade agents at scale? This is the core value of Amazon Bedrock AgentCore.

Amazon Bedrock AgentCore is an agent platform designed for building and deploying agents securely and at scale, and it is compatible with a range of frameworks and models. The preview was first released at the AWS New York Summit in July this year, has iterated rapidly since, and became generally available in October.

For enterprises to move agents from prototype to production, they need dedicated infrastructure that is secure, reliable, scalable, and suited to the non-deterministic nature of agents. Agents need an underlying layer that can scale dynamically, support long-running workloads, and store and retrieve context instantly and securely.

However, early adopters currently have to invest significant resources to build such infrastructure from scratch, which is time-consuming, labor-intensive, and seriously slows the development cycle.

Amazon Bedrock AgentCore addresses this challenge as a fully managed service. It includes a set of key components that provide everything needed to run production-grade agents at scale, including:

Runtime: Serverless, secure, and isolated runtime computing resources;

Observability: Observability tools (open source and compatible with the OpenTelemetry protocol) to help customers understand the agent's operating status;

Memory: Memory function that enables agents to interact with users over the long term, remember past interactions, and build intelligent, personalized applications;

Code Interpreter: Code interpreter that allows agents to access previously unavailable tools by writing code;

Gateway: Gateway function that supports connecting to systems inside and outside AWS;

Managed Browser and Identity: managed web access and identity authentication functions that establish which agent is acting and on whose behalf, which ties closely into governance and observability.

Customers can build agents with Amazon Bedrock Agent or pair AgentCore with any open-source agent-building framework. The platform has been widely adopted, with developer downloads exceeding 2 million so far.
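As a hedged sketch of what hosting an agent on these components can look like, the snippet below wraps a simple agent behind a Runtime entrypoint. It assumes the bedrock-agentcore Python starter package's app/entrypoint pattern; the import path and method names are recalled from its documentation and may differ in the shipped SDK.

```python
# Hedged sketch: exposing an agent through an AgentCore Runtime entrypoint.
# Assumes the bedrock-agentcore Python starter package's app/entrypoint
# pattern; import paths and method names may differ from the shipped SDK.
from bedrock_agentcore.runtime import BedrockAgentCoreApp
from strands import Agent

app = BedrockAgentCoreApp()
agent = Agent(system_prompt="You answer questions about order status.")

@app.entrypoint
def handler(payload: dict) -> str:
    # Runtime supplies the serverless, isolated compute; Observability,
    # Memory, Gateway, and Identity attach around this entrypoint as
    # managed services rather than code the team has to build.
    return str(agent(payload.get("prompt", "")))

if __name__ == "__main__":
    app.run()
```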

On this basis, Bedrock AgentCore has added three new features:

Firstly, Policy in AgentCore, the policy feature, sets clear boundaries for an agent's operations. It proactively intercepts unauthorized agent actions through real-time, deterministic controls that sit outside the agent's code.

Enterprises can create detailed policies simply by describing rules in natural language. For each agent they can define which tools and data it may access, which operations it may perform, and under what conditions, for example: "Reject all customer refund requests when the refund amount exceeds $1,000."

These policies are evaluated before the agent executes to ensure that the agent always operates within the set rules.
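The following is a conceptual sketch of that pre-execution check, not the AgentCore Policy API: a deterministic guard that runs outside the agent's own code and blocks a refund tool call above the $1,000 threshold described above.

```python
# Conceptual sketch only (not the AgentCore Policy API): a deterministic
# guard evaluated before a tool call executes, mirroring the rule
# "reject customer refund requests over $1,000".
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToolCall:
    tool_name: str
    arguments: dict

def refund_policy(call: ToolCall) -> bool:
    """Return True if the call is allowed, False if it must be blocked."""
    if call.tool_name == "issue_refund" and call.arguments.get("amount", 0) > 1000:
        return False
    return True

def guarded_execute(call: ToolCall, execute: Callable[[ToolCall], Any]) -> Any:
    # The check lives outside the agent's code, so a misbehaving or
    # prompt-injected agent cannot talk its way around it.
    if not refund_policy(call):
        raise PermissionError(f"Policy violation: {call.tool_name} blocked")
    return execute(call)
```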

Secondly, AgentCore Evaluation, the evaluation feature, helps developers continuously assess agent quality based on observed behavior, ensuring agents behave as expected.

The AgentCore evaluation feature requires no management of complex infrastructure and provides 13 preset evaluators covering common quality dimensions such as correctness, usability, tool selection accuracy, security, goal completion rate, and context relevance. Developers can also write custom evaluators using their preferred large language models and prompts.
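As an illustration of what a custom evaluator can look like, here is a conceptual sketch, not the AgentCore Evaluation API: an LLM-as-judge check for one dimension, tool selection accuracy, where invoke_model is a placeholder for whichever model client the team prefers.

```python
# Conceptual sketch of a custom evaluator (not the AgentCore Evaluation API):
# an LLM-as-judge prompt that scores a single quality dimension of one step
# in an agent trace. invoke_model is a placeholder for any LLM client.
from typing import Callable

JUDGE_PROMPT = """You are grading an AI agent's tool selection.
User request: {request}
Tool the agent chose: {tool}
Reply with only a score from 0 (wrong tool) to 1 (best possible tool)."""

def tool_selection_evaluator(request: str, tool: str,
                             invoke_model: Callable[[str], str]) -> float:
    prompt = JUDGE_PROMPT.format(request=request, tool=tool)
    return float(invoke_model(prompt).strip())
```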

Thirdly, the episodic memory capability in AgentCore Memory automatically saves key events and states during interactions, helping agents learn from past experience and improve their decision-making.

It builds on short-term and long-term memory. Short-term memory records the current interaction, helping the agent track its real-time state with a user or operator. Long-term memory tracks interaction history over time. Episodic memory overlays context from specific interaction scenarios onto these memories, enabling agents to give smarter suggestions.

Here is a real-life example. Suppose the first time you use a booking agent, it books a car for you with a 45-minute buffer to catch a flight. You miss the flight because you were looking after your family and have to rebook. With episodic memory, the system records this interaction.

When you book a flight again six months later, the agent remembers that you need more preparation time and automatically reserves a two-hour window for the car instead of 45 minutes. This capability is deeply integrated into AgentCore.
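To show the shape of that behavior, here is a conceptual sketch, not the AgentCore Memory API: an episode is treated as a small record of what happened and what to do differently, which a later session folds into its plan.

```python
# Conceptual sketch (not the AgentCore Memory API): an "episode" records what
# happened in one interaction so that a later session can adjust its plan.
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    outcome: str
    lesson: str

@dataclass
class EpisodicMemory:
    episodes: list[Episode] = field(default_factory=list)

    def record(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def lessons_for(self, task: str) -> list[str]:
        return [e.lesson for e in self.episodes if e.task == task]

memory = EpisodicMemory()
memory.record(Episode(
    task="airport_pickup",
    outcome="missed_flight",
    lesson="Reserve a two-hour buffer instead of 45 minutes",
))

# Six months later, the booking agent folds this lesson into its next plan.
print(memory.lessons_for("airport_pickup"))
```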

The core goal of these features is to accelerate the process of bringing agents from concept to large - scale production.

03. New Features of Amazon Bedrock and SageMaker AI: Simplify Model Customization Process and Build Faster and More Efficient Agents

As agent applications spread and model sizes grow in production, efficiency has become a core concern for customers. Enterprise customers face a challenge with off-the-shelf models: they are powerful but often not optimized for efficiency and scale, leading to unnecessary cost, slower response times, and wasted resources.

Efficiency is not just about cost; it involves several key factors: latency (whether the agent can respond quickly enough for real-time interaction), scalability (whether it can handle the expected load), and agility (whether it can be iterated and adjusted quickly as the application evolves and customer interactions change).

The key to solving this problem lies in customization: By customizing small, dedicated models to handle the tasks that agents perform most frequently, faster and more accurate responses can be achieved at a lower cost.

Previously, advanced customization techniques such as reinforcement learning required deep machine-learning expertise, large-scale infrastructure, and development cycles of up to several months.

In response, Amazon Web Services announced new features in Amazon Bedrock and Amazon SageMaker AI that let developers use advanced model-customization techniques.

1. Reinforcement Fine-Tuning in Amazon Bedrock: Improve Model Accuracy

Amazon Web Services announced a new feature in Amazon Bedrock: Reinforcement Fine-Tuning (RFT).

This feature simplifies model customization. The core goal is to let customers improve model accuracy easily, without deep expertise in machine learning or AI model development.

On average, it can improve a base model's accuracy by 66%, helping customers get better results from smaller, faster, more cost-effective models instead of relying on large, expensive ones.

The workflow is simple: developers select a base model, point to invocation logs or upload a dataset, and choose a reward function; the automated workflow in Amazon Bedrock then handles the fine-tuning to maximize that reward function's score.
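To make "choose a reward function" concrete, here is a conceptual sketch of the kind of verifiable reward such a workflow can optimize against. It is not a Bedrock API, just an illustrative grader for a model that should emit structured JSON.

```python
# Conceptual sketch of a verifiable reward function for RFT (not a Bedrock
# API): score a model response against an expected structured answer, and
# the automated workflow tunes the model to maximize this score.
import json

def reward(response: str, expected: dict) -> float:
    """Return 1.0 for an exact field-by-field match, partial credit otherwise."""
    try:
        parsed = json.loads(response)
    except json.JSONDecodeError:
        return 0.0  # unparseable output earns no reward
    matched = sum(1 for key, value in expected.items() if parsed.get(key) == value)
    return matched / max(len(expected), 1)

# Example: grading a model that extracts order fields from a support email.
print(reward('{"order_id": "A12", "refund_requested": false}',
             {"order_id": "A12", "refund_requested": False}))  # -> 1.0
```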

In this way, customers can obtain customized models that better fit their needs without specialist expertise.

At launch, the reinforcement fine-tuning feature in Amazon Bedrock supports the Amazon Nova 2 Lite model, with more models to follow.

2. Model Customization in Amazon SageMaker AI: Faster, Lower-Cost, Higher-Accuracy Models

There is also a group of customers who are domain experts and want more control over AI workflows.

Although the reinforcement fine-tuning feature in Amazon Bedrock is very convenient, some customers want to make deep, low-level customizations. So Amazon Web Services has added a Model Customization capability to the SageMaker AI platform for large-scale model training and customization.

Since its launch in 2017, SageMaker AI has been the core platform on which customers develop AI and machine-learning models. To meet deep customization needs, Amazon Web Services has made this process simpler in SageMaker: customers do not need to manage infrastructure, and the platform can generate synthetic data for them to improve application quality.

Amazon Web Services provides two experience modes:

Firstly, the agent-driven mode (in preview): an agent guides developers through model customization. After customers describe their needs in natural language, the agent walks them through the entire process, from generating synthetic data to model evaluation.

Secondly, the self-guided mode: suited to developers who prefer to work independently and want fine-grained control and flexibility. This mode requires no infrastructure management and provides the appropriate tools for developers to choose customization techniques and tune the relevant parameters.

Through these two modes, developers can use advanced customization techniques, including reinforcement learning from AI feedback, reinforcement learning with verifiable rewards, supervised fine-tuning, and direct preference optimization.
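As one small illustration of those techniques, direct preference optimization trains on records that pair a prompt with a preferred and a rejected completion; the sketch below shows such a record with illustrative field names, not a SageMaker data schema.

```python
# Conceptual sketch of a direct preference optimization (DPO) training record:
# each example pairs a prompt with a chosen and a rejected completion, and
# tuning nudges the model toward the chosen answers. Field names are
# illustrative, not a SageMaker data schema.
import json

record = {
    "prompt": "Summarize the customer's complaint in one sentence.",
    "chosen": "The customer was charged twice for the same order.",
    "rejected": "The customer wrote a long email about billing.",
}

print(json.dumps(record))
```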

The new SageMaker AI features will support Amazon Nova as well as popular open-source models such as Llama, Qwen, DeepSeek, and gpt-oss.

Through this range of interfaces, Amazon Web Services aims to give professional customers all the functionality, control, and flexibility they need, letting them customize models for the best performance at the lowest cost with solutions that match their level of expertise and preferred way of working.

04. SageMaker HyperPod Checkpointless Training: Recover from Model Training Failures in Minutes

While working with customers on model customization and training, Amazon Web Services saw that there was still room for improvement: model training is costly and the process is cumbersome.

Typically, customers need to run large GPU clusters, which are expensive to operate; the losses are even greater when clusters sit idle or fail and cannot do useful work.

To solve this problem, Amazon Web Services developed Amazon SageMaker HyperPod