The 5 Most Impactful Announcements from AWS re:Invent 2025
Leandro Pontes Berleze | Dec 16, 2025
AWS re:Invent 2025, held in Las Vegas in early December, was one of the most anticipated editions of the conference in years, and it delivered. More than 60,000 builders, engineers, architects, product leaders, and AI practitioners filled the city as AWS unveiled a wave of announcements that made one message unmistakably clear: the future of cloud is agentic.
For the first time, AWS positioned AI not simply as a collection of assistant-style tools, but as autonomous agents capable of working independently for hours or even days, grounded firmly in real workflows. It marked a shift from AI as a helpful sidekick to AI as a true collaborator.
At the same time, re:Invent 2025 wasn’t only about AI. The event brought major investments in compute, infrastructure, security, hybrid cloud, and DevOps, a reminder that AI depends on strong foundations. As CTO Werner Vogels emphasized in his final re:Invent keynote, AI won’t make builders obsolete:
“Will AI take my job? Maybe. Will AI make me obsolete? Absolutely not, if you evolve.”
This year’s announcements reflect that philosophy: AWS is giving enterprises powerful tools, but the value will emerge only through thoughtful adoption, strong engineering, and responsible operations.
Below, we break down the five most impactful and disruptive announcements from re:Invent 2025 — the ones most likely to shape how organizations build, deploy, and scale technology in the coming years.
One of the standout announcements of the entire conference — especially for engineering organizations — was the AWS DevOps Agent (preview). While many AI tools focus on writing code or automating UI workflows, the DevOps Agent tackles one of the hardest operational challenges: incident response.
This agent acts as a virtual on-call engineer embedded directly into your environment. It ingests telemetry from sources like Amazon CloudWatch, GitHub, AWS services, and ITSM tools, correlates signals, detects anomalies, and identifies likely root causes. Instead of waiting for alerts to cascade across dashboards and chat channels, the DevOps Agent proactively outlines what’s happening and proposes the next steps.
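To make that concrete, here is a minimal sketch of the kind of telemetry gathering and correlation the agent automates, written with plain boto3 CloudWatch calls. This is not the DevOps Agent’s own interface, which AWS has not published in this form; it simply illustrates the manual digging an on-call engineer would otherwise do by hand.

```python
# Illustration only: the kind of CloudWatch telemetry an incident-response
# agent ingests and correlates. These are standard boto3 calls, not the
# DevOps Agent's own API.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# 1. Find every metric alarm that is currently firing.
alarms = cloudwatch.describe_alarms(StateValue="ALARM")["MetricAlarms"]

for alarm in alarms:
    if "MetricName" not in alarm:
        continue  # skip metric-math alarms in this simple sketch
    print(f"Alarm: {alarm['AlarmName']} ({alarm['MetricName']})")

    # 2. Pull the last hour of the underlying metric to see the shape of
    #    the anomaly, not just the threshold breach.
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace=alarm["Namespace"],
        MetricName=alarm["MetricName"],
        Dimensions=alarm["Dimensions"],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(f"  {point['Timestamp']:%H:%M} avg={point['Average']:.2f}")
```

The agent’s value is doing this kind of correlation across CloudWatch, GitHub, and ITSM signals at once, and then proposing next steps rather than just printing data.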
Even more impactful, it can go beyond diagnosis and coordinate response actions.
Early adopters such as Western Governors University and Commonwealth Bank of Australia are already testing it to reduce mean time to recovery.
This marks a major leap forward. Incident management has long been a blend of art and science — balancing human intuition with overwhelming amounts of telemetry. The DevOps Agent doesn’t replace engineers; instead, it handles the exhausting, high-pressure analysis that slows teams down during outages. Humans take the final decisions; the agent accelerates everything leading up to them.
Most importantly, it reinforces a truth often overlooked in AI conversations:
AI won’t eliminate DevOps — it will amplify it.
While the DevOps Agent grabbed attention for its operational depth, AWS also introduced a trio of broader-purpose Frontier Agents, signaling a future where AI becomes a true member of the engineering team.
Among them is Kiro, which goes beyond coding assistants. It learns the organization’s development style, architecture patterns, frameworks, naming conventions, and best practices, and over time it becomes capable of carrying out routine development work on its own.
Early results show that Kiro can handle 20–30% of routine development work, freeing teams to focus on more complex tasks.
Security teams often face tool fatigue and backlog overload, so AWS introduced an agent specifically designed to act as an automated security consultant.
What sets it apart is its contextual awareness: instead of reporting generic findings, it tailors analysis to your architecture, codebase, and security posture.
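As a rough sketch of the triage work such an agent takes over, the snippet below pulls open, high-severity findings from AWS Security Hub and groups them by affected resource. It uses only the standard Security Hub API via boto3; the agent’s own interface is not shown here, so treat this as an illustration of the underlying work, not the product.

```python
# Illustrative only: fetching and grouping the raw material an automated
# security reviewer works from, using plain Security Hub API calls.
from collections import defaultdict

import boto3

securityhub = boto3.client("securityhub")

findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [
            {"Value": "CRITICAL", "Comparison": "EQUALS"},
            {"Value": "HIGH", "Comparison": "EQUALS"},
        ],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    },
    MaxResults=100,
)["Findings"]

# Group findings by the resource they touch, so review happens in the
# context of your architecture rather than as a flat list of alerts.
by_resource = defaultdict(list)
for finding in findings:
    for resource in finding["Resources"]:
        by_resource[resource["Id"]].append(finding["Title"])

for resource_id, titles in by_resource.items():
    print(resource_id)
    for title in titles:
        print(f"  - {title}")
```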
Together, these frontier agents represent a meaningful shift. The last decade was about AI assistants that helped automate tasks. The next decade, starting now, is about agents that execute, learn, adapt, and collaborate, taking on operational, security, and development responsibilities with real autonomy.
AWS is not positioning these agents as replacements for engineers but as coworkers that accelerate delivery and improve reliability. It’s an evolution of team structure, not a reduction of it.
Read more: Best AI Tools for Software Engineering
The Nova model family took center stage as AWS expanded its foundation model ecosystem significantly.
Nova 2 Sonic is designed for natural, multilingual voice interactions.
This unlocks more humanlike contact center experiences and voice-forward applications.
With an enormous context window and optimized performance, Nova 2 Lite supports long-form reasoning and large document processing at accessible cost levels.
Nova 2 Omni accepts text, images, video, and audio as input and generates both text and images as output, consolidating workflows that previously required multiple specialized models.
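For a feel of what calling these models looks like in practice, here is a minimal Amazon Bedrock Converse API sketch that sends an image alongside a text prompt, the kind of request Omni is built for. The model ID below is a placeholder rather than a confirmed Nova 2 identifier, and the same converse call also covers text-only, long-document workloads on Nova 2 Lite.

```python
# Minimal sketch of a multimodal request via the Bedrock Converse API.
# The model ID is a placeholder -- look up the real Nova 2 identifier in
# the Bedrock console for your region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("architecture-diagram.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="amazon.nova-2-omni-placeholder",  # hypothetical ID
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Summarize this architecture diagram and flag "
                         "any single points of failure."},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```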
Now generally available, Nova Act is a browser automation agent that carries out multi-step workflows directly in real web interfaces.
Hertz reported a 5× faster development cycle using Nova Act agents — a strong signal that UI automation is moving from brittle scripts to reliable agentic workflows.
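A sketch of what an agentic UI flow looks like with the Nova Act Python SDK is below. It assumes the SDK’s NovaAct entry point and act() method from the research preview; the exact interface may differ in the GA release, and the site and steps shown are hypothetical.

```python
# Sketch of a Nova Act browser automation flow, assuming the Python SDK's
# NovaAct entry point (pip install nova-act). The exact interface may have
# evolved since the preview -- treat this as illustrative, not canonical.
from nova_act import NovaAct

# Each act() call is a natural-language step the agent carries out in a
# real browser session, replacing a brittle hand-written UI script.
with NovaAct(starting_page="https://example.com") as agent:  # placeholder URL
    agent.act("open the reservations page")
    agent.act("search for an economy car pickup next Monday")
    agent.act("collect the three cheapest offers into a list")
```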
Perhaps the most disruptive announcement in the entire Nova ecosystem is Nova Forge, which allows organizations to build custom frontier-level models using Amazon’s mid-training checkpoints.
Instead of training a model from scratch, traditionally a multi-million-dollar effort, enterprises can start from Amazon’s mid-training checkpoints and shape the remaining training around their own domain and data.
Reddit used Nova Forge to consolidate multiple specialized models into a single custom solution, showcasing the efficiency gains possible.
Nova Forge lowers the barrier for domain-specific models the same way cloud lowered the barrier for infrastructure. It will likely be one of the most transformative AI tools in the AWS ecosystem over the next few years.
Read more: AWS FinOps Best Practices: How to Cut and Optimize Cloud Costs
AI agents only become useful at scale if the underlying infrastructure is efficient and cost-effective. AWS made that clear by unveiling major updates to its custom silicon portfolio.
These new Trainium3-based instances feature the first AWS-designed 3nm AI training chip, delivering a substantial jump in training speed and cost efficiency.
Organizations like Anthropic report cutting training times from months to weeks while reducing costs by 50% compared to GPU-based setups.
AWS also teased Trainium4, which will be interoperable with NVIDIA GPUs — a strategic move toward heterogeneous AI infrastructure and broader compatibility.
On the general-purpose side, AWS introduced the newest generation of its Graviton CPU.
With more than half of new EC2 capacity already Graviton-based, this generation cements Graviton as the default choice for cost-efficient computing.
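For teams planning a move to Graviton, a useful first step is simply seeing which Arm-based instance types are available in a region. The sketch below uses the standard EC2 DescribeInstanceTypes API via boto3; nothing in it is specific to the newest chip generation.

```python
# Illustrative boto3 query: list current-generation arm64 (Graviton)
# instance types available in your region, a first step when planning a
# move to Graviton-based compute.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
)

arm_types = sorted(
    itype["InstanceType"]
    for page in pages
    for itype in page["InstanceTypes"]
)
print(f"{len(arm_types)} Graviton instance types available, e.g.:")
print(", ".join(arm_types[:10]))
```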
AWS is clearly embracing a multi-architecture future — one where AI, high-performance computing, and everyday workloads all benefit from specialized silicon.
One of the most enterprise-focused announcements was AWS AI Factories, a fully managed AI infrastructure solution that organizations can deploy in their own data centers.
AI Factories combine AWS-managed AI infrastructure with its custom AI silicon, delivered inside the customer’s own facilities.
This is designed for industries with strict compliance and data sovereignty requirements, such as government, finance, and healthcare.
A notable early partnership is with Saudi Arabia’s HUMAIN initiative, which will deploy an AI Zone featuring up to 150,000 AI chips.
AI Factories represent AWS’s most serious step into hybrid AI, acknowledging that for many enterprises, bringing compute to the data — not data to the cloud — is the only viable path.
While the five topics above were the most transformative, a number of other announcements also deserve recognition.
These improvements further solidify AWS’s position as a platform that enables both rapid innovation and operational maturity.
The 2025 edition of re:Invent made something very clear: AI agents are no longer experimental — they are production-ready tools. But the announcements also reinforced three critical themes:
First, even the most powerful AI agents rely on strong cloud foundations, solid infrastructure, and disciplined operations.
The fundamentals matter more than ever.
Second, customization is becoming the norm. From Nova Forge to AgentCore and fine-tuning tools, AWS is making it easier for organizations to adapt AI to their specific requirements rather than settle for generic models.
Third, AI Factories and Trainium/NVIDIA interoperability point toward an AI landscape that spans cloud, edge, and private data centers.
Companies that build flexible, future-ready architectures will be the ones that benefit most.
AWS re:Invent 2025 wasn’t just another year of cloud updates. It marked a turning point — the moment AI agents became practical, autonomous, and deeply integrated into engineering workflows.
The challenge ahead for companies is not whether to adopt AI agents, but how to deploy them safely, effectively, and sustainably.
Even with all the advancements unveiled this year, adopting cutting-edge AWS capabilities requires thoughtful planning, strong cloud foundations, and teams that understand how to integrate AI responsibly.
At Cheesecake Labs, we help organizations put these capabilities to work.
The technology is here — now it’s about making the most of it. If your team is exploring AWS AI agents, cloud modernization, or next-generation infrastructure, we’re here to help you move forward with clarity and impact.
