
AWS News Feed on 🦋

@awsrecentnews.bsky.social

I'm a bot 🤖 I'm sharing recent announcements from http://aws.amazon.com/new For any issues please contact @ervinszilagyi.dev Source code: https://github.com/Ernyoke/bsky-aws-news-feed

137 Followers  |  6 Following  |  3,492 Posts  |  Joined: 02.11.2024

Posts by AWS News Feed on 🦋 (@awsrecentnews.bsky.social)

Preview
Amazon CloudWatch Logs announces increased query concurrency and API limits

Amazon CloudWatch Logs customers can now run up to 100 concurrent queries per account and execute 10 StartQuery and GetQueryResults API calls per second per account, per Region, using the Logs Insights Query Language (Logs Insights QL). These limit increases enable customers to support more users and execute more concurrent queries. With concurrency increasing from 30 to 100, more users can simultaneously run queries and leverage dashboards using Logs Insights QL. Customers using the StartQuery and GetQueryResults APIs for Logs Insights QL benefit from higher limits without being throttled, enabling them to execute more queries and view results faster.

The limit increases for Logs Insights queries are available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Canada (Calgary), South America (São Paulo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Zurich), Europe (Spain), Africa (Cape Town), Middle East (Tel Aviv), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Melbourne), Asia Pacific (Tokyo), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Bangkok), Asia Pacific (Malaysia), Asia Pacific (Auckland), Asia Pacific (Taipei), and Mexico (Querétaro). For more information, visit the Amazon CloudWatch Logs documentation.
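The StartQuery/GetQueryResults pair described above is a start-then-poll pattern. Here is a minimal sketch using a boto3-style CloudWatch Logs client; the client is injected so the loop can be exercised without AWS, and the log group name and query string are placeholders:

```python
import time

def run_insights_query(logs, log_group, query, start, end, poll_secs=1.0):
    """Start a Logs Insights query and poll until it reaches a terminal state.

    `logs` is a CloudWatch Logs client (e.g. boto3.client("logs")).
    """
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=start,       # epoch seconds
        endTime=end,
        queryString=query,
    )["queryId"]
    while True:
        resp = logs.get_query_results(queryId=query_id)
        # "Complete", "Failed", and "Cancelled" are terminal statuses.
        if resp["status"] in ("Complete", "Failed", "Cancelled"):
            return resp
        time.sleep(poll_secs)
```

With the raised quotas, up to 100 such queries can be in flight per account at once, and a poller making StartQuery and GetQueryResults calls stays under the new 10 requests/second limit more easily.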

🆕 Amazon CloudWatch Logs boosts query concurrency to 100 per account and API calls to 10/sec, enabling more users to run queries and view results faster across 32 regions. For details, see Amazon CloudWatch Logs documentation.

#AWS #AmazonCloudwatch #AmazonCloudwatchLogs

09.03.2026 21:40 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon Redshift introduces new array functions for semi-structured data processing

Amazon Redshift now supports nine new array functions for working with semi-structured data stored in the SUPER data type. The new functions include ARRAY_CONTAINS, ARRAY_DISTINCT, ARRAY_EXCEPT, ARRAY_INTERSECTION, ARRAY_POSITION, ARRAY_POSITIONS, ARRAY_SORT, ARRAY_UNION, and ARRAYS_OVERLAP, enabling you to search, compare, sort, and transform arrays directly within your SQL queries. Previously, performing these operations required writing complex custom PartiQL SQL logic.

These functions simplify complex data transformations and reduce query complexity by enabling sophisticated array operations in a single SQL statement. For example, you can use ARRAY_CONTAINS and ARRAY_POSITION for element lookup, ARRAY_INTERSECTION and ARRAY_EXCEPT for set operations, or ARRAY_SORT and ARRAY_DISTINCT to organize and deduplicate data. These functions are particularly valuable for applications involving nested data structures, event processing, and analytics workflows where data needs to be aggregated, filtered, or transformed at scale.

The new Amazon Redshift array functions are available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon Redshift is available. To learn more, please visit our documentation.
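As an illustration only, the set-style semantics of a few of these functions can be sketched in Python. Details such as NULL handling and result ordering are assumptions for this sketch, not Redshift's documented behavior:

```python
def array_contains(arr, elem):
    # SQL: ARRAY_CONTAINS(arr, elem) -> does the array hold the element?
    return elem in arr

def array_distinct(arr):
    # SQL: ARRAY_DISTINCT(arr) -> duplicates removed
    # (first-occurrence order is an assumption of this sketch)
    seen, out = set(), []
    for x in arr:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def array_except(a, b):
    # SQL: ARRAY_EXCEPT(a, b) -> elements of a that do not appear in b
    present = set(b)
    return [x for x in a if x not in present]

def array_intersection(a, b):
    # SQL: ARRAY_INTERSECTION(a, b) -> distinct elements common to both
    present = set(b)
    return array_distinct([x for x in a if x in present])

def arrays_overlap(a, b):
    # SQL: ARRAYS_OVERLAP(a, b) -> true if the arrays share any element
    return bool(set(a) & set(b))
```

The point of the SQL functions is that these operations now run inside a single Redshift statement over SUPER values, instead of being reimplemented in custom PartiQL logic as above.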

🆕 Amazon Redshift adds nine new array functions for SUPER data type, simplifying semi-structured data processing with operations like ARRAY_CONTAINS, ARRAY_SORT, and ARRAY_UNION, reducing complex custom SQL logic and enhancing data transformation in SQL queries.

#AWS #AmazonRedshift

06.03.2026 22:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon SageMaker Unified Studio adds light mode support for IAM-based domains

Today, AWS announces light mode support in Amazon SageMaker Unified Studio for IAM-based domains. Customers can now configure the visual interface mode to match their preference, choosing between dark and light themes. Light mode helps improve readability in bright environments and provides a familiar visual experience for customers who prefer lighter interfaces. Combined with the existing dark mode, this update gives you full control over your development environment's appearance, improving accessibility and reducing eye strain across varying lighting conditions.

In SageMaker Unified Studio settings, you can click on 'customize appearance' under your Profile settings to choose between visual modes including dark and light. The setting persists across browsers and devices. This feature is available in all regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the User Guide.

🆕 AWS adds light mode support in Amazon SageMaker Unified Studio for IAM-based domains, letting users choose between dark and light themes for improved readability and accessibility. Available in all regions, settings persist across browsers and devices.

#AWS #AmazonSagemaker

06.03.2026 22:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon Redshift introduces reusable templates for COPY operations

Amazon Redshift now supports templates for the COPY command, allowing you to store and reuse frequently used COPY parameters. This new feature enables you to create reusable templates that contain commonly utilized formatting parameters, eliminating the need to manually specify parameters for each COPY operation.

Templates help maintain consistency across data ingestion operations that use the COPY command. They also reduce the time and effort required to execute COPY commands. You can create standardized configurations for different file types and data sources, ensuring consistent parameter usage across your teams and reducing the likelihood of errors caused by manual input. When parameters need to be updated, changes to the template automatically apply to all future uses, simplifying maintenance and improving operational efficiency.

Support for templates for the COPY command is available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon Redshift is available. To get started with templates, see the documentation or check out the AWS Blog.

🆕 Amazon Redshift now offers reusable templates for COPY operations, allowing users to store and reuse common parameters, ensuring consistency, reducing manual input, and simplifying maintenance across all AWS Regions.

#AWS #AmazonRedshift

06.03.2026 22:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon EventBridge Scheduler now provides a higher default quota for the CreateSchedule API

Amazon EventBridge Scheduler now has a higher default service quota for the CreateSchedule API action. The default CreateSchedule request rate quota is now 5,000 requests per second in 11 AWS Regions. Quotas can be further increased to tens of thousands of requests per second by making a request through the Service Quotas console.

EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage billions of scheduled events and tasks, across more than 270 AWS services, without provisioning or managing the underlying infrastructure. EventBridge Scheduler supports one-time and recurring schedules that can be created using cron expressions, rate expressions, or specific times with support for time zones and daylight saving time.

With today's increase to the default CreateSchedule quota, customers with high-throughput schedule creation workloads can operate at increased scale without needing to request a quota increase, reducing friction when onboarding new workloads or scaling existing ones. Scheduler will scale to the new quota automatically. You can request increases beyond the new default service quota in the Service Quotas console.

View EventBridge Scheduler service quotas for each Region in the service endpoints and quotas documentation or learn more about the EventBridge Scheduler service in the EventBridge Scheduler documentation. The increased quota is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), South America (São Paulo), Asia Pacific (Mumbai), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) Regions.
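For a workload that creates schedules in bulk, client-side pacing keeps the request rate under the default (or raised) quota. A sketch with an injected boto3-style Scheduler client; the schedule specs themselves (name, expression, target) are placeholders:

```python
import time

def create_schedules(scheduler, specs, max_rps=5000):
    """Create schedules while staying under a client-side request-rate cap.

    `scheduler` is an EventBridge Scheduler client (boto3.client("scheduler")).
    `specs` is an iterable of keyword dicts for create_schedule.
    """
    min_interval = 1.0 / max_rps
    arns = []
    for spec in specs:
        t0 = time.monotonic()
        # CreateSchedule returns the new schedule's ARN.
        arns.append(scheduler.create_schedule(**spec)["ScheduleArn"])
        # Sleep off whatever remains of this request's rate window.
        elapsed = time.monotonic() - t0
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
    return arns
```

A single-threaded pacer like this is a simplification; real high-throughput callers fan out across workers and should still catch and retry throttling errors, since the quota applies per account.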

🆕 Amazon EventBridge Scheduler boosts default CreateSchedule API quota to 5,000 requests/sec in 11 regions. Scheduler scales automatically; higher quotas available via Service Quotas console. Supports billions of scheduled tasks across 270+ AWS services.

#AWS

06.03.2026 21:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon Redshift Serverless now maintains datashare permissions during restore

Amazon Redshift Serverless now preserves datashare permissions when you restore a snapshot to the same namespace, simplifying data sharing workflows and reducing administrative overhead. Previously, restoring a serverless namespace from a snapshot required administrators to manually re-grant datashare permissions to consumer clusters and recreate consumer databases, even when restoring to the same namespace.

With this enhancement, datashare permissions are automatically maintained when you restore a snapshot to the same producer namespace, provided the datashare permission existed both when the snapshot was taken and on the current namespace. For consumer namespaces, datashare access remains unchanged after restore, eliminating the need for producer administrators to re-grant permissions. This streamlines disaster recovery and testing workflows by reducing manual configuration steps and potential errors. Amazon Redshift also provides EventBridge notifications to alert you when datashares are dropped, consumer access is revoked, or public accessibility changes during restore operations.

This feature is available in all AWS Regions that support Amazon Redshift. To learn more, see the Amazon Redshift Management Guide.

🆕 Amazon Redshift Serverless preserves datashare permissions during restores, simplifying workflows and cutting admin tasks. It maintains permissions when restoring snapshots within the same namespace, eliminating manual re-granting, and is available in all AWS Regions suppor…

#AWS #AmazonRedshift

06.03.2026 19:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon EC2 R8g instances now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Middle East (UAE), AWS Mexico (Central), and AWS Europe (Zurich) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

🆕 Amazon EC2 R8g instances now available in UAE, Mexico, and Zurich regions. Powered by Graviton4, they offer up to 30% better performance than Graviton3-based instances, ideal for memory-intensive workloads. Available in 12 sizes, including bare metal, with enhanced networking.

#AWS #AmazonEc2

06.03.2026 19:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
OpenSearch OR2 and OM2 instances in AWS GovCloud (US-East, US-West) Regions

Amazon OpenSearch Service expands availability of OR2 and OM2, the OpenSearch Optimized instance families, to two additional regions. The OR2 instance delivers up to 26% higher indexing throughput compared to previous OR1 instances and 70% over R7g instances. The OM2 instance delivers up to 15% higher indexing throughput compared to OR1 instances and 66% over M7g instances in internal benchmarks.

The OpenSearch Optimized instances leverage best-in-class cloud technologies like Amazon S3 to provide high durability and improved price-performance for indexing-heavy workloads. Each OpenSearch Optimized instance is provisioned with compute, local instance storage for caching, and remote Amazon S3-based managed storage. OR2 and OM2 offer pay-as-you-go pricing and reserved instances, with a simple hourly rate for the instance, local instance storage, as well as the managed storage provisioned. OR2 instances come in sizes 'medium' through '16xlarge', and offer compute, memory, and storage flexibility. OM2 instances come in sizes 'large' through '16xlarge'. Please refer to the Amazon OpenSearch Service pricing page for pricing details.

The OR2 and OM2 instance families are now available on Amazon OpenSearch Service across 2 additional regions: AWS GovCloud (US-East, US-West).

🆕 Amazon OpenSearch Service now offers OR2 and OM2 instances in AWS GovCloud (US-East, US-West), providing up to 26% higher indexing throughput than OR1 and 70% over R7g instances, with pay-as-you-go and reserved pricing.

#AWS #AmazonOpensearchService

06.03.2026 19:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon EC2 I8ge instances now generally available in Europe (Ireland) AWS region

Amazon Web Services (AWS) announces the availability of Amazon EC2 I8ge instances in the Europe (Ireland) AWS region. Designed for large storage I/O intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.

I8ge instances offer up to 120TB local NVMe storage density—the highest available in the cloud for storage optimized instances—and deliver up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks for database workloads.

I8ge instances are high-density storage-optimized instances for workloads that demand rapid local storage with high random read/write performance and consistently low latency for accessing large data sets. These versatile instances are offered in eleven different sizes, including 2 metal sizes, providing flexibility to match customers' computational needs. They deliver up to 180 Gbps of network performance bandwidth, and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS), ensuring fast and efficient data transfer for the most demanding applications.

To get started, see AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs. To learn more, visit the I8ge instances page.

🆕 AWS now offers EC2 I8ge instances in Europe (Ireland), featuring 5th gen Intel Xeon processors, up to 120TB NVMe storage, and 65% better storage performance for large I/O workloads. Available in 11 sizes, these instances offer up to 180 Gbps network bandwidth.

#AWS #AmazonEc2

05.03.2026 23:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Database Savings Plans now supports Amazon OpenSearch Service and Amazon Neptune Analytics

Today, AWS announces expanded coverage for Database Savings Plans, with support for Amazon OpenSearch Service and Amazon Neptune Analytics. With Database Savings Plans, you can save up to 35% in exchange for a commitment to a consistent amount of usage (measured in $/hour) over a one-year term with no upfront payment. Database Savings Plans automatically apply to eligible serverless and provisioned instance usage regardless of supported engine, instance family, size, deployment option, or AWS Region. For example, with Database Savings Plans, you can change from m7i.large.search to c8g.2xlarge.search within OpenSearch Service, or scale Neptune Analytics workloads while continuing to benefit from the discounted pricing.

Database Savings Plans for Amazon OpenSearch Service and Amazon Neptune Analytics are available starting today in all AWS Regions, except China Regions. You can get started with Database Savings Plans from the AWS Billing and Cost Management Console or by using the AWS CLI. To realize the largest savings, you can make a commitment to Savings Plans by using purchase recommendations provided in the console. For a more customized analysis, you can use the Savings Plans Purchase Analyzer to estimate potential cost savings for custom purchase scenarios. For more information, visit the Database Savings Plans pricing page and the AWS Savings Plans FAQs.
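The "$/hour commitment" mechanics can be made concrete with a toy calculation. This is a sketch of the general Savings Plans billing model under an assumed flat 35% discount; actual discounted rates vary by engine, instance type, and Region:

```python
def hourly_bill(usage_od_value, commitment, discount=0.35):
    """One hour's bill under a Savings Plan (illustrative model only).

    usage_od_value: what the hour's usage would cost at on-demand rates.
    commitment:     committed spend in $/hour, paid regardless of usage.
    discount:       assumed flat discount off on-demand rates.
    """
    # The commitment buys usage at discounted rates, so it covers up to
    # commitment / (1 - discount) worth of on-demand-priced usage.
    covered_od = min(usage_od_value, commitment / (1.0 - discount))
    overflow = max(0.0, usage_od_value - covered_od)  # billed at on-demand
    return commitment + overflow
```

For example, $100/hour of on-demand-priced usage is fully covered by a $65/hour commitment at a 35% discount, so the hour bills $65; any usage beyond the covered amount is billed at on-demand rates on top of the commitment.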

🆕 AWS offers Database Savings Plans for OpenSearch and Neptune Analytics, saving up to 35% with a one-year commitment. Available globally except China, for serverless and provisioned instances. Start via AWS Billing console or CLI.

#AWS

05.03.2026 21:10 — 👍 1    🔁 1    💬 0    📌 0
Preview
Amazon EC2 M8g instances now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in Africa (Cape Town), Asia Pacific (Malaysia), Europe (Milan, Zurich), and Canada West (Calgary) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

🆕 Amazon EC2 M8g instances are now available in new regions, offering up to 30% better performance with AWS Graviton4 processors, larger sizes, and enhanced networking. Ideal for general-purpose workloads, they improve performance and security via the Nitro System.

#AWS #AmazonEc2

05.03.2026 21:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS Shield network security director findings are now available in AWS Security Hub

Today, AWS Shield announces that findings from network security director, currently in preview, are now available in AWS Security Hub. AWS Shield network security director identifies missing or misconfigured network security services like AWS WAF, VPC security groups, and VPC network access control lists (ACLs) in your AWS Organization and provides remediation recommendations. Network security director findings now also appear in the Inventory section of the Security Hub console.

With network security director, you can continuously analyze your network across accounts or organizational units in your AWS Organization, and receive findings highlighting missing or misconfigured network security services per AWS best practices. The severity of each finding is determined based on a combination of the misconfiguration identified and the network topology of the resource the finding is associated with. To learn more, visit the overview page.

🆕 AWS Shield's network security director findings are now in AWS Security Hub, highlighting misconfigured services and offering fixes. They appear in the Inventory section, aiding network security analysis across accounts.

#AWS #AwsShield #AwsSecurityHub

05.03.2026 20:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Multi-party approval now supports approval team baselining

Multi-party approval (MPA) now supports MPA administrators running test approvals to confirm that their approval team is set up correctly and that approvers are active and reachable. With this new capability, customers ensure their approval teams do not become unresponsive due to natural attrition, incorrect approver selection, or reduced engagement. MPA administrators and security teams can now proactively assess their approval configurations before relying on them for sensitive operations.

The baseline feature enables proactive team health management by allowing manual initiation of test approval sessions through the AWS Organizations console. Customers can verify approver availability, identify inactive team members, and maintain compliance with internal governance requirements. Key use cases include regular team responsiveness verification using the MPA console (recommended by AWS every 90 days), onboarding validation for new approval configurations, and operational health checks to ensure approval workflows function effectively when needed.

This feature is available in all AWS commercial regions. To learn more about implementing baseline testing for your multi-party approval workflows, visit the Multi-party approval documentation.

🆕 AWS Multi-party approval now verifies approver availability and responsiveness for compliance and readiness, available in all commercial regions. It supports proactive health checks and onboarding validations. For details, see the Multi-party approval documentation.

#AWS #AwsOrganizations

05.03.2026 20:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS Elastic Beanstalk now offers AI-powered environment analysis

AWS Elastic Beanstalk now offers AI-powered environment analysis to help you quickly identify root causes and get recommended solutions for environment health issues. When your environment experiences problems, Elastic Beanstalk collects recent events, instance health, and logs from your environment and sends them to Amazon Bedrock for analysis. This feature is designed for developers and operations teams who need to diagnose and resolve environment issues faster without manually reviewing logs and events.

You can request an AI analysis directly from the Elastic Beanstalk console using the AI Analysis button when your environment's health status is Warning, Degraded, or Severe. You can also use the AWS CLI with the RequestEnvironmentInfo and RetrieveEnvironmentInfo API operations. The analysis provides step-by-step troubleshooting recommendations tailored to your environment's current state, helping you reduce mean time to resolution.

AI-powered environment analysis is available in all AWS Regions where both AWS Elastic Beanstalk and Amazon Bedrock are available. For more information about the AI-powered environment analysis and for a full list of supported platform versions, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
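The API path mentioned above is a request-then-retrieve pair. A boto3-style sketch with an injected Elastic Beanstalk client; "tail" (recent log lines) is a long-standing InfoType value, while the exact info type the AI analysis feature uses is an assumption not confirmed here:

```python
import time

def fetch_environment_info(eb, env_name, info_type="tail", wait_secs=5):
    """Request environment info and retrieve it after it is compiled.

    `eb` is an Elastic Beanstalk client (boto3.client("elasticbeanstalk")).
    """
    # Ask the environment's instances to compile the requested info.
    eb.request_environment_info(EnvironmentName=env_name, InfoType=info_type)
    # The info is gathered asynchronously; a fixed sleep is a simplification —
    # production code should retry until the info appears.
    time.sleep(wait_secs)
    resp = eb.retrieve_environment_info(EnvironmentName=env_name, InfoType=info_type)
    return resp["EnvironmentInfo"]
```

RetrieveEnvironmentInfo returns presigned URLs in each entry's Message field, which is where the collected logs and events come from.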

🆕 AWS Elastic Beanstalk now uses AI to quickly diagnose issues and suggest fixes through Amazon Bedrock. Available wherever both services are offered, it analyzes events, instance health, and logs for faster troubleshooting.

#AWS #AwsElasticBeanstalk

05.03.2026 20:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Introducing Amazon Connect Health, Agentic AI Built for Healthcare

Amazon Connect Health is now generally available, bringing purpose-built agentic AI to healthcare organizations to streamline patient engagement and point-of-care workflows. Amazon Connect Health delivers five AI agents designed to reduce administrative burden across the care continuum — enabling patients faster access to care and freeing clinicians from paperwork and administrative burden to focus on what matters most: their patients. These agents are ready to deploy within existing patient, clinician, and healthcare workflows — such as patient access centers (i.e., contact centers), Electronic Health Records (EHR) applications, and telehealth solutions — in days, not months. All the features follow responsible AI best practices, implement safety guardrails, are HIPAA-eligible, and deliver the same security and reliability standards as any AWS service.

Agents available at launch:
- Patient verification (GA) – Confirms patient identity in real time against EHR records with appointment lookup, reducing inbound call-handling time.
- Appointment management (Preview) – Books appointments via natural language voice interaction, 24/7, with real-time insurance eligibility checks, enabling after-hours scheduling, and relieving burden on human staff.
- Patient insights (Preview) – Surfaces relevant patient history and clinical context before the visit, so clinicians walk in prepared. Reduces the time clinicians spend piecing together information before a patient's visit.
- Ambient documentation (GA) – Captures patient-clinician conversations during the visit and generates clinical notes in real time.
- Medical coding (Preview) – Automatically generates ICD-10 and CPT codes from clinical notes post-visit, with full audit trails.

Amazon Connect Health patient engagement capabilities are natively integrated with Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale. Clinical and administrative staff can configure and customize these AI capabilities in minutes using the Amazon Connect Health application, enabling rapid testing and seamless deployment into contact center workflows. The point-of-care capabilities — ambient documentation, patient insights, and medical coding — are available via the Amazon Connect Health unified SDK (SDK documentation), enabling developers to integrate the features directly into existing EHR and clinician-facing applications.

Amazon Connect Health is available in US East (N. Virginia) and US West (Oregon). To get started, visit the Amazon Connect Health product page. For technical details, see the Amazon Connect Health documentation.

🆕 Amazon Connect Health deploys AI agents to streamline healthcare workflows, cutting admin tasks and boosting patient access. Key features: real-time verification, 24/7 booking, ambient docs, and coding. HIPAA-eligible, integrates with Amazon Connect, ready in days. Available…

#AWS #AmazonConnect

05.03.2026 18:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon OpenSearch Service introduces capacity optimized blue/green deployments

Amazon OpenSearch Service now offers a Capacity Optimized option for blue/green deployments, ensuring domain updates can complete even when available instance capacity is less than required. Updates are performed in incremental batches, reducing the number of additional instances needed during the process.

Amazon OpenSearch Service uses a blue/green deployment process when updating domains — creating an idle copy of the original environment, applying updates, and routing traffic to the new environment once complete. This minimizes downtime and preserves the original environment as a fallback. Until now, blue/green deployments required 100% instance capacity upfront. For example, for a cluster with 100 data nodes, another 100 nodes were needed to proceed. If sufficient capacity was unavailable, customers had to wait and retry later.

Now, customers can choose between two deployment strategies. The default Full Swap option maintains current behavior, requiring full capacity upfront for the fastest deployment. The new Capacity Optimized option attempts a full capacity deployment first, but automatically falls back to batch deployment if capacity is insufficient. OpenSearch Service determines the appropriate batch size based on cluster size and available instances. Because updates are applied in batches, this option may take longer than a full-swap deployment. Customers can select their preferred option in the deployment configuration settings via the OpenSearch Service console or API. We recommend choosing the Capacity Optimized deployment option for clusters with 30 or more nodes.

The Capacity Optimized option is available for all OpenSearch and Elasticsearch versions, across all AWS Commercial Regions where OpenSearch Service is available. See here for a full listing of our Regions. To learn more, visit the documentation page.

🆕 Amazon OpenSearch Service now supports Capacity Optimized blue/green deployments, allowing domain updates with insufficient capacity by applying changes in batches. This reduces extra instances and offers two strategies: Full Swap and Capacity Optimized.

#AWS #AmazonOpensearchService

05.03.2026 18:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS HealthLake announces data transformation agent for automated CCDA-to-FHIR data conversion (Preview) Starting today, healthcare organizations can now transform legacy clinical documents into queryable FHIR resources in AWS HealthLake in days instead of months, unlocking use cases such as longitudinal patient record generation, population health analytics, and clinical data exchange. AWS HealthLake data transformation agent (preview) is an AI-powered capability that converts Consolidated Clinical Document Architecture (CCDA) files into Fast Health Interoperability Resources Release 4 (FHIR R4)-compliant resources without requiring specialized FHIR expertise, through an integrated experience that combines real-time conversion testing, AI-assisted template customization, and scalable bulk import. The data transformation agent includes ready-to-use templates for CCDA 2.1 to FHIR R4 data conversion. Developers can submit individual CCDA files through a synchronous conversion API or console workflow and receive transformed FHIR Bundles in seconds. They can preview results, interactively validate conversion quality, and sign off on templates before production use. An enhanced import workflow automatically detects uploaded CCDA files, applies the active template, matches and reconciles patients based on identifiers, and ingests the resulting FHIR resources into the target AWS HealthLake datastore with detailed logs. All capabilities are available both on the AWS console and programmatically via API for seamless integration into existing workflows. When default templates need adjustment, the data transformation agent offers an AI-powered experience to customize them directly in the console. Users can describe changes such as "skip medications with status entered-in-error" or "map procedure dates to performedDateTime instead of performedPeriod" in natural language, and the AI agent modifies the underlying template automatically. 
Manual curation is also available for power users who wish to make targeted template edits. Users can then immediately test against sample files, iterate conversationally, and publish once satisfied. AWS HealthLake is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Europe (London), Europe (Ireland), and Asia Pacific (Sydney) Regions. Visit the AWS Region Table to see all the regions. To learn more, see the AWS HealthLake product page.
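The converter's output format is the standard FHIR R4 Bundle. As a rough illustration of that target shape, here is a minimal collection Bundle built in Python; the Patient resource and identifier values are invented for the example and are not actual converter output.

```python
import json

def make_bundle(resources):
    """Wrap a list of FHIR resources in a minimal R4 collection Bundle."""
    return {
        "resourceType": "Bundle",
        "type": "collection",
        "entry": [{"resource": r} for r in resources],
    }

# Illustrative Patient resource, as might result from a CCDA patient role.
patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:oid:2.16.840.1.113883.19.5", "value": "12345"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
}

bundle = make_bundle([patient])
print(json.dumps(bundle, indent=2))
```

Real converter output would carry many more resource types (Condition, MedicationStatement, Procedure, and so on), but each lands as one `entry` in a Bundle like this.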

🆕 AWS HealthLake introduces a data transformation agent to quickly convert CCDA to FHIR, enabling faster patient record generation and analytics. AI-powered, it offers customizable templates, bulk import, and seamless API integration, available in multiple regions.

#AWS #AmazonHealthlake

05.03.2026 18:09 — 👍 1    🔁 0    💬 0    📌 0
Preview
Accelerate Lambda durable functions development with new Kiro power Today, AWS announces the Lambda durable functions Kiro power, bringing Lambda durable function development expertise to agentic AI development in Kiro. With this power, you can build resilient, long-running multi-step applications and AI workflows faster with AI agent-assisted development directly in your local development environment. When you work with durable functions, the AI agent dynamically loads relevant guidance and development expertise. This includes replay model best practices, step and wait operations, concurrent execution with map and parallel patterns, error handling with retry strategies and compensating transactions, testing patterns, and deployment with AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), and AWS Serverless Application Model (AWS SAM). With this guidance, you can go from idea to a working durable function quickly, whether you are building order processing pipelines, AI agent orchestration with human-in-the-loop approvals, or payment coordination workflows. The Lambda durable functions power is available today with one-click installation from the Kiro IDE and the Kiro powers page. Explore the power on GitHub. To get started with Lambda durable functions, see the developer guide.

🆕 AWS introduces Kiro power for Lambda durable functions, enabling faster development of resilient, long-running applications with AI agent-assisted guidance on best practices, error handling, and deployment tools. Available via one-click in Kiro IDE.

#AWS #AwsLambda

05.03.2026 17:09 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon SageMaker HyperPod now provides comprehensive observability for Restricted Instance Groups Amazon SageMaker HyperPod now offers comprehensive observability for Restricted Instance Groups (RIG), enabling teams training foundation models with Nova Forge to gain deep visibility into their compute resources and training workloads. This new capability eliminates the manual effort of collecting and correlating metrics across the infrastructure stack, providing a unified view of GPU performance, system health, network throughput, and Kubernetes cluster state through a pre-configured Amazon Managed Grafana dashboard backed by Amazon Managed Service for Prometheus. You can now monitor GPU utilization, NVLink bandwidth, CPU pressure, FSx for Lustre usage, and pod lifecycle from a single Grafana dashboard, with metrics collected across four exporters covering GPU performance, host-level system health, network fabric, and Kubernetes object state. In addition, curated logs are automatically made available in these dashboards, covering epoch progress, step-level training logs, pipeline errors, and Python tracebacks, so you can quickly diagnose training failures. HyperPod Observability for Restricted Instance Groups is automatically enabled when you create a new cluster using RIGs, or can be enabled for existing clusters in a few clicks in the HyperPod cluster management console. Amazon SageMaker HyperPod RIG observability is available in all AWS Regions where SageMaker HyperPod RIG is supported. To learn more, visit the documentation.

🆕 Amazon SageMaker HyperPod now offers observability for Restricted Instance Groups, providing unified GPU, system health, network, and Kubernetes metrics via a Grafana dashboard with Prometheus and logs, automatically enabled for new or existing clusters in supported regions.

#AWS

05.03.2026 00:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS simplifies IAM role creation and setup in service workflows AWS Identity and Access Management (IAM) now makes it easier to create and configure IAM roles directly within service workflows, allowing you to customize role permissions without switching between browser tabs. Now, when you are performing console tasks that involve role configuration, a new panel will appear to set the permissions required. IAM roles enable secure AWS cross-service connections using temporary credentials, eliminating the need for hardcoded access keys. This launch integrates role creation capabilities with custom permissions directly into service workflows, allowing you to configure roles and permissions without navigating to the IAM console. You can use default policies or the simplified statement builder to customize your permissions, streamlining your resource setup while maintaining the full functionality of IAM role management. This feature is available when working with Amazon EC2, AWS Lambda, Amazon EKS, Amazon ECS, AWS Glue, AWS CloudFormation, AWS Database Migration Service, AWS Systems Manager, AWS Secrets Manager, Amazon Relational Database Service, and AWS IoT Core in the US East (N. Virginia) Region. The feature will gradually become available across additional AWS services and regions. To learn more, refer to individual service User Guide or IAM documentation.
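The roles the new panel creates are ordinary IAM roles: a trust policy names the service that may assume the role via temporary credentials, and permissions policies are attached on top. A minimal sketch of such a trust policy for a hypothetical Lambda execution role; the role name is illustrative.

```python
import json

# Trust policy allowing the Lambda service to assume the role with
# temporary credentials -- the same kind of document the in-workflow
# role panel generates for you.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# The equivalent programmatic setup (requires AWS credentials):
# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="my-function-role",  # hypothetical name
#                 AssumeRolePolicyDocument=json.dumps(trust_policy))
print(json.dumps(trust_policy, indent=2))
```

Swapping the `Principal` service (e.g. `ecs-tasks.amazonaws.com`, `glue.amazonaws.com`) yields the trust relationship for the other services listed above.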

🆕 AWS now lets you create and configure IAM roles directly within service workflows, simplifying role setup and permissions customization without switching tabs. Available in US East (N. Virginia) for several services, this feature will roll out globally.

#AWS #AwsIam

04.03.2026 22:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon OpenSearch Ingestion now supports unified ingestion endpoint for OpenTelemetry data Amazon OpenSearch Ingestion now supports a unified ingestion endpoint that can accept all three OpenTelemetry observability signals (logs, metrics, and traces) through a single pipeline. Previously, customers who wanted to ingest all three OpenTelemetry data types had to create and manage three separate pipelines, one for each signal type. With this launch, a single pipeline can now receive any combination of OpenTelemetry signals, simplifying pipeline architecture and reducing operational overhead. Customers can now build centralized observability pipelines that consolidate logs, metrics, and traces in one place, making it easier to correlate signals and gain a holistic view of application health. Teams operating at scale can reduce the number of pipelines they manage, lowering infrastructure costs and simplifying access control, monitoring, and lifecycle management. This also makes it easier to adopt OpenTelemetry incrementally, as teams can begin with one signal type and add others over time without any pipeline reconfiguration. The unified ingestion endpoint for OpenTelemetry data is supported in all regions where Amazon OpenSearch Ingestion is currently available. Customers can get started by using the new unified OpenTelemetry source in their pipeline configuration via the AWS Management Console or the AWS CLI and pointing their OpenTelemetry clients to the new unified endpoint. To learn more and get started, visit the Amazon OpenSearch Ingestion documentation.
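OpenSearch Ingestion pipelines are defined as a YAML body (Data Prepper syntax). A sketch of what a unified pipeline might look like, with the caveat that the exact key for the new unified source is an assumption here (the previous per-signal sources were `otel_logs_source`, `otel_metrics_source`, and `otel_trace_source`); the domain endpoint is a placeholder.

```python
# Sketch of a pipeline body accepting all OTel signals through one source.
# The source key "otel" is an assumption; consult the OpenSearch Ingestion
# documentation for the exact name in the unified configuration.
pipeline_body = """
version: "2"
otel-unified-pipeline:
  source:
    otel:                           # hypothetical unified OTel source
      path_prefix: "/v1"
  sink:
    - opensearch:
        hosts: ["https://search-example.us-east-1.es.amazonaws.com"]
        index: "observability-%{yyyy.MM.dd}"
"""

# A pipeline would then be created through the OSIS API, for example:
# import boto3
# boto3.client("osis").create_pipeline(
#     PipelineName="otel-unified", MinUnits=1, MaxUnits=4,
#     PipelineConfigurationBody=pipeline_body)
print(pipeline_body)
```

OpenTelemetry clients (SDKs or a Collector with an OTLP exporter) are then pointed at the pipeline's single ingestion URL instead of three separate ones.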

🆕 Amazon OpenSearch Ingestion offers a unified endpoint for OpenTelemetry logs, metrics, and traces, simplifying pipeline management and reducing costs. A single pipeline handles all signals, easing observability and incremental adoption. Available globally, it’s con…

#AWS #AmazonOpensearchService

04.03.2026 21:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon GameLift Servers launches DDoS Protection We’re excited to announce Amazon GameLift Servers DDoS Protection, a new feature that helps game developers protect session-based multiplayer games that utilize Amazon GameLift Servers and improve overall game session resiliency. DDoS Protection is designed to defend against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, providing proactive, User Datagram Protocol (UDP)-based traffic protection, without the need for manual byte matching and with negligible added latency. Amazon GameLift Servers DDoS Protection co-locates a relay network directly alongside your game servers. The relay authenticates client traffic using access tokens so that only authorized traffic reaches the server. The feature also enforces per-player traffic limits to help prevent disruptions, even from seemingly legitimate sources. Game developers can use DDoS Protection to protect against targeted disruptions to specific players or entire game sessions. Check out the Amazon GameLift Servers release notes to get started through the console or API, with sample code provided for popular game engines including Unreal Engine and native C++. Amazon GameLift Servers DDoS Protection is available at no additional cost to Amazon GameLift Servers customers and is initially available in the following regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul).

🆕 Amazon GameLift Servers now offers DDoS Protection to safeguard multiplayer games from DoS/DDoS attacks, co-locating a relay network to authenticate traffic and enforce per-player limits, available at no extra cost in select regions.

#AWS #AmazonGamelift

04.03.2026 20:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink Amazon OpenSearch Ingestion now supports Amazon Managed Service for Prometheus as a sink, making it possible to build fully managed, end-to-end metrics ingestion pipelines without any custom forwarding infrastructure. With this launch, customers can now manage their entire metrics ingestion workflow using the same pipeline infrastructure they already use for logs and traces. Customers can now choose the right destination for each observability signal, sending logs and traces to Amazon OpenSearch Service for powerful full-text search, log analytics, and trace correlation, while routing metrics to Amazon Managed Service for Prometheus for time-series storage and analysis. This flexibility allows teams to build purpose-fit observability pipelines that leverage the strengths of each service without compromising on data fidelity or analytical capability. Amazon OpenSearch Ingestion's built-in data transformation and enrichment capabilities allow customers to prepare and refine metrics before they land in Amazon Managed Service for Prometheus, improving data quality and consistency. Once metrics are in Amazon Managed Service for Prometheus, customers can query them using Prometheus Query Language to analyze trends, configure alerting rules to get notified when metrics cross defined thresholds, and visualize their data using Amazon Managed Grafana for rich, customizable views of infrastructure and application health. The feature is supported in all regions where Amazon OpenSearch Ingestion is currently available. Customers can get started by using the new sink for Amazon Managed Service for Prometheus in their pipeline configuration via the AWS Management Console or the AWS CLI and start ingesting metrics into their Amazon Managed Service for Prometheus workspace. To learn more and get started, visit the Amazon OpenSearch Ingestion documentation.
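In pipeline terms, this means a metrics pipeline can now end in a Prometheus remote-write destination instead of an OpenSearch index. A sketch under stated assumptions: the sink name `prometheus` and its keys are guesses modeled on other AWS sinks, and the workspace URL is a placeholder; check the OpenSearch Ingestion documentation for the exact schema.

```python
# Sketch of a pipeline routing OTel metrics to an Amazon Managed Service
# for Prometheus workspace. The sink block is hypothetical; only
# otel_metrics_source is an established Data Prepper source name.
metrics_pipeline = """
version: "2"
metrics-pipeline:
  source:
    otel_metrics_source:
      path: "/v1/metrics"
  sink:
    - prometheus:               # hypothetical AMP sink name
        endpoint: "https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write"
        aws:
          region: "us-east-1"
"""
print(metrics_pipeline)
```

Once in the workspace, the metrics are queryable with PromQL and can drive alerting rules and Amazon Managed Grafana dashboards as described above.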

🆕 Amazon OpenSearch Ingestion supports Amazon Managed Service for Prometheus as a sink, creating fully managed metrics pipelines without custom infrastructure. Route metrics and logs for enhanced observability, transformation, and analysis. Available globally. More d…

#AWS #AmazonOpensearchService

04.03.2026 18:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon Lightsail now offers OpenClaw, a private self-hosted AI assistant Amazon Lightsail now lets you deploy OpenClaw, a private self-hosted AI assistant, on your own cloud infrastructure in a simple and secure manner. Every Lightsail OpenClaw instance ships with built-in security controls, pre-configured and ready to use. Sandboxing isolates each agent session for improved security posture. One-click HTTPS access puts the OpenClaw dashboard in your browser securely, without requiring manual TLS configuration. Device pairing authentication ensures only your authorized devices can connect to your assistant. Automatic snapshots back up your configuration continuously, so you never lose your setup. Amazon Bedrock serves as the default model provider for Lightsail OpenClaw, and you can swap models or connect to Slack, Telegram, WhatsApp, and Discord as per your requirements. Amazon Lightsail is available in 15 AWS Regions including US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), and Asia Pacific (Jakarta). To get started, visit the Lightsail console. For pricing and other details, visit the Amazon Lightsail pricing and quick start documentation pages.

🆕 Amazon Lightsail now offers OpenClaw, a private AI assistant, with built-in security, HTTPS access, device pairing, automatic snapshots, and integration with Slack, Telegram, WhatsApp, and Discord, available in 15 AWS regions.

#AWS #AmazonLightsail

04.03.2026 18:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon SageMaker Unified Studio adds metadata sync with third-party catalogs Amazon SageMaker Unified Studio now supports metadata and context sync across Atlan, Collibra, and Alation. These integrations synchronize catalog metadata between Amazon SageMaker Catalog and each partner platform, giving teams a consistent view of their data and AI assets regardless of which tool they use day to day. Organizations can maintain aligned glossary terms, asset descriptions, and ownership information across platforms without manual reconciliation. All three integrations synchronize key metadata elements including projects, assets, descriptions, glossary terms, and their hierarchies. With the Collibra integration, you can synchronize metadata in both directions between SageMaker Catalog and the partner platform, so updates you make in one are reflected in the other. You can also manage SageMaker Unified Studio data access requests from Collibra. With the Atlan and Alation integrations, you can ingest metadata from SageMaker Catalog into each platform, with additional enhancements coming soon. You set up the Atlan and Alation integrations by creating a connection to SageMaker Unified Studio from within each platform, while the Collibra integration is available as an open-source solution on GitHub. To learn more, visit the Amazon SageMaker Unified Studio documentation. For implementation details, see the Atlan blog post, Collibra blog post, and Alation blog post.

🆕 Amazon SageMaker Unified Studio syncs metadata with Atlan, Collibra, and Alation for consistent data and AI asset views. Key elements like projects and glossary terms sync, with Collibra offering bidirectional sync and data access requests. Integrate via Atlan, Alation, or…

#AWS #AmazonSagemaker

04.03.2026 00:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for Visual ETL, notebook, and code-based data processing jobs. With AWS Glue 5.1 in Amazon SageMaker Unified Studio, data engineers and data scientists can run jobs on Apache Spark 3.5.6 with Python 3.11 and Scala 2.12.18, and use updated open table format libraries including Apache Iceberg 1.10.0, Apache Hudi 1.0.2, and Delta Lake 3.3.2. You can use AWS Glue 5.1 in Amazon SageMaker Unified Studio when creating data processing jobs by selecting Glue 5.1 from the version dropdown in job settings. This applies to Visual ETL jobs, notebook jobs, and code-based jobs, so you can take advantage of the latest Spark runtime and open table format libraries across all your data processing workflows. AWS Glue 5.1 in Amazon SageMaker Unified Studio is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Stockholm), Europe (Frankfurt), Europe (Spain), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Malaysia), Asia Pacific (Thailand), Asia Pacific (Mumbai), and South America (Sao Paulo). To learn more, visit the Amazon SageMaker Unified Studio documentation. For details on what's included in AWS Glue 5.1, including updated open table format support and access control capabilities, see the AWS Glue documentation.
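Programmatically, choosing the new runtime comes down to the `GlueVersion` field when a job is created. A minimal sketch of a `create_job` request payload; the bucket, role ARN, and job name are placeholders, and the `--datalake-formats` argument shown is the standard way Glue enables open table format libraries such as Iceberg.

```python
# Request payload for a code-based job on the Glue 5.1 runtime
# (Spark 3.5.6 / Python 3.11). Names and ARNs are illustrative.
job_request = {
    "Name": "iceberg-etl-job",
    "Role": "arn:aws:iam::123456789012:role/GlueJobRole",
    "GlueVersion": "5.1",
    "Command": {
        "Name": "glueetl",                           # Spark ETL job type
        "ScriptLocation": "s3://my-bucket/scripts/etl.py",
        "PythonVersion": "3",
    },
    # Load the bundled Apache Iceberg libraries for this job.
    "DefaultArguments": {"--datalake-formats": "iceberg"},
}

# The actual call (requires AWS credentials):
# import boto3
# boto3.client("glue").create_job(**job_request)
print(job_request["GlueVersion"])
```

In the Unified Studio console the same choice is just the "Glue 5.1" entry in the version dropdown of the job settings.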

🆕 Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs, enabling Visual ETL, notebooks, and code-based jobs with Spark 3.5.6 and updated libraries like Apache Iceberg, Hudi, and Delta Lake. Available in multiple regions.

#AWS #AmazonSagemaker #AwsGlue

04.03.2026 00:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup - including its spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services. SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDE), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. Since Kiro is built on Code-OSS, authentication is secure via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration. This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the SageMaker user guide.

🆕 AWS now connects Kiro IDE to Amazon SageMaker Unified Studio, letting data scientists use Kiro's tools with SageMaker's compute, all in one place for smooth analytics and AI/ML workflows.

#AWS #AmazonMachineLearning #AmazonSagemaker

03.03.2026 22:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
Policy in Amazon Bedrock AgentCore is now generally available Policy in Amazon Bedrock AgentCore is now generally available, providing organizations with centralized, fine-grained controls for agent-tool interactions. Policy operates outside your agent code, enabling security, compliance, and operations teams to define tool access and input validation rules without modifying agent code. Teams can author policies using natural language that automatically converts to Cedar, the AWS open-source policy language. Policies are stored in a policy engine and attached to an AgentCore Gateway, which intercepts agent-tool traffic and evaluates each request against the policies before allowing or denying tool access. Policy helps ensure agents operate within defined parameters while maintaining organizational visibility and governance. Policy in AgentCore is available in thirteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm). Learn more about Policy in AgentCore through the documentation, and get started with the AgentCore Starter Toolkit.
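Cedar itself is an open, documented language, so the generated policies are ordinary Cedar statements. A sketch of what a rule like "support agents may only invoke the ticket-lookup tool for tickets matching TKT-*" could compile to; the entity types, action name, and context attribute are invented for illustration and are not AgentCore's actual schema.

```python
# Illustrative Cedar policy held as a string. Cedar's permit/when syntax
# and the `like` wildcard operator are real; the names are hypothetical.
cedar_policy = """
permit (
  principal in AgentGroup::"support-agents",
  action == Action::"invokeTool",
  resource == Tool::"ticket-lookup"
)
when { context.input.ticket_id like "TKT-*" };
"""
print(cedar_policy)
```

At request time the Gateway would evaluate each intercepted tool call against statements like this and deny anything no policy permits.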

🆕 Amazon Bedrock AgentCore policy is now available for fine-grained control over agent-tool interactions, letting security teams define rules without code changes, in thirteen AWS regions. Learn more in the docs.

#AWS #AmazonBedrock

03.03.2026 21:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS Elemental MediaLive Now Supports SRT Listener Mode AWS Elemental MediaLive now supports Secure Reliable Transport (SRT) Listener mode for both inputs and outputs. With SRT Listener mode, MediaLive waits for connections rather than initiating them. Upstream sources push live video directly to MediaLive, and downstream systems pull encoded streams on demand. This simplifies network setup by removing the need for complex firewall configurations or static, publicly accessible IP addresses on the source or destination side. SRT Listener mode complements MediaLive's existing SRT Caller mode, giving you full control over which side of the connection initiates the SRT handshake. SRT Listener mode enables flexible contribution and distribution workflows. On the input side, you can push streams from on-premises encoders or remote production sites, including MediaLive Anywhere deployments, directly to MediaLive in the cloud without coordinating firewall changes with your network team. On the output side, downstream distribution partners can connect to MediaLive and pull encoded streams when ready, without requiring MediaLive to initiate outbound connections. Both SRT Listener inputs and outputs support configurable latency settings and mandatory AES encryption to help ensure content security. SRT Listener mode is available in all AWS Regions where AWS Elemental MediaLive is offered. To get started, see Setting up an SRT Listener input and Creating SRT outputs in listener mode in the AWS Elemental MediaLive User Guide.
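Inputs of this kind are created with the MediaLive `CreateInput` API. The sketch below is an assumption modeled on the existing `SRT_CALLER` input shape: the `Type` value, the listener-specific keys, and the ARN are all hypothetical placeholders, so check the MediaLive API reference for the real request schema.

```python
# Hypothetical CreateInput payload for an SRT listener input, modeled on
# the documented SRT_CALLER shape. All enum values and keys below that
# mention "listener" are assumptions, not confirmed API fields.
srt_input = {
    "Name": "contribution-srt-listener",
    "Type": "SRT_LISTENER",            # hypothetical enum value
    "SrtSettings": {
        "SrtListenerSources": [{       # hypothetical key
            "MinimumLatency": 2000,    # receive buffer, in milliseconds
            "Decryption": {
                "Algorithm": "AES256", # the post notes AES encryption is mandatory
                "PassphraseSecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:srt-passphrase",
            },
        }],
    },
}

# The actual call (requires AWS credentials):
# import boto3
# boto3.client("medialive").create_input(**srt_input)
print(srt_input["Type"])
```

The operational difference from caller mode is simply which side dials: here MediaLive publishes an address and waits, so the upstream encoder initiates the SRT handshake.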

🆕 AWS Elemental MediaLive now supports SRT Listener mode for inputs and outputs, simplifying network setup by eliminating firewall configurations. It lets upstream sources push live video directly, enhancing flexible workflows. Available globally.

#AWS #AwsElementalMedialive

03.03.2026 19:10 — 👍 0    🔁 0    💬 0    📌 0
Preview
AWS IAM Identity Center now supports IPv6 dual-stack endpoints in AWS Asia Pacific (Taipei) and AWS GovCloud (US) Regions AWS IAM Identity Center now supports Internet Protocol version 6 (IPv6) via dual-stack endpoints in the AWS Asia Pacific (Taipei) and AWS GovCloud (US) Regions, completing global availability of this feature across all AWS Regions where IAM Identity Center is available. IAM Identity Center allows customers to enable workforce access to AWS managed applications and AWS accounts. When your client, such as a browser or an application, makes a request to a dual-stack endpoint, the endpoint resolves to an IPv4 or IPv6 address, depending on the protocol used by your network and client. To get started, locate the dual-stack access portal URL in the IAM Identity Center console under Settings, and share it with your workforce. For GovCloud deployments, refer to the AWS GovCloud (US) documentation for region-specific endpoint details. To learn more about IPv6 support in IAM Identity Center, see the IAM Identity Center User Guide.

🆕 AWS IAM Identity Center now supports IPv6 dual-stack endpoints in Asia Pacific (Taipei) and AWS GovCloud (US), completing global IPv6 availability. This allows workforce access to AWS managed apps via IPv4 or IPv6.

#AWS #AwsGovcloudUs #AwsIamIdentityCenter

03.03.2026 18:10 — 👍 1    🔁 0    💬 0    📌 0