Top Announcements of AWS re:Invent 2023

Amazon Q 

AWS announces Amazon Q, a new generative AI–powered assistant that is specifically designed for work and can be tailored to your business to have conversations, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems. Amazon Q has a broad base of general knowledge and domain-specific expertise.  

Next-Generation AWS Chips: Powering AI Models 

AWS introduced the Trainium2 and Graviton4 chips, marking a notable leap forward in AI model training and inference. Trainium2's enhanced performance and energy efficiency give developers a cost-effective option for accelerating model training without incurring substantial infrastructure costs.

Amazon Bedrock: New Capabilities 

Amazon Bedrock introduces groundbreaking innovations, expanding model options and delivering robust capabilities to streamline the development and scaling of custom generative artificial intelligence (AI) applications. As a fully managed service, Bedrock provides effortless access to a range of leading large language and foundation models from AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. These enhancements democratize generative AI by offering customers a broader selection of industry-leading models, simplified customization with proprietary data, automated task execution tools, and responsible application deployment safeguards. This evolution transforms how organizations, irrespective of size or industry, leverage generative AI to drive innovation and redefine customer experiences. 
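
To make this concrete, here is a minimal sketch of calling a Bedrock-hosted foundation model from Python with boto3. It assumes credentials are configured and the chosen model is enabled for the account; the model ID and request/response fields shown (Amazon Titan Text Express) are illustrative rather than prescriptive.

```python
# Minimal sketch: invoking a foundation model through Amazon Bedrock's runtime API.
# Assumes boto3 is installed, AWS credentials are configured, and the chosen model
# has been enabled for the account; the model ID and body fields are illustrative.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize the benefits of managed foundation model services.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```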

Amazon SageMaker Capabilities 

AWS announced five new capabilities for Amazon SageMaker, its fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning for any use case. 

SageMaker HyperPod: Boosting model training efficiency by up to 40%, this capability lets customers distribute training workloads across hundreds or thousands of accelerators. By automating workload division and parallel processing, it significantly shortens overall model training time. 

SageMaker Inference: Enabling customers to deploy multiple models on a single AWS instance (virtual server), this feature optimizes the utilization of underlying accelerators, resulting in reduced deployment costs and latency. 

SageMaker Clarify: Assisting customers in assessing, contrasting, and choosing optimal models tailored to their specific use cases, this feature promotes responsible AI usage by aligning with their selected parameters. 

SageMaker Canvas Enhancements: Two new launches (Prepare data using natural-language instructions and Leverage models for business analysis at scale) in Canvas make it easier and faster for customers to integrate generative AI into their workflows. 

Amazon Q in Connect 

Amazon Q in Connect enhances customer service agent responsiveness by providing suggested responses, recommended actions, and links to pertinent articles based on real-time customer interactions. Additionally, contact center administrators can now create more intelligent chatbots for self-service experiences by articulating their objectives in plain language. 

AWS Serverless Innovations 

Building on the legacy of AWS services that began with the launch of Amazon S3, AWS introduced three new serverless innovations for Amazon Aurora, Amazon Redshift, and Amazon ElastiCache. These additions are designed to help customers analyze and manage data at any scale while significantly streamlining their operations. 

Four New Capabilities for AWS Supply Chain 

Expanding on the launch of AWS Supply Chain at the previous re:Invent, this year's event introduces four additional capabilities to the service: supply planning, collaboration, sustainability, and the integration of a generative artificial intelligence (generative AI) assistant, Amazon Q. AWS Supply Chain is a cloud-based application that leverages Amazon's nearly 30 years of supply chain expertise to offer businesses across industries a comprehensive, real-time view of their supply chain data. The enhanced capabilities help customers forecast demand, optimize product replenishment to minimize inventory costs and respond to demand more quickly, and communicate seamlessly with suppliers. The platform also simplifies requesting, collecting, and auditing sustainability data, aligning with the growing focus on environmental responsibility. The new generative AI feature delivers a condensed overview of critical risks related to inventory levels and demand fluctuations, with visualizations of the tradeoffs within different scenarios. 

Amazon One Enterprise 

Amazon One Enterprise offers a convenient solution to streamline access to physical locations. This novel palm recognition identity service empowers organizations to grant authorized users, including employees, swift and contactless entry to various spaces such as offices, data centers, hotels, resorts, and educational institutions—simply through a quick palm scan. Beyond physical locations, the technology extends its utility to providing access to secure software assets like financial data or HR records. Learn more about how Amazon One Enterprise works, and how it’s designed to improve security, prevent breaches, and reduce costs, all while protecting people’s personal data. 

Amazon S3 Express One Zone 

Amazon Simple Storage Service (Amazon S3) stands out as one of the most widely utilized cloud object storage services, boasting an impressive repository of over 350 trillion data objects and handling more than 100 million data requests per second on average. In a groundbreaking move, Amazon has introduced Amazon S3 Express One Zone, a specialized storage class within Amazon S3 tailored to deliver data access speeds that are up to 10 times faster. Notably, this new offering also comes with the added benefit of request costs that are up to 50% lower compared to the standard Amazon S3. The motivation behind this innovation is to cater to customers with applications that demand exceptionally low latency, emphasizing the need for rapid data access to optimize efficiency. A prime example includes applications in the realms of machine learning and generative artificial intelligence (generative AI), where processing millions of images and lines of text within minutes is a common requirement. 

4 New Integrations for a Zero-ETL Future 

Enabling customers to seamlessly access and analyze data from various sources without the hassle of managing custom data pipelines, Amazon Web Services (AWS) introduces four new integrations for its existing offerings. Traditionally, connecting diverse data sources to uncover insights required the arduous process of “extract, transform, and load” (ETL), often involving manual and time-consuming efforts. These integrations mark a strategic move towards realizing a “zero ETL future,” streamlining the process for customers to effortlessly place their data where it’s needed. The focus is on facilitating the integration of data from across the entire system, empowering customers to unearth new insights, accelerate innovation, and make informed, data-driven decisions with greater ease. 

AWS Management Console for Applications 

AWS announces the general availability of myApplications, an intuitive addition to the AWS Management Console for streamlined application management. This feature provides a unified view of application metrics, encompassing cost, health, security, and performance. Users can effortlessly create and monitor applications, addressing operational issues promptly. The application dashboard offers one-click access to corresponding AWS services like AWS Cost Explorer, AWS Security Hub, and Amazon CloudWatch Application Signals. Introducing application operations, myApplications simplifies AWS resource organization through automatic application tagging, enhancing efficiency in application deployment and management. Accessible in all AWS Regions with Resource Explorer, myApplications facilitates swift, scalable operations via the AWS Management Console or various coding solutions. 

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

AWS re:Invent 2023 – Day 4 Recap

AWS Management Console for Applications 

AWS announces the general availability of myApplications, an intuitive addition to the AWS Management Console for streamlined application management. This feature provides a unified view of application metrics, encompassing cost, health, security, and performance. Users can effortlessly create and monitor applications, addressing operational issues promptly. The application dashboard offers one-click access to corresponding AWS services like AWS Cost Explorer, AWS Security Hub, and Amazon CloudWatch Application Signals. Introducing application operations, myApplications simplifies AWS resource organization through automatic application tagging, enhancing efficiency in application deployment and management. Accessible in all AWS Regions with Resource Explorer, myApplications facilitates swift, scalable operations via the AWS Management Console or various coding solutions. 

Amazon CloudWatch Application Signals 

Amazon CloudWatch Application Signals addresses the burden of manually instrumenting applications by instrumenting them automatically based on best practices, eliminating manual effort and custom code. It provides a pre-built dashboard with standardized metrics such as volume, availability, and latency, and allows you to define Service Level Objectives (SLOs) for critical operations. 

Application Signals further enhances performance monitoring by: 

  • Automating telemetry correlation across metrics, traces, logs, real user monitoring, and synthetic monitoring. This speeds up troubleshooting and reduces application disruption. 
  • Providing an integrated experience for analyzing application performance in the context of their business functions. This improves productivity and allows teams to focus on critical applications. 
  • Enabling collaboration between teams through the Service Map. This allows service owners to efficiently identify and communicate issues caused by other services. 

Overall, Application Signals offers a powerful and efficient way to monitor the performance of distributed systems, improve developer productivity, and ensure high availability and performance for customer applications. 

Amazon SageMaker Studio  

Amazon SageMaker Studio introduces an enhanced web-based interface for an optimized user experience. The updated platform boasts faster loading times and seamless access to your preferred integrated development environment (IDE) alongside SageMaker resources. Beyond JupyterLab and RStudio, SageMaker Studio now incorporates a fully managed Code Editor based on Code-OSS (Visual Studio Code Open Source). The addition of flexible workspaces allows users to scale compute and storage, customize runtime environments, and easily pause-and-resume coding. Multiple spaces can be created with diverse configurations. The platform also features a streamlined onboarding and administration process, ensuring quick setup for individual users and enterprise administrators. 

New Capabilities for Amazon Inspector 

Amazon Inspector introduces a groundbreaking feature utilizing generative AI to analyze Lambda function code, automatically generating code patches to address security vulnerabilities. This innovative capability not only communicates findings in plain language but also offers a “diff” view, illustrating suggested code updates. This functionality proves invaluable for conducting security checks, addressing issues such as hard-coded secrets and encryption gaps in Lambda functions, enhancing overall security measures. 

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

AWS re:Invent 2023 – Day 3 Recap

Amazon Neptune Analytics 

AWS introduced Neptune Analytics, which merges graph and vector database capabilities for enhanced insights in generative AI applications. The fully managed service enables users to analyze Neptune graph data or data lakes on Amazon S3, leveraging vector search for rapid discovery of key relationships. Swami Sivasubramanian, VP of Data and ML at AWS, highlighted the synergy between graph analytics and vectors in uncovering hidden data relationships. Neptune Analytics streamlines infrastructure management, allowing users to focus on queries and workflows, and automatically allocates resources based on graph size. The service is now available on a pay-as-you-go model in seven AWS Regions. 

Amazon SageMaker HyperPod 

SageMaker HyperPod, a purpose-built service for training and fine-tuning large language models (LLMs), is now generally available. Optimized for distributed training, HyperPod creates a distributed cluster of accelerated instances, enabling efficient model and data distribution for faster training. The service allows users to save checkpoints, so training can be paused, analyzed, and optimized without restarting. With built-in fail-safes for GPU failures, HyperPod ensures a resilient training experience. Supporting Amazon's Trainium and Trainium2 chips as well as NVIDIA GPU-based instances, including H100 GPUs, HyperPod can improve training speeds by up to 40%, streamlining ML model development. 
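
As a rough illustration, the sketch below creates a small HyperPod cluster with boto3's create_cluster call. The cluster name, instance type, lifecycle-script location, and IAM role are placeholders, and the exact parameter shapes should be checked against the current SageMaker API.

```python
# Hedged sketch: creating a SageMaker HyperPod cluster with boto3's create_cluster call.
# Instance types, lifecycle-script locations, and the IAM role ARN are placeholders.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-west-2")

response = sagemaker.create_cluster(
    ClusterName="llm-training-cluster",          # hypothetical name
    InstanceGroups=[
        {
            "InstanceGroupName": "trainium-workers",
            "InstanceType": "ml.trn1.32xlarge",   # illustrative accelerated instance type
            "InstanceCount": 4,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://my-bucket/hyperpod-lifecycle/",  # placeholder
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodExecutionRole",
        }
    ],
)
print(response["ClusterArn"])
```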

Amazon Titan Image Generator 

AWS previewed the Amazon Titan Image Generator for content creators, allowing rapid creation and refinement of images through natural language prompts in English. Catering to advertising, e-commerce, and media sectors, the tool produces studio-quality, realistic images at scale and low cost. It facilitates iterative image concept development by generating multiple options based on text descriptions and comprehending complex prompts with diverse objects. Trained on high-quality data, it produces accurate outputs with inclusive attributes and minimal distortions. The Titan Image Generator also includes image editing functionality, supporting automatic edits, inpainting, and outpainting. Users can customize the model with proprietary data to maintain brand consistency, and it includes safeguards against harmful content generation, embedding invisible watermarks to identify AI-generated images and discourage misinformation. 
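
The following hedged sketch shows how an application might request an image from the Titan Image Generator through the Bedrock runtime. The model ID and request-body fields reflect the Titan image request format as we understand it and should be treated as illustrative.

```python
# Hedged sketch: generating an image with the Amazon Titan Image Generator through
# the Bedrock runtime API. The model ID and request-body fields are illustrative.
import base64
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "studio-quality photo of a green iguana on a white background"},
    "imageGenerationConfig": {
        "numberOfImages": 1,
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,
    },
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # illustrative model ID
    contentType="application/json",
    accept="application/json",
    body=body,
)

payload = json.loads(response["body"].read())
with open("generated.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))  # images are returned base64-encoded
```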

Amazon OpenSearch Service zero-ETL integration with Amazon S3 

Amazon unveils the customer preview of Amazon OpenSearch Service zero-ETL integration with Amazon S3, streamlining operational log queries in S3 and data lakes. This eliminates the need for tool-switching while analyzing operational data, enhancing query performance, and enabling the creation of fast-loading dashboards. Users leveraging OpenSearch Service and storing log data in Amazon S3 can now seamlessly analyze and correlate data across sources without costly and complex data replication. The integration simplifies complex queries and visualizations, offering a more efficient approach to understanding data, identifying anomalies, and detecting potential threats without the necessity of data movement. 

Amazon Redshift now supports Multi-AZ (Preview) for RA3 clusters 

Amazon Redshift is introducing Multi-AZ deployments, which support running your data warehouse across multiple AWS Availability Zones (AZs) simultaneously and continuing to operate in unforeseen failure scenarios. A Multi-AZ deployment is intended for customers with business-critical analytics applications that require the highest levels of availability and resiliency to AZ failures. A Redshift Multi-AZ deployment allows you to recover from AZ failures without any user intervention. It is accessed as a single data warehouse with one endpoint and helps you maximize data warehouse performance by automatically distributing workload processing across multiple AZs. No application changes are required to maintain business continuity during unforeseen outages. 
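
For reference, a tentative sketch of requesting a Multi-AZ RA3 cluster with boto3 is shown below. Because the capability is in preview, the MultiAZ flag and other values are assumptions to verify against the current Redshift API, and all identifiers are placeholders.

```python
# Tentative sketch: creating an RA3 cluster with Multi-AZ enabled (preview).
# The MultiAZ flag and all identifiers/values are placeholders, not a definitive recipe.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="analytics-prod",           # placeholder
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_SECRET",     # placeholder; use Secrets Manager in practice
    DBName="analytics",
    MultiAZ=True,  # assumed flag for running the warehouse across multiple Availability Zones
)
```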

Amazon SageMaker Canvas 

SageMaker Canvas now supports foundation model-powered natural language instructions for data exploration, analysis, visualization, and transformation. This feature, powered by Amazon Bedrock, allows users to explore and transform their data to build accurate machine learning models. Data preparation is often the most time-consuming part of the ML workflow, but SageMaker Canvas offers 300+ built-in transforms, analyses, and in-depth data quality insights reports without writing any code (for example, to check the quality of a dataset and identify outliers or anomalies, you can ask SageMaker Canvas to generate a data quality report). Natural language instructions, queries, and responses make data exploration and preparation faster and simpler, allowing users to quickly understand and explore their data. 

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

AWS re:Invent 2023 – Day 2 Recap

Amazon Q 

AWS announces Amazon Q, a new generative AI–powered assistant that is specifically designed for work and can be tailored to your business to have conversations, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems. Amazon Q has a broad base of general knowledge and domain-specific expertise. 

AWS also announced the preview availability of Amazon Q's feature development capability in Amazon CodeCatalyst. With this new capability, developers can assign a CodeCatalyst issue to Amazon Q, and Q does the heavy lifting of converting a human prompt into an actionable plan, then completes the code changes and opens a pull request assigned to the requester. Q monitors any associated workflows and attempts to correct any issues. The user can preview the code changes and merge the pull request. Development teams can use this new capability as an end-to-end, streamlined experience within Amazon CodeCatalyst, without having to enter the IDE. 

Amazon Q, an advanced AI-driven assistant seamlessly integrated into Amazon Connect, is specifically designed for enhancing workplace efficiency and customizable to suit your business needs. Amazon Q within Connect provides instantaneous recommendations, empowering contact center agents to address customer concerns swiftly and precisely. This not only boosts agent productivity but also elevates overall customer satisfaction levels. 

With Amazon Q in AWS Chatbot, customers receive expert answers to questions related to AWS issues from chat channels where they collaborate with their peers to finalize the next steps.  

Amazon Transcribe 

AWS announced Amazon Transcribe's next-generation, multi-billion-parameter speech foundation model, which expands automatic speech recognition (ASR) to over 100 languages. Amazon Transcribe is a fully managed ASR service that makes it easy for customers to add speech-to-text capabilities to their applications. 
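
A minimal example of using automatic language identification with the Transcribe API is shown below; the job name, media URI, and output bucket are placeholders.

```python
# Minimal sketch: starting a transcription job that lets Amazon Transcribe identify
# the spoken language automatically. Bucket and file names are placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0042",                      # hypothetical job name
    Media={"MediaFileUri": "s3://my-bucket/calls/call-0042.wav"},  # placeholder URI
    IdentifyLanguage=True,                 # let the service detect the language
    OutputBucketName="my-transcripts-bucket",                      # placeholder bucket
)

job = transcribe.get_transcription_job(TranscriptionJobName="support-call-0042")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```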

Continued Pre-training with Amazon Titan 

Amazon Bedrock provides you with an easy way to build and scale generative AI applications with leading foundation models (FMs). Continued pre-training in Amazon Bedrock is a new capability that allows you to train Amazon Titan Text Express and Amazon Titan Text Lite FMs and customize them using your own unlabeled data, in a secure and managed environment. As models are continually pre-trained on data spanning different topics, genres, and contexts over time, they become more robust and learn to handle out-of-domain data better by accumulating wider knowledge and adaptability, creating even more value for your organization. 
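
As an illustration, the sketch below starts a continued pre-training job from unlabeled data in S3. Model identifiers, S3 paths, hyperparameter values, and the IAM role are placeholders, and the parameter names reflect the Bedrock control-plane API as we understand it.

```python
# Hedged sketch: kicking off a continued pre-training job in Amazon Bedrock with
# unlabeled data. Identifiers, S3 paths, hyperparameters, and the role ARN are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="titan-express-cpt-2023-12",              # hypothetical job name
    customModelName="titan-express-acme-domain",      # hypothetical model name
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="CONTINUED_PRE_TRAINING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/unlabeled-corpus/"},   # placeholder
    outputDataConfig={"s3Uri": "s3://my-bucket/custom-models/"},        # placeholder
    hyperParameters={"epochCount": "1", "batchSize": "1", "learningRate": "0.00001"},
)
```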

Fully Managed Agents for Amazon Bedrock

Fully managed Agents for Amazon Bedrock enables generative AI applications to execute multi-step tasks across company systems and data sources. Agents analyze the user request and break it down into a logical sequence using the FM’s reasoning capabilities to determine what information is needed, the APIs to call, and the sequence of execution to fulfill the request. After creating the plan, Agents call the right APIs and retrieve the information needed from company systems and data sources to provide accurate and relevant responses. 
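
To show the flow, here is a hedged sketch of invoking an agent and reading its streamed answer. The agent ID, alias ID, and session ID are placeholders, and the response-stream handling is based on our reading of the agent runtime API.

```python
# Hedged sketch: invoking an Agent for Amazon Bedrock and streaming its answer.
# Agent ID, alias ID, and session ID are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="ABCDEFGHIJ",            # placeholder agent ID
    agentAliasId="TSTALIASID",       # placeholder alias ID
    sessionId="customer-session-001",
    inputText="Check the order status for order 12345 and summarize it.",
)

# The agent streams its completion back as chunks of bytes.
answer = b"".join(
    event["chunk"]["bytes"] for event in response["completion"] if "chunk" in event
)
print(answer.decode("utf-8"))
```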

Amazon Bedrock: PartyRock 

PartyRock, an Amazon Bedrock playground, is a fun, intuitive, hands-on, and shareable generative AI app-building tool. Following the announcement, builders created tens of thousands of AI-powered apps in a matter of days using single-step tools from within the playground and shared them on social media. 

Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service 

AWS has introduced Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service, empowering users with advanced search functionalities, including full-text and vector search, on their DynamoDB data. Zero-ETL integration provides seamless data synchronization from Amazon DynamoDB to OpenSearch without the need for custom code to extract, transform and load data. Leveraging Amazon OpenSearch Ingestion, the integration automatically comprehends the DynamoDB data format and maps it to OpenSearch index mapping templates for optimal search results. Users can synchronize data from multiple DynamoDB tables into a single OpenSearch managed cluster or serverless collection, facilitating comprehensive insights across various applications. This zero-ETL integration is available for Amazon OpenSearch Service managed clusters and serverless collections and accessible in all 13 regions where Amazon OpenSearch Ingestion is available. 

Amazon S3 Express One Zone high performance storage class 

Amazon has unveiled the Amazon S3 Express One Zone storage class, offering up to 10x better performance compared to S3 Standard. This high-performance class handles hundreds of thousands of requests per second with consistent single-digit-millisecond latency, making it ideal for the most frequently accessed data and most demanding applications. Data is stored and replicated within a single AWS Availability Zone, enabling co-location of storage and compute resources to reduce latency. Small objects can be read up to 10x faster than with S3 Standard thanks to S3 Express One Zone's consistently low latency. Combined with request costs that are 50% lower than those of the S3 Standard storage class, this results in an overall reduction in processing expenses. A new bucket type, directory buckets, is introduced specifically for this storage class and supports hundreds of thousands of requests per second. Pricing is on a pay-as-you-go basis, and the storage class offers 99.95% availability with a 99.9% availability SLA. 
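
In practice, applications use the same S3 APIs against a directory bucket. The sketch below assumes a directory bucket already exists in a specific Availability Zone; the bucket name (which embeds a placeholder AZ ID) and object keys are illustrative.

```python
# Hedged sketch: reading and writing objects in an S3 Express One Zone directory
# bucket. The directory-bucket name embeds a placeholder Availability Zone ID and
# the bucket is assumed to already exist in that zone.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

bucket = "my-latency-sensitive-data--usw2-az1--x-s3"  # placeholder directory-bucket name

# Standard S3 API calls work against the directory bucket; the difference is the
# single-AZ, high-performance storage class behind it.
s3.put_object(Bucket=bucket, Key="features/batch-0001.parquet", Body=b"example bytes")

obj = s3.get_object(Bucket=bucket, Key="features/batch-0001.parquet")
print(len(obj["Body"].read()))
```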

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

AWS re:Invent 2023 – Day 1 Recap

Amazon Aurora Limitless Database (Preview) 

Amazon Aurora introduces the preview of Aurora Limitless Database, offering automated horizontal scaling to handle millions of write transactions per second and manage petabytes of data in a single database. This capability allows scaling beyond the limits of a single Aurora writer instance, with independent compute and storage capacity. The two-layer architecture includes shards for parallel processing and transaction routers that manage distribution and ensure consistency. Users can get started with the preview in specified AWS Regions by selecting the Limitless Database-compatible version and creating tables with options for maximum Aurora capacity units. Connectivity is established through the limitless endpoint, and two table types (sharded and reference) are available to optimize data distribution for enhanced performance. Aurora Limitless Database streamlines the scaling process, enabling the development of high-scale applications without the complexity of managing multiple instances. 

Amazon SQS FIFO queues – Throughput increase and DLQ redrive support 

Amazon SQS has introduced two new capabilities for FIFO (first-in, first-out) queues. Maximum throughput has been increased to 70,000 transactions per second (TPS) per API action (up from the previous limit of 18,000 TPS per API action announced in October) in selected AWS Regions, supporting sending or receiving up to 700,000 messages per second with batching. Similar to the DLQ redrive available for standard queues, dead-letter queue (DLQ) redrive support is now available for FIFO queues to handle messages that are not consumed after a specific number of retries. This makes it possible to analyze the messages, take the necessary actions for failures, and push the messages back to source queues or re-process them. These features can be leveraged to build more robust and scalable message processing systems. 
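
A minimal sketch of the new DLQ redrive for a FIFO queue is shown below; the queue ARN and rate are placeholders, and the DLQ is assumed to already be configured as the dead-letter target of a source FIFO queue.

```python
# Minimal sketch: redriving messages from a FIFO dead-letter queue back to its
# source queue with the SQS DLQ redrive API. The queue ARN is a placeholder.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

task = sqs.start_message_move_task(
    SourceArn="arn:aws:sqs:us-east-1:123456789012:orders-dlq.fifo",  # placeholder DLQ ARN
    # DestinationArn is optional; omitting it returns messages to their original source queues.
    MaxNumberOfMessagesPerSecond=50,
)
print(task["TaskHandle"])
```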

Amazon ElastiCache Serverless for Redis and Memcached 

Amazon Web Services (AWS) announced the general availability of Amazon ElastiCache Serverless for Redis and Memcached, a new serverless option that simplifies the deployment, operation, and scaling of caching solutions. ElastiCache Serverless requires no capacity planning or caching expertise and constantly monitors memory, CPU, and network utilization, providing a highly available cache with up to 99.99 percent availability. ElastiCache Serverless automatically scales to meet your application’s traffic patterns, and you only pay for the resources you use. 
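
For illustration, the sketch below creates a serverless Redis cache with boto3; the cache name is a placeholder and the response fields shown reflect the API as we understand it.

```python
# Hedged sketch: creating an ElastiCache Serverless cache for Redis. The cache name
# is a placeholder, and the response fields are assumptions to verify.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

resp = elasticache.create_serverless_cache(
    ServerlessCacheName="session-cache",   # hypothetical name
    Engine="redis",
    Description="Serverless session cache - no capacity planning required",
)

cache = resp["ServerlessCache"]
print(cache["Status"])  # the endpoint address and port become available once the cache is active
```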

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

What to Expect from AWS re:Invent 2023

The AWS re:Invent 2023, Amazon Web Services’ annual technology conference, is set to take place from November 27 to December 1 in Las Vegas, Nevada. As the premier event for AWS partners and customers, it stands out as the largest gathering, offering a diverse range of activities. Attendees can anticipate keynote announcements, training and certification options, entry to over 2,000 technical sessions, participation in the expo, engaging after-hours events, and numerous other enriching experiences during this in-person conference. 

AWS re:Invent 2023 is ideal for anyone who wants to transform their business with the cloud, hear about the latest AWS innovations, explore new technology, or level up their cloud-computing skills to advance their career. As an Amazon Web Services (AWS) Advanced Tier Services Partner and Managed Service Provider (MSP), Idexcel is excited and looking forward to this event. 

Keynote Sessions 

Keynote Session 1:  Peter DeSantis | Monday, November 27 | 7:30 PM – 9:00 PM (PST) 

Join Peter DeSantis, Senior Vice President of AWS Utility Computing, as he continues the Monday Night Live tradition of diving deep into the engineering that powers AWS services. Get a closer look at how our unique approach and culture of innovation help create leading-edge solutions across the entire spectrum, from silicon to services—without compromising on performance or cost. 

Keynote Session 2:  Adam Selipsky | Tuesday, November 28 | 8:00 AM – 10:30 AM (PST) 

Join Adam Selipsky, CEO of Amazon Web Services, as he shares his perspective on cloud transformation. He highlights innovations in data, infrastructure, artificial intelligence, and machine learning that are helping AWS customers achieve their goals faster, mine untapped potential, and create a better future. 

Keynote Session 3:  Swami Sivasubramanian | Wednesday, November 29 | 8:30 AM – 10:30 AM (PST) 

A powerful relationship between humans, data, and AI is unfolding right before us. Generative AI is augmenting our productivity and creativity in new ways, while also being fueled by massive amounts of enterprise data and human intelligence. Join Swami Sivasubramanian, Vice President of Data and AI at AWS, to discover how you can use your company data to build differentiated generative AI applications and accelerate productivity for employees across your organization. Also hear from customer speakers with real-world examples of how they’ve used their data to support their generative AI use cases and create new experiences for their customers. 

Keynote Session 4:  Ruba Borno | Wednesday, November 29 | 3:00 PM – 4:30 PM (PST) 

Join the AWS Partner Keynote, presented by Ruba Borno, Vice President of AWS Worldwide Channels and Alliances, as she delves into the world of strategic partnerships to show how AWS and AWS Partners are achieving impossible firsts, helping customers reimagine their business models and drive business outcomes. Hear from partners and customers about how they’re utilizing AWS to develop industry-changing solutions. Learn how we work together across industries and geographies to provide innovative solutions, robust customer opportunities, and tailored content and programs that drive collective prosperity. Discover how AWS is improving the digital experience for partners and connecting them with higher-value opportunities and longer-term success. 

Keynote Session 5:  Dr. Werner Vogels | Thursday, November 30 | 8:30 AM – 10:30 AM (PST) 

Join Dr. Werner Vogels, Amazon.com’s VP and CTO, for his twelfth re:Invent appearance. In his keynote, he covers best practices for designing resilient and cost-aware architectures. He also discusses why artificial intelligence is something every builder must consider when developing systems and the impact this will have in our world. 

Allolankandy Anand (Vice President, Digital Transformation & Strategy at Idexcel) and his team will be attending this event to meet with customers and partners. Schedule a meeting with Allolankandy Anand to discover how Idexcel can deliver strategic and innovative cloud solutions to achieve your organization's business goals. 

Top Announcements of AWS re:Invent 2022

Amazon Security Lake is a purpose-built service that automates the central management of security data sources into a purpose-built data lake stored in the customer's account. This service helps security teams analyze security data easily and gain a complete understanding of the organization's security posture. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard that helps normalize and combine security data from various sources, including on-premises infrastructure, firewalls, AWS CloudTrail, Amazon Route 53, Amazon VPC Flow Logs, and more. Amazon Security Lake also supports integrating data from third-party security solutions and custom sources that produce OCSF-formatted security data.

AWS Application Composer is a new AWS service that helps developers simplify and accelerate architecting, configuring, and building serverless applications. Users can visually compose serverless applications from AWS services with little guesswork. AWS Application Composer's browser-based visual canvas supports drag and drop of AWS services and establishes connectivity between them to form an application architecture comprising multiple AWS services. The service helps developers overcome the challenges of configuring various AWS services, from writing infrastructure as code (IaC) to deploying the application, and keeps the visual representation of the application architecture in sync with the IaC in real time.

Amazon Inspector Now Scans AWS Lambda Functions for Vulnerabilities: Amazon Inspector, a vulnerability management service that continually scans workloads across Amazon Elastic Compute Cloud (Amazon EC2) instances and container images in Amazon Elastic Container Registry (Amazon ECR), now supports scanning AWS Lambda functions and Lambda layers. Previously, customers who needed to assess Lambda functions for common vulnerabilities had to use a mix of AWS and third-party tools, which increased the complexity of keeping all their workloads secure. Because new vulnerabilities can appear at any time, it is critical for application security that workloads are continuously monitored and rescanned in near real time as new vulnerabilities are published.

AWS Clean Rooms: Helping companies bring in data from different environments, AWS Clean Rooms lets firms securely analyze and collaborate on data sets without sharing their raw, potentially sensitive data, helping them better understand their own customers and enabling joint data analysis.

Amazon Redshift Streaming Ingestion: With this new capability, Amazon Redshift can natively ingest hundreds of megabytes of data per second from Amazon Kinesis Data Streams and Amazon MSK into an Amazon Redshift materialized view and query it in seconds.

Amazon Redshift integration for Apache Spark: This integration makes it easy to build and run Spark applications on Amazon Redshift and Redshift Serverless, enabling customers to open up the data warehouse to a broader set of AWS analytics and machine learning (ML) solutions.

Amazon Athena for Apache Spark: With this feature, customers can run Apache Spark workloads and use Jupyter notebooks as the interface for data processing on Athena. This enables interactive data exploration and insights without the need to provision and maintain resources to run Apache Spark.

Create Point-to-Point Integrations Between Event Producers and Consumers with Amazon EventBridge Pipes: In modern event-driven applications where multiple cloud services are used as building blocks, communication between the services requires integration code, and maintaining that integration code is a challenge. Amazon EventBridge Pipes is a new feature of Amazon EventBridge that makes it easier to build event-driven applications by providing a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, removing the need to write undifferentiated glue code. Amazon EventBridge Pipes brings the most popular features of the Amazon EventBridge Event Bus, such as event filtering, integration with more than 14 AWS services, and automatic delivery retries.
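
As a sketch of what this removes, the example below wires an SQS queue to a Step Functions state machine through a pipe with an event filter, instead of hand-written glue code; all ARNs and the filter pattern are placeholders.

```python
# Hedged sketch: wiring an SQS queue to a Step Functions state machine with an
# EventBridge Pipe, including an event filter. ARNs and the filter pattern are placeholders.
import json
import boto3

pipes = boto3.client("pipes", region_name="us-east-1")

pipes.create_pipe(
    Name="orders-to-workflow",
    RoleArn="arn:aws:iam::123456789012:role/OrdersPipeRole",                    # placeholder
    Source="arn:aws:sqs:us-east-1:123456789012:orders-queue",                   # placeholder source
    Target="arn:aws:states:us-east-1:123456789012:stateMachine:ProcessOrder",   # placeholder target
    SourceParameters={
        "FilterCriteria": {
            "Filters": [{"Pattern": json.dumps({"body": {"orderType": ["priority"]}})}]
        },
        "SqsQueueParameters": {"BatchSize": 10},
    },
)
```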

Amazon DataZone is a new data management service that makes it faster and easier for customers to catalog, discover, share, and govern data stored across AWS, on-premises, and third-party sources. “To unlock the full power, the full value of data, we need to make it easy for the right people and applications to find, access, and share the right data when they need it — and to keep data safe and secure,” AWS CEO Adam Selipsky said during his keynote. DataZone enables you to set data free throughout the organization safely by making it easy for admins and data stewards to manage and govern access to data. DataZone provides a data catalog accessible through a web portal where users within an organization can find data that can be used for analytics, business intelligence, and machine learning.

AWS Supply Chain is a new cloud-based application that helps supply chain leaders mitigate risks and lower costs to increase supply chain resilience. AWS Supply Chain unifies supply chain data, provides ML-powered actionable insights, and offers built-in contextual collaboration, all of which help you increase customer service levels by reducing stockouts and help you lower costs from overstock.

Support for Real-Time and Batch Inference in Amazon SageMaker Data Wrangler: Deploy data preparation flows from SageMaker Data Wrangler for real-time and batch inference. This feature allows you to reuse the data transformation flow you created in SageMaker Data Wrangler as a step in Amazon SageMaker inference pipelines.

SageMaker Data Wrangler support for real-time and batch inference speeds up your production deployment because there is no need to repeat the implementation of the data transformation flow.

You can now integrate SageMaker Data Wrangler with SageMaker inference. The same data transformation flows created with the easy-to-use, point-and-click interface of SageMaker Data Wrangler, containing operations such as Principal Component Analysis and one-hot encoding, will be used to process your data during inference. This means that you don’t have to rebuild the data pipeline for a real-time and batch inference application, and you can get to production faster.

Classifying and Extracting Mortgage Loan Data with Amazon Textract: Until now, classification and extraction of data from mortgage loan application packages have been human-intensive tasks, although some lenders have used a hybrid approach with technology such as Amazon Textract. However, customers told AWS that they needed even greater workflow automation to speed up automation efforts and reduce human error so that their staff could focus on higher-value tasks.

The new API also provides additional value-add services. It’s able to perform signature detection in terms of which documents have signatures and which don’t. It also provides a summary output of the documents in a mortgage application package and identifies select important documents such as bank statements and 1003 forms that would normally be present. The new workflow is powered by a collection of machine learning (ML) models. When a mortgage application package is uploaded, the workflow classifies the documents in the package before routing them to the right ML model, based on their classification, for data extraction.
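
A tentative sketch of driving this workflow from Python is shown below; the bucket, document key, and the exact shape of the summary response are assumptions to verify against the current Textract lending APIs.

```python
# Hedged sketch: running the Textract lending analysis workflow on a mortgage
# application package stored in S3. Bucket and key are placeholders, and the
# response handling reflects our reading of the API.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

job = textract.start_lending_analysis(
    DocumentLocation={"S3Object": {"Bucket": "my-mortgage-docs", "Name": "application-0042.pdf"}}
)

# Poll (or use an SNS notification channel) until the asynchronous job completes,
# then fetch the document-group summary produced by the classification step.
summary = textract.get_lending_analysis_summary(JobId=job["JobId"])
print(summary["Summary"])
```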

Process PDFs, Word Documents, and Images with Amazon Comprehend for IDP: With Amazon Comprehend for IDP, customers can process semi-structured documents, such as PDFs, docx files, and PNG, JPG, or TIFF images, as well as plain-text documents, with a single API call. This new feature combines OCR and Amazon Comprehend's existing natural language processing (NLP) capabilities to classify and extract entities from documents. The custom document classification API allows you to organize documents into categories or classes, and the custom named entity recognition API allows you to extract entities such as product codes or business-specific entities. For example, an insurance company can now process scanned customer claims with fewer API calls, using the Amazon Comprehend entity recognition API to extract the customer number from the claims and the custom classifier API to sort each claim into the different insurance categories: home, car, or personal.
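
To illustrate the single-call experience, here is a hedged sketch of classifying a scanned PDF with a custom classifier endpoint; the endpoint ARN and file are placeholders, and the DocumentReaderConfig fields follow the IDP API as we understand it.

```python
# Hedged sketch: classifying a scanned PDF claim with a custom Comprehend
# classifier endpoint in a single API call. Endpoint ARN and file path are placeholders.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

with open("claim-scan.pdf", "rb") as f:  # placeholder local file
    document_bytes = f.read()

result = comprehend.classify_document(
    EndpointArn="arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/claims",  # placeholder
    Bytes=document_bytes,
    DocumentReaderConfig={
        "DocumentReadAction": "TEXTRACT_DETECT_DOCUMENT_TEXT",
        "DocumentReadMode": "SERVICE_DEFAULT",
    },
)
print(result["Classes"])  # e.g. home, car, or personal insurance categories
```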

Next-Generation SageMaker Notebooks – Now with Built-in Data Preparation, Real-Time Collaboration, and Notebook Automation: SageMaker Studio notebooks automatically generate key visualizations on top of Pandas data frames to help you understand data distribution and identify data quality issues, such as missing values, invalid data, and outliers. You can also select the target column for ML models and generate ML-specific insights such as imbalanced classes or highly correlated columns. You then receive recommendations for data transformations to resolve the issues. You can apply the data transformations right in the UI, and SageMaker Studio notebooks automatically generate the corresponding transformation code in the notebook cells so that you can replay your data preparation pipeline.

SageMaker Studio now offers shared spaces that give data science and ML teams a workspace where they can read, edit, and run notebooks together in real time to streamline collaboration and communication during the development process. Shared spaces provide a shared Amazon EFS directory that you can utilize to share files within a shared space. All taggable SageMaker resources that you create in a shared space are automatically tagged to help you organize and have a filtered view of your ML resources, such as training jobs, experiments, and models, that are relevant to the business problem you work on in the space. This also helps you monitor costs and plan budgets using tools such as AWS Budgets and AWS Cost Explorer.

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel.

AWS re:Invent 2022 – Day 4 Recap

AWS Application Composer is a new AWS service that helps developers simplify and accelerate architecting, configuring, and building serverless applications. Users can visually compose serverless applications from AWS services with little guesswork. AWS Application Composer's browser-based visual canvas supports drag and drop of AWS services and establishes connectivity between them to form an application architecture comprising multiple AWS services. The service helps developers overcome the challenges of configuring various AWS services, from writing infrastructure as code (IaC) to deploying the application, and keeps the visual representation of the application architecture in sync with the IaC in real time.

Create Point-to-Point Integrations Between Event Producers and Consumers with Amazon EventBridge Pipes: In modern event-driven applications where multiple cloud services are used as building blocks, communication between the services requires integration code, and maintaining that integration code is a challenge. Amazon EventBridge Pipes is a new feature of Amazon EventBridge that makes it easier to build event-driven applications by providing a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, removing the need to write undifferentiated glue code. Amazon EventBridge Pipes brings the most popular features of the Amazon EventBridge Event Bus, such as event filtering, integration with more than 14 AWS services, and automatic delivery retries.

Process PDFs, Word Documents, and Images with Amazon Comprehend for IDP
With Amazon Comprehend for IDP, customers can process their semi-structured documents, such as PDFs, docx, PNG, JPG, or TIFF images, as well as plain-text documents, with a single API call. This new feature combines OCR and Amazon Comprehend’s existing natural language processing (NLP) capabilities to classify and extract entities from documents. The custom document classification API allows you to organize documents into categories or classes, and the custom-named entity recognition API allows you to extract entities from documents like product codes or business-specific entities.

Amazon CodeCatalyst: A unified software development and delivery service, Amazon CodeCatalyst enables software development teams to plan, develop, collaborate on, build, and deliver applications on AWS quickly and easily, reducing friction throughout the development lifecycle.

Features in Amazon CodeCatalyst that address common development challenges include:

  1. Blueprints that set up the project’s resources—not just scaffolding for new projects, but also the resources needed to support software delivery and deployment.
  2. On-demand cloud-based Dev Environments, to make it easy to replicate consistent development environments for you or your teams.
  3. Issue management, enabling tracing of changes across commits, pull requests, and deployments.
  4. Automated build and release (CI/CD) pipelines using flexible, managed build infrastructure.
  5. Dashboards to surface a feed of project activities such as commits, pull requests and test reporting.
  6. The ability to invite others to collaborate on a project with just an email.
  7. Unified search, making it easy to find what you’re looking for across users, issues, code, and other project resources.

Step Functions Distributed Map – A Serverless Solution for Large-Scale Parallel Data Processing
The new distributed map state allows you to write Step Functions workflows that coordinate large-scale parallel workloads within your serverless applications. You can now iterate over millions of objects such as logs, images, or .csv files stored in Amazon Simple Storage Service (Amazon S3). The new distributed map state can launch up to ten thousand parallel workflows to process data.

Step Functions distributed map supports a maximum concurrency of up to 10,000 executions in parallel, which is well above the concurrency supported by many other AWS services. You can use the maximum concurrency feature of the distributed map to ensure that you do not exceed the concurrency of a downstream service.
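
To make the shape of a distributed map concrete, the sketch below defines a state machine that fans out over objects in an S3 bucket; bucket names, ARNs, and the per-item Lambda task are placeholders, and the ASL fields follow the distributed map syntax as we understand it.

```python
# Hedged sketch: a Step Functions definition using the distributed map state to
# fan out over objects in an S3 bucket. Bucket names, ARNs, and the per-item
# Lambda function are placeholders.
import json
import boto3

definition = {
    "StartAt": "ProcessAllLogs",
    "States": {
        "ProcessAllLogs": {
            "Type": "Map",
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "my-log-bucket", "Prefix": "2023/"},  # placeholders
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "EXPRESS"},
                "StartAt": "ProcessOneObject",
                "States": {
                    "ProcessOneObject": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-log",  # placeholder
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,  # stay below the concurrency limits of downstream services
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions", region_name="us-east-1")
sfn.create_state_machine(
    name="distributed-log-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsMapRole",  # placeholder
)
```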

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel.