AWS re:Invent 2023 – Day 2 Recap

Amazon Q 

AWS announced Amazon Q, a new generative AI–powered assistant designed specifically for work that can be tailored to your business. Amazon Q can have conversations, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems. It combines a broad base of general knowledge with domain-specific expertise.

AWS also announced the preview availability of Amazon Q’s feature development capability in Amazon CodeCatalyst. With this new capability, developers can assign a CodeCatalyst issue to Amazon Q, which does the heavy lifting of converting a human prompt into an actionable plan, then completes the code changes and opens a pull request assigned to the requester. Q then monitors any associated workflows and attempts to correct any issues. The user can preview the code changes and merge the pull request. Development teams can use this capability as an end-to-end, streamlined experience within Amazon CodeCatalyst, without having to enter the IDE.

Amazon Q, an advanced AI-driven assistant seamlessly integrated into Amazon Connect, is specifically designed to enhance workplace efficiency and can be customized to suit your business needs. Amazon Q within Connect provides real-time recommendations, empowering contact center agents to address customer concerns swiftly and precisely. This not only boosts agent productivity but also elevates overall customer satisfaction levels.

With Amazon Q in AWS Chatbot, customers receive expert answers to questions related to AWS issues from chat channels where they collaborate with their peers to finalize the next steps.  

Amazon Transcribe 

Amazon Transcribe now features a next-generation, multi-billion-parameter speech foundation model that expands automatic speech recognition (ASR) to over 100 languages. Amazon Transcribe is a fully managed ASR service that makes it easy for customers to add speech-to-text capabilities to their applications.
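As a quick sketch of how this could look in practice, the boto3 call below starts a transcription job and lets Transcribe identify the spoken language automatically. The job name, bucket, and file names are placeholders, not values from the announcement:

```python
# Sketch: starting a job that lets Transcribe detect the spoken language
# automatically (job name, bucket, and output location are placeholders).
params = {
    "TranscriptionJobName": "demo-multilang-job",
    "Media": {"MediaFileUri": "s3://my-bucket/call.wav"},
    "IdentifyLanguage": True,  # pick from the 100+ supported languages
    "OutputBucketName": "my-transcripts-bucket",
}
# To run against a real account (requires AWS credentials):
#   import boto3
#   boto3.client("transcribe").start_transcription_job(**params)
```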

Continued Pre-training with Amazon Titan 

Amazon Bedrock provides you with an easy way to build and scale generative AI applications with leading foundation models (FMs). Continued pre-training in Amazon Bedrock is a new capability that allows you to train Amazon Titan Text Express and Amazon Titan Text Lite FMs and customize them using your own unlabeled data, in a secure and managed environment. As models are continually pre-trained on data spanning different topics, genres, and contexts over time, they become more robust and learn to handle out-of-domain data better by accumulating wider knowledge and adaptability, creating even more value for your organization. 
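For illustration, a continued pre-training job might be submitted through the Bedrock control-plane API roughly as follows. The job name, custom model name, role ARN, and S3 URIs are placeholder assumptions:

```python
# Sketch: submitting a continued pre-training job for Titan Text Express.
# All names, ARNs, and S3 URIs below are illustrative placeholders.
params = {
    "jobName": "titan-cpt-demo",
    "customModelName": "titan-express-domain-adapted",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    "baseModelIdentifier": "amazon.titan-text-express-v1",
    "customizationType": "CONTINUED_PRE_TRAINING",  # trains on unlabeled data
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/domain-corpus/"},
    "outputDataConfig": {"s3Uri": "s3://my-bucket/custom-model-output/"},
}
# To run for real (requires credentials):
#   import boto3
#   boto3.client("bedrock").create_model_customization_job(**params)
```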

Fully Managed Agents for Amazon Bedrock

Fully managed Agents for Amazon Bedrock enables generative AI applications to execute multi-step tasks across company systems and data sources. Agents analyze the user request and break it down into a logical sequence using the FM’s reasoning capabilities to determine what information is needed, the APIs to call, and the sequence of execution to fulfill the request. After creating the plan, Agents call the right APIs and retrieve the information needed from company systems and data sources to provide accurate and relevant responses. 
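As a rough sketch of what this looks like from an application, an agent can be invoked at runtime along these lines. The agent and alias IDs are placeholders, and the response arrives as a stream of completion chunks:

```python
# Sketch: invoking a Bedrock agent at runtime. The agent and alias IDs are
# placeholders; the response is a stream of completion chunks and traces.
params = {
    "agentId": "AGENT123456",
    "agentAliasId": "ALIAS123456",
    "sessionId": "session-001",  # keeps multi-turn context together
    "inputText": "Summarize open support tickets for account 42",
}
# To run for real (requires credentials and a configured agent):
#   import boto3
#   runtime = boto3.client("bedrock-agent-runtime")
#   for event in runtime.invoke_agent(**params)["completion"]:
#       ...  # handle streamed response chunks
```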

Amazon Bedrock: PartyRock

PartyRock, an Amazon Bedrock playground, is a fun and intuitive, hands-on, shareable generative AI app-building tool. Within days of the announcement, builders created tens of thousands of AI-powered apps and shared them on social media, using single-step tools from within the playground.

Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service 

AWS has introduced Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service, empowering users with advanced search functionalities, including full-text and vector search, on their DynamoDB data. Zero-ETL integration provides seamless data synchronization from Amazon DynamoDB to OpenSearch without the need for custom code to extract, transform, and load data. Leveraging Amazon OpenSearch Ingestion, the integration automatically comprehends the DynamoDB data format and maps it to OpenSearch index mapping templates for optimal search results. Users can synchronize data from multiple DynamoDB tables into a single OpenSearch managed cluster or serverless collection, facilitating comprehensive insights across various applications. This zero-ETL integration is available for Amazon OpenSearch Service managed clusters and serverless collections, and is accessible in all 13 AWS Regions where Amazon OpenSearch Ingestion is available.
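Under the hood, the integration rides on an OpenSearch Ingestion pipeline. A minimal sketch of creating one programmatically might look like the following; the YAML configuration body is illustrative only, and the exact source and sink option names should be checked against current documentation:

```python
# Sketch: creating an OpenSearch Ingestion pipeline that syncs a DynamoDB
# table into an OpenSearch index. The YAML body is illustrative only; field
# names and options should be verified against the current documentation.
pipeline_body = """
version: "2"
dynamodb-to-opensearch:
  source:
    dynamodb:
      tables:
        - table_arn: "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
          stream:
            start_position: "LATEST"
  sink:
    - opensearch:
        hosts: ["https://search-demo.us-east-1.es.amazonaws.com"]
        index: "orders"
"""
params = {
    "PipelineName": "ddb-zero-etl-demo",  # placeholder name
    "MinUnits": 1,
    "MaxUnits": 4,
    "PipelineConfigurationBody": pipeline_body,
}
# To run for real (requires credentials):
#   import boto3
#   boto3.client("osis").create_pipeline(**params)
```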

Amazon S3 Express One Zone high performance storage class 

Amazon has unveiled the Amazon S3 Express One Zone storage class, offering up to 10x better performance compared to S3 Standard. This high-performance class handles hundreds of thousands of requests per second with consistent single-digit millisecond latency, making it ideal for the most frequently accessed data and most demanding applications. Data is stored and replicated within a single AWS Availability Zone, enabling co-location of storage and compute resources to further reduce latency. Thanks to this consistently very low latency, small objects can be read up to 10x faster than from S3 Standard. Combined with request costs that are 50% lower than those of the S3 Standard storage class, this results in a comprehensive reduction in processing expenses. A new bucket type, ‘directory buckets’, is introduced specifically for this storage class to support hundreds of thousands of requests per second. Pricing is on a pay-as-you-go basis, and the storage class offers 99.95% availability with a 99.9% availability SLA.
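From an application's point of view, directory buckets are used through the same object APIs as any other S3 bucket. A minimal sketch, assuming an existing directory bucket (the bucket name format with its Availability Zone ID suffix is illustrative):

```python
# Sketch: directory buckets are addressed through the familiar object APIs.
# The bucket name below, with an Availability Zone ID suffix, is illustrative.
bucket = "my-data--use1-az5--x-s3"  # directory bucket names carry an AZ suffix
key = "hot/object.bin"
# To run for real (requires credentials and an existing directory bucket):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(Bucket=bucket, Key=key, Body=b"low-latency payload")
#   body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```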

Schedule a meeting with our AWS cloud solution experts and accelerate your cloud journey with Idexcel. 

AWS re:Invent 2023 – Day 1 Recap

Amazon Aurora Limitless Database (Preview) 

Amazon Aurora introduces the preview of Aurora Limitless Database, offering automated horizontal scaling to handle millions of write transactions per second and manage petabytes of data in a single database. This groundbreaking capability allows scaling beyond the limits of a single Aurora writer instance with independent compute and storage capacity. The two-layer architecture includes shards for parallel processing and transaction routers for managing distribution and ensuring consistency. Users can get started with the preview in the supported AWS Regions by selecting the Limitless Database-compatible version and creating tables with options for maximum Aurora capacity units. Connectivity is established through the limitless endpoint, and two table types (sharded and reference) optimize data distribution for enhanced performance. Aurora Limitless Database streamlines the scaling process, enabling the development of high-scale applications without the complexity of managing multiple instances.

Amazon SQS FIFO queues – Throughput increase and DLQ redrive support 

Amazon SQS has introduced two new capabilities for FIFO (first-in, first-out) queues. Maximum throughput has been increased to 70,000 transactions per second (TPS) per API action (up from the 18,000 TPS per API action announced in October) in selected AWS Regions, supporting sending or receiving up to 700,000 messages per second with batching. Similar to the DLQ redrive available for standard queues, DLQ redrive support for FIFO queues has been introduced to handle messages that are not consumed after a specific number of retries. This makes it possible to analyze failed messages, take corrective action, and push the messages back to source queues or re-process them. These features can be leveraged to build more robust and scalable message-processing systems.
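As a sketch of the redrive workflow, a DLQ redrive for a FIFO queue can be started programmatically along these lines (both queue ARNs are placeholders):

```python
# Sketch: redriving messages from a FIFO dead-letter queue back to its
# source queue (queue ARNs are placeholders).
params = {
    "SourceArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq.fifo",
    "DestinationArn": "arn:aws:sqs:us-east-1:123456789012:orders.fifo",
    "MaxNumberOfMessagesPerSecond": 50,  # optional redrive velocity control
}
# To run for real (requires credentials):
#   import boto3
#   boto3.client("sqs").start_message_move_task(**params)
```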

Amazon ElastiCache Serverless for Redis and Memcached 

Amazon Web Services (AWS) announced the general availability of Amazon ElastiCache Serverless for Redis and Memcached, a new serverless option that simplifies the deployment, operation, and scaling of caching solutions. ElastiCache Serverless requires no capacity planning or caching expertise and constantly monitors memory, CPU, and network utilization, providing a highly available cache with up to 99.99 percent availability. ElastiCache Serverless automatically scales to meet your application’s traffic patterns, and you only pay for the resources you use. 
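Because there is no capacity planning, creating a serverless cache comes down to a single API call. A minimal sketch, where the cache name and the optional usage limits are illustrative:

```python
# Sketch: creating a serverless Redis cache, optionally capping usage so
# automatic scaling stays within a budget (name and limits are illustrative).
params = {
    "ServerlessCacheName": "demo-cache",
    "Engine": "redis",  # "memcached" is also supported
    "CacheUsageLimits": {  # optional guardrails on scale-out
        "DataStorage": {"Maximum": 10, "Unit": "GB"},
        "ECPUPerSecond": {"Maximum": 5000},
    },
}
# To run for real (requires credentials):
#   import boto3
#   boto3.client("elasticache").create_serverless_cache(**params)
```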


What to Expect from AWS re:Invent 2023

AWS re:Invent 2023, Amazon Web Services’ annual technology conference, is set to take place from November 27 to December 1 in Las Vegas, Nevada. As the premier event for AWS partners and customers, it stands out as the largest such gathering, offering a diverse range of activities. Attendees can anticipate keynote announcements, training and certification options, entry to over 2,000 technical sessions, participation in the expo, engaging after-hours events, and numerous other enriching experiences during this in-person conference.

The event is ideal for people who want to transform their business with the cloud, hear about the latest innovations from AWS, and explore new technology, as well as anyone who wishes to level up their cloud-computing skills to advance their career. As an Amazon Web Services (AWS) Advanced Tier Services Partner and Managed Service Provider (MSP), Idexcel is very excited and looking forward to this event.

Keynote Sessions

Keynote Session 1:  Peter DeSantis | Monday, November 27 | 7:30 PM – 9:00 PM (PST) 

Join Peter DeSantis, Senior Vice President of AWS Utility Computing, as he continues the Monday Night Live tradition of diving deep into the engineering that powers AWS services. Get a closer look at how our unique approach and culture of innovation help create leading-edge solutions across the entire spectrum, from silicon to services—without compromising on performance or cost. 

Keynote Session 2:  Adam Selipsky | Tuesday, November 28 | 8:00 AM – 10:30 AM (PST) 

Join Adam Selipsky, CEO of Amazon Web Services, as he shares his perspective on cloud transformation. He highlights innovations in data, infrastructure, artificial intelligence, and machine learning that are helping AWS customers achieve their goals faster, mine untapped potential, and create a better future. 

Keynote Session 3:  Swami Sivasubramanian | Wednesday, November 29 | 8:30 AM – 10:30 AM (PST) 

A powerful relationship between humans, data, and AI is unfolding right before us. Generative AI is augmenting our productivity and creativity in new ways, while also being fueled by massive amounts of enterprise data and human intelligence. Join Swami Sivasubramanian, Vice President of Data and AI at AWS, to discover how you can use your company data to build differentiated generative AI applications and accelerate productivity for employees across your organization. Also hear from customer speakers with real-world examples of how they’ve used their data to support their generative AI use cases and create new experiences for their customers. 

Keynote Session 4:  Ruba Borno | Wednesday, November 29 | 3:00 PM – 4:30 PM (PST) 

Join the AWS Partner Keynote, presented by Ruba Borno, Vice President of AWS Worldwide Channels and Alliances, as she delves into the world of strategic partnerships to show how AWS and AWS Partners are achieving impossible firsts, helping customers reimagine their business models and drive business outcomes. Hear from partners and customers about how they’re utilizing AWS to develop industry-changing solutions. Learn how we work together across industries and geographies to provide innovative solutions, robust customer opportunities, and tailored content and programs that drive collective prosperity. Discover how AWS is improving the digital experience for partners and connecting them with higher-value opportunities and longer-term success. 

Keynote Session 5:  Dr. Werner Vogels | Thursday, November 30 | 8:30 AM – 10:30 AM (PST) 

Join Dr. Werner Vogels, Amazon.com’s VP and CTO, for his twelfth re:Invent appearance. In his keynote, he covers best practices for designing resilient and cost-aware architectures. He also discusses why artificial intelligence is something every builder must consider when developing systems and the impact this will have in our world. 

Allolankandy Anand – (Vice President | Digital Transformation & Strategy at Idexcel) and his team will be attending this event to meet with customers and partners. Schedule a meeting with Allolankandy Anand to discover how Idexcel can deliver strategic and innovative cloud solutions to achieve your organization’s business goals. 

Top Announcements of AWS re:Invent 2022

Amazon Security Lake is a purpose-built service that automates the central management of security data sources into a dedicated data lake stored in your account. This service helps security teams analyze security data easily and gain a complete understanding of the organization’s security posture. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard that helps normalize and combine security data from various sources, including on-premises infrastructure, firewalls, AWS CloudTrail, Amazon Route 53, and Amazon VPC Flow Logs. Amazon Security Lake also supports integrating data from third-party security solutions and custom sources that emit OCSF-formatted security data.

AWS Application Composer is a new AWS service that helps developers simplify and accelerate architecting, configuring, and building serverless applications. Users can visually compose serverless applications using AWS services with little guesswork. AWS Application Composer’s browser-based visual canvas supports the drag and drop of AWS services, establishing connectivity between them to form an application architecture comprising multiple AWS services. This service aids Developers in overcoming the challenges of configuring various AWS services and from writing IaC to deploying the application. AWS Application Composer maintains the visual representation of the application architecture in sync with the IaC, in real-time.

Amazon Inspector Now Scans AWS Lambda Functions for Vulnerabilities: Amazon Inspector, a vulnerability management service that continually scans workloads across Amazon Elastic Compute Cloud (Amazon EC2) instances and container images in Amazon Elastic Container Registry (Amazon ECR), now supports scanning AWS Lambda functions and Lambda layers. Previously, customers who wanted to assess their Lambda functions for common vulnerabilities had to combine AWS and third-party tools, which increased the complexity of keeping all their workloads secure. Because new vulnerabilities can appear at any time, it is very important for the security of your applications that workloads are continuously monitored and rescanned in near real-time as new vulnerabilities are published.
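As a sketch, Lambda scanning can be activated for an account through the Inspector v2 API (the account ID below is a placeholder):

```python
# Sketch: turning on Lambda function scanning for an account via the
# Inspector v2 API (the account ID is a placeholder).
params = {
    "accountIds": ["123456789012"],
    "resourceTypes": ["LAMBDA"],  # adds Lambda to EC2/ECR scanning coverage
}
# To run for real (requires credentials and delegated-admin permissions):
#   import boto3
#   boto3.client("inspector2").enable(**params)
```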

AWS Clean Rooms: Helping companies bring together data from different environments, AWS Clean Rooms lets firms securely analyze and collaborate on data sets without sharing or exposing the underlying raw data, helping them better understand their own customers and enabling joint data analysis.

Amazon Redshift Streaming Ingestion: With this new capability, Amazon Redshift can natively ingest hundreds of megabytes of data per second from Amazon Kinesis Data Streams and Amazon MSK into an Amazon Redshift materialized view and query it in seconds.
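The wiring boils down to two SQL statements: an external schema pointing at the stream, and an auto-refreshing materialized view over it. A sketch, where the schema, view, stream names, and IAM role ARN are placeholders (depending on your payload encoding, you may need from_varbyte() to decode kinesis_data before JSON_PARSE):

```python
# Sketch: the two SQL statements that wire a Kinesis stream into Redshift.
# Names and the IAM role ARN are placeholders.
create_schema = """
CREATE EXTERNAL SCHEMA kds
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftStreamingRole';
"""
create_mv = """
CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(kinesis_data) AS payload
FROM kds."clickstream";
"""
# Each statement could be submitted with the Redshift Data API, e.g.:
#   import boto3
#   boto3.client("redshift-data").execute_statement(
#       WorkgroupName="demo", Database="dev", Sql=create_schema)
```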

Amazon Redshift integration for Apache Spark makes it easy to build and run Spark applications on Amazon Redshift and Redshift Serverless, enabling customers to open up the data warehouse for a broader set of AWS analytics and machine learning (ML) solutions.

Amazon Athena for Apache Spark: With this feature, you can run Apache Spark workloads and use Jupyter notebooks as the interface to perform data processing on Athena. This benefits customers who perform interactive data exploration to gain insights, without the need to provision and maintain resources to run Apache Spark.
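For a sense of the programmatic flow, a PySpark calculation can be submitted to a Spark-enabled Athena workgroup roughly like this (the workgroup name and S3 path are placeholders):

```python
# Sketch: running a PySpark calculation in an Athena Spark-enabled
# workgroup (workgroup name and S3 path are placeholders).
calculation = """
df = spark.read.csv("s3://my-bucket/data.csv", header=True)
df.groupBy("region").count().show()
"""
# To run for real (requires credentials and a Spark-enabled workgroup):
#   import boto3
#   athena = boto3.client("athena")
#   session = athena.start_session(
#       WorkGroup="spark-workgroup",
#       EngineConfiguration={"MaxConcurrentDpus": 20},
#   )
#   athena.start_calculation_execution(
#       SessionId=session["SessionId"], CodeBlock=calculation)
```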

Create Point-to-Point Integrations Between Event Producers and Consumers with Amazon EventBridge Pipes: In modern event-driven applications where multiple cloud services are used as building blocks, communication between the services requires integration code, and maintaining that integration code is a challenge. Amazon EventBridge Pipes is a new feature of Amazon EventBridge that makes it easier to build event-driven applications by providing a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, removing the need to write undifferentiated glue code. Amazon EventBridge Pipes brings the most popular features of the Amazon EventBridge Event Bus, such as event filtering, integration with more than 14 AWS services, and automatic delivery retries.
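To make the point-to-point idea concrete, here is a sketch of a pipe connecting an SQS queue to a Step Functions state machine, with event filtering in between. All names and ARNs are placeholders:

```python
# Sketch: a pipe from an SQS queue to a Step Functions state machine,
# filtering events on the way (all names and ARNs are placeholders).
params = {
    "Name": "orders-pipe",
    "RoleArn": "arn:aws:iam::123456789012:role/PipesRole",
    "Source": "arn:aws:sqs:us-east-1:123456789012:orders",
    "Target": "arn:aws:states:us-east-1:123456789012:stateMachine:ProcessOrder",
    "SourceParameters": {
        "FilterCriteria": {  # forward only matching events, no glue code
            "Filters": [{"Pattern": '{"body": {"type": ["order.created"]}}'}]
        }
    },
}
# To run for real (requires credentials):
#   import boto3
#   boto3.client("pipes").create_pipe(**params)
```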

Amazon DataZone is a new data management service that makes it faster and easier for customers to catalog, discover, share, and govern data stored across AWS, on-premises, and third-party sources. “To unlock the full power, the full value of data, we need to make it easy for the right people and applications to find, access, and share the right data when they need it — and to keep data safe and secure,” AWS CEO Adam Selipsky said in his keynote. DataZone enables you to set data free throughout the organization safely by making it easy for admins and data stewards to manage and govern access to data. DataZone provides a data catalog accessible through a web portal where users within an organization can find data that can be used for analytics, business intelligence, and machine learning.

AWS Supply Chain is a new cloud-based application that helps supply chain leaders mitigate risks and lower costs to increase supply chain resilience. AWS Supply Chain unifies supply chain data, provides ML-powered actionable insights, and offers built-in contextual collaboration, all of which help you increase customer service levels by reducing stockouts and help you lower costs from overstock.

Support for Real-Time and Batch Inference in Amazon SageMaker Data Wrangler: Deploy data preparation flows from SageMaker Data Wrangler for real-time and batch inference. This feature allows you to reuse the data transformation flow that you created in SageMaker Data Wrangler as a step in Amazon SageMaker inference pipelines.

SageMaker Data Wrangler support for real-time and batch inference speeds up your production deployment because there is no need to repeat the implementation of the data transformation flow.

You can now integrate SageMaker Data Wrangler with SageMaker inference. The same data transformation flows created with the easy-to-use, point-and-click interface of SageMaker Data Wrangler, containing operations such as Principal Component Analysis and one-hot encoding, will be used to process your data during inference. This means that you don’t have to rebuild the data pipeline for a real-time and batch inference application, and you can get to production faster.

Classifying and Extracting Mortgage Loan Data with Amazon Textract: Until now, classification and extraction of data from mortgage loan application packages have been human-intensive tasks, although some lenders have used a hybrid approach with technology such as Amazon Textract. However, customers told us that they needed even greater workflow automation to speed up automation efforts and reduce human error so that their staff could focus on higher-value tasks.

The new API also provides additional value-added services. It can detect which documents in a package are signed and which are not. It also provides a summary output of the documents in a mortgage application package and identifies select important documents, such as bank statements and 1003 forms, that would normally be present. The new workflow is powered by a collection of machine learning (ML) models. When a mortgage application package is uploaded, the workflow classifies the documents in the package before routing them to the right ML model, based on their classification, for data extraction.
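The upload-then-analyze flow above can be sketched with the asynchronous lending analysis API; the bucket and file name below are placeholders:

```python
# Sketch: kicking off asynchronous analysis of a mortgage package and then
# fetching its summary (bucket and file name are placeholders).
params = {
    "DocumentLocation": {
        "S3Object": {"Bucket": "my-loans", "Name": "application-package.pdf"}
    },
}
# To run for real (requires credentials):
#   import boto3
#   textract = boto3.client("textract")
#   job = textract.start_lending_analysis(**params)
#   summary = textract.get_lending_analysis_summary(JobId=job["JobId"])
```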

Process PDFs, Word Documents, and Images with Amazon Comprehend for IDP: With Amazon Comprehend for IDP, customers can process their semi-structured documents, such as PDFs, docx files, and PNG, JPG, or TIFF images, as well as plain-text documents, with a single API call. This new feature combines OCR and Amazon Comprehend’s existing natural language processing (NLP) capabilities to classify and extract entities from the documents. The custom document classification API allows you to organize documents into categories or classes, and the custom named entity recognition API allows you to extract entities such as product codes or business-specific entities. For example, an insurance company can now process scanned customer claims with fewer API calls: using the Amazon Comprehend entity recognition API, they can extract the customer number from the claims, and use the custom classifier API to sort the claim into the different insurance categories, such as home, car, or personal.
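The insurance example above might look like the following sketch, which passes raw file bytes to a custom classifier endpoint in a single call. The endpoint ARN is a placeholder, and doc_bytes stands in for something like open("claim.pdf", "rb").read():

```python
# Sketch: classifying a scanned insurance claim with a custom Comprehend
# classifier endpoint by passing raw file bytes (ARN is a placeholder;
# doc_bytes stands in for real PDF or image bytes).
doc_bytes = b"%PDF-"  # stand-in for real document bytes
params = {
    "EndpointArn": (
        "arn:aws:comprehend:us-east-1:123456789012:"
        "document-classifier-endpoint/claims"
    ),
    "Bytes": doc_bytes,  # OCR on the PDF/image is handled by the service
}
# To run for real (requires credentials and a trained custom endpoint):
#   import boto3
#   boto3.client("comprehend").classify_document(**params)
```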

Next Generation SageMaker Notebooks – Now with Built-in Data Preparation, Real-Time Collaboration, and Notebook Automation: SageMaker Studio notebooks automatically generate key visualizations on top of Pandas data frames to help you understand data distribution and identify data quality issues, like missing values, invalid data, and outliers. You can also select the target column for ML models and generate ML-specific insights such as imbalanced classes or highly correlated columns. You then receive recommendations for data transformations to resolve the issues. You can apply the data transformations right in the UI, and SageMaker Studio notebooks automatically generate the corresponding transformation code in the notebook cells that you can use to replay your data preparation pipeline.

SageMaker Studio now offers shared spaces that give data science and ML teams a workspace where they can read, edit, and run notebooks together in real time to streamline collaboration and communication during the development process. Shared spaces provide a shared Amazon EFS directory that you can utilize to share files within a shared space. All taggable SageMaker resources that you create in a shared space are automatically tagged to help you organize and have a filtered view of your ML resources, such as training jobs, experiments, and models, that are relevant to the business problem you work on in the space. This also helps you monitor costs and plan budgets using tools such as AWS Budgets and AWS Cost Explorer.


AWS re:Invent 2022 – Day 4 Recap

AWS Application Composer is a new AWS service that helps developers simplify and accelerate architecting, configuring, and building serverless applications. Users can visually compose serverless applications using AWS services with little guesswork. AWS Application Composer’s browser-based visual canvas supports the drag and drop of AWS services, establishing connectivity between them to form an application architecture comprising multiple AWS services. This service aids Developers in overcoming the challenges of configuring various AWS services and from writing IaC to deploying the application. AWS Application Composer maintains the visual representation of the application architecture in sync with the IaC, in real-time.

Create Point-to-Point Integrations Between Event Producers and Consumers with Amazon EventBridge Pipes: In modern event-driven applications where multiple cloud services are used as building blocks, communication between the services requires integration code, and maintaining that integration code is a challenge. Amazon EventBridge Pipes is a new feature of Amazon EventBridge that makes it easier to build event-driven applications by providing a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, removing the need to write undifferentiated glue code. Amazon EventBridge Pipes brings the most popular features of the Amazon EventBridge Event Bus, such as event filtering, integration with more than 14 AWS services, and automatic delivery retries.

Process PDFs, Word Documents, and Images with Amazon Comprehend for IDP
With Amazon Comprehend for IDP, customers can process their semi-structured documents, such as PDFs, docx, PNG, JPG, or TIFF images, as well as plain-text documents, with a single API call. This new feature combines OCR and Amazon Comprehend’s existing natural language processing (NLP) capabilities to classify and extract entities from documents. The custom document classification API allows you to organize documents into categories or classes, and the custom-named entity recognition API allows you to extract entities from documents like product codes or business-specific entities.

Amazon CodeCatalyst: A unified software development and delivery service, Amazon CodeCatalyst enables software development teams to quickly and easily plan, develop, collaborate on, build, and deliver applications on AWS, reducing friction throughout the development lifecycle.

Features in Amazon CodeCatalyst that address common software delivery challenges include:

  1. Blueprints that set up the project’s resources—not just scaffolding for new projects, but also the resources needed to support software delivery and deployment.
  2. On-demand cloud-based Dev Environments, to make it easy to replicate consistent development environments for you or your teams.
  3. Issue management, enabling tracing of changes across commits, pull requests, and deployments.
  4. Automated build and release (CI/CD) pipelines using flexible, managed build infrastructure.
  5. Dashboards to surface a feed of project activities such as commits, pull requests and test reporting.
  6. The ability to invite others to collaborate on a project with just an email.
  7. Unified search, making it easy to find what you’re looking for across users, issues, code, and other project resources.

Step Functions Distributed Map – A Serverless Solution for Large-Scale Parallel Data Processing
The new distributed map state allows you to write Step Functions to coordinate large-scale parallel workloads within your serverless applications. You can now iterate over millions of objects such as logs, images, or .csv files stored in Amazon Simple Storage Service (Amazon S3). The new distributed map state can launch up to ten thousand parallel workflows to process data.

Step Functions distributed map supports a maximum concurrency of up to 10,000 executions in parallel, which is well above the concurrency supported by many other AWS services. You can use the maximum concurrency feature of the distributed map to ensure that you do not exceed the concurrency of a downstream service.
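The distributed map described above is expressed in a state machine definition (Amazon States Language). The following sketch fans out over every object in an S3 bucket; the bucket name and the inner Lambda task are placeholders for illustration:

```python
# Sketch of an ASL (Amazon States Language) distributed map state that fans
# out over objects in an S3 bucket. The bucket name and the inner Lambda
# task are placeholders.
distributed_map_state = {
    "Type": "Map",
    "ItemReader": {  # feed the map from an S3 object listing
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "my-log-bucket"},
    },
    "MaxConcurrency": 10000,  # cap the number of parallel child workflows
    "ItemProcessor": {
        "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "EXPRESS"},
        "StartAt": "ProcessObject",
        "States": {
            "ProcessObject": {
                "Type": "Task",
                "Resource": "arn:aws:states:::lambda:invoke",
                "Parameters": {"FunctionName": "process-log-object"},
                "End": True,
            }
        },
    },
    "End": True,
}
```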


AWS re:Invent 2022 – Day 3 Recap

AWS Marketplace Vendor Insights – Simplify Third-Party Software Risk Assessments: It helps you ensure that third-party software continuously meets your industry standards by compiling security and compliance information, such as data privacy and residency, application security, and access control, in one consolidated dashboard.

As a security engineer, you can now complete a third-party software risk assessment in a few days instead of months. Specifically, you can:

  • Quickly discover products in AWS Marketplace that meet your security and certification standards by searching for and accessing Vendor Insights profiles.
  • Access and download current and validated information, with evidence gathered from the vendors’ security tools and audit reports. Reports are available for download on AWS Artifact third-party reports (now available in preview).
  • Monitor your software’s security posture post-procurement and receive notifications for security and compliance events.

New for Amazon SageMaker – Perform Shadow Tests to Compare Inference Performance Between ML Model Variants
You can create shadow tests using the new SageMaker Inference Console and APIs. Shadow testing gives you a fully managed experience for setup, monitoring, viewing, and acting on the results of shadow tests. If you have existing workflows built around SageMaker endpoints, you can also deploy a model in shadow mode using the existing SageMaker Inference APIs. You can monitor the progress of the shadow test and performance metrics such as latency and error rate through a live dashboard.

Next Generation SageMaker Notebooks – Now with Built-in Data Preparation, Real-Time Collaboration, and Notebook Automation
The next generation of Amazon SageMaker Notebooks will increase efficiency across the ML development workflow. You can now improve data quality in minutes with the built-in data preparation capability, edit the same notebooks with your teams in real-time, and automatically convert notebook code to production-ready jobs.

SageMaker Studio now offers shared spaces that give data science and ML teams a workspace where they can read, edit, and run notebooks together in real time to streamline collaboration and communication during the development process. Shared spaces provide a shared Amazon EFS directory that you can utilize to share files within a shared space.

You can now select a notebook and automate it as a job that can run in a production environment without the need to manage the underlying infrastructure. When you create a SageMaker Notebook Job, SageMaker Studio takes a snapshot of the entire notebook, packages its dependencies in a container, builds the infrastructure, runs the notebook as an automated job on a schedule you define, and deprovisions the infrastructure upon job completion.

Introducing Support for Real-Time and Batch Inference in Amazon SageMaker Data Wrangler
To build machine learning models, machine learning engineers need to develop a data transformation pipeline to prepare the data. The process of designing this pipeline is time-consuming and requires a cross-team collaboration between machine learning engineers, data engineers, and data scientists to implement the data preparation pipeline into a production environment.

The main objective of Amazon SageMaker Data Wrangler is to make data preparation and data processing workloads easy. With SageMaker Data Wrangler, customers can simplify data preparation and carry out all the necessary steps of the data preparation workflow in a single visual interface. SageMaker Data Wrangler reduces the time it takes to rapidly prototype and deploy data processing workloads to production, so customers can easily integrate with MLOps production environments.

Additional Data Connectors for Amazon AppFlow
AWS announced the addition of 22 new data connectors for Amazon AppFlow, including:

  1. Marketing connectors (e.g., Facebook Ads, Google Ads, Instagram Ads, LinkedIn Ads).
  2. Connectors for customer service and engagement (e.g., MailChimp, SendGrid, Zendesk Sell or Chat, and more).
  3. Business operations (Stripe, QuickBooks Online, and GitHub).

In total, Amazon AppFlow now supports over 50 integrations with various SaaS applications and AWS services.

Redesigned UI for Amazon SageMaker Studio
The redesigned UI makes it easier for you to discover and get started with the ML tools in SageMaker Studio. One highlight of the new UI includes a redesigned navigation menu with links to SageMaker capabilities that follow the typical ML development workflow from preparing data to building, training, and deploying ML models.


AWS re:Invent 2022 – Day 2 Recap

Amazon QuickSight Q is powered by machine learning (ML), providing self-service analytics by allowing you to query your data in plain language and therefore eliminating the need to fiddle with dashboards, controls, and calculations. Since last year’s announcement of QuickSight Q, you can ask simple questions like “Who had the highest sales in EMEA in 2021?” and get your answers (with relevant visualizations like graphs, maps, or tables) in seconds. Automated data preparation utilizes machine learning to infer semantic information about data and adds it to datasets as metadata about the columns (fields), making it faster for you to prepare data to support natural language questions.

AWS Supply Chain is a new cloud-based application that helps supply chain leaders mitigate risks and lower costs to increase supply chain resilience. AWS Supply Chain unifies supply chain data, provides ML-powered actionable insights, and offers built-in contextual collaboration, all of which help you increase customer service levels by reducing stockouts and help you lower costs from overstock.

Amazon DataZone is a new data management service that makes it faster and easier for customers to catalog, discover, share, and govern data stored across AWS, on-premises, and third-party sources. “To unlock the full power, the full value of data, we need to make it easy for the right people and applications to find, access, and share the right data when they need it — and to keep data safe and secure,” AWS CEO Adam Selipsky said in his keynote session. DataZone enables you to set data free throughout the organization safely by making it easy for admins and data stewards to manage and govern access to data. DataZone provides a data catalog accessible through a web portal where users within an organization can find data that can be used for analytics, business intelligence, and machine learning.

Amazon Security Lake is a purpose-built service that automates the central management of security data sources into a purpose-built data lake stored in your account. This service helps security teams analyze security data easily and gain a complete understanding of the organization’s security posture. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard that helps normalize and combine security data from various sources, including on-premises infrastructure, firewalls, AWS CloudTrail, Amazon Route 53, Amazon VPC Flow Logs, and more. Amazon Security Lake also supports integrating data from third-party security solutions and custom sources that produce OCSF-formatted security data.

VPC Lattice – For modern applications built on distributed architectures, troubleshooting communication issues between the various components and services is challenging and time-consuming unless the communication configuration is centrally managed and tracked. AWS VPC Lattice is a new capability of Amazon Virtual Private Cloud (Amazon VPC) that gives you a consistent way to connect, secure, and monitor communication between distributed services. You can define policies for traffic management, network access, and monitoring in VPC Lattice to connect applications in a simple and consistent way across AWS compute services (instances, containers, and serverless functions). VPC Lattice handles service-to-service networking, security, and monitoring requirements.
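As a rough illustration of the model described above, the sketch below builds a plain-data "plan" for a service network and shows how it might be applied with boto3. The names (`orders-network`, `orders`, `payments`) and the VPC ID are hypothetical, and the `vpc-lattice` API calls are only a sketch against the service as announced; running them requires AWS credentials and VPC Lattice permissions.

```python
# Hypothetical sketch: wiring services into a VPC Lattice service network.
# All resource names/IDs are placeholders, not real infrastructure.

def lattice_setup_plan(network_name, vpc_id, service_names):
    """Describe, as plain data, the resources the calls below would create."""
    return {
        "service_network": network_name,
        "vpc_association": vpc_id,
        "service_associations": list(service_names),
    }

def apply_plan(plan):
    import boto3  # deferred so the sketch is importable without boto3/credentials

    lattice = boto3.client("vpc-lattice")
    network = lattice.create_service_network(name=plan["service_network"])
    # Associate the VPC so clients inside it can reach services in the network.
    lattice.create_service_network_vpc_association(
        serviceNetworkIdentifier=network["id"],
        vpcIdentifier=plan["vpc_association"],
    )
    # Register each service and attach it to the network.
    for name in plan["service_associations"]:
        service = lattice.create_service(name=name)
        lattice.create_service_network_service_association(
            serviceNetworkIdentifier=network["id"],
            serviceIdentifier=service["id"],
        )
    return network["id"]

plan = lattice_setup_plan(
    "orders-network", "vpc-0123456789abcdef0", ["orders", "payments"]
)
print(plan["service_associations"])
```

The service network acts as the single place where connectivity and access policies live, which is what removes the per-service networking configuration the paragraph above describes.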


AWS re:Invent 2022 – Day 1 Recap

Amazon Inspector Now Scans AWS Lambda Functions for Vulnerabilities: Amazon Inspector, a vulnerability management service that continually scans workloads across Amazon Elastic Compute Cloud (Amazon EC2) instances and container images in Amazon Elastic Container Registry (Amazon ECR), now supports scanning AWS Lambda functions and Lambda layers. Previously, customers who wanted to assess their Lambda functions for common vulnerabilities had to combine AWS and third-party tools, which increased the complexity of keeping all their workloads secure. Because new vulnerabilities can appear at any time, it is very important for the security of your applications that workloads are continuously monitored and rescanned in near real time as new vulnerabilities are published.
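A minimal sketch of what enabling Lambda scanning might look like with boto3's `inspector2` client is shown below. The account ID is a placeholder, the exact filter values should be checked against the current API reference, and running the call requires Inspector permissions.

```python
# Hypothetical sketch: enable Lambda scanning in Amazon Inspector and list
# Lambda-related findings. Account IDs and filter values are assumptions.

# Filter criteria narrowing list_findings to Lambda function findings.
LAMBDA_FINDINGS_FILTER = {
    "resourceType": [{"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}]
}

def enable_lambda_scanning(account_ids):
    import boto3  # deferred so the sketch is importable without credentials

    inspector = boto3.client("inspector2")
    # Add LAMBDA to the resource types Inspector continuously scans.
    inspector.enable(accountIds=account_ids, resourceTypes=["LAMBDA"])
    # Page through the findings raised for Lambda functions only.
    paginator = inspector.get_paginator("list_findings")
    for page in paginator.paginate(filterCriteria=LAMBDA_FINDINGS_FILTER):
        for finding in page["findings"]:
            print(finding["severity"], finding["title"])
```

Because Inspector rescans continuously, enabling the resource type once is enough; new findings appear as new CVEs are published.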

Protect Sensitive Data with Amazon CloudWatch Logs: Safeguard sensitive data that is ingested by CloudWatch Logs by using CloudWatch Logs data protection policies. When sensitive information is logged, CloudWatch Logs data protection automatically masks it per your configured policy. This is designed so that none of the downstream services that consume these logs can see the unmasked data. These policies let you audit and mask sensitive log data: once data protection is enabled for a log group, sensitive data that matches the configured data identifiers is masked, and only a user who has the logs:Unmask IAM permission can view the unmasked data for validation. Each managed data identifier is designed to detect a specific type of sensitive data, such as credit card numbers, AWS secret access keys, or passport numbers for a particular country or region. You can configure a log group to use these identifiers to analyze ingested logs and take action when they are detected.
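As a sketch of how such a policy might be attached with boto3, the example below builds a policy document that audits and then masks credit card numbers. The log group name is a placeholder, and the document shape reflects the format as announced, so verify it against the current CloudWatch Logs documentation before use.

```python
# Hypothetical sketch: a CloudWatch Logs data protection policy that audits
# and masks credit card numbers. Log group name is a placeholder.
import json

CC_IDENTIFIER = "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"

policy_document = {
    "Name": "mask-credit-cards",
    "Version": "2021-06-01",
    "Statement": [
        {   # First pass: detect matches and report them as findings.
            "Sid": "audit",
            "DataIdentifier": [CC_IDENTIFIER],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {   # Second pass: mask the matched data for downstream consumers.
            "Sid": "redact",
            "DataIdentifier": [CC_IDENTIFIER],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

def apply_policy(log_group_name):
    import boto3  # deferred so the sketch runs without AWS credentials

    logs = boto3.client("logs")
    logs.put_data_protection_policy(
        logGroupIdentifier=log_group_name,
        policyDocument=json.dumps(policy_document),
    )
```

The two-statement audit-then-deidentify structure is what lets you both count occurrences of sensitive data and hide it from readers without the unmask permission.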

AWS Backup – Protect and Restore Your CloudFormation Stacks: AWS Backup now supports attaching an AWS CloudFormation stack to your data protection policies, for applications managed using infrastructure as code (IaC). With this, all stateful and stateless components supported by AWS Backup are backed up around the same time. As the application managed with CloudFormation is updated, AWS Backup automatically keeps track of changes and updates the data protection policies for you. This gives you a single recovery point that can be used to recover the application stack or its individual resources, and helps to prove compliance with your data protection policies.
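A rough boto3 sketch of assigning a stack to an existing backup plan follows. The plan ID, IAM role ARN, and stack ARN are all placeholders; the key idea is simply that the backup selection's resource is the CloudFormation stack ARN rather than individual resource ARNs.

```python
# Hypothetical sketch: attach a CloudFormation stack to an AWS Backup plan so
# all supported resources in the stack share a recovery point. All ARNs and
# IDs below are placeholders.

STACK_ARN = (
    "arn:aws:cloudformation:us-east-1:111122223333:"
    "stack/my-app/ExampleStackId"
)

selection = {
    "SelectionName": "my-app-stack",
    "IamRoleArn": "arn:aws:iam::111122223333:role/BackupServiceRole",
    # Pointing the selection at the stack ARN covers its member resources.
    "Resources": [STACK_ARN],
}

def attach_to_plan(backup_plan_id):
    import boto3  # deferred so the sketch is importable without credentials

    backup = boto3.client("backup")
    return backup.create_backup_selection(
        BackupPlanId=backup_plan_id,
        BackupSelection=selection,
    )
```

Because the selection targets the stack, AWS Backup can follow stack updates automatically instead of requiring you to re-enumerate resources in the plan.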
