AWS re:Invent Recap: Werner Vogels Keynote

Major Service Announcements:

  1. NEW: AWS CloudShell is a browser-based, pre-authenticated shell that can be launched directly from the AWS Management Console to run AWS CLI commands against AWS services using your preferred shell: Bash, PowerShell, or Z shell. No command line tools need to be downloaded or installed. AWS CloudShell is generally available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), ap-northeast-1 (Tokyo), and eu-west-1 (Ireland) at launch.
  2. PREVIEW: Chaos Engineering with AWS Fault Injection Simulator is a fully managed chaos engineering service that makes it easier for teams to discover an application’s weaknesses at scale in order to improve performance, observability, and resiliency. Developers can now specify conditions to create real-world scenarios that reveal hidden issues, monitor for unforeseen problems, and identify performance bottlenecks that are usually very tough to find.
  3. PREVIEW: Amazon Managed Service for Prometheus (AMP) enables developers to ingest, store, and query millions of time series metrics, increasing scaling capabilities. This Prometheus-compatible monitoring service makes it easier to keep track of containerized applications at a larger scale.
  4. PREVIEW: Amazon Managed Service for Grafana (AMG) is a secure data visualization service that allows users to query, correlate, and visualize various operational metrics, logs, and traces across multiple data sources to increase observability.

Why it Matters:

We noticed some major themes throughout the presentation about meeting customer needs, wherever customers are in their digital transformation journeys:

1. Architecture Sustainability: One of the biggest focuses for many organizations is analyzing the sustainability of their architectures and the impact it has on operations. Once COVID-19 forced companies to adapt how they work, technical architecture strategy was no longer just about working from home. It’s about working anywhere, with the right tools available to complete the work that drives business success. The AWS tools and new services launched enable development teams to build better solutions and help our clients bring more sophisticated solutions to market faster.

2. Dependability: Another critical theme discussed was the need for solutions to be dependable and not bound by latency. Availability, reliability, safety, and security are all properties of dependability that must be considered when building solutions. This is why we’re excited about the new fault removal, forecasting, and Chaos Engineering tool announced, AWS Fault Injection Simulator. It enables weaknesses to be identified and fixed quickly, addressing the common challenge IT teams face in adequately stress-testing cloud applications. With the integration of the AWS Fault Injection Simulator, we will be able to stress-test applications at a much faster clip to ensure greater dependability in the solutions we deliver to our customers.
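The core idea behind a chaos experiment, injecting controlled faults and verifying the system still behaves correctly, can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the AWS Fault Injection Simulator API (FIS targets real AWS resources via experiment templates); the function names here are hypothetical:

```python
import random
import time

def inject_faults(func, error_rate=0.3, max_latency_s=0.05, seed=42):
    """Wrap a callable so it randomly fails or slows down,
    mimicking the kind of faults a chaos experiment injects."""
    rng = random.Random(seed)

    def wrapper(*args, **kwargs):
        time.sleep(rng.uniform(0, max_latency_s))  # injected latency
        if rng.random() < error_rate:              # injected failure
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)

    return wrapper

def with_retries(func, attempts=5):
    """A simple retry policy we want to validate under injected faults."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts - 1:
                raise

# Stress-test the retry policy: with a 50% injected failure rate,
# the call should still eventually succeed.
flaky_fetch = inject_faults(lambda: "ok", error_rate=0.5)
result = with_retries(flaky_fetch)
print(result)  # "ok"
```

The value of the managed service is doing this against real infrastructure (instances, networks, APIs) with guardrails, rather than against a wrapped function.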

3. Observability: Many organizations with various systems generating and storing terabytes of data would benefit from end-to-end observability that delivers optimized analytics and valuable insights to IT and leadership teams. The sheer amount of data can be massive and near impossible to integrate into one core dashboard without the right tools and approach. That is why we are excited about the logging, monitoring, and tracing capabilities coming in two core service previews announced:

Amazon Managed Service for Prometheus (AMP) is a tool that automatically scales ingestion, storage, and query capacity as workload sizes change, and integrates with AWS security services. The service works with Amazon EKS, Amazon ECS, and AWS Distro for OpenTelemetry, enabling fast and secure access to required data sources.
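Because AMP is Prometheus-compatible, an existing Prometheus server can ship its metrics to an AMP workspace through a standard `remote_write` block. A minimal sketch, with a placeholder workspace URL (native SigV4 signing in `remote_write` requires a sufficiently recent Prometheus version; earlier versions used a signing proxy instead):

```yaml
# prometheus.yml -- the workspace ID and region below are placeholders
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE/api/v1/remote_write
    sigv4:
      region: us-east-1
```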

Amazon Managed Service for Grafana (AMG) is a fully managed service that can be leveraged to create on-demand, scalable, and secure Grafana workspaces that create visualization elements and analyze data from multiple sources.

4. Customer-Centricity: Every organization has a unique digital transformation journey with very specific needs that require a tailored approach to build a solution that has an impact. Werner reiterated his message of building with customer needs in mind on Twitter by saying, “Think about what you can do to meet your customers where they are.” He reminded us to be conscious of important issues up front while designing products – services, interfaces, and user experience features that could help address their concerns during an uncertain time for many. It is critical that the services we build to address operational problems consider the experiences we’re creating for people.

Are you ready to construct your Digital Transformation Roadmap or need to know a little more about how we can help? Get connected with an Idexcel expert to schedule your assessment today!

AWS re:Invent Recap: Infrastructure Keynote

Here are the key discussion topics from the AWS re:Invent 2020 Infrastructure Keynote from Peter DeSantis – Senior VP of Global Infrastructure and Customer Support, with a focus on efforts to improve resiliency, availability, and sustainability for its customers:

  1. AWS Nitro System: Enables faster innovation and enhanced security. The most recent generation of instances built on the AWS Nitro System is the C6gn EC2 instance.
  2. AWS Inferentia: AWS Inferentia is Amazon’s first machine learning chip designed to accelerate deep learning workloads and provide high-performance inference in the cloud.
  3. AWS Graviton2: Graviton2 processors are the most power-efficient processors AWS provides, achieving 2-3.5 times better performance per watt than any other processor in AWS’s portfolio, and are suitable for a variety of workloads.
  4. AWS Commitment to Renewable Energy: A customer moving from an enterprise data center to an AWS data center can reduce their carbon footprint. Redesigned UPS and data center power delivery cut energy lost in power conversion by 35%, and AWS has 6,500 megawatts of renewable energy across the world.
  5. New Infrastructure Design: AWS is implementing a new infrastructure design that replaces the single, large UPSs in their infrastructure with custom-built small battery packs and power supplies, placed directly into every data center. These micro UPSs are intended to reduce complexity and maintenance and improve availability.
  6. Regions and Availability Zones: AWS continues to invest in more regions and Availability Zones (AZs) around the globe. Italy and South Africa launched earlier in 2020, and Indonesia, Japan, Spain, India, Switzerland, and Melbourne are in the works. Explore AWS’ interactive global infrastructure site here.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Edge Manager

What happened?

Amazon SageMaker Edge Manager was announced during the re:Invent Machine Learning Keynote. This new feature gives developers model management tools to optimize, secure, monitor, and maintain machine learning models on fleets of edge devices such as smart cameras, robots, personal computers, and mobile devices.

Why is it important?

  • Device Compatibility: It enables developers to train ML models once in the cloud and run them across fleets of devices at the edge.
  • Optimize ML Models: Amazon SageMaker Edge Manager provides a software agent that comes with an ML model optimized with SageMaker Neo automatically. This eliminates the need to have Neo runtime installed on devices to leverage model optimizations.
  • Reduce Downtime and Service Disruption Costs: SageMaker Edge Manager manages models separately from the rest of the application, so the model and the application can be updated independently, reducing downtime and service disruptions.
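The last point, managing the model separately from the application so each can be updated independently, can be illustrated with a tiny sketch. This is illustrative only, not the SageMaker Edge Manager agent API; the class and method names are hypothetical:

```python
import threading

class ModelAgent:
    """Minimal sketch of managing a model separately from the
    application: the agent can swap in a new model version while
    the application keeps calling predict() uninterrupted.
    (Illustrative only -- not the Edge Manager agent API.)"""

    def __init__(self, model, version):
        self._lock = threading.Lock()
        self._model = model
        self.version = version

    def predict(self, x):
        with self._lock:
            return self._model(x)

    def deploy(self, model, version):
        # Atomic swap: no application restart required.
        with self._lock:
            self._model = model
            self.version = version

agent = ModelAgent(lambda x: x * 2, version="1.0")
before = agent.predict(3)                    # served by v1.0 -> 6
agent.deploy(lambda x: x * 3, version="2.0") # update the model only
after = agent.predict(3)                     # served by v2.0 -> 9
print(before, after, agent.version)
```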

Why are we excited?

The new SageMaker Edge Manager helps data scientists remotely optimize, monitor, and improve the ML models on edge devices across the world, saving developer time and customer costs. It reduces the time and effort required to get models to production, while continuously monitoring and improving model quality.

Availability

It’s available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) regions.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Debugger

What happened?

Amazon SageMaker Debugger, a tool that monitors machine learning training performance to help developers train models faster, was announced during the re:Invent 2020 Machine Learning keynote. It tracks system resource utilization and creates alerts for problems during training. With these new capabilities, developers receive automatic recommendations for resource allocation for training jobs, resulting in an optimized training process that reduces time and costs.

Why is it important?

  • Monitor Automatically: Amazon SageMaker Debugger enables developers to train their models faster through automatic monitoring of system resource utilization and alerts for training bottlenecks or bugs.
  • ID & Resolve Issues Faster: Amazon SageMaker Debugger provides quick issue resolution and bug fix actions with automatic alerts and resource allocation recommendations.
  • Customizability: With SageMaker Debugger, custom conditions can also be created to test for specific behavior in your training jobs.

Why are we excited?

Amazon SageMaker Debugger allows data scientists to iterate on an ML model to improve accuracy and helps detect model training inconsistencies in real time. With a little brainstorming and experience, we can pinpoint the actual problem in our ML model. It also integrates with AWS Lambda, which can automatically stop a training job when a non-converging action is detected, resulting in lower costs and faster training times.
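The logic behind a "non-converging" alert can be sketched with a simple rule: flag the run when the best loss stops improving for a stretch of steps. This is a toy illustration of the idea, not the SageMaker Debugger rule API; the function and thresholds are hypothetical:

```python
def loss_not_decreasing(losses, patience=5, min_delta=1e-3):
    """Toy 'non-converging' rule: flag the run if the best loss has
    not improved by at least min_delta for `patience` steps in a row.
    (Illustrative sketch -- not the SageMaker Debugger rule API.)"""
    best = float("inf")
    stalled = 0
    for step, loss in enumerate(losses):
        if loss < best - min_delta:
            best = loss
            stalled = 0
        else:
            stalled += 1
        if stalled >= patience:
            return step  # step at which we'd alert (e.g. stop the job)
    return None

healthy = [1.0, 0.8, 0.6, 0.5, 0.4, 0.35, 0.3]
stuck   = [1.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]
print(loss_not_decreasing(healthy))  # None: loss keeps improving
print(loss_not_decreasing(stuck))    # 6: flags the stalled run
```

Debugger's built-in rules apply the same pattern automatically to tensors and system metrics emitted during training.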

Availability

Amazon SageMaker Debugger is now generally available in all AWS regions in the Americas and Europe, and some regions in Asia Pacific with additional regions coming soon.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Clarify

What happened?

AWS released Amazon SageMaker Clarify, a new tool for mitigating bias in machine learning models that helps customers detect bias more accurately and rapidly to build better solutions. It provides critical data and insights that increase transparency, helping teams analyze and explain model behavior to stakeholders and customers.

Why is it important?

  • Easily Detect Bias: SageMaker Clarify helps data scientists detect bias in datasets before training and in their models after training.
  • Valuable Metrics & Statistics: It explains how feature values contribute to the predicted outcome, both for the model overall and for individual predictions.
  • Build Better Solutions: By letting developers specify important model attributes, such as location, occupation, and age, teams can focus the detection algorithms on those attributes to find any presence of bias. This enables teams to build the most accurate and effective solutions that drive client success.
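To make the pre-training side concrete, one of the simplest bias statistics a tool like Clarify reports is class imbalance: how over- or under-represented one facet group is in the training data. A minimal sketch of that formula (illustrative only; the function name and sample attribute are hypothetical):

```python
def class_imbalance(groups, facet_group):
    """Pre-training class imbalance between a facet group and
    everyone else: (n_a - n_d) / (n_a + n_d), in [-1, 1].
    0 means the groups are equally represented.
    (Illustrative formula only -- not the Clarify API.)"""
    n_a = sum(1 for g in groups if g == facet_group)
    n_d = len(groups) - n_a
    return (n_a - n_d) / (n_a + n_d)

# Hypothetical facet column, e.g. an 'age bracket' attribute
samples = ["under_40"] * 75 + ["over_40"] * 25
print(class_imbalance(samples, "under_40"))  # 0.5 -> skewed toward under_40
print(class_imbalance(["a", "b"], "a"))      # 0.0 -> balanced
```

Clarify computes a battery of such metrics before training and model-explainability metrics after, surfacing them in reports.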

Why are we excited?

With Amazon SageMaker Clarify, we can now better understand each feature in our ML models and give more detailed explanations to stakeholders. It provides transparency in model understanding that gives leadership more valuable information to inform critical business decision-making. SageMaker Clarify also includes feature importance graphs that explain model predictions and produce reports for presentations to better highlight any significant business impacts.

Availability

SageMaker Clarify is available in all regions where Amazon SageMaker is available, at no additional charge for current users of Amazon SageMaker.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Machine Learning Keynote

Here are the key announcements from the re:Invent 2020 Machine Learning Keynote:

  1. Faster Distributed Training on Amazon SageMaker is the quickest and most efficient approach for training large deep learning models and datasets. Through model parallelism and data parallelism, SageMaker distributed training automatically splits deep learning models and datasets for training in significantly less time across AWS GPU instances.
  2. Amazon SageMaker Clarify detects potential bias during all phases of the data preparation, model training, and model deployment, giving development teams greater visibility into their training data and models to resolve potential bias and explain predictions in greater detail.
  3. Deep Profiling for Amazon SageMaker Debugger gives developers the capability to train models at a quicker pace by monitoring system resource utilization automatically and providing notifications of training bottlenecks.
  4. Amazon SageMaker Edge Manager provides developers the tools to optimize, secure, monitor, and maintain ML models on edge devices like smart cameras, robots, personal computers, and mobile devices.
  5. Amazon Redshift ML empowers data analyst, development, and scientist teams to create, train, and deploy machine learning (ML) models using SQL commands. Teams can now build and train machine learning models from Amazon Redshift datasets and apply them to use cases.
  6. Amazon Neptune ML leverages Graph Neural Networks (GNNs) to make easy, fast, and more accurate predictions using graph data. The accuracy of most graph predictions increases by over 50% with Neptune ML when compared to non-graph prediction methods. Neptune ML automates the selection and training of the best ML model for graph data and lets users run ML on their graph directly using Neptune APIs and queries. ML teams can now create, train, and apply ML on Neptune data, reducing development time from weeks to a matter of hours.
  7. Amazon Lookout for Metrics applies ML to detect anomalies in your metrics, enabling proactive monitoring of business health, faster issue diagnosis, and quick identification of opportunities that can save costs, increase margins, and improve customer experience.
  8. Amazon HealthLake leverages ML models to empower healthcare and life sciences organizations to aggregate various health information from different silos and formats into a centralized AWS data lake to standardize health data.
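To give a feel for what anomaly detection on a business metric means (item 7 above), here is a deliberately naive sketch that flags points far from the mean. Lookout for Metrics uses far more sophisticated ML under the hood; this is only an illustration of the goal, with hypothetical data:

```python
from statistics import mean, stdev

def zscore_anomalies(series, threshold=3.0):
    """Naive anomaly detector: flag points more than `threshold`
    standard deviations from the mean. (Illustrative only --
    managed services use learned, seasonal-aware models.)"""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mu) > threshold * sigma]

# Hypothetical daily order counts with one sudden drop
daily_orders = [100, 102, 98, 101, 99, 103, 100, 40, 101, 97]
print(zscore_anomalies(daily_orders, threshold=2.5))  # [7]: the drop to 40
```

A managed service adds what this sketch lacks: handling seasonality, learning per-metric baselines, and connecting alerts to root-cause diagnosis.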

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS 2020 re:Invent Recap: AWS Trainium

What Happened: 

The newest AWS custom-designed chip, AWS Trainium, was announced during Andy Jassy’s 2020 re:Invent keynote, with the projected best price performance for training Machine Learning (ML) models in the cloud. Designed for deep learning training workloads, it supports applications such as image classification, semantic search, translation, voice recognition, natural language processing (NLP), and recommendation engines.

Why It’s Important: 

  • Lower Costs: AWS Trainium instances are specifically targeted at reducing training costs, complementing the existing savings from AWS Inferentia, which focuses on the inference side of ML applications.
  • Easy Integration: Since it shares the same AWS Neuron SDK as AWS Inferentia, it’s easier for developers with Inferentia experience to start working with AWS Trainium. The SDK integrates with popular ML frameworks like PyTorch, MXNet, and TensorFlow, making it easier for developers to migrate workloads from GPU instances with minimal code changes.
  • Greater Capabilities: This chip is optimized for training deep learning models for applications using images, text, and audio, which means more opportunities to build solutions that solve operational business challenges across industries.

Why We’re Excited

AWS Trainium will be among the most sophisticated training technologies we leverage to deliver solutions that address customer challenges and project requirements. Since it integrates with and complements AWS Inferentia, ML training capabilities will significantly increase in speed, scale, and efficiency. As a cost-effective option with a broad set of capabilities and a robust AWS toolset behind it, it lets teams create end-to-end workflows to scale AI/ML training workloads faster and bring products or services to market at an accelerated rate.

Availability:

AWS Trainium will be available as an EC2 instance and in Amazon SageMaker in the second half of 2021.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Pipelines

What happened?

The new service, Amazon SageMaker Pipelines, has been launched to provide continuous integration and delivery pipelines that automate steps of ML (Machine Learning) workflows. It’s the first purpose-built CI/CD service for ML, used to build, store, and track automated workflows and to create an audit trail for training data and model configurations.

Why is it important?

  • Ease of Use: It has built-in ML workflow templates that can be used to build, test, and deploy ML models quickly.
  • Compliance: Amazon SageMaker pipeline logs can be saved as audit trails to recreate models for similar future business cases that help support compliance requirements.
  • Better Capabilities: This service brings CI/CD practices to ML: separate development and production environments, version control, on-demand testing, and end-to-end automation.
  • Automation: As the first purpose-built CI/CD service for ML, incorporating automation of data loading, transformation, training, tuning, and deployment workflow steps, it increases productivity significantly.
  • Scalability: With the ability to create, automate, and manage end-to-end ML workflows at scale, there’s peace of mind knowing workflows are stored and can be referred back to for audit purposes, compliance requirements, and future solution builds.
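The pipeline idea above, named steps that run in order while every run leaves an audit trail, can be sketched in a few lines. This is an illustrative sketch only, not the SageMaker Pipelines SDK; the step names and helper are hypothetical:

```python
def run_pipeline(steps, data):
    """Tiny sketch of a CI/CD-style ML pipeline: named steps run in
    order, and each run records an audit trail of what executed.
    (Illustrative only -- not the SageMaker Pipelines SDK.)"""
    audit_trail = []
    for name, step in steps:
        data = step(data)
        audit_trail.append((name, data))  # log each step's output
    return data, audit_trail

steps = [
    ("load",      lambda d: d),
    ("transform", lambda d: [x / max(d) for x in d]),  # normalize
    ("train",     lambda d: {"model": "m-001", "mean_input": sum(d) / len(d)}),
]
result, trail = run_pipeline(steps, [2.0, 4.0, 8.0])
print([name for name, _ in trail])  # every step is recorded for audit
```

The managed service adds the pieces this sketch omits: a central repository of reusable steps, a model registry for versions, and lineage tracking for compliance.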

Why We’re Excited

Amazon SageMaker Pipelines offer a more efficient and productive way to scale by reusing the workflow steps created and stored in a central repository. With built-in templates to deploy, train, and test models, our ML teams can quickly leverage CI/CD in our ML environments and easily incorporate models we’ve already created. With the SageMaker Pipelines model registry, we can track model versions in one central location, giving us visibility and up-to-date records of the best possible solution options to meet client deployment needs.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!