AWS re:Invent Recap: Werner Vogels Keynote

Major Service Announcements:

  1. NEW: AWS CloudShell is a browser-based, pre-authenticated shell that can be launched directly from the AWS Management Console to run AWS CLI commands against AWS services using your choice of Bash, PowerShell, or Z shell, all without downloading or installing command line tools. AWS CloudShell is generally available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), ap-northeast-1 (Tokyo), and eu-west-1 (Ireland) at launch.
  2. PREVIEW: Chaos Engineering with AWS Fault Injection Simulator is a fully managed chaos engineering service that makes it easier for teams to discover an application’s weaknesses at scale in order to improve performance, observability, and resiliency. Developers can now specify conditions to create real-world failure scenarios that reveal hidden issues, surface unforeseen problems, and identify performance bottlenecks that are usually very tough to find.
  3. PREVIEW: Amazon Managed Service for Prometheus (AMP) enables developers to ingest, store, and query millions of time series metrics at scale. This Prometheus-compatible monitoring service makes it easier to keep track of containerized applications as they grow.
  4. PREVIEW: Amazon Managed Service for Grafana (AMG) is a secure data visualization service that allows users to query, correlate, and visualize various operational metrics, logs, and traces across multiple data sources to increase observability.

Why it Matters:

There were some major themes throughout the presentation that resonated with us on how to meet customer needs, wherever they are in their digital transformation journeys:

1. Architecture Sustainability: One of the biggest focuses for many organizations is analyzing the sustainability of their architectures and the impact they have on operations. Now that COVID-19 has forced companies to adapt how they work, technical architecture strategy is no longer just about working from home. It’s about working anywhere, with the right tools available to complete the work that drives business success. The AWS tools and services launched this year enable development teams not only to build better solutions, but also to help our clients bring more sophisticated solutions to market faster.

2. Dependability: Another critical theme discussed was the need for solutions to be dependable and not bound by latency. Availability, reliability, safety, and security are all properties of dependability that must be considered when building solutions. This is why we’re excited about the new fault removal, forecasting, and chaos engineering tool announced, AWS Fault Injection Simulator. It enables weaknesses to be identified and fixed quickly, addressing the common challenge IT teams face in adequately stress-testing cloud applications. With the integration of AWS Fault Injection Simulator, we will be able to stress-test applications at a much faster clip to ensure greater dependability in the solutions we deliver to our customers.
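
To make this concrete, here is a minimal, hedged sketch of what defining and starting a fault-injection experiment might look like with boto3 once the service is available in your account. The role ARN, alarm ARN, and tag values are placeholders, and the exact parameters your experiment needs may differ:

```python
import boto3

# Hypothetical sketch: create a simple experiment template that stops one
# EC2 instance tagged for chaos testing, then run the experiment.
# The role ARN, alarm ARN, and tag values below are placeholders.
fis = boto3.client("fis")

template = fis.create_experiment_template(
    clientToken="recap-demo-001",
    description="Stop a tagged EC2 instance to test recovery",
    roleArn="arn:aws:iam::123456789012:role/FisExperimentRole",
    stopConditions=[{
        "source": "aws:cloudwatch:alarm",
        "value": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:HighErrorRate",
    }],
    targets={"tagged-instances": {
        "resourceType": "aws:ec2:instance",
        "resourceTags": {"chaos-ready": "true"},
        "selectionMode": "COUNT(1)",
    }},
    actions={"stop-instances": {
        "actionId": "aws:ec2:stop-instances",
        "targets": {"Instances": "tagged-instances"},
    }},
)

experiment = fis.start_experiment(
    experimentTemplateId=template["experimentTemplate"]["id"]
)
print(experiment["experiment"]["state"])
```

The stop condition ties the experiment to a CloudWatch alarm, so the blast radius stays bounded: if the alarm fires, the experiment halts automatically.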

3. Observability: Many organizations that have various systems generating and storing terabytes of data would benefit from end-to-end observability that surfaces optimized analytics and valuable insights to IT and leadership teams. The sheer volume of data can make it nearly impossible to integrate into one core dashboard without the right tools and approach. That is why we are excited about the logging, monitoring, and tracing capabilities that will be available in the two core service previews announced:

Amazon Managed Service for Prometheus (AMP) automatically scales metric ingestion, storage, and querying as workload sizes change, and it integrates with AWS security services. The service works with Amazon EKS, Amazon ECS, and AWS Distro for OpenTelemetry, enabling fast and secure access to the required data sources.
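
As a rough illustration of how AMP might be provisioned programmatically (the workspace alias is a placeholder, and the service was still in preview at the time of the keynote), a boto3 sketch could look like this:

```python
import boto3

# Hypothetical sketch: create an AMP workspace and print the endpoint that a
# Prometheus server (or the AWS Distro for OpenTelemetry collector) would
# remote-write metrics to. The alias is a placeholder.
amp = boto3.client("amp", region_name="us-east-1")

workspace = amp.create_workspace(alias="eks-cluster-metrics")
workspace_id = workspace["workspaceId"]

details = amp.describe_workspace(workspaceId=workspace_id)
print(details["workspace"]["prometheusEndpoint"])
# A Prometheus remote_write configuration would then target
# <prometheusEndpoint>api/v1/remote_write, signed with SigV4.
```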

Amazon Managed Service for Grafana (AMG) is a fully managed service that can be leveraged to create on-demand, scalable, and secure Grafana workspaces for building visualizations and analyzing data from multiple sources.
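
Similarly, a hedged sketch of provisioning an AMG workspace with boto3 might look like the following; the workspace name, data sources, and authentication settings are illustrative and will vary by account:

```python
import boto3

# Hypothetical sketch: provision an AMG workspace with AWS SSO sign-in and
# service-managed permissions. Values below are illustrative placeholders;
# the workspace takes a few minutes to become active after creation.
grafana = boto3.client("grafana", region_name="us-east-1")

workspace = grafana.create_workspace(
    accountAccessType="CURRENT_ACCOUNT",
    authenticationProviders=["AWS_SSO"],
    permissionType="SERVICE_MANAGED",
    workspaceName="observability-dashboards",
    workspaceDataSources=["PROMETHEUS", "CLOUDWATCH", "XRAY"],
)
print(workspace["workspace"]["status"])
```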

4. Customer-Centricity: Every organization has a unique digital transformation journey with very specific needs that require a tailored approach to building a solution that has an impact. Werner reiterated his message of building with customer needs in mind on Twitter: “Think about what you can do to meet your customers where they are.” He reminded us to be conscious of important issues up front while designing products: services, interfaces, and user experience features that could help address customer concerns during an uncertain time for many. It is critical that the services we build to address operational problems also consider the experiences we’re creating for people.

Are you ready to construct your Digital Transformation Roadmap or need to know a little more about how we can help? Get connected with an Idexcel expert to schedule your assessment today!

AWS re:Invent Recap: Infrastructure Keynote

Here are the key discussion topics from the AWS re:Invent 2020 Infrastructure Keynote from Peter DeSantis – Senior VP of Global Infrastructure and Customer Support, with a focus on efforts to improve resiliency, availability, and sustainability for its customers:

  1. AWS Nitro System: Enables faster innovation and enhanced security. The most recent generation of instances built on the AWS Nitro System is the C6gn EC2 instance family.
  2. AWS Inferentia: AWS Inferentia is Amazon’s first machine learning chip designed to accelerate deep learning workloads and provide high-performance inference in the cloud.
  3. AWS Graviton2: Graviton2 processors are the most power-efficient processors AWS provides, achieving 2-3.5 times better performance per watt than any other processor in AWS’s portfolio, and they are suitable for a variety of workloads (see the sketch after this list for launching a Graviton2-based instance).
  4. AWS Commitment to Renewable Energy: A customer moving from an enterprise data center to an AWS data center can reduce their carbon footprint. Changes to UPS and data center power design cut energy lost in power conversion by about 35%, and 6,500 megawatts of renewable energy are utilized across the world.
  5. New Infrastructure Design: AWS is implementing a new infrastructure design that replaces single, large UPSs with small battery packs and custom-built power supplies placed directly into every rack. These micro UPSs are intended to reduce complexity and maintenance and improve availability.
  6. Regions and Availability Zones: AWS continues to invest in more regions and Availability Zones (AZs) around the globe. Italy and South Africa launched earlier in 2020, and regions in Indonesia, Japan, Spain, India, Switzerland, and Australia (Melbourne) are in the works. Explore AWS’s interactive global infrastructure site to learn more.
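
As a quick illustration of item 3 above, launching a Graviton2-based (arm64) instance with boto3 might look like the sketch below. The AMI ID is a placeholder; you would first look up an arm64 image, such as Amazon Linux 2 for arm64, in your region:

```python
import boto3

# Hypothetical sketch: launch a Graviton2 (arm64) instance.
# The AMI ID is a placeholder for an arm64 image in your region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="c6g.large",          # Graviton2-based instance family
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "graviton2-demo"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```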

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Edge Manager

What happened?

Amazon SageMaker Edge Manager was announced during the re:Invent Machine Learning Keynote. This new feature gives developers model management tools to optimize, secure, monitor, and maintain machine learning models on fleets of edge devices such as smart cameras, robots, personal computers, and mobile devices.

Why is it important?

  • Device Compatibility: It enables developers to train ML models once in the cloud and run them across fleets of devices at the edge.
  • Optimize ML Models: Amazon SageMaker Edge Manager provides a software agent that runs models automatically optimized with SageMaker Neo, eliminating the need to install the Neo runtime on devices to leverage model optimizations.
  • Reduce Downtime and Service Disruption Costs: SageMaker Edge Manager manages models separately from the rest of the application, so the model and the application can be updated independently, reducing downtime and service disruptions (a minimal fleet-registration sketch follows this list).
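
As a minimal, hedged sketch of the fleet-management side, creating a device fleet and registering an edge device with boto3 might look like this. The fleet name, IAM role, S3 bucket, and device details are placeholders:

```python
import boto3

# Hypothetical sketch: create a device fleet and register one edge device.
# The fleet name, role ARN, S3 bucket, and device details are placeholders.
sm = boto3.client("sagemaker")

sm.create_device_fleet(
    DeviceFleetName="smart-camera-fleet",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerEdgeRole",
    OutputConfig={"S3OutputLocation": "s3://my-edge-bucket/fleet-output/"},
)

sm.register_devices(
    DeviceFleetName="smart-camera-fleet",
    Devices=[{
        "DeviceName": "camera-001",
        "IotThingName": "camera-001-thing",   # optional AWS IoT thing name
        "Description": "Lobby smart camera",
    }],
)
```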

Why are we excited?

The new SageMaker Edge Manager helps data scientists remotely optimize, monitor, and improve the ML models on edge devices across the world, saving developer time and customer costs. It reduces the time and effort required to get models to production, while continuously monitoring and improving model quality.

Availability

It’s available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Tokyo) regions.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Amazon SageMaker Debugger

What happened?

New deep profiling capabilities for Amazon SageMaker Debugger, a tool that monitors machine learning training performance to help developers train models faster, were announced during the re:Invent 2020 Machine Learning keynote. The tool tracks system resource utilization and creates alerts for problems during training. With these new capabilities, developers get automatic recommendations for resource allocation on training jobs, resulting in an optimized training process that reduces time and costs.

Why is it important?

  • Monitor Automatically: Amazon SageMaker Debugger enables developers to train their models faster through automatic monitoring of system resource utilization and alerts for training bottlenecks or bugs.
  • ID & Resolve Issues Faster: Amazon SageMaker Debugger provides quick issue resolution and bug fix actions with automatic alerts and resource allocation recommendations.
  • Customizability: With SageMaker Debugger, custom conditions can also be created to test for specific behavior in your training jobs (see the sketch after this list).
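
To illustrate, here is a hedged sketch using the SageMaker Python SDK: a built-in rule watches for a non-decreasing loss while the profiler samples system metrics. The entry point, role ARN, instance type, and framework versions are placeholders for your own training setup:

```python
from sagemaker.debugger import Rule, rule_configs, ProfilerConfig, FrameworkProfile
from sagemaker.pytorch import PyTorch

# Hypothetical sketch: attach a built-in Debugger rule and a profiler
# configuration to a training job. Script, role, instance type, and
# framework versions below are placeholders.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="1.8.0",
    py_version="py36",
    # Built-in rule: raise an alert if the training loss stops decreasing.
    rules=[Rule.sagemaker(rule_configs.loss_not_decreasing())],
    # Deep profiling: sample CPU/GPU/network utilization every 500 ms.
    profiler_config=ProfilerConfig(
        system_monitor_interval_millis=500,
        framework_profile_params=FrameworkProfile(),
    ),
)

estimator.fit("s3://my-bucket/training-data/")
```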

Why are we excited?

Amazon SageMaker Debugger allows data scientists to iterate on an ML model for better accuracy and helps detect model training inconsistencies in real time. With a little analysis and experience, this makes it much easier to pinpoint the actual problem in an ML model. It also integrates with AWS Lambda, which can automatically stop a training job when a non-converging action is detected, resulting in lower costs and faster training times.

Availability

Amazon SageMaker Debugger is now generally available in all AWS regions in the Americas and Europe, and some regions in Asia Pacific with additional regions coming soon.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

AWS re:Invent Recap: Machine Learning Keynote

Here are the key announcements from the re:Invent 2020 Machine Learning Keynote:

  1. Faster Distributed Training on Amazon SageMaker is the quickest and most efficient approach for training large deep learning models on large datasets. Through model parallelism and data parallelism, SageMaker distributed training automatically splits deep learning models and training datasets across AWS GPU instances, reducing training time significantly (a brief sketch follows this list).
  2. Amazon SageMaker Clarify detects potential bias during data preparation, model training, and model deployment, giving development teams greater visibility into their training data and models so they can resolve potential bias and explain predictions in greater detail.
  3. Deep Profiling for Amazon SageMaker Debugger gives developers the capability to train models at a quicker pace by monitoring system resource utilization automatically and providing notifications of training bottlenecks.
  4. Amazon SageMaker Edge Manager: provides developers the tools to optimize, secure, monitor, and maintain ML models on edge devices like smart cameras, robots, personal computers, and mobile devices.
  5. Amazon Redshift ML empowers data analysts, developers, and data scientists to create, train, and deploy machine learning (ML) models using SQL commands. Teams can now build and train machine learning models from Amazon Redshift datasets and apply them to their use cases.
  6. Amazon Neptune ML leverages Graph Neural Networks (GNNs) to make easy, fast, and more accurate predictions using graph data. The accuracy of most graph predictions improves by over 50% with Neptune ML when compared to non-graph prediction methods. The selection and training of the best ML model for graph data are automated, and users can run ML on their graphs directly using Neptune APIs and queries. ML teams can now create, train, and apply ML on Neptune data, reducing development time from weeks down to a matter of hours.
  7. Amazon Lookout for Metrics applies ML to detect anomalies in your metrics, enabling proactive monitoring of business health, faster issue diagnosis, and quicker identification of opportunities that can save costs, increase margins, and improve customer experience.
  8. Amazon HealthLake leverages ML models to empower healthcare and life sciences organizations to aggregate various health information from different silos and formats into a centralized AWS data lake to standardize health data.
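
As a sketch of the first item above, enabling SageMaker's data-parallel distributed training comes down to the estimator's distribution setting; the training script, role ARN, and S3 paths below are placeholders, and the data parallel library requires large GPU instances such as ml.p3.16xlarge:

```python
from sagemaker.pytorch import PyTorch

# Hypothetical sketch: enable SageMaker distributed data parallelism.
# The script, role ARN, and S3 paths are placeholders; batches are split
# across GPUs and gradient synchronization is handled by the library.
estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=2,
    instance_type="ml.p3.16xlarge",
    framework_version="1.8.0",
    py_version="py36",
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://my-bucket/training-data/"})
```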

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!