AWS re:Invent 2021: Our Predictions Vs Announcements for ML Services


Based on current trends and advancements in the technology industry, among other factors, we made certain predictions about new services and features likely to be launched at the AWS re:Invent 2021 annual conference. The comparison below recaps each of our predictions alongside the actual announcements made by AWS:

  1. Idexcel's Prediction: Release of new-generation EC2 instances for faster Machine Learning training and inference, offering a better price-performance ratio.
     AWS re:Invent 2021 Announcement: AWS announced new Amazon EC2 instances powered by AWS-designed chips, including:
     (i) Amazon EC2 C7g instances, powered by the new AWS Graviton3 processors, which provide up to 25% better performance for compute-intensive workloads than current-generation C6g instances powered by AWS Graviton2 processors.
     (ii) Amazon EC2 Trn1 instances, powered by AWS Trainium chips, which provide the best price performance and the fastest time to train most Machine Learning models in Amazon EC2.

  2. Idexcel's Prediction: Amazon Textract will penetrate the market with domain-specific extraction solutions covering specific document types. We may see examples of the specific documents that can be extracted.
     AWS re:Invent 2021 Announcement: Amazon Textract announced specialized support for automated processing of identity documents. Users can now swiftly and accurately extract information from IDs (e.g., U.S. driver's licenses and passports) that come in varying templates and formats.

  3. Idexcel's Prediction: With the recent acquisition of Wickr, improvements in Lex are likely to be out later this year or early next year.
     AWS re:Invent 2021 Announcement: AWS announced the Amazon Lex Automated Chatbot Designer (in preview), a new feature that simplifies chatbot training and design by automating much of the process.

  4. Idexcel's Prediction: A range of automation options within AWS services is likely to be announced.
     AWS re:Invent 2021 Announcement: Amazon SageMaker Inference Recommender, a new SageMaker capability introduced at AWS re:Invent 2021, lets users choose the best available compute instance and configuration to deploy machine learning models for optimal inference performance and cost. It also reduces the time needed to get Machine Learning (ML) models into production by automating performance benchmarking and load testing across SageMaker ML instances. Users can now use Inference Recommender to deploy their model to a real-time inference endpoint that delivers the best performance at the lowest cost.
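To make the Inference Recommender workflow concrete, here is a minimal sketch of the request payload for the underlying `create_inference_recommendations_job` API, built with plain Python so it can be inspected before submission. The job name, role ARN, and model package ARN below are illustrative placeholders, not real resources:

```python
# Sketch: preparing a SageMaker Inference Recommender job request.
# All ARNs and names are hypothetical placeholders.

def build_recommender_request(job_name, role_arn, model_package_arn):
    """Build the payload for sagemaker.create_inference_recommendations_job."""
    return {
        "JobName": job_name,
        "JobType": "Default",  # "Default" requests instance recommendations
        "RoleArn": role_arn,
        "InputConfig": {
            "ModelPackageVersionArn": model_package_arn,
        },
    }

request = build_recommender_request(
    "demo-recommender-job",
    "arn:aws:iam::111122223333:role/DemoSageMakerRole",
    "arn:aws:sagemaker:us-east-1:111122223333:model-package/demo/1",
)
# With AWS credentials configured, you would submit it with:
# import boto3
# boto3.client("sagemaker").create_inference_recommendations_job(**request)
```

Separating payload construction from the API call makes the benchmarking job easy to review and unit-test before anything is launched in your account.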

Recap of Swami Sivasubramanian’s Keynote Announcements at AWS re:Invent 2021


Amazon Web Services (AWS) announced a host of features and services to make technologies like Machine Learning more effective and economical, along with a new US $10 million scholarship programme for careers in Machine Learning (ML).

During his two-hour keynote on Day 3 of re:Invent 2021, which returned to Las Vegas after a one-year interval due to the pandemic, Swami Sivasubramanian, Vice President of Amazon AI at AWS, revealed new solutions to make Machine Learning more approachable and affordable, along with new training programmes to further democratize the technology and make it simpler to experiment with. AWS also announced several new capabilities for its Machine Learning service, Amazon SageMaker, including a no-code environment for building accurate ML predictions, more precise data labelling using highly skilled annotators, and a universal Amazon SageMaker Studio notebook experience for better collaboration across domains. Below is a summary of Sivasubramanian's biggest announcements:

  • Amazon DevOps Guru for RDS lets you automatically detect, diagnose, and resolve complicated database-related issues (for Amazon Aurora databases) within minutes. DevOps Guru for RDS can help rectify a wide range of issues, such as overutilization of host resources, database bottlenecks, or misbehaving SQL queries. Whenever an issue is detected, users can view it either in the DevOps Guru console or via notifications from Amazon EventBridge or Amazon Simple Notification Service (SNS).
  • AWS Database Migration Service Fleet Advisor lets you accelerate database migration with automated inventory and migration recommendations. This tool is specifically designed to make it easier and quicker to get your data to the cloud and match it with the appropriate database service. DMS Fleet Advisor automatically builds an inventory of your on-premises database and analytics servers by streaming data from on-premises environments to Amazon S3.
  • The new SageMaker Studio Notebook experience allows users to access a broad range of data sources and conduct data engineering, analytics, and Machine Learning workflows in one notebook. Amazon SageMaker Studio now integrates directly with Amazon EMR, the company's Hadoop-based service that grants access to frameworks such as Spark, Presto, MapReduce, and Hive. SageMaker Studio users can build, terminate, manage, discover, and connect to EMR clusters directly from within their SageMaker Studio environment, streamlining workflows for data scientists.
  • Amazon SageMaker Studio Lab is a free service for students, other learners, and developers to experiment with and learn Machine Learning. It provides the JupyterLab IDE, 15 GB of storage, and compute for training models on GPUs. After training a model, the user can deploy it on AWS infrastructure with a single click using SageMaker capabilities.
  • Amazon SageMaker Ground Truth Plus allows users to produce high-quality training datasets quickly, with no need to write a single line of code. This is essentially a professional-services version of the existing SageMaker Ground Truth. The new service lets users work with a pool of expert data labelers curated by AWS and have the data labeling process integrated directly with their SageMaker environment. It can also bring down data labeling costs by up to 40%.
  • Amazon SageMaker Platform is getting 3 new innovations:
    • SageMaker Training Compiler is a new feature that can accelerate the training of deep learning models by up to 50% through more efficient use of GPU instances.
    • SageMaker Inference Recommender helps users to choose the best available compute instance and configuration to deploy Machine Learning models for ideal inference performance and cost. This new feature can reduce the time to deploy from weeks to hours.
    • SageMaker Serverless Inference is a new inference option that empowers users to easily deploy machine learning models for inference without having to configure or manage the underlying infrastructure. This new feature can lower the cost of ownership with pay-per-use pricing.
  • Amazon Kendra Experience Builder allows you to deploy a fully functional and customizable search experience with Amazon Kendra in just a few clicks, with no coding or Machine Learning experience required. Experience Builder provides an intuitive visual workflow to swiftly build, customize, and launch your Kendra-powered search application securely in the cloud. You can begin with the ready-made search experience template in the builder and tailor it by simply dragging and dropping the components you require, such as filters or sorting.
  • Amazon Lex Automated Chatbot Designer is a new capability that reduces the time and effort it takes customers and partners to design a chatbot from weeks to hours by automating the process using existing conversation transcripts. It offers an easy and intuitive way of designing chatbots, employing advanced natural language understanding driven by deep learning. Amazon Lex enables you to build, test, and deploy chatbots and virtual assistants for contact center services (e.g., Amazon Connect), websites, and messaging platforms (e.g., Facebook Messenger). The automated chatbot designer extends Amazon Lex to the design phase: it uses Machine Learning to produce an initial bot design that you can then refine to launch conversational experiences faster.
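The SageMaker Serverless Inference option above can be sketched as a request payload for the `create_endpoint_config` API, where a `ServerlessConfig` replaces the usual instance type. The config name, model name, memory size, and concurrency below are illustrative assumptions:

```python
# Sketch: endpoint configuration for SageMaker Serverless Inference.
# Resource names and sizing values are hypothetical placeholders.

def build_serverless_endpoint_config(config_name, model_name):
    """Payload for sagemaker.create_endpoint_config using a
    ServerlessConfig instead of an instance-based variant."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "ServerlessConfig": {
                    "MemorySizeInMB": 2048,  # memory allocated per invocation
                    "MaxConcurrency": 5,     # max concurrent invocations
                },
            }
        ],
    }

config = build_serverless_endpoint_config("demo-serverless-config", "demo-model")
# With credentials configured:
# import boto3
# boto3.client("sagemaker").create_endpoint_config(**config)
```

Because no instance type is specified, AWS manages the underlying infrastructure and billing follows a pay-per-use model, as described above.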

Schedule a meeting with our AWS Cloud Solution Experts and accelerate your cloud journey with Idexcel.

What To Expect From AWS re:Invent 2021


AWS re:Invent 2021 is an Amazon Web Services annual technology conference scheduled for November 29 through December 3 in Las Vegas, Nevada. This is the largest annual Amazon Web Services (AWS) conference for partners and customers. The event is scheduled to include 1,500 technical sessions, a partner expo, training and certification opportunities, and multiple keynote announcements. AWS re:Invent 2021 is dedicated to cloud strategies, IT architecture and infrastructure, operations, security, and developer productivity, with a focus on AWS products and features. Whether you are an engineer, a business leader, or just embarking on your cloud journey, this is an opportunity to discover everything the event has to offer. As an AWS Advanced Consulting Partner and Managed Service Provider (MSP), Idexcel is excited and looking forward to this event.

Keynote Sessions

Keynote Session 1: Global Partner Summit – November 29 (3:00 PM – 5:00 PM PST)

Doug Yeum – Head of AWS Partner Organization: Yeum is currently AWS' head of worldwide channels and alliances. His responsibilities include engaging with C-level executives about their digital transformation and innovation strategies, developing business through channels and alliances, and managing business operations.

Sandy Carter – Vice President for Worldwide Public Sector Partners and Programs at AWS: She is responsible for driving next-generation partnering and evolving partner models to intensify partner innovation and AWS Cloud adoption.

Stephen Orban – General Manager of AWS Marketplace and Control Services: Orban formerly served for three years as general manager of AWS Data Exchange, a service launched in 2019 that makes it easy for AWS customers to find, subscribe to, and use third-party data in the cloud through Marketplace.

Keynote Session 2: AWS Customers, Products, & Services – November 30 (8:30 AM – 10:30 AM PST)

Adam Selipsky – AWS CEO: Adam will take the stage to share his insights and the latest news about AWS customers, products, and services.

Keynote Session 3: Databases, Analytics, & Machine Learning – December 1 (8:30 AM – 10:30 AM PST)

Swami Sivasubramanian – Vice President at AWS: Sivasubramanian heads all Amazon AI and Machine Learning services and leads on all aspects of Machine Learning, from ML frameworks and infrastructure to Amazon SageMaker and AI services.

Keynote Session 4: AWS & Cloud Infrastructure – December 1 (3:00 PM – 4:30 PM PST)

Peter DeSantis – Senior Vice President of AWS Global Infrastructure and Customer Support: Leading the AWS teams responsible for designing the data centers, servers, and network that underpin AWS services and for deploying and operating this infrastructure worldwide.

Keynote Session 5: Future of Software Development – December 2 (8:30 AM – 10:30 AM PST)

Dr. Werner Vogels – Chief Technology Officer at Amazon.com: Dr. Werner is responsible for driving the company’s customer-centric technology vision and is one of the forces behind Amazon’s approach to cloud computing.

Offerings of re:Invent 2021

Here is a lineup of some of the expanded offerings of the event that you may want to check out:

Presentations

  • Keynotes from AWS on cutting-edge technology and industry trends
  • Leadership Sessions on a range of hot topics
  • Breakout Groups and Q&A Sessions
  • Expert Learning Lounges
  • Builders’ Fair with Presentations and Question & Answer Sessions
  • AWS Product Announcements

AWS Training & Certification

  • Exam Readiness Sessions
  • Remote Certification Examinations
  • Hands-on Labs & Bootcamps

Hands-on Learning & Fun

  • AWS DeepRacer Competitions
  • Jams, Game Days & Hackathon
  • Virtual Play Events

Performances & Entertainment

  • Cooking Demonstrations
  • re:Play – a Live Concert by a National Act

Our AWS re:Invent 2021 Predictions for ML Services

  1. Release of new-generation EC2 instances for faster Machine Learning training and inference, which will offer a better price-performance ratio.
  2. Amazon Textract will soon penetrate the market by providing domain-specific extraction solutions covering specific document types. We may see examples of the specific documents that can be extracted.
  3. As we navigate the pandemic, we know that 2020 was a big year for medical image processing, particularly for chest X-rays. In this regard, we expect Amazon to announce a service similar to "Lookout for Vision" for anomaly detection, specifically focused on radiology images.
  4. An enhancement to Amazon HealthLake is expected, augmenting patient tracking and management.
  5. With the recent acquisition of Wickr, improvements in Lex are likely to be out later this year or early next year.
  6. A range of automation options within AWS services is likely to be announced.
  7. Furthermore, we look forward to announcements of several incremental improvements to the Amazon Rekognition service and additional flexibility in Machine Learning training models for images and videos.

Schedule a meeting with our AWS Cloud Solution Experts and accelerate your cloud journey with Idexcel.

Mass Migration to Scale up your Cloud Journey


Growth in digital transformation brings with it a growing need for agility and innovation in the technology sector. As a result, it was projected that by 2020, 83 percent of corporate workloads would run in the cloud. If you haven't regarded cloud computing as a viable option for managing your IT workloads, it's time to rethink your strategy.

In light of current IT trends and corporate expectations, domain experts in the industry believe that cloud migration is the most effective method to keep up with digital transformation and fulfill ever-changing customer demands as quickly and efficiently as possible.

Idexcel is an AWS Advanced Consulting Partner with more than 100 AWS-certified specialists on staff. These specialists assist with AWS migration, helping you save a significant amount of time and money as you transfer your workloads to the AWS cloud.

Three Phases of Cloud Migration Process

This process is designed to help organizations migrate tens, hundreds, or thousands of applications. It is iterative: as you migrate more applications, you gain repeatability and predictability in your AWS migrations. Let us deep-dive into each phase of the migration process:

Assess

At the start of the migration journey, the first thing the organization needs is to identify all the assets in the data center that can be migrated to the cloud. The organization also needs to understand its current readiness to operate in the cloud. Most importantly, you need to identify the desired business outcomes and develop the business case for migration to the cloud.

AWS has tools to assess your on-premises resources and build a right-sized, cost-optimized projection for running applications on AWS. To get started, the Migration Evaluator tool provides a total cost of ownership (TCO) projection for running workloads on AWS based on your actual resource utilization. This helps organizations optimize compute, storage, database, networking, and software licenses on AWS, and also helps estimate the cost of migration.

This stage usually involves deploying monitoring tools such as Migration Evaluator, Migration Hub, or third-party tools like RISC Networks, which help gather information about application dependencies.

Once the business use case is created, migration and modernization strategies are designed based on the AWS Well-Architected Framework.

Mobilize

As part of the Mobilize phase, we create a detailed migration plan and address gaps uncovered in the Assess phase, with a focus on building your baseline environment (landing zone) and driving operational readiness.

A good migration plan starts with a deeper understanding of the interdependencies between applications and evaluates migration strategies to drive a successful migration. Based on the data collected by the tools deployed in the Assess phase, we place each application on one of the following migration paths/strategies:

  1. Re-Host: the most straightforward cloud migration strategy. You simply lift servers, applications, virtual machines, and operating systems from your current data centers to the cloud. Once done, dependencies need to be rewired to the new hosts in the cloud. This is typically done with a tool like CloudEndure or SMS (Server Migration Service). Amongst all the migration strategies, this is the fastest. The big drawback is that cloud-native features such as CI/CD automation, self-healing, automated recovery, and monitoring are not utilized.
  2. Re-Platform: also called "lift, tinker, and shift." Here you make a few cloud and other optimizations to achieve a tangible benefit, but you aren't otherwise changing the core architecture of the application. Example: moving from a proprietary web server to an open-source Tomcat server post-migration.
  3. Re-Purchase: moving to a different product. Example: moving a CRM application from one vendor to another, depending on business needs.
  4. Re-Factor/Re-Architect: re-imagining how the application is architected and developed, typically using cloud-native features. This is usually driven by a strong business need for features, scale, or performance that would otherwise be difficult to achieve in the application's existing environment. It typically entails migrating from a monolithic architecture to a service-oriented/serverless architecture to boost agility or improve business continuity.
  5. Re-Tire: decommission. Once all applications are discovered, there will be a few for which nobody takes responsibility. Such servers are stopped to check whether doing so impacts any team; if no alarms are raised, they can be forwarded to the decommissioning process.
  6. Re-Tain: do nothing. Some applications are tied to on-premises hardware, like a fax machine; these servers need to remain hosted in the data center.

Migrate and Modernize

During the migrate and modernize phase, each application is planned, migrated, and validated. Based on the inputs from the two phases above, the migration is meticulously planned and executed. This phase can be subdivided into three sub-phases: pre-migration, migration, and post-migration.

Pre-Migration:

  1. Setting up a meeting with stakeholders and assigning responsibilities to the people concerned
  2. Starting replication or launching new instances, depending on the migration path
  3. Opening connectivity between all the dependencies
  4. Creating the change request. The change request needs to be approved, and all pre-work, such as informing business users about the downtime, needs to be addressed. The changes are tracked as part of the change management process

Migration:

  1. Opening the cutover window
  2. Stopping the on-premises server
  3. Launching the cloud instance
  4. Changing the hostname
  5. Validating the application

Post-Migration:

  1. Enabling termination protection
  2. Setting up backup and patching policies
  3. Starting the decommissioning process for the on-premise server
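The first post-migration step can be sketched as a boto3 request payload against the EC2 `modify_instance_attribute` API. The instance ID below is an illustrative placeholder for the newly migrated server:

```python
# Sketch: enabling termination protection on a migrated instance
# (post-migration step 1). The instance ID is a hypothetical placeholder.

def build_termination_protection_request(instance_id):
    """Payload for ec2.modify_instance_attribute that prevents the
    instance from being terminated via the API or console."""
    return {
        "InstanceId": instance_id,
        "DisableApiTermination": {"Value": True},
    }

req = build_termination_protection_request("i-0123456789abcdef0")
# With credentials configured:
# import boto3
# boto3.client("ec2").modify_instance_attribute(**req)
```

Keeping the payload separate from the API call lets the migration runbook review each attribute change before it is applied.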

Key Challenges to Scale Migrations

Every transformation task comes with its own set of challenges. Not all applications face them, but on average 20-25% of applications do.

  1. The CMDB is not very accurate. Even when we find all the data in the CMDB tool, we often see that it is inaccurate and that the underlying assets of these applications have changed. In that case, we need to redo the planning based on the current, correct assets.
  2. The organization doesn't have a CMDB tool. The application team is then expected to fill in the discovery data, and this often takes a long time because they have to source the data from multiple places.
  3. Production systems have limited windows in which migration can occur. If the migration is not executed within that specific time frame, it can be delayed by months.
  4. Migration roadmaps run parallel to the product feature roadmap. As a result, migration gets lower priority and gets pushed back.
  5. Limited technical depth in the application team. Often the applications are run by support teams, and the people who developed them are long gone, making such applications challenging to migrate and validate.

Migration Acceleration with Idexcel

Based on the above challenges encountered during migrations, we have developed processes and tools to migrate applications effectively.

  1. Inventa Discovery Tool: built to speed up the discovery process. Many fields are auto-populated so that the application team can focus on the data that matters most for migration. It gives full visibility into the applications being discovered and provides a self-service dashboard where stakeholders can view the status of their application discovery, along with information about all the applications running in the organization. With Inventa, we can see how many applications are in progress and which are blocked, and drill down into each application to view its blockers, why it is blocked, when the block is likely to be resolved, and what action is needed to unblock it.
  2. 2/2 Migration Execution Model: migrations are typically done when the load on the servers is at a minimum, usually on weekends or Friday evenings, and there is often more than one migration happening at the same time. If both migrations are handled by the same migration engineer, one of them may need to be pushed back.
  3. AWS Migration Hub: one of the key reasons migrations succeed is identifying application dependencies, and the application team is often not fully aware of all of them. We leverage AWS Migration Hub to identify these dependencies and proactively address them.
  4. Idexcel Configuration Management Tool: built on top of AWS Migration Factory and CloudEndure, this tool automates the migration execution process and is designed to speed up re-host migrations. Tasks like installing the CloudEndure agent on the source system or removing software after server cutover can be done with a single click, which brings down application downtime considerably. It provides a dashboard for all applications in scope for migration cutover: stakeholders can view how many migrations are completed, how many are in progress, and a wave-wise breakdown of all applications being migrated.
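The Migration Hub dependency lookup mentioned above can be sketched against the Application Discovery Service API, whose `list_server_neighbors` call returns the servers a discovered server communicates with. The configuration ID below is an illustrative placeholder, and the exact parameter set should be checked against the current API reference:

```python
# Sketch: querying dependencies of a discovered server via the
# Application Discovery Service (used by AWS Migration Hub).
# The configuration ID is a hypothetical placeholder.

def build_neighbor_query(configuration_id, with_ports=True):
    """Payload for discovery.list_server_neighbors, which lists the
    servers that communicate with the given discovered server."""
    return {
        "configurationId": configuration_id,
        "portInformationNeeded": with_ports,  # include ports per connection
    }

query = build_neighbor_query("d-server-0123456789abcdef")
# With credentials configured:
# import boto3
# boto3.client("discovery").list_server_neighbors(**query)
```

The returned neighbor list is what lets a migration team group interdependent servers into the same cutover wave.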

Schedule your free Migration Readiness Assessment Program.

How is Cloud Computing Transforming the Financial Services Industry?


An increasing number of financial institutions are transforming their systems and exploring how cloud computing can help. The COVID-19 pandemic has further inspired financial and banking leaders to incorporate cloud and security in financial services. According to The Next Web, cloud services have served as a saving grace throughout the unprecedentedly challenging time, making it easier for businesses to allow employees to work remotely, efficiently, effectively, and securely. While digital transformation without the cloud is not impossible, its scope will be very limited.

If you are curious about the transformational possibilities of cloud computing in financial services and need more information to determine whether it’s the best path for your bank or other financial institution, keep reading.

Improving Operational Efficiencies in Financial Institutions

Top cloud service leaders work to design and build a system that focuses on the big picture for your financial institution (FI), including the user experience that drives satisfaction and success for all.

Further, the flexibility and scalability that cloud adoption offers allow your financial business to quickly adjust to operational changes, empowering you to handle greater volume without worrying about adding more storage capabilities or staff to manage it all.

Reducing Costs

When you move your data and key business applications to the cloud, you don’t have to store them locally, continually upgrading your storage needs with quick fixes. Even better, you no longer need to worry about your physical infrastructure since your cloud service provider has it covered. With the Pay-as-you-go pricing models provided by the cloud providers, you’ll enjoy huge savings associated with no longer worrying about buying, maintaining, upgrading, and housing the necessary hardware.

Increasing Data Storage Options In the Cloud

The need for more data increases daily, making it challenging for on-site IT teams to keep up with the demand. With the right cloud service provider, you have access to virtually unlimited cloud storage without worry.

Enhancing Availability and Resiliency Through Redundancy

Redundancy, creating backups and fail-safes, can help make applications resilient and more readily available. Many cloud service companies are building various redundancies to guarantee that data is always available and secure. A few redundancies include:

  • Availability Sets, which protect against localized disk or network hardware failures. Virtual Machines (VMs) are spread across different fault domains, each of which defines a group of hardware that shares a common power source.
  • Availability Zones, which serve as isolated data centers within a cloud service system’s regions.

AWS has developed several strategies to improve availability, resilience, and sustainability through faster innovation and enhanced security, accelerating deep learning workloads, investing in faster processors, and much more.

Freeing IT Teams to Manage Core Organizational Projects

Investing in cloud services means that your IT team can focus on strategic planning, core organizational objectives, and on-site employee support and training. Engineering-focused IT team members can take on projects like app development to improve customer experience.

Boosting Customer-Banking Relations

Pleasing customers is high on your priority list, and cloud computing in banking offers many ways to do that. With cloud services, you can control customer data instantly and better understand their financial habits and account management preferences. Additionally, you can improve the customer experience and offer instant and accurate information, consistent accessibility, and peak performance to keep their visits brief, easy, and satisfying. Customers will enjoy a better experience with fewer disruptions and outages. Even if there is some disruption, cloud computing resources make it easier to get back online quickly, ensuring continuous availability to customers.

Tightening Up Security

Cloud service firms continue to dispel any fears that experts have expressed over cloud security over the years. With the right cloud platform partner, keeping your data in the cloud is safer than keeping it on-site with effective SSL management options, enhanced credentialing, and data encryption.

Many cloud service providers take a proactive approach to tackling DDoS attacks, avoiding data breaches, preventing data loss, strengthening access points, and sending prompt notifications and alerts.

Streamlining Compliance Standards

The digital world is brimming with risks, making compliance standards essential for everyone, particularly financial institutions. Once you move into the cloud, you’ll find that your provider must remain compliant with all relevant regulations.

The financial industry is brimming with regulations to protect multiple parties, including customers, investors, and financial institutions, and it’s imperative that all parties dealing with a financial institution’s data, including a cloud hosting service, must remain compliant.

Cloud computing helps to achieve compliance by taking the necessary steps to secure data and mitigate risk. Further, cloud providers also offer several dynamic tools and features across network security, access control, data encryption, and configuration management. All these tools and features help customers meet their security objectives and remain compliant with their information security standards.

Are You Ready for Cloud Computing Services to Transform Your Financial Industry Enterprise?

If you find any of these transformative possibilities exciting, it might be time to consider moving to the cloud. Our Idexcel team offers an array of cloud services that deliver a bounty of benefits. With more than 20 years of experience fine-tuning various digital technologies, we can help you take your financial business in any direction you choose.

Connect with our team to schedule your free assessment.

Why Are Cloud-First AI Solutions Important?

Nearly everywhere you look in today’s global marketplace, you’ll spot an ever-increasing use of artificial intelligence (AI) solutions combined with cloud computing technologies. You might even personally use AI in cloud solutions in the form of digital assistants, such as Apple’s Siri, Amazon’s Alexa, and Google Home. But how far down the AI solutions rabbit hole are you ready to take your business?

If you are still on the fence about adopting AI in the cloud for your enterprise, you might benefit from exploring why cloud-first AI solutions are vital to your business’s growth now and in the future.

Keep reading to learn the most crucial benefits of investing in a cloud AI platform.

Transformative IT Infrastructure That Spurs Growth and Competition

Today’s top AI cloud platforms, like AWS, are reimagining and transforming traditional IT infrastructure to help you keep up with the competition within your industry. It comes down to your potentially falling behind the competition if you don’t adopt AI in the cloud, making it a little like high-tech peer pressure thanks to the increasing demand for AI-optimized cloud application infrastructure. Basically, everyone is doing it, so it’s best to stay ahead of the curve to minimize the need for a panicked adoption later.

The top vendors make it easy for you by introducing specialized IT platforms that feature pre-designed storage and computing resources combinations to streamline the learning curve for you and your team.


Enhanced Data Access and Data Analysis

AI and data go hand-in-hand, and to a degree, data feeds AI, allowing it to learn and become smarter. Essentially, your computing system learns more without explicit or manual programming. Add to that the cloud's nearly unlimited data storage capacity, removing delays and other obstacles, and your AI is free to gather, analyze, and interpret data. With massive volumes of data and unlimited resources to access it, your AI can make predictions about matters like risks and troubleshoot those issues before they boil to the surface as bona fide problems. Further, training machine learning models is much easier in the cloud. With the right solution, machine learning options don't require in-depth AI knowledge, extensive familiarity with machine learning theory, or a team of data scientists.

Reduced Costs

Think about all the time and resources your business has invested in infrastructure upgrades and adding on-site storage, then imagine a business model that removes those concerns. The top cloud AI platforms can do that for your organization and much more. With a cloud AI solution, you can purchase only the amount of storage you need for the data you currently possess and what you anticipate ahead, knowing you can scale to your enterprise’s needs. It also frees your IT team to manage on-site matters, such as purchasing, installing, updating, and maintaining employees’ hardware devices.

AI cloud platforms also reduce or eliminate the need for your executive and IT teams to perform continuous tech research in a field where everyone struggles to keep up with the latest trends and innovations. Your AI cloud service provider stays up-to-date on these concerns as a matter of course, therefore able to pass the benefits of their work along to your business, further allowing your team to attend to core business functions.

Are You Ready to Adopt Cloud-First Solutions for Your Enterprise?

Adopting a cloud-first solution puts you on track for keeping up with the competition at a minimum, and along with your own strategies, it will help you move far ahead.

Our team at Idexcel can help you determine and launch the best strategy for seamless AI cloud adoption. For more than 21 years, we have delivered professional services and technology solutions with specializations in cloud services, cloud-native services, data platforms, and intelligence to satisfied, loyal clients. Get in touch with our team to schedule your free assessment.

AWS re:Invent Recap: Amazon SageMaker Pipelines

What happened?

The new service, Amazon SageMaker Pipelines, has been launched to provide continuous integration and delivery pipelines that automate steps of ML (Machine Learning) workflows. It’s the first CI/CD service for ML to build, store, and track automated workflows and also create an audit trail for training data and modeling configurations.

Why is it important?

  • Ease of Use: Built-in ML workflow templates can be used to build, test, and deploy ML models quickly.
  • Compliance: Amazon SageMaker Pipelines logs can be saved as audit trails, making it possible to recreate models for similar future business cases and support compliance requirements.
  • Better Capabilities: This service brings CI/CD practices to ML, including separate development and production environments, version control, on-demand testing, and end-to-end automation.
  • Automation: As the first purpose-built CI/CD service for ML, it automates data loading, transformation, training, tuning, and deployment workflow steps, significantly increasing productivity.
  • Scalability: With the ability to create, automate, and manage end-to-end ML workflows at scale, there’s peace of mind knowing workflows are stored and can be referred back to for audit purposes, compliance requirements, and future solution builds.

Why We’re Excited

Amazon SageMaker Pipelines offers a more efficient and productive way to scale by reusing the workflow steps created and stored in a central repository. With built-in templates to deploy, train, and test models, our ML teams can quickly leverage CI/CD in our ML environments and easily incorporate models we’ve already created. With the SageMaker Pipelines model registry, we can track these model versions in one central location, giving us visibility and up-to-date records of the best solution options to meet client deployment needs.

If you’re looking to explore these services further and need some guidance, let us know and we’ll connect you to an Idexcel expert!

The Best of Both: Serverless and Containers with AWS Fargate and Amazon EKS

Co-Authored by: Pradeepta Sahu, DevOps Lead & Sidharth Parida, DevOps Engineer

When enterprises require more control over the components of their applications, they move away from managed infrastructure (such as EC2-based instances) and migrate to automated Container Services (CaaS). Through this migration to CaaS, companies gain flexibility and agility in DevOps because their container workloads are not tied to a specific machine. This approach uses AWS Cloud resources like AWS Fargate for Amazon EKS to overcome the disadvantages of OS virtualization (i.e., running multiple OSs on a physical server) by introducing containers that give teams more control over the software delivery model.

Our Idexcel DevOps team has created a strategic solution using AWS Fargate for Amazon EKS that reduces development costs on new projects. This managed, microservices-based platform breaks down the burden of managing monolithic applications into more easily managed serverless Kubernetes infrastructure. Why are monolithic applications such a challenge? Their components are tightly coupled and become entangled as an application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability. A single point of failure can therefore shut down the entire production environment until the necessary recovery actions are taken.

Since fully managed monolithic applications leave control with the primary cloud provider, organizations are realizing the critical need for more control of their infrastructure in the cloud. AWS Fargate delivers serverless container capabilities to Amazon EKS, combining the best of both serverless and container benefits. With serverless capabilities, developers don’t need to worry about purchasing, provisioning, and managing backend servers. Serverless architecture is also highly scalable and easy to deploy with plug-and-play features. This integration between Fargate and EKS enables Kubernetes users to transform a standard pod definition into a Fargate deployment. Fargate is a serverless compute engine for containers that removes the need to provision and manage servers. It allocates the right amount of compute needed for optimal performance, eliminating the need to choose instances and automatically scaling the cluster capacity.

This means that EKS can support Fargate to provide serverless compute engines for containers by reducing provisioning, configuring, or scaling virtual machine groups to run containers. EKS does this by facilitating the existing nodes (managed nodes) to communicate with Fargate pods in an existing cluster that already has worker nodes associated with it.

Major Advantages of Fargate Managed Nodes

Faster DevOps Cycle = Faster time to market: By decoupling workloads from specific machines and leveraging cloud resources, DevOps teams gain the deployment agility and flexibility to launch solutions at a quicker pace.

Increased Security: Fargate and EKS are both AWS Managed Services that provide serverless and Kubernetes configuration management, safely and securely within the AWS ecosystem. 

Combines the Best of Both Serverless & Containers:  Fargate provides serverless computing with Containers. This combination of technologies enables developers to build applications with less costly overhead and greater flexibility than applications hosted on traditional servers or virtual machines.

Enhanced Flexibility and Scalability: Any Kubernetes microservices application can be migrated to EKS easily, with virtually unlimited serverless scalability.

Reduced Costs: With containerization, overhead costs are reduced by eliminating on-premises servers, network equipment, server maintenance, and patch/cluster management.

In this next section, we’ll illustrate how to control the resource configuration in Fargate Nodes in Amazon EKS and administer the Kubernetes Nodes on AWS Fargate without needing to stand up or maintain a separate Kubernetes control plane.

Kubernetes Cluster Management in Amazon Cloud

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances and provides automated version upgrades/patching for them.

Amazon EKS is also integrated with many other AWS services to provide scalability and security for applications.

Fargate with Managed Nodes
The goal achieved through this solution is a more flexible and controlled Kubernetes infrastructure to make sure pods on the worker nodes can communicate freely with the pods running on Fargate. These Fargate pods are automatically configured to use the cluster security group for the cluster they are associated with. Part of this includes making sure that any existing worker nodes in the cluster can send and receive traffic to and from the cluster security group. Managed node groups are automatically configured to use the cluster security group, alleviating the need to modify or check for compatibility.

Our Solution Architecture

1. Create the Managed Node Cluster

Prerequisites: Install and configure the binaries needed to create and manage an Amazon EKS cluster, as below:

– Latest AWS CLI

– Command-line utility tool eksctl

– Configure Command-line utility kubectl for Amazon EKS

Reference: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

Create the managed node group cluster with the eksctl command-line utility.
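A minimal sketch of such a command is shown below; the cluster name, region, node instance type, and node counts are illustrative and should be replaced with your own values.

```shell
# Create an EKS cluster with a managed node group
# (all names and sizes here are illustrative placeholders)
eksctl create cluster \
  --name my-eks-cluster \
  --region us-east-1 \
  --nodegroup-name managed-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed
```

eksctl provisions the control plane and the managed node group via CloudFormation and updates your local kubeconfig so kubectl can reach the new cluster.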

2. Create a Fargate Pod Execution Role

When the cluster creates pods on AWS Fargate, the pods need to make calls to AWS APIs to perform tasks like pulling container images from the Amazon ECR/DockerHub Registry. The Amazon EKS pod execution role provides the IAM permissions to do these tasks.

Note: To create the cluster, use the eksctl --fargate option to create the necessary profiles and pod execution role for the cluster. If the cluster already has a pod execution role, skip this step and go to Create a Fargate Profile.

With a Fargate profile, a pod execution role is specified to use with the pods. This role is added to the cluster’s Kubernetes Role Based Access Control (RBAC) for authorization. This allows the kubelet that is running on the Fargate infrastructure to register with the Amazon EKS cluster so that it can appear in the cluster as a node.

The RBAC role can be setup by following these steps:

  1. Open the IAM in AWS Console: https://console.aws.amazon.com/iam/
  2. Choose Roles, then Create role.
  3. Choose EKS from the list of services, EKS – Fargate pod for your use case, and then Next: Permissions.
  4. Choose Next: Tags.
  5. (Optional) Add metadata to the role by attaching tags as key–value pairs. Choose Next: Review.
  6. For Role name, enter a unique name for the role, such as AmazonEKSFargatePodExecutionRole, then choose Create role
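The console steps above can also be sketched with the AWS CLI, assuming the same illustrative role name; the trust policy allows the eks-fargate-pods.amazonaws.com service to assume the role:

```shell
# Write the trust policy that lets EKS Fargate pods assume this role
cat > pod-execution-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Create the role (name is illustrative) and attach the AWS managed policy
aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust-policy.json

aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```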

3. Create a Fargate Profile for the Cluster

Before scheduling pods on Fargate in the cluster, a Fargate profile needs to be defined that specifies which pods should use Fargate when they are launched.

Note: If the cluster was created with eksctl using the --fargate option, then a Fargate profile has already been created for the cluster with selectors for all pods in the kube-system and default namespaces. Use the following procedure to create Fargate profiles for any other namespaces you would like to use with Fargate.

Create the Fargate profile with the following eksctl command, replacing the <<variable text>> with your own values. Specify a namespace (the labels option is not required).

$ eksctl create fargateprofile --cluster <<cluster_name>> --name <<fargate_profile_name>> --namespace <<kubernetes_namespace>> --labels key=value

4. Deploy the sample web application to EKS Cluster

To launch an app in the EKS cluster, we need a deployment file and a service file, which we then apply to the EKS cluster.

Example:

$ kubectl apply -f <<deployment_file.yaml>>

$ kubectl apply -f <<deployment-service.yaml>>

The above creates a LoadBalancer to access the public part of the cluster.
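As a concrete sketch, the two files might contain manifests like the following, applied inline via a heredoc; the image, names, and labels are illustrative placeholders, not part of the original solution:

```shell
# Apply an illustrative deployment plus LoadBalancer service inline
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-web
  template:
    metadata:
      labels:
        app: sample-web
    spec:
      containers:
      - name: web
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-web-svc
spec:
  type: LoadBalancer
  selector:
    app: sample-web
  ports:
  - port: 80
    targetPort: 80
EOF
```

The `type: LoadBalancer` service is what triggers creation of the external load balancer referenced above.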

After that, the details of the running service in the cluster can be viewed.

Example:

$ kubectl get svc <<deployment-service>> -o yaml

Observation:

Verify that the hostname/load balancer was created as configured in <<deployment-service.yaml>>.

Now the service can be accessed via the hostname/load balancer: simply type the respective address into a browser to verify that the application is up and running.
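To pull out just the external hostname rather than reading the full YAML, a jsonpath query can be used (the service name here is an illustrative placeholder):

```shell
# Print only the load balancer hostname of the service
kubectl get svc sample-web-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```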

Streaming CloudWatch Logs Data to Amazon Elasticsearch Service

Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Kibana is a popular open-source visualization tool designed to work with Elasticsearch. Amazon ES provides an installation of Kibana with every Amazon ES domain.

Configure ELK with EKS Fargate

– Configure a log Group by following the steps provided by AWS at Log Group

– Subscribe the log group in CloudWatch to stream data into Amazon ES
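The subscription step can be sketched with the AWS CLI; the log group name, filter name, and destination ARN below are illustrative placeholders for a forwarder (such as a Lambda function) that ships log events to the Amazon ES domain:

```shell
# Subscribe a CloudWatch log group to a forwarding destination
# (log group, filter name, and ARN are illustrative placeholders)
aws logs put-subscription-filter \
  --log-group-name /aws/eks/fargate/sample-app \
  --filter-name es-stream \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch
```

An empty filter pattern forwards every log event; a pattern can be supplied to stream only matching events.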

EKS Fargate is a robust platform that provides high availability and controlled maintainability in a secure environment. Because it runs the Kubernetes management infrastructure across multiple AWS Availability Zones, it automatically detects and replaces unhealthy control plane nodes, providing on-demand upgrades and patching with no downtime. This approach enables organizations to reduce time-to-market and remove the cumbersome burdens of patching, scaling, or securing a Kubernetes cluster in the cloud. Looking to explore this solution further or implement EKS Fargate Managed Nodes for your IT ecosystem? Connect with an Idexcel Expert today!