The Best of Both: Serverless and Containers with AWS Fargate and Amazon EKS

Co-Authored by: Pradeepta Sahu, DevOps Lead & Sidharth Parida, DevOps Engineer

When enterprises require more control over their application components, they move away from heavyweight infrastructure management on virtual machines (such as EC2) and migrate to managed Containers-as-a-Service (CaaS) offerings. Through this migration to CaaS, companies gain flexibility and agility in DevOps because their workloads are no longer tied to a specific machine. This approach uses AWS services such as AWS Fargate for Amazon EKS to overcome the disadvantages of full OS virtualization (i.e., running multiple operating systems on one physical server) by introducing containers that give teams more control over the software delivery model.

Our Idexcel DevOps team has created a strategic solution using AWS Fargate for Amazon EKS that reduces development costs on new projects. This managed, microservices-based platform breaks monolithic applications down into more easily managed serverless Kubernetes infrastructure. Why are monolithic applications such a challenge? Their components become tightly coupled and entangled as the application evolves, making it difficult to isolate services for purposes such as independent scaling or code maintainability. As a result, a single point of failure can shut down the entire production system until recovery actions are taken.

Since SaaS-based applications are fully managed monoliths whose control lies with the primary cloud provider, organizations are realizing the critical need to have more control of their infrastructure in the cloud. AWS Fargate delivers serverless container capabilities to Amazon EKS, combining the best of both serverless and container benefits. With serverless capabilities, developers don’t need to worry about purchasing, provisioning, and managing backend servers. Serverless architecture also scales on demand and is easy to deploy. This integration between Fargate and EKS enables Kubernetes users to turn a standard pod definition into a Fargate deployment. Fargate is a serverless compute engine for containers that removes the need to provision and manage servers: it allocates the right amount of compute for optimal performance, eliminating the need to choose instance types and scaling cluster capacity automatically.

This means that EKS can use Fargate as a serverless compute engine for containers, reducing the need to provision, configure, or scale groups of virtual machines to run them. EKS does this by allowing the existing (managed) worker nodes to communicate with Fargate pods in a cluster that already has worker nodes associated with it.

Major Advantages of Fargate Managed Nodes

Faster DevOps Cycle = Faster Time to Market: By removing the tie to specific machines and leveraging cloud resources, DevOps teams gain the deployment agility and flexibility to launch solutions at a quicker pace.

Increased Security: Fargate and EKS are both AWS Managed Services that provide serverless and Kubernetes configuration management, safely and securely within the AWS ecosystem. 

Combines the Best of Both Serverless & Containers:  Fargate provides serverless computing with Containers. This combination of technologies enables developers to build applications with less costly overhead and greater flexibility than applications hosted on traditional servers or virtual machines.

Enhanced Flexibility and Scalability: Any Kubernetes microservices application can be migrated to EKS easily, with serverless capacity that scales on demand.

Reduced Costs: With containerization, overhead costs are reduced through the elimination of on-premises servers, network equipment, server maintenance, and patch/cluster management.

In this next section, we’ll illustrate how to control the resource configuration in Fargate Nodes in Amazon EKS and administer the Kubernetes Nodes on AWS Fargate without needing to stand up or maintain a separate Kubernetes control plane.

Kubernetes Cluster Management in Amazon Cloud

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances and provides automated version upgrades/patching for them.

Amazon EKS is also integrated with many other available AWS services to provide scalability and security for applications, including the following:

Fargate with Managed Nodes
The goal achieved through this solution is a more flexible and controlled Kubernetes infrastructure to make sure pods on the worker nodes can communicate freely with the pods running on Fargate. These Fargate pods are automatically configured to use the cluster security group for the cluster they are associated with. Part of this includes making sure that any existing worker nodes in the cluster can send and receive traffic to and from the cluster security group. Managed node groups are automatically configured to use the cluster security group, alleviating the need to modify or check for compatibility.

Our Solution Architecture

1. Create the Managed Node Cluster

Prerequisites: Install and configure the binaries needed to create and manage an Amazon EKS cluster, as below:

– Latest AWS CLI

– Command-line utility tool eksctl

– Configure Command-line utility kubectl for Amazon EKS

Reference: https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

Create the managed node group cluster with the eksctl command-line utility.
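A minimal sketch of such a command, with placeholder values for the cluster name, region, and node-group sizing (adjust these to your environment):

```shell
# Create an EKS cluster with a managed node group.
# All names, the region, and the sizes below are placeholders.
eksctl create cluster \
  --name my-eks-cluster \
  --region us-east-1 \
  --nodegroup-name managed-ng-1 \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed
```

The `--managed` flag asks eksctl to create an EKS managed node group rather than self-managed nodes, which is what lets the cluster security group be configured automatically as described above.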

2. Create a Fargate Pod Execution Role

When the cluster creates pods on AWS Fargate, those pods need to make calls to AWS APIs to perform tasks like pulling container images from Amazon ECR or the Docker Hub registry. The Amazon EKS pod execution role provides the IAM permissions to do these tasks.

Note: If you create the cluster with the eksctl --fargate option, eksctl creates the necessary profiles and pod execution role for you. If the cluster already has a pod execution role, skip this step and go straight to creating a Fargate profile.

With a Fargate profile, a pod execution role is specified to use with the pods. This role is added to the cluster’s Kubernetes Role Based Access Control (RBAC) for authorization. This allows the kubelet that is running on the Fargate infrastructure to register with the Amazon EKS cluster so that it can appear in the cluster as a node.

The RBAC role can be set up by following these steps:

  1. Open the IAM in AWS Console: https://console.aws.amazon.com/iam/
  2. Choose Roles, then Create role.
  3. Choose EKS from the list of services, EKS – Fargate pod for your use case, and then Next: Permissions.
  4. Choose Next: Tags.
  5. (Optional) Add metadata to the role by attaching tags as key–value pairs. Choose Next: Review.
  6. For Role name, enter a unique name for the role, such as AmazonEKSFargatePodExecutionRole, then choose Create role.
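The same role can alternatively be created from the command line; a hedged sketch (the role name matches the console example above, and the trust-policy file path is a placeholder):

```shell
# Trust policy allowing EKS Fargate pods to assume the role.
cat > fargate-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the pod execution role with that trust policy...
aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://fargate-trust-policy.json

# ...and attach the AWS-managed policy that grants the image-pull permissions.
aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```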

3. Create a Fargate Profile for the Cluster

Before scheduling pods on Fargate in the cluster, a Fargate profile needs to be defined that specifies which pods should use Fargate when they are launched.

Note: If we created the cluster with eksctl using the --fargate option, then a Fargate profile has already been created for the cluster with selectors for all pods in the kube-system and default namespaces. Use the following procedure to create Fargate profiles for any other namespaces you would like to use with Fargate.

Create the Fargate profile with the following eksctl command, replacing the <<variable text>> with your own values. A namespace must be specified; the labels option is not required.

$ eksctl create fargateprofile --cluster <<cluster_name>> --name <<fargate_profile_name>> --namespace <<kubernetes_namespace>> --labels key=value

4. Deploy the sample web application to EKS Cluster

To launch an app in the EKS cluster, we need a deployment file and a service file. We then apply both to the EKS cluster.

Example:

$ kubectl apply -f <<deployment_file.yaml>>

$ kubectl apply -f <<deployment-service.yaml>>

The service file above creates a LoadBalancer to expose the application publicly.

After that, the details of the running service in the cluster can be viewed.

Example:

$ kubectl get svc <<deployment-service>> -o yaml

Observation:

Verify that the hostname/load balancer was created as configured in <<deployment-service.yaml>>.

The service can now be accessed in a browser: simply enter the hostname/load balancer address to verify that the application is up and running.

Streaming CloudWatch Logs Data to Amazon Elasticsearch Service

Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. Kibana is a popular open-source visualization tool designed to work with Elasticsearch, and Amazon ES installs Kibana with every Amazon ES domain.

Configure ELK with EKS Fargate

– Configure a log group by following the steps provided by AWS at Log Group

– Subscribe the log group in CloudWatch to stream data into Amazon ES
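The subscription step can be sketched from the command line; a hedged example in which the log-group name, filter name, and the ARN of the log-forwarding Lambda (the function that ships events to the Amazon ES domain) are all placeholders for your own setup:

```shell
# Subscribe a CloudWatch log group so new events are streamed
# to the forwarding Lambda, which indexes them into Amazon ES.
# Names and the ARN below are placeholders.
aws logs put-subscription-filter \
  --log-group-name /eks/fargate/app-logs \
  --filter-name es-streaming \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch
```

An empty `--filter-pattern` forwards every log event; a non-empty pattern would restrict streaming to matching events only.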

EKS Fargate is a robust platform that provides high availability and controlled maintainability in a secure environment. Because it runs the Kubernetes management infrastructure across multiple AWS Availability Zones, it automatically detects and replaces unhealthy control plane nodes, providing on-demand upgrades and patching with no downtime. This approach enables organizations to reduce time-to-market and remove the cumbersome burdens of patching, scaling, or securing a Kubernetes cluster in the cloud. Looking to explore this solution further or implement EKS Fargate Managed Nodes for your IT ecosystem? Connect with an Idexcel Expert today!

How To Build Business Intelligent Chatbots with Amazon Lex


Enabling Business Intelligence in Chatbots with Amazon Lex

In this fast-paced digital age, organizations need a fast and efficient way of gathering information. Especially in a customer-driven market like fintech, “time is money”. Decisions have to be made accurately and fast; incorrect decisions can lead to severe consequences or lost customers. In several fintech applications, information is made available through reporting solutions, presentations, charts, etc. What customers find difficult is digging out the specific report or data they need through a multitude of mouse-clicks and then spending a lot of time analyzing it. There is a critical need for one central point from which a variety of data can be delivered to the user efficiently and effectively. AWS technology and tools open several avenues to make this possible.

Amazon Lex – Machine Learning As a Service

Amazon Lex is one service that enables state-of-the-art chatbots to be built. It has redefined how the industry perceives chatbot building. Bots themselves have gradually evolved from typical question-answering bots to more complex ones that can perform an array of functions. Amazon Lex offers features that tackle several complexities faced while building the previous generation of chatbots. The intent fulfillment, dialogue flow, and context management features of Amazon Lex help make conversation with a chatbot as human-like as possible.

This blog discusses how information can be retrieved from databases with a simple question asked to Kasper (the name of our bot). The following sections will give the user a clear understanding of how everything is built, networked, and coupled with a custom user interface.

Solution Architecture

Kasper is a chatbot built specifically for a lending platform to retrieve various data points based on specific inquiries. Like all bots, Kasper is built on intents, utterances, and slots. After adding intents, their corresponding utterances, and slots, a few slots need to be added as custom slots. For example, for the query “show clients where invoice amount is greater than 20000”, the utterance section of Kasper records it as below:

 

Here ‘cola’ and ‘operatora’ are slot variables under custom slots ‘columnname’ and ‘operator’ respectively.

Natural Language to SQL Conversion

All responses that require output from the database are sourced with the help of a Lambda function. The JSON response from the Lambda function contains the input transcript, intent, and slots information. The back-end application receives the response from the Lambda function, parses the JSON, and classifies the information into the corresponding intent and slots. The application extracts the slots and intents and then proceeds to build the query.
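A minimal Python sketch of that slot-to-SQL step, using hypothetical intent/slot names modeled on the example query above (the real Kasper logic is considerably richer):

```python
# Build a SQL query from the intent and slots parsed out of the
# Lex/Lambda JSON response. Intent name, slot names, and the table
# schema are all illustrative, not Kasper's actual definitions.
OPERATORS = {"greater than": ">", "less than": "<", "equal to": "="}

def build_query(lex_response):
    intent = lex_response["intent"]
    slots = lex_response["slots"]
    if intent == "FilterClients":
        column = slots["columnname"]       # e.g. "invoice_amount"
        op = OPERATORS[slots["operator"]]  # "greater than" -> ">"
        value = slots["value"]
        return f"SELECT client FROM invoices WHERE {column} {op} {value}"
    raise ValueError(f"unhandled intent: {intent}")

response = {
    "inputTranscript": "show clients where invoice amount is greater than 20000",
    "intent": "FilterClients",
    "slots": {"columnname": "invoice_amount",
              "operator": "greater than",
              "value": "20000"},
}
sql = build_query(response)
```

In a real deployment the value would be passed as a bound query parameter rather than interpolated into the SQL string, to avoid injection.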

Responses from Kasper

Responses from Kasper can come in different formats. There can be single-value responses, images, tables, etc. The response type is determined automatically from the intent. A custom website with a chat window has been developed for interacting with Kasper; the chat window accepts both text and audio input. The following sections explain each response type with its corresponding chat window.

 

Response type I – Single values

There are instances where users might want a sum, a count, or another single-value response. For example, an inquiry might be “count the number of clients whose due date is within 2 weeks” or “sum of the invoice amount of all clients”. The response to such a query is just a single value, e.g., “10,000”.

Response type II – Images and Tables

1. Tables

Images and tables are the next type of response Kasper delivers. Once the SQL query is constructed, the application connects to the database, retrieves the data, and stores it in a pandas DataFrame. This DataFrame can be exported as an HTML table for previewing in the chat window, or downloaded as a CSV file.
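A short sketch of that export step, assuming pandas is available and using made-up column names and a placeholder output path:

```python
import pandas as pd

# Result set retrieved from the database, held in a DataFrame
# (values and column names are illustrative).
df = pd.DataFrame({"client": ["Acme", "Globex"],
                   "invoice_amount": [25000, 31000]})

# Render an HTML table for the chat-window preview...
html_table = df.to_html(index=False)

# ...and write a CSV for the download option.
df.to_csv("results.csv", index=False)
```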

2. Images

From the pandas DataFrame, different charts/graphs can be derived. When an image response is expected, charts are generated using Python libraries, saved to a file, and then exported to the chat window. Two images are generated – a thumbnail and the actual image. Kasper is equipped with a feature named auto-visualization: based on the DataFrame, the function decides what type of graph or chart to plot. Numerous rules feed into that decision. For example, the function determines whether a specific column holds continuous or categorical values, and the resulting graph is plotted based on such combinations.
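One way such a dtype-based rule could look, as a simplified sketch (the actual auto-visualization rules are richer than this, and the function and column names are hypothetical):

```python
import pandas as pd

def pick_chart(df, x, y):
    """Choose a chart type from the dtypes of the two columns plotted."""
    x_numeric = pd.api.types.is_numeric_dtype(df[x])
    y_numeric = pd.api.types.is_numeric_dtype(df[y])
    if x_numeric and y_numeric:
        return "scatter"   # continuous vs continuous
    if not x_numeric and y_numeric:
        return "bar"       # categorical vs continuous
    return "table"         # fall back to a tabular preview

df = pd.DataFrame({"client": ["Acme", "Globex", "Initech"],
                   "invoice_amount": [25000, 31000, 18000]})
chart = pick_chart(df, "client", "invoice_amount")
```

Here a categorical `client` column against a numeric `invoice_amount` column would select a bar chart; two numeric columns would select a scatter plot.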

Response type III – Fallback mechanism with response card

The third type of response is the response card – a prompt to clarify the user’s intention. Suppose the user asks an ambiguous question such as “what is the amount of Apollo Inc.”. The chatbot finds the query to be missing keywords because the user did not specify the type of amount (either invoice amount or balance amount). Kasper then prompts back with a list of possible options, so the user can select the appropriate option and receive an accurate result.

Kasper has evolved to its current operational capabilities by maximizing Amazon Lex’s potential and incorporating other significant AWS services into its architecture. Currently, Kasper can solve important natural-language-to-SQL problems and answer a few FAQ questions as well. It can also be modified for other domain problems to suit specific needs. Over time, more capabilities can be added, and it could serve as a first-line substitute for human support personnel, freeing up your support team to address more critical issues more quickly. If you’re interested in how a chatbot might improve your operations, schedule a free assessment with our Machine Learning team today.


6 Business Continuity Strategies to Implement Post COVID-19

The health crisis of COVID-19 impacted businesses, people, and communities in numerous ways, causing us to change our strategies and the way we live going forward. This means that businesses are adapting to an incredibly new business landscape that’s changing the way we will work for the foreseeable future. Organizations are challenged with reinventing strategies, enabling virtual teams with remote workspaces, and exploring what’s possible for creating new innovations. Here are some key strategies to implement to accelerate business continuity and transition to a new working world:

Establish Your Team Leaders

The greatest asset any organization has is its people. Choose team members with proven reliability, organizational skills, and strong leadership qualities, especially under pressure. Situations like COVID-19 can be stressful, so it is wise to choose your Business Continuity team with these things in mind. Some roles you might consider designating specifically for Business Continuity purposes are Executive Business Continuity Manager (overall Team Lead), Communication Lead, IT Lead, Human Resources, Facilities/Maintenance, and Operations/Logistics Lead. These roles can depend on your specific business needs and internal departmental breakdown. Once you’ve decided on your key players, it’s time to evaluate the primary business processes that need to continue in case of business disruption.

Document & Identify Critical Processes

From internal human resources processes (payroll processing, retirement plan administration, healthcare benefits) to business operations (supply chain management, customer support, operational processes), each requires access to various technologies and secure applications. It is important to know whether these processes can still be performed with the current system architectures and IT tools in place. That leads us into the next strategy: connecting each process with the resources in place to determine whether the business continuity plan being developed needs specific changes, updates, or additions.

Identify Key Technology and Tools

Performing a proper assessment of current tools and technologies to validate capability will reveal where there might be gaps that need to be filled. One key question to consider is “Will the tools and technologies we currently have in place work in the case of a future change in working environment?” The answer will help identify what technologies or tools might be needed in order to continue operating seamlessly with minimal disruption. Need help strategizing? Learn more about how to leverage cloud technology to improve business operations and increase performance efficiencies here.

Consider Contingency Technology and Tools

Is your system architecture set up for a new working structure for virtual teams? Is your cloud strategy crystal clear and strong enough to handle changing needs in terms of scalability and operations? Is it ready in case of another change in the working environment or future disaster? For example, it might be necessary to set up virtual workspace situations for employees. As a preferred AWS partner, Idexcel can help implement AWS Workspaces solutions in your organization – enabling business continuity by providing users and partners with a highly secure, virtual Microsoft Windows or Linux desktop. This setup grants your team access to the documents, applications, and resources they need, anywhere, anytime, from any supported device. Learn more about how we can help do that here.

Build A Customer Communication Plan

Communication with your staff, clients, and partners is perhaps the most important element of these strategies. The more they hear from you, the better off you will be with establishing trust and reliability. When communicating, be sure to follow these 3 guidelines:

1. Timing is everything. Responding quickly is key to establishing trust, visibility, and proactivity. It’s critical to be timely with messaging and, depending on the communication sent, to give the recipient proper time to respond and plan.

2. Be clear, concise, authentic, and provide value. Keep your communications simple and to the point. Create messaging that provides value, help, and support during any business changes or possible disruptions. Another key tip: keep it positive and avoid negative words, to evoke a more positive reaction to the communication. The more authentic and personable the messaging, the more likely you are to receive a positive response and create a sense of comfort.

3. Leverage all communication channels. Social media is a great way to keep in touch with your audience. Employees, clients, and partners alike are all very active, especially on LinkedIn, a key point of digital communication and connection among professionals. Keep up with internal email communication with your teams as well, checking in often on how changes may have impacted them.

Set Your Organization Up for Innovation

With a Business Continuity plan in place and the team assembled, now might be the time to consider strategically planning for innovative solutions. Specific technologies can be implemented to ensure accelerated business continuity measures are in place to better set your business and teams up for success.

For example, many organizations are adopting Machine Learning solutions with RPA (Robotic Process Automation). Many websites are using chatbots for answering general FAQs asked by the customers, eliminating the need for personnel to respond, and enabling them to focus on other tasks. They can positively impact the customer’s experience and are an ideal tool for short-staffed employers, saving thousands of hours of productivity and cost.

If you need help strategizing and creating your business continuity plan, get in touch with us to get connected with an expert.

Is Machine Learning the Solution to Your Business Problem?

The term Machine Learning (ML) is defined as ‘giving computers the ability to learn without being explicitly programmed’ (a definition attributed to Arthur Samuel). Another way to think of this is that the computer gains intelligence by identifying patterns in data sets on its own, improving output accuracy over time as more data sets are examined. Since ML can be a challenging solution to implement, we’ve put together some foundational steps to assess the feasibility of building an ML solution for your organization:

1. Identify the Problem Type

Start by distinguishing between automation problems and learning problems. Machine learning can help automate your processes, but not all automation problems require learning.

Automation: Implementing automation without learning is appropriate when the problem is relatively straightforward. These are the kinds of tasks where you have a clear, predefined sequence of steps currently being executed by a human, but that could conceivably be transitioned to a machine.

Machine Learning: For the second type of problem, standard automation is not enough – it requires learning from data. Machine learning, at its core, is a set of statistical methods meant to find patterns of predictability in datasets. These methods are great at determining how certain features of the data are related to the outcomes you are interested in.
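As a toy illustration of “finding patterns of predictability”, here is a least-squares fit in plain Python on an entirely synthetic dataset: the fitted line is the learned pattern, and prediction on an unseen input is the learning payoff.

```python
# Fit y = a*x + b by ordinary least squares on tiny synthetic data,
# then use the learned pattern to predict an unseen point.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of x,y over variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

prediction = a * 6.0 + b   # predict the outcome for an unseen input
```

Real ML systems use richer models and far more data, but the structure is the same: estimate parameters from observed examples, then generalize to new inputs.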

2. Determine if you have the right data

The data might come from you or an external provider. In the latter case, make sure to ask enough questions to get a good feel for the data’s scope and whether it is likely to be a good fit for your problem. Consider your ability to collect it, its source, the required format, and where it is stored, but also the human factor: both executives and employees involved in the process need to understand its value and why taking care of its quality is important.

3. Evaluate Data Quality and Current State

Is the data you have usable as-is, or does it require manual human manipulation before introducing into the learning environment? A solid dataset is one of the most important requirements for building a successful machine learning model. Machine learning models that make predictions to answer their questions usually need labeled training data. For example, a model built to learn how to determine borrower due dates to improve accurate reporting needs a starting point from which to build an accurate ML solution. Labeled training datasets can be tricky to obtain and often require creativity and human labor to create them manually before any ML can happen.

4. Assess Your Resources

Do you have the right resources to maintain your ML solution? Once you have an appropriate question and a rich training dataset in hand, you’ll need people with experience in data science to create your models. Lots of work goes into figuring out the best combination of features, algorithms, and success metrics needed to make an accurate model. This can be time-consuming and requires consistent maintenance over time.

5. Confirm Feasibility of ML Project

Having worked through the four previous steps for assessing whether or not ML is right for your organization, consider your responses. Is the question appropriate for building an ML business solution? Is the data available, or at least attainable? Does the data need hours of human labor? Do you have the right skilled team members to carry out the project? And finally, is it worth it – meaning, will the solution have a large impact, financially and socially?

It’s important to consider these key questions when assessing whether or not Machine Learning is the right solution for your organization’s needs. Connect with our ML experts today to schedule your free assessment. 

India Cloud Summit 2020


Event Details: India Cloud Summit is the one place to explore new ideas and learn from industry experts. Join us for the event that brings together technologists, executives, customers, partners, and key decision makers from all industries.

Why Attend

Whether you are new to cloud or an experienced user, India Cloud Summit will give you inspiration & technical knowledge for your cloud journey. We have put together a mix of thought leaders & industry experts to cover topics like Artificial Intelligence (AI)/Machine Learning (ML), Alexa, IoT, Migration, Big Data and cloud architecture best practices.

The purpose of the summit is to network, share ideas, and have meaningful conversations about how others are implementing cloud in their organizations, cloud migration, DevOps, and more.

Idexcel Keynote Details

Title: AI in the Cloud
Speaker: Viswanath Ravindran – Principal Data Scientist at Idexcel

Event Pictures


[Know more about the event]

AWS re:Invent 2019 – Global Partner Summit Announcements


1. Introducing AWS Retail Competency Partners: AWS Retail Competency Partners provide innovative technology offerings that accelerate retailers’ modernization and innovation journey across all areas in the enterprise. Read More

2. Introducing AWS Public Safety & Disaster Response Competency Partners: AWS customers can quickly identify top-tier APN Consulting Partners who identify, build, and implement technology offerings aimed at improving organizational capacity to prepare for, respond to, and recover from emergencies and disasters globally. Read More

3. New AWS Service Ready Program to help customers find tools that integrate with AWS services: AWS Partner Network (APN) announced AWS Service Ready Program, a new way for AWS customers to identify if a tool or application will integrate with AWS services running in their cloud environment. Read More

4. New APN Global Startup Program, helping startup APN Technology Partners grow their cloud-based business: AWS Partner Network (APN) announced the APN Global Startup Program, a dedicated go-to-market (GTM) program for eligible Startup APN Technology Partners. Read More

5. Introducing a new benefit for APN Consulting Partners, APN Immersion Days: AWS Immersion Day workshops provide a customizable AWS experience delivered by AWS Solution Architects and Account Managers to AWS customers. Read More

6. AWS Marketplace makes it easier for you to discover relevant third-party software and data products: AWS Marketplace, a digital catalog with over 7,000 software listings and data products, has announced Discovery API, a new API created for select partners. Read More

7. AWS Marketplace announces a simplified fee structure and the expansion of Seller Private Offers: Starting today, all registered sellers with a public listing in AWS Marketplace can extend a custom contract through Seller Private Offers. Read More

AWS re:Invent 2019


Event Details: At re:Invent 2019, you can expect deeper technical content, more hands-on learning opportunities, and more access to AWS experts than ever. Each year at re:Invent, we bring you over a thousand sessions, chalk talks, workshops, builders sessions, and hackathons that cover AWS core topics and highlight the emerging technologies that we are developing. This year, re:Invent will be no different.

[Know more about the Conference]

About Idexcel: Idexcel is a Professional Services and Technology Solutions provider specializing in Cloud Services, Application Modernization, and Data Analytics. Idexcel is proud to have provided, for more than 21 years, services that implement complex, innovative, and agile technologies, delivering lasting value to our customers.

Anand Allolankandy – (Senior Director, Cloud Services Practice at Idexcel) will be attending this event. For further queries, please write to anand@idexcel.com