AWS re:Invent 2017


Date: November 27–December 1, 2017
Location: ARIA, Encore, MGM, Mirage, The LINQ, The Venetian
Las Vegas, NV

Event Details

AWS re:Invent is a learning conference hosted by Amazon Web Services for the global cloud computing community. The event features keynote announcements, training and certification opportunities. At the conference, you’ll have access to more than 1,000 technical sessions, a partner expo, after-hours events, and so much more.

Why Attend

The event is ideal for developers and engineers, system administrators, systems architects, and technical decision makers.

[Learn more about the conference]

About Idexcel: Idexcel is a global business that supports commercial and public sector organizations as they modernize their information technology using DevOps methodologies and cloud infrastructure. Idexcel provides professional services for the AWS Cloud, including program management, cloud strategy, training, application development, managed services, integration, migration, DevOps, AWS optimization, and analytics. As we help our customers modernize their IT, our clients can expect a positive return on their investment in Idexcel, increased IT agility, reduced risk on development projects, and improved organizational efficiency.

Allolankandy Anand, Sr. Director, Technical Sales & Delivery, will be attending this event. For further queries, please write to anand@idexcel.com.

Cloud-based QA Infrastructure

A silver bullet to ward off traditional challenges

If you have some spare time at the office, spare a thought for the CIO in the IT industry. A blitzkrieg of challenges greets the CIO every day as he settles down at his desk after wishing his colleagues, rather ironically for him, a “good morning”. Here’s how the dice roll for him every day at work:

Existing Scenario:

a) Shrinking budget

b) Increasing cost pressures

Expectations:

a) Cut IT spend

b) Deliver value and a technology edge

Preferred Solution:

a) Enhance ROI generated from IT components

b) Increase focus on QA infrastructure and maintenance costs

c) Lean on test managers to reduce QA infra costs, as they form a major chunk of IT infrastructure budgeting.

Cutting costs, a Catch-22 situation

On the other side, test managers face a catch-22 situation, as a cut in QA infrastructure spend could potentially impact the quality of software deliverables. Here are a few examples of the challenges that drive the cost of IT upwards while creating and managing QA infrastructure:

  • Testing operations are recurring but not continuous. This means test infrastructure is sub-optimally utilized, which has a significant impact on ROI.
  • Testing work areas span a wide spectrum: on-time QA environment provisioning for multiple projects, decommissioning and reallocation of QA environments to other projects, QA environment support, incident management, and configuration management across multiple projects. All of these require an organization to allocate and maintain proportionate skilled resources at all times, which in turn drives costs upwards.
  • CIOs and test managers are expected to ensure testing is commissioned on recommended hardware, because most issues that surface at later stages of the quality gate are attributed to testing on inadequate hardware. This again accounts for a significant chunk of the total IT budget.
  • Getting an appropriately defined QA infrastructure up and running on time (including procurement and leasing of its elements) to meet set timelines demands more IT staffing resources.
  • Many test managers give the go-by to a staging environment and deploy directly to production because of budget constraints; however, a staging environment that mimics production is critical to the quality of software in production, and creating such an environment also consumes a huge chunk of the total IT budget.
  • Today’s complex application architectures involve multiple hardware and software tools, which require significant investment of time, money, and resources in coordination, SLA management, and procurement with multiple vendors. Taken together, all of these add more allocations to the budget.
  • To conduct performance testing, test managers need to set up a huge number of machines in the lab to generate the desired number of virtual users, demanding more budget from CIOs.

The Case for QA Infrastructure as a Service in the Cloud

All of the above challenges force CIOs and Test Managers to move away from on-premises QA infrastructure and scout for alternatives such as cloud computing for creating and managing QA environments. Organizations are leveraging cloud computing to significantly lower IT infrastructure spend on QA environments while still delivering value, quality, and an efficient QA lifecycle. Already, many players big and small, such as Amazon, IBM, Skytap, CMT, Joyent, and Rackspace, offer QA infrastructure as a service in the cloud. Using this service, organizations can set up QA infrastructure in the cloud, shifting focus from CAPEX to OPEX. CIOs too are able to significantly squeeze both CAPEX and OPEX elements, thereby meeting the budget cap without compromising the quality of the solution.

How does it work?

Assume that a QA team needs a highly complex test environment configuration in order to test a new application. Instead of setting up an on-premises QA environment (which requires hardware procurement, setup, and maintenance), a QA team member logs in to the QA infrastructure service provider’s self-service portal and does the following (a minimal automation sketch follows the list):

* Creates an environment template covering each tier of the application and the network elements: web server, application servers, load balancer, database, and storage. For example, a QA team member can fill in the web server template as “web server, large instance, Windows Server 2008”.

* Submits the request through the IaaS service provider’s portal.

* The service provider provisions this configuration and hardware in minutes and sends an email to the QA team.

* The QA team uses this testing environment for the required time and completes the testing.

* The QA team releases the test environment at the end of the testing cycle.

* For subsequent releases, the environment can simply be set up from the same template, and the QA team can deploy the new code and start testing.

* The service provider bills only for the actual usage of the QA environment.
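
To make the workflow concrete, here is a minimal sketch of what the “create template, submit, provision, release” cycle can look like when AWS is the IaaS provider and the environment template is an AWS CloudFormation stack driven through boto3. The stack name, AMI id, and single web-server tier are illustrative only; a real template would describe every tier listed above.

```python
import json

import boto3  # AWS SDK for Python; assumes AWS is the chosen IaaS provider

# Illustrative template: a single "web server" tier. A real environment
# template would also describe app servers, a load balancer, a database,
# and storage, as listed above.
QA_TEMPLATE = json.dumps({
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "QaWebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "m5.large",          # the "large instance"
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI id
            },
        },
    },
})

cfn = boto3.client("cloudformation")


def provision_qa_environment(stack_name: str) -> None:
    """Submit the environment template and wait until it is provisioned."""
    cfn.create_stack(StackName=stack_name, TemplateBody=QA_TEMPLATE)
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)


def release_qa_environment(stack_name: str) -> None:
    """Tear the environment down at the end of the testing cycle."""
    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)


if __name__ == "__main__":
    provision_qa_environment("qa-release-42")  # hypothetical stack name
    # ... run the test cycle against the provisioned environment ...
    release_qa_environment("qa-release-42")
```

Because the template is just data, re-running the same provisioning call for a subsequent release recreates an identical environment (the “same template, new code” step above), and deleting the stack at the end of each cycle is what keeps billing limited to actual usage.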

How does it help?

Elastic and scalable data center with no CAPEX investment: CIOs and Test Managers don’t have to worry about budgeting, procurement, setup, and maintenance of the QA environment. Organizations simply develop their applications, create a template of the required environment, and request it from the service provider, who enables the test environment. The QA team then deploys the application in a production-like environment, saving time and expense over traditional on-premises deployment. This shifts IT infrastructure spending from CAPEX to OPEX.

QA teams can provision their own environments: With this facility, QA teams can provision their own environments on demand rather than going through a long IT procurement process to set up an on-premises test environment.

Multiple parallel environments: QA teams can create different environments with different platforms and application stacks, with no capital investment in multiple sets of hardware, reducing go-to-market time.

Minimize resource hoarding: Instead of setting up on-premises test environments and investing capital in hardware, QA teams can deploy environments in the cloud on a need basis and release the resources after testing is complete. Some service providers offer a ‘suspend and resume’ facility, in which case QA teams can suspend an environment, saving its entire state including memory, and resume it at a later stage when required.
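
To illustrate the ‘suspend and resume’ idea, the sketch below simply stops and later restarts the EC2 instances behind a QA environment using boto3. Plain stop/start preserves disk state and pauses compute billing; whether memory state is preserved as well depends on the provider’s suspend or hibernate support, so treat this only as the general shape of the facility. The instance ids are hypothetical.

```python
import boto3  # illustrative; uses EC2 stop/start as the suspend/resume analogue

ec2 = boto3.client("ec2")

# Hypothetical instance ids that make up one QA environment
QA_INSTANCE_IDS = ["i-0aaaaaaaaaaaaaaa1", "i-0bbbbbbbbbbbbbbb2"]


def suspend_environment(instance_ids):
    """Stop the instances: disk state is preserved and compute billing pauses."""
    ec2.stop_instances(InstanceIds=instance_ids)
    ec2.get_waiter("instance_stopped").wait(InstanceIds=instance_ids)


def resume_environment(instance_ids):
    """Start the same instances again when the next test cycle begins."""
    ec2.start_instances(InstanceIds=instance_ids)
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
```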

The bottom line: QA environments in the cloud are lifesavers for companies. CIOs are slowly adopting cloud-based QA infrastructure and moving away from on-premises QA infrastructures, which demand huge CAPEX and OPEX and yield lower ROI. Cloud-based QA infrastructure, if managed smartly, is a silver bullet that can neutralize most of the challenges CIOs and Test Managers face with traditional QA infrastructure.

In the Cloud, Don’t KISS.

Remember the Y2K dot-com era, when every Tom, Dick, and Harry rushed to ride the Internet bubble? It looks like many of us forgot that lesson the instant Internet 2.0 (or is it 3.0?) made a comeback on a cloud: signing up for cloud services like you are applying for a credit card. Follow the herd mentality, you know.

To get smarter, faster, and better, go easy. And then act with speed. That’s how you win the race. Just because your competitor, your associate, or your vendor is moving to the cloud doesn’t mean you should mimic them without further thought. Think before you ink an SLA. Is your CSP (Cloud Service Provider) capable of delivering standards-based cloud solutions designed from the ground up to meet your specific enterprise requirements? Does your Service Level Agreement with your CSP also cover your requirements for monitoring, logging, encryption, and security? Do you have the domain-specific IT knowledge, expertise, and corresponding environment in place before signing up for a cloud solution? And are your security protocols in optimum working order?

Security protocols: keep a hawk’s eye on them. In CIO circles, they warn you not to KISS (Keep It Stupid & Silly) when you sign up for the cloud. KISS refers to common mistakes in an enterprise such as failing to register your passwords and individual IDs with the enterprise, turning a deaf ear to demands for secure Application Programming Interfaces (APIs), and wrongly assuming that you are outsourcing risk, accountability, and compliance obligations to the cloud as well.

The ironic part of this business of securing the cloud is the challenge of arriving at an ideal tradeoff between the enterprise’s need for security and the consumer’s need for privacy. The Economist, in “Keys to the Cloud Castle”, succinctly sums up this dilemma faced by cloud-based internet storage and synchronization providers such as Dropbox using a house metaphor. Which do you prefer: access through a master key held by an authorized internal security function, or access through a security key you choose yourself? The problem with the former is the key falling into the wrong hands, while in the latter case the danger is losing all access if you lose the key through negligence. Cloud security scientists therefore constantly look for a middle path that combines privacy with security.

Does this mean that perfectly secure cloud computing is still a chimera? Happily for us, recent research in cryptography shows that homomorphic encryption – an approach that would enable a Web user to send encrypted data to a server in the cloud, which in turn would process it without decrypting it and send back a still-encrypted result – is well on its way to becoming a serious pursuit among CIOs.
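
The flavor of that idea can be shown with the python-paillier library (package name phe). Paillier encryption is only additively homomorphic rather than fully homomorphic, but this small sketch demonstrates the core trick described above: the server computes on ciphertexts and never sees the plaintext.

```python
# Requires the python-paillier package: pip install phe
# Paillier is additively homomorphic (not fully homomorphic), but it shows the
# core trick: the server computes on ciphertexts and never sees the plaintext.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Client side: encrypt the data before sending it to the cloud
enc_a = public_key.encrypt(15)
enc_b = public_key.encrypt(27)

# "Cloud" side: operates only on ciphertexts, never on 15 or 27
enc_sum = enc_a + enc_b      # still encrypted
enc_scaled = enc_sum * 3     # multiply by a plaintext constant

# Client side: decrypt the still-encrypted results locally
print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 126
```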

A clearly demarcated delegation of tasks between cloud providers and security providers could serve as a rule of thumb for ensuring both security and privacy. Cloud providers should focus on providing access anywhere, anytime, while security providers should focus on core encryption. An integration of both services can lead to a seamless and secure user experience. For example, you as a user encrypt your files directly on your laptop, desktop, or phone, and then upload the encrypted documents to the cloud.
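
Here is a minimal sketch of that ‘encrypt locally, upload only ciphertext’ pattern, assuming Python with the cryptography package for the encryption and boto3 for the upload; the file, bucket, and object names are purely illustrative, and in practice the key itself must be stored and backed up somewhere other than the cloud provider.

```python
# Minimal sketch of "encrypt on your own device, upload only ciphertext".
# Assumes the cryptography and boto3 packages; the file, bucket, and object
# names are purely illustrative.
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key with you, never with the provider
fernet = Fernet(key)

with open("report.pdf", "rb") as f:           # hypothetical local file
    ciphertext = fernet.encrypt(f.read())

# Only the encrypted blob ever reaches the cloud provider
boto3.client("s3").put_object(
    Bucket="my-private-bucket",               # hypothetical bucket
    Key="report.pdf.enc",
    Body=ciphertext,
)
```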

Bottom line: Don’t sign up for the cloud like you are applying for a credit card. Outsourcing your ideas doesn’t mean you also outsource your thinking.

For Better Cloud Security – Wheel It Differently, Instead of Reinventing the Wheel!

SaaS served as sauce? Wow. But only as long as it’s secure. And that’s where the penny drops. No matter. Big money is now way too big on cloud services. We can’t roll back the Age of Participation. The jury may still be pondering how secure the cloud is, but the verdict is only going to tweak “how secure is the cloud” into “how to secure the cloud”.

Yes, there is a cloud over the cloud. Less than a year ago, hackers stole millions of passwords from LinkedIn and the dating site eHarmony, fueling the debate over cloud security. Dropbox, a free online service that lets you store and share documents, became “a problem child for cloud security” in the words of a cloud services expert.

According to the Cloud Security Alliance (CSA), a not-for-profit body, the “Notorious Nine” threats to cloud computing security are: data breaches, data loss, account or service traffic hijacking, insecure interfaces and APIs, denial of service, malicious insiders, abuse of cloud services, insufficient due diligence, and shared technology vulnerabilities.

However, a problem is an opportunity in disguise, and so the algorithm waiting to be discovered is how to outsmart the hackers and overcome the threats to cloud security. More so since the advantages that accrue from cloud services (flexibility, scalability, and economies of scale, for instance) far outweigh the risks associated with the cloud.

One way to achieve better cloud security is to use a tried, tested, and trusted Cloud Service Provider (CSP) rather than to self-design a high-availability data center. A CSP also yields greater economies of scale.

Virtualized servers, though less secure than the physical servers they replace, are becoming more secure. According to research by Gartner, 60% of virtualized servers were less secure than the physical servers they replaced in 2012; by 2015, that figure is expected to fall to 30%.

To do the new in cloud security, we could begin by reinventing the old. The traditional methods of data security, viz. logical security, physical security, and premises security, also apply to securing the cloud. Logical security protects data using software safeguards such as password access, authentication, and authorization, and by ensuring proper allocation of privileges.

The risk in cloud service offerings arises because a single host running multiple virtual machines may be attacked through one of its guest operating systems, or one guest operating system may be used to attack another. Cloud services are accessed over the Internet and so are also vulnerable to denial-of-service attacks and widespread infrastructure failure.

Traditional security protocols can also be successfully mapped to work in a cloud environment. For example, traditional physical controls such as firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and Network Access Control (NAC) products that enforce access control can continue to be critical components of the security architecture. However, these appliances no longer need to be physical pieces of hardware. A virtual firewall, such as Cisco’s security gateway, performs the same functions as a physical firewall but has been virtualized to work with the hypervisor. This is catching on fast: Gartner researchers predict that by 2015, 40% of security controls in data centers will be virtualized.
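
As a small illustration of a firewall control expressed virtually rather than as a physical appliance, the sketch below defines a cloud security group that admits only HTTPS traffic to a web tier, using boto3. The group name and VPC id are hypothetical, and a security group stands in here for the general idea rather than for any particular vendor’s virtual firewall product.

```python
# Illustrative only: a cloud security group acting as a virtualized firewall
# rule set. The group name and VPC id are hypothetical; this stands in for the
# general idea, not for any particular vendor's virtual firewall product.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-tier-fw",
    Description="Virtual firewall for the web tier",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC id
)

# Permit inbound HTTPS only; everything else stays blocked by default
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```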

Moral of the cloud: You don’t have to reinvent the wheel to secure the cloud. But we need to keep talking – to wheel it differently.