A new forecast predicts that automated malware attacks will have a devastating effect on the internet of things (IoT). It also predicts the rise of the Shadownet (IoT botnets that can’t be seen or measured using conventional tools), cloud poisoning, further growth of Ransomware as a Service, and attacks on smart buildings. The report, “Fortinet 2017 Cyber-Security Predictions: Accountability Takes the Stage,” based its predictions on this year’s cyber-security trends: the digital footprint of businesses and individuals has expanded, increasing the potential attack surface; everything is a target and anything can be a weapon; threats are becoming intelligent, can operate autonomously and are increasingly difficult to detect; and old threats are returning, enhanced with new technologies. According to the report, “This demand for connectivity, and the need to address its associated risks, will create serious challenges for emerging countries, traditionally disconnected markets, and smaller companies adopting digital business strategies for the first time.” Some key predictions are highlighted here. [Read more…]
Date : April 17-20, 2017
Location : Austin, TX
Venue : Austin Convention Center | 500 E. Cesar Chavez St. Austin
DockerCon is the community and container industry conference for makers and operators of next generation distributed apps built with containers. The three-day conference provides talks by practitioners, hands-on labs, an expo hall of Docker ecosystem innovators and great opportunities to share your experiences with other virtual container enthusiasts.
• 3 Keynotes & 7 Tracks
• 60+ Breakout Sessions
• Community Presentations
• Hands-on Lab
• Ask The Experts
• Workshops
• Birds-of-a-Feather
• Hosted Happy Hours
• After Party
• Ecosystem Expo
You never want to ship software that breaks every fortnight and annoys your customers. Security testing is, therefore, an inevitable step prior to deploying software at the client’s site. In this article, we offer an insight into security testing and explain why it is so important for web applications.
What is security testing?
Security testing forms an integral part of software testing and is done to identify weaknesses and vulnerabilities in a software application. The main objective is to identify the vulnerabilities of the software and determine whether its data and other resources are protected from intruders. In short, it is a way to verify that confidential data stays confidential.
Due to the explosion of ecommerce websites in the world today, security testing has become all the more important. The testing is done once the application is developed and installed. To identify all the potential vulnerabilities, network security testing is also suggested.
The seven attributes that security testing needs to cover are: Authentication, Authorization, Confidentiality, Availability, Integrity, Non-repudiation and Resilience.
The Security Testing “Terminology”
Penetration Testing:
This is a type of testing done by evaluating the system and/or network using various malicious techniques. The purpose of this testing is to protect important data from users who do not have authorized access to the system, such as hackers. It is carried out after careful notification, consideration and planning.
Penetration testing is categorized into two types – Black Box Testing and White Box Testing. In White Box Testing, the tester has access to all vital information like code, IP addresses, infrastructure diagrams, etc. In Black Box Testing, the tester does not have access to any such information. Black Box Testing tends to be the most realistic, because the tester works without inside information and thereby simulates an actual hacker.
Password Cracking:
In password crack testing, the system is tested to identify weak passwords. Password-cracking tools are used for testing this attribute. The end result is to ensure that users are using adequately strong passwords.
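Dedicated password-cracking tools do the heavy lifting here, but the core check they automate can be sketched in a few lines. The sketch below is a minimal, hypothetical example (the password list and rules are illustrative, not a real tool):

```python
# A minimal sketch of the check a password-crack test automates:
# compare candidate passwords against a known-weak list and simple rules.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "admin"}

def is_weak(password):
    """Flag passwords that are too short or appear on a known-weak list."""
    if len(password) < 8:
        return True
    if password.lower() in COMMON_PASSWORDS:
        return True
    return False

# Example: audit a set of user passwords and report the weak ones.
audit = {"alice": "S7#kp!x2Qz", "bob": "123456"}
weak_users = [user for user, pwd in audit.items() if is_weak(pwd)]
print(weak_users)  # → ['bob']
```

Real tools such as dictionary and brute-force crackers extend this idea with far larger wordlists and hash cracking.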
Vulnerability Scanning:
This identifies the weakest points in the system, which might give malicious software or unauthorized users an easy path in. Vulnerabilities can arise from bugs in the software, inadequate software testing or the presence of malicious code. This phase requires fixes and patches to prevent the system’s integrity from being compromised by malware or hackers.
URL Manipulation:
One of the popular ways to hack a website is URL manipulation, wherein hackers manipulate the website’s URL query strings to gain access to confidential information.
This usually takes place when the application uses HTTP GET to pass information between client and server. The information is passed via the query string. The tester alters the query parameters to check whether the server accepts them.
URL manipulation testing ensures that unauthorized users cannot access database records or other vital information on the website.
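The mechanics of this test can be sketched with Python’s standard library: rebuild a URL with one query-string parameter altered, exactly as a tester probing an HTTP GET endpoint would. The URL and parameter names below are hypothetical:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def tamper_query(url, param, new_value):
    """Rebuild a URL with one query-string parameter altered, the way a
    tester (or attacker) would probe an HTTP GET endpoint."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [new_value]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Hypothetical URL: the tester swaps their own account id for another one,
# then checks whether the server still serves the record.
original = "https://example.com/account?view=statement&acct=1001"
tampered = tamper_query(original, "acct", "1002")
print(tampered)  # → https://example.com/account?view=statement&acct=1002
```

If the server returns account 1002’s data to a user authorized only for 1001, the test has found an authorization flaw.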
SQL Injection:
SQL Injection is another common way hackers steal vital information from the web, so SQL Injection testing ensures that all databases are safe and protected. This type of testing probes the loopholes that let hackers slip into the system, by submitting every possible malicious SQL query against it.
Hackers try to query the database using SQL Injection statements to pull information and crash the system. Even the error messages displayed when the system crashes provide a generous amount of useful data to the hackers.
SQL Injection testing therefore focuses on input fields like comments and text boxes. Special characters in the input must be either escaped or rejected.
Cross-Site Scripting (XSS):
XSS testing verifies that user input is properly sanitized, so that scripts injected into input fields cannot be stored and executed in other users’ browsers.
Security Testing Approach
Following are the approaches taken for preparing and planning for security testing:
• Security Architecture Study: The first step is to comprehend the client’s requirements and security goals and objectives in compliance to the security need of the organization.
• Security Architecture Analysis: Comprehend the need of application under test.
• Classify security testing: Collect system set up information like operating system, technology and hardware to identify the list of vulnerabilities.
• Threat profile: Based on the information collected above, a threat profile is created.
• Test Planning: Based on identified threat, security risks and vulnerabilities, a test plan is drafted to address the issues.
• Traceability matrix preparation: A traceability matrix is prepared based on the identified threats and vulnerabilities.
• Security Testing Tool Identification: Identify the most suitable tool to test security test cases faster.
• Test Case Preparation: Prepare a test case document.
• Test Case Execution: Test cases are executed and the identified defects are fixed. Regression test cases are then executed.
• Reports: Document a detailed report of Security Testing from step 1 to the final including the still open issues.
At Idexcel, we perform security testing for all our clients to ensure they enjoy bug-free application execution across various domains. Our standards, methodologies and experience help us deliver the best business value to customers.
We have a robust automation framework using SOAP UI open source tool.
Key Features of framework
• Data Driven Framework to test with multiple inputs.
• Supports Security and functional testing of Web Services.
• Affordable framework since we are using open source SOAP UI tool.
• Simple and ready to use framework
• Suitable for both SOAP and REST web services
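SoapUI itself drives data-driven tests through its own test steps and Groovy scripts; the sketch below shows the same data-driven idea in Python, where one test routine is executed against many input rows. The payloads, expected status codes, and the `fake_send` stand-in (which would be a real HTTP call to the service under test) are all hypothetical:

```python
# Data-driven testing: one test routine, many input rows.
# Payloads and expected status codes below are illustrative only.
test_data = [
    {"payload": {"user": "alice"},    "expect_status": 200},
    {"payload": {"user": ""},         "expect_status": 400},
    {"payload": {"user": "x" * 999},  "expect_status": 400},
]

def run_case(send, case):
    """Run one data row through the service call `send`; report pass/fail."""
    status = send(case["payload"])
    return status == case["expect_status"]

# Stand-in for a real HTTP call (e.g. a POST to the web service under test).
def fake_send(payload):
    user = payload["user"]
    return 200 if 0 < len(user) <= 64 else 400

results = [run_case(fake_send, c) for c in test_data]
print(results)  # → [True, True, True]
```

Adding a new scenario is just adding a row to `test_data`, which is what makes the data-driven approach cheap to extend.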
Would you like to experience error-free execution of your application? Call us today!
Date : April 18-20, 2017
Location : San Diego, CA
Venue : San Diego Marriott
As a TA leader, I know you’ll only attend one or two conferences this year, and choosing where to invest your conference dollars is not always easy.
To help your decision making, here are a few good reasons why we think ERE is the best conference for you this spring.
We know TA leaders
ERE’s agenda is built specifically for experienced TA and recruiting leaders like YOU. This is not a “how-to” or “Recruiting 101” conference.
You’ll attend sessions led by experts in the field on topics that matter:
Leadership & successful roadmaps
Future trends, emerging technologies and how to utilize them
Data, predictive analytics, and metrics that matter
The focus this spring is about current changes in the industry that are shaping the future role of talent acquisition. We are bringing together the people and companies that can help you the most in the road ahead. [Know more about the Conference]
1. Tailoring Your DevOps Transformation to Organizational Culture
In the ‘2016 State of DevOps Report’, the Westrum Model of organizational culture is proposed. It focuses on information flow, high cooperation and trust as predictive factors of DevOps success in a company. It is a useful future-state design tool which, however, tells little about where your company is at the moment. Moreover, it does not suggest how to influence an organizational culture or in which direction it should change. Read more…
2. How to Set Up a Continuous Delivery Environment
With the increasing popularity of microservices, more and more is being said about Continuous Delivery. There are many interesting books and articles about that subject. There are also many tools and solutions that can help set up a Continuous Delivery environment. Read more…
3. DevOps done right: Why work-life balance matters to digital transformation success
As enterprises in every industry grapple with digital transformation, and fixate on meeting user demands for always-on services, IT departments find themselves under growing pressure to perform and deliver. Read more…
4. Is DevOps security about behavior or process?
One of my main roles is improving the security of the software produced by my employer, and it was in that role that I attended the annual gathering of the security industry in San Francisco last week. The RSA Conference is one of the two global security conferences I attend, the other being Blackhat. While Blackhat has become more corporate, it’s still dominated by hackers and focuses more on vulnerabilities, whereas RSA is very much a corporate event focused on enterprise security and security policy. Read more…
5. Finance industry leading the way in DevOps implementations, research says
Financial services firms are embracing DevOps approaches and best practices more quickly than other industries, according to new research from managed services provider Claranet. Read more…
When Amazon announced its earnings for its Amazon Web Services cloud division on Thursday, the results were hardly surprising. While AWS might not have the eye-popping growth percentages of its rivals, it still grew at a decent 47 percent, with earnings of $3.53 billion on an astonishing $14.2 billion run rate.
You may point to the rivals and say, well, they had better quarters from a growth standpoint, but it’s important to remember it’s easier to grow from a small number to a bigger small number than it is to grow from a big number. In that sense, AWS could be seen simply as a victim of its own success. Read more…
Date : May 8-11 2017
Location : Boston, MA
Venue : Hynes Convention Center
THE MUST-ATTEND: Open Infrastructure Event
The world runs on open infrastructure. At the OpenStack Summit, you’ll learn about the mix of open technologies building the modern infrastructure stack, including OpenStack, Kubernetes, Docker, Ansible, Ceph, OVS, OpenContrail, OPNFV, and more. Whether you are pursuing a private, public or multi-cloud approach, the OpenStack Summit is the place to network, skill up and plan your cloud strategy.
Hear business cases and operational experience directly from users, learn about new products in the ecosystem and participate in hands-on workshops to build your skills. Attended by thousands of people from more than 60 countries, it’s the ideal venue to plan your cloud strategy and share knowledge about architecting and operating OpenStack clouds. [Know more about the Conference]
1. Reddit – Cloud computing
About Blog: News, articles and tools covering cloud computing, grid computing, and distributed computing.
2. Amazon Web Services (AWS) – Cloud Computing Services
About Blog: Amazon Web Services offers reliable, scalable, and inexpensive cloud computing services.
3. Google Cloud Platform Blog
About Blog: Google Cloud Platform’s blog contains hundreds of articles written by Google cloud experts. You will find product updates, customer stories, and tips and tricks on Google Cloud Platform.
4. Infoworld – Cloud Computing
About Blog: Business technology, IT news, product reviews and enterprise IT strategies.
5. Cloud Tech
About Blog: CloudTech is a leading blog and news site that is dedicated to cloud computing strategy and technology.
6. All Things Distributed
About Blog: All Things Distributed is written by the world-famous Amazon CTO Werner Vogels. His blog is a must-read for anyone who uses AWS. He publishes sophisticated posts about specific AWS services and keeps his readers up-to-date on the latest AWS news.
About Blog: Technology News Articles – Cloud, Big Data, IoT News and Resources.
8. Cloud Computing Magazine
About Blog: One of the most active and extensive cloud blogs available. Its posts are from numerous writers from across the cloud industry.
9. Talkin’ Cloud
About Blog: Cloud Computing Industry News Trends for cloud services providers (CSPs), managed services providers (MSPs) and value-added resellers (VARs).
10. Compare the Cloud
About Blog: Compare the Cloud is one of the Internet’s most popular sources for cloud industry information.
I was asked to review an architecture diagram for an application that would use MicroServices. The diagram showed a few REST APIs connecting to a single database.
That raised tons of questions:
1. Only one database?
2. What if the database is down?
3. All services will be hosted in a single server?
4. What if I need to upgrade the server?
What is MicroService?
It is not easy to define MicroService in a single statement; the definition depends on viewpoint and requirements. However, the most prominent characteristics of MicroServices are:
• They encapsulate a customer or business scenario.
• They are developed by a small development team.
• They can be written in any programming language and use any framework.
• OOP concepts are implemented in a loosely coupled manner.
• Their codebases are small and are independently versioned, deployed, and scaled.
• They interact with other MicroServices over well-defined interfaces and protocols.
• They have unique names (URLs) that can be used to resolve their location.
• They remain consistent even after failures.
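Several of these characteristics – encapsulating one business scenario, owning private data, and exposing a well-defined interface – can be illustrated with a minimal sketch. The service name, routes, and data here are hypothetical:

```python
import json

# A minimal sketch of a single-concern MicroService: it owns its data
# privately and exposes it only through a small, well-defined interface.
class InventoryService:
    """Encapsulates one business concern (inventory) behind a small API."""
    def __init__(self):
        self._stock = {"sku-1": 12, "sku-2": 0}   # private to this service

    def handle(self, method, path):
        """Dispatch an HTTP-style request to the right operation."""
        if method == "GET" and path.startswith("/stock/"):
            sku = path.split("/")[-1]
            if sku in self._stock:
                return 200, json.dumps({"sku": sku, "qty": self._stock[sku]})
            return 404, json.dumps({"error": "unknown sku"})
        return 405, json.dumps({"error": "unsupported"})

svc = InventoryService()
print(svc.handle("GET", "/stock/sku-1"))  # → (200, '{"sku": "sku-1", "qty": 12}')
```

In a real deployment this handler would sit behind an HTTP server at its own URL, and other services would reach it only through that interface – never by touching `_stock` directly.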
SOA vs. Microservices
MicroServices are not simply SOA. If they must be defined in terms of it, MicroServices are an ideal, refined form of SOA. SOA focuses on an imperative programming style, whereas MicroServices architecture favours a responsive-actor programming style. It amounts to decomposing a large monolithic service into smaller independent services that are self-deployable, sustainable and scalable.
Microservice Architecture – Overview
Just as there is no formal definition of the term MicroServices, there’s no standard model that you’ll see represented in every system based on this architectural style. But you can expect most MicroService systems to share a few notable characteristics.
1. Software built as MicroServices can be broken down into multiple components, so that each service can be deployed and redeployed independently without compromising the integrity of the application.
2. The MicroServices style is usually business and priorities centric. Unlike a traditional monolithic development approach, MicroService architecture utilizes cross-functional teams. In MicroServices, a team owns the product for its lifetime, as in Amazon’s oft-quoted maxim “You build it, you run it.”
3. MicroServices have smart endpoints that process info and apply logic, and dumb pipes through which the info flows. They receive requests, process them, and generate a response accordingly.
4. Control is decentralized between teams, so developers strive to produce useful tools that others can then use to solve the same problems.
5. MicroServices architecture allows neighbouring services to keep functioning when one service fails. The architecture also scales to absorb sudden spikes in client demand.
6. MicroServices are ideal for evolutionary systems, where it is difficult to anticipate the types of devices that may access the application.
In summary, MicroService architecture uses services as small components; is usually business-centric, focusing on product functionality; has smart endpoints but standard input/output mechanisms; is decentralized, including decentralized data management; is designed to auto-scale and be resilient to failure; and is, of course, an evolutionary model.
Knowledge needed to implement MicroService
To conclude our brief overview of MicroServices, we need a basic grasp of the following concepts:
• Object Oriented Programming (OOP) with loose coupling techniques
• Web service / API/ REST—a way to expose the functionality of your application to others, without a user interface
• Service Oriented Architecture (SOA)—a way of structuring many related applications to work together, rather than trying to solve all problems in one application
• Single Responsibility Principle (SRP)—the idea of code with one focus
• Interface Segregation Principle (ISP)—the idea of code with defined boundaries.
Advantages of MicroService
• Evolutionary Design – No need to rewrite your whole application. Add new features as MicroServices, and plug them into your existing application
• Small Codebase – Each MicroService deals with only one concern (separation of concerns); this results in a small codebase, which means easier maintainability.
• Auto Scale – Freedom to scale only the service under load, since that service handles the bigger share of traffic.
• Easy to Deploy – Deploy only the needed codebase, instead of redeploying the entire application.
• System Resilience – If some of the services go down only some features will be affected, not the entire application.
Challenges of MicroService
The MicroService architecture helps a lot, but comes with its own challenges.
• Inter-Service Communication – MicroServices rely on each other and have to communicate. A common communication channel needs to be established, using HTTP, an ESB, etc.
• Health Monitoring – There are more services to monitor which may be developed using different programming languages.
• Distributed logging – Each service has its own logging mechanism, resulting in GBs of distributed, unstructured data.
• Transaction Spanning – MicroServices may cause transactions to span multiple services and databases. An issue caused in one place can surface as a different issue somewhere else.
• Finding root cause – Distributed logic with distributed data increases the effort of finding the root cause. Performance-related root causes can still be managed using APM tools like New Relic and Dynatrace.
• Cyclic dependencies between services – Reproducing a problem proves very difficult when it disappears in one version and comes back in a newer one.
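The inter-service communication challenge in particular means every remote call can fail, so callers typically wrap calls in a retry policy. Below is a minimal sketch of that idea; `flaky_service` is a simulated downstream service, and in practice `call` would be a real HTTP client request with a timeout:

```python
import time

# A minimal sketch of resilient inter-service communication: a call to
# another MicroService can fail, so it is wrapped in a retry policy.
def call_with_retry(call, retries=3, backoff=0.0):
    """Try a remote call up to `retries` times before giving up."""
    last_error = None
    for attempt in range(retries):
        try:
            return call()
        except ConnectionError as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))   # exponential backoff
    raise last_error

# Simulated flaky downstream service: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("service unavailable")
    return {"status": "ok"}

print(call_with_retry(flaky_service))  # → {'status': 'ok'}
```

Production systems layer more on top of this (circuit breakers, dead-letter queues), but the principle is the same: never assume the service on the other side is up.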
MicroServices architectural style is an important idea – one worth serious consideration for enterprise applications. A Monolithic architecture is useful for simple, lightweight applications. It will be a maintenance nightmare if used for complex applications. The MicroServices architecture pattern is the better choice for complex, evolving applications despite the drawbacks and implementation challenges.
MicroServices have been around for a long time, and recently we are seeing an increase in their popularity. A number of factors drive this trend, with scalability probably the most important. The adoption of MicroServices by “big guys” like Amazon, Netflix, eBay, and others provides enough confidence that this architectural style is here to stay.
Talking of the public cloud, provisioning storage, launching VMs and configuring networks are no longer cutting-edge. New IaaS capabilities enable enterprises to operate their workloads in the cloud, and innovative cloud services are helping organisations drive transformation through agility, cost-effectiveness and reduced IT complexity. With IaaS evolving at a rapid rate, the public cloud is gearing up for the next level.
Cloud providers have already started investing in emerging cloud technologies that will deliver managed services to the customers. Here are six disruptive trends that are shaping the future of the public cloud.
Serverless Computing:
Serverless Computing or, more precisely, FaaS (Functions as a Service) focuses on code instead of infrastructure – delivering what PaaS promises. It enables developers to write modular functions that perform one task at a time. By writing and composing multiple such functions, a meaningful and complex application is built. The best part is that it lets developers select the framework, language and runtime of their choice instead of being tied to a particular platform. This means each developer has the liberty to choose a preferred language and deliver a module.
Serverless Computing or FaaS is rapidly becoming the most preferred way of running code in the cloud.
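The shape of such a function can be sketched briefly. The handler signature below mirrors AWS Lambda’s Python convention (`event`, `context`); the event fields are hypothetical:

```python
# A minimal sketch of the FaaS model: a single-purpose function that the
# platform invokes once per event. Event shape here is illustrative.
def handler(event, context=None):
    """Do one task: return a greeting for the supplied name."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "Hello, " + name + "!"}

# Locally, the platform's invocation can be simulated with a direct call.
print(handler({"name": "cloud"}))  # → {'statusCode': 200, 'body': 'Hello, cloud!'}
```

The developer writes only this function; provisioning, scaling and billing per invocation are the platform’s problem, which is exactly the FaaS promise.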
Blockchain as a Service:
Bitcoin was declared dead long ago, but the technology behind it is alive and kicking, making the public cloud all the more powerful. Blockchain is a cryptographic data structure used to create a digital ledger of the transactions happening across distributed networks of computers. It eliminates the need for a central authority, as cryptography is the only means of manipulating the ledger. In this environment, transactions are immutable, meaning operations once made cannot be modified; transactions are verified by the parties involved in them.
Blockchains have many use cases in the domains spanning across manufacturing, finance, healthcare, supply chain and real estate.
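The immutability property described above comes from each block storing the hash of its predecessor. A minimal, illustrative sketch (not a real blockchain implementation) shows why tampering with an old transaction is detectable:

```python
import hashlib, json

# A minimal sketch of blockchain immutability: each block stores the hash
# of the previous block, so altering any past transaction breaks the chain.
def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transaction})
    return chain

def verify(chain):
    """Recompute every link; any tampered block invalidates its successors."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, {"from": "a", "to": "b", "amount": 5})
append_block(chain, {"from": "b", "to": "c", "amount": 2})
print(verify(chain))            # → True
chain[0]["tx"]["amount"] = 500  # tamper with an old transaction
print(verify(chain))            # → False
```

Real blockchains add consensus, proof-of-work or similar mechanisms on top, but this hash-chaining is the core reason the ledger cannot be quietly rewritten.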
Cognitive Computing:
Cognitive Computing adds human senses to computers. It simulates human thought by applying the latest technologies like natural language processing, machine learning, neural networks, deep learning and, of course, artificial intelligence.
The factors fuelling the trend of Cognitive Computing include affordable hardware, abundant storage, seamless connectivity and plentiful compute capacity.
The heavy lifting needed to process the inputs for cognitive computing is handled by deep-pocketed cloud providers. Only the simplest of APIs are exposed to developers, who use them to build compelling interfaces for applications.
Data Science as a Service:
Managed NoSQL and relational databases started the data revolution in the cloud, but Hadoop and Big Data empowered the public cloud.
Public Cloud Data Platform takes care of everything spanning from data ingestion to processing, analysis and visualisation. Machine Learning for data enables organisations to tap the power of data analysis and execute predictive analytics.
As organisations shift their data to the public cloud, cloud providers will cater to them with an end-to-end approach that delivers more actionable insights to customers.
Verticalized IoT PaaS:
The Internet of Things – the next big thing, already taking distributed computing by storm – is being deployed by organisations for device management, predictive analytics, data processing pipelines and business intelligence.
Mainstream cloud providers are reaping the benefits of IoT to drive device management, data processing capabilities and cloud-based M2M connectivity.
It is expected that, going forward, cloud providers will use IoT platforms to target the automobile, retail, manufacturing, healthcare and consumer markets. IoT is soon going to become the prime enabler for Data Science as a Service.
Containers as a Service:
Containers are already buzzing in the cloud market. Though the technology is barely two years old, enterprises are readily using containers alongside VMs.
New categories like orchestration, logging, security, monitoring and container management are evolving rapidly. However, when microservices and container workloads become mainstream, they will increasingly dominate the public cloud deployment space. It is poised to be the fastest growing delivery model in the arena of the public cloud.
In conclusion, the future of the cloud will be dictated by data-driven applications powered by Blockchains and IoT. Containers, serverless services and MicroServices will be used to deal with the abundance of data hitting the cloud!