Web Performance – A Critical Success Factor for eRetailers

Since the rise of online shopping in the late 1990s, we have seen many evolutions in the underlying technology infrastructure and in consumer expectations. The spread of broadband access in homes and businesses, and the advent of mobile, have arguably placed the 'Online Channel' ahead of other retail channels.

A recent survey found that around 80% of shoppers research online before making a purchase, hopping across devices to suit their needs. And 3 in 4 shoppers will abandon a site if it does not load within 3 seconds. These are staggering figures, and such user behavior and expectations have serious consequences for an online retailer with an underperforming site.
Let’s look at some of the facts that are making ‘Web Performance’ a critical success factor for eRetailers (Online Retailers).

Fact 1: The need for back-end IT integration and a 'Seamless Experience' for the end customer

The rise of the online channel did not eliminate the need for brick-and-mortar (B&M) channels; it only made it more important for retailers to maintain consistent messaging across all channels, so that consumers see 'One Brand' rather than multiple competing channels. A recent survey conducted by Sterling Commerce and Demandware found that 85% of respondents expect a seamless experience across all channels. So, to project a 'One Brand' image and meet the customer's expectation of a 'Seamless Experience', retailers must integrate their back-end IT landscape. This integration brings advantages such as:

  • A single view of customers and products
  • A continuous state of interactions
  • The opportunity to optimize processes and run insightful analytics
  • Consistent messaging and branding

This means that an immense amount of data has to be gathered, collated and presented. Additionally, there are many process-intensive actions. Adding all this data to the web page bloats the page size, consumes more CPU and memory, and impacts the overall performance of the retailer's website.

Fact 2: Increase in mobile and social media adoption among consumers
Trends in mobile and social media indicate growing retailer presence and sales through these channels. For example, sales from mobile devices were projected to reach 37% of total online sales by September 2013, up from 17% a year earlier. The number of smartphones, and of shopping apps on them, is also on the rise.
This means that retailers not only have to provide a mobile-enabled retail store but also have to support the various makes, models and operating systems of these devices. The app has to perform well on all these combinations; if it does not, users have app stores and mobile browsers at their fingertips and can jump to a competitor's store in a flash!

Fact 3: Rich user experience demands of consumers
Online customers want a rich and engaging experience, but at the same time they expect websites to perform and respond quickly.
Retailers naturally want to meet or exceed consumer expectations in this area, so they add channel integration, customized recommendations, product reviews/alternatives/comparisons, interactive UIs, video demonstrations, customer purchase history and social media integration. As a result, the average web page has grown tremendously over the last few years: it now exceeds 1 MB in size, with over 100 objects per page.
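
To make the 'page weight' point concrete, here is a minimal sketch (Python 3 standard library only; the URL is a hypothetical placeholder) that counts the resource-triggering tags on a page and reports its raw HTML size. A real audit would use a tool like WebPageTest or a browser's developer tools, which also measure images, fonts and third-party scripts.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class ResourceCounter(HTMLParser):
    """Counts tags that typically trigger additional HTTP requests."""
    RESOURCE_TAGS = {"img", "script", "link", "iframe", "source"}

    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.RESOURCE_TAGS:
            self.count += 1

html = urlopen("https://example.com").read()            # hypothetical retail page
parser = ResourceCounter()
parser.feed(html.decode("utf-8", errors="replace"))
print(f"HTML size: {len(html) / 1024:.1f} KB, resource tags: {parser.count}")
```
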
This page size, together with any third-party code integration, is bound to hurt performance. Here are some examples of the impact of underperforming online retail sites:

  • A study of a travel website shows that 57% of users will abandon a site if it does not respond within 3 seconds.
  • 60% of mobile users expect a site to load in under 3 seconds; if it does not load within 5 seconds, 74% will abandon it.
  • 79% of online shoppers who experience dissatisfaction are likely never to buy from that website again.
  • 46% of dissatisfied online shoppers develop a 'negative perception' of the company. With easy access to social media, that negative perception propagates quickly and can damage the company's reputation and brand image, in turn impacting prospective sales.

Conclusion:
Surprising as these figures may be, the financial implications of user impatience are even more shocking.

  • Slowing down the page load by just one second could cost Amazon $1.6 billion in sales each year.
  • Almost 3 billion searches are done on Google each day, and 95% of Google's revenue comes from advertising. Slowing Google's search results by just four-tenths of a second could result in a loss of 8 million searches per day, meaning many millions fewer online adverts served (a back-of-the-envelope calculation follows below).
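
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch; the ads-per-search figure is a hypothetical placeholder, not a reported statistic:

```python
searches_per_day = 3_000_000_000
searches_lost_per_day = 8_000_000        # from a 0.4 s slowdown, per the study cited above
print(f"Share of searches lost: {searches_lost_per_day / searches_per_day:.2%}")  # ~0.27%

ads_per_search = 2                       # hypothetical assumption
print(f"Ads not served per day: {searches_lost_per_day * ads_per_search:,}")      # 16,000,000
```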

That is how important website performance is. Poor web performance costs retailers:

  • Loss of loyal customers
  • Loss of brand reputation
  • Loss of revenue (through fewer page visits, higher page abandonment, lower customer satisfaction and fewer conversions)

The bottom line is: "Poor online retail site performance = Poor user experience = Less time on site = Lower conversions". When converting a visit into a purchase is where the money is for retailers, web performance matters!

Note: Please go through our recent webinar, which shares a holistic approach to the technology, processes and tools for achieving world-class web performance for online retail storefronts.

Startup Sutra: To Scale Quick, Ride A Cloud

Small is Big makes a catchy label for a startup to stick at the office water cooler. But Small is Big with cloud computing makes for business gyan. To put it another way: Startup + Cloud = another Facebook-style valuation in the works (read on to know how). So think big. Work smart. Keep it lean and mean. Deliver stuff that works straight off the shelf. That's what the cloud is all about, particularly for a startup: enabling anyone to do any work, or any play, anywhere, anyplace, anytime. Is that not why, when people say they are on a cloud, they mean they are on cloud nine, eight times out of nine?

Reverse the equation for a moment. What if you are a startup actually offering cloud services? Impossible is nothing! You can potentially set investors' pulses racing and have over-eager venture capitalists knocking on your doors. Workday, a young Californian firm selling cloud-based software, hit pay dirt managing the back offices of large companies and ended up with a valuation of nearly $4 billion on the New York bourses. Another company, Yammer, which offers social networking software, was snapped up by Microsoft for $1.2 billion.

Let's rewind to Ground Zero, when you have just buckled your straps and are starting from scratch. As a startup, you cannot afford to be straitjacketed. You need to keep your options open, so that one door opens when another closes.

Suppose you invest big in creating an all-purpose, fully loaded virtual architecture, and this model ends up as a white elephant? All the more sensible, therefore, to keep your investment in virtual architecture lean, mean and minimal, and to leverage cloud services to the maximum for accessing application infrastructure, processing, storage and so on.

Unless you are starting your enterprise with a billion dollars (!), your number one concern will be how to spread your costs thin. Remember Google's pay-per-click (PPC) concept? It's the same with startups using cloud services: you pay only per use, per user, or per quantity of processing and storage, as the sketch below illustrates.
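
Here is a back-of-the-envelope sketch of that pay-per-use logic; every figure is a hypothetical placeholder, not a quoted price:

```python
upfront_server_cost = 20_000            # buy-and-own hardware, paid on day one
cloud_rate_per_hour = 0.50              # hypothetical on-demand rate
hours_actually_used = 8 * 22 * 12       # 8 h/day, 22 working days/month, 12 months

cloud_cost = cloud_rate_per_hour * hours_actually_used
print(f"Cloud: ${cloud_cost:,.0f} vs upfront: ${upfront_server_cost:,.0f}")
# Cloud: $1,056 vs upfront: $20,000 -- you pay only for the hours you use.
```
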
With cloud services, your resources are "elastic", and you enjoy out-of-the-box mobility: easy, instant access to IT facilities from any suitably configured device, including faster access to the latest software and hardware upgrades. For instance, days after your new state-of-the-art server farm arrives on its pallets, the market is abuzz with the launch of a new server that has double the processing power at half the cost of yours! But if you have adopted the cloud model, you can access up-to-date hardware resources and software functionality, including newly added features, at little or no extra cost.

However, many startups would like to cross the bridge to the cloud only when it becomes par for the course and not when it is still a fashion statement.

For instance, in situations where data requirements are huge, working on a smartphone view is like watching the spectacular Avatar on a 9-inch screen and writing a review of it!

When a startup relies on a network provider for most, if not all, of its IT needs, how will it cope in the event of a network disruption? How will you ensure uptime if you lose connectivity to your data? How will you manage your Windows Active Directory servers?

Cloud for startups has its advocates and critics, and it would be fair to say that it is an idea whose time will not pass for some time to come. We wish we had Steve Jobs to ask the right questions and provide better answers. Or is it that he is already on a cloud?

If you want to bootstrap your way to scale, your ticket is a cloud away.

Mapping the Organizations in Year 2020

Year 2020: Don’t bet your company will be the same as now. And by the way, don’t bet your company will change beyond description. There will be change, but it won’t be disruptive.

Recent research by the Economist Intelligence Unit suggests that companies will be larger and more globally integrated, with better information flow and collaboration across borders, less centralization, flatter hierarchies and more empowered employees.

Employees will not just be knowledge workers of today, but active stakeholders in decision-making. They will double as data scientists, not because of a decree from the boss, but because of their ability to play multiple roles. For example, a LinkedIn employee uses analytics to come up with the popular “People You May Know” feature. A Facebook team creates a new coding language. And the boss cannot turn around and say, ‘I told you so’.

Size will not matter. It doesn't really even today. Anecdotal stories of David vs Goliath will become more routine than rare, more fact than fiction. In fact, size could well be a disadvantage. Value creation will not depend on a company being an 800-pound gorilla, but on the ability of individuals to connect with one another.
Speed to market and speed to work will be the new dynamic on demand. To study it in contrast, consider the term "spinning the tape", a fashionable piece of jargon among balance-sheet accountants. Spinning the tape refers to the static way accounting data has been analyzed for years. The new paradigm could be described as "speeding the tape": you could be working on a deadline that was yesterday and be expected to deliver just in time.
Employee loyalty will become virtually extinct. Blame it on global operations, emerging markets and demographic pressure. 360-degree appraisals will be the norm: the boss will review your performance, and you will review his. It cuts both ways.

Management could be localized while company outlook is globalized. Cross-cultural hires will be more frequent, and people with poor soft skills will not get a foot in the door. Perform or perish will be the universal credo of organizations.

More organizations will invest in R&D and use data silos to test product launches. The metrics will vary from division to division. For instance, Google manages its offices in Paris and New York in different ways, for there is no such thing as one-size-fits-all for the organizations of the future.
But not everything will be hunky-dory; it never is, anywhere in enterprise history. Serving different kinds of customers in different countries, through a workforce equally drawn from different lands and speaking different languages, creates a whole new set of challenges for organizations. Consider working at odd hours: outsourcing to call centers began as a great cost-cutting idea (and still is), but intangible costs such as employee migration, employee retention and the emotional toll of graveyard shifts will pose formidable challenges.

The future workplace calls for leaders with a holistic view of conducting business and managing people. Organizations will have to keep up with the science, step out of the fast lane now and then, and work on themselves. We shall be reminded often that success, as Bill Gates famously said, is a lousy teacher…

Cloud based QA Infrastructure

A silver bullet to ward off traditional challenges

If you have some spare time at the office, spare a thought for the CIO in the IT industry. A blitzkrieg of challenges awaits the CIO every day as he settles down at his desk after wishing his colleagues, rather ironically for him, a "good morning". Here's how the dice roll for him every day at work:

Existing Scenario:

a)    Shrinking budget

b)    Increasing cost pressures

Expectations:

a)    Cut IT spend

b)    Deliver value and technology edge

Preferred Solution:

a)    Enhance ROI generated from IT components

b)    Increase focus on QA infrastructure and maintenance costs

c)    Lean on test managers to reduce QA infra costs as they form a major chunk of IT infrastructure budgeting.

Cutting costs, a Catch-22 situation

On the other side, test managers face a Catch-22, as a cut in QA infrastructure spend could impact the quality of software deliverables. Here are a few examples of the challenges that drive IT costs upwards when creating and managing QA infrastructure:

  • Testing operations are recurring but non-continuous. This means test infrastructure is sub-optimally utilized, which has a significant impact on ROI.
  • Testing work spans a wide spectrum: on-time QA environment provisioning for multiple projects, decommissioning and reassignment of QA environments to other projects, QA environment support, incident management, and configuration management across projects. All of this requires an organization to allocate and maintain proportionately skilled resources at all times, which drives costs upwards.
  • CIOs and test managers are expected to ensure testing runs on recommended hardware, because most issues found at later stages of the quality gate are attributed to testing on inadequate hardware. This again accounts for a significant chunk of the total IT budget.
  • Getting appropriately specified QA infrastructure up and running in time (including procurement and leasing) to meet set timelines demands more IT staffing resources.
  • Many test managers give the go-by to the staging environment and deploy directly to production because of budget constraints; however, a staging environment that mimics production is critical to the quality of software in production, and creating such an environment also consumes a huge chunk of the total IT budget.
  • Today's complex application architectures involve multiple hardware and software tools, which require substantial investment of time, money and resources in coordination, SLA management and procurement across multiple vendors. Taken together, these add up to larger budget allocations.
  • For performance testing, test managers need to set up a huge number of machines in the lab to generate the desired number of virtual users, demanding still more budget from CIOs.

The Case for QA Infrastructure as a Service in the Cloud

All these challenges are pushing CIOs and test managers to move away from on-premises QA infrastructure and scout for alternatives such as cloud computing for creating and managing QA environments. Organizations are leveraging cloud computing to significantly lower IT infrastructure spend on QA environments while still delivering value, quality and an efficient QA lifecycle. Already, many players big and small, such as Amazon, IBM, Skytap, CMT, Joyent and Rackspace, offer QA infrastructure as a service in the cloud. Using such a service, organizations can set up QA infrastructure in the cloud, shifting spend from CAPEX to OPEX. CIOs, too, are able to significantly squeeze both CAPEX and OPEX, thereby meeting the budget cap without compromising on the quality of the solution.

How does it work?

Assume that a QA team needs a highly complex test environment configuration in order to test a new application. Instead of setting up an on-premises QA environment (which requires hardware procurement, setup and maintenance), a QA team member logs in to the QA infrastructure service provider's self-service portal and follows the steps below (a code sketch of these steps follows the list):

* Creates an environment template with each tier of the application and network elements such as web server, application servers, load balancer, database and storage. For example, a QA team member can fill in the web server template as "web server with large instance and Windows Server 2008".

* Submits the request through the IaaS provider's portal.

* The service provider provisions this configuration and hardware in minutes and sends a mail to the QA team.

* The QA team uses the test environment for the required time and completes the testing.

* The QA team releases the test environment at the end of the testing cycle.

* For subsequent releases, the environment can simply be set up again from the same template, and the QA team can deploy the new code and start testing.

* The service provider bills only for actual usage of the QA environment.
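
As one concrete illustration of these steps, here is a minimal sketch using AWS EC2 via the boto3 SDK (Amazon being one of the providers named above). The AMI ID, instance type, region and tags are hypothetical placeholders; other providers expose similar APIs or portal-driven templates.

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Steps 1-3: submit the template -- launch a web-server tier from a saved image.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",    # hypothetical "Windows Server 2008" image
    InstanceType="m1.large",            # the "large instance" from the template example
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "QA-environment"}],
    }],
)
qa_server = instances[0]
qa_server.wait_until_running()          # the provider provisions in minutes
qa_server.reload()
print(f"QA web server ready at {qa_server.public_ip_address}")

# Step 5: release the environment at the end of the testing cycle;
# billing stops once the instance is terminated.
qa_server.terminate()
```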

How does it help?

Elastic and scalable data center with no CAPEX investment: CIOs and test managers don't have to worry about budgeting, procurement, setup and maintenance of the QA environment. Organizations simply develop their applications, create a template of the required environment, and request the service provider to enable the test environment. The QA team then deploys the application on a production-like environment, saving time and expense over traditional on-premises deployment. This shifts IT infrastructure spending from CAPEX to OPEX.

QA teams can provision their own environments: QA teams can provision their own environments on demand, rather than going through a long IT procurement process to set up an on-premises test environment.

Multiple parallel environments: QA teams can create different environments with different platforms and application stacks, with no capital investment in multiple sets of hardware, reducing go-to-market time.

Minimize resource hoarding: Instead of setting up on-premises test environments and investing capital in hardware, QA teams can deploy environments in the cloud on a need basis and release the resources after testing completes. Some service providers offer a 'suspend and resume' facility, in which case QA teams can suspend an environment, saving its entire state including memory, and resume it later when required.

The bottom line: QA environments in the cloud are lifesavers for companies. CIOs are steadily adopting cloud-based QA infrastructure and moving away from on-premises QA infrastructure, which demands huge CAPEX and OPEX and yields less ROI. Cloud-based QA infrastructure, if managed smartly, is a silver bullet that can neutralize most of the challenges CIOs and test managers face with traditional QA infrastructure.

Big Data: The Engine Driving the Next Era of Computing

You are at a conference. Top business honchos are huddled together with their Excel sheets and paraphernalia. The speaker whips out his palmtop and mutters "big data". There follows an impressive hush. Everyone plays along. You feel emboldened to ask, "Can you define it?" Another hush follows. The big daddies of business are momentarily at a loss. Perhaps they can only Google. You get it? Everyone knows, everyone accepts, that big data is big, but no one really knows how, or why. At any rate, no one knows enough straight off the bat.

In the Beginning was Data. Then data defined the world. Now big data is refining the data-driven world. God is in the last-mile detail. Example: in the number-crunching world of accountancy, intangibles are invading balance-sheet values. "Goodwill" is treated as an expense; it morphs into an asset only when it is acquired externally, say through a market transaction. Data scientists now ask: why can't we classify Amazon's vast pool of customer data as an "asset"? Think of it as the latest straw in the wind showing how big data is getting bigger.

Big data is getting bigger and bigger because data today is valued as an economic input as well as an output. The time for austerity is past; now is the time for audacity. Ask how. Answer: try crowdsourcing your data-defining skills.

Big data isn't about bits or even gigabytes. It's about talent. Used wisely, it helps you make decisions you trust. Naysayers, of course, see the half-full glass as if it were under threat of overspilling. They insinuate that big data leads to relationships that are unreal. But the reality we don't know is what lies behind all that big data. It is, after all, a massy and classy potpourri: part math, part data, with some intuition thrown in. It's OK if you can't figure out the math in big data, because it is all wired in the brain, and certainly not fiction or a figment of the imagination.
When you were not watching, big data was changing the way technology enablers play the game in the next era of computing. Applications are doing a lot more for a lot less. Just to F5 (we mean refresh…):
You and I can flaunt a dirt-cheap $50 computer the size of a palm AND use the same search-analysis software that is run by the obscenely wealthy Google.

Every physical thing is getting connected, somewhere, at some time, in some way or another. AT&T claims a staggering 20,000% growth in wireless traffic over the past 5 years. Cisco expects IP traffic to leapfrog ahead and grow four-fold by 2016. And Morgan Stanley breezes through an entire gamut of portfolio analysis, sentiment analysis, predictive analysis et al. for all its large-scale investments with the help of Hadoop, the top dog for analyzing complex data. Retail giant Amazon uses one million Hadoop clusters to support its affiliate network, risk management, machine learning, website updates and lots more stuff that works for us.
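
Hadoop's pull comes from the simple map/reduce programming model it scales out across clusters. Here is a minimal single-machine sketch of that model in plain Python; the two-document "corpus" is a hypothetical stand-in for the sentiment and portfolio analyses mentioned above.

```python
from collections import Counter
from functools import reduce

documents = ["the cloud is big", "big data is bigger"]   # hypothetical corpus

# Map phase: each document independently emits per-word counts.
mapped = [Counter(doc.split()) for doc in documents]

# Reduce phase: partial counts are merged into one global result.
totals = reduce(lambda a, b: a + b, mapped, Counter())
print(totals.most_common(3))    # e.g. [('is', 2), ('big', 2), ('the', 1)]
```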

Data critics, though, are valiantly trying to hoist big data with its own petard by demanding proof of its efficacy. Proof? Why? Do we really need to prove that we have never had greater, better-analyzed, more pervasive or more expansively connected computing power and information, at a cheaper price, in the history of the world? Give the lovable data devil its due!

In the Cloud, Don’t KISS.

Remember the Y2K dotcom era, when every Tom, Dick and Harry rushed to ride the Internet bubble? It looks like many of us forgot our lesson the instant Internet 2.0 (or is it 3.0?) made a comeback on a cloud: signing up for cloud services like you are applying for a credit card. The herd mentality, you know.

To get smarter, faster and better, go easy, and then act with speed. That's how you win the race. Just because your competitor, your associate or your vendor is moving to the cloud doesn't mean you should mimic them without further thought. Think before you ink an SLA. Is your Cloud Service Provider (CSP) capable of delivering standards-based cloud solutions designed from the ground up to meet your specific enterprise requirements? Does your Service Level Agreement with your CSP also cover your requirements for monitoring, logging, encryption and security? Do you have the domain-specific IT knowledge and expertise, and the corresponding environment, in place before signing up for a cloud solution? And are your security protocols in optimum functional mode?

Security protocols: keep a hawk's eye on them. In CIO circles, they warn you not to KISS (Keep It Stupid & Silly) when you sign up for the cloud. KISS refers to common enterprise mistakes such as failing to register your passwords and individual IDs with the enterprise, turning a deaf ear to demands for secure Application Programming Interfaces (APIs), and wrongly assuming that you are outsourcing risk, accountability and compliance obligations to the cloud as well.

The ironic part of this business of securing the cloud is the challenge of arriving at an ideal tradeoff between the enterprise's need for security and the consumer's need for privacy. The Economist, in "Keys to the Cloud Castle", succinctly sums up this dilemma faced by cloud-based storage and synchronization providers such as Dropbox, using a house metaphor. Which do you prefer: access through a master key held by an authorized internal security team, or access whereby you choose your own key? The problem with the former is the key falling into the wrong hands; with the latter, the danger is losing all access if you lose the key through negligence. Cloud security scientists therefore constantly look for a middle path that combines privacy with security.

Does this mean that perfectly secure cloud computing is still a chimera? Happily for us, recent research in cryptography shows that homomorphic encryption – an approach that would enable a web user to send encrypted data to a server in the cloud, which in turn would process it without decrypting it and send back a still-encrypted result – is well on its way to becoming a pursuit of wow among CIOs.

A clearly demarcated delegation of tasks between cloud providers and security providers could serve as a rule of thumb for ensuring both security and privacy. Cloud providers should focus on providing access anywhere, anytime, while security providers should focus on core encryption. An integration of both these services can lead to a seamless and secure user experience. For example, you as a user encrypt your files directly on your laptop, desktop or phone, and then upload the encrypted documents to the cloud.
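
Here is a minimal sketch of that client-side-encryption pattern, using the third-party `cryptography` package's Fernet recipe (the file name and contents are hypothetical placeholders); the point is that only ciphertext ever leaves your machine.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # keep this key local; losing it means losing access
cipher = Fernet(key)

plaintext = b"quarterly sales figures"        # stands in for a local file's contents
ciphertext = cipher.encrypt(plaintext)

with open("report.enc", "wb") as f:           # this ciphertext is what gets uploaded
    f.write(ciphertext)

# Later, after downloading the ciphertext back from the cloud:
assert cipher.decrypt(ciphertext) == plaintext
```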

Bottom line: don't sign up for the cloud like you are applying for a credit card. Outsourcing your ideas doesn't mean you also outsource your thinking.

For Better Cloud Security – Wheel It Differently, Instead of Reinventing the Wheel!

SaaS served as sauce? Wow. But only as long as it's secure. And that's where the penny drops. No matter: big money is now way too big on cloud services. We can't roll back the Age of Participation. The jury may be pondering how secure the cloud is, but the verdict is only going to tweak "how secure is the cloud" into "how to secure the cloud".

Yes, there is a cloud over the cloud. Less than a year ago, hackers stole 6 million passwords from the dating site eHarmony and from LinkedIn, fueling the debate over cloud security. Dropbox, a free online service that lets you share documents online, became "a problem child for cloud security", in the words of one cloud services expert.

The "Notorious Nine" threats to cloud computing security, according to the Cloud Security Alliance (CSA), a not-for-profit body, are: data breaches, data loss, account or service traffic hijacking, insecure interfaces and APIs, denial of service, malicious insiders, cloud abuse, insufficient due diligence, and shared technology vulnerabilities.

However, a problem is an opportunity in disguise, and so the algorithm waiting to be discovered is how to outsmart the hackers and overcome the threats to cloud security. More so since the advantages that accrue from cloud services – flexibility, scalability and economies of scale, for instance – far outweigh the risks associated with the cloud.

One way to better cloud security is to use a tried, tested and trusted Cloud Service Provider (CSP) rather than self-designing a high-availability data center. A CSP also yields greater economies of scale.
Virtualized servers, though less secure than the physical servers they replace, are becoming more secure than before. According to Gartner research, 60% of virtualized servers were less secure than the physical servers they replaced in 2012; by 2015, that figure is expected to fall to 30%.

To do the new in cloud security, we could begin by reinventing the old. The traditional methods of data security – logical security, physical security and premises security – also apply to securing the cloud. Logical security protects data through software safeguards such as password access, authentication and authorization, and by ensuring proper allocation of privileges.

The risk in cloud service offerings arises because a single host running multiple virtual machines may be attacked through one of the guest operating systems, or one guest operating system may be used to attack another. Cloud services are accessed over the Internet and so are also vulnerable to denial-of-service attacks and widespread infrastructure failures.

Traditional security protocols can also be successfully mapped to work in a cloud environment. For example, traditional physical controls such as firewalls, Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS) and Network Access Control (NAC) products that ensure access control can continue to be critical components of the security architecture. However, these appliances no longer need to be physical pieces of hardware: a virtual firewall, such as Cisco's security gateway, performs the same functions as a physical firewall but has been virtualized to work with the hypervisor. This is catching on fast; Gartner researchers predict that by 2015, 40% of security controls in data centers will be virtualized.

Moral of the cloud: You don’t have to reinvent the wheel to secure the cloud. But we need to keep talking – to wheel it differently.

Relevance of Knowledge Management for a Testing Center of Excellence

Knowledge management (KM) can, in simple terms, be defined as a collection of strategies and practices used to assimilate, articulate, share, sustain, reuse and retire knowledge. KM is undoubtedly vital for prolonging the life of knowledge. Many of the beliefs we hold today (from religion to our understanding of how prehistoric humans lived) are strongly influenced by reading and understanding the documentation (scriptures, carvings and drawings) created by the early KM pioneers. KM is just as relevant in the present-day context: in the new economy, achieving sustained competitive advantage depends on an organization's capacity to develop, deploy and efficiently use its KM strategies.

Every piece of knowledge that is managed through KM is a Knowledge Asset (KA). In today's organizations, knowledge assets can include anything from business processes, innovative ideas and lessons learned to FAQs. Industries like manufacturing, logistics and healthcare have employed KM for centuries. In the modern world, many of us have seen the importance and relevance of KM for operations like help desks and customer care. But it is equally important and relevant for the whole IT services industry, and particularly for Testing Centers of Excellence (TCoEs).
Many organizations set up TCoEs with the intent of creating a centralized team that can independently verify and validate IT solutions from the business standpoint. While there are many advantages to having a TCoE (such as the sharing of best practices, tools, techniques and standards – all knowledge assets that can be efficiently managed through KM), the biggest advantage is that this is the team that understands and evaluates IT solutions from the business standpoint. This very premise gives the team a distinct advantage in gaining a deeper understanding of business processes and their interdependence.

So, in my opinion, TCoEs are well equipped to champion KM for 'business process' knowledge assets (with review and approval from the business). A TCoE can roll business processes into its KA portfolio, along with full life-cycle management of TCoE-related assets such as processes, innovations, tools, best practices and lessons learned.

In my personal experience, having set up and worked in many TCoEs, I have seen a tremendous change in the way TCoEs work over the last 15 years. Some of the challenges TCoEs face include flexing resourcing up and down (just-in-time resourcing) to meet changing business needs, optimizing cost, and improving the quality of delivery. While these challenges are business-driven, there is another challenge that plagues the IT industry, especially offshore companies: constant churn in resources. One solution that can effectively address all of these challenges is the efficient implementation and use of KM.

The following are some of the key points that must be kept in mind for KM (a minimal sketch of a knowledge-asset record follows the list):
1. Define which KAs will be managed through KM
2. Work closely with the business to capture business processes, and get them reviewed and approved before managing them in KM
3. Map business processes to IT solutions/architecture so there is a correlation between these KAs
4. Assign a champion for each KA area who is responsible for managing the life-cycle of those KAs
5. Use a platform that enables the publication and sharing of KAs (there are many commercial and open-source tools, such as SharePoint and Redmine, that can be customized for this purpose)
6. Use the KM portal as a key source of knowledge acquisition for new entrants to the project/program
7. Make sure KAs are constantly reviewed to ensure they are accurate and up to date
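
As a minimal sketch of what a managed KA record might look like (all field names are illustrative assumptions, not a prescribed schema), covering points 3, 4 and 7 above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeAsset:
    title: str
    category: str                    # e.g. "business process", "lesson learned"
    owner: str                       # the champion responsible for this KA (point 4)
    related_systems: list[str] = field(default_factory=list)    # point 3 mapping
    last_reviewed: date = field(default_factory=date.today)

    def needs_review(self, max_age_days: int = 180) -> bool:
        """Flag assets whose content may be stale (point 7)."""
        return (date.today() - self.last_reviewed).days > max_age_days

ka = KnowledgeAsset(
    title="Order-to-Cash process",
    category="business process",
    owner="TCoE KM champion",
    related_systems=["ERP", "Payment gateway"],
)
print(ka.needs_review())    # False for a freshly reviewed asset
```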

The bottom line: successful TCoEs have strong KM frameworks and practices in place, and world-class TCoEs include business process KAs as part of their KM strategy.