Blogging was where we began, and how we built our company, so we have preserved this archive to show how our thinking developed over a decade of working with social technology inside organisations.

Data ubiquity threatening usefulness of Enterprise 2.0

by Lee Provoost

(This blog post is co-written with Nigel Walsh from the Enterprise Mashups vendor Corizon and started over a bowl of porridge.)

“Content and data are everywhere. People are creating and curating content like never before. As data storage becomes cheaper, businesses are storing, archiving, and mining more data than previously possible. The increasing openness of APIs and data portability make more enterprise data available for both consumers and employees to consume. Free flow of data also allows business partner relationships to be readily analyzed and optimized.” (Emerging Opportunities in Social Business Design)

Filter Failure

With large corporations storing more and more data (be it for compliance, regulatory or internal mining purposes) in their Enterprise 2.0 (and wider IT) systems, we run the danger of ending up with big data silos and disparate solutions. To make matters worse, the data is often stored locally in systems owned by different business units with different purposes. Imagine you have invested a lot of money and effort in a knowledge management system, only to realise after three years that it no longer suits your needs and you need something else. If you have a couple of thousand files, that is still quite manageable. However, if you work in a very knowledge-intensive organisation, three years of data might have accumulated into several hundred gigabytes. Good luck with that migration.

Then go through a merger with your competitor or launch a whole load of new products or services and try to gain consensus and consistency across these disparate solutions.

With the increasing importance (and increasing amount) of data floating around your organisation, it becomes more and more important to think about open standards for data interoperability. Accept the reality that much of your data is stored in silos. What we need to think about now is how to make this data accessible, step by step, so that we don’t need to do tedious and error-prone data migrations when a system no longer copes with our demands.
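One way to make silo data incrementally accessible, rather than migrating it wholesale, is to map each silo’s records onto a shared, documented schema at the point of export. The sketch below illustrates the idea in Python; the record fields and schema names are purely hypothetical, not taken from any real system.

```python
import json

# Hypothetical records as they might sit in one team's knowledge base;
# the field names here are illustrative, not from any real system.
silo_records = [
    {"DocTitle": "Onboarding guide", "Owner": "hr", "Body": "..."},
    {"DocTitle": "Release checklist", "Owner": "eng", "Body": "..."},
]

def to_open_format(record):
    """Map a silo-specific record onto a shared, documented schema,
    so consumers depend on the schema rather than on the silo."""
    return {
        "title": record["DocTitle"],
        "owner": record["Owner"],
        "content": record["Body"],
    }

exported = [to_open_format(r) for r in silo_records]
print(json.dumps(exported, indent=2))
```

Consumers then code against the open schema; when the underlying knowledge management system is eventually replaced, only the mapping function changes, not every consumer.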

Perhaps to your surprise, I’d argue that the data silo lock-in is not your biggest problem. No, the inability to intelligently manage and reuse this volume of content in a meaningful way is a much bigger danger that has a direct impact on your business. Filter failure arises when individuals are unable to synthesize and understand the vast amounts of information being generated by an organisation.

Where the problem used to be getting enough information, now it’s being able to make sense of it all. So in addition to filtering the underlying plethora of data and the applications built on it, you also have to be an inline translator. Anyone dealing with end users directly, e.g. a front-line customer agent juggling lots of applications, knows that users will always speak in their own language and never in that of your systems, applications or processes. More importantly, they have no reason to.
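To make “filtering” concrete: at its simplest, it means ranking content by how well it matches what the user actually asked for, rather than surfacing everything. A toy sketch, with made-up documents and a naive word-overlap score:

```python
# A toy illustration of filtering: rank documents by overlap with the
# words a user actually typed, instead of showing everything.
def relevance(query, document):
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

documents = [
    "refund policy for premium accounts",
    "office parking allocation",
    "how to process a customer refund",
]

query = "customer refund"
ranked = sorted(documents, key=lambda d: relevance(query, d), reverse=True)
```

Real enterprise search uses far more sophisticated scoring, but the principle is the same: the filter, not the volume of data, determines whether the agent can answer the customer.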

The interface is the product

But what exactly are we trying to solve here? Why would we even care about this problem? Just a Bunch of Stuff That Happens coined it perfectly in the following cartoon:


Even this is being kind: the average knowledge worker will use between 6 and 15 of these apps, and we have seen people using upwards of 30 because of these data silos. The typical enterprise application looks much like “Your company’s app” in the cartoon. There is such a vast amount of data flowing around your company that you often end up with these kinds of user interfaces. Instead of achieving the goal of bringing powerful information to the fingertips of the business end-user, they just confuse people, making them unhappy and unproductive. For every new channel (email, web, social channels, …) and for every new product, the quick answer is often to bring in yet another application. This all adds to the complexity and mess on the knowledge worker’s desktop.

And just in case you had forgotten, an IBM Design tweet nailed it:


The end-user of your product doesn’t care what kind of data silos are lying underneath your IT systems. They just want the information they need to do their work, and, very importantly, presented in the context of that work. Put yourself in the consumer’s shoes: your customer doesn’t talk to you in silos and certainly doesn’t want to be treated in silos. Have you ever called your bank, only to be passed around several different departments? We know already that Interactions aren’t Connected, and ultimately we are still running processes as if we were in the industrial age – an assembly line of handoff after handoff.

This is like going to a McDonald’s and asking for a Happy Meal, only to be told to get a drink from one counter, a sandwich from another and fries from a third, while the toy will be sent directly from Mattel – oh, and if you want a straw, napkins and sauce, there’s self-service for that. More of a Meal than Happy! Not exactly fast or convenient food. And what happens when the meal changes, there’s a new toy, or you add something else to the box – how easily can the existing process cope with the change?

Impact of data ubiquity

We have now identified the impact of data ubiquity:

  1. the corporate IT department being challenged by huge data silos (lock-in) and disparate solutions with complex processes that rely on humans to be the integration layer
  2. the business end-user dealing with too many different and complex applications and not being able to make sense of all the data (filter failure), subsequently not being able to deliver the process

In many businesses the delivery of an end-to-end business process relies on users accessing multiple software applications that combine to deliver the complete process. The result can be disjointed processes, mistakes, slow access to required information, no single customer view and, ultimately, a dysfunctional customer experience that the business users cannot influence.

So the goal we’re trying to achieve is to provide business end-users of Enterprise 2.0 systems with meaningful, contextual information in a simple and elegant way – we like to call this fit for purpose!

Mashing it together

As a technologist, your first reaction would be to try to solve the data silo problem. (Let’s ignore for a second the old approach of creating a monolithic repository – the black hole – where we dump everything.) How can we make the data more accessible? Can we wrap web services around it, apply the principles of Service-Oriented Architecture and Model-View-Controller at large scale, add an open data API on top, and so on? That will most likely keep us busy for the coming years. Wake-up call: your customers are not going to wait two years until you have solved your internal issues. This approach also often instigates new shadow projects that proclaim to deliver tactical, quick-win solutions whilst waiting for the ‘nirvana’.

As a business end-user, you’re faced with a proliferation of applications. A lot of time and money has been invested in building these, so why not start by reusing the useful bits of the existing apps to get immediate value?

This is the approach pioneered by Corizon: a user-centered focus approach, starting with the end users and working down – understanding the process they go through to complete a task, be it solving a customer enquiry in the front office or managing work in the back office – referred to as the user process. All too often, technology’s answer to a problem is to upgrade to the latest version because it has all these new features. The problem is that this doesn’t necessarily resolve the original issues: a complex process and too many applications.

Once the user process is clearly defined, we then (or in parallel) look at working up – understanding which data and applications we need access to in order to complete the defined user processes effectively and efficiently. We now understand the pattern: what gets the most use, by whom, and how. This firmly puts the crosshairs on which applications to enable for reuse. Unlike traditional data-reuse approaches, this approach not only enables the useful and required bits of applications, but also defines reusable UI and stores these in a library of reusable services.
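The “working up” step is essentially usage analysis: count which applications the user process actually touches, and let the heaviest-used ones lead the reuse effort. A minimal sketch, with a made-up event stream standing in for real desktop or process analytics:

```python
from collections import Counter

# Hypothetical click-stream of the applications agents touch per task;
# in practice this would come from desktop or process analytics tooling.
usage_events = [
    "crm", "billing", "crm", "knowledge_base", "crm",
    "billing", "legacy_orders", "crm",
]

usage = Counter(usage_events)
# The most-used applications are the first candidates to enable for reuse.
top_candidates = usage.most_common(2)
```

However crude, even a tally like this beats guessing: it shows by observation which applications carry the user process and which are incidental.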

Now we have two key elements defined: a clear user process and a set of reusable UI services. Corizon’s solution then allows you to mash these up to create the optimal interface, be it a standalone UI or consumed as a widget in your Enterprise 2.0 application. Adding new applications, or removing legacy ones, becomes a much less complex task – the UI services approach allows you to interchange them without affecting the interface.
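The interchangeability claim can be sketched in a few lines. This is not Corizon’s implementation, just an illustration of the pattern: each application is wrapped as a small “UI service” behind a uniform interface, and the composed page depends only on that interface, so any one service can be swapped without touching the rest.

```python
# Illustrative only: each legacy application is wrapped as a "UI service"
# with a uniform render interface; the mashup composes them without
# knowing which application sits behind each service.
class UIService:
    def __init__(self, name, render_fn):
        self.name = name
        self.render = render_fn

def customer_summary(ctx):
    return f"Customer: {ctx['customer']}"

def order_history(ctx):
    return f"Orders for {ctx['customer']}: ..."

def mashup(services, ctx):
    # Compose the agent's desktop from whatever services are registered.
    return "\n".join(s.render(ctx) for s in services)

desktop = [UIService("summary", customer_summary),
           UIService("orders", order_history)]
page = mashup(desktop, {"customer": "Acme Ltd"})
```

Replacing the legacy order system then means supplying a new `order_history` function with the same signature; the mashup, and the agent’s interface, are unchanged.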

Good is good enough

At first sight, this approach might sound a bit unconventional, but we’d like to point you to an excellent post by Peter Evans-Greenwood (@pevansgreenwood) about “The Price of Regret”.

Building the big, scalable perfect solution in the first place might be more efficient from an engineering point of view. However, if we make the delivery effort so large that we miss the window of opportunity, then we’ve just killed any chance of helping the business to capitalise on the opportunity. … Size the solution to match the business opportunity, and accept that there may need to be some rework in the future. Make the potential need for rework clear to the business so that there are no surprises. Don’t use potential rework in the future as a reason to do nothing. Or to force approval of a strategic infrastructure project which will deliver sometime in the distant future, a future which may never come.

One thing we’ve learned in this consulting business is that most of the time, good is good enough, since perfection takes an eternity.

One Response to Data ubiquity threatening usefulness of Enterprise 2.0

  1. By Martijn Linssen on January 14, 2010 at 2:49 pm

    Question marks…
    I dig the data stuff. There’s bits, data, (content), information, knowledge. It’s a pyramid, you only want the top, but only the bottom is tangible and not abstract. So you’ll have to sandwich yourself somewhere in between
    What Corizon calls user-centered focus approach (not sure they do but you’ll get my drift) is actually what one would do pre-ESB or pre-SOA in order to establish what the reusables are, the enterprise nuggets so to say
    Exploding data stores are a known fact, but the problem they form is that there’s not a single source of truth for business information. Nobody really cares that data is costing fortunes on storage
    Information is in the systems though, and you can perfectly integrate that machine2machine. Unless your applications are so low-quality that bad data made it through
    Personally I think social will solve this (= information) problem because it will put the human factor back in and enable ubiquitous connections. It’s the people that will indirectly turn data into knowledge because they’ll interchange on the information level
    I’m sure this post will get a follow-up in a little time, there’s a lot in here but it will travel in different directions