A Customer Support “Barn Raising”

(Barn raising image: Ian Adams)

As mentioned here, I had the opportunity to spend yesterday speaking with the fine folks from the Consortium for Service Innovation about emerging collaborative technologies, primarily in the areas of blogging, wikis and social networking. What I expected was a good, conversational session about how collaboration tools could be used to improve customer support. What I didn’t expect was to leave, head spinning with notions, with an entirely different idea on how to address ongoing customer needs over the course of a business relationship. Happily, however, that’s exactly what occurred.

Perhaps a bit of background on the current “state of the art” (::cough::) in customer support is useful to set the stage. Currently, a large number of organizations view customer service and support (that is, interactions with a customer after the initial sale) as a series of “incidents” that need to be “closed.” That is, when a customer has an issue, the customer contacts the organization’s support center, and opens an “incident” (or a “trouble ticket”) that needs to be resolved. This is very transactional, very discrete.

Customer support is also viewed as a cost center by many organizations. This causes support to be measured internally within an organization with an eye toward keeping support investment as low as possible. How to do this? Use the least-expensive resources that can be mustered in order to close the incidents, and close them as quickly as possible.

In emergency and battlefield situations, triage (another definition here) is usually performed in order to best prioritize need and allocate scarce resources. In many modern customer support organizations, a similar idea is used. In customer support orgs, this is usually referred to as “Level 1,” “Level 2,” and “Level 3” support. While the definitions vary, these three support levels are usually defined as some variant on the following:

  • Level 1: The Level 1 support team is the first point of contact in the incident response process. Customer service personnel are responsible for call handling, triage, problem characterization, and resolution of basic problems. Oftentimes, Level 1 Support answers questions by consulting lists of frequently-asked questions (FAQs).
  • Level 2: The Level 2 support team is staffed with support engineers assigned by product type. The support engineers are responsible for lab-based simulation, difficult problem resolution, defect correction or escalation management to Level 3 support.
  • Level 3: The Level 3 support team is staffed with senior analysts, program managers, and development engineers dedicated to working on the critical problems. They are responsible for confirmation of defects, including complex failures, performing interoperability studies, and enacting engineering level changes to permanently resolve any issues in released products.

(The Level 1, 2, 3 definitions above were generalized from this list from Caspian.)
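The escalation funnel described above can be sketched as a simple dispatch loop. A minimal sketch; the issue categories and tier assignments below are invented for illustration, not drawn from any real support system:

```python
# Illustrative sketch of the Level 1 -> 2 -> 3 funnel described above.
# The issue lists and tier names are hypothetical.

LEVELS = [
    ("Level 1", {"password reset", "billing question"}),      # FAQ-style issues
    ("Level 2", {"install failure", "performance"}),          # product support engineers
    ("Level 3", {"data corruption", "crash in production"}),  # senior/dev engineers
]

def escalate(issue: str) -> str:
    """Walk the funnel: each tier must be exhausted before passing the incident on."""
    for level, known_issues in LEVELS:
        if issue in known_issues:
            return level  # resolved at this tier
    return "Level 3"  # anything unrecognized ends up with the senior team

print(escalate("billing question"))  # resolved cheaply at Level 1
print(escalate("data corruption"))   # only the third tier can handle it
```

Note that the customer’s incident touches every tier in order, no matter who is actually best suited to solve it — which is exactly the frustration the rest of the post describes.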

From the vendor’s perspective, this arrangement makes perfect sense. Support is a cost center. Highly trained development resources are expensive. Many problems can be answered by rote, by Level 1 support (read “inexpensive”) personnel. Therefore, the logical thing to do from the vendor’s perspective is to have incidents come into Level 1 support first and, only if they cannot be resolved there, escalate them to Level 2 and then, in very rare cases, to Level 3. It’s all very logical, rational and measurable with a stopwatch and a stats counter.

However, that’s not what customers see. Here’s the customer view of support:


(By the way, the graphics above and below were cribbed from the folks over at XPlane. If you haven’t seen the things that Dave Gray, Aric Wood and the team at XPlane have done around Visual Thinking School, you’re missing out.)

From the customer’s point of view, there’s a gatekeeper who is required to exhaust all (inexpensive, and ineffective) possibilities before escalating the incident to Level 2 or Level 3. If escalated to Level 2, all possibilities must be exhausted before escalating to Level 3. It’s slow. It’s inefficient. It’s maddening from the customer’s perspective.

Now, the Level 1-2-3 customer support arrangement is pretty much standard practice, at least across the technology industry. But what if there existed a better way…? (Cue Wayne’s World dream sequence music here.)

The big “a-ha” moment for me yesterday with the consortium was their idea of “swarming” around an incident. Instead of going through the rigid, rote routine of support escalation, what if there was a model that looked like this?

Instead of pushing issues through a funnel, resources
swarm to an issue to resolve it, then disband.

This idea can be described thusly:

Instead of issues (incidents) being pushed through a series of increasingly onerous screens in order to find a solution, what if the solutions were drawn to the issues?

A reading from the Book of Clue, the word according to Dr. Weinberger (p. 127):

“Here’s one example of how things work in a hyperlinked organization: You’re a sales rep in the Southwest who has a customer with a product problem. You know that the Southwest tech-support person happens not to know anything about this problem. In fact, she’s a flat-out bozo. So, to do what’s right for your customer, you go outside the prescribed channels and pull together the support person from the Northeast, a product manager you respect, and a senior engineer who’s been responsive in the past (no good deed goes unpunished!). Via e-mail or by building a mini-Web site on an intranet, you initiate a discussion, research numbers, check out competitive solutions, and quickly solve the customer’s problem – all without notifying the “appropriate authorities” of what you’re doing because all they’ll do is try to force you back into the official channels.”

Dave Weinberger nailed the fundamental aspect of this concept back in 2000, per the above, and now the members of the consortium are actually thinking about how to get it done. As important as the “markets are conversations” meme from Cluetrain is, this idea of resource swarming to resolve customer issues is equally important.

Put another way: when an incident occurs, the right people from the social network that forms the community (from inside the company, from the existing customer base, from the broader community of interested parties) will find out about the issue, collaborate using a combination of formal (e.g. FAQs, formal diagnostic processes) and informal (blogs, wikis, Skype, instant messaging) mechanisms, and resolve the issue in a real, human and ad hoc manner. The right people come together with their appropriate tools and skills, and they get the job done together, and then disband and reform fluidly for the next incident.

It’s customer support as barn raising. More here.

How to get there? Some ideas to discuss…

What if we abolished the support “department?” What if individuals in the organization were encouraged to invest, say, 10%-20% of their time working directly with customers (not unlike what Google does with their engineering staff and Google Labs)?

What if people within the organization subscribed to an RSS feed of incidents, and jumped in and helped when something in their wheelhouse came across their feed reader?
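That subscription model is easy to prototype. A minimal sketch, assuming the support system exposes incidents as a plain RSS feed; the feed contents and keyword list below are made up:

```python
import xml.etree.ElementTree as ET

# Hypothetical incident feed; in practice this XML would come from the
# support system's RSS endpoint rather than a literal string.
FEED = """<rss><channel>
  <item><title>Printer driver crash on OS X</title></item>
  <item><title>API rate limit question</title></item>
  <item><title>Database replication lag</title></item>
</channel></rss>"""

# The subscriber's self-declared "wheelhouse."
MY_WHEELHOUSE = {"database", "replication", "sql"}

def interesting_incidents(feed_xml: str, keywords: set) -> list:
    """Return incident titles containing any of the subscriber's keywords."""
    titles = [item.findtext("title") for item in ET.fromstring(feed_xml).iter("item")]
    return [t for t in titles if keywords & set(t.lower().split())]

print(interesting_incidents(FEED, MY_WHEELHOUSE))
# -> ['Database replication lag']
```

Each person runs their own filter against the same shared feed, and the swarm self-assembles from whoever’s filter fires.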

What if dynamic, living profiles were set up, a la Haystack, where people within the organization could tag themselves based on their areas of experience and expertise, and be matched up with incidents that they were best suited to resolve?
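The matching step could be as simple as scoring tag overlap between an incident and each self-maintained profile. Everything below (the names, tags, and scoring rule) is hypothetical, a sketch of the idea rather than how Haystack works:

```python
# Hypothetical self-tagged expertise profiles.
PROFILES = {
    "alice": {"networking", "vpn", "firewall"},
    "bob": {"billing", "crm", "reporting"},
    "carol": {"vpn", "authentication", "ldap"},
}

def best_matches(incident_tags: set, profiles: dict, top_n: int = 2) -> list:
    """Rank people by tag overlap with the incident, highest first."""
    scored = [(len(tags & incident_tags), name) for name, tags in profiles.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

print(best_matches({"vpn", "authentication"}, PROFILES))
# -> ['carol', 'alice']
```

Because the profiles are living documents that people edit themselves, the routing improves as people refine their own tags — no gatekeeper required.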

What if, when those incidents were resolved, they were stored in a public knowledge base that customers could peruse, and help themselves? (Yes, there are a number of systems out there right now that do this, but they are the exception, not the norm.)

How To Use The New Google Music Search

As widely reported this morning, Google has released a number of new features that enable it to act as the gateway to a great number of music-related resources. Here’s how it seems to work:

1) There are a handful of artists that trigger the music-related functions straight from the Google search bar.

For example, Dave’s faves the Beatles show up with their own section if you do a standard search for “The Beatles” from the Google search box:


Notice the special area up top above the search results with the picture, etc. This way of entering the Google music area only works for a handful of artists, currently.

(The results seem to be somewhat random; Queen currently returns the “normal” search results, while less-well-known acts like Death Cab For Cutie get the full treatment.)

2) A little bit of digging actually finds the dedicated Google Music Search page.

This seems to work for a much larger number of artists.


3) From the music search results page, one can click through to a number of additional resources.

4) You can buy from online sellers who stock the item

From the listing purchase page, you can see which online sellers have the item (the Google “musiclp” page)


There appear to be a few other functions as well (e.g. “show all tracks”, the Google “musicad” page). Enjoy!


Corporate Blogging, Wikis, RSS All On The Fast Track, Says Gartner

Gartner has published their most recent “Hype Cycle” report, this one covering emerging technologies. The report covers 44 technologies, and prognosticates when they will reach the “plateau of productivity”…that is, mainstream business use and acceptance. Corporate blogging and RSS are flagged as technologies that will take “less than two years” to reach the plateau, with wikis on their tail in the 2-5 year window.

The interesting thing about Gartner’s analysis of all three of these technologies is that all are still positioned as being before the “trough of disillusionment” — that is, the inevitable backlash to their initial hype is yet to come. (n.b. Podcasting’s hype is still on the upswing, according to this, if you can believe it…)

Opinion (mine, not Gartner’s): Of these technologies, RSS is going to be the one that is going to have the greatest challenge slogging through the trough to true mass-market (i.e. not early-adopter) usage. Until there is a truly “zero-training” method of publishing, finding, and subscribing to RSS feeds (which might not even be called RSS feeds in a couple of years), RSS will have a challenge crossing the chasm, to use Geoffrey Moore’s terminology.

(hat tip: Steve Rubel for the initial link. Of course we went out, did some research, and dug a bit deeper to find the details ::poke:: But that’s what we do.)

Jerry’s Brain And The Heresies It Contains

At last night’s Hillside Club “Fireside Meeting,” Jerry Michalski presented his thoughts on the question that’s been nagging at him…what is it that caused some of the great thinkers of the past 300 years to become “outcasts?” Why have certain types of outside-the-mainstream thought been marginalized and their proponents ostracized?

Part technology demo, part presentation, part sorta-participative-performance piece, Jerry started out with an overview of what *he’s* been using for the last decade to capture, collect, and organize his thoughts, a piece of visual software called TheBrain. In some ways, The Brain is sort of like del.icio.us on steroids, in that it enables someone in the period of just a few seconds to tag a website, book, piece of music, document or any other “thought” with user-defined tags, index it, and store it for easy future access.


(from here)

Where TheBrain goes beyond tagging-and-bookmarking tools like del.icio.us, however, is in its ability to create some basic relationships between the different bookmarks (parent, child, peer/sibling, etc.). Although the fundamentals are straightforward, the power of this approach becomes evident when Jerry shows that over the past 10 years he’s logged over 60,000 bookmarks into the system (62,864 as of last night, and counting…), each one connected to a handful of others, creating an ecosystem of links that touches on many of the key areas of human thought.
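The typed links are what give the structure its power. Here is a toy version of such a typed-link store — a guess at the general idea, not TheBrain’s actual data model, and the thought names are invented:

```python
from collections import defaultdict

# Toy store of typed links between "thoughts" (bookmarks, books, ideas).
links = defaultdict(set)

def connect(parent: str, child: str):
    """Record a parent/child relationship in both directions."""
    links[parent].add(("child", child))
    links[child].add(("parent", parent))

connect("Heresies", "Self-education")
connect("Self-education", "Deschooling")

# Walking outward from any one thought reveals its neighborhood:
print(sorted(links["Self-education"]))
# -> [('child', 'Deschooling'), ('parent', 'Heresies')]
```

Multiply this by 60,000 thoughts and a decade of curation, and the “ecosystem of links” Jerry demonstrated starts to make sense.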

(Side note: Tracking bookmarks in a dynamic medium such as the web is a Sisyphean task, as web sites change and companies are born…and die. However, the Internet Archive and Wayback Machine really are the “4th dimension” of the web, and allow one to chronicle the life and death of startups with quite a bit of clarity.)

After showing one mechanism for organizing thoughts, Jerry moved over to discussing some of the “heresies” of a number of outcast thinkers of the past few hundred years, including:

Disparate thoughts. Disparate ideas, spanning hundreds of years. Yet, Jerry theorizes that there is a common message that each of these thinkers was railing against. That message they opposed?

“We don’t trust you.”

We. Don’t. Trust. You.
…to self-educate.
…to enjoy authentic art.
…to minister to one another.
…to tell the truth about your lives.
…to design the places we inhabit.

(Kenneth Tyler asked the question-of-the-evening here, in wondering “Who is ‘we?’ in the statement above…is “we” the government, some other authority group, distrustful organizations, ourselves…?)

The remainder of the evening was wide-open and interactive, with the 40-or-so folks in the room batting around the ideas presented, and finally splitting into emergent groups discussing their various takes on the implications of the discussion.

Moving from the esoteric to the practical, however, the points made at Hillside ring strongly. There are myriad examples where “We Don’t Trust You” is the rule in business with how customers are often treated:

…and so forth.

Jerry was recording the evening on his iPod, so when it goes up as a podcast (at Sociate, I’d presume), definitely check it out.

Note to self: Today, make an effort to notice the places where rules, structures, and physical barriers have been put in place based on lack of trust and/or to ostensibly protect me from myself.

Finding The Conversations

Johnnie Moore’s blog rocks, and it’s one of 100+ that I have read through my aggregator in the past. But I rarely read it anymore. Why? Because there’s something better.

What’s better than his blog? Finding the conversations that he’s hosting.

This is because, although his blog is here, he publishes the feed for just his comments. This is where the good stuff is happening. This is where the conversations are happening. (n.b. have shamelessly stolen this idea, and if’n you’re interested the comments feed for The Social Customer Manifesto is here).

Subscribing to just the comments is a double-edged sword. On one hand, insights in the “regular” blog posts may be missed. But as long as a regular blog has a good number of readers/lurkers, and some small number of those folks choose to start a conversation in the comments, there is an almost built-in filtering mechanism: the posts that generate the most comments are the “high value” ones that pop up, and those are the ones that get read. (By the way, Wilco is amazing. Buy all their records. Now. And Lane‘s too, while you’re at it.)
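That filtering mechanism is trivial to express in code. A sketch; the post titles and comment counts below are fabricated for illustration:

```python
# Hypothetical (title, comment-count) pairs pulled from a comments feed.
posts = [
    ("Finding the conversations", 14),
    ("Quick link roundup", 0),
    ("Customer support as barn raising", 9),
    ("Housekeeping note", 1),
]

def high_value(posts: list, min_comments: int = 5) -> list:
    """Posts whose comment threads crossed a threshold, busiest first."""
    active = [(n, title) for title, n in posts if n >= min_comments]
    return [title for n, title in sorted(active, reverse=True)]

print(high_value(posts))
# -> ['Finding the conversations', 'Customer support as barn raising']
```

The readers who comment are doing the editorial work for everyone who subscribes downstream.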

Here’s a link to how to do this yourself in Moveable Type or Typepad (thanks, Johnnie for pointing this out). It’s pretty straightforward, but you need to be comfortable mucking with the templates. Drop me a note…or a comment…if you’re not able to get it to work.

Two Fascinating Collaborative Environments


Just tripped across two flash-based sites that have sent the idea of “collaboration” off into a new direction. The first is a collaborative scratchpad where multiple people can all interact with a drawing that is being created:

As they say, what you see above is a “simulated image” (in this case, what might have been on the napkin before this was created, with kudos to Lee).

My actual experience on the scratchpad site was less-than-stellar due to the combination of a troll who chose to scribble out the drawings of others, and the manic sketchings of a wanna-be Larry Flynt. The current scratchpad area appears to be a completely unmoderated, anonymous area. Not suitable for those with an aversion to profanity, etc. You’ve been warned; here’s the link. But, despite the presence of the troglodytes on the site, the possibilities are impressive.

To interact with the scratchpad, there was nothing to load. No installations. No registration. No training. No nothing…just show up, and start collaborating.

The second site, also by the same author, was a collaborative site where one can move the virtual equivalent of alphabetic refrigerator magnets around. Again, no setup was needed; by simply showing up at the site, you are immediately immersed in the environment and collaborating with the others who are there. The same caveats apply as above, here’s the refrigerator magnet link.

The thing that makes these sites revolutionary in my opinion is their ease of use and light-weight nature. The scratchpad site appears to be a 29K shockwave file, and allows up to fifty concurrent participants. Similar specs on the letter game.

What if an organization could point a website visitor at a private site like this and work with that customer or prospect on architecting a solution to their problems, collaboratively and in real time? Or what if you could integrate these capabilities into, say, a wiki-based environment, and document and take snapshots of a solution as it evolves?

There’s something significant here.

The Psychology Of Scarcity

“When people are told they can’t have something they want it all the more. As a result incredibly powerful emotions are released which go on to drive actions often deemed irrational under normal circumstances.” – from The Psychology of Scarcity

Google is doing it with Gmail…putting a “limit” in place that, in actuality, isn’t much of a limit at all (2GB is a fair amount of email to store). But it feels like one. Ditto with the ability to “get” an “invitation only” Gmail account.

Hugh is doing it, and creating a microtulipmania in the process. He’s producing just 200 of each of his cartoons as t-shirts, and only making four designs available at a time. The interesting thing is, he’s got a huge back catalog of designs to pull from. So, as long as new designs keep rolling in to replace the old ones that have gone “out of print,” there will always be a few hundred shirts available. It feels like one needs to act “right away” in order to get a shirt, even though it’s quite possible that there will always be some available.

We see this all the time.

Among a number of interesting dimensions of this “artificial scarcity” is the emergence of secondary markets that are completely irrational. When Gmail first launched, the “undersupply” of Gmail invitations caused a rich secondary market to spring up on eBay, with people selling Gmail invites at, well, an infinite profit.

(here’s what Andale had to say about the over eight hundred auctions over the past few months)

Completed eBay Listings (February 21 – March 20):

  • 100 Gmail invites 1000MB Space each//Instant delivery: $10.50
  • 50 Fresh Gmail Invite – NO Reserve – Instant Delivery: $10.00
  • 50 Fresh Gmail Invite – NO Reserve – Instant Delivery: $6.50

Three things spring to mind:

One: If you are the creator of something, and you have the discipline to not need to wring every short-term cent out of something you’re doing, the resulting buzz based on the scarcity comes back to you, in spades.

Two: This thinking may help to build relationships and (perhaps) community as well. If there are only a “few” of something available, connecting with others who share that thing can be a starting point for a relationship. This applies both among the members of the community themselves, and between the creator of the “thing” and those members.

Three: This kind of scarcity creates a huge opportunity for arbitrageurs in a secondary market.

If the above three points are valid, the relationship-driven folks live in worlds One and Two, and the pure profit-maximization folks live in world Three. All three worlds are valid. Understand which world you’re in. And why.

The Radical Insidiousness Of Desktop Search

This week’s desktop search frenzy is much bigger than the desktop. It actually signals the beginning of a fundamental shift in the way we will interact with information.

The Legacy Of…The Folder
Many of us want to keep things “organized.” We want to put the right files in the right folders so we’ll be able to find them later. We want to organize our vacation photos. We want to organize our music (sometimes autobiographically). We want to put our desktop things on our Desktop, and our documents in the Documents folder. The folder itself provides the metadata that, in theory, helps us to effectively locate what we’re looking for when we need it later.

We think this way because the tools that we were given to store and locate information were based on the metaphor of a set of hierarchical folders. It’s the script we’ve been given.

Distributed creation of content, however, broke this. When millions of people are creating content (whether in the form of web pages, blogs, or what have you), only a minuscule fraction of those people will go through the laborious step of explicitly stating how that information should be organized. DMOZ, for example, categorizes approximately four million web sites — while Google lists over eight billion pages (yes, one is counting “sites,” the other is counting “pages,” but there’s still a three-order-of-magnitude difference here…work with me). Organizing things is a pain. Let’s not forget that Yahoo! started out as a directory which, although it still exists, has been deprecated and now fills only a minor role in the Y! universe.

When things got too massive, messy, and organic for the folder approach, search stepped in to fill the gap.

The Nearest Node
Until the desktop search tools started showing up, there was always an implicit distinction between things that were “local” and things that were “on the web,” one primary difference being in how you located those things when you needed them. That difference has effectively vanished. And with that change, I would contend the Folder’s days are numbered.

It is only a matter of time before the “flatness” of the web becomes mirrored in how people use their local systems, and maybe even in how those systems are organized. With a solid desktop search engine, why should I bother to put things in folders anymore? I can put everything in one place, and the search engine will find it for me. My job just got easier.
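The “one flat pile plus search” model this paragraph describes is, at its core, an inverted index. A toy version, with made-up file names and contents:

```python
from collections import defaultdict

# One flat "pile" of documents -- no folders, no hierarchy.
pile = {
    "doc1.txt": "vacation photos from the beach trip",
    "doc2.txt": "quarterly budget spreadsheet notes",
    "doc3.txt": "beach house rental budget",
}

# Build an inverted index: word -> set of documents containing it.
index = defaultdict(set)
for name, text in pile.items():
    for word in text.lower().split():
        index[word].add(name)

def search(*words: str) -> set:
    """Documents containing every query word, regardless of 'folder'."""
    results = [index[w] for w in words]
    return set.intersection(*results) if results else set()

print(sorted(search("beach", "budget")))
# -> ['doc3.txt']
```

Once the index exists, the folder metadata buys you nothing the query can’t provide, which is exactly why the folder’s days may be numbered.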

I no longer think of my machine as a separate entity from the Internet. It just happens to be the nearest node.

Next Steps
Of course, this only works well for things that are easily indexable. The images that are fairly flying from camera phones will still need to be indexed, as will the podcasts and the videos and all the other “rich media” out there. That is, until someone figures out a cost-effective way to automatically extract and index metadata from these types of artifacts*. (Hey Virage, are you listening?) I suppose in a way, Google’s library project today is an extension of this as well — a library itself is rich media, isn’t it?

* – Thing to watch for: when “search” finds a way to effectively mine existing relational databases as well, in lieu of SQL

Desktop search tools:

  • Ask Jeeves
  • Yahoo / X1

Others discussing desktop search:

  • Scoble: “MSN Toolbar Suite reactions from the blogosphere”
  • Charlene Li: “I believe that both MSN and Google (as well as their future competitors) will all develop more robust desktop search that can handle multiple files and give users flexibility and control over the search results.”
  • Rajesh Jain: “As the various search engines battle to deliver relevant ads to users, the ultimate prize is the user desktop.”
  • David Weinberger: “X1 beats Google desktop search in every regard but two: price and branding.”
  • Mike Torres: “The command line of the future.”
  • Michael Griffiths: “MSN Desktop search is doing allright…[but] even though it’s been on for some 20 hours, uninterrupted time, it hasn’t finished.”
  • Alpha-Geek: “I don’t want desktop search; I want digital lifestyle search.”
  • Alex Barnett: “I’m still indexing.”


So, the whole “bloggers vs. journalists” debate was getting a little tired from my point of view, and I just figured out why. It was getting tired because that debate doesn’t matter. It is trivial in comparison to a more fundamental change that is taking place.

  • The cost of publishing has effectively dropped to zero for any individual.
  • The cost of aggregating information that has been published by others has effectively dropped to zero for any individual.
  • For any information topic of interest, there are people who are passionate about it, who will share their passion and knowledge for free.

Now put those three things together and make a picture. Stand back and look at it. The picture I see is the following: if you are in any role where the only thing you provide is information that is available from publicly available sources, your industry is about to experience a tectonic shift. The only reason this came out first in journalism is because journalists have the barrels of ink.

This “knowledgeswarming” has been taking place for years in areas where there are a large number of individuals who are passionate about a topic. Usenet and other online forums are great examples of this. The twin problems, however, were the barriers to both publishing and accessing the information in these areas, since some level of technical acumen and knowledge of technical arcana were required to participate. No longer. Now, anyone can publish anything with two clicks, and anyone can aggregate information on their myYahoo page.

An industry that is about to be swept up in this maelstrom is the “analyst” community, both on the financial/Wall Street side of things as well as the high-tech industry analysts. In both cases, the attack will come from the low end, a la Christensen, and the entrenched players will be left wondering “what happened?” if they don’t get out in front.

Historically, the Wall Street analysts have had privileged information given to them by their clients. This made the analysts a valuable asset to their customers, since the analysts possessed information not available to the average investor. This is no longer the case since Regulation FD went into effect. Now, anything that is “material” needs to be disclosed publicly, without any preferential treatment given to analysts, insiders, or anyone else. In other words, the analysts are now getting their information from company press releases. Just like you and me.

Some students at Babson are starting to swarm around this idea, and are proposing an “open source” model for financial research. They feel that motivated individuals, working in concert, can provide the same (or better) research than the sell-side analysts. They are talking a lot about “mosaic theory,” and the belief that it is possible to take non-material information from a variety of sources and create meaningful observations out of it. The larger the number of contributors, the clearer the mosaic is going to be.
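The “larger the number of contributors, the clearer the mosaic” claim is essentially the statistics of averaging independent noisy observations. A sketch with purely synthetic numbers:

```python
import random

random.seed(42)
TRUE_VALUE = 100.0  # the underlying fact no single contributor knows precisely

def mosaic_estimate(n_contributors: int) -> float:
    """Average n noisy, non-material observations of the true value."""
    observations = [TRUE_VALUE + random.gauss(0, 20) for _ in range(n_contributors)]
    return sum(observations) / len(observations)

# A handful of contributors gives a blurry mosaic; thousands sharpen it.
few = abs(mosaic_estimate(5) - TRUE_VALUE)
many = abs(mosaic_estimate(5000) - TRUE_VALUE)
print(f"error with 5 contributors: {few:.1f}, with 5000: {many:.1f}")
```

As long as the individual errors are independent, the error of the pooled estimate shrinks roughly with the square root of the number of contributors — which is the quantitative case for why a big, open research swarm can outdo a handful of sell-side analysts.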

This model is starting to creep into the industry analyst community as well. Why pay $25K a year for Gartner, when Techdirt is blogging on the same events, and giving analysis in real time? Now take that model, and expand it out to a collaborative blog/wiki model, where users, developers, and customers are sharing information about the good, the bad, and the ugly regarding the vendors they are working with.

Oh yeah, Encyclopedia Britannica is dead, too. They just haven’t fallen down yet. The Wikipedians swarmed, and a 250-year-old company is effectively rendered obsolete.

At most, a very few hands will be required as guides. Each of these communities will require some way to ensure reputation and quality. In some cases, it makes sense to have an editorial voice ensuring that the results being generated by innumerable participants are valid. In other cases, emergent reputation management systems (think eBay) will provide the signposts.

So. Open source software has already gone this way. Journalism is trending this way (actually here’s a great example). Financial and industry analysis seem to be headed for the offramp. What other industries are next?