Monday, June 16, 2008

Is Your Schedule Based on a Guess or an Estimate?

Clearcut Ideals and Messy Realities
Ever work with Microsoft Project? Ever spend hours and hours—or rather days and weeks—gathering project requirements and schedule projections from team members ("how many days do you think your part of the project will take? OK, we'll say three"), so you can generate draft after draft of Gantt charts and timelines, leading up to the official copy that you present to management, then print out—page after page of solid lines presented in a staggered order like a vast, irregular staircase—and tape up on the wall of your office?

If you've ever worked on a project plan like that, you may find yourself holding your breath now, because you know, in the pit of your stomach, that the process I've just described is only the beginning. It's only the beginning, because inevitably, important aspects of the project change. Some tasks finish late; others finish early; others disappear from the schedule entirely, while new ones, unimagined in the planning stages, miraculously appear. If you're lucky—and a lot of people are—the team will manage to complete the project—or some semblance of it—overall.

When the project is finished and you look back at all those charts you printed out and taped to your wall, how do you feel? Don't those solid lines and neat demarcations—progressing across the page with the precision of a well-drilled marching band—now look hopelessly optimistic—like the budget projections of a politician or the crop forecasts in a Soviet five-year plan? I mean, how could anything as unpredictable as a group of human beings working on a complex project ever proceed in such a neat manner, with such precision?

But what's your alternative? You can't afford to be vague when you're scheduling a project, can you? And you do have to produce some kind of schedule or plan. And whether you use Microsoft Project or some other planning program, most likely the output is going to be hard lines, those promises of firm dates, neat beginnings and endings.

Nothing really ever works out that way. Precise project scheduling is like penciling in a landing strip for a water balloon.

A New Approach to Planning
One of the most useful products I saw demonstrated at the Enterprise 2.0 Conference in Boston didn't really have much to do with Enterprise 2.0, as far as I could tell. It's a piece of collaboration software, but it's no more collaborative than Project or other planning tools that have been around for over a decade. It doesn't explicitly make use of network effects, though it does support discussion threads and Web-based scheduling. Most importantly, though, it offers a new and potentially very useful approach to planning.

The project is called Liquid Planner, and it's based on the premise, which seems blindingly obvious in retrospect, that accurate planning should be based on estimates and probabilities, not hard certainties.

Bruce Henry, whose title at Liquid Planner is Director of Rocket Science, explained the "Ah-ha!" moment that led to the founding of the company. He and some of his colleagues from Expedia were taking a class from Steve McConnell, the author of Software Estimation: Demystifying the Black Art and Rapid Development, among other books. McConnell pointed out that when you ask how long a task will take, and someone says, "4 to 6 days," and you say, "OK, we'll call it 5," you're making a guess, not an estimate. Estimates are based on ranges and probabilities. Guesses pick a single number and use it as the basis of planning.

Most organizations base their planning on guesses. It's not surprising then, that most schedules slip, and that most Gantt charts end up looking hopelessly optimistic.
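The difference between a guess and an estimate is easy to see in code. The sketch below is my own illustration, not Liquid Planner's actual probability engine: each task gets a range of days rather than a point value, and a simple Monte Carlo simulation turns those ranges into the odds of hitting a deadline (task names and ranges are hypothetical).

```python
import random

# Each task is estimated as a (low, high) range of days, not a single number.
# These tasks and ranges are made-up examples.
tasks = {
    "design": (4, 6),
    "build":  (10, 18),
    "test":   (3, 7),
}

def simulate_totals(tasks, trials=10000):
    """Monte Carlo: sample a duration from each task's range and sum them."""
    return [sum(random.uniform(lo, hi) for lo, hi in tasks.values())
            for _ in range(trials)]

def chance_of_finishing_by(deadline_days, tasks, trials=10000):
    """Fraction of simulated schedules that finish by the deadline."""
    totals = simulate_totals(tasks, trials)
    return sum(1 for t in totals if t <= deadline_days) / len(totals)

# A "guess" would just say 5 + 14 + 5 = 24 days and draw a hard line there.
# An estimate reports the odds instead:
print(f"Chance of finishing in 24 days: {chance_of_finishing_by(24, tasks):.0%}")
```

Notice that the point-guess schedule of 24 days sits in the middle of the possible range (17 to 31 days), so a hard 24-day deadline is roughly a coin flip, which is exactly the kind of red flag a range-based plan surfaces early.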

Two of Henry's colleagues from Expedia—Charles Seybold and Jason Carlson—founded Liquid Planner to address this problem. Henry joined them and wrote the probability engine that's at the heart of Liquid Planner's software. The goal: make project planning more accurate by enabling teams to base their schedules on realistic probabilities rather than unrealistic "certainties."

Here's a screenshot of the software, showing probabilities and date ranges for tasks.



Henry points out that seeing a list of probabilities can raise red flags early in the planning process. For example, if managers notice that a particular task has only a 30% chance of completing on time, they might ask why. They might discover dependencies they weren't aware of. They might be able to apply people and resources to address any dependencies or shortcomings, greatly increasing the task's chance of completing on time.

I haven't tried this software myself, but it seems like it's worth a look for any team beginning a new project.

The company launched its public Beta at the DEMO Conference in February, 2008. Since then, over 11,000 users, including organizations such as Philips, Butterball Farms, and Reed Business Information, have signed up for the online service. At the Enterprise 2.0 Conference in Boston in June, 2008, Liquid Planner announced its commercial version.

The service is free for teams with up to 3 members, for 501(c)(3) non-profits, and for educational users. Larger teams can take advantage of a free 15-day trial, then pay monthly or annual fees per user. You'll find pricing details here.

Wednesday, June 11, 2008

All That Data

None of my clients were exhibiting in the demo area of the Enterprise 2.0 Conference, so when the demo floor was open, I had the opportunity to stroll through the aisles and talk to various vendors instead of manning a booth and explaining a particular product or technology to passersby.

The Enterprise 2.0 movement—applying Web 2.0 technology to problems and processes within the enterprise—promises to transform the online experience of workers in companies large and small. Instead of being deluged with email and interrupted by IM, workers can access company news and information in RSS feeds when it's convenient. Instead of emailing Word documents to everyone on a team and trying to coordinate all the changes and comments, authors can jointly edit documents with tools like Google Docs. The table below summarizes some of these changes:

Knowledge sharing
  Web 1.0: Email and irregular postings on portals
  Enterprise 2.0: Wikis and blog posts
  Benefits of the new approach:
  • Publishes data in a more permanent format
  • Makes information easier to discover
  • Reaches stakeholders outside one's immediate group
  • Enables non-technical users to post information without requiring custom clients or help from IT

Notification of changes and news
  Web 1.0: Email and phone calls
  Enterprise 2.0: RSS
  Benefits of the new approach:
  • Occurs automatically when blogs or wikis are updated
  • Reaches all interested parties, even those the author might not know about


(For more about this new way of working, and some thoughts on the pros and cons of email in particular, see this recent post by Harvard Business School's Andrew McAfee.)

Clearly, these platforms and portals are going to store a lot of data. How do we make it searchable? How do we enable the product manager for a new leather cleaner to find the blog post from three months ago that discussed product requirements for a similar product being developed by a partner in Switzerland?

One solution is to apply tags—meta-data keywords that summarize the content of a blog post, Web page, or some other piece of content. For example, the tags for that product requirement blog post might be "leather cleaner, research, product requirements, survey, partner, Switzerland."

Thomas Vander Wal is a consultant who has spent a great deal of time thinking about tagging and classifying data. He coined the term folksonomy to distinguish a bottom-up approach to classifying data, in which users apply the tags they think are relevant, from more traditional top-down approaches that rely on formal vocabularies and specialists in information taxonomy.

Getting users in the habit of tagging content and tagging it usefully can be a bit of a challenge, however. As Vander Wal pointed out in his presentation at the Enterprise 2.0 Conference, you can end up with problems like users not tagging content at all or using tags that are so general they prove useless in future searches. He gave the example of a company promising to reward workers who tagged documents, then discovering that workers were meeting this requirement by applying tags like "document." More tags are generally better than fewer, but however they are applied, tags must serve the purpose of distinguishing one document from another.

The creators of software tools have a role to play here. They can create applications that prompt users to tag data. Some applications might even analyze data as it's being entered and propose tags for it.
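As a rough illustration of what such an application might do, here's a minimal sketch of tag suggestion based on term frequency. This is my own toy example, not how any of the products mentioned here actually work; the stopword list and sample post are invented, and a real product would use far smarter analysis.

```python
import re
from collections import Counter

# A tiny, hypothetical stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "for", "to", "is",
             "this", "that", "with", "on", "by", "we", "our", "it"}

def suggest_tags(text, max_tags=5):
    """Propose tags by counting the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(max_tags)]

post = ("Product requirements for the new leather cleaner. Our partner in "
        "Switzerland ran a survey on leather cleaner demand.")
print(suggest_tags(post))
```

Even a crude frequency count like this would surface "leather" and "cleaner" as candidate tags, giving the author something concrete to accept, edit, or reject rather than a blank tag field.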

Shortly after Vander Wal's talk, I found myself strolling through the demo area, wondering how the many social collaboration programs on display handled this important issue.

At the Microsoft booth, I heard from someone demoing SharePoint that customers simply don't use tagging all that much. The idea of tagging, in this person's opinion, was not turning out to be a success.

At the ThoughtFarmer booth, I met Darren Gibbons, the co-creator of the ThoughtFarmer intranet solution, and the president of OpenRoad Communications. I asked Darren what he thought about tagging. Should it be automated? Left to individuals? How could it be made to work?

Here's a video with his answer, which is that tagging works best when it benefits both the tagger and the community overall.



On the next aisle, I got talking to Padmanabh Dabke, the founder and CTO of a company called SpigIt, about the challenge of creating meaningful tags on a large scale. SpigIt makes software that enables companies to collect ideas from large communities of employees and customers, then rank those ideas to decide which ones should be pursued. In addition to offering guidance for investment, the software helps managers identify which employees and customers are consistently coming up with the best ideas.

Nabh pointed out that some older technology—namely, expert systems—could be applied to sort through the torrent of data in online communities and aid in speedy classification. (Pardon my shaky camera work in the first few moments of our conversation.)
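To make the expert-systems idea concrete, here's a toy rule-based classifier: each rule maps keyword evidence to a tag, and every rule that fires contributes its tag. This is a sketch under my own assumptions (the rules and categories are invented), not a description of SpigIt's actual technology.

```python
# A toy expert-system-style classifier: each rule pairs a condition
# (keyword evidence) with the tag it supports. Rules are hypothetical.
RULES = [
    (lambda text: "invoice" in text or "payment" in text, "finance"),
    (lambda text: "bug" in text or "crash" in text, "engineering"),
    (lambda text: "customer" in text or "survey" in text, "marketing"),
]

def classify(text):
    """Fire every rule whose condition matches and collect its tags."""
    text = text.lower()
    return [tag for condition, tag in RULES if condition(text)]

print(classify("Customer survey shows the crash bug hurts sales"))
```

A real expert system would chain rules and weigh conflicting evidence, but even this skeleton shows how hand-encoded domain knowledge can classify a torrent of posts quickly, without waiting for users to tag anything.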



Any conclusions? Yes. It's clear that companies are replacing or upgrading their old Web 1.0 intranets with these new, easier-to-use community platforms. Workers are getting used to blogging and using tools like wikis and RSS feeds. Tagging will make all these tools more useful, and the best practices for tagging will probably combine user habits, helpful user interfaces, and powerful processing engines like the one described by Nabh.

The Enterprise 2.0 Conference in Boston


I'm spending this week at the Enterprise 2.0 Conference in Boston. Monday started strong with an overview of Enterprise 2.0 concepts and tools by Dion Hinchcliffe. That evening, leading cloud vendors—Amazon, Google, and Salesforce—sat on a stage with potential customers in a lively, in-depth discussion arranged by TechWeb's David Berlind.

Tuesday's sessions were more uneven. One of the key topics of the day turned out to be tagging. Thomas Vander Wal, a social bookmarking consultant and the coiner of the term folksonomy, offered a look at the pros and cons of various approaches to managing tagging on a grand scale. Later in the afternoon, my discussions with software vendors in the demo area of the conference returned to the subject again.

More details—and a few video interviews—in upcoming posts.