Requirements Traceability? How about User Traceability?

I work in the Veteran Relationship Management (VRM) program office in the Veterans Benefits Administration (VBA), where we manage the delivery of about 25 veteran-facing enterprise software applications. The vast majority of the development work is contracted to outside developers. As is good practice in larger-scale software development – especially when multiple companies are involved – there is a focus on “requirements traceability”. This helps ensure that the software a contractor ultimately delivers to a business stakeholder (the VA) can be traced back to the original requirements the business and stakeholders agreed on. But how do we know that the requirements that have been put on paper actually meet the real needs of veterans? I’m proposing that for user-facing software solutions, along with a “Business Requirements Document” (the current artifact for all projects at the VA that details what a solution needs to do in non-technical language), a “User Research Document” be required as well – one that uses artifacts from Human Centered Design to support a real understanding of users.

How do we know that our requirements are any good for our end-users (veterans)? 

From my background in startups and product development, I’ve been most concerned the past five years with how to do a better job of building products and services people actually need. In my Presidential Innovation Fellowship at the VA I’m trying to answer the same question, but in the context of the federal government delivering services to 21 million veterans – in many cases through technology solutions such as websites and mobile applications.

In the startup world or in the private sector it’s fairly straightforward to see why latching onto user needs is important. If you don’t build things people truly want or need then no one will buy your product and you’ll soon go out of business. So many startups have built well-crafted technical products that users didn’t really need – and then went out of business, taking their investors’ millions with them – that the Lean Startup movement was essentially born to address this problem. But in the federal government our users (citizens, veterans) don’t directly pay for our services and there is mostly no competition. In other words, when we have a website with a low user satisfaction rate (such as one I work on, with a customer satisfaction rate of 55%), there are few direct consequences – unless veteran complaints go through Congress or somehow reach the Under Secretary. The natural “nerve-endings” that wire dangerously low user satisfaction to the product-owner “brain” in a private company don’t really exist in the federal government.

So we have to work extra hard in the government to intentionally wire up a nervous system that makes sure we pay attention to user needs for user-facing services delivered through technology. I have some thoughts on how to do this, but that is for another blog post. For this post, I’d like to propose the idea of “User Traceability”, to exist with equal importance alongside “Requirements Traceability”, using existing tools from Human Centered Design and Lean Startup to implement it. I propose that, to complement the BRD (Business Requirements Document), all business users be required to include a URD (User Research Document).

How software is currently built at the VA (and in most of the federal government) in the year before a development contract is awarded:

  1. Business stakeholders with the help of a program office like VRM or OBPI write up high-level business requirements for a solution. This is a document that reads like “The system shall allow veterans to log into the site with DSLogon credentials”, “The system shall allow veterans to search on mental health services near them and receive a set of results of nearby facilities”.
  2. This document gets fed into the procurement process – IT comes up with a “level of effort”, there may be some back and forth with the business (but in most cases not), a contracting officer writes up the Statement of Work (or Performance Work Statement in the VA) and it goes out for bid to contractors.
  3. When contractors come on board many months later, they dig into more detailed requirements with the business stakeholders. The output of this is a more detailed business requirements set that is to be used to measure software delivery.

From what I’ve seen, however, there is no evidence offered to anyone downstream of how these business requirements were identified, vetted or validated with end users (veterans). Basically, everyone largely takes the word of whoever put the BRD together. I’ve been told in some cases BRDs are put together last minute in 48 hours. The many choices of what goes into a website that delivers a service should be based on actual work done with users rather than a smart business stakeholder putting on paper their best effort at what the solution should look like based on their experience. In the best private companies and startups, this practice is seen as very flawed and the source of a tremendous amount of wasted effort and mediocre products. In Lean Startup you learn that it really doesn’t matter what a room of smart people think their users need – they’re almost always wrong. You need to check your assumptions and validate your choices rigorously with direct user research and contact. At the VA and the federal government in general we almost always embark on building very expensive solutions based on business requirements that have little evidence of user research to back up the choices made.

The User Research Document

I propose that when a BRD is delivered to IT, the business must also produce a URD (User Research Document). This would consist of artifacts from the Human Centered Design discipline:

  1. A stakeholder map of all of the players involved in the service being rendered to the user
  2. User personas (or “mind-sets”) of the main types of users
  3. A journey map of how the user currently goes through the system
  4. Key insights derived from 8-15 user interviews with stakeholders in the system

These are not boxes that can be checked in a hurry. You need to go through some effort to do this.

But we already do some of this in the BRD!

Sometimes some of this is included in a BRD. But it’s incomplete, written in Lean/Six Sigma process-speak rather than user-focused language, and it’s generally not clear where it came from. I’ve never seen a BRD where there is proper user-based research and evidence to back up the hundred items of “The system shall do …”.

But we don’t have the time to do that!

Well, tough. You have the time to ask for $5 million of taxpayer dollars to build a solution that you made up in three days? Make the time to do clear, focused user research up front. Hire fewer people in the army of PMI-certified contractors and make some of them design researchers – it’s money very, very well spent. Then the folks managing the implementation will actually be managing the delivery of something good that veterans will be happy with.


ClientHat: Customer Pivot – Week 1 – Digging into LinkedIn

I’ve decided that it’s time for a customer pivot with a startup I’m involved with – ClientHat. I’m blogging about my progress to keep me honest in my efforts and to solicit help from the wide group of smarty-pants fellow Lean Startup aficionados to uncover my blind spots and kick me forward.

“Real progress and learning happens when you’re out of your comfort zone” – somebody said something like this at some point. I like it.

Previous Business Model Hypothesis:
The previous business model “hypothesis” for ClientHat – which is of course the sum of many individual hypotheses – was approximately:
Virtual Assistants have a significant pain point around switching costs when moving back and forth between different client web applications. They are constantly logging in and out and having to look up passwords. They will pay $9/month for a service that solves this.

Result: Invalidated. I’ll get into the details in another post. Despite conducting two dozen customer and solution interviews and doing my best to validate assumptions before building a product, I built a product – and the bottom line is that it seems most VAs do not pay for things.

What is the current product? Along the way, we’ve built a working software product that allows a client (such as “Johnny’s Italian Catering”) to share the passwords for its various websites with an agency (such as “My super marketing agency”) and that agency can re-share that password with a series of assistants. The actual password is not known to the agency or any subcontracted folks. The software is implemented as a browser plugin and also happens to let you open multiple windows at the same time – logged into the same site as different users. Meaning – cookies are not shared between instances of the browser.

My goal: We’ve put a bunch of development effort into this product over the past 18 months. Before we put much more time into this – I want to discover an actual customer segment that I can reach who will pay a sufficient amount to support a growing business (what is sufficient? I’ll leave that for another post). So I’m returning to Customer Development basics – and am focused on learning and speed. Meaning, quickly learning whether or not this product has legs with some customer group – and if not, moving on.

And I really need to be learning about my riskiest assumptions – namely, does anybody care to pay to have this problem solved. So, learning how to learn.

A second goal of mine is to actually get good at learning how to discover a business model – and to learn which of the many Customer Development and Lean Startup methods work for learning.

Customer Pivot: I’m now making a “vision leap” and am exploring the pain points that business owners (the “clients” in the above section) have with sharing passwords with various virtual assistants / consultants – and seeing if there is a decent match between a problem that some member of the value chain has and the solution I’ve already built. Note: But wait Ben, don’t start with a solution! Well, in a customer pivot – you DO have a solution. And as Steve Blank said in the 4 Steps and in his latest book – look for the customer group that requires the least change on the product dev side. At least exhaust those options before you invest tons more development time changing the product.

New Problem Hypothesis: 25% of startup founders experience frustrations around sharing and managing virtual assistants to help them with their business.

I’m using LinkedIn to line up Customer Development interviews
My first hunch was that start-up founders use virtual assistants and have this problem. I’ve had some positive feedback in this area, so I’ll test this. Using LinkedIn’s relatively recent feature that lets you message other members of a group you belong to, I have been reaching out to start-up founders.

Methodology: I sent messages to two dozen startup founders on LinkedIn. The script was as follows:

Hello xxx,

My name is Ben Willman, I’m a developer/founder considering working on solving the problem of startups needing to share website passwords with assistants, consultants, interns, etc and the pain points around this (having to change passwords when folks leave, figuring out who has access to what, etc).  I’m wondering if the above problem resonates with you at all and if you might have time for a brief phone call to learn more about how you experience the problem.

I promise I’m not selling anything, just looking for perspective and advice to better understand this problem – and ultimately determine if it is a problem worth building a solution to solve.

My schedule is flexible and can speak when it’s convenient for you.

All the best,

Ben
202-xxx-yyyy

How am I documenting this?
I’m keeping it in Excel.

Did I learn anything?
Well, hmm. I sent out 24 messages. I got seven responses of some kind (29%). OK – response rate not bad, I suppose. Of those responses I have:

1 conversation with someone at a startup who doesn’t personally have the problem – but says it sounds like its a problem.
1 conversation with someone at a startup who doesn’t have the problem at their startup, but has it as a web developer – and gave me good ideas on who does have this problem
1 person I’m still scheduling a call with
1 person who says they have the problem but wants to answer by email
3 who said by email that they either had a mild problem, had solved it, or don’t have the problem.

Learned thing #1: About 29% of startup folks contacted as part of the onStartups LinkedIn Group will reply in some way. But was this about my riskiest assumption? No.

Do 25% of startup founders have frustrations around the process of sharing website access with virtual assistants or consultants?

So far no, I have not found support for this. But have I done enough work? I don’t know.

Methodology problems: Well, first off – I’m only asking for conversations with people who have this problem. So right there, I’m not learning much about why the other people don’t have the problem. That’s learning left on the table, to a degree. How could I fix this? I could simply ask for conversations to generally understand how they go about sharing website access with folks in their startup. Maybe it’s VAs. Maybe it’s interns. Or maybe it’s outside designers.

Next Steps: So, I believe I need to continue to have problem interviews with different types of folks to hopefully discover some group that has this problem, knows they have the problem, has been trying to fix it on their own and has a budget. Another group that I am reaching out to in parallel is coaches, trainers and speakers. I know they use virtual assistants – and I’m fishing to see if they show any strong signals of having this problem. LinkedIn seems helpful for reaching out to folks I would otherwise not have a connection to – but I’m not convinced yet that it’s leading to tons of learning. Let’s take another look next week.

Bottom Line: Learning Velocity:   0

I’m making up something called Learning Velocity to measure how much I’m learning each week. Let’s call it “# of things learned about riskiest assumptions” / “weeks spent on learning”.

So for this last week:
# of things learned: 0 – I didn’t have enough conversations, or enough rejections by email, to make this call. Let’s do this for another week and see what happens.
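Just to make my made-up metric concrete, here’s a minimal Python sketch. The function name and the sample numbers are mine – this isn’t part of any real tracking tool:

```python
# Sketch of the "Learning Velocity" metric described above:
# (# of things learned about riskiest assumptions) / (weeks spent learning).
def learning_velocity(things_learned: int, weeks: float) -> float:
    """Return things learned per week of effort."""
    if weeks <= 0:
        raise ValueError("weeks must be positive")
    return things_learned / weeks

# This past week: 0 things learned in 1 week of outreach.
print(learning_velocity(0, 1))  # 0.0
```

The interesting part isn’t the division, of course – it’s being honest about the numerator.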

I’d love to hear any thoughts or advice on how I’m going about this, etc. Don’t hold back!


Build, Measure, Learn loop. Wait, that’s backwards!

When discussing Lean Startup methodologies with other entrepreneurs I’m frequently drawn back to an awesome presentation by Kent Beck (Test Driven Development and Agile pioneer – http://www.justin.tv/startuplessonslearned/b/262656520 ) from Eric Ries’ Startup Lessons Learned Conference of 2010, on how the Build-Measure-Learn loop should really be thought of as the Learn-Measure-Build loop. He draws inspiration from Test Driven Development (TDD), and it also just makes sense: you should always start with what you want to learn and then think about how you could measure it.

Step 1: What do you want to learn?  (Learn)

What’s the most pressing thing that you need to learn about your startup idea? In common Lean-lingo, of all the assumptions that make up your current guess at your business model, which is the riskiest? Which assumption would break your business if it were not true? This usually has to do with what customers want and will pay for.

Step 2: Is that reasonably testable? (Measure)

How can you test this efficiently? Ideally cheaply and quickly. Early on in my customer discovery process I prefer phone “Problem Interviews” with potential users. If I want to sell a kitchen knife that comes with a gnome that sharpens the knife when it needs it, my test can be “I believe that 75% of people I talk to will believe that dull knives are a problem they would pay to have solved”. You can find out the answer in a series of conversations. Hopefully you structure your interview freely enough to learn other things about their cooking habits and frustrations and imagine alternate problems that they would pay to solve if your guess doesn’t work out.

To be a test it must be able to clearly fail.

Fun activity: Look up the term “null hypothesis” and see how it applies to Lean Startup.

Other examples of tests:

  • When I put $24.99 on my landing page, I predict I’ll get a 50% sign-up rate.
  • Users will forward our link to other users an average of 7 times.
  • If I show a mockup of my proposed software, 50% of users will say they would pay $19.99 a month.
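If you take the “null hypothesis” activity above seriously, you can even put a rough number on how surprising an interview result would be under a skeptical assumption. Here’s a stdlib-only Python sketch using the knife example – the interview counts and the 25% skeptical baseline are hypothetical numbers of my own:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    successes in n interviews if the true success rate were only p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null hypothesis (the skeptic): only 25% of people would pay to solve
# dull knives. Suppose 9 of 20 interviewees say they would pay.
p_value = binom_tail(20, 9, 0.25)
print(f"{p_value:.3f}")  # a small value means the skeptical 25% looks unlikely
```

Note the sting in this example: 9 of 20 comfortably rejects the 25% skeptic, but it also falls well short of the predicted 75% – which is exactly the kind of clear failure a well-stated test allows.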

Sometimes you can’t quite measure what you want, so you tweak what you want to learn to match what can actually be tested.

Step 3: Build (but hopefully build nothing!)

Finally, engage in the minimal activity that can get you the answers to your tests. Do the easiest one!

Best-to-worst options of activity:

1) Have a phone conversation
2) Show a PDF mockup and get answers
3) Fake the product or service just enough to get your answer (sign-up page with price, ad tests)
4) Cobble together the product from existing stuff (or services)
5) Create a “prototype” of the product
6) Create a decently functioning version of the product

To get the answer to some questions you may have to actually build it. But that is a last resort. If not a failure, it’s a very expensive option in terms of your startup’s most valuable capital – time. So match the effort to the risk.


Lean Startup Machine Weekend – NYC

I had the privilege of mentoring at another fantastic Lean Startup Machine weekend in NYC this past weekend. There were examples of awesome validated learning and failing fast (Food Without Borders) and examples of where serendipity and validation meet (Hangalong). Many thanks to Ryan MacCarrigan, Eugenia Koo and the whole LSM team of mentors and judges who made this event rock.

Why HangAlong won

They did a great job all around, but I feel there were two areas in which they excelled. First, they competently executed on validated-learning iterations. In other words, they nailed the basics. They had clear hypotheses and success criteria and had ample sample size (20 interviews) in each of three rounds of customer discovery interviews and experiments. This is harder than it sounds – many teams are not able to learn from their interview efforts because of bad question design, team members not asking the same questions, solution bias, etc.

Second, they made a creative leap-of-faith guess about how people might want to use their product, they cleanly tested it, and it turned out they were right – with an excellent activation rate (80%) and enthusiasm from prospective customers. Everybody loves a happy validated epiphany. Can’t wait to see these guys move forward.

Why ThoughtBox won second place

ThoughtBox had a hunch that restaurants were missing out on customer feedback that could be given immediately after a meal. So they got out of the building and spoke to a good number of people in the neighborhood about why they do or do not leave restaurant reviews. They found that many people would love to help the restaurant with feedback, but there wasn’t a way to do it easily and anonymously while they were still at the table.

But what they did next is why I feel they deserved a winning slot. They came up with a clever functional MVP – restaurant rating cards, whipped up at FedEx/Kinkos, with a number that diners could text feedback to. They swarmed the neighborhood and found an owner of a restaurant who was on board to give this a try. And then my favorite: they also left cards without asking permission at another restaurant. I love this because anything that removes a perceived obstacle in conducting an experiment is simply a path to quicker learning. I especially like guerrilla testing with another company’s user base (as long as it’s not causing too much harm).

The results showed that few (or none?) filled out the cards – but they had a reasonably large result set and now have hunches about what their next experiments will be.

Why Childcare R’ Us got best Customer Development

They did an excellent job of getting out of the building and engaging customers (parents) by going where they hang out and conducting clear interviews with ample sample size. They also kept the ball going ‘in the building’ when they cool-headedly invalidated four successive pivots, and now find themselves eager to explore a new area they’ve been led to by customer interviews with parents.

Right on, looking forward to the next Lean Startup Machine. In my next post I’ll talk about recurring patterns that I’ve noticed with teams that find themselves in a trouble spot during the weekend.


Market Risk? Probably. Ego Risk? Definitely.

The Lean Startup movement says that to avoid the pain and waste associated with the historically high failure rate of startups, learning is the most valuable activity a startup can spend its precious time on. And the most valuable learning to be had is learning about your riskiest assumptions. Eric Ries makes the strong argument that in most cases (other than scientific breakthroughs or other problems that require major mojo secret sauce)  we don’t need to learn about whether something can be built (Technical Risk) – it can. But rather we need to learn whether we should build it (Market Risk). And for that we need to “get out of the building” (thank you Steve Blank) and talk to customers and learn about their problems and conduct brutally honest tests about specific assumptions.

Oh, but it hurts! Ego Risk

But there’s one problem. Humans and brutally honest tests – not so compatible. Why do so many startups that intellectually “know” they should be testing their assumptions still avoid setting up tests that can fail, or actually talking to customers? Because it’s painful to hear information that you don’t want to hear about something that is important to you. So besides Technical Risk and Market Risk, I’d add a third type of risk to be aware of in startups – Ego Risk (thanks to a great conversation with Patrick Smith and Teague Hopkins at LSC DC last month!).

Ego Risk could be described as the anticipated pain of having to change the carefully adorned vision of your startup – the one you’ve spent many hours reveling in, imagining how it will change the world and transform your own life. This risk is great, not least because it’s likely. So when we get feedback from customers or tests that contradicts the grand vision we’re incubating, we unconsciously avoid accepting that information. We tell ourselves we’re talking to the wrong market segment of customers (see my post “Why is it so hard for startups to focus on one market”), that the feedback can’t be trusted because we’re missing just one feature or need a better logo (false negative), or that it’s just an edge case. The effect is that we slow down the learning cycle and prevent our startup from making progress. Things we could have learned in two weeks take us six months. Things we could have learned in three months take us a year.

This pain avoidance is totally the norm. It’s human. The Lean Startup methodology attempts to dig its screwdriver into one of the fundamentally flawed gearboxes of the human entrepreneur and fix the fact that we’re most often driven by big dreams (we fall in love with visions of the future) to start our own companies (reality-based endeavors). How can we be dispassionate about running experiments that may drop the axe in a flash on our big dreams? It’s anti-human.

Part of the problem is that first-time entrepreneurs tend to fuse “vision” (the greater problem domain to be solved) with “solution” (how we’re going to solve it). After all, it’s much more fun to daydream about a solution than a problem. So acknowledging that a solution may not be right feels like an attack on the vision. (I’ll be posting on this next week – Your Startup’s Solution is a Monster. Run!!)

How to beat the Ego Risk?

1. Be committed to a vision, not a solution

The best way to lower your Ego Risk is to set expectations up front with yourself (and your team, if you have one) that your commitment is to a vision rather than a specific solution. That vision may be as broad as “I want to start a company that is successful and transforms my life” or as committed to a specific problem area as “I want to make the babysitter-finding process easier for young parents”. That way, when the details of a specific solution need to change, it’s less of a punch in the gut because your vision is still intact.

One of the coolest expressions of how a team can set their expectations comes in minute 31 of a 33-minute video of the founders of Aardvark doing a Q&A after presenting at the Startup Lessons Learned conference in 2010 (http://www.justin.tv/startuplessonslearned/b/262666882). Max Ventilla says everybody on the team should know that “…this is a sinking ship from day one and … we’re going to be totally uncompromising about when we’re going to abandon one ship and get on another”.

2. Accept that it will feel like crap sometimes – that’s progress

Since humans can’t help hoping from time to time, when that hope gets crushed like a beetle under the reality microscope it will feel like crap for a bit. Think of it as progress. Usually feeling like crap is not fatal.

3. Regularly check in: what feedback are we avoiding accepting?

Check in regularly with yourself and your team to sense which areas of testing or customer feedback feel riskiest on an emotional level. What answer would suck the most to hear? Then see if you’re putting off customer interviews or dragging your heels in that area to avoid that risk. It’s probably the area that’s slowing down your learning.

Most startups should NOT be building a prototype before talking to customers. Likewise, I find that in most cases extended mock-ups are just an easier way to waste time before you talk to customers. Yes Market Risk – no (probably) Technical Risk. But the way we tend to avoid hearing what we don’t want to hear (Ego Risk), and how it distorts our actions, is also an important thing to keep your eye on during your constant steering adjustments to make sure your team is focusing on the most valuable activities.

Let me know any other thoughts folks have on hacking the human OS in this regard.


Why is it so hard for startups to focus on a single customer group?

I attended a Lean Startup Circle DC at the end of last year where Paul Singh of 500 Startups was the guest speaker. The topic was getting traction for your startup. While answering questions, Paul explained why a startup should focus on one type of user even if its technology solves a problem for a variety of users. He said something along the lines of (and I’m paraphrasing):

Focus on one customer segment first because you will learn the language – the words and phrasing – they use to describe their problem. You will understand how they experience the problem in their particular line of business or ‘market’.

And of course you can then use the insights and language to inform how you describe how your solution solves a problem in their world.

This is not a new insight in product development. Geoffrey Moore talked about picking a beachhead market for technology products in Crossing the Chasm back in 1991. Offering specialized service for clients has been around since the beginning of management science.

But from my 15 years of experience in my own start-ups, as well as working with other entrepreneurs, it is stunning how often founders resist focusing on one type of customer (niche market, vertical, segment, etc). I have personally experienced stalled growth (and learning) due to a stubborn reluctance to specialize an offering for a single market.

Why is it so hard for many entrepreneurs and startups to pick a single type of user to begin with? Some thoughts:

Work has begun on a product before talking to any customers

I feel this is the number one reason most technology entrepreneurs struggle with focusing on a single customer group. I have done this at least twice with companies that I’ve founded. The founders have built a prototype of a ‘solution’ before they have dug deeply into how any one specific group of customers experiences the problem. This makes it harder to actually ‘focus’ on a specific customer group: you don’t really want to hear anything different from what you’ve guessed, because you’re in the middle of executing on that guess. Your biggest risk at this point is not Technical risk – yes, Market risk – but most immediately: Ego risk (more about Ego Risk in my next post).

But despite belief in the startup’s core vision, there is a nagging sense that the product probably isn’t ‘exactly right’ for any given group. So the founders lose trust in any single customer group as being a good test for their product and instead go after every customer group.

The knee-jerk urge to get busy with building a solution as soon as you have a vision is one of the fundamental human behavioral bugs the Lean Startup movement is trying to address. It’s totally natural. After reveling in the awesomeness of a new idea, what do you want more than to actually see it, touch it, feel it? When you’re foaming at the mouth to bring yourself into a new startup life, what could feel more direct than actually building the thing?

But what it leads to is the awkward situation of having designed an awesome lure for fish in general, then having to go around casting it again and again to discover which specific species of fish happens to like the lure you’ve already designed (probably none). In my experience, building a new, modified lure based on what you learn from studying a single specific species of fish is much faster.

Specific is boring: Withdrawal pains from the epic vision

Intrinsic to the founding moment of a startup is the geyser of excitement about how a product will solve a problem across a wide range of customers and life experiences. Each new use case is a spoke in the grand wheel of the core ‘vision’ of a new startup. “…and if we can make people levitate, the possibilities are endless!!”

Even if you’ve been fortunate enough not to have developed a product yet, it is still psychologically crushing to put aside the grand vision and work on an incarnation of the product for a specific user group (a levitating stretcher that can move injured athletes off the field, or a levitating pallet that can move a load to a construction site across rocky terrain – so boring!!). It seems like we’re abandoning the vision if we begin work on only one part of it. It’s sitting down when we’re pacing around excitedly. It’s like someone telling you to walk in the hall when all you want to do is run. It’s hard – and it’s normal.

But I don’t want to pick the wrong one – and ‘miss out’ 

We know our product can solve problems that are common across types of users – what if we pick the wrong one? We’d better try them all and ‘see which one bites’.

The problem with this is that it’s very hard to ‘try them all’. The very fact of presenting a ‘solution’ pitched at several markets at once actively undercuts its appeal to any single market. This is because the language in which customers experience the problem – the terminology, the ‘angle of approach’ to the problem – is different for each customer group.

I feel this mentality is supported by a founder actually knowing very little about any one customer group – and therefore hedging their bets. Once you have a dozen conversations with people in a single customer group (market), you will begin to have confidence about whether or not that customer has a problem your solution would address.

An invitation to awesomeness

So your killer vision is an invitation to awesomeness, not the main course. Go out and talk to people who live around the problem you’re trying to solve. Do enough listening and it’ll combine with the vision and you’ll find the sweet spot of a real business.