Hacking for Defense @ Stanford – Week 4

We just held our fourth week of the Hacking for Defense class. This week the teams turned the corner on understanding beneficiaries and finding product/market fit. The 8 teams spoke to 115 beneficiaries (users, program managers, etc.); we sent each team a critique of their mission model canvas; we started streaming the class live to DOD/IC sponsors and other educators; our advanced lecture explained how to go from concept to deployment in the DOD/IC; and we watched as the students got closer to understanding the actual problems their customers have.

(This post is a continuation of the series. See all the H4D posts here. Because of the embedded presentations this post is best viewed on the website.)

Beneficiaries equals all the stakeholders
In-between class sessions, we reviewed each team’s mission model canvas and sent them a detailed critique of each of the boxes on the right side of their canvas. The critiques seemed to make a difference in this week’s presentations with a noticeable improvement in teams’ beneficiaries/stakeholder understanding. The teams are beginning to understand that beneficiaries mean “Not the name of an organization but all the stakeholders in an organization (users, program managers, saboteurs, legal, finance, etc.)” and that they can’t really understand customer problems until they can diagram the relationships among all the beneficiaries. Then, and only then, can they move on to developing a detailed value proposition canvas for each of the beneficiaries.

Some of the sponsors commented that the teams had a better grasp of the problem space, and a deeper understanding of the beneficiaries and their relationships to each other, than the sponsors themselves did.

Team Presentations: Week 4
Great technical teams often want to use the class as a product incubator, when we want them to spend an equal amount of time learning about the rest of the mission model canvas.

What we’re trying to prevent is teams giving the DOD/IC yet another great technology demo. They have plenty of those. What teams need to do is deeply understand all the stakeholders in their sponsor organization (analysts, seniors, finance, legal, etc.) so they can build a great product that solves real problems and can be widely deployed quickly.

Narrative Mind had an amazing week. The sponsor’s brief to the team is to figure out how to understand, disrupt, and counter adversaries’ use of social media. After 46 interviews the team could see that there were conflicting definitions of what problems needed to be solved. They realized that different beneficiaries were each describing a different part of a much bigger picture. Take a look at slide 3, where the team synthesized and summarized the hypotheses the beneficiaries hold about the problem. This was a big learning moment. Slide 4 was another insight, as they mapped out who actually owns the problem across multiple DOD and Intelligence organizations. Finally, their beneficiaries on slide 6 were focused and clear. This team is learning a lot.

If you can’t see the presentation click here 

Right of Boom had an insightful week with 19 customer discovery interviews this week across a broad range of beneficiaries. (See slide 2.) These interviews led them to conclude that their initial hypotheses (slides 3-5) were wrong. In slide 6 they were able to map out the entire IED (Improvised Explosive Device) reporting information flow.

And in slide 7 the team really narrows down their beneficiaries and value proposition. They came to an interesting conclusion about how to measure success in their Mission Achievement box.

If you can’t see the presentation click here

Sentinel started by trying to use low-cost sensors to monitor surface ships in an A2/AD environment. The team has found that their mission value is really to enable more efficient and informed strategic decisions by filling in intelligence gaps about surface ships.

The team started by diagramming the relationships among their beneficiaries (slide 2). They realized that this is just a start: now they need to overlay the surface-ship intelligence information flow shown in slides 16 & 19 on top of this org chart. Slides 3-6 are a good narrative of hypotheses validated, invalidated and refined during the week. Slides 8-11 are an excellent example of a deep understanding of the beneficiaries. Their Minimum Viable Product in slides 12-14 shows much more problem insight than the prior week’s (slides 18-21).

If you can’t see the presentation click here

aquaLink started the week believing they were working to give Navy divers a system of wearable devices that records data critical to diver health and safety, and makes the data actionable through real-time alerts and post-dive analytics.

This was a great but painful week for the team as they experienced a bit of an existential crisis while working to drill down into who their customer truly is. The original problem statement from their sponsor asked for a wearable sensor that would monitor the physiological status of divers. As they proceeded with customer discovery, the team found that the majority of the operators who would wear these sensors were ambivalent about the introduction of a vitals monitoring platform, but were much more excited about solving geolocation problems. On the other hand, the medical professionals and some commanders were more interested in monitoring physiological metrics in order to understand chronic long-term health issues facing divers and optimize short-term performance. Slides 2-6 illustrate aquaLink’s evolving understanding of the range of customer archetypes.

Their key take-away was that they would have to decide which beneficiary to focus on. They decided to focus on the operators and divers within SEAL Delivery Vehicle Team One, along with their immediate chains of command in SDVT-1 and Naval Special Warfare Group 3. These were the beneficiaries who viewed aquaLink’s focus on geolocation as the most valuable. See slides 7, 11 and 12.  The team recognized that it was time for a pivot and aquaLink will spend the rest of the class focusing exclusively on geolocation.

If you can’t see the presentation click here

Skynet is using drones to provide ground troops with situational awareness – helping prevent battlefield fatalities by pinpointing friendly and enemy positions.

The team made progress understanding the Special Operations Command (SOCOM) acquisition process in slides 3-5 and mocked up an MVP. However, they still list organizations as beneficiaries. We asked that they dive deeper into each of the stakeholders and create a diagram of how the beneficiaries actually interact.

If you can’t see the presentation click here

Capella Space is launching a constellation of synthetic aperture radar satellites into space to provide real-time radar imaging.

The team made progress understanding that some beneficiaries want raw SAR imagery and some want analytics. They are starting to understand the beneficiaries in the Coast Guard; however, they are stymied in trying to find the right people to talk to about commercial data acquisition at the National Geospatial-Intelligence Agency. We asked that they dive deeper into each of the stakeholders and diagram how the beneficiaries actually interact.

If you can’t see the presentation click here

Guardian’s problem to solve was to counter asymmetric drone activities. This week was a big leap forward in truly understanding their problem and beneficiaries. They did a deep dive (slides 5-7) into what, exactly, a forward operating base is. They refined their framing of the problem space (slide 4) and did a great job of mapping the workflow in slide 8. Their mission model canvas in slide 9 had a great update on their beneficiaries, while the detailed value proposition canvases in slides 10-12 gave great insight into the pains, gains, and jobs-to-be-done of those beneficiaries.

If you can’t see the presentation click here

Advanced Lecture:  Concept to Deployment in the DOD
This week Jackie Space and Lauren Schmidt gave the advanced lecture. Jackie, an ex-Air Force officer who spent her career managing overhead reconnaissance systems, flies up from LA every week and has now officially joined the teaching team. Lauren is a member of the Defense Innovation Unit Experimental (DIUx), based at Moffett Field in Mountain View, and advises our students along with multiple other members of the DIUx.

Slide 5 “purchasing authority” and Slide 6 “key activities” were real eye-openers for the team.

If you can’t see the presentation click here

Team Learnings
A few of the teams are now writing weekly one-page status reports to their sponsors and mentors – a great way to keep them informed and make them feel they’re part of the team.

It’s been fun to watch the teams learn from sponsors; a few teams have been broadening their sponsors’ understanding of the problem. (How do we know this? The sponsors asked their teams, “Can we use your slides to present to our organization?”) That’s a win for everyone.

This week we had one group of students volunteer to go to Iraq or Afghanistan to see the customer problem first-hand. Travel restrictions and other logistical challenges will likely make this trip infeasible, but the team’s genuine interest in getting to the ground level of customer discovery reflects well on their commitment to the principles of the course’s methodology.

Lessons Learned

  • Civilian students with no prior DOD experience can be taught to deeply understand military and intelligence problems and organizations in 4 weeks
  • These students are passionate and committed to solving problems that protect the homeland and keep Americans safe around the world

Filed under: Hacking For Defense, Teaching

Data-Driven Product Design at the BBC

Iwan Roberts discusses data-driven product design at the BBC - ProductTank September 2014

Iwan Roberts (Business analyst, BBC) is part of a relatively small agile team building location services at the BBC, continuously iterating for over a year now. In this ProductTank talk – “Driven By Data” – Iwan gives a whistle-stop tour of how his team has iteratively built a set of operational dashboards to help them understand their data-driven product, and unravel how users are actually behaving.

The Product

BBC Travel sounds pretty straightforward – you enter a location, you receive travel news across multiple travel modes (road, rail, etc.). The previous iteration of the product was released in 2009, and now the team (& the users!) wanted a responsive product, geolocation support, and to increase the speed with which incidents could be published and discovered.

The team also wanted to change how users could search for locations. Previously this was done via a simple, relatively coarse-grained list. Now, the product allowed users to search for very specific, fine-grained locations.

Editorial and Tracking Challenges

The BBC Travel team actually have very little editorial control over what they publish. Their data is almost entirely externally sourced, and so their reputation is based on the data they have access to. They also reused common components from other areas of the BBC (e.g. mapping and search) – which is fine and frugal in principle, but also means that they are even more beholden to external providers (e.g. Google for mapping).

When it comes to monitoring their systems, the team have a dashboard tracking everything from the data they ingest through to their publishing speed. It’s not pretty, but it provides real-time tracking of the health of their public-facing system, and allows them to make data-driven decisions as quickly and cleanly as possible. Even the dashboard was developed iteratively, in step with the development of the platform itself, with a total focus on providing accurate, real-time data (built using the Dashy framework, in case you’re interested).

Development processes

While developing V4 of the product, V3 was constantly live, allowing the team to iteratively release new features and UI changes, testing new code and features on existing systems. V4 of BBC travel was actually in Beta for months before the team launched – just before a bank holiday. While the deployment went smoothly, the feedback was surprising (especially bearing in mind the long beta – they weren’t expecting any surprises!) There were 3 common feature requests:

  • Order incidents by road type (released – everyone happy)
  • Maps on mobile devices (already on the roadmap – the team just brought it forward)
  • Links to county-level incident list (completely against the grain of the revamped product!)

The team immediately started trying to understand why users were asking for this list, and sensibly turned to investigating actual user behaviour. How were users currently solving this problem? Were they searching for counties? Maybe they were doing multiple searches? In both cases, the results couldn’t account for the volume of feedback.

Then, after looking at how users interacted with the map after searching, they saw lots of panning behaviours, suggesting that “counties” were a proxy for longer journeys. Crucially, this potential need never came up in user testing!

What Can We Learn?

By dint of a large user base and investment in analytics and research, the BBC Travel team has access to a large amount of user feedback and quantitative data. This is a goldmine when it comes to understanding their users, but the team have realised that they also need a clearer definition of success in order to make sense of that data.

Unlike some other products within publishing ‘constellations’, success for BBC Travel is actually quite a fleeting interaction. To better understand what it means, they’re conducting experiments – tracking the number of searches each user performs (too many implies poor UX or missing data), or the percentage of users who don’t reach a results page. This is data that their system now allows them to collect, but the only way to make sense of it all is to understand what their users are trying to do in their own contexts, not just the context of the product.
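Metrics like these fall out of a simple pass over an event log. A minimal sketch – the event names and log structure here are invented for illustration, not the BBC’s actual analytics schema:

```python
from collections import defaultdict

def search_metrics(events):
    """Compute average searches per user and the share of users who
    never reached a results page, from (user_id, event) tuples.
    The event names 'search' and 'results_view' are hypothetical."""
    searches = defaultdict(int)
    reached_results = set()
    users = set()
    for user_id, event in events:
        users.add(user_id)
        if event == "search":
            searches[user_id] += 1
        elif event == "results_view":
            reached_results.add(user_id)
    avg_searches = sum(searches.values()) / len(users) if users else 0.0
    pct_no_results = 100.0 * len(users - reached_results) / len(users) if users else 0.0
    return avg_searches, pct_no_results

log = [
    ("a", "search"), ("a", "results_view"),
    ("b", "search"), ("b", "search"),   # b searched twice, never saw results
    ("c", "results_view"),              # c arrived via a direct link
]
avg, pct = search_metrics(log)
```

A high `avg` or a high `pct` would then be the signal to dig into qualitative research, as the team did.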

If you’re not already gathering data about your users, and how they’re using your product – start immediately! But more importantly, your goal should be to understand what your users are trying to achieve beyond the limited context of your product – if you don’t, you’ll only ever have a limited understanding.

The post Data-Driven Product Design at the BBC appeared first on MindTheProduct.

30 Questions to Determine if the Product Manager Job is Right For You

This is a guest post that was originally published by Stephanie Oh on StephOhSays. Are you considering a career in Product Management, but feeling lost as to how you’re supposed to gauge whether or not the role would be a good fit for you? Several people have asked me for advice on the topic, so before you jump into taking a ... Read More

The post 30 Questions to Determine if the Product Manager Job is Right For You appeared first on Product Manager HQ.

A Framework of Experimental Habit Formation

One of the key challenges of living and working in the future will be continuous learning and experimentation. I’d like to propose a framework for guiding these efforts that is both feasible and focused on the individual: experimental habit formation. I believe it can help resolve one of the fundamental paradoxes of modern life: how to balance our need for stability and routine with our thirst for novelty and exploration.

Experimental habit formation is a precursor and gateway to behavior change. The question “How do I change?” is not enough, because it presupposes that you know which behaviors to adopt; even if you do, that these behaviors will lead to the outcomes you expect; and even if they do, that these outcomes will remain personally relevant and meaningful forever. By replacing these risky assumptions with tests, experimental habit formation provides a sandbox to “debug” new behaviors before wider deployment.

The first thing that’s clear is that experimental habit formation cannot be developed top-down like other business and self-improvement frameworks. To be feasible for the average individual, it needs to be built from the bottom up. Concretely, we need to start with actual lived experiences, building from there to communities of practice, and finally to academic theories.

I have a particular community of practice in mind, one made up of particularly well-documented lived experiences: The Quantified Self movement. QS is a global movement of people who measure various aspects of their bodies and lives — from their exercise to their productivity to their diet and far beyond — seeking to better understand themselves and their performance through quantification, broadly defined. QSers, as they’re known, attend meetup groups in cities around the world, where they tell their stories in a “show & tell” format. These short presentations answer three questions: “What did you do?” (i.e. Which aspect of yourself did you measure?), “How did you do it?” (i.e. Which tools or methods or procedures did you use to do so?), and “What did you learn?”

I happen to believe that almost nobody appreciates the true implications of The Quantified Self movement, not even its most avid practitioners. It’s not so much that the implications are bigger, just more interesting: QS is nothing less than a living and breathing blueprint for a community, an ideology, and a toolkit for continuous and substantive lifestyle experimentation.

Minimum Viable Behaviors

The first question we need to address is, “Why are habits good vehicles for behavioral experimentation?” Why not just try new things once or twice when it strikes your fancy?

Essentially, because habits are MVBs — Minimum Viable Behaviors. They have a clear beginning, middle, and end (cue, behavior, reward), making them easy to define and identify when they appear. The good ones tend to be internally coherent and inherently rewarding, thus self-sustaining. They are situated in a physical and social context, which makes them socially acceptable and lets them integrate relatively seamlessly into daily life. Perhaps above all, they align with human neurobiology.

Importantly for our purposes, habits are also well-suited to testing hypotheses. They are complex but can be measured as binary: did it/didn’t do it. They are discrete and disposable, with low barriers to entry and exit. Their time-series, repeating nature lends itself well to teasing out confounding influences: circumstances from willpower, time from location, episodic from continuous. Lastly, habits are famously difficult to create and sustain; yet every person maintains many habits, and they come and go all the time. This paradox is a strong hint that they flourish only as organic, emergent patterns. Since emergence is hard to fake, this gives us a high standard of success in our experiments.
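Because the data is binary and time-stamped, even a few weeks of it support the kind of cuts described above. A toy sketch of one such cut – separating circumstances from willpower by comparing weekday and weekend adherence – using invented data:

```python
from datetime import date

def adherence_by_daytype(log):
    """log: dict mapping date -> bool (did the habit happen that day).
    Returns (weekday_rate, weekend_rate) as fractions of days kept."""
    buckets = {"weekday": [0, 0], "weekend": [0, 0]}  # [done, total]
    for d, done in log.items():
        key = "weekend" if d.weekday() >= 5 else "weekday"
        buckets[key][1] += 1
        buckets[key][0] += int(done)
    return tuple(b[0] / b[1] if b[1] else 0.0
                 for b in (buckets["weekday"], buckets["weekend"]))

# Two weeks of invented data: reliable Mon-Fri, spotty on weekends.
log = {date(2016, 5, day): (date(2016, 5, day).weekday() < 5 or day % 2 == 0)
       for day in range(2, 16)}
weekday_rate, weekend_rate = adherence_by_daytype(log)
```

A gap between the two rates suggests the habit depends on a weekday routine (a commute, a schedule) rather than on motivation alone – exactly the sort of confound the time-series structure lets you tease out.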

Which brings us to the second question: why is it necessary or beneficial to frame new habits as experiments?

N=1 Science and Beyond

Let’s start by defining terms. An “experiment” in this context is any attempt at measuring any aspect of one’s life. There is no distinction between “observation” and “intervention” when the same person is both the researcher and the subject, because any attempt to measure the behavior inevitably changes it. As I’ll explain later, this is a feature, not a bug.

So the more specific question is, what is the value in using some degree of formal experimental structure in trying new habits, even for laypeople?

There is a temptation to view this question through the lens of the “guardians of science,” for example, a clinical scientist. From this perspective, using “a little science” seems much worse than using none at all. The scorecard does, at first glance, seem bleak: self-experiments involve minuscule sample sizes, radically subjective and non-standardized inputs, widely different measurement criteria and devices, non-normal and non-symmetric distributions that wreak havoc on statistical tests, and nary a hint of control, blinding, or randomization. By giving such a questionable process the stamp of “science,” aren’t we just inviting people to trust a conclusion that shouldn’t be trusted?

We could look to the well-developed field of Single-Subject Experiment Design to make a case for such studies. But I think the real answer is that the value of scientific thinking extends far beyond strict adherence to the scientific method. There is a scientific sensibility — a subjective, yet dispassionate mode of observing and thinking that lies at the heart of true inquiry. Richard Feynman called it “…scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards.” The scientific method is useful and necessary in the case of clinical trials affecting the health of millions. But sometimes, achieving statistical significance requires diluting the conclusions so much that their substantive significance is lost, at least at the level of a single individual. Who cares if a weight loss treatment is effective with 99.99999% confidence if the average effect size is 1 pound?

Self-experimentation, as messy and imprecise as it can be at times, is an excellent method for developing a scientific sensibility in the pursuit of self-knowledge. It relies on a behavior that most people already perform in some capacity — a 2013 Pew study found that 70% of Americans track some type of health indicator. It recruits a subject that everyone, no matter their education or training or resources, has access to — themselves. It focuses on things that every person cares about — their personal circumstances and lifestyle.

By relaxing the traditional requirements of population-sized clinical science, we lose universal validity, reliability, and replicability. But we gain a series of powerful benefits in our pursuit of self-improvement.

Five benefits, to be exact.

Concrete reflexivity

The first is concrete reflexivity. The best QS presentations tend to conclude with something to the effect of: “I didn’t come to any firm conclusions, and my results raised more questions than they answered, but I’m generally more aware of this aspect of my life.” Curiously, despite their lack of “actionable results,” the presenter usually concludes with a renewed commitment to self-track even more thoroughly.

This highlights an experience many self-experimenters have reported: that the self-awareness they gained in the process of self-tracking was the real reward. It was a reward they received regardless of what their data ultimately showed. Self-tracking enhances self-awareness by providing a concrete mechanism for self-reflection: the act of recording. So-called “active tracking” requires the subject to input something manually — a response to a question, a self-reported evaluation, or a device reading. Instead of self-awareness being something to ponder during intense meditation sessions on Nepalese mountaintops, it is manifested in something much more mundane: manual data entry. Both methods, it turns out, are capable of generating reveries of conscious attention.

As an example, the TrackYourHappiness project out of Harvard seeks to help people measure their moment-to-moment happiness, by sending them questions via a mobile app at random times throughout the day. The goal is to uncover the causes and correlates of happiness through individualized random sampling. I participated for a month, and had questions such as “How happy are you right now?”, “When was the last time you exercised?”, and “Where are you right now?” sent to me a few times a day. By cross-referencing my answers, the app generated reports of which people, places, and activities make me happiest.


As you can see, the reports are not particularly insightful, but I can report that the experience was jarring, in a good way. The prompts, arriving at random times via text message, helped me realize how much of the day I spend on autopilot, somehow barely conscious of what’s going on both outside and inside.
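Mechanically, reports like these are essentially a group-by: bucket each sampled happiness rating by the context reported alongside it, then average. A hedged sketch – the field names and sample data are invented, not TrackYourHappiness’s actual data model:

```python
from collections import defaultdict

def happiness_by_context(samples, context_key):
    """samples: list of dicts like {'happiness': 7, 'place': 'home'}.
    Returns mean happiness per distinct value of context_key."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[s[context_key]].append(s["happiness"])
    return {ctx: sum(vals) / len(vals) for ctx, vals in buckets.items()}

samples = [
    {"happiness": 8, "place": "outdoors"},
    {"happiness": 6, "place": "outdoors"},
    {"happiness": 4, "place": "office"},
    {"happiness": 5, "place": "office"},
]
report = happiness_by_context(samples, "place")
```

The power of the method comes not from this trivial aggregation but from the random sampling feeding it, which catches moments a diary or survey would miss.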

Which makes the conclusion that project creator Matt Killingsworth came to after analyzing many thousands of participants’ data (as told on this NPR podcast) especially intriguing and personally relevant: the single factor with the highest correlation with unhappiness across the entire study was mind-wandering. The more someone had their mind on something other than what they were doing — regardless of whether they were thinking about something more pleasant or less pleasant than what they were doing — the more unhappy they were likely to be, both while mind-wandering and in general. This is powerful evidence for the importance of what crunchy types would call “presence.” It’s also difficult to imagine how such a conclusion could be reached without random sampling via mobile devices.

When it comes to individual self-experimentation, the Hawthorne Effect is turned on its head: who cares if you change your behavior because you know you’re being watched, when watching yourself continuously is the whole point?

Personal Relevance

A second benefit to experimental habit formation is that it passes the first and strictest filter we place on all incoming information: is this personally relevant to me?

Paul Lafontaine in this presentation describes his experience tracking his heart rate variability (HRV) continuously throughout his workweek. On this particular day, he was informed 30 minutes ahead of time that he would be briefing six senior executives on a proposal he was developing. One of the executives was attending uninvited specifically to oppose the proposal being discussed. Paul’s first reaction to the news was “This will be a great HRV reading.” Gurus can only dream of such objectivity.


The graph shows the output from his HRV tracking device. Reviewing the data and comparing it with his memory and notes, he realized that the meeting was divided into three parts: his initial presentation (interval 851 to 2976), then a low-key period (2996 to 6000) while the others discussed, followed by a second round of questioning and defense of specific points (6376–8926). Understanding that this is likely a common pattern for his briefing meetings, where he is responsible for both explaining and defending an idea, Paul was able to develop a new strategy: targeting that mid-meeting “break” to regroup, identify which points he wanted to focus on for the second half, and even use his breathing and relaxation techniques to get a second wind.
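Segmenting a meeting this way amounts to averaging a reading stream over labeled intervals. A sketch under the assumption that readings arrive as a simple list indexed by sample number (the readings here are invented; the three-part structure mirrors the talk’s segmentation):

```python
def interval_means(readings, intervals):
    """readings: list of HRV values indexed by sample number.
    intervals: dict of label -> (start, end) index pairs, end exclusive.
    Returns the mean reading per labeled interval."""
    return {label: sum(readings[a:b]) / (b - a)
            for label, (a, b) in intervals.items()}

# Invented readings: lower HRV during the stressful presentation and
# defense phases, higher during the mid-meeting lull.
readings = [40] * 3000 + [70] * 3400 + [45] * 2600
meeting = {"presentation": (0, 3000),
           "lull": (3000, 6400),
           "defense": (6400, 9000)}
means = interval_means(readings, meeting)
```

Once the pattern of a low-stress lull between two demanding phases is visible in the numbers, the strategy Paul derived — using that break to regroup — follows naturally.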

Notice that these directives are both actionable and relevant to this individual’s workplace and physiology, in contrast to the vast majority of online articles full of generic “productivity tips.” They are based on objective records that can be repeated and reinterpreted, or compared with others. Notice that Paul would not and could not draw universal conclusions from this study about what others should do. At the same time, if you decided this topic was relevant to you, you’d have a clear path forward to performing the experiment yourself.


Creating Your Own Context

One of the most common accusations leveled against self-tracking is that it lacks context. By isolating one factor (steps, standing minutes, “fuel points,” etc.) and giving it the false authority of numbers, the accusation goes, many of the inter-relationships so important to human behavior are lost.

But in my opinion, as you may have guessed, this contextlessness is actually a major feature. When a particular behavior or health metric is removed from its context, at least temporarily, one has the opportunity to create a new context. Instead of adopting the “meaning” this behavior holds for your social group or the culture at large, you can decide for yourself what it means for you.

In this talk, Anne Wright describes her experience looking for the cause of an unspecified, debilitating condition she suffered from. After years of appointments with specialists offering generic, unhelpful advice, Anne tried an elimination diet that indicated she had problems with bell peppers, tomatoes, and eggplants. It turns out they are all part of the nightshade family, which contains a neurotoxin that inhibits cholinesterase, a vital enzyme. Who knows how many specialists she would have had to see over how many years before she arrived at such a specific answer, if ever?

Anne’s analysis of our medical system is something many of us can relate to: it is like a giant pinball machine, bouncing you around trying to fit you into a predetermined slot. If you don’t fit anywhere, you end up at the bottom with no answers and a stack of medical bills. You are then subtly (or not so subtly) persuaded that it is “just in your head” or otherwise unworthy of serious consideration. If you are extremely tall, you know you are an outlier, and can take measures to compensate for a world designed for the median. But for many things, medical and otherwise, you don’t know where on the distribution you fall. We are all victims, at some point in our lives and especially in our most unique traits, of the Ecological Fallacy — inferences about us made from inferences about a group to which we belong, which are then turned into individual prescriptions presented as objective facts.

Creating one’s own context for a life change is difficult, but crucial, as numerous studies have shown that people are more likely to achieve goals they set for themselves. It allows people to focus on optimization — improving what’s already working to a certain extent — instead of what specialists from medicine to psychiatry to social work to substance abuse tend to focus on — remediation and intervention in extreme cases. As avid QSer Bob Troia says of his experience with self-tracking:

“I thought I felt great, and then you realize that you’ve sort of been going through life for a while with the parking brake on…And when you start fixing all of those areas, you’re like, ‘Wow! I didn’t realize.’ It wasn’t that I felt bad, I just didn’t realize I could, I should be feeling better.”

Perhaps most crucially in making this whole endeavor feasible, contextualization is the key to locating the reward in the process itself, not just the outcome. Explicit rewards have been shown to do more harm than good in anything but routine, repetitive tasks. They reduce not only performance, but also risk-taking and creativity. For people to enjoy the process, they have to do things “in their own way.” And that they do: tracking their farts and sneezes, representing their results via sound and sculpture, and quantifying home births and their cats’ movements.

Vicarious Learning

The fourth benefit that experimental habits provide is the opportunity for vicarious learning — learning through the experiences of others. By documenting the experience in some form — whether it is spreadsheets, graphs, photos, or written accounts — self-tracking provides an artifact around which a community can cohere. This community is the real secret to why QS works.

The fact is, in spite of our illusion of autonomy, most learning is social learning. Even in an especially self-motivated and science-literate group like QS, most new behaviors are picked up by watching others try and fail, and occasionally succeed. I’ve been astounded to discover that this mimicking behavior isn’t just a quirk of human psychology, but seems to be a property of all sorts of networks, from agent-based simulations to macaques and homing pigeons to machine learning tournaments.

The local meetups create the perfect conditions for networked social learning to thrive: meeting people in person facilitates new connections and trust-building among existing ones; informal talks allow amateurs to present their findings in a non-intimidating format, allowing others with similar interests to self-organize into broad areas like fitness, productivity, mindfulness, and diet, while remaining connected enough to the others to benefit from new discoveries. Lastly, posting the presentations online at a central location allows them to spread as far and wide as possible, drawing greater attention in a network effects-driven cycle.

A recent example of the power of this flywheel has been the movement to hack glucose-monitoring devices using off-the-shelf hardware and custom software. Apparently, the value of continuous monitoring is only realized when that stream of data can be accessed remotely, by a diabetic child’s parents, for example. The article above portrays this movement as new, but I’ve watched it simmer for years in the QS community, only now bubbling up to the surface. And bubble up it has: the FDA has reclassified these devices partly to encourage user-directed innovation. An open-source group, OpenAPS, is adding insulin pumps to the loop in the hope of one day creating a fully functioning artificial pancreas.

Risk mitigation

Finally, an experimental approach limits the risk to one’s sense of self-efficacy, which I described previously as the single greatest barrier to behavior change. There is a limit to how many times someone can fail and “try, try again” before their faith in their own abilities starts to erode. Discrete experiments give you more attempts by turning the fundamental attribution error to your advantage: containing failure to a particular experiment, while taking general credit for successes.

Experiments do this by increasing the number of ways to win, while reducing the number of ways to lose. Experiments cannot fail — they can only produce results. At worst, the null hypothesis is confirmed, helping you narrow down the factorial space. Either way, you learn something, making your next try more likely to succeed. By treating changes as situational and temporary, and holding your character constant, you mitigate permanent damage to it. Instead of tying a single habit to a single path-dependent goal — where the failure of the habit is interpreted as the impossibility of the goal — you identify multiple routes. If one doesn’t work, you move on to the next.

I often recommend “habit cycling”: trying one new habit per month, on a regular schedule. Start on the first of the month, even if you feel unprepared. Especially if you feel unprepared, since your expectations will be lower. It can be as easy as drinking a glass of lemon water each morning (one of my personal favorites), or as big as starting a new exercise routine. The point is to avoid analysis paralysis and lower the stakes by making new experiments just part of the routine, not some pivotal crossroads. I recommend stopping the habit after 30 days, even if — especially if — it’s going well. Take a few days off and reflect on what you learned, why it worked or didn’t, and what you want to change going forward.

The reason? It is a terrible feeling to fail, and not know why. But in some ways it’s even worse to succeed, and not know why. Was this a fluke, or did you just uncover a fundamental new truth about yourself? Success is a terrible thing to waste, because it’s so rare.

When you change the way you look at things, the things you look at change

Living an experimental lifestyle is an infinite game: the goal is not to win, but to keep playing. These aren’t just n=1 experiments; they are t=∞ experiments.

I once read a haunting sci-fi short story that put a new twist on the idea of “life as an infinite game.” The story centered on a man living in a futuristic, hyper-prosperous civilization. All of society’s problems had been solved, and death was a distant memory. It had been discovered that humans are not meant to live more than about 125 years. It wasn’t a limitation of physics, biology, neurology, or technology — it was a fundamental limit of conscious self-awareness. Each passing year brought more reasons to not do things than to do them, gradually narrowing into a nihilistic tunnel vision that led inevitably to insanity. The main character developed a strategy to cope: every 100 years he dedicated to a personal passion, at the end of which he would have his mind wiped, only to start again on something else. The story recounts the final moments of a century dedicated to the study of insects. Gazing on cases upon cases of meticulously collected and catalogued bugs from every corner of the world, the man reminisces about his vast entomological experience. As the clock winds down and the mind-wipe begins to take effect, he looks forward to his reincarnation with a childlike anticipation that he hasn’t felt in years.

What strikes me about this story is that this man is in exactly the same position as us. Despite the vast technology at his disposal, neither his past selves nor his future selves really matter. There is no passing of information or identity between lives — for the strategy to work, each life has to be completely sealed off from the others. His challenge in a literal infinite game is exactly the same as our challenge in a metaphorical one: keeping things interesting enough to stay motivated.

There is risk and urgency in this challenge, and not just because we’re mortal. As Lewis Hyde explains in The Gift, imagination has a half-life:

“…when possible futures are given and not acted upon, then the imagination recedes. And without the imagination we can do no more than spin the future out of the logic of the present; we will never be led into new life because we can work only from the known.”

This is the deeper promise of experimental habit formation: it provides a way of acting on possible futures without risking too much in the present. It addresses the fundamental tension — between routine and novelty, stability and exploration — by giving us just enough structure to feel comfortable dancing along the frontier between them. Like a deep-sea exploration vessel, it allows us to roam the ocean depths in shorts and t-shirts, and once in a while discover something remarkable.

Thanks to Nicolas Laurent, Zac Pullen, Mike Dariano, Doug Peckler and Jason Lay for their ideas and suggestions.

Lean Startup Week 2016: Call for Speakers

Guest post by Kirsten Cluthe, editorial director of Lean Startup Co.

Speaking at Lean Startup Week offers renowned and emerging industry leaders the opportunity to share their stories with our global community. And by renowned and emerging, we mean you, person who deserves recognition from our community of 2,000 attendees for the awesome work you’re doing! If you’re interested in presenting at our flagship conference during Lean Startup Week Oct. 31 - Nov. 6 in San Francisco — alongside folks from Google, General Assembly, Hint Water, Sama Group, GE, Salesforce, and IBM, among others — we’d love to hear from you.

Don’t worry about having some kind of conference track record. Our speakers hail from scrappy startups, global enterprise companies, government agencies, faith-based organizations, and the education and social sectors. We highly value diversity in our lineups, and we encourage people of all genders, races, ages, and ethnicities to apply.

If you have Lean Startup experience to share, we encourage you to propose a talk via our Call For Proposals form, regardless of whether you have public speaking experience. Submit your idea as a short video, ideally under three minutes. iPhone videos are totally acceptable, just make sure the sound quality is high enough that we can hear you. Here’s an example of a speaker application that we loved.

There are a limited number of spots available to speak. Below, you’ll find a few helpful tips on how to submit a proposal:

  • You don’t have to be a Lean Startup all-star to apply. You just need a good story, useful tips, compelling advice, or practical applications to share.
  • The core of your proposal should be simple. Focus on answering one of the questions posed in the Call For Proposals form. (You’ll find them on page 2.)
  • Deliver the pitch in your application as though you’re speaking from a stage. Although there’s still time to practice, stage presence matters.
  • Presentations in 2016 will be shorter but no less dynamic. Design your pitch as if you were giving an Ignite talk. Here’s more information on how to create an Ignite style talk. 

A few reasons why our speakers decided to participate in the 2015 conference:

“I really got a lot out of Lean Startup [Conference] 2014. ... It has been a great tool for me and my team to make real transformation.” - Freyja Balmer, Director of Product Management, Food.com at Scripps Networks Interactive Inc.

“[I realized] that my experience was valuable for others to hear...[It was] nice to be needed. I [felt] compelled to ‘give back’ as others have done for me.” - David Telleen-Lawton, Career Development Manager, UC Santa Barbara

“I wanted to get more connected to a strong startup community, share my perspective and experiences, and also continue to establish myself and my company among other thought leaders, influencers and doers.” - James Warren, founder, Share More Stories

Ready to apply? We want to hear from you! Applications are due by Friday, May 20, 2016.

A Four-Tier Approach to Combating Feature Bloat

About the Guest Author:

Sara Aboulafia is a member of the marketing team at UserVoice. Outside of work, Sara writes and performs music, binge-watches comedy, and spends an unhealthy amount of time futzing with technology before happily retreating to the woods.

Most product people are familiar with the sticky problem of feature bloat. You may also recognize the issue by its other uncomfortable names: feature fatigue, feature creep, or feature overload.

It’s the thing that happens when, for instance, the machine you bought to make the world’s best, quickest espresso also tells you the weather and time, is equipped with an AM and FM radio, has ten settings for temperature, bonus options for froth consistency (thin to thick), and comes equipped with iPhone-integrated alarm tones that tell you when your espresso is done (that part admittedly sounds sort of neat).

While the makers of this espresso machine may have listened to user wants, the inevitable loss of frustrated and confused current and potential users would indicate that the creators didn’t focus on user needs. Yeah, a handful of users may have asked for bells and whistles, but fundamentally what they wanted was a damn good, quick espresso. Put another way, the espresso machine makers prioritized capability over usability and ignored real customer needs.

As MSI.org wrote in a study of the feature fatigue problem over a decade ago: “Because consumers give more weight to capability and less weight to usability when they evaluate products prior to use than they do when they evaluate products after use, consumers tend to choose overly complex products that do not maximize their satisfaction, resulting in ‘feature fatigue.’”

Echoes Rian Van Der Merwe in his more recent article How to Avoid Building Products That Fail: “…more isn’t necessarily better.” Touché.

While loading up features in the beginning may buy you some early users, the high cost of alienated, churning customers, the expense of accruing and paying down additional technical/engineering debt, the potential confusion regarding your product and brand identity, and frustration from engineering and product teams (complete with a loss of organizational trust in your decisions) can lead to catastrophe.

Of course, the espresso machine is a simplified example. In reality, your product-bloating features may initially seem essential but turn out in time to be unnecessary — at which point it may be too late. Your challenge, then, is to decide which features your product and customers actually need and which you can realistically afford and to avoid the others like the plague.

So how to avoid falling into the tempting trap of feature overload? With the laser-focused ammo of a four-tiered process: be informed, mindful, selective, and cutthroat with your feature decisions.

1. Make More Informed Decisions by Validating Feature Requests

Considering all customer feedback does not mean taking all customer feedback. This means you must separate the signals — the worthwhile feedback — from the noise. You can do so by asking validating questions about the quality and significance of customer feedback before acting on it. Primary indicators of “valid” feedback include volume, frequency, request source, and intent.

Is a feature request high-volume or limited to a few customers? Does it come up with some frequency? If you say “no” to one or both of these questions, it may be a sign that the request is noise.

Additionally, the source of the feature request matters: you should know how long the person behind a request has been a customer, their use case, account plan/level/type, their industry, and — depending on your company — how satisfied they are with your product overall (NPS). This will give you a greater measure of the request’s relevance.

If the feature seems actionable, then dig a little deeper into the intent of the request: What is the underlying pain behind it, and does that pain justify the feature? It is your job to use the product management resources at your disposal — from usability tests to field studies to surveys — to uncover your customers’ true needs. Ensure that you only build things that solve actual, real problems, or you may find yourself sagging under the weight of unnecessary features.
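To make the signal-versus-noise triage concrete, here is a minimal sketch of such a filter in Python. The field names, thresholds, and the `is_signal` helper are all illustrative assumptions, not part of any real tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureRequest:
    title: str
    requester_count: int            # volume: how many distinct customers asked
    requests_per_month: float       # frequency: how often it comes up
    underlying_pain: Optional[str]  # intent: the real problem, if one was found

def is_signal(req: FeatureRequest,
              min_requesters: int = 5,
              min_frequency: float = 1.0) -> bool:
    """Treat a request as signal only if a real pain backs it
    and it clears both the volume and the frequency bars."""
    if req.underlying_pain is None:  # no underlying problem -> noise
        return False
    return (req.requester_count >= min_requesters
            and req.requests_per_month >= min_frequency)
```

In practice, the thresholds would come from your own request volumes, and the request source (customer tenure, plan, industry, NPS) would weight the decision rather than gate it.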

2. Make More Mindful Decisions By Considering Complexity Costs and Technical Debt

You know the old saying, “There’s no such thing as a free lunch?” That adage applies to features, too.

Development and support costs are just the beginning of a feature’s expense. But woe to the product manager who does not consider the complexity costs and technical debt a given feature adds to their product.

As Kris Gale, VP of Engineering at Yammer, put it to First Round Review: “Complexity cost is the debt you accrue by complicating features or technology in order to solve problems. An application that does twenty things is more difficult to refactor than an application that does one thing, so changes to its code will take longer.” What may take two weeks of coding to develop, for instance, may take much more work and time to maintain down the road.

You must decide whether or not a feature’s complexity cost is necessary or unduly burdensome. A potent mix of common sense, intel from engineering, and data testing that evaluates the feature’s usability and usage can help drive this interrogation. If you do decide to move forward, make sure to continuously check that customers are using and getting value from the feature (see: that it’s meeting your customers’ needs). Have you accidentally made their experience unnecessarily complex? If so, take stock, pivot, and be more mindful moving forward.

3. Make More Selective Decisions By Considering What Matters and Saying NO

“Yes” sounds good, doesn’t it? We tend to equate “yes” with positive, happy things, and “no,” with negative, bad things. But it isn’t that simple.

Oftentimes a “no” to a feature request can actually mean a yes — a yes to remaining aligned with your product vision, your overall product strategy, and other business goals. These are the things that really matter and can be forgotten when in the weeds of product and feature development.

Feature doesn’t solve an actual problem? No. Doesn’t align with your company’s core purpose? No. Not enough demand for a feature (as you’ve discovered when validating it)? No way, José.

If you’re doing your job, “no” should be the answer to the majority of your feature requests. That way, the features that get a “yes” are the ones you, your team, and your customers know have real value. Being more selective also has the added, crucial effect of making you a more trustworthy product manager, which means you can more effectively make decisions that combat the feature bloat that makes your team and customers feel, well, sick.

4. Make More Cutthroat Decisions by Removing Bad Features

Saying no to proposed features is often not enough. Sometimes you’ll need to be even more brutal by doing away with poor product features entirely. These are vampire features — learn to recognize them, and then kill them before they kill your product.

If a feature’s complexity cost outweighs its benefits, if it doesn’t meet real customer needs, if only a handful of customers use it, if it’s outdated or irrelevant, etc. etc. — take a deep breath and cut it.
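As a rough sketch of that audit, the criteria above could be expressed as a simple filter. The usage threshold and the cost/benefit fields are invented for illustration; real numbers would come from your analytics and your engineering estimates:

```python
def vampire_features(features, min_usage_pct=5.0):
    """Return names of features that are candidates for removal:
    complexity cost outweighs benefit, or almost nobody uses them."""
    cut = []
    for name, stats in features.items():
        barely_used = stats["active_users_pct"] < min_usage_pct
        bad_tradeoff = stats["complexity_cost"] > stats["benefit"]
        if barely_used or bad_tradeoff:
            cut.append(name)
    return cut
```

The output isn’t a kill list so much as a short list for the harder conversation: confirm with engineering and support data before you whack anything.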

Yes — you (and your engineers… and others) have given this feature life, nurtured it, and even made sure it’s easy on the eyes. But what has it done in return? It’s harmed your business and the integrity of your product. Make like a mob boss and whack it.

In Sum

Feature bloat can cost you precious resources, weigh down your team, and harm the integrity of your product. By making informed, mindful, selective, and cutthroat feature decisions, you can ensure that you’re building a focused, manageable product that meets a true customer need.


Blameless Post Mortems – Bringing the Donuts 04/20/2016

How do you respond when things go wrong? - The NBA’s Steph Curry missed almost 55% of his three-point shots this season. Last year, baseball’s Dee Gordon didn’t get a hit two-thirds of the times he stepped up to the plate. SpaceX failed to land their Falcon 9 reusable booster rocket in 80% of their attempts. That’s one way to...

Product Management Rule #42: These Are Our Rules. What Are Yours?

Product Management Rule #42 from the best-selling book, 42 Rules of Product Management, was written by Greg Cohen, Senior Principal Consultant, 280 Group As we look to advance our products, we must be aware of the larger picture and environment in which we work. Rules are designed to prevent failure and, in particular, the repetition of failure. The majority of rules in the book emerged from the personal, observed, or near failures of each rule’s author in bringing a successful product...[continue reading]

The post Product Management Rule #42: These Are Our Rules. What Are Yours? appeared first on 280 Group Product Management.

Creating Digital Products & Services at The Guardian

Nick Haley at ProductTank, talking about digital product design at The Guardian

Nick Haley, Head of UX at Guardian News Media, joined us at ProductTank at the BBC to give us practical examples of how his team is becoming more agile and how digital products are iterated every day with real users, breaking his thoughts down into five core principles.

1 – Know Your Audience

The UX teams at the Guardian spend a lot of time thinking about flow and digital touchpoints, and this – by necessity – is a data-heavy process. Multi-platform journeys are now the norm (even down to smart watches), and so large amounts of data make it possible to map these journeys, and see not only when people are using services, but where, how, and (to a certain degree) why. Data also helps us start to see not only user journeys, but the life-cycle of digital content.

2 – Create a Vision

Having a Vision of what you want to create is important, but it’s crucial that you’re able to share that with others, in whatever format makes sense. Whether you create a poster, a film, or even a working prototype, having a means of communicating your vision helps align your team and helps you to start spotting challenges and issues. At the same time, you need to be able to validate your vision:

  • Can users actually use it?
  • Do they want to use it?
  • Can the developers build it?
  • Can your stakeholders support it? (is it going to be true to the brand, support the commercial team…?)

3 – Design in the Browser

One of the biggest changes at The Guardian in recent years is that they’ve moved away from building wireframes, and now build HTML prototypes directly in the browser. This allows the team to test with real content and real devices, and get more meaningful feedback. This leads on to principle number 4…

4 – Test Often, Make Decisions

With a new UX Lab and embedded user research, product teams can run quick, lightweight tests and maintain a good flow in their work, shipping improvements to their product. At the same time, the dev teams are practicing Continuous Delivery. This allows you to continuously learn from features – release one on Monday, and refine it into a fantastic state by Friday.

However, while it’s easy to test, it is absolutely crucial that you also make hard decisions, and close down certain paths to focus on others.

5 – Respect the Craft

Everyone on your team will have a varied background – maybe a few of your UX specialists used to be developers, for example. Respecting each other’s crafts means that you have to acknowledge that your developers are the best people to do code reviews and deployments, or that your designers are the right people to make decisions about product UI and UX.

Don’t undermine each other – talk to your colleagues and have them contribute. Create a great team.

The post Creating Digital Products & Services at The Guardian appeared first on MindTheProduct.

Hacking for Defense (H4D) @ Stanford – Week 3

We just held our third week of the Hacking for Defense class. This week the 8 teams spoke to 108 beneficiaries (users, program managers, etc.); we held a Customer Discovery workshop; we started streaming the class live to DOD/IC sponsors and other educators; our advanced lecture was on Product/Market fit for the DOD/IC; and we watched as the students solved their customer discovery obstacles and started getting closer to their customers.

(This post is a continuation of the series. See all the H4D posts here. Because of the embedded presentations this post is best viewed on the website.)


Customer Discovery in the DOD/IC Workshop
We normally hold a Customer Discovery workshop during the evening of the first week of the class. But spring break and the “How to Work with the DOD” workshop got in the way. So we inserted an abbreviated version at the front of this week’s class.

When working with the DOD/IC there are some unique obstacles to “getting out of the building and talking to customers.” For example, members of the DOD will not respond to “cold calls,” and those in the Intel community won’t even tell you their names. In addition, most of the sponsors are working on classified problems. So how do teams understand the customer when the customer can’t tell you what they do? The workshop talked about how to address those and other Discovery issues.

If you can’t see the presentation click here

Team Presentations: Week 3
After the Customer Discovery workshop the 8 teams presented their hypotheses, their experiments, and what they learned outside the building this week.

Team Right of Boom (previously named Live Tactical Threat Toolkit) is trying to help foreign military explosive ordnance disposal (EOD) teams better accomplish their mission. The team originally was developing tech-centric tools for foreign teams to consult with their American counterparts in real time to disarm IEDs, and to document key information about what they have found. Now they are homing in on providing accurate, high-volume post-incident IED reporting.

Last week this team was floundering. They had confused getting interviews and building minimum viable products with truly trying to “become the customer.” We strongly suggested that there was no way they could understand a day in the life of an explosive ordnance disposal expert by just listening to them – they needed to stand in their shoes. So to their immense credit the team suited up in full bomb disposal gear and got out of the building. They earned our respect (and a name change for the team).

If you can’t see the Right of Boom video click here (turn up the volume!)

If you can’t see the Right of Boom presentation click here


Team Capella Space is launching a constellation of synthetic aperture radar satellites into space to provide real-time radar imaging.

This week the team learned a ton. They mapped out competitive offerings and found that Government funding is not the proper channel for Capella, but they did find that the Coast Guard is currently in dire need of high-resolution situational awareness and that military customers want access to raw data, while commercial customers highly value processed data for actionable insights.

If you can’t see the Team Capella Space presentation click here

Team aquaLink is working to give Navy divers a way to monitor their own physiological conditions while underwater (core temperature, maximum dive pressure, blood pressure and pulse.) Knowing all of this would give divers early warning of hypothermia or the bends.

This week they validated that divers will want real-time alerts about vitals that threaten mission success (and will put up with the additional gear/procedures). They found that Navy medical researchers want data on vitals, the rebreather (air consumption), and the dive computer (dive profile). Their hypotheses going forward are that a heads-up display is the ideal form of information transmission during a dive, and that the system should be modular to allow for the integration of evolving technology (geolocation and communication).

If you can’t see the Team aquaLink presentation click here

Team Guardian is working to protect soldiers from cheap, off-the-shelf commercial drones. What happens when adversaries learn how to weaponize drones with bullets, explosives, or chemical weapons?

Guardian’s current hypothesis is that they have to provide drone detection, identification and protection against attacks from drones or swarms of drones. And that the user will be a 19-year-old soldier not trained to use complex equipment.

If you can’t see the Team Guardian presentation click here

Team Narrative Mind is trying to understand, disrupt, and counter adversaries’ use of social media. Current tools do not provide users with a way to understand the meaning within adversary social media content and there is no automated process to disrupt, counter and shape the narrative.

The team is coalescing around the idea that the two minimum viable products for their sponsor are: 1) automatically generate an organizational chart of a target terrorist group over time, and 2) generate a social network map of how terrorist groups interact with each other.

If you can’t see the Team Narrative Mind presentation click here

Team Sentinel initially started by trying to use low-cost sensors to monitor surface ships in an A2/AD environment.

The team has found that their mission value is really to enable more efficient and informed strategic decisions by filling in the intelligence gap about surface ships in an A2/AD environment via:

  1. Increased number of data streams (i.e. incorporate open source data)
  2. Automated data aggregation (i.e. from disparate sources) and analysis
  3. Enhanced intel through contextualization
  4. Improved UI/UX

If you can’t see the Sentinel presentation click here

Team Skynet is also using drones to provide ground troops situational awareness. (Almost the inverse of Team Guardian.)

The team invalidated the hypothesis that military/commercial systems already exist that could solve the problem. In addition, they originally believed that soldiers on foot needed a deployable drone system. They discovered that drones are best used by teams with vehicles or for short-range dismounted operations.

If you can’t see the Team Skynet presentation click here

Advanced Lecture: Product/Market fit in the DOD/IC
The advanced lecture for week 3 was on the unique needs of finding Product/Market fit in the DOD/IC. Pete Newell explained why solutions in the DOD fail and then described “battlefield calculus” – how two identical-sounding missions (and their inherent problems) are actually radically different based on what echelon of force executes them, the size of the force, their location, even how well they are trained. Despite the obvious, people still try to deliver “one-size-fits-all” solutions. To ensure a solution is actually used, it is important to become familiar with the pattern of life of the user and their unit.

Pete also pointed out that teams need to “Look for Conflict” between what may have been provided to solve a similar problem and the solution the teams are about to recommend. You need to ask: Are the circumstances similar? Or are there a myriad of conditions present that will invalidate what was a good solution under different circumstances?

If you can’t see the presentation click here

Mission Model and Value Proposition Canvas
To students, “who are the beneficiaries?” feels fuzzy on day one. And given that most of them had no exposure to the DOD or Intel Community, it’s not a surprise. The reason we have the teams talk to 10-15 people every week is that with enough data they can begin to fill in the details. A few of our guests have commented on how knowledgeable the teams were in talking about the sponsor organizations and problems.

That said, listening to the team presentations there was a wide difference between teams in how well they understood the definition of “beneficiaries.” Many of the teams were still listing names of organizations rather than the titles and archetypes of the people who mattered, cared, decided, or used the product.

Understanding who the beneficiaries are is critical to understanding the rest of the mission model canvas.

Only when the students have a more nuanced understanding of who the individual beneficiaries are can they build a detailed Value Proposition Canvas for each beneficiary that makes sense. (Several teams had Value Proposition Canvases for organizations, some had fewer canvases than they had beneficiaries, and some canvases were so generic it was clear the teams had insufficient data on the individual needs of specific archetypes.)

This is all par for the course and part of the student learning. We now need to sharpen their focus.

An after-class action for the teaching team is to read through every team’s week 3 presentation slide-by-slide and give each team a detailed, written, box-by-box critique of the right side of their Mission Model and Value Proposition Canvas. We want to help them get this right.

Sponsor Education – a Network Begins to Form
The teaching team, liaisons, mentors and DIUx are all working their networks to get students relevant beneficiaries to talk to. (More about what a wonderful asset DIUx has been in a future post.) Joe and Pete are continuing to work hard on educating the sponsors about their role. (We are collecting all our learning in an Educators Guide so other universities can run the class.)

One unexpected emerging benefit is that Pete and Joe are continuing to expand the network of innovators in the DOD/IC who are helping our student teams. Several of them have critiqued our presentations and offered suggestions on nuanced parts of the IC mission and acquisition system that I didn’t understand.

Live Streaming the Class
The DOD/IC sponsors who gave us these problems were curious about how the teams were learning so rapidly. (Others in their commands and agencies wanted to watch as well.) So this week we began to live-stream the student presentations. And other universities who want to offer this class have begun to have their educators watch the class. (We’ll be offering a train-the-trainer educators class later this year.)

Lessons Learned from Week 3

  • Teams still running at full speed
  • Understanding beneficiaries is critical to understanding the rest of the mission model canvas.
    • A written, team-by-team offline critique is needed to keep them on course
  • Support is coming from lots of places in the DOD/IC
    • DIUx and our liaisons have been great in connecting the students

Filed under: Hacking For Defense, Teaching