One of the trickiest parts of Product Management is roadmapping – figuring out what to build. There are almost as many strategies and approaches to figuring it out as there are product managers!
I’m currently working on roadmapping for a major effort at Rockbot, and so I’ve been thinking a lot about my process and how process shapes results more generally. One thing I’ve realized is that my background as a political scientist – my first career track before I pivoted into tech in my late 20s – has shaped my approach in some unique ways. I’d like to share some of those in the hope that they may prove useful to others.
At root, Poli Sci is about using the scientific method to understand what people care about and what drives behavior at the scale of societies and cultures. A Product Manager would describe that effort in terms of defining a problem space, identifying pain points, and then finding the right solution to the right pain point. In many ways, the biggest difference is the terminology used and the scale of the problems the two disciplines address. Many of the actual tools are cross-applicable or even identical.
The difference is that political science students undergo explicit training in avoiding selection bias, performing statistical analysis, conducting surveys, organizing dialogue groups and listening sessions, and the other tools of the trade, while there is no generally accepted training curriculum for Product Management. Unfortunately, that often means PMs have to reinvent the wheel. It is my firm belief that many if not most tech startups could improve their Product Management practice by actively recruiting poli sci grads who have that training.
Let me give a few examples.
Surveys are one of the most powerful tools for any product manager seeking to understand a new problem space. Unfortunately, unconscious bias in the framing of questions, in the selection of who to survey, and in the analysis of data can all cause problems that derail your product planning.
First, think about the questions you are asking. Have they been framed in a way that allows multiple valid answers? What assumptions have you made in the framing? What biases (conscious or otherwise) are embedded in how you ask the question? Members of any organization or in-group carry a mental map of the group, complete with a host of unspoken assumptions. In tech, that inevitably includes assumptions about what the product does and how it will be used – assumptions your customers and potential customers may not share. Making them explicit and deliberately validating or discarding them is critical to making sure you’re asking the right questions.
When you ask someone how you should solve a problem you presuppose that this is the correct problem to solve. Is that necessarily the case? How do you know? What adjacent problems exist that might also be pain points? What is the relative value for each of the pain points in your problem set and what are the intersections between them? The core art of product management is identifying which of the 99 problems in your set of a hundred pain points you’re not going to solve so you can focus on solving the right one in a compelling way. Letting go of biases is essential as one gathers data and defines selection criteria.
Political scientists often refer to ideologies and systems of interlocking beliefs as ‘lenses’. With training, it becomes possible to consciously set your own lens down and try on others. No one is truly objective, of course, but this practice is a core part of the Poli Sci discipline and has been invaluable to me over the years. So when I write a survey, I make a conscious effort to try on different lenses as I proofread it and examine it for embedded biases before sending it out. The result is better survey data to drive my product planning and iteration process.
Another set of tools common to both disciplines are dialogue groups and listening sessions. There is tremendous and obvious value to sitting down and listening to groups of clients, potential clients, internal stakeholders, and others during a product planning process. A good listening session can be much more than just an unstructured jam session though – it can be a critical tool to get out of one’s own head, try on different lenses, and understand the problem space from other people’s perspectives.
The first step, of course, is identifying the questions you want answered and then figuring out who is likely to have useful insight on those questions. At the risk of stating the obvious, a listening session with your customer service team is likely to yield a different perspective on the problem set than a meeting with sales because the two teams deal with customers at different points in the business relationship. Listening sessions with existing and potential customers will also yield different insights from each other because of selection bias – by definition your existing customers are the ones who believe you add value to their businesses and lives.
That doesn’t mean you should ignore your customers’ stated pain points, but it does mean being aware that their pain points are not synonymous with the pain points of your potential customers.
Aside from conscious bias checking, there’s an art to running good discussion groups, dialogue sessions, etc. The person who talks endlessly because they are confident they have all the answers will never, in fact, have all the answers. A good Political Scientist or PM will intentionally and explicitly create space for people who haven’t spoken up as much. Actively inviting their feedback and participation will improve both the quality of your data set and the culture of your organization.
Likewise, ensuring that your feedback groups, both internal and external, are appropriately diverse is critical. Election polling that is biased towards specific age groups, incomes, party affiliations, or demographics will produce similarly biased results that do not reflect reality – something we all saw clearly in the last presidential election, when polling skewed towards “likely voters” systematically over-represented party loyalists and incorrectly predicted the outcome of the election. Customer polling with similarly biased selection criteria will likewise produce flawed results. Tech startups have a well-documented diversity problem in terms of age, gender, race, ethnicity (yes, those are different things), and class. The result is often products that are less than optimal, as product leaders rely too much on their internal consensus and do not consciously correct for the holes in their data set.
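One concrete way to correct for a skewed sample is post-stratification weighting, a standard polling technique: reweight each response so that each demographic group counts in proportion to its share of your real customer base rather than its share of who happened to answer. Here is a minimal sketch of the idea – the age groups, population shares, and satisfaction scores are all hypothetical, and a real survey would weight on several dimensions at once:

```python
from collections import Counter

# Hypothetical survey responses: (age_group, satisfaction score 1-5).
# Young users are over-represented in the sample.
responses = [
    ("18-34", 4), ("18-34", 5), ("18-34", 4), ("18-34", 5),
    ("35-54", 3), ("35-54", 4),
    ("55+", 2), ("55+", 3),
]

# Hypothetical share of each age group in the actual customer base.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

n = len(responses)
sample_counts = Counter(group for group, _ in responses)

# Post-stratification weight per group = population share / sample share.
weights = {
    group: population_share[group] / (sample_counts[group] / n)
    for group in population_share
}

unweighted_mean = sum(score for _, score in responses) / n
weighted_mean = (
    sum(weights[group] * score for group, score in responses)
    / sum(weights[group] for group, _ in responses)
)

print(f"unweighted mean: {unweighted_mean:.2f}")  # 3.75
print(f"weighted mean:   {weighted_mean:.2f}")    # 3.50
```

In this toy example, the over-sampled (and happier) younger cohort inflates the raw average; reweighting to match the customer base pulls the score down, which is exactly the kind of hole in the data set that goes unnoticed without a conscious correction.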
I could probably expand this essay into a graduate seminar – and some day maybe I will. For now, I’ll leave you with the observation that whether one’s title is political scientist or product manager the goal is to understand what moves people and how to turn their pain and frustration into opportunities for positive and constructive change. In both cases, two of the most powerful tools in our toolbox are empathy and relentless work to overcome our own biases. And if we all got a bit better at those things our entire society would benefit.