To Avoid the Perils of Get-Rich-Quick, Work With People Who’ll Play the Game Again (and Again)

Why is it so irritating to be nickel and dimed?

There is of course a financial aspect (you’re trying to take more of my money) and one of expectations (I might have paid $15 if you’d quoted me that, but you told me it was $10 and now you’re asking for $15).

But perhaps more important is the message you’re sending: a few extra bucks on this transaction is more important to you than our relationship.

In an extreme case, that behavior is completely rational (even if dishonest). If you weren’t well off and had limited future opportunities, you’d probably opt to “nickel and dime” Bill Gates for a million bucks.

In most real-world cases, it’s a little more dubious.

Prisoner’s Dilemma, Over and Over

Everyone knows the Prisoner’s Dilemma, in which the rational strategy for each arrested party is to testify against the other. The result is that both wind up worse off than if they’d colluded and remained silent.

In this version, the players play the game only once, and they almost inevitably wind up screwing one another over.

However, the optimal strategy in other variations of the game gets a lot more complex. If rather than just playing once, players play an unknown number of repeated games, a “tit for tat” strategy can be a very good one.

In a tit for tat strategy, a player starts off playing nice. He plays nice in subsequent rounds if his opponent just played nice with him, but he plays mean if his opponent just played mean with him. If both players adopt this strategy, they’ll wind up colluding and they will consistently help one another out.
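To make this concrete, here’s a minimal sketch of an iterated game in which both players use tit for tat; the payoff numbers are the standard textbook ones, not anything from this post:

```python
# Minimal iterated prisoner's dilemma with two tit-for-tat players.
# Payoffs are the conventional textbook values.
PAYOFFS = {
    ("nice", "nice"): (3, 3),   # mutual cooperation
    ("nice", "mean"): (0, 5),   # sucker's payoff vs. temptation
    ("mean", "nice"): (5, 0),
    ("mean", "mean"): (1, 1),   # mutual defection
}

def tit_for_tat(opponent_history):
    """Play nice first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "nice"

def play(rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = tit_for_tat(history_b)
        move_b = tit_for_tat(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play())  # (30, 30): both players cooperate every round
```

With both sides playing tit for tat, every round is mutual cooperation: the repeated game reaches the collusion that the one-shot players never can.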

Playing the Business Game Many Times

Business is usually both more enjoyable and more successful when your co-players expect to play the game repeatedly.

I can try to extract as much value from you as possible for the thing I’m working on right now. That may mean taking your money, driving you to overwork yourself on my behalf, or getting you to do me favors. I’ll probably be better off tomorrow than I would have been, but you won’t trust me and I won’t be as well positioned the next time I play the game.

Or I can work with the expectation that we’ll play the game again: by doing a little more for you and asking a little less, you’ll treat me well in the future and we’ll both wind up better than we otherwise would have.

Oddly, those with the most games still ahead of them — new professionals just out of school — are generally the most likely to act as if they’re playing only once. When I first started working, I hadn’t yet had the experience of working with the same people at different companies.

Sure, I might have thought, I’m working with that guy now, but could I really have imagined that I would be part of his company or he of mine ten to twenty years down the road? Probably not: I didn’t fully internalize the importance of investing in relationships.

When, later in your career, you’ve had the experience of working with one person two or three times, you see the pattern. You realize that for any of your colleagues, this may not be the last time you work together.

LinkedIn is, at its core, an embodiment of the importance of ongoing professional relationships. Keith Rabois recently mentioned that an amazing five of LinkedIn’s first twenty-seven people (Keith, Lee Hower, Reid Hoffman, Josh Elman, and Matt Cohler) are partners at VC firms; in large part, that’s because each plays the game collaboratively, with the expectation that there will be many future iterations. Keith, Lee, Reid, Josh, and Matt are all in it for the long term.

It’s very easy to adopt a mentality that prioritizes whatever is in front of you. In business, that often manifests itself as a get-rich-quick scheme.

If your goal is success right now (screw the future), you can spend time with the nickel-and-dime, get-rich-quick types. If you’re interested in the longer term, I suggest working with people like Keith, Lee, Reid, Josh, and Matt.

How to Build a Quant Venture Fund

We’re Moneyball for carpet cleaning!

It’s easy to mock the mindless business metaphors of the day, and many of the “Moneyball for _____” ideas deserve that mockery.

But in the field of venture capital, there’s a huge opportunity for data — used correctly — to help investors make better decisions. Last month TechCrunch published a story suggesting that it’s already happening.

My somewhat informed guess is that most of what’s in the article is unsophisticated and/or vaporware. The “deep data mines” Leena Rao mentions are probably more like spreadsheets filled in by interns and/or Python scripts. Such spreadsheets — complete with stats from app stores, Alexa, and AngelList — would certainly be useful, but hardly qualify as analytically brilliant.

So what could an innovative VC do instead? Below, I’ll walk through one possible model, tailored mainly to an early-stage fund.

Data Collection

Probably the most important part of building a predictive model is collecting a rich set of features that are likely to carry signal.

In the context of a venture fund, you’d likely want to collect a bunch of data at the time of an investment. You’d then have to wait a while (probably a couple of years) to see how those data predicted winners and losers.

For the data to be predictive, you need (1) an underlying data structure that doesn’t change much and (2) a world where the same factors are likely to keep predicting success.

That’s relatively easy if you’re trying to figure out whether a restaurant’s second location will be successful: you can look at a bunch of common metrics like customers, revenue per customer, employees per customer, demographics of the old location and the new one, etc.

It’s a lot tougher in the startup world, where the rules of the game are constantly changing. A million users today is very different from a million users in 2003; most of today’s important platforms didn’t even exist ten years ago.

That raises the question: which “metrics” are most likely to be predictive in the world of 2018 or 2023? To create a model that survives for more than a year or two, one would have to look at variables that will still look similar in five or ten years.

Perhaps surprisingly, that means ignoring (or at least de-emphasizing) the 2013-specific variables tracked in those intern-and-Python spreadsheets.

Instead, the model would use scores generated by (hopefully insightful) VCs themselves. Those VCs — and possibly other reviewers — would score each startup on traits like the following:

– how charismatic is the best founder? (1-5)
– how business-smart is the best founder?
– how tech-smart is the best founder?
– how well do the founders seem to complement one another?
– do you think the best founder could be “world class” at something some day (even if not today)?
– what about the worst founder?
– how well do the founders understand the market?
– how scrappy and hard-working do the founders seem?
– how would you rate the potential size of the business (independent of founders)?
– how would you rate the company’s user traction?
– how would you rate the company’s revenue traction?

This is just a set of example questions.

If you got the same (say) 5-6 people to rate all of those companies, you’d have 50+ data points for each startup. It might be the case that averaged scores (e.g., the average founder charisma score) wind up being key. Or it might turn out that one VC’s “how scrappy” score is incredibly predictive while another’s isn’t at all.

Ideally, you’d conduct the reviews in a standardized way: reviewers would always talk to founders in person for a set amount of time, or always talk to them on Skype, and so on.

You’d also compile the answers to more straightforward questions: how many founders were there, how much revenue did they have, how many months had they been working together, had they started companies before, what colleges did they attend, etc. Some of those are probably predictive.

You’d have to do this for a bunch of companies (over 100, ideally a lot more), but not actually do anything with the data yet. That is, if a firm invests in a startup and VC #1 rates the founder’s charisma a 4, business smarts a 3, and so on, you just stick those scores in a database and let them sit.
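As a sketch of what “stick it in a database and let it sit” could look like, here’s one possible way to record a review. The schema, field names, and choice of SQLite are my own illustrative assumptions, not a prescribed design:

```python
import sqlite3

# Hypothetical schema for pre-investment reviews; columns mirror the
# example questions above, one row per (startup, reviewer) pair.
conn = sqlite3.connect("reviews.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS reviews (
        startup TEXT,
        reviewer TEXT,
        review_date TEXT,
        charisma INTEGER,        -- 1-5 scale, best founder
        business_smarts INTEGER,
        tech_smarts INTEGER,
        scrappiness INTEGER,
        market_understanding INTEGER,
        user_traction INTEGER,
        revenue_traction INTEGER
    )
""")

# Record VC #1's scores for one startup, then let the data sit.
conn.execute(
    "INSERT INTO reviews VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
    ("Acme Inc", "vc_1", "2013-06-01", 4, 3, 4, 5, 3, 2, 1),
)
conn.commit()
```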

The Model

When you build a predictive model, you use those inputs to predict something. Sometimes, what you’re trying to predict — who will win a basketball game, whether a customer will pay for your service — is fairly straightforward. In this case, it’s not.

The obvious value to be predicted is the long-term valuation of the company (or the fraction thereof that an investment would capture): the money it returns to investors when it folds, its acquisition price, or its valuation at IPO. This would, after accounting for details like liquidation preferences, dilution, and taxes, reflect the return for a potential investor.

It’s not necessarily clear, however, that this would yield the best predictive model. As any VC knows, returns are shaped in large part by one or two exceptional wins: many funds have one very successful investment that provides the majority of returns. If there are only a few such companies a year, even an all-knowing VC is unlikely to have the data scale to make a good predictive model.
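A quick simulation illustrates how outlier-driven this is; the Pareto tail parameter below is an arbitrary assumption chosen to produce venture-like skew, not anything calibrated to real fund data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiples returned by 100 hypothetical investments, drawn from a
# heavy-tailed Pareto distribution (an assumption, not real fund data).
multiples = rng.pareto(a=1.1, size=100)

top_two_share = np.sort(multiples)[-2:].sum() / multiples.sum()
print(f"Top 2 investments: {top_two_share:.0%} of total returns")
```

With a tail that heavy, the top one or two draws typically account for most of the total, which is exactly why a handful of big outcomes per year leaves so little to train on.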

Instead, an investor might look at a few different scores:

1) How good did we think this investment would be when they first pitched us?
2) How good did we think they’d be 2 months after we invested? In two months of working together, the VC should learn a lot about whether this was a sound investment. This is probably only a useful data point for actual investments, as opposed to rejected deals.
3) How good did we think they’d be 1 year after we invested?

etc., etc.

It may be the case that VCs (et al.) are pretty good at #2 but not so good at #1. To gauge that, they could build a model that predicts how they’ll feel a few months in, given the dozens of measurements they took pre-investment. A couple of months after investing, a VC understands a company far better, so this intermediate assessment, subjective as it is, makes a more reliable target than the ultimate outcome: what predicts how I’ll judge this opportunity once I really understand it?
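Under that framing, the model is just a regression from pre-investment scores to the later assessment. A minimal sketch using scikit-learn follows; the feature columns and every number in it are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative rows: past investments. Columns: pre-investment reviewer
# scores (say, charisma, business smarts, scrappiness), all invented.
pre_scores = np.array([
    [4, 3, 5],
    [2, 4, 3],
    [5, 5, 4],
    [3, 2, 2],
])
# Target: the partner's overall assessment two months after investing (1-10).
month_two = np.array([7, 5, 9, 3])

model = Ridge(alpha=1.0).fit(pre_scores, month_two)
print(model.predict([[4, 4, 4]]))  # forecast the month-two view of a new pitch
```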

Because returns are driven by a few outliers, asserting an exact expected long-term valuation for an early-stage company is difficult. If a promising young company has a 0.1% chance of someday reaching a $100 billion valuation, it’s ostensibly worth at least $100 million (0.001 × $100 billion). If those odds are instead 0.001%, it may be worth as little as $1 million. And a venture firm has virtually no chance of gathering enough data to distinguish 0.1% odds from 0.001% odds.

However, a venture firm should have enough data to distinguish between broader tiers of companies: the 98th percentile or higher, the 95th to 98th, the 90th to 95th, the 75th to 90th, and everyone else. Some of the companies in the top tier — defined by competence and potential more than ultimate outcome — will fail; others will be incredibly successful and go public.

The goal of this model would be to predict the tier, rather than the ultimate outcome.
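A sketch of that tiered approach: bucket historical outcomes into the percentile tiers above and train a multiclass classifier on the pre-investment scores. Everything here beyond the tier cutoffs (the features, the model choice, the data itself) is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def outcome_tier(percentile):
    """Map an outcome percentile to the tiers described above."""
    if percentile >= 98: return "98+"
    if percentile >= 95: return "95-98"
    if percentile >= 90: return "90-95"
    if percentile >= 75: return "75-90"
    return "below 75"

rng = np.random.default_rng(0)

# Illustrative stand-in data: 200 past companies, 11 reviewer scores each
# (1-5 scale), and the outcome percentile each eventually reached. All
# random here, so the model learns nothing real; it only shows the mechanics.
scores = rng.integers(1, 6, size=(200, 11))
tiers = [outcome_tier(p) for p in rng.uniform(0, 100, size=200)]

clf = RandomForestClassifier(n_estimators=100).fit(scores, tiers)
print(clf.predict_proba(scores[:1]))  # a probability per tier, not a valuation
```

The output is a probability for each tier rather than a point estimate of valuation, which matches the goal: predict the tier, not the outcome.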

Implementation

Step one of this process is about recording data, not about changing decision-making. Step two would be to take months or years of accumulated data and build a model.

Only after months or years, when the firm has actual outcome data and a predictive model, would it actually use the model to help with decisions.

An early-stage fund that makes dozens of investments a year is best positioned to execute this strategy. There are two reasons for this:

1) The metrics defined are more about people and potential, and less about business fundamentals; except in rare cases it would be irresponsible for a late-stage fund to give a company a $100 million valuation based only on those scores.
2) Making dozens of investments a year generates a lot more data to learn from.

All of the “human” factors that make an angel fund successful today — dealflow, partners’ skill at evaluating founders, helpfulness to entrepreneurs — would be equally important in this sort of fund. However, a formalized predictive decision-making process could improve returns significantly.

See also: Data Scale: Why Big Data Trumps Small Data