Interviewing: Like Networking, But Better

Some entrepreneurs spend half their time at conferences meeting people, at networking dinners schmoozing, and on Twitter chatting up a storm. Sometimes that’s a good use of their time; sometimes it’s kind of a waste.

I’m an introvert, and I think I’d go crazy if that was my life.

Nevertheless, I met or talked with a huge number of people on my Circle of Moms journey. And one particular channel stands out in my mind: people I interviewed for jobs.

I interviewed many hundreds of people over the 4.5 years I spent on Circle of Moms. We offered jobs to 50 or 60 of them, and about half of those people wound up joining our team. Sometimes, our interviews were quick phone conversations with me; other times a candidate came into our office several times.

Of course, I truly got to know the people who came in multiple times. We talked about our goals as a company, their professional aspirations, and how to write code to find anagrams. It went beyond being a purely transactional relationship. That made it tough for us to decide to offer the job to another candidate, or — if we did make an offer — for them to choose to work somewhere else.
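(An aside for the curious: the anagram exercise is a classic. Here’s a minimal sketch of one common answer, in Python, that groups words by their sorted letters; the function name and sample words are mine, not anything from an actual Circle of Moms interview.)

```python
from collections import defaultdict

def group_anagrams(words):
    """Group words that are anagrams of one another.

    Two words are anagrams exactly when their sorted letters match,
    so the sorted letters make a natural dictionary key.
    """
    groups = defaultdict(list)
    for word in words:
        key = "".join(sorted(word.lower()))
        groups[key].append(word)
    # Keep only groups that actually contain multiple words.
    return [group for group in groups.values() if len(group) > 1]

print(group_anagrams(["listen", "silent", "enlist", "google", "banana"]))
# [['listen', 'silent', 'enlist']]
```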

When something doesn’t end in a signed employment agreement, it’s easy to vilify the other side: he’s a jerk for not making me an offer, she’s a jerk for choosing to work at Zynga instead of Circle of Moms. Here’s my advice: don’t do that. If you aren’t going to hire this person for your company now, things might change and they might join you later — but that’s unlikely. What’s more likely is that you’ll cross paths down the road somewhere else — ideally by design — in our small world (Silicon Valley, in my case).

Since I sold Circle of Moms in February, no fewer than ten people I interviewed but didn’t hire have reached out to me (for a variety of reasons). At Circle of Moms, we turned some of those ten down; some of them turned us down. Each is a talented person I was lucky to cross paths with and could imagine working with in the future. For an introvert, at least, the interview process was a great form of thoughtful, deep networking without the awkward cocktail party practice of looking around for someone you might know.

A few months ago, I signed on as an adviser to Makeupbee, which was started by Brent Francia. Brent and I met when he interviewed to be a software engineer at Circle of Moms; he holds the special distinction of turning us down twice! The second time he said no, he did a great job explaining why he decided to work with Zynga rather than Circle of Moms; it was a fantastic example of being clear about goals and motivations and saying “no” gently. We stayed in touch after that, occasionally sharing advice on product and technology. Early this year, he reached out to get some advice about Makeupbee and building a business for a largely female audience. A short time later, he asked me to come on board in a formal advisory role.

So why should you establish a good rapport with the people you’re interviewing? Because you want them to come work for you, of course. Because it’s the right thing to do, certainly. But — especially if you’re an introverted engineer who doesn’t get out a lot — remember the long-run reason: this could be a superstar you want on your side some day.

The Slow Road to Entrepreneurship: Learnings From Early PayPal

We all know the stories: Zuckerberg, Jobs, and Gates dropped out of school and founded three of the companies that define our world. They went from college students to entrepreneurs, no transition required. But that’s the exception; to generalize Dave McClure’s instant classic, those of us in Silicon Valley’s 99.9% have to content ourselves with being relatively late bloomers.

That exception feeds a larger narrative that dominates much of the talk around entrepreneurship: that one is either 0% entrepreneur or 100% entrepreneur. Such a binary classification implies that entrepreneurs have to go from 0 to 100 in one step. While there’s some truth in that — you won’t completely understand the thrills, stresses, and demands of starting a company unless you do it yourself — you can certainly prepare yourself for future entrepreneurship by being a part of early-stage companies with people who can help you learn quickly.

My own story certainly reflects that: I wasn’t ready for entrepreneurship at age 21 or 22. By my late 20s, I was aching to start and grow a company. Here’s how that change happened.

My first substantial job experience after college (following eight months as an engineer at a failed startup) was at early-ish PayPal. I was part of the company’s growth from small in 2000 (a very unprofitable hundred-person startup) to huge in 2004 (a massive subsidiary of massive Ebay).

As a new employee, I knew nothing about fraud detection and received minimal guidance. Somewhat lost, I added very little value to the company in my first few months. Three months after I joined PayPal, Sports Illustrated wrote an article about my side project, Team Rankings. I sensed some irritation from my boss Max: I was doing only so-so work at PayPal, but was getting significant publicity for a side project. That irritation seemed unfair to me at the time; having now been a founder myself, I can completely relate. Founders want people on their team executing at the highest level; seeing someone perform better on a side project than in the office doesn’t send that signal.

Fortunately, I soon found my way, and (with some help) figured out how to turn massive amounts of data into statistical models that could accurately predict fraud. I felt like I was living on the edge because I eschewed the “easy” off-the-shelf enterprise tools others were using. Some 23-year-olds rebel by using mind-altering drugs or traveling the world; for me, rebellion was writing software from scratch to build statistical models. And this defined me: any job other than predictive modeling struck me as superficial, scientifically empty, and not worth doing.
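To give a flavor of what building models from scratch meant, here is a minimal illustrative sketch of the core idea: logistic regression fit by gradient descent to score transactions for fraud. This is not PayPal’s actual system; the features and data here are invented.

```python
import numpy as np

# Invented features per transaction, e.g.
# [amount z-score, account age in days, failed logins in last day].
X = np.array([[2.1,   3.0, 5.0],
              [0.1, 400.0, 0.0],
              [1.8,  10.0, 3.0],
              [0.2, 900.0, 1.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])  # 1 = confirmed fraud

# Standardize features so one learning rate suits all of them.
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted fraud probability
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log loss
    b -= lr * (p - y).mean()

print(np.round(p, 3))  # should approach [1, 0, 1, 0]
```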

My job at PayPal focused and insulated me. In 3.5 years at the company, I did one thing: build technology to predict fraud. Most of what was going on at the company — operations, usage, product, competition, finance — was of no concern to me whatsoever. I became very skilled in a few very specific areas, but knew very little about the goings-on across PayPal.

If I had a publicist, I’m sure he or she would tell me to broadcast that I magically learned how to build a business while I was at PayPal: surely that magic would cement my place as part of an all-knowing PayPal Mafia. Alas, I don’t have a publicist, so the truth will have to do: PayPal taught me very little about how to build a successful startup.

But the PayPal experience was formative in two key ways. First, it showed me that talented, driven, resourceful people with virtually no knowledge of an industry could become skilled in areas they’d never known about before (for me: fraud detection) and collectively build a large Internet business and change the world. Second, my colleagues at PayPal set a bar for the caliber of thought and effort that I now expect from those I work with.

After several great and educational years, by early 2004 my job at PayPal had become routine. Fifteen months after Ebay acquired us, the company’s combative, execution-focused culture had been swallowed by Ebay’s relentless drive to maximize employee time spent in PowerPoint meetings. I didn’t have deep insight into the business of PayPal and Ebay, but I knew I didn’t want to play the big-company game. Yet for financial reasons, I was motivated to stick around to vest my remaining stock options.

Researching my choices, I discovered that Ebay had a policy that allowed employees to work just 24 hours a week while continuing to fully vest their stock. This held a lot of appeal — especially since I was interested in spending some time helping out some friends who were working on a new site called LinkedIn.

Ebay’s lenient vesting policy was likely designed for new mothers or those with health issues — not 26-year-old males interested in moonlighting at a startup. That didn’t dissuade me: I soon shifted my schedule to one where I worked 3 days a week at PayPal and spent the rest of my time at LinkedIn. And the LinkedIn days were a lot of fun, as I got to work with a small team, focus on a completely different set of data problems, and understand how a social network could rapidly grow.

At this point, in the first half of 2004, my boss Nathan at PayPal was trying (and struggling) to find cool stuff for us to work on, so we spent some time with other groups at Ebay looking for interesting data problems to solve. But I soon realized that once I started to look down on my employer, it would be difficult for me to do top-quality work. I was still building good fraud models, but I was no longer psyched about my job at PayPal, and the caliber of my work certainly suffered. I admire those who can be completely professional and work at full intensity for anyone at any time, but I’m not like that. When I’m excited about a project and a company, I’m hard-working, clever, and efficient. When I’m coasting, I’m none of those things.

Trying to foster more commitment, Nathan came to me in July 2004 and told me that I had to choose between working full-time at PayPal and leaving. I’m guessing he thought this would push me to increase my commitment, but it had the opposite effect: I was bored, disgruntled, and antsy, and I was going to leave. I left PayPal in August and darted off to France for a month of cycling.

At this point, I had aspirations of starting a company some day. I’d had some entrepreneurial experience with Team Rankings (more on that later) and had built up some skills at PayPal, where I’d also worked on side projects that could have turned into companies of their own (none amounted to much). But I didn’t have in mind a specific company that needed to be started, nor was I compelled to start a company just to start something. And LinkedIn seemed like an attractive place to be: a strong 15-20 person team, an innovative and useful product, a really interesting data set. So I decided I’d join LinkedIn full-time.

I’d spend two and a half years at LinkedIn. I was the first analytics scientist and would lead what’s now called the Data Science team. Unlike my insulated time at PayPal, my years at LinkedIn would get me very close to the business and the product, piquing my interest in a much wider array of topics. Ultimately, that experience would nudge me to jump off the entrepreneurship cliff and start my own company. In my next post (follow me on Twitter), I’ll tell the story of those two and a half years and of the key experiences that led me to start something myself.

How to Improve Student Testing with Crowdsourcing and Data

In a recent post, I wrote about why A/B testing versus holistic UX design is a false choice.

One choice measures, then optimizes against an overly simplistic set of variables. The other generally eschews measurement, opting instead to blindly trust the holistic vision of a (no doubt brilliant) product designer. It’s a false choice because you can have close to the best of both worlds: use principles of holistic design, then measure the crap out of that design.

The current debate about school assessment has a very similar structure.

Teachers unions and education schools lead one side of the debate. Their rallying cry is essentially “we are the experts at teaching students, so there’s no need to waste time with silly standardized tests.” This mindset is justifiable for the very best teachers, who (like great product designers) have good instincts and teach well; it’s completely ludicrous for the worst teachers, who need either data that can help them improve or a new profession. Moreover, it results in a ridiculous system where pay (and layoffs) are based on seniority rather than performance. Think about the three best teachers you had and the three worst. If the three best were also the three most senior, and the three worst were the most junior, I’ll shut up. But I’m guessing that’s not the case, which means that — because of the anti-testing, anti-accountability mentality — your best teachers weren’t getting what they deserved.

On the other side are advocates of accountability and metrics, who want to rid the world of unsuccessful teachers and schools. Without measuring student performance, they argue, it’s impossible to know which students, teachers, and schools are performing well and which are not. To my mind, this is all good. But there’s one hitch: the tool of choice for accountability advocates is crappy standardized testing. In their current form, standardized tests are time-consuming for teachers and students, don’t measure what most teachers want to measure, and take an oversimplified form (usually multiple choice). Moreover, they usually matter immensely for schools but don’t affect individual students’ outcomes at all.

As with the parallel testing-vs-design question, there’s a third way that takes advantage of the best of both worlds.

Virtually all teachers use tests and essays to assess their students, and it’s nearly always the case that all students in a class get the same questions or assignments. So there’s little debate about whether testing that’s standardized at the class level is a good tool for gauging student progress: it’s already how students are widely graded.

That raises a question: why can’t one use the same system both for evaluating students’ performance in a class and for evaluating their overall progress relative to the state or country?

If it’s 1950, the answer is that there’s no fast way to transmit test questions and answers between teachers in different schools. So if there’s only one US History teacher in a high school, and you want to coordinate evaluation across schools, you have to mail questions and answers between schools, and everything is slow and messy.

Good news! It’s not 1950, it’s 2012. So data transfer isn’t an issue.

But how do you give teachers the right to choose how their students will be evaluated, while also tracking at the district, state, or federal level?

It’s a little bit nuanced, so here goes:

1) Group teachers based on the subject they’re teaching and the way they want to evaluate students (e.g., long-form answers, short text answers, multiple choice, all of the above).

2) For any given subject — very narrowly defined (adding fractions with small denominators; Kennedy-era Civil Rights legislation attempts; basic properties of neutrons; To Kill a Mockingbird first three chapters) — allow teachers to add questions they use for their tests.

3) For new questions, allow upvotes and downvotes from other teachers teaching the same subject, to allow the system to tell the difference between good questions and bad ones.

4) When it comes time to give students a test, the teacher picks the subject(s) on which s/he will test students. Immediately before the test is given, the teacher is shown a pool of questions that may appear on the test, with the opportunity to filter out questions that are a poor fit for the class. Filtering should be based not on difficulty (which a mature system should handle satisfactorily) but on the subject matter covered.

5) The actual test questions will be taken from the pool the teacher saw and given to students, ideally on a computer (though that’s not a dealbreaker). This is the straightforward part!

6) The students’ responses will be evaluated by other teachers in the same group (from #1), not by the students’ own teacher (though a few responses might also be graded by the students’ teacher, for calibration and sanity-checking). The grading system could be similar to existing ones — possibly grading students on a curve relative to others in the class — or it could be relative to other students state- or nationwide.

7) This then allows anyone — the teacher, the school, the district, the state — to gauge student performance relative to certain norms. There are a lot of checks to put in place to make sure teachers are evaluating students comparably — if one teacher gives a set of 20 answers an average of a C, and another teacher gives the same set an average of a B+, either the system or the teacher will need to adjust (one possible adjustment is sketched just below) — but once that’s in place, it’s relatively straightforward to combine question difficulty and teacher scoring.
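The comparability checks are left abstract above, so here is a minimal sketch of one way a system-side adjustment could work. The grader names, the 4-point scale, and the mean-shift approach are my assumptions, not part of the proposal: every grader scores a shared calibration set, and the system then shifts each grader’s scores by that grader’s offset from the pooled mean.

```python
from statistics import mean

# Hypothetical data: each grader scored the SAME calibration set of
# answers on a 4-point scale (4.0 = A). Names and numbers are invented.
calibration_scores = {
    "grader_lenient": [3.3, 3.5, 3.1, 3.4],  # averages around a B+
    "grader_strict":  [2.0, 2.2, 1.9, 2.1],  # averages around a C
}

pooled_mean = mean(s for scores in calibration_scores.values() for s in scores)

# Each grader's offset from the pooled mean on the shared answers.
offsets = {grader: mean(scores) - pooled_mean
           for grader, scores in calibration_scores.items()}

def adjust(grader, raw_score):
    """Shift a grader's raw score toward the pooled standard."""
    return raw_score - offsets[grader]

print(adjust("grader_strict", 2.0), adjust("grader_lenient", 3.3))
# Both land near the pooled mean: a strict C and a lenient B+ agree.
```

A real system would also correct for spread (a grader who clusters everything near the middle) and for question difficulty, but the mean shift alone resolves the C-versus-B+ example above.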

Such a system would have a number of benefits. It would crowdsource questions, which in the long run would both save teachers time (less time creating tests) and lead to better tests.

It would provide better, faster feedback to teachers and administrators, telling them almost immediately how well their students are performing relative to norms, and whether there are gaps in knowledge and understanding.

And it would eliminate the painful process of standardized testing that exists today, while providing a quantitative framework that teachers could easily buy into.

No, you don’t need a real-time data dashboard*

When Circle of Friends started to grow quickly in 2007, it was really tough for Ephraim and me to stay focused.

Many times over the course of the day, Ephraim would turn and ask me how many signups we’d had in the last ten minutes. That might have been annoying, but for the fact that I was just as curious: I’d typically just run the query myself and had an immediate answer.

Rapid viral growth can be unbelievably addictive for the people who are working to propagate it. You’ve tweaked a key part of your flow and you want to see what kind of impact it’s having — right now. You’ve added more users in the last hour than you’ve ever added in an hour before, and you wonder if the next hour will be even better.

That addictiveness can be a great asset to growth hackers; I’d argue that anyone who doesn’t have that sort of jittery restlessness probably wouldn’t be the right fit in a growth hacking role. Restlessness is a huge motivator: I want to grow the user base, so I’m going to implement this feature and push it out as quickly as possible just so I can see what impact it will have. And if this feature doesn’t work, I’m going to try and get something else out before I leave the office so I can see if I’ve uncovered something else before it’s time to go to bed.

One day, I came up with a feature idea as I was walking to the train station in the morning. I coded it up and pushed it out during the 35-minute train ride. By the end of the ten-minute walk from the station to my office, I could see that my feature was increasing invitations by around 20%.

I loved telling that story to potential engineering hires.

Here’s the thing, though: if everyone in your company behaves like that, you may acquire a huge user base, but you’ll likely never build anything of long-term value. You’ll wind up optimizing purely for short-term performance, never moving toward a strong vision.

Back during that Circle of Friends growth period, I decided to automate an hourly stats email to Ephraim and myself. It satisfied our curiosity about how things were growing right now, and it stopped me from running SQL queries every five minutes. At least in theory, that meant we were focused on real work for 58 minutes out of every hour. In retrospect, it seems ridiculous that we needed stats updates every sixty minutes, but that actually was an improvement.
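That hourly email was nothing fancy: a scheduled query plus a message. Here is a minimal sketch of the idea, assuming a signups table with a created_at column, a mail server on localhost, and an external hourly scheduler such as cron. All names are hypothetical, not the actual Circle of Friends job.

```python
import sqlite3
import smtplib
from email.message import EmailMessage

def send_hourly_stats():
    # Count signups in the last hour; table and column names are hypothetical.
    conn = sqlite3.connect("app.db")
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM signups "
        "WHERE created_at >= datetime('now', '-1 hour')"
    ).fetchone()
    conn.close()

    msg = EmailMessage()
    msg["Subject"] = f"Signups in the last hour: {count}"
    msg["From"] = "stats@example.com"
    msg["To"] = "founders@example.com"
    msg.set_content(f"{count} new signups in the last hour.")

    # Assumes a mail server on localhost; run this script from an
    # hourly cron entry (e.g., "0 * * * *") rather than a loop.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    send_hourly_stats()
```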

My own distracted experience is why I worry about analytics companies now promoting real-time dashboards as an awesome new feature.

It’s technically impressive that they’ve implemented real-time functionality. And at first glance, it’s very cool that I as a user can log in mid-day and see how stats are trending.

But the key distinction — and about 60% of the analytics questions I’ve seen people ask over the years fall on the wrong side of it — is whether you’re looking at stats now because you’re curious and impatient, or because those stats will actually drive business decisions.

I’m afraid that in most cases, real-time stats are being used by people who aren’t iterating as quickly as growth hackers. The “need” for stats is driven more by curiosity and impatience than by decision-making.

Execs who are making big-picture decisions are probably better served by looking at data less frequently. Growth hackers and IT ops types can and should attack problems restlessly — a big part of their job is optimizing everything for the immediate future. But executives are best served by waiting (perhaps until the end of the week), so they can take a long, deep look at the data and think more strategically.

* unless you’re a growth hacker or something similar