Category Archives: Talent

The Bespoke Talent Equation

Premise: Christensen explains that the rate of technological improvement outpaces the demands customers place on the technology, leading to a recurring phenomenon in which something that seems like a toy in its inadequacy soon becomes more than adequate to meet customers' complex needs.

Hypothesis: In the talent professions, if you can sustain yourself by finding those pockets of customer demand in which the technology is already adequate to the need, we will be surprised some day at how much the technology can supplant beyond just those pockets.

The New Talent Verification

The new model of reputation moves away from flawed institutional intermediation (the brand name of a firm, for example) toward crowd-sourced reputation and identity verification tied to the sharing and search of knowledge.  Fred Wilson today:

The power of the GitHub model is not just a repository of work and version control in the cloud. It’s the public nature of much of that work. And the reputation and identity effects for those who publish some or all of their work publicly.

Tools like StackOverflow (a USV portfolio company) and GitHub allow programmers to see how other programmers have solved similar problems. I was at a hackathon up at Columbia University last weekend and one of the hacks was a development environment that automatically queried StackOverflow and GitHub as you are writing code so that you always have in front of you the answers to the questions you are most likely to ask. The developer who did the hack introduced it by saying something like “programming these days is more about searching than anything else”. That reflects how collaborative the sharing of knowledge has become in the world of software development as a result of these cloud based tools for developers.
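To make that idea concrete, here is a toy sketch of such a “search as you code” helper in Python, using the public Stack Exchange and GitHub search APIs. It is not the hackathon project itself; the query text is just an example, and a real editor plug-in would pull the query from the code you are currently writing.

```python
# Toy "search as you code" helper: given a snippet of what you are working on,
# pull up likely-relevant Stack Overflow questions and GitHub repositories.
# Illustrative only; the hack described above presumably did much more.
import requests

def stackoverflow_hits(query, limit=5):
    # Stack Exchange public search API
    resp = requests.get(
        "https://api.stackexchange.com/2.3/search/advanced",
        params={"q": query, "site": "stackoverflow",
                "order": "desc", "sort": "relevance", "pagesize": limit},
    )
    resp.raise_for_status()
    return [(item["title"], item["link"]) for item in resp.json()["items"]]

def github_hits(query, limit=5):
    # GitHub repository search (unauthenticated, rate-limited)
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "per_page": limit},
    )
    resp.raise_for_status()
    return [(item["full_name"], item["html_url"]) for item in resp.json()["items"]]

if __name__ == "__main__":
    context = "python requests retry with exponential backoff"  # stand-in for the code being written
    for title, url in stackoverflow_hits(context) + github_hits(context):
        print(f"{title}\n  {url}")
```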

Talent Stories: What Do You Get For $1000/Hour?

The talent matching problem by example.

A top firm recently added an antitrust partner, stating to the world, and I am paraphrasing here, that this person:

specializes in antitrust issues.

This same new partner advocated before the DOJ Antitrust Division on behalf of a client.  The DOJ staff attorney he was negotiating with shared (and I am quoting here) that this new partner asked:

the most basic and clueless questions that I began wondering whether he was purposely playing dumb to get information out of me.

Not an act.

This illustrates a fundamental information asymmetry.

The evaluation above by an expert is opaque to a client.  At one time, perhaps, the client could count on the top firm’s imprimatur of a person as partner.  Now, it cannot.

So what then?

A Real-Time and Distributed Talent Ledger

Recently, a law firm that I am familiar with was named Practice Group of the Year in a certain area.

As a practical matter, these awards play a meaningful role in the market because, for lack of better information and given the high search costs of finding it, clients take them seriously as forward-looking indicators of, and shorthand for, talent.  Tens of millions of dollars in legal fees ride on such awards.

The award struck me as ill-fitting, knowing that the group, beyond a couple of notable exceptions, has had a lot of turmoil recently: many of its best members are no longer there, the new members are unproven, and some who remain have checked out.

Others noticed too.  In response to complaints that the award was not deserved (in some cases certain achievements were misrepresented; in other cases the credit belonged to key folks who had since left the practice), here is the publication’s reasoning/defense, paraphrased:

The general feeling was that senior partners taking too much credit for matters is way too common a thing to make a fuss about.

I understand the main criticism is that *** losing these people makes them much weaker practice-wise going forward, but these awards are backward-looking over two years and it was pointed out to me that * and * left a month after the cut-off for the consideration period.

This reply touches on many of the reasons that current measures of talent are so often misleading:

  • certain folks taking too much credit and other folks getting little credit;
  • “backward looking” rather than “forward looking.”

In other words, the award conveys an undeserved competence.

Abstracting up, the issue is that the talent measure is neither current nor accurate, and it relies on a centralized source that can be gamed.

With that in mind, and with the idea of Bitcoin’s network-wide distributed ledger in view: is there a way to capture at least some of this information (recognizing that much of it is subjective) that bypasses a centralized intermediary which can be gamed and go stale?
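To make the question concrete, here is a minimal sketch of what an append-only, hash-chained record of talent attestations might look like. All names and fields are hypothetical; a real system would also need signatures, replication across many nodes, and some way to weigh subjective claims.

```python
# Minimal hash-chained attestation ledger: each record commits to the one
# before it, so a single intermediary cannot quietly rewrite history.
# Purely illustrative; every field below is hypothetical.
import hashlib
import json
import time

def make_entry(prev_hash, subject, attester, claim):
    entry = {
        "prev_hash": prev_hash,      # link to the previous record
        "timestamp": time.time(),    # keeps the measure current, not stale
        "subject": subject,          # whose talent is being attested
        "attester": attester,        # who is vouching (would be a signed key in practice)
        "claim": claim,              # e.g. "led the antitrust matter X"
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify_chain(chain):
    # Recompute each hash and check the links; any retroactive edit breaks the chain.
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in prev.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != prev["hash"] or cur["prev_hash"] != prev["hash"]:
            return False
    return True

genesis = make_entry("0" * 64, "partner-A", "client-1", "won summary judgment")
second = make_entry(genesis["hash"], "partner-A", "associate-2", "drafted the winning brief")
print(verify_chain([genesis, second]))  # True
```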


The Fire-In-The-Belly Cycle

Growing up as a sports fan, you learned that players enter the league playing for the love of the game, and that some, after a few flush years, stop caring as much, grown entitled and distracted by their lifestyles and their paychecks.

Growing up, you realize this works in many professions.  You encounter those partners, a decade removed from the last time they gave good advice, much too worried about the size of their paycheck and the mortgage for their upstate log mansion and Jackson Hole ranch to care about their clients or show up on calls.  As one hedge fund GC recently told me, he never hires the senior partner, because he knows that person does not care, but the hungry junior partner might.

Here is how this works:

One chief investment officer at a $5 billion institution breaks down the typical hedge fund life cycle into four evolutionary stages. During the early period, when a fund is starting out, its managers are hungry, motivated, and often humble enough to know what they don’t know. This tends to be the best time to put money in, but also the hardest, as the funds tend to be very small. Stage two occurs once the fund has achieved some success, when those making the decisions have gained some confidence but they aren’t yet so well-known that the fund is too big or impossible to get into.

Then comes stage three—the sort of plateau before the fall—when the fund gets “hot” and suddenly has to beat back investors, who tend to be drawn to flashy success stories like lightning bugs to an electric fence. Stage four occurs when the fund manager’s name is spotted as a bidder for baseball teams or buyer of zillion-dollar Hamptons mansions. Most funds stop generating the returns they once did by this stage, as the manager becomes overconfident in his abilities and the fund too large to make anything that could be described as a nimble investing move.

You maximize your chance at success by understanding where folks fall in that cycle.

Hiring Humility

It’s refreshing to see Google be so upfront about its past mistakes in hiring philosophy:

On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.

One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything.

What’s interesting is the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.

After two or three years, your ability to perform at Google is completely unrelated to how you performed when you were in school, because the skills you required in college are very different. You’re also fundamentally a different person. You learn and grow, you think about things differently.

Another reason is that I think academic environments are artificial environments. People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment. One of my own frustrations when I was in college and grad school is that you knew the professor was looking for a specific answer. You could figure that out, but it’s much more interesting to solve problems where there isn’t an obvious answer. You want people who like figuring out stuff where there is no obvious answer.
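To see the shape of the analysis Bock describes, here is a toy cohort split on fabricated data: correlate GPA with performance separately for recent graduates and for everyone else. The numbers are invented; only the method is the point.

```python
# Toy version of the cohort analysis described above: GPA vs. performance,
# split by years since graduation. Data are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
years_out = rng.integers(0, 15, n)          # years since graduation
gpa = rng.normal(3.3, 0.4, n)
recent = years_out < 3

# Assume GPA carries a little signal only for brand-new grads; magnitudes are arbitrary.
performance = np.where(recent, gpa - gpa.mean(), 0.0) + rng.normal(0, 1.0, n)

print("recent grads: r =", round(np.corrcoef(gpa[recent], performance[recent])[0, 1], 2))
print("veterans:     r =", round(np.corrcoef(gpa[~recent], performance[~recent])[0, 1], 2))
```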

Career Shorts

This has been sitting in my drafts folder, forgotten, for months.  Appropriately, I found it today and am posting it.

This is the economist Tyler Cowen on how career choice is distorted by short-term signaling concerns.

“I think about this a lot: you’re young, you come from a smart, wealthy family, you’re somehow supposed to show that you’re successful quite quickly. Banking, law, consultancy allow you to do this; engineering, science and entrepreneurship less so. Your friends expect it, your parents, your potential mates do … So we see so many talented people very quickly having to signal how smart they are but that may not be the longest-term social productivity.”

Build, Buy, or Prize

I have discussed innovation prizes and Kaggle before as sources of algorithmic and other innovation.

But what about a vivid example in its full glory?  How about offering a prize and succeeding at gaining publicity, getting multiple self-selected teams working on a problem and coming up with a creative solution, and becoming, in the process, known as a place to do cool engineering, making it easier to attract talent?

This is what Netflix did, as described in a recent Businessweek profile on Reed Hastings and Netflix.

His geeky side became fully apparent in December 2005. He was convinced that the star rating system provided all the information Netflix needed to predict accurately what people want to watch. Others at the company argued that more indicators—whether people started playing something and then stopped, or searched for a particular actor, etc.—were needed as well. Hastings spent two weeks over his Christmas vacation pounding away on an Excel spreadsheet with millions of customer ratings to build an algorithm that could beat the prediction system designed by his engineers.

He failed. Still, the attempt sparked the creation of the Netflix Prize, a $1 million bounty to the person or group that could improve its ratings-based algorithm the most. It was the rare meaningful publicity stunt: The winning team, a collection of independent engineers from around the world, built Netflix a better prediction engine. And a company that was famous for red DVD mailers and outmaneuvering Blockbuster started gaining attention as a place for creativity.

Netflix can now hire just about any engineer it wants. That’s a function of the computer science the company does and its reputation as the highest payer in Silicon Valley. Managers routinely survey salary trends in Silicon Valley and pay their employees 10 percent to 20 percent more than the going rate for a given skill. Fired employees also get ultragenerous severance packages; the idea is to remove guilt as an obstacle to management parting ways with subpar performers.
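The prize asked entrants to improve a ratings-based predictor, and the winning entries famously leaned on matrix factorization, among other techniques. Here is a minimal sketch of that idea on toy data; it is an illustration of the approach, not Netflix’s actual system.

```python
# Minimal matrix-factorization rating predictor in the spirit of the Netflix
# Prize entries: learn a small vector per user and per movie so that their dot
# product approximates the observed star rating. Toy data and plain SGD only.
import numpy as np

ratings = [  # (user, movie, stars) -- fabricated
    (0, 0, 5.0), (0, 1, 3.0),
    (1, 0, 4.0), (1, 2, 1.0),
    (2, 1, 2.0), (2, 2, 5.0),
]
n_users, n_movies, k = 3, 3, 2
rng = np.random.default_rng(0)
U = rng.normal(0, 0.1, (n_users, k))    # user factors
M = rng.normal(0, 0.1, (n_movies, k))   # movie factors

lr, reg = 0.05, 0.02
for _ in range(500):
    for u, m, r in ratings:
        err = r - U[u] @ M[m]
        U[u] += lr * (err * M[m] - reg * U[u])
        M[m] += lr * (err * U[u] - reg * M[m])

# Predict a rating the toy "user 0" never gave: movie 2.
print(round(float(U[0] @ M[2]), 2))
```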

The Talent Combine

Forbes reports on the accelerating trend of using programming contests to smoke out the best talent, even when that talent is not looking for a job:

A few years ago such out-of-the-way stars were invisible to U.S. recruiters. Today it’s much easier to spot them. Thanks to a flurry of online programming contests that attract entrants worldwide, it’s possible to identify coders who do Caltech-quality work, even if they live halfway around the world and earned their degrees at Ural State University…

InterviewStreet was the ticket out of Siberia for Yakunin, the programmer from Ekaterinburg. He wowed the hiring engineers at Quora, a knowledge-sharing website in Mountain View, Calif., by being the only person out of more than 700 respondents to win a perfect score on a CodeSprint challenge it sponsored. Often the best coders aren’t eager to apply for a job. They just want to prove their mettle against all comers. Mindful of this dynamic, InterviewStreet moved the bulk of its contests to a website called HackerRank, where most entrants log in with pseudonymous user names. Job hunters authorize the site to reveal their real names to potential employers.

Blunt Tools

While data is a game-changer, it’s also important to remember that half-assed data tools are not better than no data tools.  Good examples are the horrible software algorithms used to sort through resumes over the last decade:

Algorithms and big data are powerful tools. Wisely used, they can help match the right people with the right jobs. But they must be designed and used by humans, so they can go horribly wrong. Peter Cappelli of the University of Pennsylvania’s Wharton School of Business recalls a case where the software rejected every one of many good applicants for a job because the firm in question had specified that they must have held a particular job title—one that existed at no other company.
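As a toy illustration of the failure mode Cappelli describes: an exact-match filter on a company-specific job title rejects every candidate, while even a crude similarity score would surface the near misses. The titles below are invented.

```python
# Toy resume screener showing how an exact-title requirement rejects everyone
# when that title exists at only one company, while a crude fuzzy match does
# not fall off the same cliff. All titles are invented for illustration.
from difflib import SequenceMatcher

REQUIRED_TITLE = "customer success enablement engineer ii"  # hypothetical, firm-specific

candidates = {
    "Candidate A": "customer success engineer",
    "Candidate B": "client enablement engineer",
    "Candidate C": "support engineer ii",
}

def exact_match(title):
    return title == REQUIRED_TITLE

def fuzzy_match(title, threshold=0.6):
    # Similarity ratio between 0 and 1; the threshold is arbitrary.
    return SequenceMatcher(None, title, REQUIRED_TITLE).ratio() >= threshold

for name, title in candidates.items():
    print(name, "| exact:", exact_match(title), "| fuzzy:", fuzzy_match(title))
```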