Moneyball for Design

One of the more common questions I hear from venture capitalists is…

“How do you control quality when design is so subjective?”

It’s a fair question. I lead ConceptDrop – a platform that matches brands with a vetted community of designers and creative pros. So in essence, design is our livelihood.

Yet design is hard to master. The output is rarely a binary success or failure. Instead, you often have one piece of work and a multitude of opinions. Sure, you can always evaluate whether someone's work follows sound design or artistic principles, but that still doesn't guarantee a positive reception from the audience.

So how do you mitigate the variance of opinions? Is there a way to build a model that can actually predict which pieces of creative will be received favorably?


If you haven’t heard of Moneyball, it’s the book by Michael Lewis that tells the story of the Oakland A’s building a winning team through a data-driven approach, despite a disadvantaged payroll.

The concept is fairly easy to grasp. People tend to make decisions based on subjective reasoning: an internal calculator influenced by groupthink, biases, and other hidden factors. And despite our confidence in our own rational decision making, that internal calculator often misfires, or breaks entirely.

The “Moneyball” approach, in contrast, guides us to make decisions based on statistics and numerical models. With enough data, these models can consistently outperform human judgment across a wide range of tasks. Unsurprisingly, as access to data has grown, a number of teams and organizations outside of sports have started to apply their own “Moneyball for X” strategies.

So when I started reading Michael Lewis’ most recent book, I had the epiphany that we were already leaning almost entirely on data to control our quality and performance: Moneyball for design, if you will.

I’ll outline this with a hypothetical example:

Let’s say you have received a rare stroke of good fortune: two designers have offered to create your entire website for free. The only caveat: you can only choose one of them, and you must do so based on limited information.

Here’s what you receive for Designer #1:

  • Access to an absolutely stunning, beautiful website created by Designer #1


Here’s what you receive for Designer #2:

  • 31 years old
  • Avg. work email response time of 32 minutes
  • Typically charges $75/HR
  • Accepts projects within 6 minutes
  • Freelances 20 hours per week
  • Major in Business Administration


So here’s where we stand. For Designer #1, we have the Mona Lisa of websites, but no other information. For Designer #2, we have 6 data points, but zero examples of past work.

Intuitively, most people would look at Designer #1, who has an incredible work example, and choose them for the project. But our computer system would actually select Designer #2.

In this case, a handful of solid data points is actually a much better predictor of future performance than one great data point. But as humans, we usually opt for unaided judgment.
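To make the idea concrete, here is a minimal sketch of how a system might weigh Designer #2's six data points. The feature names, normalizations, and weights are entirely made up for illustration; in practice the weights would come from fitting a model on historical project outcomes, not from hand-tuning.

```python
# Hypothetical scoring sketch. The weights below are illustrative only;
# a real system would learn them from past project outcomes.

def score_designer(features, weights):
    """Linear score over normalized (0-1) features: higher predicts a
    better project outcome under this toy model."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

weights = {
    "response_speed": 0.30,  # faster email replies
    "accept_speed":   0.25,  # accepts projects quickly
    "rate_fit":       0.20,  # hourly rate near the platform's sweet spot
    "availability":   0.15,  # freelance hours per week / 40
    "business_bg":    0.10,  # 1 if business-related degree
}

designer_2 = {
    "response_speed": 1 - 32 / 120,  # 32-min avg reply, capped at 2 hours
    "accept_speed":   1 - 6 / 60,    # accepts within 6 minutes
    "rate_fit":       0.9,           # $75/hr, assumed close to typical
    "availability":   20 / 40,       # 20 freelance hours per week
    "business_bg":    1.0,           # Business Administration major
}

print(round(score_designer(designer_2, weights), 3))  # → 0.8
```

Designer #1's stunning portfolio piece contributes only a single feature to a model like this, which is exactly why a spread of modest signals can outrank it.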

How do people perceive designers and artists? Out-there? Creative? Unconventional? Most people also assume that business degrees and email response times don’t matter much here. Maybe sometimes, but the quick decision more likely reflects what most people imagine a great designer looks like. It’s the convenient mental computation, a bias that leads us to a snap judgment that feels right but is often wrong. This is the graphic design version of the representativeness heuristic, originally described by psychologists Amos Tversky and Daniel Kahneman.

So how do you use data to transcend our cognitive biases? Well, what we have found is that even the smallest behaviors leave traces of data that can be used to inform future decisions. Did the designer attach their portfolio or link to a website? Did they build their own website, or did they use a free template service like Wix? These are little things we can capture.

The potential data points are endless. And honestly, most of them are useless until you reach some level of statistical significance. What I can say is that, over time, you can absolutely refine a model as you start to understand which data points actually matter.
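One simple way to start separating signal from noise is to check how each captured behavior correlates with the outcome you care about. The sketch below does this with a hand-rolled Pearson correlation over an invented log of past projects; every feature name and number is hypothetical.

```python
# Sketch of finding which data points matter, assuming a log of past
# projects with captured signals and a client rating. All data invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

past_projects = [
    # (avg response min, accepted-in min, built own site?, rating 1-5)
    (15, 4, 1, 4.8),
    (240, 90, 0, 3.1),
    (30, 10, 1, 4.5),
    (120, 45, 0, 3.9),
    (20, 5, 1, 4.9),
]

ratings = [p[3] for p in past_projects]
for i, name in enumerate(["response_time", "accept_time", "own_site"]):
    r = pearson([p[i] for p in past_projects], ratings)
    print(f"{name}: r = {r:+.2f}")
```

With only five rows this proves nothing, which is the statistical-significance caveat above: the same computation only becomes trustworthy once the project log is large enough.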

In our company’s system, we went from a 100% human-directed matching process a few years ago to a 100% algorithm-driven one. And the more we rely on technology and algorithms to make our client/freelancer matches, the more our client retention and project ratings improve. We now match ~99% of our projects within 60 minutes, without any human overhead or intervention.

With enough data, you can translate insights into recommended actions for nearly every activity. You make a hypothesis, input your data points, extract feedback, and run it back again, aiming for incremental gains.
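That feedback loop can be sketched as a crude online update: after each project, nudge a feature's weight toward the observed outcome. The learning rate, ratings, and numbers below are invented for illustration, not how any real matching system is tuned.

```python
# Toy feedback loop: after each project, adjust one feature weight
# based on the gap between predicted and actual rating (1-5 scale).
# All values are hypothetical.

def update_weight(weight, predicted, actual, feature_value, lr=0.05):
    """Nudge the weight to shrink this project's prediction error."""
    error = actual - predicted
    return weight + lr * error * feature_value

w = 0.2  # starting weight for some feature, e.g. "built own site"
history = [
    # (predicted rating, actual rating, feature value for that designer)
    (4.0, 4.8, 1.0),
    (4.2, 3.5, 0.0),  # feature absent: this project doesn't move w
    (4.1, 4.6, 1.0),
]
for predicted, actual, feat in history:
    w = update_weight(w, predicted, actual, feat)

print(round(w, 3))  # → 0.265
```

Each pass raises or lowers the weight a little, which is the "incremental gains" part: no single project proves anything, but the weights drift toward whatever the outcomes keep rewarding.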

So, to answer the original VC question, “How do you control quality when design is so subjective?”

We can’t control who sees our designs, but we can control who makes them. So when we pair clients with designers, we take our own subjective judgment out of the equation. We play Moneyball, and approach it like math nerds.
