Imagine a product manager sitting down with his team on a Monday morning to plan the next week. The main item on the agenda is the decision of whether to create a mobile app for their e-commerce website.

The team quickly gets excited about the idea, imagining all the possibilities it could entail. They propose features like personalized shopping recommendations, integrations with social media platforms, multiple payment options, and in-app notifications about releases of new products.

But our product manager is starting to feel uneasy. 

He has trouble articulating, even to himself, what the problem is. It all seems too much, too fast. There are too many unspoken assumptions being made and too many sky-high expectations being formed. He knows better than anyone how much all these features are going to cost and how long they’re going to take, and he wants to test some basic assumptions before committing to a full-fledged mobile app.

He knows he has to say something.

Why Quality Is Sacrosanct But Shouldn’t Be

Our product manager doesn’t dare say “I think we need to do this at lower quality first.” 

Quality is sacrosanct in the modern workplace. You’re never allowed to say you’re going to deliver something at 70% quality, or 50% quality, let alone 20% quality. The unspoken rule is that you always do “your very best.”

And yet, we never really follow this rule, do we? What percentage of the tasks you complete are executed with maximum effectiveness? How many of the projects you take on reach their full, unmitigated potential? How much of the work you do is the best it could possibly be?

I would argue that that number is close to zero. It has to be, because the closer you get to the mythical 100% quality mark, the more costs skyrocket. It might take you as much time and effort to go from 90% to 95% quality as it took you to go from 0% to 90%. And it might take you that much time again to go from 95% to 99%. And there probably is no such thing as 100% perfection.

In other words, quality is an asymptote when you take costs into account: it can only approach 100%, but never reach it. And the closer you get to that 100% standard, the more costs grow exponentially.
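To make the shape of that curve concrete, here is a toy sketch in Python. The cost function is purely hypothetical – it assumes cost grows as 1/(1 − q) – but it illustrates the asymptote: each additional increment of quality costs more than the last.

```python
# Toy model of the quality/cost curve. The formula cost = 1 / (1 - q)
# is a hypothetical stand-in, not a claim about any real project; it
# simply makes cost blow up as quality q approaches 100%.

def cost(q: float) -> float:
    """Relative cost to reach quality level q, where 0 <= q < 1."""
    return 1.0 / (1.0 - q)

for q in (0.50, 0.90, 0.95, 0.99):
    print(f"quality {q:.0%}: relative cost {cost(q):.0f}x")

# In this made-up model, going from 0% to 90% quality costs about as
# much as going from 90% to 95% - and 95% to 99% costs far more again.
```

Under this (invented) curve, 90% quality costs 10x, 95% costs 20x, and 99% costs 100x – the marginal cost of each step keeps climbing, and 100% is unreachable at any price.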

I think we know and understand this tradeoff at an intuitive level. All day long, we make such tradeoffs fluidly:

  • When sending a quick email to a close colleague, you settle for a lower level of quality than you would for an email to an important client
  • When presenting to the Board of Directors, you put far more shine and polish into your slides than when running the weekly standup
  • When writing a piece you know will be published nationally in print, you give it far more editing attention than something going on your personal blog

We also have ways of communicating to others that this tradeoff is being made. We say things like:

  • “Let’s make a basic prototype and test it before going to manufacturing.”
  • “Let’s put up a rough landing page to gauge interest before building a full website.”
  • “Let’s shoot a concept video on a smartphone before hiring a production crew.”
  • “Let’s mock up this app feature before putting it into production.”
  • “Let’s do a dry run of the presentation before getting on the Zoom call.”

We have a whole vocabulary of terms and expressions to convey that we are going to do something at a lower quality than we could otherwise achieve, in order to save time, minimize expenses, mitigate risk, or test assumptions.

Notice how each of the terms above communicates some version of “Don’t judge this too harshly or nitpick details – it’s just a rough draft.” This language is meant to set the right expectations so you have permission to experiment with something you don’t know is going to work. This loosens the usual standards and conventions you operate under by asking your audience to consider the big picture rather than obsess over some tiny error.

What all this means is that we are constantly making tradeoffs about how much quality we can “afford” for a given task, document, deliverable, project, or goal. No unit of work can escape this tradeoff – it applies to everything from the shortest email you answer to the grandest goals you have for your life. All we can do is move up or down the quality/cost curve.

[Figure: the cost vs. quality curve – cost grows exponentially as quality approaches 100%]

But there is something missing from the diagram above: quality is not just a single curve.

Consider the following question: Which is higher quality – a lawn mower or a cold brew coffee? That question makes no sense. We have to compare similar items for quality to even apply. 

Well, how about a Toyota Camry versus a Tesla Model 3? You might jump to say that the Tesla is obviously better. But it depends on the situation. If you’re a first-year college student living in a dorm on a shoestring budget with no access to an electric charger and afraid of break-ins in a new city, then the Camry is better along nearly every dimension.

And that is the word we need to unpack: dimension. The “quality” of a product isn’t just one dial that gets turned up or down. There are multiple dimensions of quality, each of which can be dialed up or down independently. For example, a car can be judged by its:

  • Horsepower
  • Maximum range
  • Aesthetic appeal
  • Carrying capacity
  • Cost of ownership
  • Amenities and features
  • And many other criteria…

Each automobile brand and model maximizes, minimizes, or satisfices on a different combination of these dimensions in order to appeal to its target customer. Different customers value different dimensions of quality, which is why we have a thriving marketplace of numerous automakers instead of a single winner taking all.

Obvious, right?

Now consider how this idea applies to our digital creations – the websites, reports, slide decks, code bases, and pieces of art and music we produce. Let’s take a piece of writing for example. It might be judged on its quality along the following dimensions:

  • Density of insight
  • Entertainment value
  • Specificity and detail
  • Practicality
  • Storytelling
  • Clarity

There are many other criteria we could use, but let’s take those 6 as a starting point. Once you break down the concept of “quality” this way, you can see that individual pieces of writing aren’t better or worse than each other – they are different, rating higher or lower on each of these dimensions of quality.

A tweet probably beats out an in-depth essay when it comes to density of insight but loses miserably on specificity and detail. A how-to article wins when it comes to practicality, but likely falls short with its storytelling. 

This kind of comparison applies even to highly similar pieces of writing. Imagine two journalistic articles covering the same story, such as an oil spill off the Alaskan coast. They might rate the same on specificity, practicality, and clarity, but if one is told in a more entertaining way, that single superior dimension of quality might differentiate it from the other.

This effect is magnified on the Internet, where even a slight difference in just one dimension of quality can mean the difference between relative obscurity and viral growth. That’s because there are no barriers to what people can access online, which means a huge majority of their attention tends to flow to a tiny percentage of all the things that get posted online each day. 

The Power of Knowing Which Dimensions Matter

For a given digital creation, we have control over which dimensions of quality to invest our time and effort into. It’s not random. 

When working on a piece of writing, for example, you could ask yourself: “Which dimensions of quality are most important for this piece right now?” In most cases, there are only one or two that truly matter, and the others only need to be “good enough” or don’t matter at all. 

For example:

  • For a “how to” article explaining how to use a piece of software, the quality of the storytelling probably doesn’t matter as much, whereas clarity is paramount
  • For a thought leadership piece, the level of insight is of utmost importance, whereas the specificity might not matter as much
  • For an “elevator pitch” for a new business idea, brevity is essential, whereas practicality might not matter for now

Once you see that the quality of any output is made up of multiple dimensions and that typically only one or two truly matter, you are free to spend most of your time and attention on only those dimensions. And not only are you free to, that is what you must do if you want any chance of standing out.

In contrast, a guaranteed way to get stuck and bogged down is to try to maximize many dimensions at once. That isn’t possible or necessary, since a given piece of writing can really only make one promise: a clearly explained idea, an engaging story, a practical series of tips. Therefore, any effort you spend improving the lower priority dimensions is not only wasted, it might even interfere with and obscure the ones that matter.

This is another way of saying, “Not all aspects of a piece of work matter equally.” By taking opinionated stances about which ones do matter, and pouring all your time and attention into them, you have a chance at “winning” the attention game along a dimension that no one else can match.

The Pitfalls of Defaulting to High Quality

If all this sounds reasonable, why don’t we do it?

I’ve long noticed that most people tend to have a default level of quality that they get stuck on, like a car stuck in a certain gear. It’s like a habit – a sticky set of behaviors and personal standards that are entwined with their identity, and thus difficult to change. 

It’s easy to see why defaulting to a low level of quality would be limiting to one’s career and life – it will be hard to get that job you want, much less keep it, if the output you produce never reaches a high enough level of quality. 

It’s much more difficult to see how being stuck at a high level of quality can be just as limiting. In fact, in my work with high-performers, this is one of the most common limitations keeping them from taking the next step in their careers and businesses. 

Let me explain.

For any given profession or kind of work, there is usually a “standard of quality” that people hold themselves to. For example:

  • A graphic designer defaulting to a high degree of quality will insist that every graphic asset that leaves their desk be highly polished and ready for printing
  • A writer defaulting to a high level of quality will refuse to send in their draft or manuscript until it perfectly meets their vision for what they want to express to the world
  • A software engineer will continue refining her code until every line has been thoroughly documented, tested, and validated to the highest standards of quality

Again, notice that each of these cases describes a respectable, admirable professional. Such people are already rare! How unusual is it to encounter any high-quality piece of work anywhere? It is an accomplishment to reach such heights, and relatively few ever will.

But…if you have reached those heights, the very attitudes and skills that got you there are likely now holding you back. Here’s why: not everything can or should be high quality.

In fact, most things most of the time should not be. Why? Because it’s so damn expensive. If you insist that every. single. piece. of work that leaves your desk (or computer) with your name attached be at the highest level of quality you are capable of achieving, several increasingly severe consequences will start to happen.

First, over time, you’ll produce less and less.

You won’t have as much time to obsess over every detail and polish every facet as you get older. You may have a spouse and kids and a dog and all the responsibilities those wonderful beings entail. Your metabolism will slow down and your energy levels will go with it. This is just how life works. You won’t have the same boundless energy and vast swaths of free time in the future as you had in your youth.

Second, you’ll be limited to individual contributor positions. 

As long as you identify solely as a craftsperson – as the expert who is always there on the workbench or at the drawing board or in the studio – that’s where you will remain. The privilege of being in those places doing the work you love will slowly turn to resentment as you realize you have no other choice. There is nothing you will grow to hate as much as something you used to love.

Third, you’ll be limited in the scale and impact of what you can create.

Any significant work of art, culture, engineering, or business requires other people to reach its potential. And not just a few people you personally know – a lot of people in far-flung places, most of whom you will never meet. Even for something as seemingly solitary as writing my book, I can count over two dozen people who were directly involved, and there were probably hundreds more indirectly involved. Working with others is challenging because they will never do it quite as well as you. But you have to learn to live with that if you ever want to manifest a grand vision.

Fourth, you’ll be under-compensated and under-appreciated. 

The world doesn’t pay experts very well. Related to all the points above, as long as you have to control every aspect of your output, you’ll never receive the financial rewards and respect you deserve. You’ll be stuck producing exquisitely crafted but small, limited works of art that someone else will find a way to commercially exploit, likely leaving you with peanuts.

The paradoxical conclusion of all this is that, for the highest-performing professionals at the top of their game, the bottleneck to their growth is in learning to lower their standard of quality.

Why It’s So Hard to Lower Your Standard of Quality

At first glance, it would seem easy to simply not try as hard and stop before you’ve brought something to perfection. But it’s hard for several reasons.

First, as I mentioned previously, our identities are closely tied to the default level of quality we are most comfortable with.

We know how to reliably reach our favored level of quality. It’s comfortable, predictable, and brings expected rewards. Many of us have built entire identities around the results we produce: we are the “kind of person” who “does great work” and are thus determined to never fall below that standard. Letting go of that standard can feel like letting go of who we are.

Second, delivering at a lower level of quality is not just a matter of stopping when you reach a certain point. It requires you to understand which dimensions of quality can be sacrificed, and which still need to be maximized. 

You might make hundreds of micro-decisions over the course of producing a document or deliverable. For each decision, you have to become more sensitive to which dimensions of quality it is improving, and whether that dimension matters for the current iteration you’re working on. You have to spend more time thinking at a “meta” level, considering questions such as how long each feature is going to take, which risks or other consequences it creates, and which prerequisites depend on which others. 

For a rough-cut concept video, do you really need title screens, precise editing, cinematic background music, and maximum resolution, or can any of those be saved for a later iteration? For the first round of photo proofs, do you really need touch-ups, color grading, and cropping, or does only one of those matter until you gather more information? To answer such questions, you have to get very clear about precisely which assumption you’re testing, which information you’re trying to surface, or which hypothesis you’re validating.

Third, working at lower levels of quality surprisingly requires more advanced communication and collaboration skills.

Other people and stakeholders have to be ready to receive the “quick and dirty” draft you’ve made, which means you need to prepare them in advance. It’s of no use to deliver it in half the time if they aren’t ready and it will just sit around collecting dust. The way you communicate has to change because you have to calibrate the expectations of your collaborators so they consider the big picture instead of zeroing in on some inconsequential detail.

Fourth, you have to get much, much better at receiving feedback.

This is a whole collection of skills within itself: how to ask for specifically the kind of feedback you need, how to ask follow-up questions to discover what people really think, how to convey which kinds of feedback aren’t helpful, how to decide who to get feedback from and in what form, how to document and structure that feedback so it’s helpful, how to implement what you’ve learned without getting discouraged or losing your vision. 

And fifth, all the points above require greater emotional intelligence and self-awareness.

Each attitude or skill I’ve mentioned is about embracing change, and even accelerating it, as a means of learning faster. It turns out that in order to learn faster, you have to expose yourself: to people’s opinions of you and your work, to the consequences of mistakes and failures, to the disappointment of a promising new experiment not working out. You’re going to need more emotional fluidity to be able to pivot abruptly from one promising direction you may already be invested in, to another more promising one.

Moving From Quality to Fidelity

All the ideas and observations I’ve offered above point to one glaring need for modern knowledge workers: replacing the concept of “quality” with a more subtle and sophisticated one, fidelity.

Quality is an industrial-age idea. It comes from a time when society changed slowly, business was about making something strictly uniform, and we could expect to spend our careers in one field or even one company perfecting our craft.

But all of that has changed. At every level, our society and politics and economy and culture are all shifting far faster than ever before, and in more unpredictable ways. Quality is no longer about sticking faithfully to a timeless process passed down through the generations. It depends instead on your ability to maintain situational awareness about your environment and adapt your thinking and behavior to match it. 

The only way to maintain such situational awareness is to constantly test and probe your environment to discover what is happening and why. Such tests have to be low-quality because they have to be fast. 

You can’t spend a year building a mobile e-commerce app, because the e-commerce landscape will look completely different in a year and your hypothesis will be obsolete, even if it was correct! You can’t spend months artfully crafting your take on an emerging trend, because, by the time you publish it, most of the value of taking an early stand on it will be gone.

The word “quality” has a moralistic connotation that implies more of it is always better than less. That’s why we need to let go of it. It’s time to embrace fidelity instead.

The word “fidelity” means “faithfulness,” as in “How faithful should this deliverable be to the ultimate version of what it could be?” Sometimes, the answer may be that whatever you’re creating demands the highest levels of fidelity. If you’re at the end of a major project for example, and it’s time to deliver the final product to a client, it’s probably wise to maximize fidelity.

But fidelity is also a morally neutral term, conveying that more is not necessarily better. There is tremendous value in being able to produce rough, early, unfinished, unpolished experiments, especially when speed and adaptability are the top priorities. 

If you’re early in a project, or there are still a lot of unknowns, or you’re trying something new and risky, then working at low fidelity might serve you better. You can save tremendous amounts of time and expense, not to mention avoid huge risks and pitfalls, by creating something rough and ready and then iterating from there.

How can we use this new understanding of fidelity to increase our speed?

By giving ourselves permission to reduce the fidelity of whatever we’re creating. To dial it down to the absolute minimum needed to answer only the next, most important question we’re facing. To focus all our attention only on the next bottleneck, and ignore everything else.

Using AI For Low-Fidelity Prototyping

There is a special place for Artificial Intelligence when it comes to creating low-fidelity prototypes.

In a previous piece, I wrote about my experiments using ChatGPT to summarize books. My conclusion was that there wasn’t much value in the book I was trying to summarize: it didn’t answer any open questions for me or solve any problems I was facing.

This might seem like a failed experiment, and it was. But there is a lot of value in failed experiments – they reveal what isn’t true, doesn’t work, or isn’t worth pursuing.

Why was the conclusion I reached so meaningful? Because writing a summary of this book had been on my to-do list for 5 years. It was a “marginal” task that I had some interest in doing, but not enough to actually commit the time and energy needed. Looking at the many tasks I’ve accumulated in my task manager over the years, most of them fall into that category: potentially important enough to keep around, but never urgent enough to actually do.

By using ChatGPT to make a low-fidelity summary that was just good enough for me to get a sense of the book’s contents, I was able to test the assumption that it would be relevant to my needs much more quickly than I could have otherwise. In a sense, I was able to create a “rapid prototype” of the summary that wasn’t good enough to publish but was good enough to help me decide whether this task was worth doing at all.

I suspect this may be the greatest impact of AI tools in the short term: allowing us to quickly create low-fidelity, 80/20 prototypes to test assumptions about what we should do next and get an idea of what a final version might look like if we do.

Imagine this scenario: you have the ability with the mere click of a button to have AI complete any task on your to-do list at 50% of the quality that you’d be able to do yourself. You can do so almost instantaneously, without risk or penalty if it goes wrong, and at no cost other than a couple of minutes and an affordable monthly subscription.

Let’s say you run “AI tests” of 50 tasks on your to-do list, revealing that:

  • 20 aren’t worth doing at all
  • 15 can be executed completely by AI without your involvement
  • 10 need to be restructured and broken into pieces for AI to then complete with your supervision
  • 5 require your full and undivided attention

That would be a tremendously powerful breakdown to have at your disposal. It basically represents a plan for how to tackle a broad spectrum of tasks, which would replace a large amount of cognitive effort you would otherwise have to spend yourself. 
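A minimal sketch of what that triage might look like in practice – the category labels mirror the breakdown above, while the relevance flags and quality scores are hypothetical stand-ins for your own judgment of each AI test run:

```python
# Hypothetical triage of to-do tasks based on how an AI "test run"
# performed. The thresholds and example tasks are invented for
# illustration; in practice, the scores come from your own review.

from collections import Counter

def triage(task: dict) -> str:
    """Assign a task to one of the four categories described above."""
    if not task["still_relevant"]:
        return "not worth doing"
    if task["ai_quality"] >= 0.9:
        return "delegate fully to AI"
    if task["ai_quality"] >= 0.5:
        return "restructure, then AI + supervision"
    return "requires full attention"

tasks = [
    {"name": "summarize old book", "still_relevant": False, "ai_quality": 0.6},
    {"name": "draft release notes", "still_relevant": True, "ai_quality": 0.95},
    {"name": "quarterly strategy memo", "still_relevant": True, "ai_quality": 0.3},
]

print(Counter(triage(t) for t in tasks))
```

The point isn’t the code itself but the output: a count of tasks per category, which is exactly the kind of plan described above – produced for the cost of a few cheap, low-fidelity AI drafts.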

Many of the “intermediate” stages of our workflows include this kind of categorization, analysis, chunking, decision-making, and planning of tasks. By replacing these intermediate stages with AI, I think our time and attention will get freed up to spend in two places: the very beginning of our creative process – deciding which information to capture as inputs in the first place – and the very end of our creative process – polishing and refining the final product to perfection as only humans can.

This use case alone might dramatically free up our time since we all spend a fair proportion of our days doing tasks that don’t require our full attention. But there is another, even more interesting and profound way I think we’ll use AI. 

It arises from the fact that there is an inherent amount of uncertainty surrounding much of our work. We don’t know in advance which tasks require 100% quality, 50% quality, or are worth doing at all at any level of quality. Often you don’t understand the nature and potential value of a task until you’re already doing it, as with my example of summarizing a book.

Reducing that uncertainty is another area where I think AI will make a major impact. If you can have it complete a task at 50% quality at virtually no cost, that should be enough to eliminate a lot of uncertainty about whether it’s worth doing and what the best approach is. 

You might ask it to generate a mockup of a webpage you’re considering making, or a batch of test code you’re thinking of writing, or an outline for a course you’re thinking of designing – since this labor is free, there’s no downside in trying it, and only upside if it happens to produce something you can use. 

Once you’ve seen a rough, low-fidelity version produced by AI, and made the decision to green-light the full-fidelity version, you might need to take it from there to bring it to a level where it can be published. But even in that case, you’ve gained a tremendous benefit: you’ve been able to visualize many more (and weirder, more divergent) scenarios and consider more (diverse, unusual) options before committing your precious time and attention to one.

This approach also gets around a lot of the personal baggage and identities that we attach to a certain standard of quality. We won’t be as attached to the work that AI does on our behalf and thus can tolerate a much wider range of fidelity than we would ever accept from ourselves. 

By replacing the loaded term “quality” with the more precise “fidelity,” by focusing all your attention on the aspects of the deliverable you’re working on that matter most, by treating everything you do as a continuous iteration, and by using AI to rapidly test new directions before committing to them, you’ll open up a world of possibilities in which emerging technologies are an ally of your creative vision, not an impediment to it.

Follow us for the latest updates and insights around productivity and Building a Second Brain on Twitter, Facebook, Instagram, LinkedIn, and YouTube. And if you’re ready to start building your Second Brain, get the book and learn the proven method to organize your digital life and unlock your creative potential.