AI Can't Fix Everything — But It Might Rescue the Ideas Worth Saving

When I meet people out and about and they find out what I do, the first question I’m asked is almost always about AI: how we’re using it, how much impact it’s having, whether it’s going to be our civilization’s undoing or our savior. I would describe myself as an AI pragmatist, and this is my take.


When I was first introduced to the software world 28 years ago, ‘quality’ largely meant moving, storing, and displaying data accurately. Almost no weight was placed on the look and feel of the application, the user experience, or performance. If it got the bits and bytes where they were going, it was judged to be good.


This meant the cost to develop “good enough” was low, and therefore lots of people tried it. If it costs $100 to open a lemonade stand and $500K to open a Starbucks, you’re going to see a lot of people open lemonade stands and fewer open Starbucks. And that’s what happened. By 2011, five years into my journey as a tech consultancy founder, people with $12K, an idea, and a little gumption could shoot their shot at the tech landscape. Most failed, but a few succeeded, and paved the way for others who came later. It was a fast-paced time fueled by Diet Coke and dreams.


But the expectations didn’t stay low. A few apps pushed the envelope with a better look and feel and nicer features that catered to the user. Language shifted from stiff, formal voices to casual lingo. Suddenly users’ eyes were opened to what could be. And having seen ‘better’, they were no longer impressed with ‘functional’.


Around the same time, online fraud entered the picture, and it hasn’t left since. Security went from ‘it has HTTPS’ to dozens of considerations on every project.


For us, a company built on the ideas that the experience matters and security is non-negotiable, this collective awakening has been an exciting time. We’ve enjoyed making better and better interfaces as business will (and budgets) caught up with user desires and the threat landscape. We’ve appreciated seeing more public awareness of security and compliance.


But, as always, there have been tradeoffs. Budgets required to create a table-stakes, consumer-facing app in 2024 skyrocketed to more than 15x what they had been in 2012. And that changed the landscape. Most people with modest budgets and an idea could no longer take their shot at tech glory. It just cost too much.


That’s not all bad. Some of that impact is weeding out ideas that were going to fail anyway—they weren’t that strong, or the sponsor wasn’t that passionate, or they just didn’t solve a meaningful problem. The world is no worse off without them, and the visionary may even be a little better off (at least in the short term) for not expending time and money on something that was never going to yield a great benefit. To the extent that it causes people to evaluate their business case more thoroughly, this can be a good thing.


And yet, I can’t help but notice that some great ideas have fallen by the wayside. We’ve certainly seen some genuinely passionate visionaries with useful ideas decide to shelve them because the risk was too great given the cost. And that’s a loss. One of those ideas just might have changed the world for the better. And it might have been the springboard that leader needed to become the bold, innovative person they always had the potential to be.

What’s AI doing about it?


The current AI revolution is likely to drive costs down (though not as much as some people claim) and that is a good thing. If we can make it more affordable to pursue ideas, a few of those promising-but-just-a-bit-too-costly ideas may make it to fruition, and that’s a few more shots at making a difference. 


To understand the full picture, we need to look at two separate uses of AI:

  • Leveraging AI to write code

  • Integrating AI within the software itself, such as a chatbot feature


The drastic improvement that I sense people are hoping for is around the first component: leveraging AI to write code. No one’s wrong for wishing for this: it could reduce cost for every coding project across the board, so it’s enticing. It’s appealing to me too—we sometimes lose work because the client can’t justify the dollars. Every time a visionary with a good idea walks away because it was too expensive, I know on the one hand we did the right thing in telling them, but I also wish the dynamics were different and we could bring the project to fruition. If AI can help reduce the number of visionaries walking away with their shoulders slumped after a high-level estimate, that would be incredible.


Yet as much as I or anyone else might want it to be, from what I can see today, this effect won’t be staggering. That’s because actual coding (typing code manually) is maybe one third of the typical time spent on a non-AI development project, and most of the other aspects take about the same or more time with AI. So even if AI reduced the coding time very significantly, say by two thirds, the net savings would be two thirds of one third of the total cost, or roughly 22%, given all of the other components of a successful project. That’s still very meaningful when comparing cost-benefit decisions. But perhaps not revolutionary.
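The back-of-envelope math above can be sketched in a few lines. This is just the article’s own estimates (coding is roughly a third of project time, AI cuts it by roughly two thirds) plugged into the simplest possible model; the function name and figures are illustrative, not a forecast.

```python
def net_savings(coding_share: float, ai_reduction: float) -> float:
    """Fraction of total project cost saved when AI reduces only
    the coding portion. Assumes the other work is unchanged."""
    return coding_share * ai_reduction

# The article's estimates: coding is ~1/3 of the work, AI trims ~2/3 of it.
print(round(net_savings(1/3, 2/3), 2))  # 0.22, i.e. ~22% of total cost
```

Even doubling the AI reduction to 100% of coding time would only save a third of the total, which is why the coding portion alone can’t be revolutionary.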


For our work, we’re using AI as a coding helper to drive down costs where the needs are well-defined and the risk is appropriate. For other situations, the work is too nuanced and AI doesn’t make sense. We’ve seen some nice benefits so far, especially in new development that’s fairly ‘boilerplate’ in nature.


The components that make up a quality, trusted product, such as design, quality assurance, and code reviews, still need to happen—and arguably need to be more rigorous with AI at the helm. AI can take you from 70 to 200 mph. You don’t want to do that with faulty airbags.


So you might ask yourself (I have): will the sudden availability of a less expensive coding component push some of these standards for experience, quality and security back down? Will people be so enamored by the glut of new tech that they are willing to give a little on the experience?


I don’t think so.


Because the other factor that we haven’t talked about yet is the attention economy. In 2011, tech was still novel enough that new ideas could get attention and traction fairly easily. That’s changed dramatically, and I’d argue AI-driven content has only made it worse. The sheer volume of smoothly worded garbage in my inbox, my LinkedIn feed, and just about anywhere else you look has made it much, much harder for new ideas to stand out.


In a 2011 environment starved for tech, ‘useful’ was enticing enough to get noticed. Now, it’s not enough. I don’t think users will be able to recalibrate their expectations back down. If anything, a wave of lower-quality AI products is likely to make users more discerning about the trust signals a product gives off. 


Security, by contrast, isn’t based on vibes. It’s based on proven practices and industry standards. It’s hard to picture that ever moving backwards, especially with threats accelerating. In fact, the trust signals associated with user experience actually dovetail perfectly with security. If there are flaws in vibe-coded applications, they are likely to be in less-visible areas like security. Breaches are going to happen, and that will only drive up the focus on security and compliance.


While AI-assisted coding may not quite be the revolution we hope for, the second category, AI-driven features, may very well make up the difference. These features let us achieve results that would previously have been so time-consuming, they weren’t even considered.


Let’s look at a brief example. Let’s say that I want to read a scanned or photographed document so the user doesn’t have to re-enter a bunch of tedious data. In the olden days, we would fire up ye olde OCR scanner to read the text in the document. Depending on the use case, we’d have to rely on a field’s position on the page, or the text surrounding it, to locate the data we were looking for. This worked fine for predictable situations where the document looked similar every time, say a W-2 form for taxes. But it fell apart quickly when the document was variable, say an order form that could be from hundreds of different companies. Code would have to be individually written for each possible format, and that made the project terribly expensive, both for the initial effort and to maintain as documents change.


AI is a revolutionary solution here, because it allows us to solve the problem (extracting data from variable input) that was so time-consuming it was essentially unsolvable before. Instead of writing code for each variant, a developer can simply feed it to AI and rely on its ability to understand context to extract the relevant information.
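The pattern described above can be sketched briefly. This is an illustration only: `call_llm` is a stub standing in for whatever model API you actually use, and the order-form fields are hypothetical. The point is that one prompt replaces the per-format parsing code.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call. Stubbed so the sketch runs;
    a real implementation would send the prompt (and document) to an API."""
    return '{"vendor": "Acme Co", "order_number": "8841", "total": "149.00"}'

def extract_order_fields(document_text: str) -> dict:
    # One generic prompt handles any vendor's layout, instead of
    # hand-written extraction logic for each document format.
    prompt = (
        "Extract the vendor name, order number, and total from this "
        "order form. Reply with JSON only.\n\n" + document_text
    )
    return json.loads(call_llm(prompt))

fields = extract_order_fields("ACME CO  Order #8841  Total: $149.00")
print(fields["order_number"])  # prints 8841
```

In practice you’d also validate the model’s output (malformed JSON, missing fields) before trusting it, which is where the quality-assurance rigor mentioned earlier comes back in.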


And that’s just one example. There are many more:

  • AI-generated summaries of complex or hard-to-read documents

  • AI-based predictive analytics, such as analyzing which cases are likely to need escalation or which patients are likely to suffer complications

  • AI-authored verbiage to assist customer service personnel, enhanced with that specific customer’s unique data

  • AI-driven identity management, allowing users to scan a variety of ID documents to prove their identity during high-stakes registrations

  • AI-led data summarization, allowing users to feed in large datasets and get rapid analysis (we’re experimenting with this in our own BI product, Rover)

  • AI image and video recognition, tracking output in a factory to look for anomalies or safety issues

Each of these is a massive step forward, because it allows developers to affordably solve a problem we could not solve in a satisfactory way with traditional programming. 

That’s good news for the visionaries of the world, walking into tech meetings with a spreadsheet and a breathless pitch on the tip of their tongue. I’m looking forward to seeing a few more walk out with square shoulders and a grin on their face, ready to take on the world.
