AI Guide for the Curmudgeonly

There’s perhaps no more introspective soul at a conference right now than the AI holdout. At every panel, there are earnest people who approach me after, glancing down at their shoes when they confess that they’re not sure about AI, and haven’t adopted it yet. What follows varies in wording, but the fundamental core is this: Am I alone in not being fully on board with this? If I open myself up to the idea, am I too late?

The answer to both questions is no: you’re not alone, and you’re not too late.

This week I’m going to answer some of the most common questions I am asked. If you have other questions, shoot them to me and we may do another edition of this sometime soon.

I tried AI and it hallucinated! There’s no way it can do a person’s job. Can’t I just wait for this fad to blow over?

I’m sorry my friend, but no.

I can tell you that I felt this way myself for the first year or so of the LLM boom. Sure, it’s useful for limited, specific applications, I thought. But it can’t possibly change how we work! What we do is way too nuanced and complex. ChatGPT told us to open up port 88 for crying out loud! Real transformation is a long way off.

Change always wins in the end. The long-term track record of progress naysayers claiming “this newfangled thing will pass” with a toothpick in their mouth and a squint in their eye is rather poor. 

As of today, AI is involved in almost all of our projects. The technology that was so spotty initially has rapidly evolved to be truly useful. And that’s reflective of the expected pattern.

Early Promise -> Early Disappointments -> Better Use Cases -> Refined Technology -> Positive Impact

This is how tech adoption always works. Yes, AI makes mistakes. This is more pronounced with low-quality input (short prompts, lack of context, faulty data). But this doesn’t mean the technology isn’t here to stay. Early airplanes could only fly a few hundred feet. They stuck around, and AI will too, because the upside is so compelling, and because human beings are driven to figure out how to do it better.

So no, you can’t drag your feet hard enough to stop this change from coming. It’ll happen with or without you. And one day in the not-too-distant future, you may wake up and find you’re on the other side, glad for this new tool at your disposal. Which usually leads to the next worry.

I’m afraid I’ve waited too long to start adopting AI and everyone else is way ahead. Is it worth starting now?

The extremely short answer is: yes.

Let’s take a closer look at this fear. We all know adoption happens on a curve. There’s an interesting pattern we’ve seen over the years where as a new technology takes off, there are simultaneously lots of people who think they’re behind and relatively few implementing in a meaningful way. There are a couple of reasons for this.

  • Organizations are applying some spin to their publicly stated positions on the AI adoption curve. “We use the AI search results pretty regularly” turns into “AI is a pivotal part of our core workflows”. My theory is this is a way that people deal with the anxiety associated with rapid change. The truth is that most firms aren’t quite as far ahead as they act like they are. On day 2 of a new implementation, somehow they’ve been doing this for years. This probably comes from the same mental quirk that drives leaders to overstate their expected growth in those annual industry surveys. Unless that’s just me. 

  • Meaningful implementations take time and resources to get right—but the hype undersells that. The inherent conflict in the hype rush is that everyone’s trying to implement AI faster, faster, faster, and yet impactful AI implementations largely come down to proper planning, use case selection, and resource allocation. Anything you can conceive and stand up in an afternoon probably isn’t that useful—it needs refinement to make it work. Yet with so many unknowns, companies can be afraid to give a new project the time and resources needed. This leads to half-baked pilots that never make it to the point of being impactful, not because of a lack of potential but because of unrealistic expectations. If a leader has been sold on the idea that AI is a plug-and-play solution, they won’t be prepared to allocate the resources needed to bring the project to fruition.

  • But the payoff is huge. AI can solve problems that previously couldn’t be solved. So amid the hype and chest-puffing and failed pilots, there are some genuinely remarkable transformations. These are primarily occurring today in back-end processes, not full customer-facing automation, though use cases exist for both (more on this later). This is the carrot that keeps everyone going.

All of this means that there’s good news: you’re probably not as far behind as the posturing you see online would indicate.

There’s more good news in that if you plan properly and allow for the needed resources, you can create impressive transformation within your organization. But to do that, you need to select the right use case.

All right, you’ve convinced me to give it a shot but I don’t want to get blamed for a failed pilot. How do I pick a winning project?

This is a common question. There is no shortage of things AI could do for your team. But what’s the right one to start with? 

The right choice will have a few characteristics:

  • It’s important enough to care about. AI is like any other tech implementation: it takes effort to get it right. In order to justify that effort, the issue you’re working on needs to be more than just a minor annoyance. An AI tool to solve a problem that takes 10 minutes once a month may not be worth it unless there are extenuating circumstances (it’s causing a high error rate, customer frustration, etc.). You want the thing that’s causing substantial pain, or an opportunity that yields substantial benefit.

  • The stakes are appropriate—or you can make them appropriate. AI is best used to handle tasks that matter, but which can also tolerate some variation in the AI response. Why? AI is statistical, not deterministic. That means there’s some risk of error every time it responds. It won’t do things perfectly consistently. Ask the same question 100 times, and you will get some variation in the answer. If the stakes are too high (say, AI deciding where to cut in neurosurgery), that variation may be unacceptable. The solution is to find a way to lower the impact of an AI mistake, such as having AI recommend something to a human operator, who ultimately makes the call.

  • You can clearly identify what you want. AI result quality varies widely, and one of the biggest factors is how much accurate information the AI has about the request. This might be in the form of training data, a prompt, or context passed with the prompt. Whatever the source, the less information AI has, the more likely it is to hallucinate.

  • Your data set is appropriate to the request—and AI’s current capabilities. AI can do many things well. It can’t do everything, especially if there is a limited amount of data available to train and validate the model. Many businesses are in this situation, and there’s no easy way out of it. Be honest with yourself about the state of your data and the capabilities of the model you’re using. I know some companies are using AI to clean up the data that AI will then consume. There are legitimate applications of this technique (cleaning data) and risky ones (inventing data). AI is notoriously less successful at inventing missing information than at acting on known data. Asking AI to fill in gaps it has no reference source for is essentially hallucination mining, and the results are likely to be overconfident guesses.
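The statistical point above is easy to demonstrate. Here is a toy sketch in Python (the "model" is just weighted random sampling from an invented answer distribution; no real AI is involved) showing why asking the same question 100 times yields varied answers:

```python
import random
from collections import Counter

# Toy stand-in for an LLM: real models sample each output token from a
# probability distribution, so repeated identical prompts can yield
# different answers. This distribution is invented purely for illustration.
ANSWER_DISTRIBUTION = {
    "open port 443": 0.80,   # the usually-correct answer
    "open port 88": 0.15,    # a plausible-sounding mistake
    "open port 8080": 0.05,
}

def ask_model(rng: random.Random) -> str:
    """Simulate one response by sampling from the answer distribution."""
    answers = list(ANSWER_DISTRIBUTION)
    weights = list(ANSWER_DISTRIBUTION.values())
    return rng.choices(answers, weights=weights, k=1)[0]

rng = random.Random(42)
responses = [ask_model(rng) for _ in range(100)]
print(Counter(responses))  # same question 100 times, varied answers
```

The lower-stakes pattern from the list above, a human making the final call, amounts to treating each sampled answer as a recommendation rather than an action.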

Most organizations find that when they run their wish list through the above criteria, it narrows considerably. The good news is that once you select your use case and plan properly, the implementation goes relatively quickly. So give yourself a little more time up front for a more successful project. And by the way, if you are looking for professional assistance with this, we offer AI Strategic Planning workshops where we guide your team through an interview process and then prepare custom recommendations and cost-benefit analysis.

I keep hearing about the privacy and security risks of AI. Is it safe to adopt?

Much of the hand-wringing around AI security stems from what happens to any sensitive data your team might put into an AI tool. The logic goes that if you’re sharing sensitive data with AI, the tool could store it insecurely, or even train its model on it, ultimately exposing that data to risk. Is that likely? Probably not. But the consequences could be catastrophic.

There’s also a nuance here that I think is important: if you’re in a regulated industry or need to preserve high trust, the actual chance of a model accidentally leaking your sensitive data may be low, but if you can’t explain where the data went and why it’s secure, you still have a problem.

In that respect, AI is like any other technology product: there ought to be some rules around what kinds of data it’s authorized to store and what it does with that data. You may not think of it this way, but any cloud-hosted SaaS product that touches sensitive data faces a similar quandary. So how do we solve this? The same way we’d solve it for any other tech platform:

  • Assess the AI tool’s security and privacy posture. This should tell you how the data is used, how it’s transmitted and stored, how long it’s retained, and if you have the right to request deletion of your data.

  • Make a decision about what you’re comfortable with in terms of the types of data it can handle. Maybe some tools are okay only for non-sensitive data, while others may be authorized for use with sensitive data. A lot of the information you need to make this call can be found by reading publicly available documentation.

  • Consider how files are transmitted and stored. AI can be powerful for reading and analyzing files, but be sure to factor in how to store and transmit files securely. How long do they need to be kept around? What happens if the user deletes their thread?

  • Think through whether a private solution is the right choice. Public models may not give you the control or defensibility you need. If you need to process sensitive data, consider a private solution where your data isn’t commingled with other customers’ data or used to train models.

  • Evaluate how the tool will make sure access to data is controlled. For example, should everyone be able to access the same information, or do some people only see their own files? Make sure AI isn’t opening up the entire database when it shouldn’t.
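One lightweight way to operationalize these rules is a data-classification gate: before anything is sent to an AI tool, check whether that tool is cleared for data at the relevant sensitivity tier. The tool names and tiers below are hypothetical; this is a sketch of the policy idea, not any vendor’s API.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    SENSITIVE = 2  # e.g., regulated or customer data

# Hypothetical policy: the highest sensitivity tier each tool is cleared
# for, based on your review of its security and privacy documentation.
TOOL_CLEARANCE = {
    "public-chatbot": Sensitivity.PUBLIC,
    "team-assistant": Sensitivity.INTERNAL,
    "private-deployment": Sensitivity.SENSITIVE,
}

def authorized(tool: str, data_level: Sensitivity) -> bool:
    """Return True only if the tool is cleared for data at this level.
    Unknown tools default to the most restrictive tier (public only)."""
    return TOOL_CLEARANCE.get(tool, Sensitivity.PUBLIC) >= data_level

# A public chatbot may see public data, but not customer records.
assert authorized("public-chatbot", Sensitivity.PUBLIC)
assert not authorized("public-chatbot", Sensitivity.SENSITIVE)
assert authorized("private-deployment", Sensitivity.SENSITIVE)
```

The same tiering exercise doubles as documentation: if a regulator or customer asks where the data went, the clearance table is your answer.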

With upfront planning and diligence, an AI solution can be no riskier than any other tech transformation play. That also keeps you out of the hot seat.


You don’t have to become an AI enthusiast overnight, or ever. But I do encourage you to cautiously try a real use case, and give it an honest shot. Make up your own mind. Then, if you still feel curmudgeonly toward AI, by all means, yell at those kids to get off your digital lawn. But you may just find a kernel of something that sparks your own curiosity for how the world could be better.
