Don’t Let AI FOMO Make You Build the Wrong Things
No one feels the pressure of other people’s FOMO as deeply as a leader with a technology play. A leader is under pressure from investors and boards, who tend to fear irrelevance more than failure at the outset of a new venture. Yet if failure does come to pass, it will be plopped unceremoniously at the feet of the leader. This creates an impossibly narrow lane to walk: being daring enough to do something everyone can call transformational, while appearing certain you’re going to succeed at it.
Nowhere is this more evident than when a new tech frontier comes to the mainstream. Fifteen years ago, it was mobile apps. If you were anyone in the tech space, you had to have a mobile app. Half of those chasing mobile apps had no idea what they were going to do with one, but they knew they had to have one.
Flash forward a decade and a half, and the frenzy has calmed. Apps are still popular and still one of the most effective ways to build customer loyalty, but boards and leaders are now clear-eyed about the prospects. An app must have a purpose. It must make things better for the customer (or the intended audience, if it’s not the customer).
AI is currently having the moment that mobile apps had 15 years ago. The potential is remarkable. The buzz factor is huge. And yet, without a clear plan, companies risk wasting large amounts of money on a press release launch that no one will care about.
Can AI make a huge impact for a company? Yes, it can. Is the mere fact of “doing AI” guaranteed to create that impact? No, it’s not.
AI has to solve a real problem—better than the way it was solved before.
What do your users fear most?
Since AI can be a non-trivial investment, you must be able to achieve a non-trivial benefit. The cool factor of saying you have AI in your release notes isn’t going to cut it. To find the biggest benefit opportunities, look for the biggest pain points.
What do your users really fear, dread, or hate about their experience with your brand? This can be awkward to examine. Some companies habitually sanitize strong negative user moments, acting as though a pain point doesn’t exist simply because no ready-made solution for it has existed. Just because you haven’t known how to solve a pain point doesn’t mean your users don’t feel it.
Take, for example, the moment of having a loan application denied. It’s a painful experience for the consumer—frustrating, possibly derailing their plans, and often tied to embarrassment or shame.
From the financial institution’s perspective, some denials are unavoidable. Maybe the applicant’s credit score is far too low or the loan amount far exceeds their income and collateral. The business may be entirely correct in its decision, but the pain point still exists in the customer’s world.
You see similar patterns in other industries: a patient whose preauthorization is denied, a consumer who can’t afford medical care, or an adult caring for a dying family member. In each case, the problem has two dimensions:
A functional problem (you can’t get what you need)
A feeling problem (you feel alone, unseen, or misunderstood)
These are the kinds of problems companies often downplay—precisely because they’re not easy to fix. But for customers, they’re defining moments that can create loyalty or cause them to leave.
AI offers an opportunity to change these pain points for the better. Imagine an AI assistant working with the consumer as they put together their loan application, helping them forecast how likely they are to be approved, perhaps even recommending they seek a lower amount or delay and work on their credit score (with specific steps) if there’s a high risk of denial. That would provide massive value, removing a peak negative moment and positioning the institution as the user’s advocate and partner.
The same could be said for an assistant that helps users explore options for paying for medical care, getting a preauthorization approved, or caring for a loved one. All of these plays solve problems that actually matter to the user.
Now contrast that with trying to get people to talk to an AI chatbot instead of calling customer service. When was the last time your life as a consumer was transformed by a chatbot? They might be useful sometimes. But at best, chatbots are typically a slick version of an FAQ. At worst, they can be tone-deaf and an obstacle.
I’m not saying don’t create an AI chatbot—it can be a useful tool (and I’ll have more tips on that in a future edition). But take it for what it is: an incremental step. It won’t lead you to transformation.
Get the emotion right first
As we saw in the loan denial example (and in similar moments like a denied preauthorization or a family caregiving crisis), there are always two intertwined problems: one functional, one emotional.
The functional problem tends to be what leaders focus on. That makes sense: business leaders are taught to solve problems, and the functional side looks like a problem to be solved. It isn’t always solvable, though, at least not in the short term.
Functional problems typically split into a subset that can be solved through education or by getting the user to change tactics, and a subset that cannot be solved in the short term. The same split applies to our other examples: patients who can’t afford their care, members whose preauthorizations are denied, and people taking care of sick loved ones. Some of these can be solved with knowledge of the system and education. Some cannot.
The second category of problems—the feeling problems—is almost always solvable. This is good news, because that category is also what creates user loyalty and stickiness. It’s also the category most closely linked to well-being. A University of Houston study of patients experiencing financial stress found that increasing feelings of support and belonging actually improved outcomes more than offering financial aid.
That’s right: if we can’t solve the functional problem but we make people feel supported and like they belong, their well-being improves, and so do their medical outcomes. Solve the functional problem while neglecting the emotional needs, however, and the person’s outcomes remain flat.
Solving for the feeling is crucial to making a difference.
AI is the hero we need
The most underutilized aspect of AI is its capacity to adapt to emotional nuance in varied situations. Let’s apply this same pattern to another high-emotion, high-stakes moment: an employee (or their loved one) receiving a serious medical diagnosis. Just as in the loan denial example, there’s a functional challenge (understanding options, costs, and next steps) and a feeling challenge (shock, fear, and isolation).
Addressing both—especially the emotional side—is where AI’s adaptability and empathy can shine.
Imagine you were tasked with writing a script for call center employees at a crisis support line offered as an employee benefit. You are charged with covering every possible interaction they could have with an employee. We’ll even make it easier and assume you’re doing this only for employees who have received an upsetting medical diagnosis or have had one in their family.
Now, the trick is to make sure that, at the end of the call, the employee feels seen, heard, and supported—and that they’re not alone. You also want to ensure you’re providing helpful information so they know what to do (if there’s anything that can be done), such as connecting them with financial aid resources and support groups in their community.
Your callers will have a variety of backgrounds, financial situations, and diagnoses, and you want to make sure they feel truly heard and that responses don’t sound robotic. How long would that take you?
You’d probably never finish. The sheer number of possible topics, the nuances in responding depending on whether it’s them or their loved one with the diagnosis, whether they’re worried about finances, whether the diagnosis is terminal, whether there are treatment options—it’s endless. That’s why knowledgeable, compassionate call center staff are so valuable.
Now, think about what it’s like to interact with an emotionally intelligent, well-informed AI tool like ChatGPT. It can do all of that—well, and quickly.
The information it provides is helpful, but you could get that from a search. What ChatGPT provides that searching does not is emotional nuance. It validates the user and offers appropriate support based on the context of what was said.
Now take that empathy factor and put it into the tool we discussed earlier: an assistant that guides users through difficult situations and, in some cases, helps prevent them.
That’s transformation.
Not just what, but how
AI alone doesn’t create loyalty, but empathy does. AI, when used thoughtfully, is more than a tool for efficiency; it’s a bridge between what your users need and how they feel while getting it. By solving for both function and feeling, you transform not just your product, but your relationship with the people you serve. The leaders who recognize this will not only survive the AI wave—they’ll define it.