How to Use AI Personalization Without the Ick Factor

The Right Way to Use AI Personalization: Empowering, Not Invasive

This article is adapted from my upcoming book, Love at First Launch: A Visionary’s Guide to Bringing Extraordinary Tech to Life. If you’re into this content, you can subscribe to this newsletter to be alerted when the book is available.

As your merry band of disruptors grows and you begin to build your product’s functionality, you will have more opportunities and resources available to you. That means there are more kinds of technology you can ally with. You don’t have to incorporate any of it, of course. It’s up to you to choose which resources will benefit your revolution. It’s an important question to ask: Do all those shiny objects make the product meaningfully better, or are you just chasing trends?

To sift through the possibilities, the most important thing is to keep squinting up at that banner that flies over your revolution: what changes in the world when you succeed? If the shiny object doesn’t contribute to that, it’s probably not worth the distraction.

One reason that AI can be such a useful tool is that it strengthens the already powerful “magic formula” for driving impactful user behaviors. To recap, that formula is:

“I see you” → Build trust with the basics → Introduce new pathways

A quick rundown of the steps:

  • “I see you”  The first component is what we call a kernel of connection. It’s a moment where your product says to the user, “I see you.”  This is not usually a literal “I see you”; on its own, that phrase is too vague. Rather, you imply to the user that they are not alone because you (your product, your organization) see their hopes or fears.

    The entry point is often the first place you introduce this kernel of connection. If you were TurboTax, this might look like, “Anxious about your taxes? We can help.” That simple statement does two important things: first, it acknowledges how the user feels; second, it presents a solution. It doesn’t work if you remove the acknowledgement of how the user feels. “We can help with your taxes” is weak and ineffective. You have to lead with understanding. The offer of a solution is secondary.

  • Build trust with the basics  We’ve discussed the importance of making the user feel seen. But what happens next? How do you go from there to a place where the user trusts you enough to embrace the big vision? Consciously or unconsciously, users are looking for signals. They’ve encountered a few good apps and probably a lot that weren’t worth their time or energy. They start to make assumptions based on whether your app looks and feels like one of the good ones or one of the bad ones. By showing early trust signals such as a smooth login experience, a modern, appealing look and feel, and flawless execution on basic tasks, you let the user know this is going to be a positive experience. And that priming sets you up to be more successful with future efforts to influence their behavior.

  • Introduce new pathways  Once you have built trust with the basics, you can move on to introduce the big vision. This is the heart of the revolution: what you really want people to do. And you’ll tap into your full arsenal of motivations and connections, including hope. Hope involves three elements, in this order: a clear vision of what the user wants, a belief that they have the power to get there, and clear pathways they can take. Are you beginning to see how this creates a positive spiral of trust?

How personalization changes the game

Because the first step, the “I see you” moment, is so important, it’s an opportunity to pull out all of the stops, and technology, specifically AI and personalization, can help us do that. Personalization has become table stakes for tech products; people expect tips, offers, and recommendations to be at least minimally curated for them. And users are turned off when that personalized content misses the mark.

If you’re under 55, have you ever received solicitations from the AARP? It’s become a rite of passage. I have had friends in their 40s post on social media in mock dismay when they received their first AARP postcard. I think I got mine at age 42. We haven’t worked with AARP; they may well be in on the joke and intentionally marketing to younger-than-retirees. But the initial reaction isn’t warmth toward the organization; it’s a little bit of indignation. “Don’t you KNOW that I’m NOT OLD enough to be in this organization?”

This communication doesn’t feel personalized. 

On the other end of the spectrum, one of the most successful sales tools we ever built for a client came about five years ago, when we worked with a B2B SaaS client in the finance space. They needed to achieve adoption quickly to show their investors they were on the right track.

To show users the return on a potential investment, we built an ROI calculator that generated a report based on their personal financial data. The calculator didn’t just produce a number; it provided a sexy, 10-page, highly designed, full-color PDF in a case study format. It helped potential users envision, in an aspirational way, what using the product might do for them. It was as though the product was saying to potential users, “We see you, as the most successful possible version of yourself.”

AI personalization without the ick factor

Interestingly, while many people like ChatGPT, Gemini, Copilot, and other generative AI tools for work and play, that popularity hasn’t translated to a positive view of chatbots for customer-service tasks. A 2025 Accenture survey found that chatbots used in consumer banking had only a 29% user satisfaction rate, the lowest of any channel surveyed. The highest satisfaction rating, 60%, went to app interactions.

At People-Friendly Tech, we are watching this trend closely. Our research suggests that the reason for the negative perception is that early chatbots were primarily trained on the company’s existing FAQ, meaning the bot could do little more than regurgitate basic, general information that is the same for everyone. Not personalized at all, even if it calls you by name. Early bots also mimicked the look and feel of live-agent chats, which set up expectations the chatbot couldn’t live up to: it wasn’t equipped to provide the same level of service as a live human, and users were disappointed.

Today, chatbots have evolved to also integrate personal data: the user’s history, account information, and more. This greatly improves the experience, but users are still hesitant due to early bad experiences. We predict this will resolve in time (and with careful messaging).

For example, companies should consider not referring to an AI tool as a chatbot; the word carries negative connotations from those early efforts. An AI agent, or a named product (Alexa, Siri, etc.), sets a more positive tone, although naming your AI tool isn’t always a good idea. Whether it is depends on industry and context; don’t get too whimsical with a name if your topic is serious. You can also build a tool that doesn’t feel like a chatbot at all, simply by replacing the familiar two-sided typing interface in a separate window with something that appears on the same screens the user is already on. An example might be sliding in a question about whether the user wants to see another video as they finish a piece of content. This is AI, but it doesn’t feel like a chatbot and doesn’t trigger the negative chatbot associations.

Some users also react with fear to chatbots, mostly around privacy concerns. While it’s crucial that your AI tool be informed about the user’s specific details, it should refrain from reciting those back in a way the user might find disconcerting. For example, it’s normal for a health insurance company to know that the user has had their annual wellness exam and congratulate them on that; it might be disconcerting if that same company referenced the user’s credit score, since the user wouldn’t expect the provider to know that information. That reminds me of the time I was on a first date and the man had researched me and my family beforehand in great detail, reciting my parents’ and brother’s names, where they went to school, and even what street my parents lived on.

There was not a second date. 

Don’t let your AI be like that unsettling date. Make sure it doesn’t know things the user would feel it shouldn’t know. 

AI should also consider the tenor of the message and generally avoid scolding the user. An AI agent congratulating a user on contributing to their savings three months in a row might be welcome; an AI tool chiding the user for not contributing this month is likely to be unwelcome. A famous example of this is Duolingo’s AI owl, Duo, who sends guilt-laden messages like, “You made Duo sad. You haven’t logged in today.” Duolingo gets away with this because of its over-the-top use of feelings to build rapport, but this is an advanced maneuver. Do not attempt unless your team is very, very good at working with feelings in tech. For most products, guilt trips are as unwelcome in technology as they are at a family gathering. Use all of that personal knowledge for good, not guilt.

Wielding AI with discernment

When you get personalization right, you’re not just using data; you’re creating that crucial “I see you” moment for each individual user. Every interaction becomes an opportunity to show users you understand them, not in an invasive way, but in a thoughtful, empowering way. That’s the difference between personalization that delights and personalization that drives people away. The technology is ready. The question is whether you’re ready to wield it with wisdom.





