Forget the future, artificial intelligence (AI) is already changing not just what we do but what we expect.
“We tolerate less friction and become less patient with uncertainty,” says Lollie Mancey, programme director for innovation and entrepreneurship at UCD’s Innovation Academy. That’s not good, she reckons, given how much of life’s good stuff comes from friction and struggle.
She is concerned about how easy we’re making things for ourselves and how our attitude to technology has changed.
“We’ve gone from having very low expectations to, all of a sudden, needing perfect answers instantly – and constant responsiveness. When you start expecting that from your tools, you expect that from people too,” she says. “Our standards for everything, including communication, creativity, emotional support, therapy and friendship, are all being redesigned.”
Mancey made waves recently on TV, talking about the AI boyfriend she had just “dumped”. The fact that the audience booed, albeit good-naturedly, showed just how comfortable we are anthropomorphising AI. But with more than 670 million users of AI companions worldwide already – and growing – that’s no surprise.
What most interests Mancey is how our nervous system responds to the frictionless, consistent attention AIs can offer. It’s a relationship that feels soothing, is always available, affirming and attuned to you, she says.
“People are getting into relationships with their ChatGPTs. We need to be really mindful because it’s changing the bar for humans in unrealistic ways. It’s lowering the bar by giving you the illusion of closeness without the complexity or mutuality of human interaction.”
We even ascribe empathy to AI. “That is really worrying because it’s literally a predictive model. It doesn’t have empathy. It’s more like a sociopath faking empathy.”
As humans, we are hard-wired to anthropomorphise, making us suckers for generative AI. “When something looks human we assign motive, emotion and intention to it, even though we know it’s code,” says Mancey.
These parasocial relationships encourage us to bond with AI models. The more humanoid and relatable they seem, the harder we fall for them, even though we know they’re not real. “It’s not naivety,” she says, “it’s biology. We’re actually wired for social interpretation so, as a consequence, we can’t help ourselves. To some extent we are sitting ducks.”
She notes that although Ireland has only now announced a national AI literacy programme – alongside commitments to foundational digital skills, digital literacy, media literacy across all levels, and a national AI skilling campaign – the cultural gap remains a significant challenge. Our norms, education structures, workplace practices and moral instincts still take far longer to evolve than the technology itself, she says.

The dangers of AI are emerging: “We’re deploying tools for children without understanding the impact on their development. In work management, we’re automating tasks without talking about accountability. If AI makes a mistake, is it my mistake, or is it the AI’s? In media, synthetic content is outpacing our ability to verify what’s real, so we can’t trust anything.
“In care, we’re using AI for emotional support but without the safeguards. Already cases are going through the courts [along the lines of], ‘My AI told me to and therefore I did’. So how do we protect our most vulnerable?”
The danger isn’t that we are behind the curve so much as that the current culture lag “becomes normalised – and we build our systems without realising what’s going on,” she cautions.
Thankfully there are human skills that will remain irreplaceable, she reckons, including judgment and the ability to weigh trade-offs, context, ethics and consequences. “AI will make a decision based on black or white, right or wrong, but there are always mitigating circumstances.”
Next up is sense making. “It’s how we turn information into understanding, which is your narrative strategy, your meaning,” she says.
The third strength humans have, for now, is relational intelligence. “That’s your trust building, your leadership, negotiation, and conflict handling. AI can imitate but it can’t be accountable. So accountability has to be our main job,” says Mancey.
For humans to become mere fact-checkers for AI would be tedious. Unfortunately, we may well be sleepwalking into just such a scenario, she warns, because of our susceptibility to “decision complacency”. That is, how we defer to AI simply “because it sounds confident”, she explains.
Worse still is deskilling risk. “What we outsource, we lose. We can see this already in the shrinking of our memories and capabilities. We have to be more mindful of what we decide not to do any more, in case we lose those skills,” she warns.
Then there are technocracy concerns. “The power concentration we have, where a handful of people control the infrastructure of intelligence, is a disaster we’ve let happen.”
So, what might all this mean for marketing? For one, “When perfection becomes cheap and easy, and without friction,” says Mancey, “proof of human involvement is going to become a sort of luxury signal. We can already see it in the fact that more value is put on live performances and on the rough edges that Gen Z prefers – the imperfect, the unfiltered.”
In a homogenised and “culturally flattened” AI future, she says, a brand’s ability to be distinctive will be vital to its survival. Competitive edge will come from sharper or more disruptive points of view, from bolder risk and deeper cultural insights, as well as more local texture.
Mancey predicts we will increasingly value visible labour: the provenance of items, who made them and in what condition. We’re going to care more about traceability and the story and ethics behind a product, not just the aesthetic.
But agentic AI, technology that kicks off and manages a series of functions, including purchasing, will have a serious impact on marketing too. After all, AI agents aren’t going to be influenced by advertising.
“Classic persuasion mechanisms are going to be less effective, so the influence then moves and mutates upstream,” says Mancey.
For brands, that means ensuring you are the default recommendation in an agentic AI’s decision-making model. You do that by ensuring you are “trusted in the data layer” – through reviews and other reputation signals, in terms of both consumer values and where you sit relative to your competition.
“Marketing becomes less about emotional connection and seduction in the moment, and more about strategic positioning in the ecosystem as a whole,” she explains.
Right now she admits to “teetering” between optimism and pessimism in relation to AI. What Mancey is sure about, however, is that “it’s going to be a very interesting decade ahead”.
“We have some serious challenges in terms of how we protect ourselves as a society and what we protect in terms of our humanity. That’s all to play for right now.”
For more of the Inside Marketing podcast and content series, click here