The AI Mirror
Diego Bonifacino on what your organization's drawings reveal about its deepest fears
As artificial intelligence becomes ubiquitous in 2025, psychodynamic research reveals that when people draw AI, they’re not really drawing technology—they’re exposing their organization’s hidden anxieties, power struggles, and existential dilemmas.
Across boardrooms from Silicon Valley to Shanghai, the message seems universal and urgent: AI or die. CEOs face unrelenting pressure from investors demanding credible AI roadmaps. Analysts cite predictions—such as McKinsey’s estimate of potential 40% productivity gains by 2035—to argue that hesitation borders on negligence. Fall behind now, the logic goes, and your competitors will eclipse you before the decade ends.
But what if this fervent embrace of AI isn’t purely about innovation or opportunity?
During his research on how professionals actually experience the rise of artificial intelligence, Diego Bonifacino—an action researcher and graduate of INSEAD’s Executive Master in Change—encountered an unsettling pattern. That buoyant sales director who champions AI in every meeting? The manager who eagerly volunteers for every pilot program? Their enthusiasm, he discovered, may be less about strategic vision and more about fear.
“The determining factor for this research emerged during my very first interview,” Bonifacino recalls. He was speaking with a conversational designer—someone who would readily describe herself as an AI enthusiast and who relied on the technology every day. But when he asked about shifting work expectations, her tone faltered. Her expression tightened.
“Are you worried about your work?” he finally asked.
“Sì,” she admitted quietly in Italian. “It is difficult not to be.”
In that moment, Bonifacino recognized something disquieting: her fear was also his own. Beneath his professional optimism about AI’s transformative potential, he realized he had been running from an uncomfortable truth—much of today’s excitement about artificial intelligence may be fear in disguise.
That realization became the foundation for Bonifacino’s study of how leaders and professionals really feel about artificial intelligence, explored not through surveys or interviews alone, but through something far more revealing: their drawings.
Drawing Out the Truth
Over three research cycles involving 45 participants across Southern Europe, Bonifacino developed a deceptively simple method he calls “Picture AI.” He asked executives, engineers, healthcare workers, and business leaders to draw what AI means to them and their organizations. What emerged wasn’t just a collection of doodles—it was a psychological X-ray of how organizations unconsciously relate to one of the most transformative technologies of our time.
The drawings revealed a striking pattern: AI functions as what psychoanalysts call a “phantastic object”—a blank screen onto which people project their deepest desires, anxieties, and unresolved conflicts. When participants drew AI, they weren’t really drawing technology. They were externalizing their fears about obsolescence, their frustrations with chaotic workflows, their hopes for relief from drudgery, and their organizations’ hidden power dynamics.
One participant, working in environmental services, drew AI as a house processing customer chaos—revealing not just a vision of AI, but an admission that customer input felt overwhelming. A nursing consultant’s drawing exposed the tension between following procedures and showing empathy to patients. These weren’t AI visions; they were confessions about work itself.
“When people talk about AI,” Bonifacino concluded, “they are often not just talking about AI—they reveal powerful insights about organizational life.”
The Metaphors We Choose
The symbolic language participants used clustered around three dominant metaphors: brains, magic, and nature.
Brains appeared in eight drawings, depicted in various states of transformation. In one, the human brain shrinks as the AI brain grows—a visual representation of cognitive displacement anxiety. In another, a Pac-Man devours the brain, while elsewhere a rocket-propelled brain suggests enhancement. These competing visions revealed deep ambivalence: Is AI augmenting our intelligence or replacing it?
Magic emerged as a metaphor for the unknown. Participants drew magic boxes that transform raw inputs into finished goods, and magic wands that solve problems without understanding. One participant explicitly contrasted what he knew about AI with what his company saw—highlighting a dangerous ignorance gap. The element of magic tied directly to control: Who wields the wand? Who understands the spell?
Nature and organic metaphors suggested something even more unsettling: the delegation of control to forces beyond human authority. Trees with expanding roots, dawn-like suns that people walk toward “almost like in a religious procession,” and references to Avatar’s “tree of souls” painted AI not as a tool but as a ruling system—something that grows according to its own logic and reorganizes society.
Whether through brains, magic, or nature, the drawings spoke of relinquishing control—or at least profound ambivalence about it.
Fear-Driven Enthusiasm
The most significant finding emerged from Bonifacino’s psychodynamic analysis: approximately two-thirds of participants exhibited defensive postures toward AI.
Using a mental-state coding framework developed by psychoanalysts, he assessed whether individuals operated from what’s called a “depressive position,” characterized by the ability to tolerate ambiguity and integrate contradictions, or from more defensive positions marked by splitting, idealization, or paranoia.
The majority showed signs of defensiveness. Even more revealing, their enthusiasm often served as a sophisticated defense mechanism. Bonifacino identified what psychoanalysts call “identification with the aggressor”: siding with what frightens you to regain a sense of safety. People adopt AI’s language and logic to cope with vulnerability, without consciously acknowledging their fear.
Consider the participant who drew a broken heart—one half black (the bad Terminator-like AI) and one half golden (the helpful AI). The explanation emphasized control: “man is always the one who manages AI.” Yet the image told a different story: a heart divided, unable to integrate its contradictions.
Another drew humanity’s next evolutionary step: an enormous brain connecting all aspects of society, but the human figure had shrunk to just a head, eyes closed, body gone—a Matrix-like vision. Though the participant acknowledged fear, the drawing revealed something deeper: a loss of physicality, agency, and grounded reality.
One drawing showed a stark contrast between today’s “sad” work—colored like coal—and tomorrow’s AI-enabled “WOW” moments of strategic thinking. As researcher Stefan Selke noted, AI has become a “secular promise”—a modern myth offering salvation in place of old religious or ideological systems.
“Was the entire hype behind AI fear-driven?” Bonifacino asked. The answer, uncomfortably, seemed to be yes—at least in significant part.
The Faultlines That Fracture
Two latent divisions—what researchers call “faultlines”—repeatedly disrupted group cohesion: domain expertise and gender.
The expertise divide was palpable. “We’ve been doing AI for the last 20 years,” many technically oriented participants emphasized, drawing a sharp line between themselves and those excited by recent generative AI. This wasn’t just pride; it was a boundary that excluded non-technical colleagues from meaningful participation.
Bonifacino contrasted two European sales directors: one from a global beverage company, one from a top tech firm. The beverage director kept AI at arm’s length, believing its impact was “still five years away” and that the conversation was “reserved for technical people.” The tech director, by contrast, lived with AI daily—her targets set by algorithms, her coaching supported by generative models.
The gender gap also emerged. All women in the study included humans in their drawings; 44% of men did not, making men the only participants to imagine scenarios without any human presence. Women tended to focus on tangible outcomes: changes in the composition of work and autonomous societal elements. Men were more likely to draw processes, networks, interplanetary visions, and boundaries.
These patterns echo research showing gender gaps in generative AI adoption as large as 20%. Faultlines matter because they determine who gets heard, whose concerns are validated, and whose imagination shapes the future.
Three Questions, Five Dimensions
In the final phase, Bonifacino presented participants with a collage of all the drawings and asked them to identify themes, concerns, and questions. Despite having only minutes to observe, participants converged on remarkably similar insights.
From their observations, five core tensions emerged:
Evolutionary Path: Excitement about transformation meets fear of the unknown
Cognitive Boundaries: AI should enhance, not erase, human creativity and intuition
Bandwidth: AI connects everything but risks overwhelming us with information
Reliance: Productivity gains come at the potential cost of agency and skill
Accessibility: Democratizing AI power raises questions about ethics and misuse
Even more striking, participants’ questions—though phrased differently—collapsed into three essential inquiries:
How will we use AI, and to what end?
What will be our role?
How do we feel about it?
These questions map directly onto Organizational Role Analysis, a framework for understanding the dynamic relationship between individual, role, and organization. Bonifacino synthesized these insights into the AI Reconciliation Canvas (AIR)—a practical tool that crosses the three questions with the five dimensions, creating fifteen conversation spaces where teams can explore their relationship with AI. It’s not a solution template but a “holding environment” where groups can confront tensions, clarify boundaries, and move from reactive adoption to conscious co-creation.
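To make the canvas’s structure concrete, here is a minimal sketch in Python, assuming the AIR canvas is simply the cross-product of the three questions and five dimensions described above. The question and dimension labels are taken from this article; the cross-product representation is an illustration of the grid’s logic, not Bonifacino’s own tooling.

```python
from itertools import product

# The three questions and five dimensions reported in the study.
questions = [
    "How will we use AI, and to what end?",
    "What will be our role?",
    "How do we feel about it?",
]
dimensions = [
    "Evolutionary Path",
    "Cognitive Boundaries",
    "Bandwidth",
    "Reliance",
    "Accessibility",
]

# Crossing the two lists yields the canvas's fifteen conversation spaces.
canvas = list(product(dimensions, questions))
assert len(canvas) == 15

for dimension, question in canvas:
    print(f"{dimension:<20} | {question}")
```

Each printed pair marks one cell of the grid, a single conversation space a team might work through together.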
What Organizations Should Do
The implications of this research challenge conventional AI adoption strategies. Technical training and change management programs typically treat resistance as an information problem. Bonifacino’s findings suggest something different: resistance is information.
First, recognize that AI discussions reveal organizational truth. When teams talk about AI, listen for what they’re really saying about work, control, value, and identity. Their AI concerns are symptoms of deeper organizational dynamics that preceded the technology.
Second, create spaces for collective reflection before rushing to implementation. The Picture AI method proved surprisingly powerful. In just minutes, groups surfaced themes, concerns, and questions that might never emerge in traditional town halls or surveys.
Third, attend to who’s in the room. Faultlines around expertise and gender aren’t just diversity issues; they determine whose intelligence shapes adoption. When technical experts monopolize AI conversations, organizations lose the grounded, human-centered perspectives that non-technical colleagues bring.
Fourth, don’t mistake enthusiasm for readiness. The finding that two-thirds of participants exhibited defensive postures—even among motivated MBA students—suggests that visible excitement may mask unacknowledged anxiety. Leaders should cultivate the capacity to hold ambiguity, integrate contradictions, and tolerate not-knowing.
The Mirror, Not the Answer
Perhaps the most profound insight from Bonifacino’s research is simply this: AI is a mirror. It reflects back our organizational anxieties, our unspoken power dynamics, our hopes for relief from drudgery, and our fears of obsolescence.
In 2025, as AI capabilities accelerate and adoption pressures intensify, organizations face a choice:
They can treat AI as purely a technical implementation—something to roll out, measure, and optimize.
Or they can recognize it as an opportunity for organizational truth-telling: a chance to surface what’s not working, who’s not being heard, and what people truly need to do their best work.
The drawings Bonifacino collected—brains growing and shrinking, magic boxes processing chaos, trees extending roots, broken hearts, humans walking toward digital dawns—aren’t just about AI. They’re about us: our relationship to work, to control, to knowledge, to each other, and to an uncertain future. “When we talk about AI,” Bonifacino concluded, “we are, in essence, talking about ourselves.”
The question isn’t whether AI will transform organizations. It already has. The question is whether we’ll do the human work—the psychological, relational, emotional work—required to make that transformation conscious, inclusive, and genuinely beneficial. The technology will keep advancing regardless. But the quality of our collective thinking, our ability to hold complexity, and our willingness to examine what the AI mirror shows us—that will determine whether we shape this transformation or are simply swept along by it.
In the end, adopting AI wisely isn’t a technical challenge. It’s a human one. And as Bonifacino’s drawings reveal, we’re still figuring out how we feel about that.
The research discussed is based on Diego Bonifacino’s thesis, “Picture AI: Navigating the Hidden Dynamics of Organizational AI Adoption,” completed in 2025 as part of INSEAD’s Executive Master in Change programme.
References and Additional Reading
Bion, W. R. (1962). Learning from Experience. London: Heinemann.
Bonifacino, D. (2025). Picture AI: Navigating the Hidden Dynamics of Organizational AI Adoption. Thesis for Executive Master in Change, INSEAD.
de Maat, S., Scherbakova, O., & van de Loo, E. (2024). Metaphors of Success: Finding Potential Manifestations of Unconscious Phantasy? Socio-Analysis, 25, 1–16.
Desmarais, A. (2025, February 11). Here’s What Has Been Announced at the AI Action Summit. Euronews. https://www.euronews.com/next/2025/02/11/heres-what-has-been-announced-at-the-ai-action-summit
Freud, A. (2018). The Ego and the Mechanisms of Defence. London: Routledge.
French, R. B., & Simpson, P. (2010). The ‘Work Group’: Redressing the Balance in Bion’s Experiences in Groups. Human Relations, 63(12), 1859–1878.
Lau, D. C., & Murnighan, J. K. (1998). Demographic Diversity and Faultlines: The Compositional Dynamics of Organizational Groups. Academy of Management Review, 23(2), 325–345.
Long, S., Newton, J., & Sievers, B. (2006). Coaching in Depth: The Organizational Role Analysis Approach. London: Routledge.
McKinsey & Company. (2023, June). The economic potential of generative AI: The next productivity frontier. Retrieved November 30, 2025, from https://www.mckinsey.com/~/media/mckinsey/business%20functions/mckinsey%20digital/our%20insights/the%20economic%20potential%20of%20generative%20ai%20the%20next%20productivity%20frontier/the-economic-potential-of-generative-ai-the-next-productivity-frontier.pdf
Menzies, I. E. P. (1960). A Case-Study in the Functioning of Social Systems as a Defence Against Anxiety. Human Relations, 13(2), 95–121.
Otis, N. G., Cranney, K., Delecourt, S., & Koning, R. (2024). Global Evidence on Gender Gaps and Generative AI. https://doi.org/10.31219/osf.io/h6a7c
Pasmore, W., Winby, S., Mohrman, S. A., & Vanasse, R. (2019). Reflections: Sociotechnical Systems Design and Organization Change. Journal of Change Management, 19(2), 67–85.
Selke, S. (2024). Future Technology as Comfort: Promising Tales About AI. In Lifelike: Artificial Intelligence, Humanoid Robots, and the Future of Humanity (pp. 255–279). Wiesbaden: Springer.
Thatcher, S. M. B., & Patel, P. C. (2012). Group Faultlines: A Review, Integration, and Guide to Future Research. Journal of Management, 38(4), 969–1009.
Tuckett, D., & Taffler, R. (2008). Phantastic Objects and the Financial Market’s Sense of Reality. The International Journal of Psychoanalysis, 89(2), 389–412.
Turkle, S. (2023, March 10). Should You Be Nice to AI Chatbots Such as ChatGPT? Scientific American. https://www.scientificamerican.com/article/should-you-be-nice-to-ai-chatbots-such-as-chatgpt/
Van der Kolk, B. A. (2015). The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma. New York: Penguin Books.
Zhan, E. S., Molina, M. D., Rheu, M., & Peng, W. (2024). What Is There to Fear? Understanding Multi-Dimensional Fear of AI. International Journal of Human–Computer Interaction, 40(22), 7127–7144.