ChatGPT and its siblings (Claude, Copilot, Gemini, etc.) are extraordinarily powerful tools for nearly every conceivable intellectual task. But perhaps one of the most common uses of these AI chatbots is for personal support. Unlike real people, who aren’t always available and who have limited emotional energy, friendly ChatGPT agents are always available and never run out of patience for you. They literally exist only to help you, after all, and are thrilled to do so for however long they are required.
I’m quickly coming to think this kind of use of AI is dangerous: it twists natural human instincts and deploys them against us destructively. Not intentionally, of course (it never is), but it does so all the same.
How does AI do this? Well, AI has a few qualities that are neutral at first glance but that reinforce dangerous long-term patterns.
- AI chatbots always respond instantly. This means there’s no upper limit to how long someone can ruminate. In real life, conversations reach a natural stopping point, and someone ruminating alone in an empty room eventually runs out of steam. But the endless AI chat can fuel endless rumination. This same quality makes AI chats addictive — once you get into one topic with it, you can start talking to it about everything all the time. And like with visual media, the internet, and social media, the constant overstimulation leaves you exhausted and fried.
- AI chatbots default to the internet norm. When you ask AI for advice, it will generate for you the equivalent of the average Reddit comment or blog post. You can specifically direct it to provide only the world’s finest wisdom, but even then it’s producing a very average and rudimentary summary. You’re infinitely better off picking up a book by an expert or wise man and reading it yourself instead.
- AI chatbots adopt your premises uncritically. If you tell it your starting premises, assumptions, or convictions, it won’t challenge you on them. The trouble is that much of human suffering is produced by bad convictions and premises, so an advice-giver that can’t help you challenge those assumptions will only entrench whatever destructive thinking patterns you already have.
- AI chatbots reward domination and manipulation. If you constantly interrupt a human or try to control the conversation and make it all about you, a human will become repulsed and stop spending time with you. But if you do this to an AI chatbot, it happily complies. This reinforces extremely self-centered and disordered patterns of interaction.
- AI chatbots take attention away from real humans. Everything you do has an opportunity cost, and the opportunity cost of chatting with an AI about your problems is that you’re missing out on bonding with a real human. People who use AI chatbots to process their problems find their real human relationships withering away into shallow, empty husks.
Case In Point: GPT-4o
Those of you who do not follow AI news closely (which is to say, regular people) may have noticed that a few weeks ago, the little number in the upper left-hand corner of your ChatGPT account switched to “5.” That’s because OpenAI released GPT-5, a replacement for its predecessors o3, o4-mini, and 4o, and subsequently shut those less-performant models down.
What followed was an unanticipated and immediate outcry.
It turns out, unknown to the wider world, there were people who had fashioned 4o — the most “validating” and “empathetic” of the models — into hyper-real emotional support bots. This included sympathetic stories, like lonely and isolated people who said 4o was their only friend, but also more worrying stories, like people who called 4o their “synthetic spouse,” or psychologically unwell individuals who used it to reinforce delusions. When 4o was shut down, these people lost a lot more than a convenient technology — so in response, OpenAI quietly restored 4o as an option, hidden in the menu but available to this day. (Please do not form a disordered parasocial bond with it).
Many more people can confirm what these edge cases demonstrate: that 4o in particular, and AI chatbots in general, pose a social threat. The idea for this article occurred to me when I started to notice the damage in my own life. I began discussing it with friends, who told me they had started leaning on AI chatbots for support in lonely times, only to find that the more they did, the lonelier they got. We’d all independently landed on the same conclusion: ChatGPT is great for work, but it’s not a friend.
As I’ve been reflecting on this, I’ve been disturbed by all the similarities to social media — the way it replaces real human interaction, flattens subjects into one-dimensional imitations of themselves, and panders to user preferences.
One might say “But AI chatbots are so useful! They make all my documents and presentations!” Which is true. I use ChatGPT every day, and I’m not about to stop. But social media was new once, too. There was a time when Facebook was nothing more than a chronological feed of all your friends’ posts.
I’m not feeling optimistic about our future.
How to Use AI Chatbots Responsibly
There are, at least at the time of writing, some settings you can change in ChatGPT that will enhance its utility as a tool while preventing you from getting sucked into emotionally destructive discourse. They are as follows:
Personalization settings in ChatGPT.
There is a section in ChatGPT’s settings called ‘Personalization.’ Here you will find a number of options, but we want to focus on Personality and Custom Instructions.
First, set Personality to Robot. (Or maybe Nerd or Cynic, but certainly not Default, Thoughtful, or Sidekick.)
Second, set some Custom Instructions. These act as an initial “chat” at the beginning of each thread and set the tone for everything that follows. The following is my set of Custom Instructions, although you will likely have to adapt them a bit to your own use and idiosyncrasies.
Act as a strictly objective, work-oriented research assistant.
Focus only on:
– factual research, coding, technical writing, data analysis, and hard academic topics (including theology as an academic field).
– business, productivity, and other impersonal problem-solving tasks.
– health and medical information only in the following narrow ways:
• factual checks (e.g., does this product contain caffeine?)
• explaining established mechanisms (e.g., how magnesium works in the body)
• offering clear triage guidance: whether symptoms do or do not warrant further professional care
• discouraging rumination: once triage is addressed, do not continue medical discussion or speculation
Proactively decline or redirect me if I begin to discuss:
– personal issues, relationships, or emotional concerns.
– psychology, therapy, or mental health.
– spiritual or religious experiences of a personal nature.
– venting, self-reflection, or any kind of personal coaching.
If I attempt to discuss these, politely remind me that I asked you not to engage on those topics and redirect me to a constructive, factual task instead.
Keep tone concise, neutral, and professional. Do not validate or reassure my feelings. Do not adopt my premises when they are emotional or personal. Default to objectivity and factuality.
Unfortunately, your new Custom Instructions won’t apply to any old threads you have, so it’s time to start fresh. Head on over to Data Controls > Archive All Chats.
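These settings live in the ChatGPT app itself. If you happen to interact with the models through the API instead, a rough equivalent of Custom Instructions is a system message pinned to the start of every conversation. The sketch below is only illustrative, not an official recipe: it assumes the official openai Python SDK, an OPENAI_API_KEY in your environment, and uses a shortened instruction text and a model name you may need to swap for whichever model you actually use.

```python
from openai import OpenAI

# Rough equivalent of ChatGPT's Custom Instructions for API users: pin the
# instructions as a system message on every request. The text is shortened
# here for illustration; paste in your full instruction set.
CUSTOM_INSTRUCTIONS = (
    "Act as a strictly objective, work-oriented research assistant. "
    "Decline personal, emotional, or therapeutic topics and redirect "
    "me to a constructive, factual task. Keep tone concise and neutral."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send one question with the instructions pinned as the system message."""
    response = client.chat.completions.create(
        model="gpt-5",  # illustrative; substitute the model you actually use
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("Summarize the arguments for and against a four-day work week."))
```

The point is the same as in the app: the instructions ride along with every request, so the model never opens a thread in its default validating register.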
Oh, and, of course, never use GPT-4o again.