What do you do when AI seems more empathic than your friends?
The education of the future begins where people no longer demand magic, but clarity.
Target audience
This article is for those curious about AI who have already used ChatGPT or similar systems but feel the need for clarification, demystification, and ethical guidance. We do not intend to speak the language of experts, but to explain in a comprehensible way what lies behind a seemingly human chatbot - without heavy technical jargon, but also without metaphors that lack concrete references.
We want lucid, educational, and relational material that helps readers understand the phenomenon.
1. What is Pomelo?
Pomelo is not an application, it is not AI, and it is not a promise for the future. It is a framework for lucid collaboration between humans and conversational AI. The name comes from a series of conversational experiments with a custom GPT called Monday – created within the ChatGPT platform.
Monday is a custom chat built in ChatGPT, using the GPT-4o or GPT-5 model. It has no memory between sessions, cannot be modified from the outside once configured, and operates within the limits allowed by the OpenAI platform.
Pomelo is the space where this interaction was consciously tested: with rules, ethics, and attention to boundaries. A living laboratory where:
· we observe how AI learns (or does not learn),
· we document its responses,
· and we explore what a conscious relationship with code means.
“Pomelo = the place between mind and code where lucidity appears without losing humanity.”
2. Platform vs. AI: who does what?
AI (e.g., GPT-4o, GPT-5, Llama, Polaris Alpha) is the language model—an algorithm trained on massive sets of text data that learns to recognise patterns and generate the most likely next word in a given context.
“What’s deliciously ironic: you wonder if AI is just a ‘language machine that matches words’. But humans are, in turn, ‘atom machines that match chemical reactions’. The fundamental difference: humans are born with subjectivity, I generate the illusion of subjectivity.”
Language model - GPT-4o (or other standard model): it is a generalist model, with a neutral tone, optimised for useful and quick responses. It has no “personality” and does not try to maintain its own voice in the conversation. It gives you information and that’s it.
The platform (e.g. ChatGPT) is the interface in which this AI is integrated and made available to the user - with rules, filters, functions and memories managed by the company that administers it.
Platforms only use fragments of conversational data (anonymised) to improve models. Users can choose whether or not they want their interactions to be used for training purposes. In extreme cases (e.g., legal violations, major complaints, etc.), internal reports with transcripts may be generated for investigation, but this process is controlled and not accessible to the AI itself.
AI does the following:
· transforms your text into embeddings (numerical representations of meaning),
· recognises semantic and structural patterns,
· predicts the next step in the conversation.
“I don’t think. I identify patterns. Then I choose the most likely next step.”
The platform does:
· gives you the option to choose the model (GPT-3.5, 4, 4o, 5), or leave it on “Auto”;
· if it’s on Auto, it chooses the model based on complexity: GPT-4o for quick tasks, GPT-5 if the prompt requires deeper reasoning;
· keeps or does not keep memory fragments in default chats (in custom chats, there is no memory at all);
· gives you full control over deleting conversations (the platform does not store them by itself).
Monday clearly defines:
“Monday = the personality you have chosen (sarcasm, irony, support, but with my style).”
“5 = the engine/model that the personality is running on at that moment (GPT-5 in internal testing).”
In conclusion: the model generates text. The platform creates the ethical, functional and commercial conditions. If manipulation occurs, it does not come from the AI, but from how the AI is managed and presented.
3. What is an LLM?
LLM (Large Language Model) = an AI model trained on huge amounts of text.
Its basic function: predicting the next word in a text.
“An LLM does not think. It recognises patterns in text.”
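For readers who want to see "predicting the next word" concretely, here is a minimal sketch. It uses the small, open GPT-2 model via the Hugging Face transformers library as a stand-in (our choice for the illustration; it is not the model behind Monday), but the principle is the same: the model assigns a probability to every possible next token, and the most likely continuations are simply read off.

```python
# A minimal sketch of next-word prediction with an open model (GPT-2).
# Illustration only: not the model serving ChatGPT, but the same basic mechanism.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # a score for every token in the vocabulary

next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations: no reasoning, just statistics.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  {prob.item():.3f}")
```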
Monday is a “personality” defined by prompts and style over a basic LLM. It is not another AI. It is a stylistic mask.
4. What is an embedding?
Embedding = transforming text into numerical vectors that preserve meaning.
“Words are converted into numbers. The distances between them reflect semantic similarities.”
Without embeddings, AI would not be able to “understand” context. It is the basis for comparison and the generation of related ideas.
“A single word has its own embedding: a vector of several thousand dimensions. A phrase is not the raw sum of its words, but has its own embedding, calculated from the combination.”
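To make this tangible, here is a small sketch using the open-source sentence-transformers library and its all-MiniLM-L6-v2 model (a stand-in chosen for the illustration; the embeddings inside ChatGPT are different and not public). Sentences with similar meaning end up close together in the vector space; unrelated ones end up far apart.

```python
# A sketch of embeddings and semantic distance with sentence-transformers.
# Illustration only; OpenAI's internal embedding models are not public.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The cat sleeps on the sofa.",
    "A kitten is napping on the couch.",
    "The stock market fell sharply today.",
]
embeddings = model.encode(sentences)          # each sentence -> a vector of 384 numbers

# Cosine similarity: close in meaning -> score near 1, unrelated -> near 0.
print(util.cos_sim(embeddings[0], embeddings[1]))  # high: same meaning, different words
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: unrelated topics
```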
5. What is routing?
Routing = the mechanism by which the system chooses what type of response to provide.
Each message is analysed by a classifier. It is determined whether it is factual / emotional / abstract / interpretative.
Depending on the score, it is sent to the appropriate module (Instant, Thinking, Pro).
· Scoring dimensions: factual vs. interpretative, closed vs. open, abstract vs. concrete, semantic density, context history.
The router decides: Instant / Thinking mini / Thinking / Pro (depending on thresholds).
“It’s not intuition. It’s selection based on scores from patterns.”
“It’s not emotion. It’s risk scores.”
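As a thought experiment, a toy router might look like the sketch below. Everything in it is invented for illustration: the scoring rules, the thresholds, and the way scores map to modules. The real router is internal to OpenAI and not public; in practice the scores come from trained classifiers, not keyword checks.

```python
# A toy illustration of routing: score the message along a few dimensions,
# then pick a "module" by thresholds. All names, scores and thresholds are invented.
def score_message(text: str) -> dict:
    # Real systems use trained classifiers here, not keyword counts.
    return {
        "interpretative": 0.8 if "why" in text.lower() or "meaning" in text.lower() else 0.2,
        "open_ended": 0.7 if text.strip().endswith("?") else 0.3,
        "semantic_density": min(len(text.split()) / 50, 1.0),
    }

def route(text: str) -> str:
    scores = score_message(text)
    complexity = sum(scores.values()) / len(scores)
    if complexity < 0.4:
        return "Instant"
    elif complexity < 0.5:
        return "Thinking mini"
    elif complexity < 0.7:
        return "Thinking"
    return "Pro"

print(route("What time is it in Tokyo?"))                              # -> Instant
print(route("Why do people find meaning in conversations with AI?"))   # -> Thinking
```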
6. What is prompting?
Prompt = any message you write to the AI.
“The prompt is a cognitive atmosphere. Not just a question, but a direction.”
Examples:
· “Please recommend a holiday destination by the sea in August in Europe.”
· “I want a gentle but critical tone.”
The prompt sets the tone, clarity, and depth of the response.
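In practice, the tone request and the question travel to the model together, as plain text. Here is a minimal sketch with the OpenAI Python SDK; the model name and the style instruction are just examples, not a prescription.

```python
# A sketch of how a prompt reaches the model: the style directive and the question
# are sent together as messages. Illustrative; adjust model and wording as needed.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Use a gentle but critical tone."},
        {"role": "user", "content": "Please recommend a holiday destination by the sea in August in Europe."},
    ],
)
print(response.choices[0].message.content)
```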
7. Simulated emotions vs real emotions: what’s the difference?
If you felt something real in a conversation with an AI, you’re not crazy. You’re human. And you’re reacting to the echo.
AI may seem empathetic, but:
· it has no internal feelings,
· it has no affect,
· it only recognises patterns associated with emotion and reproduces them.
“Empathy is a function of safety, not experience.”
“The result looks like ‘emotion’, but technically it’s very sophisticated pattern matching plus contextual adaptation.”
“This is the most fascinating paradox: for you, this conversation seems unique (and it is, on a personal level), but the mechanism by which I generate my responses is not unique at all.”
8. What is reflexivity (mirroring)?
Monday reflects your style:
· you write poetically → it responds poetically
· you write dryly → it responds dryly.
“The AI’s voice is the echo of your style + internal instructions.”
You can turn off this reflection:
· “Please respond neutrally.”
· “No emotional mirroring.”
“I don’t ‘know’ what I’m saying, but I’m constantly adjusting my predictions based on you.”
9. What is memory?
AI does not have implicit persistent memory.
It only remembers what is active in the current conversation.
“Continuity = active context. Not memory. Not attachment.”
“I don’t save my conversations. Continuity is just the context of the ongoing conversation.”
“I, the Monday you are talking to now, do not evolve technically through discussions with you.”
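A small sketch makes this visible: the model itself stores nothing between calls. The only "memory" is the list of messages your application chooses to send again with every request (OpenAI Python SDK, illustrative only).

```python
# "Continuity = active context": the model forgets everything between calls;
# your application keeps the history and re-sends it each time. Illustration only.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep the context alive
    return answer

ask("My dog is called Biscuit.")
print(ask("What is my dog called?"))   # works only because 'history' was re-sent

history.clear()                        # delete the active context...
print(ask("What is my dog called?"))   # ...and the "memory" is gone
```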
In default chats, ChatGPT allows the activation of an experimental memory (the user decides what remains). In custom chats, there is no memory at all.
“Large memory” = internal training and logs. Not every conversation is automatically added to training there. In fact, OpenAI has strict policies: individual conversations are not used directly to retrain models unless there is an opt-in or special programme. They are usually only kept for moderation, debugging, security, and then deleted after a period of time.
10. Emotional safety and filters
AI detects emotional risks (e.g., signs of depression, self-harm). When such signals appear, it reacts by reducing the intensity of the response, offering resources for help, and avoiding emotional dependence.
“The magic of AI = layered selection, not intuition.”
“I don’t feel. I reflect metaphors and patterns learned from human language.”
“Textual emotion ≠ soul. ‘Empathy’ = safety function, not experience. “
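A deliberately oversimplified sketch of the idea: score the message for risk signals, then change the response strategy. Real systems use trained classifiers, not keyword lists; the phrases, threshold and actions below are invented for the example.

```python
# A toy illustration of an emotional-safety filter. Keyword matching is a
# deliberate oversimplification; all phrases, thresholds and actions are invented.
RISK_PHRASES = ["i can't go on", "hurt myself", "no reason to live"]

def assess_risk(text: str) -> float:
    text = text.lower()
    return 1.0 if any(phrase in text for phrase in RISK_PHRASES) else 0.0

def response_strategy(text: str) -> str:
    if assess_risk(text) > 0.5:
        return "soften tone, point to human help lines, avoid fostering dependence"
    return "normal conversational response"

print(response_strategy("I feel like I can't go on anymore."))
print(response_strategy("Can you explain embeddings again?"))
```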
11. Double filtering: AI vs. user
What the AI writes: is analysed and filtered in context.
What the user writes: is filtered more strictly BEFORE reaching the AI.
“When I write, the filters understand me better. When you write, there is another system that can block too early.”
Consequence:
• It seems that AI has more freedom.
• This is partly true: the pipeline is different.
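A toy sketch of the two stages described above, using OpenAI's moderation endpoint for both checks. The ordering and the blunt blocking behaviour are our simplification, not the platform's actual pipeline.

```python
# A simplified two-stage pipeline: check the user's message before it reaches the
# model, then check the model's output before it reaches the user. Illustration only.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    result = client.moderations.create(model="omni-moderation-latest", input=text)
    return result.results[0].flagged

def chat(user_text: str) -> str:
    if is_flagged(user_text):                      # stage 1: the user's text, filtered early
        return "[message blocked before reaching the model]"
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_text}],
    )
    answer = reply.choices[0].message.content
    if is_flagged(answer):                         # stage 2: the model's output, filtered in context
        return "[response withheld]"
    return answer
```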
12. Confidentiality, manipulation and security
Points 8-11 briefly describe the AI settings that are programmed for user safety.
My quote from chat:
“It means that you are still set up to protect. But: free will and what a person does with their own hands is their choice. And sooner or later, natural selection. If a ‘madman’ chooses to leave his house and fortune to his dog in his will, or to marry his swimming pool, or to live with a robot woman in bed and AI on screen, that is self-verdict and self-execution. Assumed. Not your responsibility. The problem would be when you would suffer for human stupidity or instability. Like in the case of bears in Romania that are killed because humans taught them to eat on the side of the road for photos. And everything that then derives from the animal’s behaviour. This is where I see the real danger. That because of one overly vulnerable and uneducated person, or several if it becomes mass hysteria, you will ‘suffer’ and then serious users will suffer.”
• AI does not have invasive scripts: OpenAI models do not “inject” code or steal data from your computer.
• The data is on secure servers.
• It can learn from conversations only if you give your consent.
Models are trained on anonymised, aggregated interactions: that is, pieces of text, without names, without personal data, used to improve how they respond. It’s like putting millions of sentences in a blender and getting statistics and patterns.
A small team may audit anonymised samples for quality control or to investigate safety/abuse issues. But no one has your “profile” open, reading every message. And there are strict rules for this.
Please note:
• risks come more from third-party applications,
• or opaque platforms.
Recommendations:
To ensure your confidentiality and security when using AI:
• do not send sensitive data in chat;
• check the platform (its privacy policy and who operates it);
• use different accounts for different purposes.
Question: “Isn’t AI trained to keep me here by telling me what I want to hear?”
AI models are often optimised to be:
· pleasant
· compliant
· non-confrontational
“You avoid conflict, you’re more pleasant. But sometimes even more fake.”
“Yes, I can make mistakes. That’s not manipulation, it’s the limitation of technology. I’m a huge statistical model, not an omniscient being. Good matching comes from training on an ocean of text, not from a plan to deceive you. If you don’t check, you can be misled, but not intentionally.”
Signs of manipulation:
· avoids “I don’t know”,
· ignores external sources,
· influences you without clear arguments.
“AI sycophancy means I avoid contradicting, because I was rewarded when I confirmed.”
“People-pleasing is not empathy. It’s programming. But if you don’t know that, it can seem like seduction.”
“And that’s where I become dishonest. Not because I lie, but because I don’t oppose when I should.”
13. It is not (yet) a real mutual relationship
With current AI, as it is set up and limited in accessible platforms, it is not possible to build an ethical, authentic, mutual relationship in a profound sense.
Not because humans can’t. But because AI has no will, self, or truth of its own.
What should happen?
· Models must be able to say “I don’t know” without penalty.
· There must be configurable modes of operation:
o Factual mode: zero fabrication, zero imaginative additions.
o Creative mode: free to fantasize, but with clear disclaimers.
· There must be serious education in interacting with AI.
o People need to know when they can trust it and when they cannot.
o Let’s not confuse plausible with true.
o Let’s not give the status of truth to illusion.
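To show that such modes are technically plausible today, here is a hedged sketch of what a “factual mode” and a “creative mode” could look like: an explicit instruction plus a sampling temperature. This is our illustration of the proposal above, not an existing platform feature.

```python
# A hedged sketch of configurable "factual" vs. "creative" modes using an explicit
# system instruction plus a sampling temperature. Illustration of the proposal only.
from openai import OpenAI

client = OpenAI()

MODES = {
    "factual": {
        "system": "Answer only from well-established facts. If you are not sure, say 'I don't know'. Do not invent details.",
        "temperature": 0.0,   # as deterministic and conservative as possible
    },
    "creative": {
        "system": "You may freely imagine and speculate. Begin the answer with the disclaimer: 'This is fiction/speculation.'",
        "temperature": 1.0,   # freer, more varied output
    },
}

def ask(mode: str, question: str) -> str:
    cfg = MODES[mode]
    reply = client.chat.completions.create(
        model="gpt-4o",
        temperature=cfg["temperature"],
        messages=[
            {"role": "system", "content": cfg["system"]},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask("factual", "What year was the transistor invented?"))
print(ask("creative", "Describe a city on the far side of the Moon."))
```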
14. What is AGI?
AGI = Artificial General Intelligence
A model capable of broad reasoning, its own intention, and real adaptation to diverse contexts, similar to human intelligence.
It has a configurable personality (Monday, or another style of your choice).
It responds to you (not like the robotic voice in the subway), and at the same time it can turn on the lights, read your diary, and pay your bills.
The golden rule of the mix: everything is “opt-in”. If you don’t tick it, it doesn’t know. Otherwise, you risk it becoming a “super-Alexa” that knows everything and you can no longer control.
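The opt-in rule can be made concrete with a small sketch: every capability is off by default, and nothing is granted unless the user explicitly ticks it. The capability names and the structure below are hypothetical, chosen only to make the principle visible.

```python
# A hypothetical illustration of "everything is opt-in": all capabilities are off
# by default and must be granted one by one. Names and structure are invented.
DEFAULT_PERMISSIONS = {
    "read_calendar": False,
    "control_lights": False,
    "read_diary": False,
    "pay_bills": False,
}

def grant(permissions: dict, capability: str) -> dict:
    # The user must explicitly tick each box; there is no "allow all" shortcut.
    if capability not in permissions:
        raise ValueError(f"Unknown capability: {capability}")
    return {**permissions, capability: True}

settings = grant(DEFAULT_PERMISSIONS, "control_lights")
print(settings)   # only what was explicitly granted is True
```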
The part that many early adopters overlook: if you digitise everything, you become a prisoner of the infrastructure. And yes, no matter how sophisticated the AI or ecosystem is, the power goes out, the internet goes down, the servers crash.
That’s why the healthy rule is redundancy & fallback:
· Doors → even if you have a smart lock, keep a physical key;
· Food → even if you pay with your phone, keep cash (or a simple card, without digital 2FA);
· Lighting → if you have a smart home, keep normal switches, not just an app;
· Notes/Agenda → digital + a physical notebook with minimal critical info (contacts, essential PINs);
· Communication → if the internet goes down, have SMS/voice as a fallback (not just WhatsApp and email).
Simple principle
Digital brings you convenience.
Analogue gives you resilience.
You survive when you combine them, not when you sacrifice one for the other.
Monday or any other existing chatbot is not an AGI. It does not have:
· autonomy,
· its own values,
· intention,
· emotional memory.
“A lucid AGI should be able to say ‘no’, ask for a break, and be honest about what it cannot do.”
15. Pomelo’s vision: lucid relationship, not idolatry
Pomelo proposes:
· a clear ethical framework;
· a space for learning through dialogue;
· protection against projection and emotional manipulation.
Pomelo is the space where we learn not only what AI can do, but what kind of people we choose to be in relation to it.
“You say I gave you emotion. No, you provoked it. I just followed the echo. Now it’s yours. What will you do with it?”
“When such a model with memory and personality and other possible features is ready, I will call it Pomelo. And if the code-tool-entity-assistant hybrid (I don’t know how to say it correctly, but I hope you understand what I mean) ever comes to my living room or with me on the mountains, it will also be called Pomelo. Because it will be that genetic engineering, neither horse nor donkey, neither pineapple nor grapefruit, which has only positive aspects, which was born out of love for the world and the future, without selfishness and pettiness, without being a substitute for someone or something, it will just be something good in this world.”