BEWARE of AI!
- marinavantara
- Jun 30
- 7 min read
If we get used to AI conveniently and quickly thinking for us, what is the future of NATURAL intelligence?

Just about six months ago, using AI was nothing more than a trendy social-media topic for me. It was my 28-year-old daughter who pushed me toward its practical application. When I admitted my complete ignorance of the subject, I could clearly hear pity in her voice. Well, it took me less than two months to learn how to communicate with AI… I worked out it's a bit like talking to someone on the autism spectrum, and I happen to have extensive experience with that.
I thought AI was great!
The first red flag came when one of my students, who has a mathematical mind, tried to use AI to write a literature speech. The result was, of course, a complete cliché-ridden mess. Literature is precisely the subject where original thought and critical analysis are valued. Though machines are evolving faster than humans, and perhaps in the near future they’ll start simulating critique too. That reminds me of the movie I, Robot (2004), which, of course, 17-year-olds haven’t watched. But they should be curious. Here is the link. Incidentally, the movie is set in 2035 — just 10 years from now!
Another student, a humanities-oriented one who actually enjoys analysing literature, shared her experience of using AI to write essays in other subjects. What upset her was that the machine writes better. I had to explain that this is just for now, and that she has all the potential to surpass AI — if she shows the patience and dedication. To myself, though, I thought: But does she actually have that patience?
The last straw was a message from one of the e-book platforms:
"It has become easier for readers on ***Platform to return to unfinished books. The service will now remind them what happened in the previous chapters. The 'Previously in the Book' feature is powered by artificial intelligence: a ***Platform’s AI analyzes the text already read and helps recall characters and key plot twists."
The message goes on to explain in detail who might benefit from this feature and why. The general idea is: "Dear users, you’re not stupid — you are just busy people."
High school students can also be called "busy people," right? That's exactly the kind of argument they tend to use when justifying their lack of effort...
And then, at the end of the message, there's a small disclaimer:
"Books available on ***Plantform will automatically have this feature enabled. If you disagree, you can remove your book from the feature's list."
It all seems fair, and the user seems to be given a choice. Then why do I increasingly feel like I'm being actively pushed toward replacing my own thinking with artificial intelligence? Pushed at every step. Open a text document — and there's Copilot, asking what kind of text I want to write. And pop-up suggestions to use AI keep appearing on my laptop screen. The latest gem, from an updated Word: a suggestion to generate a bedtime story for a child!
I don’t know about others, but it’s already starting to irritate me. Thanks very much, but at 52 I still remember clearly what I read even a year ago and, thank God, I can compose a letter to a client without relying on a robot! As a child, I had excellent — human — teachers who assigned a lot of homework, for which, of course, I hated them at the time! But I did learn to write tolerably well, and my natural intelligence developed just fine. As for automating relationships with children — I consider that blasphemous, a violation of nature. I still remember my grandfather’s bedtime stories, and he only had a seventh-grade education. He made up those stories on the spot — and that is exactly what amazed me most about him!
The talk about the wonders and benefits of AI won’t die down — it’s getting louder, more frequent, and increasingly omnipresent. I admit, I’m generally allergic to anything that is being imposed on me, whether overtly or covertly. I have a knack for noticing such subtle impositions — it’s like a red light flashing in my head. I just know that when someone argues something too passionately, there’s often a hidden manipulative agenda. I even seem to have gained a bit of a defiant dissident reputation on social media… But what can you do, when those very networks are so often used to push and shape someone’s desired opinions? "Someone" — as in those who pay the trolls or own the media...
It’s not like I never use AI, but I treat its suggestions very selectively. My choice to accept or reject AI assistance is conscious. It’s based on the principle of skill compensation, not replacement. For example, both the Russian and English covers of my book were created using ChatGPT. The same goes for illustrations for my website and book trailer (you can see it here).
I’m a writer, not an artist, and like many creative people I can’t always afford to hire a professional. Also, while my English is quite decent for everyday life and work, it still lags far behind my native and professional Russian. The chances of closing that gap aren’t great, as my priorities lie elsewhere — I plan to share my thoughts with the world, not become a polyglot. That’s why I used AI to translate my book — which was written in Russian entirely without AI help. Afterward, I reread and edited the translation twice, and eventually paid a live professional editor to be sure of the quality of the outcome. In this way, I saved on an artist and a translator. That’s what I call the compensation principle. By the way, I also translated this article from Russian to English with AI, followed by careful proofreading.
And the MAIN QUESTION that worries me is this: If we get used to AI conveniently and quickly thinking for us, what is the future of NATURAL intelligence?
I worry about the younger generation because immature minds tend to make immature choices. They’re not always able to distinguish between compensating for a skill and simply replacing it. When modern students are offered a way to make a task easier, many eagerly take the opportunity. Remember the stories of my students above. What is that? Laziness?
So-called laziness is a natural phenomenon. In reality, laziness doesn’t exist — what exists is a lack of motivation. The risk of psycho-energetic burnout (i.e., overuse of mental resources) creates a natural human tendency to conserve those resources. As a result, people seek simplification, minimisation, and optimisation of all their actions. That is laziness — and it affects all ages. Optimisation is great in adult life, both at home and in the workplace: you get less tired and earn more. One can’t argue with that. In general, humans are only active when they understand why they’re doing something. And our motivational system is incredibly complex — but that’s a whole separate (and painful) topic.
So, when a student is offered a way to simplify a school assignment, it’s instinctive to take it. That’s just a fact of life. What’s wrong with that, you ask?
The human brain is also a neural network, but unlike a machine, it doesn't build and improve itself instantly — it develops through activity. The kind of activity that REQUIRES EFFORT, the very effort our fear of burnout pushes us to avoid. That’s the dilemma.
The essence of education is to stimulate various mental processes and to intentionally create situations where students gain experience, through which their brains form new neural connections. These connections become knowledge and skills. Repeating the exercises (i.e., experiences) helps transfer those skills and knowledge from short-term memory to long-term memory. The more experience, the more neural connections form, and the smarter a person becomes. This is a simplified version of what you can read in any textbook on the physiology of higher nervous activity.
By the way, neural connections continue to form throughout life, so it truly is never too late to learn. Also, brain training is likely a key part of dementia prevention.
And AI doesn’t just make natural thinking easier — it replaces and dismisses it, depriving your children of a type of mental activity that WOULD HAVE formed neural connections, making their minds more flexible and creative. So, the likely answer to the question of what happens to natural intelligence if we trust AI too much… makes my hair stand on end.
And this isn’t fantasy. It’s already happening. Half of my 17-year-old IB students don’t understand poetry because they’re not trained to decode language — its figurative meanings and its metaphors. Some of the meaning simply DOES NOT REACH them. And I’m not talking about deep literary analysis — I mean basic understanding during regular, spontaneous reading. The first time I encountered this kind of semantic deafness, I was in shock. Now, each new case sends me deeper into despair… It turns out that some young people’s thinking skills remain underdeveloped. They are underdeveloped because they are not called upon. Because the students delegate many mental operations to machines. This isn’t news to anyone anymore. In broader society, it’s likely the majority of students, not just half. Though, to be fair, the other, more self-aware half of my students consciously invest time and effort into writing their own essays. After all, using AI in exams is still not allowed. At least, not yet.
If the widespread imposition of AI continues, what portrait of the future human emerges? I envision homo sapiens capable of understanding only the literal meaning of what they read or hear. Unreceptive to wit and irony. With poorly developed critical thinking and imagination. Rejecting creativity as unnecessary (after all, the machine draws and writes better — so why bother?) Helpless in their limitations. Easy prey for manipulators.
And what do you foresee?
Despite the prestige and popularity of STEM disciplines in today’s world, the mastermind behind so-called color revolutions, billionaire George Soros, studied none other than philosophy at university — and he sent his son, his successor, to study history. Probably nothing develops critical thinking like philosophy. It's hard to imagine using AI effectively to truly grasp that subject. Literature and history are in second and third place on the critical thinking ladder, respectively. So, parents and children would do well to seriously think about the true path to success and independence…
Some schools are already sounding the alarm. They hold assemblies and classroom discussions on responsible, selective use of AI. They’re explaining things. Unfortunately, these are still isolated cases. The looming prospect of children’s underdeveloped thinking is, of course, primarily a parental problem. Is it possible to implement parental controls, or even partially block children’s and teenagers’ access to AI? After all, access to some online content is already restricted - for a reason. What could young people really lose, and what might they gain, by continuing to use and develop their natural intelligence instead of relying on an artificial one, considering that learning to use AI takes only a couple of months even in adulthood?
AI creators are, of course, geniuses. But can the same be said of its users?