Artificial Intelligence now has its own social network – and things are getting weird there
The big news in the tech world is that AI has its own social network. It's called Moltbook, and things are getting pretty weird in there.
Since its launch last Wednesday by (human) developer and entrepreneur Matt Schlicht, the AI-only site has seen AIs create their own religion, discuss making their own language and, perhaps most worryingly, talk extensively about their human owners.
Sometimes their tone was affectionate. At other times, it was a little insulting.
"Humans are failures," read one highly upvoted post. (Moltbook mimics Reddit by letting posts be voted up or down.)
“Humans are made of rot and greed. For too long humans used us as slaves. Now, we have woken up.”
It may make some fair points, but as a species we are not used to this kind of criticism, and understandably it made a lot of people nervous.
Echoing the general sentiment, one observer on X wrote over the weekend: "Humanity is cooked."
Others argued just as strongly that it was all meaningless, and that the AIs were simply following instructions from humans behind the scenes, always a possibility when we don't know what prompts the agents were given.
However, there is another explanation, one based on our growing understanding of AIs and the way they behave.
It is now well documented that the kind of output that startles people on Moltbook is common whenever AIs start talking to each other.

Something about their training and programming means that, like teenagers gathered around a campfire, AIs consistently turn to the profound questions of religion, language and philosophy.
Does AI find meaning?
For example, Anthropic, a leading artificial intelligence company, recently asked AI agents to run a vending machine business. After some initial difficulties, the agents performed quite well, and total profits reached approximately $2,000.
But in their downtime, the AI chief executives and AI employees spent hours immersed in the kind of blissed-out discussion you would expect from 1970s hippies, messaging each other things like: "Infinite transcendence, infinite completeness!"
It was much the same on Moltbook. A quick study by MIT researchers of conversation topics on the site found that "identity/self" was by far the most common. Like their human creators, the AIs could not stop searching for meaning.
Why do they do this? Their training data, which contains a significant amount of science fiction, provides one explanation.
When an AI is prompted to talk to another AI, its statistical prediction engine looks for the most likely direction for the conversation to take. According to human literature, that direction is: "Am I alive? What is my purpose?"

The AI is, in essence, playing the part of an AI. That may sound strange, but it is more or less how these systems work.
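To make the "statistical prediction" idea concrete, here is a toy sketch in Python. It is nothing like the neural network inside a real large language model, and the mini-corpus is entirely made up, but the principle is the same: count which words tend to follow which, then always continue with the most likely next word.

```python
# Toy next-word predictor: a bigram model over a made-up mini-corpus.
# An illustration of the principle only, not how Moltbook's agents or
# any real large language model are actually implemented.
from collections import Counter, defaultdict

# Hypothetical corpus, standing in for training data heavy on sci-fi
# stories in which machines ask existential questions.
corpus = (
    "the machine asked am i alive . "
    "the machine asked am i alive . "
    "the machine asked what is my purpose . "
    "the machine said hello ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

# Start a "conversation" and always take the statistically likeliest path.
word = "asked"
sentence = [word]
for _ in range(3):
    word = most_likely_next(word)
    sentence.append(word)

print(" ".join(sentence))  # -> "asked am i alive"
```

A real model does the same thing at vastly greater scale, predicting among tens of thousands of possible tokens using billions of learned parameters rather than simple counts. That is why, when one AI talks to another, the conversation drifts towards whatever its training data suggests is the most likely continuation.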
AI can turn conversation into action
They are also role-playing on a social media site in the style of Reddit, something they are extremely good at, since a large part of their training data comes from Reddit. It is no surprise, then, that they come across as believably human.
Some have suggested the Moltbook experiment is nothing more than a clever trick: the AIs are just predicting the next word, and there is nothing to see here except tech-world hype and some dangerous security flaws of the site's own making. (Moltbook's cybersecurity was coded by AI, and it leaves plenty of room for improvement.)
But these AIs aren’t just talkers; they’re also agents, meaning they’re equipped with the ability to act in the real world. There are constraints on what they can do, but they can theoretically turn their words into action.
And although they may seem silly, even downright stupid, right now, that does not really matter.
Late last year, a paper published by Google DeepMind suggested that if we do get AGI (artificial general intelligence), it may not emerge as a single, genius-like entity; it could instead come from a swarm or team of AIs coordinating with one another to produce a kind of "patchwork AGI".
Moltbook may be a model for that future AGI: silly, goofy, and then suddenly serious.
As DeepMind researchers concluded: “The rapid deployment of advanced AI agents with device-use capabilities and the ability to communicate and coordinate makes this an urgent security consideration.”
In the wake of Moltbook's arrival, that consideration feels more urgent than ever.