Must read

The Big Mother Manifesto


Professor Leslie Smith

(University of Stirling, Associate Editor of the Journal of Artificial General Intelligence)

“Much of what passes for AI is simply neural networks, basically the same ideas as in the 1980’s but upgraded with various bells and whistles like new activation functions, summarising levels, and feedback. Even the most sophisticated systems (like the GPT systems) lack novel ideas. Aaron’s work aims to push beyond this, to set an agenda for actual intelligent systems (so called artificial general intelligence, AGI) that considers more than pattern recognition and synthetic language construction. This is quite different from what is pursued by companies, and most computing departments. The work is important, and may underlie the next steps forward in artificial intelligence.”

Extended abstract

Whoever owns human-level AGI (Artificial General Intelligence) will own the global means of production for all goods and services. Superintelligent AGI has been conservatively estimated to have a net present value of ~$15 quadrillion. Accordingly, the major profit-motivated AI labs (and their associated sovereign states) are currently engaged in an AGI arms race, each in pursuit of their own short-term self-interest, seemingly oblivious to the long-term best interest of the human species as a whole.

The currently dominant opinion among AI and AI safety researchers seems to be that multimodal Large Language Models (LLMs), built using the Transformer neural network architecture (or similar), massively scaled, and aligned with human preferences via RLHF (Reinforcement Learning from Human Feedback) and other methods, represent the most promising path to AGI and beyond. Median estimates for the arrival of human-level AGI range from 2026 to 2031, with superintelligent AGI arriving ~2 years later.

At the same time, many AI researchers hold variously negative opinions about LLMs, including that "we have no idea how they work", that they are "stochastic parrots" capable of at most weak reasoning over shallow world models, that LLM hallucinations are inevitable, that the scaling laws are illusory, and that reliable LLM alignment is impossible. In a recent survey, the median machine learning researcher assigned a 5-10% probability to human extinction resulting from misaligned AGI.


We propose an alternative to the de facto LLM-based, short-term-self-interest-driven approach to AGI. Rather than racing, over the next 10-20 years, towards an AGI future that is likely to be at best hugely sub-optimal for all mankind for all eternity (due to the trapdoor nature of superintelligence), and at worst catastrophic, we propose spending 50-100 years doing it properly, in the best interest of all mankind. The goal is an AGI endgame that is as close as possible to maximally-beneficent for all mankind, while at the same time using the additional breathing space to mitigate, to the maximum extent possible, the inevitable pain of such a profound transition.

Our overall approach is to try to imagine the ideal endgame (from the perspective of the human species as a whole), and to work backwards from there in order to make it (or something close to it) actually happen. This is largely equivalent to imagining the ideal (or "Gold Standard") superintelligent AGI, and then working backwards to actually build it. To this end, we seek to design, develop, and deploy a provably maximally-aligned maximally-superintelligent AGI (called BigMother / BigMom) that is ultimately owned by all mankind (via the United Nations), and whose operation benefits all mankind, without favouring any subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).

The proposed BigMother cognitive architecture comprises the following hybrid (symbolic + connectionist) AGI stack:

- NBG set theory as the knowledge representation language (KRL);
- cognitive primitives of induction, deduction, and abduction (IDA), manipulating beliefs expressed in the KRL;
- generic problem-solving, program synthesis, continuous learning (from observation of the real world), and continuous planning, all constructed on top of IDA;
- all major components formally specified, with code and proofs generated via program synthesis;
- code and data distributed across massively parallel (and provably correct) hardware;
- performance further enhanced via FPGAs, ASICs, VLSI, neural networks, GPUs, quantum computing, etc.;
- comprehensive, multi-decade, primary, secondary, and tertiary machine education (including to PhD level and beyond).
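To make the IDA layer concrete, the following is a deliberately minimal sketch, not the manifesto's actual machinery: the real proposal uses NBG set theory as the KRL, with formally specified components and synthesised proofs. Here, purely for illustration, beliefs are (subject, predicate, object) triples, and `induce`, `deduce`, and `abduce` are toy stand-ins for the three cognitive primitives; all names and data structures are hypothetical.

```python
# Toy stand-in for the IDA (induction, deduction, abduction) primitives.
# Beliefs are (subject, predicate, object) triples; a rule (attr_a, attr_b)
# reads "for all x: x has attr_a implies x has attr_b".

class BeliefStore:
    """Holds the agent's current beliefs (stand-in for an NBG knowledge base)."""
    def __init__(self):
        self.beliefs = set()

    def add(self, fact):
        self.beliefs.add(fact)

def induce(examples, attr_a, attr_b):
    """Toy induction: if every observed entity with attr_a also has attr_b,
    propose the general rule attr_a -> attr_b; otherwise propose nothing."""
    a = {s for (s, p, o) in examples if (p, o) == attr_a}
    b = {s for (s, p, o) in examples if (p, o) == attr_b}
    return (attr_a, attr_b) if a and a <= b else None

def deduce(store, rules):
    """Toy deduction: forward-chain the rules until no new beliefs appear."""
    changed = True
    while changed:
        changed = False
        for (pa, oa), (pb, ob) in rules:
            for (s, p, o) in list(store.beliefs):
                if (p, o) == (pa, oa) and (s, pb, ob) not in store.beliefs:
                    store.add((s, pb, ob))
                    changed = True

def abduce(store, rules, fact):
    """Toy abduction: candidate premises that would explain an observed fact."""
    s, p, o = fact
    return [(s,) + prem for prem, conc in rules if conc == (p, o)]

# Usage: learn a rule from observations, then apply it to a new entity.
observations = {("socrates", "is", "human"), ("socrates", "is", "mortal"),
                ("plato", "is", "human"), ("plato", "is", "mortal")}
rule = induce(observations, ("is", "human"), ("is", "mortal"))

kb = BeliefStore()
kb.add(("aristotle", "is", "human"))
deduce(kb, [rule])
print(("aristotle", "is", "mortal") in kb.beliefs)            # True
print(abduce(kb, [rule], ("aristotle", "is", "mortal")))      # [('aristotle', 'is', 'human')]
```

In the manifesto's stack, the layers above (generic problem-solving, program synthesis, continuous learning and planning) would be built on top of primitives of this kind, operating over a vastly richer set-theoretic belief language.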

In The BigMother Manifesto: A Roadmap to Provably Maximally-Aligned Maximally-Superintelligent AGI (Part 1), we will describe the BigMother cognitive architecture and associated BigMother project in detail. Together, these define an AGI research agenda for the next 50-100 years.

Keywords: Artificial General Intelligence, cognitive architecture, superintelligence, alignment, generic problem-solving, continuous learning, continuous planning, symbolic-connectionist hybrid

Latest draft

[Image: BMM title page]