
BigMother.AI CIC is a non-profit AGI (Artificial General Intelligence) lab based in Cambridge (UK).
We rely entirely on philanthropic donations.
Please donate whatever you can (via the above button) to help us continue our work.
Thank you!
Testimonials
Professor Leslie Smith
(University of Stirling, Associate Editor of the Journal of Artificial General Intelligence)
“Much of what passes for AI is simply neural networks, basically the same ideas as in the 1980’s but upgraded with various bells and whistles like new activation functions, summarising levels, and feedback. Even the most sophisticated systems (like the GPT systems) lack novel ideas. Aaron’s work aims to push beyond this, to set an agenda for actual intelligent systems (so called artificial general intelligence, AGI) that considers more than pattern recognition and synthetic language construction. This is quite different from what is pursued by companies, and most computing departments. The work is important, and may underlie the next steps forward in artificial intelligence.”
Professor Steve Young CBE FRS
(University of Cambridge, Chair of Information Engineering)
“The TTQ paper is certainly a tour de force. Aaron sets out a carefully argued process for producing an AGI in as safe a manner as possible. I hope that people read it and at minimum use it as a check list of things to consider.”
TTQ - Preprint

About

BigMother.AI CIC is a non-profit AGI (Artificial General Intelligence) lab based in Cambridge (UK), focussing in particular on superintelligent AGI, superalignment, and the global governance of superintelligent AGI. Our ultimate objective is to maximise the net benefit of AGI for all humanity, without favouring any subset thereof.
Whoever owns reliable human-level AGI will own the global means of production for all goods and services; superintelligent AGI has been estimated to have a net present value of at least $15 quadrillion. Accordingly, the major equity-funded (and therefore profit-motivated) AI labs, along with their associated sovereign states, being aggressively competitive by nature, are currently engaged in an AGI arms race, each pursuing its own short-term self-interest, seemingly oblivious to the long-term best interests of the human species as a whole.
Unfortunately, due to competitive race dynamics and the trapdoor nature of superintelligence, a tribal race to AGI is most likely to be, at best, hugely sub-optimal for all humanity for all eternity, and, at worst, catastrophic.
A far more attractive alternative is to pursue AGI collectively in order to achieve an AGI Endgame that is (as close as possible to) maximally-beneficent for all humanity (i.e. to do it properly), irrespective of how long it may take.
The BigMother approach is to try to imagine the ideal AGI Endgame (from the perspective of the human species as a whole), and to work backwards (top-down, breadth-first) from there in order to make it (or something close to it) actually happen. This is largely equivalent to imagining the ideal (or "Gold-Standard") superintelligent AGI — maximally-aligned and maximally-validated — and then working backwards to actually build it.
Accordingly, we seek to design, develop, and deploy a Gold-Standard (maximally-aligned and maximally-validated) AGI called BigMother that is ultimately owned by all humanity (e.g. via the UN) as a global public good, and whose operation benefits all humanity, without favouring any subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).
Our paper "TTQ: An Implementation-Neutral Solution to the Outer AGI Superalignment Problem" is step 1.
