
Welcome to BigMother Labs!

Our research agenda

BigMother Labs (BML) is a non-profit AGI (Artificial General Intelligence) lab based in Cambridge (UK), focusing in particular on superintelligent AGI, superalignment, and the global governance of superintelligent AGI. Our overall objective is to maximise the net benefit of AGI for all humanity, and we seek to make the maximum possible contribution towards that objective.

 

Accordingly, we tend to focus on the really big, civilisation-level problems pertaining to AGI.

The race to AGI


Whoever owns reliable human-level AGI (and beyond) will effectively own the global means of production for all goods and services. Noted AI academic Stuart Russell has estimated the net present value of AGI to be at least $13 quadrillion. Accordingly, the major equity-funded AI labs (being ultimately accountable to equity investors) are currently engaged in a race to AGI.

The AGI Endgame

We define the AGI Endgame as the point in time at which humanity develops fully autonomous systems, able to sense and affect the physical world, that are more intelligent than any human (otherwise known as agentic superintelligent AGI). The behavioural nature of the agentic superintelligent AGI that humanity builds will determine the subsequent fate of all humanity for all eternity. The prudent assumption is that this is an irreversible trapdoor moment in human history: in other words, once through it, there can be no going back.

The potential upside

The more positive potential AGI Endgames include:

  •     worlds in which humanity's accumulated environmental damage has been reversed

  •     worlds in which humans are no longer forced to work merely in order to exist

  •     worlds of sustainable radical abundance without war, poverty, hunger, or disease

  •     worlds in which human life is abundant and genuinely emotionally rich and joyful.

 

The potential downside

Moloch was an ancient Canaanite deity to whom parents sacrificed their children in return for material favours such as wealth and power. It has since become a metaphor for sacrificing future generations for short-term gain (transgenerational debt, pollution, etc.).

Due to competitive race dynamics (which incentivise profit-motivated AI labs to cut corners on safety, alignment, and validation), conjoined with the trapdoor nature of superintelligence, a race to AGI is most likely to be, at best, hugely sub-optimal for all humanity for all eternity, potentially catastrophic, and possibly even existential — the ultimate Molochian Trap.

[Figure 1: AGI development trajectories over time; the dotted line marks the AGI Endgame]

The actual AGI Endgame

The AGI Endgame that actually transpires will be determined entirely by what we do as a species in the coming years and decades. In other words, near-utopia for all humanity for all eternity is essentially ours for the taking; all we have to do is make it happen, and not mess up.

LLMs might not be the path to reliable human-level AGI after all
 

A number of notable AI academics (including Emily Bender, François Chollet, Yann LeCun, Gary Marcus, and Richard Sutton), with whom we concur, have stated publicly that LLMs (Large Language Models) — the neural networks underpinning popular chatbots such as ChatGPT, Claude, and Gemini — are unlikely to be the path to reliable human-level AGI.

If this analysis is correct, then fundamentally different technical approaches are required.

A brief reprieve

This is actually a good thing. It means that the world is not in fact racing towards a perilous Molochian Trap quite as quickly as the news headlines might otherwise suggest. It buys us time — perhaps a decade or two. Time in which to pursue other technical solutions, yes, but, more importantly, time to get suitable AGI-related regulations and other measures in place.


Working backwards from the ideal AGI Endgame

At BML, our overall approach is to try to imagine the ideal AGI Endgame (from the perspective of the human species as a whole), and to work backwards from there (top-down, breadth-first) in order to make it (or something maximally close to it) actually happen in the real world.

 

This is equivalent to imagining the ideal "Gold-Standard" (i.e. maximally-aligned and maximally-validated) superintelligent AGI, and then working backwards to collectively build it.

AGI's North Star

Should any AI lab ever deploy an agentic superintelligent AGI that fails to meet the Gold-Standard, that failure directly translates into a less-than-ideal AGI Endgame, which then negatively impacts the lives of billions, potentially trillions, of future humans, for all eternity.

 

Why would anyone ever build such a thing? And why would anyone formulate AGI policy that allowed the building of such a thing? Accordingly, once defined, Gold-Standard AGI naturally becomes the North Star for the entire AGI field, including both developers and policymakers.

Implications for AGI governance

This has profound implications for global AGI governance, because domestic, regional, and global AGI regulation and other policies that steer the world towards Gold-Standard AGI will automatically (especially if coordinated) also steer the world towards the ideal AGI Endgame.

Theory of change

How does BML intend to make the maximum possible positive impact in respect of the maximum possible number of humans? Our basic process is as follows: identify the major problems (bigger is better), decompose these into smaller problems, solve the smaller problems, disseminate the results, repeat (until Gold-Standard AGI has been fully achieved).


Any research that BML undertakes seeks to have at least one of the following net effects:

  • moving the actual AGI development trajectory — ideally across all AI labs globally — closer to the optimal trajectory (i.e. closer to the upper line in Figure 1 above); any such net effect will positively impact the lives of billions, potentially trillions, of future humans

  • changes to global AGI regulation pertaining to (not necessarily agentic) superintelligent AGI, positively impacting the lives of billions, potentially trillions, of future humans

  • moving the dotted "AGI Endgame" line in Figure 1 further to the right (i.e. further into the future), effectively slowing down agentic superintelligent AGI development, thereby giving global society more time to adapt to such a profound change; any such net effect will positively impact the lives of billions of present-day humans (many of whom will be fearful of losing their financial security, and of the related potential for massive wealth inequality)

  • changes to global AGI regulation pertaining to (not necessarily agentic) less-than superintelligent AGI, positively impacting the lives of billions of present-day humans.

 

Major AGI R&D problems include:

  • what exactly does it mean for an AGI to be maximally-aligned?

  • what exactly does it mean for an AGI to be maximally-validated?

  • outer AGI superalignment (the most important open problem in AI) — how do we define, for an agentic superintelligent agent S, a final goal FG that correctly states what we want?

  • inner AGI superalignment — how do we build an agentic superintelligent agent S that forever pursues the final goal FG as intended (what technical approach should we use)?

  • continuous learning — how do we build a maximally-aligned and maximally-validated learning mechanism that continuously observes the physical universe via a sensor array and, from those observations, constructs a fully interpretable internal world model that accurately and comprehensively captures the structure of the physical universe?

  • continuous planning — how do we build a maximally-aligned and maximally-validated planning mechanism that, given the world model maintained by the learning mechanism (plus an effector array), continuously strives to achieve and then maintain its final goal FG? (An illustrative sense-model-plan-act sketch follows this list.)
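The continuous-learning and continuous-planning problems above can be viewed as the two halves of a classic sense-model-plan-act loop. The Python sketch below is purely illustrative and rests on our own simplifying assumptions: the names (WorldModel, Planner, agent_loop) are hypothetical placeholders rather than components defined in the TTQ paper, and the "world model" and "planner" are trivial stand-ins.

```python
# Illustrative sketch only: a minimal sense-model-plan-act loop for an
# agentic system with a fixed final goal FG. All names here are
# hypothetical placeholders, not components defined in the TTQ paper.

from dataclasses import dataclass, field
from typing import Any


@dataclass
class WorldModel:
    """A (toy) interpretable internal model built from observations."""
    facts: list = field(default_factory=list)

    def update(self, observation: Any) -> None:
        # Continuous learning: integrate each observation into an accurate,
        # comprehensive, fully interpretable model (here, just a list).
        self.facts.append(observation)


@dataclass
class Planner:
    """Chooses actions intended to achieve and then maintain the final goal FG."""
    final_goal: Any  # FG: defining this correctly is the outer-superalignment problem

    def next_action(self, model: WorldModel) -> dict:
        # Continuous planning: given the current world model, select the
        # action expected to best serve the final goal (trivial stand-in here).
        return {"pursuing": self.final_goal, "facts_known": len(model.facts)}


def agent_loop(sense, act, final_goal: Any, steps: int = 3) -> None:
    """A bounded sense-model-plan-act loop (a real agent would run indefinitely)."""
    model = WorldModel()
    planner = Planner(final_goal)
    for _ in range(steps):
        observation = sense()                # sense the physical world
        model.update(observation)            # continuous learning
        action = planner.next_action(model)  # continuous planning
        act(action)                          # affect the physical world


if __name__ == "__main__":
    # Stub sensor and effector arrays so the sketch runs end to end.
    readings = iter([{"temp": 21}, {"temp": 22}, {"temp": 23}])
    agent_loop(sense=lambda: next(readings),
               act=lambda a: print("acting:", a),
               final_goal="FG (placeholder)")
```

The point of the sketch is simply to show where the four R&D problems above sit within one loop: FG is supplied from outside (outer superalignment), the loop must pursue it faithfully forever (inner superalignment), the model update is continuous learning, and the action selection is continuous planning.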

Major AGI governance problems include:

  • what policies (e.g. coordinated domestic, regional, and global regulation) do we need for AGI, and, in particular, for agentic superintelligent AGI?

  • should there be a global requirement that advanced (and therefore "existentially-critical") AGI systems be certified against strict quality standards, proportionally stronger than those currently applied to safety-critical systems such as nuclear and aerospace? And should these existentially-critical certification requirements mandate the application of formal methods and formal verification to all hardware and software components? (A toy illustration of formal verification follows this list.)

  • what concrete steps can and should we take to resolve the global coordination problem?
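To make the formal-verification question above a little more concrete, here is a toy Lean 4 example of the kind of machine-checked guarantee that such certification might mandate. It is only an illustration under our own assumptions: the function and theorem names (limit, limit_le_cap) are placeholders we invented, not anything from the TTQ paper or from an existing standard.

```lean
-- Toy illustration only: a machine-checked proof that a trivial "limiter"
-- component can never output more than its configured cap. The names are
-- invented for this sketch; real existentially-critical certification would
-- target every hardware and software component of the system.

def limit (cap x : Nat) : Nat :=
  if x ≤ cap then x else cap

theorem limit_le_cap (cap x : Nat) : limit cap x ≤ cap := by
  unfold limit
  split <;> omega
```

Unlike testing, a proof of this kind covers every possible input, which is the qualitative jump that formal verification offers over conventional safety-critical assurance.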

BML's contributions to date

BML's first published paper, "TTQ: An Implementation-Neutral Solution to the Outer AGI Superalignment Problem" (which encapsulates about 40 years' work in a single document):

 

  • introduces the concept of Gold-Standard AGI

  • presents a complete theory of AGI (adopting a pedagogic style in order to be accessible to less technical readers such as AGI policymakers, i.e. the people who actually regulate AGI)

  • defines 3 levels of narrow AI (ANI 1-3) and 7 levels of general AI (AGI 1-7)

  • outlines a neurosymbolic continuous learning mechanism for AGI that constructs a fully interpretable internal model of the physical universe given a sequence of observations

  • defines maximal-alignment

  • defines maximal-validation

  • describes an implementation-neutral solution to the outer AGI superalignment problem

  • outlines a proposed neurosymbolic non-LLM-based cognitive architecture for AGI, together with its construction sequence (which utilises early stages of the AGI to build later stages).

BML's future contributions

Three follow-up BML papers are currently planned, focusing specifically on inner AGI superalignment, global AGI governance, and multi-decade AGI project management. 

In the longer term, funding permitting, and in addition to playing our part in the global AGI governance process, we seek to design, develop, and deploy a Gold-Standard (maximally-aligned and maximally-validated) AGI called BigMother that is ultimately owned by all humanity (e.g. via the United Nations) as a global public good, and whose operation benefits all humanity, without favouring any subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).

Growing BML

The rate of progress that BML can actually achieve depends entirely on the number of researchers that we are able to employ, and on the resources that we are able to provide them.

In the longer term, we seek to grow to a maximum of 150 employees (the Dunbar number).

As a non-profit AGI lab, we currently rely entirely on purely philanthropic donations.

Testimonials

Professor Leslie Smith

(University of Stirling, Associate Editor of the Journal of Artificial General Intelligence)

“Much of what passes for AI is simply neural networks, basically the same ideas as in the 1980’s but upgraded with various bells and whistles like new activation functions, summarising levels, and feedback. Even the most sophisticated systems (like the GPT systems) lack novel ideas. Aaron’s work aims to push beyond this, to set an agenda for actual intelligent systems (so called artificial general intelligence, AGI) that considers more than pattern recognition and synthetic language construction. This is quite different from what is pursued by companies, and most computing departments. The work is important, and may underlie the next steps forward in artificial intelligence.”

Professor Steve Young CBE FRS

(University of Cambridge, Chair of Information Engineering)

“The TTQ paper is certainly a tour de force. Aaron sets out a carefully argued process for producing an AGI in as safe a manner as possible. I hope that people read it and at minimum use it as a check list of things to consider.”

TTQ - Early preprint

(click anywhere on the image)

[TTQ title page image]

BigMother Labs is a non-profit AGI lab based in Cambridge (UK), and we currently rely entirely on philanthropic donations.

Please donate whatever you can (via the above button) to help us continue our work.


Thank you!
