
Must read

[Image: BM Manifesto (megaproject) cover]

Testimonials

Professor Leslie Smith

(University of Stirling, Associate Editor of the Journal of Artificial General Intelligence)

“Much of what passes for AI is simply neural networks, basically the same ideas as in the 1980’s but upgraded with various bells and whistles like new activation functions, summarising levels, and feedback. Even the most sophisticated systems (like the GPT systems) lack novel ideas. Aaron’s work aims to push beyond this, to set an agenda for actual intelligent systems (so called artificial general intelligence, AGI) that considers more than pattern recognition and synthetic language construction. This is quite different from what is pursued by companies, and most computing departments. The work is important, and may underlie the next steps forward in artificial intelligence.”

Abstract

Whoever owns human-level Artificial General Intelligence (AGI) will own the global means of production for all goods and services. Superintelligent AGI has been conservatively estimated to have a net present value of $15 quadrillion. Accordingly, the major equity-funded/profit-motivated AI labs (and their associated sovereign states), being aggressively competitive by nature, are engaged in an AGI arms race, each in pursuit of their own short-term self-interest, seemingly oblivious to the long-term best interest of the human species as a whole.

Rather than engage in a race, over (realistically) the next 20-30 years, towards an AGI future that, due to competitive race dynamics and the trapdoor nature of superintelligence, is likely to be at best hugely sub-optimal for all mankind for all eternity, and at worst catastrophic, we propose spending ~50-100 years doing it properly, in the best interest of all mankind (less, if possible, provided quality is not compromised; more, if safety demands it). The aim is an AGI endgame that is as close as possible to maximally-beneficent for all mankind, while the additional breathing space is used to mitigate, to the maximum extent possible, the inevitable pain of such a profound societal transition.

Our overall approach is to try to imagine the ideal AGI endgame (from the perspective of the human species as a whole), and to work backwards from there in order to make it (or something close to it) actually happen. This is largely equivalent to imagining the ideal superintelligent AGI (effectively the Gold Standard AGI), and then working backwards to actually build it. To this end, we seek to design, develop, and deploy a provably maximally-aligned maximally-superintelligent AGI (called BigMother/BigMom) that is ultimately owned by all mankind (as a global public good), and whose operation benefits all mankind, without favouring any subset thereof (such as the citizens of any particular country or countries, or the shareholders of any particular company or companies).

In The BigMother Manifesto: A Roadmap to Provably Maximally-Aligned Maximally-Superintelligent AGI, we will describe the BigMother cognitive architecture and associated BigMother megaproject in detail. Together, these define an AGI research agenda for the next ~50-100 years.


Keywords:  Artificial General Intelligence, superintelligence, megaproject, cognitive architecture, superalignment, Turner's Three Laws, knowledge representation, induction, deduction, abduction, problem-solving, program synthesis, AGI learning, AGI planning, symbolic-connectionist hybrid

Latest draft

[Image: BMM title page]