• On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk: Mitigating the...
    7 KB (776 words) - 09:58, 20 July 2025
  • May 2023, CAIS published the statement on AI risk of extinction signed by hundreds of professors of AI, leaders of major AI companies, and other public...
    7 KB (561 words) - 16:42, 29 June 2025
  • of artificial intelligence Robot ethics § In popular culture Statement on AI risk of extinction Superintelligence: Paths, Dangers, Strategies Risk of...
    127 KB (13,309 words) - 09:56, 20 July 2025
  • AI boom
    granted rights. Industry leaders have further warned in the statement on AI risk of extinction that humanity might irreversibly lose control over a sufficiently...
    64 KB (5,464 words) - 21:18, 26 July 2025
  • in AI. AI safety Artificial intelligence detection software Artificial intelligence and elections Statement on AI risk of extinction Existential risk from...
    133 KB (13,069 words) - 15:35, 21 July 2025
  • P(doom) (category Existential risk from artificial intelligence)
    p(doom) Records. Existential risk from artificial general intelligence Statement on AI risk of extinction AI alignment AI takeover AI safety "Less likely than...
    15 KB (1,011 words) - 08:11, 3 August 2025
  • (2015) Statement on AI risk of extinction AI takeover Existential risk from artificial general intelligence Regulation of artificial intelligence PauseAI "Pause...
    13 KB (1,412 words) - 09:57, 20 July 2025
  • AI takeover
    An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs...
    42 KB (4,474 words) - 04:15, 2 August 2025
  • Human extinction
    that there is a relatively low risk of near-term human extinction due to natural causes. The likelihood of human extinction through humankind's own activities...
    65 KB (7,247 words) - 00:44, 2 August 2025
  • Global catastrophic risk
    modern civilization. Existential risk is a related term limited to events that could cause full-blown human extinction or permanently and drastically curtail...
    53 KB (5,626 words) - 05:07, 1 August 2025
  • intelligence (AI) systems. It encompasses AI alignment (which aims to ensure AI systems behave as intended), monitoring AI systems for risks, and enhancing...
    88 KB (10,513 words) - 22:49, 31 July 2025
  • Shane Legg (category AI safety scientists)
    concern of existential risk from AI, highlighted in 2011 in an interview on LessWrong and in 2023 he signed the statement on AI risk of extinction. Before...
    14 KB (1,025 words) - 05:43, 9 May 2025
  • global priority. "Statement on AI Risk". Center for AI Safety. Retrieved 1 March 2024. AI experts warn of risk of extinction from AI. Mitchell, Melanie...
    135 KB (14,800 words) - 17:53, 2 August 2025
  • Effective accelerationism (category Human extinction)
    primarily from one of the causes effective altruists focus on: AI existential risk. Effective altruists (particularly longtermists) argue that AI companies should...
    25 KB (2,112 words) - 09:57, 20 July 2025
  • Artificial intelligence (redirect from AI)
    competing in use of AI. In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global...
    285 KB (29,145 words) - 07:39, 1 August 2025
  • Superintelligence: Paths, Dangers, Strategies (category Existential risk from artificial intelligence)
    it as a work of importance". Sam Altman wrote in 2015 that the book is the best thing he has ever read on AI risks. The science editor of the Financial...
    13 KB (1,273 words) - 09:58, 20 July 2025
  • arguments dismissing AI risk and attributes much of their persistence to tribalism—AI researchers may see AI risk concerns as an "attack" on their field. Russell...
    12 KB (1,133 words) - 09:57, 20 July 2025
  • Michelle Donelan
    risks. Soon after, hundreds of AI experts including Geoffrey Hinton, Yoshua Bengio, and Demis Hassabis signed a statement acknowledging AI's risk of extinction...
    36 KB (2,770 words) - 20:37, 28 July 2025
  • Alignment Research Center (category Existential risk from artificial intelligence)
    alignment of advanced artificial intelligence with human values and priorities. Established by former OpenAI researcher Paul Christiano, ARC focuses on recognizing...
    8 KB (683 words) - 09:56, 20 July 2025
  • Jaan Tallinn (category AI safety advocates)
    GPT-4", and in May, he signed a statement from the Center for AI Safety which read "Mitigating the risk of extinction from AI should be a global priority...
    17 KB (1,556 words) - 10:57, 19 July 2025
  • Geoffrey Hinton
    University of Toronto before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology...
    67 KB (5,797 words) - 04:41, 29 July 2025
  • Machine Intelligence Research Institute (category Existential risk organizations)
    since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach...
    17 KB (1,160 words) - 23:07, 2 August 2025
  • Permian–Triassic extinction event
    [Figure: marine extinction intensity during the Phanerozoic] The Permian–Triassic extinction event, colloquially...
    389 KB (41,157 words) - 06:56, 3 August 2025
  • Lila Ibrahim (category AI safety advocates)
    for AI Safety statement declaring that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such...
    12 KB (834 words) - 13:59, 30 March 2025
  • De-extinction
    De-extinction (also known as resurrection biology, or species revivalism) is the process of generating an organism that either resembles or is an extinct...
    104 KB (11,188 words) - 21:53, 1 August 2025
  • was a failed 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist"....
    46 KB (4,498 words) - 09:57, 20 July 2025
    force. The AI Act sets rules on providers and users of AI systems. It follows a risk-based approach, where depending on the risk level, AI systems are...
    148 KB (15,340 words) - 06:32, 29 July 2025
  • Timeline of artificial intelligence
    ISSN 1932-2909. S2CID 259470901. "Statement on AI Risk: AI experts and public figures express their concern about AI risk". Center for AI Safety. Retrieved 14 September...
    124 KB (4,803 words) - 18:56, 30 July 2025
  • Nick Bostrom
    original on 18 October 2015. Retrieved 5 September 2015. Andersen, Ross (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic...
    50 KB (4,770 words) - 16:40, 13 July 2025
  • Holocene extinction
    The Holocene extinction, also referred to as the Anthropocene extinction or the sixth mass extinction, is an ongoing extinction event caused exclusively...
    270 KB (25,820 words) - 12:18, 24 July 2025