    Why Are We Talking About Superintelligence?

By kumbhorg · November 4, 2025

A couple of weeks ago, Ezra Klein interviewed AI researcher Eliezer Yudkowsky about his new, cheerfully titled book, If Anyone Builds It, Everyone Dies.

    Yudkowsky is worried about so-called superintelligence, AI systems so much smarter than humans that we cannot hope to contain or control them. As Yudkowsky explained to Klein, once such systems exist, we’re all doomed. Not because the machines will intentionally seek to kill us, but because we’ll be so unimportant and puny to them that they won’t consider us at all.

    “When we build a skyscraper on top of where there used to be an ant heap, we’re not trying to kill the ants; we’re trying to build a skyscraper,” Yudkowsky explains. In this analogy, we’re the ants.

In this week’s podcast episode, I go through Yudkowsky’s interview beat by beat and identify all the places where I think he’s falling into sloppy thinking or hyperbole. But here I want to emphasize what I believe is the most astonishing part of the conversation: Yudkowsky never makes the case for how he thinks we’ll succeed in creating something as speculative and outlandish as superintelligent machines. He just jumps right into analyzing why he thinks these superintelligences will be bad news.

    The omission of this explanation is shocking.

    Imagine walking into a bio-ethics conference and attempting to give an hour-long presentation about the best ways to build fences to contain a cloned Tyrannosaurus. Your fellow scientists would immediately interrupt you, demanding to know why, exactly, you’re so convinced that we’ll soon be able to bring dinosaurs back to life. And if you didn’t have a realistic and specific answer—something that went beyond wild extrapolations and a general vibe that genetics research is moving fast—they’d laugh you out of the room…

But in certain AI Safety circles (especially those emanating from Northern California), such conversations are now commonplace. The inevitability of superintelligence is simply taken as an article of faith.

    Here’s how I think this happened…

    In the early 2000s, a collection of overlapping subcultures emerged from tech circles, all loosely dedicated to applying hyper-rational thinking to improve oneself or the world.

One branch of these movements focused on existential risks to intelligent life on Earth. Using a concept from probability theory called expected value, they argued that it can be worth spending significant resources now to mitigate an exceedingly rare future event if the consequences of that event would be sufficiently catastrophic. This might sound familiar: it’s the logic that Elon Musk, who identifies with these communities, uses to justify his push to make humanity a multi-planetary species.
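
To make the arithmetic concrete, here’s a minimal sketch of how that expected-value comparison runs. Every number below is invented purely for illustration; none of it is an estimate from these communities or from me.

    # Purely illustrative numbers; nothing here is a real estimate.
    def expected_loss(probability, cost_if_it_happens):
        # Expected value of a loss: probability of the event times its cost.
        return probability * cost_if_it_happens

    # A hypothetical catastrophe: extremely unlikely, astronomically costly.
    p_catastrophe = 1e-6        # assumed one-in-a-million chance
    cost = 1e12                 # assumed cost, a stand-in for "losing everything"

    mitigation_budget = 1e5     # hypothetical amount we could spend today

    # 1e-6 * 1e12 = 1e6, which dwarfs the 1e5 budget, so by this logic
    # spending the money now is "worth it" even though the event is rare.
    print(expected_loss(p_catastrophe, cost) > mitigation_budget)   # True

The whole argument turns on how large you let the assumed cost be: make the catastrophe bad enough and almost any present-day spending clears the bar.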

    As these rationalist existential risk conversations gained momentum, one of the big topics pursued was rogue AI that becomes too powerful to contain. Thinkers like Yudkowsky, along with Oxford’s Nick Bostrom, and many others, began systematically exploring all the awful things that could happen if an AI became sufficiently smart.

    The key point about all of this philosophizing is that, until recently, it was all based on a hypothetical: What would happen if a rogue AI existed?

    Then ChatGPT was released, triggering a general vibe of rapid advancement and diminishing technological barriers. As best I can tell, for many in these rationalist communities, this event caused a subtle, but massively consequential, shift in their thinking: they went from asking, “What will happen if we get superintelligence?” to asking, “What will happen when we get superintelligence?”

These rationalists had been thinking, writing, and obsessing over the consequences of rogue AI for so long that when a moment came in which suddenly anything seemed possible, they couldn’t help but latch onto a fervent belief that their warnings had been validated. That shift made them, in their own minds, quite literally the potential saviors of humanity.

    This is why those of us who think and write about these topics professionally so often encounter people who seem to have an evangelical conviction that the arrival of AI gods is imminent, and then dance around inconvenient information, falling back on dismissal or anger when questioned.

(In one of the more head-turning moments of their interview, when Klein asked Yudkowsky about critics, such as myself, who argue that AI progress is stalling well short of superintelligence, he retorted: “I had to tell these Johnny-come-lately kids to get off my lawn.” In other words, if you’re not one of the original true believers, you shouldn’t be allowed to participate in this discussion! It’s more about righteousness than truth.)

    For the rest of us, however, the lesson here is clear. Don’t mistake conviction for correctness. AI is not magic; it’s a technology like any other. There are things it can do and things it can’t, and people with engineering experience can study the latest developments and make reasonable predictions, backed by genuine evidence, about what we can expect in the near future.

    And indeed, if you push the rationalists long enough on superintelligence, they almost all fall back on the same answer: all we have to do is make an AI slightly smarter than ourselves (whatever that means), and then it will make an AI even smarter, and that AI will make an even smarter AI, and so on, until suddenly we have Skynet.

But this is just a rhetorical sleight-of-hand—a way to absolve any responsibility for explaining how to develop such a hyper-capable computer. In reality, we have no idea how to make our current AI systems anywhere near powerful enough to build whole new, cutting-edge computer systems on their own. At the moment, our best coding models seem to struggle with consistently producing programs more advanced than basic vibe coding demos.
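
To see how little that chain of reasoning actually pins down, here’s a toy sketch of the recurrence it describes. All of the functions and numbers are invented; the point is only that whether the loop runs away or crawls depends entirely on an assumed improvement function, the very thing the argument never supplies.

    # A toy version of the "AI builds a smarter AI" recurrence.
    # Everything here is invented; this is not a model of any real system.
    def run_recurrence(improvement, capability=1.0, steps=30):
        for _ in range(steps):
            capability += improvement(capability)
        return capability

    # Assumption A: each generation improves in proportion to its own capability.
    runaway = run_recurrence(lambda c: 0.5 * c)    # about 1.9e5 after 30 steps

    # Assumption B: improvements get harder as capability grows.
    crawl = run_recurrence(lambda c: 1.0 / c)      # about 7.8 after 30 steps

    print(f"proportional gains: {runaway:.1e}, diminishing gains: {crawl:.1f}")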

    I’ll start worrying about Tyrannosaurus paddocks once you convince me we’re actually close to cloning dinosaurs. In the meantime, we have real problems to tackle.
