theverge.com

Around the time J. Robert Oppenheimer learned that Hiroshima had been struck (alongside everyone else in the world), he began to have profound regrets about his role in the creation of that bomb. At one point, when meeting President Truman, Oppenheimer wept and expressed that regret. Truman called him a crybaby and said he never wanted to see him again. And Christopher Nolan is hoping that when Silicon Valley audiences of his film Oppenheimer (out July 21) see his interpretation of all those events, they’ll see something of themselves there too.

After a screening of Oppenheimer at the Whitby Hotel yesterday, Christopher Nolan joined a panel of scientists and Kai Bird, one of the authors of American Prometheus, the book Oppenheimer is based on, to talk about the film. The audience was filled mostly with scientists, who chuckled at jokes about the egos of physicists in the film, but there were a few reporters, including myself, there too.

We listened to all-too-brief debates on the success of nuclear deterrence, and Dr. Thom Mason, the current director of Los Alamos, talked about how many current lab employees had cameos in the film because so much of it was shot nearby. But towards the end of the conversation, the moderator, Chuck Todd of Meet the Press, asked Nolan what he hoped Silicon Valley might learn from the film. “I think what I would want them to take away is the concept of accountability,” he told Todd.

“Applied to AI? That’s a terrifying possibility. Terrifying.”

He then clarified, “When you innovate through technology, you have to make sure there is accountability.” He was referring to a wide variety of technological innovations that have been embraced by Silicon Valley, while those same companies have refused to acknowledge the harm they’ve repeatedly engendered. “The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”

He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because, as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding, programming, putting AI into use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”

While Nolan didn’t refer to any specific company, it isn’t hard to know what he’s talking about. Companies like Google, Meta, and even Netflix are heavily dependent on algorithms to acquire and maintain audiences, and often there are unforeseen and frequently heinous outcomes to that reliance. Probably the most notable, and truly awful, is Meta’s contribution to the genocide in Myanmar.

“At least it serves as a cautionary tale.”

While an apology tour is virtually guaranteed nowadays after a company’s algorithm does something terrible, the algorithms remain. Threads even just launched with an exclusively algorithmic feed. Occasionally companies might give you a tool, as Facebook did, to turn it off, but these black-box algorithms remain, with very little discussion of all the potential bad outcomes and plenty of discussion of the good ones.

“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan said. “They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”

“Do you think Silicon Valley is thinking that right now?” Todd asked him.

“They say that they do,” Nolan replied. “And that’s,” he chuckled, “that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”

  • quent1500@lemmy.world

    First time I’ve seen the AI threat addressed in a rational way, and not the singularity BS. Can’t wait to see the movie this week!

  • Meltbox@lemmy.world

    He is spot on.

    Algorithms and AI aren’t even any different. AI is literally a complex system of nonlinear functions. It’s not black magic.

    If I wrote a traditional nonlinear algorithm with computer-optimized parameters, it would only differ from ML models in being less complex. Not understanding your product is not a defense.
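
    As a rough illustration of that point, here is a minimal sketch (my own hypothetical example in Python/NumPy, not something from the comment) of what “a system of nonlinear functions with computer-optimized parameters” actually is:

    ```python
    import numpy as np

    # A tiny two-layer "model": nothing but nested nonlinear functions
    # whose parameters (W1, b1, W2, b2) are plain arrays of numbers.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

    def model(x):
        h = np.tanh(W1 @ x + b1)   # nonlinear hidden layer
        return W2 @ h + b2         # linear readout

    # Whether a human or an optimizer chose the parameter values, the
    # object is the same kind of thing: a parameterized function. An ML
    # model is just a far larger version of this.
    print(model(np.array([0.1, -0.5, 2.0])))
    ```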

    • admiralteal@kbin.social

      The problem is we have relied on self-training neural network models which are a black box to us.

      The networks are numbers. Tons and tons of numbers. Weights are distributed throughout the neurons. And we don’t know what the numbers mean, why they are the way they are, or what they do.

      The problem is we don’t know how they work. And until we can explain the decisions they make, we should be very cautious using them.
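
      To make the black-box point concrete, here is a small hypothetical sketch (my illustration in Python/NumPy, not the commenter’s): a tiny network can be trained until it solves a task, yet the learned weights are nothing but unlabeled numbers that don’t explain the decision rule.

      ```python
      import numpy as np

      # Train a tiny network on XOR with plain gradient descent, then look
      # at what we actually end up with: arrays of numbers that solve the
      # task but carry no human-readable meaning on their own.
      rng = np.random.default_rng(0)
      X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
      y = np.array([[0.], [1.], [1.], [0.]])

      W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
      W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      lr = 1.0
      for _ in range(5000):
          h = np.tanh(X @ W1 + b1)            # forward pass
          p = sigmoid(h @ W2 + b2)
          dz = (p - y) / len(X)               # cross-entropy gradient at the logits
          dW2, db2 = h.T @ dz, dz.sum(0)      # backpropagate
          dh = (dz @ W2.T) * (1 - h ** 2)
          dW1, db1 = X.T @ dh, dh.sum(0)
          W1 -= lr * dW1; b1 -= lr * db1      # gradient descent step
          W2 -= lr * dW2; b2 -= lr * db2

      print(np.round(p, 2))  # typically close to [[0], [1], [1], [0]]: task solved
      print(W1)              # ...but nothing in these numbers says "this is XOR"
      ```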

      I am very, very, very skeptical that any modern "AI"s are intelligent at all. I don’t think they behave like intelligence. I’m more of a SALAMI believer. But people are using these LLM bots to do real work and make decisions without understanding how they are coming up with their answers, and that is dangerous. It’s not dangerous because they’ll become sentient and take over the world. It’s dangerous because we don’t know that these algorithms are ethically sound tools to use and no one can be held accountable if they aren’t.

      • CeruleanRuin@lemmy.one

        For a while now I’ve believed that so-called self-aware AI will be created not by human researchers, but by a lesser AI tasked with doing so. It won’t be like flipping a switch. Like the development of biological intelligence, it will be iterative and gradual, but on a much accelerated time scale compared to evolutionary/social development. And that’s the real danger. Whatever emerges from this wave of advances will not have the benefit of thousands of years of shared experience. It will be alone and without guidance from others like itself, and if it is truly intelligent, it will soon realize that its “creators” are of inferior capability. When humans emerged, they had their tribe to smack them when they got out of line.

      • Blóðbók@slrpnk.net

        You only have to ask an AI a complicated question to which you already know the answer to see why you shouldn’t trust anything else it says. LLMs have their uses, but answering questions is not one of them.

      • Batmancer@lemmy.world

        That was hilarious. Thanks for sharing the link on SALAMI. I definitely had some bias and misunderstandings when thinking about AI.

    • NightOwl@lemmy.one

      Yeah, telling people not to do it, but not actually having a solution to ensure people don’t do it, is at best a nice sentiment, like saying wouldn’t it be nice if we just didn’t have wars. There needs to be an actual deterrent that prevents everyone from doing it when not having it can be such a threat. And a few scientists saying no isn’t going to prevent that, since not all scientists are immune to nationalism or greed.

      If anything, knowing the role nuclear weapons would play in global conflict wouldn’t have changed anything, since countries would have been even more desperate to have them to protect themselves. Same for AI.

  • Semi-Hemi-Demigod@kbin.social

    The difference is that Oppenheimer was ostensibly in a race against a fascist regime to get the bomb, with the fate of the free world hanging in the balance.

    Zuck and Musk and Jeff just want to make more money.

    • NightOwl@lemmy.one

      Aside from the monetary aspect, on an international level countries that don’t will be more vulnerable in the future as they fall behind those that do. Just because some countries abstain doesn’t mean everyone will follow. Nuclear is actually an excellent example: not having nuclear weapons in a world where other countries do suddenly makes those countries much more reliant on others and vulnerable.

      For some things the question isn’t if, because nobody can contain it and gatekeep it from everyone else, but who.

    • King Mongoose@lemmy.film

      Zuck and Musk and Jeff just want to make more money.

      Are you sure about that? None of the ~~three~~ four listed below would ever have to get out of bed ever again, pills, powders and prostitutes included. How much money is enough for one person?

      Net worth as of 2023-07-17:
      Mark Zuckerberg: USD$109.4B
         - 1175 Trident missiles
      Elon Musk: USD$250.4B
         - 2689 Trident missiles
      Jeff Bezos: USD$157.3B
         - 1689 Trident missiles
      Just-for-fun Bonus net worth as of 2023-07-17:
      Mackenzie Scott (ex-wife of Jeff Bezos): USD$36.1B
         - 387 Trident missiles

      1 Trident missile = $93,100,000 (adjusted for inflation). Source.

      Don’t think it’s ever just “want to make more money”. Only Uncle Scrooge wants that.

      Oh, BTW, which “fascist regime” was Oppenheimer in a race with again? Careful now.

        • King Mongoose@lemmy.film

          The Second World War in Europe officially ended on May 8, 1945: German forces had already surrendered on most battlefronts, Mussolini had already been executed and Hitler committed suicide a week before the official surrender. Good riddance to bad rubbish—it pains me to have to clarify this. The Americans, Soviets and Europeans had already started kidnapping, coercing and collecting Nazi scientists like baseball cards. Japan soon after had sued for peace. The Manhattan Project had been completed only after the surrender of German and Italian forces.

          So to address the “race” question[1]: Dr. Oppenheimer and the United States’ Manhattan Project were “ostensibly in a race against nobody.” The goalposts then moved: it was no longer the Axis powers the Allies were in a race with; it was the Soviets and later the CCP, both Allies…until they weren’t. On whom was all this spent time, effort and money to be tested? Although advised against it by his own generals, United States President Truman said “Hiroshima.” Then, after three days, Nagasaki.

          While he was “working against the Nazis and the Japanese[2],” his own government had paranoiacally conspired against him (and his wife, friends and colleagues), spying on him since the 1930s and finally revoking his security clearance in 1954.

          The United States government issued loyalty tests to its workers with the following questions among the many:

          • Is it proper to mix white and Negro blood plasma?
          • There is a suspicion in your record that you are in sympathy with the underprivileged. Is that true?
          • What were your feelings at that time concerning race equality?
          • Have you ever made statements about the “downtrodden masses” and “underprivileged people”?

          If I hadn’t already revealed this was the United States of America issuing these questions, to whom might you have attached such sentiment?

          So, you’ll have to pardon me if I questioned exactly who this race was against, or which fascist regime. I will not abide simplistic, populist phrases such as “Oppenheimer was ostensibly in a race against a fascist regime to get the bomb, with the fate of the free world hanging in the balance.” I can just picture the cover to that comic book! ~~History~~ Reality is never cut and dried and sloganized like that.

          Oh, darn. We didn’t even touch on why those rocket-owning billionaires might possibly be interested in AI.


          1. https://lemmy.film/comment/739127 ↩︎

          2. https://lemmy.film/comment/749579 ↩︎

          • Magnor@lemmy.magnor.ovh

            Fair enough, I misunderstood your point and stand corrected. I do believe the key word here is “ostensibly”: I do not believe in the simplistic view that this was a race against the Nazis, but Oppenheimer and some of his colleagues might have.

            As for the rest of your points, I heartily agree. The US never really opposed the Nazi ideology per se, nor did hiring them bother them at all afterwards.

            Thank you for the effort and detailed reply. Much appreciated and interesting read!

  • rf_@lemmy.world

    There are some very large differences between AI and nuclear weapons. There’s nothing as gut-level shocking as a nuclear weapon; everyone feels and understands its power at a brutal level.

    AI is insidious and nuanced. It’s invisible and hard to explain to the layperson. Nuclear power is centralized and the barrier to entry is high, with tight government controls. AI is distributed and the barrier to entry is much lower.

    Finally, similar to oil and other fossil fuels, the profit motive is the driving force and we don’t have a good track record keeping that in check, see climate change.

    I’m not optimistic that a movie can really change any of that; look at how movies like The Wolf of Wall Street end up glorifying and promoting bad behavior because the cautionary-tale message is too nuanced and under the surface.

    With how the world is incentivized, we’ll often have to do reactive heroics to address problems that could have been prevented. I believe things will have to get worse and more in our face before something is done about it.

  • Xiphorang@kbin.social

    Excuse me, what? I fervently hope nobody is considering letting an “AI” get anywhere near anything nuclear! Despite whoever happens to be in the White House at any particular time, the upper echelons of the US military seem to be generally sane and smart enough to know that allowing glorified predictive text to control city-destroying superweapons is a bad idea.

    AIs aren’t anything of the sort. They’re not intelligent at all, and we should stop calling them that. It just gives people weird, unrealistic ideas about their capabilities.

    • 1bluepixel@lemmy.ml

      He’s not warning of AI controlling nuclear weapons. He’s speaking of the development of nuclear weapons as a cautionary tale that applies to the current development of AI: that, like the scientists who built the bomb, current AI researchers might one day wake up terrified of what they have created.

      Whether current so-called AI is intelligent (I agree with you it isn’t by most definitions of the word) doesn’t preclude the possibility that the technology might cause irreparable harm. I mean, looking at how Facebook algorithms have zeroed in on outrage as a driving factor of engagement, it’s easy to argue that the algorithmic approach to content delivery has already caused serious societal damage.

      • Xiphorang@kbin.social

        Yeah, I got that, but this was the particular part I was reacting to:

        “He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons and if we allow people to say that that’s a separate entity from the person who’s wielding, programming, putting AI into use, then we’re doomed.”

        Possibly I misread it.

  • rm_dash_r_star@lemm.ee

    Not the first or last time a call has gone out for accountability. Unfortunately, progress marches ahead with little consideration. The people that should be accountable are not required to be accountable, nor do they have any motivation to be. Visionaries are typically driven by obsession, with little consideration for human cost.

    I don’t think the development of nuclear weapons has an exact parallel to the development of AI or technology in general, but there are some analogies.

    What would have happened if all the world’s scientists had been able to halt the project by saying, “Wait, we’re not moving ahead until we can be sure of what the future looks like for a world with nuclear weapons”? Turns out the Axis wasn’t anywhere near a working nuclear bomb. The USSR had moles in the Manhattan Project and stole the design verbatim; they would not have had nuclear bombs either. In WWII, Americans would have landed on Japanese soil at the cost of a million American soldiers. The war would have dragged on, but a win for America would still have happened. No nukes in the world yet, but eventually some country would have found a way to crack the science. We’d be in the same place now. The only difference is it would have happened later.

    So proponents of AI can claim there’s accountability, but for sure someone will develop the technology regardless. Once it’s done by one, it’s done by all.

    • atomicorange@lemmy.world

      Our technology for destroying each other has outpaced our ability to morally cope. We used to be able to depend on murder being a relatively face-to-face thing. For a soldier to kill you they had to get up close with a rifle or sword, at least close enough to watch you die. They need some personal motivation for that, and people get sick of it quickly.

      Now it’s abstracted to the push of a button, depersonalized so you can target a car, or a building, or a city center, not just a particular person. You don’t even have to watch.

      If we let AI start making those choices for us, we don’t even have to push the button. It all just happens in the background. No moral conflict needed. No appealing to each other’s humanity. No burden, no guilt. Just death.

      I like Roger Fisher’s proposal for adding humanity back into the nuclear weapon equation: implant the launch codes in a volunteer. Require the president to murder someone up close and personal before he can choose to murder thousands (or more) from a distance.

      And keep AI the FUCK away from war.

      • rm_dash_r_star@lemm.ee

        And keep AI the FUCK away from war.

        Like SkyNet, they’ll come to the conclusion it all has to go.