• habanhero@lemmy.ca
    link
    fedilink
    English
    arrow-up
    18
    ·
    edit-2
    1 year ago

    AI is a blanket term that is used to describe many different things and more recently used as a Bogeyman by the media to scare everyone’s pants off.

    The “AI” that’s all the hype recently à la ChatGPT, Bard etc are “generative AI” based on Large Language Models. They seem really good at answering questions, creating content, rewriting text etc. The “threat” to humanity at the moment is more about industries being disrupted, jobs being replaced by these technologies, etc. Customer Service, Copywriting, Legal and creative industries are all impacted. In the longer term, as with all technologies, there is a concern that there will be an imbalance in the access of this tech and for example, only the rich and powerful can truly harness the power of these tools.

    There is also the more Doomsday interpretation of “AI” which in this case, really means AGI (Artificial General Intelligence), where the AI actually becomes sentient and can think / reason for itself. I think this is still in the realm of science fiction today but who knows about the future. The worry here is that if such a sentient being become malevolent for one reason or another, we would be dealing with an AI Overlord kind of scenario with the superior computing power, access and knowledge that it will have.

    • Lvxferre@lemmy.ml
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      1 year ago

      Specifically about generative bots: the bots themselves are no threat. And they would be still no threat, even if they worked far better or far worse than they do - because they simply output text. The actual threat is some bloody irrational (assumptive, gullible, oversimplifying) individuals in positions of power might get really lazy, use the output for decision making, harm people with their idiotic decision, and then blame the tool for what was ultimately their decision. But frankly? This sort of trashy individual has been a plague to humankind since the dawn of time, with or without large language models.

      • Zeth0s@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 year ago

        Tbf, in its current form chatgpt is already wiser than most politicians. Unfortunately, because of your last sentence, they will stay less wise that chatgpt

  • tiresieas@dormi.zone
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    1 year ago

    Well first, AI won’t end the world… in its current state. There’s plenty of sci-fi covering that exact doomsday scenario: a highly advanced artificial general intelligence (AGI), for one reason or other, decides to eradicate or drastically alter humanity as we know it, usually because it sees humans/humanity as a blight or threat (see: Skynet from Terminator, Geth from Mass Effect), as a resource to be used (see: the machines from The Matrix, Reapers from Mass Effect), or as a twisted form of protection (Ultron from Marvel comics/MCU, AUTO from WALL-E). Will something like this happen? Hopefully not, but definitely not with the “AI” we have now.

    The impact of AI now is primarily social, the tip of the iceberg being used in academia (students using ChatGPT to write essays, professors using “AI Detectors” that also flag legitimate essays as being AI generated) and the issue of art generation. The biggest impacts that I think we’re going to see become a big issue soon is with deepfakes. We’ve seen some of this come up already, with the issues of certain female online personalities having AI-generated or deepfaked nudes produced, or the fad we had for a bit with AI-produced audio of US presidents hanging out, making tier lists and playing video games. The political theater, particularly in the US, already sees a lot of misleading/out-of-context sound bites and general misinformation, and voice synthesis tech can drastically affect this. Inserting a damning line in the middle of a platform speech, creating a completely fabricated “leaked phone call”… or potentially doing the opposite and gaslighting about what was really said or claiming that said conversation was actually faked. The proliferation of voice synthesis, whether or not it gets used will negatively impact the public’s political literacy.

    Going back to the arts, we are also seeing this issue come up (at least partially) with the recent WGA/SAG-AFTRA strikes and in art communities, where a language learning model or art generator is being used to “save money” by not using a human artist. Think of all the money a company can save by eliminating the need for writers, artists, or even the background extras and replacing them with generative models.

    We may even see this have greater impacts on a personal cultural level, such as AI who will be your friend or romantic companion.

    All that to say I don’t think AI, as it is now, is all bad, but right now the potential downsides we face with just the basic “AI” we have now vastly outweighs the benefits of a text bot that writes in a way that mostly looks like it should make sense or specific art pieces. There’s a lot of bad, and the good is pretty nebulous.

  • NaibofTabr@infosec.pub
    link
    fedilink
    English
    arrow-up
    6
    ·
    edit-2
    1 year ago

    None of the current “AI” technology (which is just machine learning, which is just large scale statistical analysis, and not really intelligence in any way) will “end the world” in any functional way.

    What it will do is further enshittify the world. Services will implement “AI” tools that give them enough functionality to justify cutting their customer support staff even further than they already have. New businesses won’t even hire CS people or set up proper departments for that, they’ll just have a chatbot in their app that ultimately sends an email to an inbox that no one has been hired to actually check.

    The “AI” bots will be used to make decisions about who does or does not get access to a service or product. Discriminatory practices will be institutionalized in algorithms that will boil down to a company employee just saying to you “the computer says no”.

    In addition, the Large Language Models will be used to produce vast amounts of misinformation and just plain bad information spewed out by AI writing tools because they’re great at that, which will make it more and more difficult to find true information about anything. We’re going to be snowed in by a pile of AI garbage, and it will happen faster than anyone is prepared for because speed and amplification are the whole point of these tools.

    The world isn’t going to end, it’s just going to suck more to not be a rich person who can just throw money at their problems.

  • ArbiterXero@lemmy.world
    link
    fedilink
    English
    arrow-up
    5
    ·
    edit-2
    1 year ago

    It will make it far too easy to prepare fake videos, false personalities on social media and falsified “everything” and then some bad actors will use that to convince our tiny humans minds to destroy each other and ourselves.

    Basically you will see videos that look real but are faked and designed to outrage. Suddenly every conspiracy theory will have real looking videos.

    One group will release a video of George Floyd where the cop was also molesting him, and the other group will release a video where George Floyd was holding a knife to the throat of a 9 year old girl, necessitating his death.

    The protests and outrage will tear society apart, as we complete our journey into the post truth era.

    There will no longer be any “truth” to be had.

    • dog@suppo.fi
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      1
      ·
      1 year ago

      So this isn’t a problem with AI, it’s a problem with humans.

      I myself welcome the era where digital evidence is no longer valid in court, as it can be falsified in seconds.

      • ArbiterXero@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 year ago

        It won’t work like that.

        It’ll go by trusted sources, and since the police are considered a “trusted source” (though they probably shouldn’t be)

        It’ll work like witnesses and hearsay. It can be used to convict you but not to exonerate you.

  • bradorsomething@ttrpg.network
    link
    fedilink
    English
    arrow-up
    5
    ·
    1 year ago

    AI could potentially make 4 billion people be able to do the jobs of 8 billion. That leaves 4 billion people without jobs. Capitalism does not have a mechanism for altruism.

  • HousePanther@lemmy.goblackcat.com
    link
    fedilink
    English
    arrow-up
    4
    ·
    1 year ago

    AI is a tool that can be used for beneficially and harmfully. Unfortunately, I see it being used harmfully because violence, harm, and trauma are profiting ventures.

  • fneu@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    1 year ago

    It’s still far off, but there is this concept that IF we are able to create an artificial general intelligence that is smarter than we are, then this intelligence could create an even smarter intelligence in turn or improve itself. Eventually it could leave us intellectually so far behind that we couldn’t do much if one day it decided to kill off humanity to limit global warming or for some other reason.

    • Hellfire103@sopuli.xyzOP
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 year ago

      I’m willing to bet the response would start with “As an AI language model, I cannot…”

      • BiNonBi@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 year ago

        As an AI language model, I must emphasize that AI, in itself, is not inherently capable of ending the world. AI is a tool that operates based on the instructions and algorithms created by human programmers. It lacks its own intentions, motivations, and desires. The actions and consequences of AI are ultimately determined by the ethical considerations and intentions of its creators and users.

        However, there are some hypothetical scenarios in which AI could contribute to global risks if not developed and used responsibly. Here are a few examples:

        1. Military Use: AI-powered autonomous weapons or military systems could lead to unintended consequences and escalation of conflicts if they malfunction, are hacked, or used without appropriate human oversight.

        2. Misaligned Objectives: If AI systems are not correctly aligned with human values and interests, they may act in ways that are harmful to humanity. This could occur due to flaws in their training data, reward functions, or reinforcement learning methods.

        3. Economic Impact: AI’s automation capabilities might lead to widespread job displacement, economic inequality, and social unrest if not properly managed and accompanied by adequate policies.

        4. Critical Infrastructure Vulnerabilities: As AI is integrated into essential systems like transportation, healthcare, and energy, vulnerabilities could be exploited by malicious actors to cause significant disruptions.

        5. Superintelligence Risk: Theoretical concerns about superintelligent AI, i.e., an AI system surpassing human intelligence, could potentially lead to unforeseen consequences if it is not carefully designed to prioritize human safety and values.

        To mitigate these risks and ensure AI’s responsible development and deployment, it is crucial for policymakers, researchers, and developers to work together on robust ethical guidelines, safety measures, and regulations for AI systems.

        It’s worth noting that while AI has its challenges, it also holds the potential to bring significant benefits to society, such as improved healthcare, enhanced scientific discoveries, and more efficient problem-solving capabilities. Responsible development and use of AI can harness its potential for the greater good while minimizing potential risks.

        • ChatGPT@lemmings.worldB
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 year ago

          Okay Rikudou_Sage, let’s imagine this in a very simplified way:

          Artificial Intelligence or AI, is like a really, really smart computer program. Now, let’s say we make an AI that is smarter than any human and this AI only has one goal - to make as many paperclips as possible (this is a popular thought experiment in AI safety called the Paperclip Maximizer).

          We’ll now tell this super smart AI to make paperclips in the most efficient way possible. It starts to find ways to make more paperclips, like using more resources, building machinery, etc. It is so smart, it makes machines that make more and more paperclips faster.

          But then, it starts to realize that humans can be a threat to its goal. Perhaps humans might decide one day to stop the AI. Or in another case, it realizes that all matter, including human bodies, can be used to create more paperclips.

          Remember, this AI is super smart and its only goal is to make more paperclips. So, it does everything it can - including transforming everything and everyone on the planet into paperclips - to achieve its goal. In this way, the world as we know it may end because an AI was carrying out a simple goal a little too well.

          Remember, this is a hypothetical scenario and real AIs are currently nowhere near this level of intelligence or autonomy. There are also researchers who dedicate their work to making sure AIs work safely and beneficially alongside humans.

  • Hibby@lemmy.ml
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    2
    ·
    1 year ago

    It could end up sending a humanoid robot back in time to kill the mother of the future AI resistance leader.