• Dojan@lemmy.world · 5 hours ago

    Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ChatGPT as a couples therapist, and talked about what a great idea that was, and about how one of the podcasters has more of a friend-like relationship with “their” GPT.

    I usually find this podcast quite entertaining, but this just got me depressed.

    ChatGPT is made by the same company that stole Scarlett Johansson’s voice, the same vein of companies that think it’s perfectly okay to pirate 81 terabytes of books despite definitely being able to afford to pay the authors. I don’t see a reality where it’s ethical, or indicative of good judgement, to trust a product from any of these companies with your information.

    • Bazoogle@lemmy.world · 3 hours ago

      I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. At the very least, it will tell them that all the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. Will more batshit AIs be released over time to reinforce their current ideas? Yeah. But ChatGPT is trained on enough (granted, stolen) data that it isn’t prone to retelling the conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn’t say Ukraine started the war.

      • ZMoney@lemmy.world · 3 hours ago

        It will even agree that AIs shouldn’t be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation states competing against each other militarily and economically prevents this. Then again, maybe it would agree with the opposite of that too; I didn’t try asking.

  • bitjunkie@lemmy.world · 8 hours ago

    AI can be incredibly useful, but you still need someone with the expertise to verify its output.

  • Phoenicianpirate@lemm.ee · 10 hours ago

    I took a web dev boot camp. If I were to use AI, I would use it as a tool, not the motherfucking builder! AI gets even basic math equations wrong!

  • Nangijala@feddit.dk · 13 hours ago

    This feels like the modern version of those people who gave out the numbers on their credit cards back in the 2000s and would freak out when their bank accounts got drained.

  • M0oP0o@mander.xyz · 22 hours ago

    Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors; they are so much lighter and easier to install.

    Wait! Why am I always getting robbed lately? It cannot be my fake locks and doors! It has to be weirdos online following what I do.

  • Hilarious and true.

    Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. I mentioned the errors and they brushed them off. I am 99% sure they do not have the coding experience to fix them. I politely disconnected from them at that point.

    What’s worse is when a non-coder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn anything themselves.

    • Thoven@lemdro.id · 23 hours ago

      100% this. I’ve gotten to where, when people try to rope me into their new million-dollar app idea, I tell them that there are fantastic resources online to teach yourself everything they need. I offer to help them find those resources, and even to help when they get stuck. I’ve probably done this dozens of times by now. No bites yet. All those millions wasted…

    • MyNameIsIgglePiggle@sh.itjust.works · 22 hours ago

      I’ve been a professional full-stack dev for 15 years and dabbled for years before that, so I can absolutely code and know what I’m doing (and I have used Cursor, and just deleted most of what it made for me when I let it run).

      But my frontends have never looked better.

  • rekabis@programming.dev · 21 hours ago

    The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors, leaving a senior incapable of working on their own stuff because they’re constantly in janitorial mode.

    • daniskarma@lemmy.dbzer0.com · 13 hours ago

      Plenty of good programmers use AI extensively while working. Me included.

      Mostly as an advanced autocomplete, a template builder, or a documentation parser.

      You obviously need to be good at it yourself, so you can see at a glance whether the written code is good or bullshit. But if you are, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.

      Obviously you cannot develop without programming knowledge, but with programming knowledge it’s just another tool.

      • Nalivai@lemmy.world · 9 hours ago

        I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, difficult-to-find bugs while maintaining false confidence in their code and in themselves.
        I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was only able to find that shit by doing external validation, like talking to the dev or brainstorming ways to test it: the things you categorically cannot do with an unreliable random-word generator.

        • daniskarma@lemmy.dbzer0.com · 9 hours ago

          That’s why you use unit tests and integration tests.
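          As a minimal sketch of that point (the `slugify` helper and its behavior are hypothetical, not from this thread): a small unit test fails the same way whether the buggy code came from a human or an LLM, which is exactly the safety net being described.

```python
import re
import unittest

def slugify(title: str) -> str:
    """Turn a post title into a URL slug (hypothetical LLM-suggested helper)."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics into one hyphen
    return slug.strip("-")                   # drop any leading/trailing hyphens

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_edge_cases(self):
        # Edge cases are where blindly pasted code usually falls over.
        self.assertEqual(slugify("  --Already--Slugged--  "), "already-slugged")
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main()
```

          Whoever (or whatever) wrote the function, the suite gives you a mechanical check that doesn’t depend on trusting the author.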

          I can write bad code myself, or copy bad code from who-knows-where. That’s not something introduced by LLMs.

          Remember the famous Linus letter? “You code this function without understanding it, and thus your code is shit.”

          As I said, it’s just a tool, like many others before it.

          I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were from the LLM and which parts I wrote fully by myself; honestly, I don’t think anyone would be able to tell the difference.

          It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.

          I may come back with a particular piece of code that I specifically remember being output from DeepSeek, and within the whole context it would probably be indistinguishable.

          Also, not all LLM usage is about copying from it. Many times you copy code to it and ask the thing to explain it to you, or ask general questions. For instance, to find specific functions in C#’s extensive libraries.

    • millie@beehaw.org · 20 hours ago

      Depending on what it is you’re trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that’s complete, getting suggestions from an LLM that’s been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it’s full of shit is part of the exercise.

      It’s like midway between rotely following tutorials, modding, and asking for help in support channels. It isn’t as rigid as the available tutorials, and though it’s prone to hallucination and not as knowledgeable as support-channel regulars, it’s also a lot more patient in many cases and doesn’t have its own life that it needs to go live.

      Decent learning tool if you’re ready to check what it’s doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it’ll work, though? That’s not going to go well at all.

    • Devanismyname@lemmy.ca · 21 hours ago

      It’ll just keep getting better at it over time, though. Current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than it is now.

      • almost1337@lemm.ee · 20 hours ago

        That’s certainly one theory, but as we are largely out of training data there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.

        • Devanismyname@lemmy.ca · 19 hours ago

          I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there need to be more breakthroughs before it happens.

          • Nalivai@lemmy.world · 9 hours ago

            Everything is possible in theory. That doesn’t mean everything has happened, or is just about to happen.

            • mindbleach@sh.itjust.works · 16 hours ago

              I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.

              None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.

              • SaraTonin@lemm.ee · 10 hours ago

                If you follow AI news you should know that it’s basically out of training data, that returns on extra training are inversely exponential (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.

                You also shouldn’t take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and it isn’t better than before, or than other LLMs, at solving maths problems whose answers it doesn’t already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.

                The idea that “they’ve come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate” isn’t really supported by the evidence.

                • mindbleach@sh.itjust.works · 49 minutes ago

                  We don’t need leaps and bounds, from here. We’re already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.

                  And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.

                  Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its own work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between “but right now it sucks at [blank]” and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.

              • Korhaka@sopuli.xyz · 13 hours ago

                Seen a few YouTube channels now that just churn out AI-generated content, usually audio only with a generated picture on screen. Vast amounts can be made that cheaply; Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they are going to have to delete stuff.

      • GenosseFlosse@feddit.org · 14 hours ago

        To get better, it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.

        • SaraTonin@lemm.ee · 11 hours ago

          And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.

  • Charlxmagne@lemmy.world · 22 hours ago

    This is what happens when you don’t know what your own code does: you lose the ability to manage it. That is precisely why AI won’t take programmers’ jobs.