• fmstrat@lemmy.nowsci.com · +7 · 1 year ago

      Believable because:

      However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.

      So outside of its purview? Agree.

    • givesomefucks@lemmy.world · +6/−30 · 1 year ago

      Why?

      ChatGPT writes them all the same, so it’s not so much “an AI wrote this” as it is “Bob always writes like this, we know Bob wrote this because _____”.

      It’s a bad headline, but the article immediately clarified.

      • neekz0r@lemmy.ml · +24/−1 · 1 year ago

        All of these were written by ChatGPT:

        To make toast, start by plugging in your toaster and adjusting the browning setting to your preference. Take a slice or two of bread and place them into the toaster slots. Push down the lever to activate the heating elements. Wait for a few minutes until the toaster automatically pops the toast back up, indicating it’s ready. Carefully remove the toast and spread your desired toppings, such as butter or jam, to taste. Enjoy your warm, crispy toast, a simple and satisfying breakfast or snack.

        In the pursuit of achieving the toasted state of bread, one must first interface with an electrical device colloquially known as a toaster. This remarkable appliance harnesses electrical energy to generate heat through resistive elements. Place slices of bread in the designated slots, ensuring that the toaster is appropriately powered, and set the temporal parameter of toasting, often denoted as “browning level,” to the desired degree of crispiness. Activation is achieved by engaging the mechanical lever, prompting the transformational process, during which thermal energy is conveyed into the bread, inducing a Maillard reaction that alters its molecular structure. Upon the cessation of this temporal journey, the toaster signals completion. Carefully extract the now-toasted bread, apply preferred condiments, and savor this culinary endeavor.

        Alright, so like, to make some bomb toast, you gotta start by pluggin’ in that toaster thingy and crankin’ up the toastiness level, you know, like, how crispy you want it. Then you grab your slices of bread and just slide ‘em into those slots in the toaster, pop the sucker down, and hit the magic button to fire it up. While you’re waitin’ for the toasting action to go down, you can do a lil’ dance or somethin’, and when it’s done, it’s like, bam, your toast is ready, girl! Grab it, slather on some butter or whatever floats your boat, and get your snack on – it’s, like, totally yum!

        Well, back in our day, making toast was a straightforward affair. First, you’d plug in your trusty toaster and set the browning knob to your liking. Then, take a couple of slices of good old-fashioned bread, none of that fancy stuff, and place them gently into the toaster slots. Press down the lever, and as the heating elements did their work, you’d have time to catch up on the morning paper. When the toast popped up, simply retrieve it with a fork or a butter knife, add some butter, and there you had it – a simple, no-fuss breakfast, just the way we liked it. Those were the days, my friend.

        Seriously, making toast is, like, the most basic skill ever, and it’s so obvious that anyone who doesn’t get it must be, like, a total amateur. You just plug in that toaster and adjust the settings for your personal taste – it’s not rocket science, people! Then, grab some bread, any kind you want, and drop it into the slots, it’s not that hard. Push the lever down, and boom, the heat does its thing. It’s, like, literally impossible to mess up. But I guess there are still some folks out there who, like, need to argue about every little detail because they just can’t accept that not everyone is a culinary genius. 😒🍞 #ToastGate

        No, ChatGPT does not write it all the same.

        • givesomefucks@lemmy.world · +4/−28 · edited · 1 year ago

          It’s hilarious watching people act like AI is so good that AI can’t tell an AI wrote something.

          To you those might seem completely different, but you’re overestimating AI on one side and underestimating on the other.

          It’s a hell of a lot easier for AI to check for similarities than it is to write something without similarities, even if a human can’t see them. For AI, checking is always easier than producing.

            • agent_flounder@lemmy.world · +4/−1 · 1 year ago

              Here is the link from that article to the study.

              Regarding mathematical impossibility…

              We then provide a theoretical impossibility result indicating that as language models become more sophisticated and better at emulating human text, the performance of even the best-possible detector decreases. For a sufficiently advanced language model seeking to imitate human text, even the best-possible detector may only perform marginally better than a random classifier. Our result is general enough to capture specific scenarios such as particular writing styles, clever prompt design, or text paraphrasing.
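
              If this is the Sadasivan et al. paper (“Can AI-Generated Text be Reliably Detected?”, which that abstract matches), the bound behind the claim is, as I read it:

              ```latex
              % Best-case detector performance bounded by the total variation
              % distance TV between the model's text distribution M and the
              % human text distribution H:
              \mathrm{AUROC}(D) \le \tfrac{1}{2} + \mathrm{TV}(\mathcal{M},\mathcal{H}) - \tfrac{1}{2}\,\mathrm{TV}(\mathcal{M},\mathcal{H})^{2}
              ```

              As TV(M, H) → 0, i.e. as model text becomes statistically indistinguishable from human text, even the best possible detector tends toward a coin flip (AUROC 1/2).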

              Interesting. Now, this is just one paper. And one paper does not mean the science is settled on that topic.

              The implications are certainly interesting.

              I’m curious how much data would be required to successfully mimic a specific writing style (e.g. lemmy post or research paper or letter to family) for a specific person. And conversely how easy it would be to detect.

              I haven’t thought about this in depth yet. But the threats that come to mind are: someone spoofing me for some reason or me using AI to “research” and write for me (school, say) so I don’t actually have to learn anything. The former makes me wonder if digital signatures will become more widely adopted. The latter probably requires a different approach to assessing the knowledge of students. I’m sure there are other threats we can think of given a little more time.
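
              On the digital-signature idea, a minimal sketch of what signing your own writing could look like (Python with the pyca/cryptography library; the post text is obviously a placeholder):

              ```python
              # Sketch: sign what you write; anyone with the public key can verify
              # a post really came from you. Uses pyca/cryptography's Ed25519 API.
              from cryptography.exceptions import InvalidSignature
              from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

              private_key = Ed25519PrivateKey.generate()
              public_key = private_key.public_key()

              post = b"This comment was really written by me."
              signature = private_key.sign(post)

              public_key.verify(signature, post)  # passes silently: genuine post

              try:
                  public_key.verify(signature, b"A spoofed post.")  # altered content
              except InvalidSignature:
                  print("spoof detected")
              ```

              It proves who wrote (or at least who vouched for) a text, though it says nothing about whether an AI helped produce it.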

            • givesomefucks@lemmy.world · +2/−7 · 1 year ago

              This is like someone disputing an article about the Wright Brothers’ first flight with one from six months earlier saying manned flight can’t happen…

  • EvilBit@lemmy.world · +82/−2 · 1 year ago

    As I understand it, one of the ways AI models are commonly trained is to run them against a detector and train until they can reliably defeat it. Even if this were a great detector, all it would really do is teach the next model how to beat it.

    • magic_lobster_party@kbin.social · +27 · 1 year ago

      That’s how GANs are trained, and I haven’t seen anything about GPT4 (or DALL-E) being trained this way. It seems like current generative AI research is moving away from GANs.

      • KingRandomGuy@lemmy.world · +4 · 1 year ago

        Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you just have access to inference on a detector of some kind, but not the model weights and architecture themselves, you won’t be able to perform backpropagation and therefore can’t generate gradients to update your generator’s weights.
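
        A minimal sketch of the mechanics (PyTorch, toy 1-D data; the architecture and sizes are arbitrary choices, not from any real detector), just to show where the discriminator’s gradients enter the generator update:

        ```python
        # Minimal GAN-style loop: the generator update needs gradients that
        # flow *through* the discriminator. With only black-box inference
        # access to a detector (no weights/architecture), this is impossible.
        import torch
        import torch.nn as nn

        latent_dim = 8
        G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
        D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()

        for step in range(1000):
            real = torch.randn(64, 1) * 2 + 3        # "human" data: mean 3, std 2
            fake = G(torch.randn(64, latent_dim))    # generated data

            # Discriminator step: learn to tell real from fake.
            opt_d.zero_grad()
            loss_d = (bce(D(real), torch.ones(64, 1)) +
                      bce(D(fake.detach()), torch.zeros(64, 1)))
            loss_d.backward()
            opt_d.step()

            # Generator step: backpropagate THROUGH D to fool it.
            opt_g.zero_grad()
            loss_g = bce(D(fake), torch.ones(64, 1))
            loss_g.backward()
            opt_g.step()
        ```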

        That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.

      • EvilBit@lemmy.world · +4 · 1 year ago

        I know it’s intrinsic to GANs, but I think I’d read that this is a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source, unfortunately.

  • demonsword@lemmy.world · +75 · edited · 1 year ago

    No references whatsoever to false positive rates, which I’d assume are quite high. Also, they note that they built this detector specifically to catch chemistry-related AI-generated articles.

  • CthulhuOnIce@sh.itjust.works · +51/−1 · 1 year ago

    I really, really doubt this; OpenAI said recently that AI detectors are pretty much impossible. And in the article they literally use the wrong name to refer to a different AI detector.

    Especially since you can change ChatGPT’s style just by asking it to write in a more casual way, “stylometrics” seems like an improbable method for detecting AI as well.
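
    For what it’s worth, “stylometrics” generally means surface statistics like the ones below, which is exactly what a “write it more casually” prompt shifts. A toy sketch (the feature set is illustrative, not from any actual detector):

    ```python
    # Toy stylometric features of the kind style-based detectors lean on.
    # One "write this casually" prompt moves most of them.
    import re

    FUNCTION_WORDS = {"the", "of", "and", "to", "in", "that", "it", "is", "was"}

    def stylometric_features(text: str) -> dict:
        words = re.findall(r"[a-zA-Z']+", text.lower())
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        n = max(len(words), 1)
        return {
            "avg_sentence_len": n / max(len(sentences), 1),
            "avg_word_len": sum(len(w) for w in words) / n,
            "function_word_rate": sum(w in FUNCTION_WORDS for w in words) / n,
            "comma_rate": text.count(",") / n,
        }

    formal = "In the pursuit of achieving the toasted state of bread, one must first interface with a toaster."
    casual = "So like, you just grab some bread, pop it in, and boom, toast!"
    print(stylometric_features(formal))
    print(stylometric_features(casual))
    ```

    Same task, same model, wildly different numbers, exactly like the toast examples upthread.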

    • Fredthefishlord@lemmy.blahaj.zone · +10/−7 · 1 year ago

      It’s in OpenAI’s best interest to say they’re impossible. Regardless of whether they actually are, that’s the least trustworthy possible source to take into account when forming your understanding of this.

      • CthulhuOnIce@sh.itjust.works · +7/−1 · 1 year ago

        OpenAI had their own AI detector, so I don’t really think it’s in their best interest to say it’s impossible for their own product to be effective.

  • simple@lemm.ee · +37 · 1 year ago

    Willing to bet it also constantly catches non-AI text and calls it AI-generated.

    • snooggums@kbin.social · +15 · 1 year ago

      The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles look like a human wrote them, then human-written news articles will look like AI.

    • floofloof@lemmy.ca (OP) · +12 · edited · 1 year ago

      The original paper does have some figures about misclassified paragraphs of human-written text, which would seem to mean false positives. The numbers are higher than for misclassified paragraphs of AI-written text.
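
      For intuition on why that matters, a quick base-rate sketch with made-up numbers (none of these are from the paper):

      ```python
      # Base-rate sanity check with made-up numbers: a decent-looking detector
      # still mostly flags humans when most submissions are human-written.
      tpr = 0.95   # catches 95% of AI text (assumed)
      fpr = 0.05   # flags 5% of human text as AI (assumed)
      p_ai = 0.10  # 10% of submissions actually AI-written (assumed)

      true_flags = tpr * p_ai
      false_flags = fpr * (1 - p_ai)
      precision = true_flags / (true_flags + false_flags)
      print(f"P(actually AI | flagged) = {precision:.2f}")  # ~0.68
      ```

      Even with a strong detector, roughly a third of flagged work in this scenario would be human-written.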

  • TropicalDingdong@lemmy.world · +39/−8 · 1 year ago

    This is kind of silly.

    We will 100% be using AI to generate papers now and in the future. If the AI can catch any wrong conclusions or misleading interpretations, that would be helpful.

    Not using AI to help you write at this point is you wasting valuable time.

    • theluddite@lemmy.ml · +11/−1 · edited · 1 year ago

      I do a lot of writing of various kinds, and I could not disagree more strongly. Writing is a part of thinking. Thoughts are fuzzy, interconnected, nebulous things, impossible to communicate in their entirety. When you write, the real labor is converting that murky thought-stuff into something precise. It’s not uncommon in writing to have an idea all at once that takes many hours and thousands of words to communicate. How is an LLM supposed to help you with that? The LLM doesn’t know what’s in your head; using it is diluting your thought with statistically generated bullshit. If what you’re trying to communicate can withstand being diluted like that without losing value, then whatever it is probably isn’t meaningfully worth reading. If you use LLMs to help you write stuff, you are wasting everyone else’s time.

      • Excrubulent@slrpnk.net · +6 · edited · 1 year ago

        Yeah, I agree. You can see this in all AI generated stuff - none of it has any purpose, no intention.

        When people say it’s saving them time, I have to ask what they’re doing that can be replaced by AI, whether they’re actually any good at it, and whether the AI has improved their work or just made it happen faster at the expense of quality.

        I have turned off all predictive writing of any kind on my devices; it gets in my head and stops me from forming my own thoughts. I want my authentic voice, and I can’t stand the idea of a machine prompting me with its own idea of what I want to say.

        Like… we’re prompting the AI, but are they really prompting us?

        • theluddite@lemmy.ml · +3 · 1 year ago

          Amen. In fact, I wrote a whole thing about exactly this – without an LLM! Like most things I write, it took me many hours and evolved many times, but I take pleasure in communicating something to the reader, in the same way that I take pleasure in learning interesting things reading other people’s writing.

      • TropicalDingdong@lemmy.world · +1/−4 · 1 year ago

        How is an LLM supposed to help you with that?

        I have it read and review a couple of paragraphs of a research article, many, many times, to create a distribution of what was likely said in those paragraphs, in a tabular format. I’ll also work with it to create an outline of an idea I’m working on, to keep me focused and to help develop my research plan. I’ll then ask it to drill down into each sub-point and give me granular points to focus on. Obviously I’m steering, but it’s not too difficult to use it in such a way that it creates a scaffolding for you to work from.
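
        Roughly, the repeated-review part looks like this (a sketch assuming the OpenAI Python client; the model name, prompt, and passage are placeholders):

        ```python
        # Rough sketch of the "many passes -> distribution" idea above.
        # Assumes the OpenAI Python client; model and prompt are placeholders.
        from collections import Counter

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        paragraph = "...a couple of paragraphs from the article under review..."
        prompt = "In one short sentence, state the main claim of this passage:\n\n" + paragraph

        claims = []
        for _ in range(20):  # many independent reads of the same text
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                messages=[{"role": "user", "content": prompt}],
                temperature=1.0,  # deliberately sample diverse readings
            )
            claims.append(resp.choices[0].message.content.strip())

        # Tabulate how often each reading of the passage came up.
        for claim, count in Counter(claims).most_common():
            print(f"{count:2d}  {claim}")
        ```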

        If you use LLMs to help you write stuff, you are wasting everyone else’s time.

        If you aren’t using LLMs to help you write stuff, you are wasting your own time.

        • theluddite@lemmy.ml · +3 · 1 year ago

          I don’t think that sounds like a good way to make a good paper that effectively communicates something complex, for the reasons in my previous comment.

    • Laticauda@lemmy.ca · +2 · 1 year ago

      Not using AI to help you write at this point is you wasting valuable time.

      Bro, WHAT are you smoking? In academia the process of writing the paper is just as important as the paper itself, and in creative writing, why would you even bother being a writer if you just had an AI do it for you? Wasting valuable time? The act of writing is inherently valuable.

  • Lunch@lemmy.world · +16 · 1 year ago

    Didn’t OpenAI themselves state some time ago that it isn’t possible to detect it?

  • Deckweiss@lemmy.world · +19/−4 · edited · 1 year ago

    I don’t understand. Are there places where using ChatGPT for papers is illegal?

    The state where I live explicitly allows it. Only plagiarism is prohibited. But having ChatGPT formulate the results of your scientific work, correct the grammar, improve the style, etc. doesn’t bother anybody.

    If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it could unintentionally plagiarize. So what’s the big deal?

    • alienanimals@lemmy.world · +27/−7 · 1 year ago

      It’s not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and did not use calculators or any other tool that they would have available to them in the workforce.

      • Phanatik@kbin.social · +9/−1 · 1 year ago

        There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. The latter constitutes plagiarism, which schools/universities are strongly against.

        The problem is being able to differentiate between a paper that’s been written by a human (which may or may not have been written with ChatGPT’s assistance) and a paper entirely written by ChatGPT and presented as a student’s own work.

        I want to strongly stress that the latter situation is plagiarism. The argument doesn’t even involve the plagiarism that ChatGPT itself does. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT’s paper as your own; ergo, plagiarism.

          • olmec@lemm.ee · +1 · 1 year ago

            Correct me if I’m wrong about current teaching methods, but I feel like the way you outlined things is how school is already taught. Calculators were “banned” until about 6th grade, because we were learning the rules of math. Sure, we could give calculators to 3rd graders, but they would learn that 2 + 2 = 4 because the calculator said so, and not because they worked it out. Calculators were allowed once you got into geometry and algebra, where the actual calculation is merely a mechanism for the logical thinking you are learning. Finding the answer to 5/7 is trivial next to finding the value of X that makes Y = 0.

            I am not close to the education sector, but I imagine LLMs are going to be used similarly; we just don’t have the best way laid out yet. I can totally see a scenario where, in 2030, students have to write and edit their own papers until they reach grade 6 or so. Then, rather than writing a paper which tests all your language arts skills, you will proofread three different papers written by LLMs, with a hyper-focus on one skill set. One week it may be active vs. passive voice, or using gerunds correctly. Just like with math and the calculator, you will move beyond learning the mechanics of reading and writing and focus on composing thoughts in a clear manner. This doesn’t seem like a reach; we just don’t have curriculum ready to take advantage of it yet.

        • RiikkaTheIcePrincess@kbin.social · +1 · 1 year ago

          There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you.

          Yeah, one is what many “AI” fans insist is what’s happening, and the other is what people actually do because humans are lazy, intellectually dishonest piles of crap. “Just a little GPT,” they say. “I don’t see a problem, we’ll all just use it in moderation,” they say. Then somehow we only see more garbage full of errors; we get BS numbers, references to studies or legal cases or anything else that simply don’t exist, images of people with extra rows of teeth and hands where feet should be, gibberish non-text where text could obviously be… maybe we’ll even get ads injected into everything because why not screw up our already shitty world even more?

          So now people have this “tool” they think is simultaneously smarter and more creative than humans at all of the things humans have historically claimed makes them better than not only machines but other animals, but is also “just a tool” that they’re only going to use a little bit, to help out but not replace. They’ll trust this tool to be smarter than they are, which it will arguably impressively turn out to not be. They’ll expect everyone else to accept the costs this incurs, from environmental damage due to running the damn things to social, scientific, economic, and other harms caused by everything being generated by “hallucinating” “AI” that’s incapable of thinking.

          It’s all very tiring.

          (And now I’m probably going to get more crap for both things I’ve said and things I haven’t, because people are intellectually lazy/dishonest and can’t take criticism. Even more tiring! Bleh.)

          • Phanatik@kbin.social · +1 · 1 year ago

            Everything you’ve said I agree with wholeheartedly. This kind of corner-cutting isn’t good for us as a species. When you eliminate the struggle involved in developing skills, it cheapens whatever you’ve produced. Just soulless garbage, and it’ll proliferate the most in art spaces.

            The first thing that happened was that Microsoft implemented ChatGPT into Windows as part of their Copilot feature. It can now use your activity on your PC as data points, and the next step is sure as shit going to be an integration with Bing Ads. I know this because Microsoft presented it to our company.

            I distrusted it then and I want it to burn now.

      • BraveLittleToaster@lemm.ee · +11/−3 · 1 year ago

        Teachers when I was little: “You won’t always have a calculator with you.” And here I am with a device in my pocket 24/7 that’s more powerful than what sent astronauts to the moon.

        • LukeMedia@lemmy.world · +2 · 1 year ago

          Fun fact for you, many credit-card/debit-card chips alone are comparably powerful to the computers that sent us to the moon.

          It’s mentioned a bit in this short article about how EMV chips are made. This summary of compute power does come from a company that manufactures EMV chips, so there is bias present.

    • kirklennon@kbin.social · +5/−2 · 1 year ago

      Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?

      • Deckweiss@lemmy.world · +3/−1 · 1 year ago

        Science isn’t about writing. It is about finding new data through the scientific process and communicating it to other humans.

        If a tool helps you do any of it better, faster or more efficiently, that tool should be used.

        But I agree with your sentiment when it comes to for example creative writing.

        • sab@kbin.social · +4/−2 · edited · 1 year ago

          Science is also creative writing. We do research and write the results, in something that is an original product. Something new is created; it’s creative.

          An LLM is just reiterative. A researcher might feel like they’re producing something, but they’re really just reiterating. Even if the product is better than what they would have produced themselves, it is still worth less, as it is not original and will not make a contribution that hasn’t been made already.

          And for a lot of researchers, the writing and the thinking blend into each other. Outsource the writing, and you’re crippling the thinking.

        • Laticauda@lemmy.ca · +1/−1 · 1 year ago

          Science is about thinking. If you’re not the one writing your own research, you’re not the one actually thinking about it and conceptualizing it. The act of writing a research paper is just as important as the paper itself.

      • agent_flounder@lemmy.world · +1 · 1 year ago

        To me this question hints at the seismic paradigm shift that comes from generative AI.

        I struggle to wrap my head around it and part of me just wants to give up on everything. But… We now have to wrestle with questions like:

        What is art and do humans matter in the process of creating it? Whether novels, graphic arts, plays, whatever else?

        What is the purpose of writing?

        What if anything is still missing from generative writing versus human writing?

        Is the difference between human intelligence and generative AI just a question of scale and complexity?

        Now or in the future, can human experience be simulated by a generative AI via training on works produced by humans with human experience?

        If an AI can now or some day create a novel that is meaningful or moving to readers, with all the hallmarks of a literary masterwork, is it still of value? Does it matter who/what wrote it?

        Can an AI have novel ideas and insights? Is it a question of technology? If so, what is so special about humans?

        Do humans need to think if AI one day can do it for us and even do it better than we can?

        Is there any point in existing if we aren’t needed to create, think, generate ideas and insights? If our intellect is superfluous?

        If human relationships conducted in text and video can be simulated on one end by a sufficiently complex AI, to fool the human, is it really a friendship?

        Are we all just essentially biological machines and our bonds simply functions of electrochemical interactions, instincts, and brain patterns?

        I’m late to the game on all this stuff. I’m sure many have wrestled with a lot of this. But I also think maybe generative AI will force far more of us to confront some of these things.

    • gullible@kbin.social · +3 · 1 year ago

      I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI. The big deal is wholesale plagiarism via AI. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.

    • TropicalDingdong@lemmy.world · +4/−3 · 1 year ago

      If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it could unintentionally plagiarize. So what’s the big deal?

      There isn’t one. Not that I can see.

      • Jesusaurus@lemmy.world · +8 · 1 year ago

        At least within a higher-education environment, the problem is who does the critical thinking. If you just offload a complex question to ChatGPT and submit the result, you don’t learn anything. One of the purposes of paper-based exercises is to get students thinking about topics and understanding concepts in order to apply them to other areas.

        • TropicalDingdong@lemmy.world · +5/−3 · 1 year ago

          You are considering it from a student perspective. I’m considering it from a writing and communication/publishing perspective. I’m a scientist, I think a decent one, but I’m only a proficient writer, and I don’t want to be a good one. It’s just not where I want to put my professional focus. However, you cannot advance as a scientist without being a ‘good’ writer (and I don’t just mean proficient). I get to offload all kinds of shit to ChatGPT. I’m even working on some stuff where I can dump in a folder of papers and have it go through and statistically review all of them to give me a good idea of what the landscape I’m working in looks like.

          Things are changing ridiculously fast. But if you are still relying on writing as your pedagogy, you’re leaving a generation of students behind. They will not be able to keep up with people who directly incorporate AI into their workflows.

          • KingRandomGuy@lemmy.world · +1 · 1 year ago

            I’m curious what field you’re in. I’m in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don’t approve of using LLMs to write accuracy-critical sections (such as background or results) beyond rewording.

            I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still somewhat of a gray area in the US AFAIK.

            • TropicalDingdong@lemmy.world · +1/−2 · 1 year ago

              I work in remote sensing, AI, and feature detection, though I work almost exclusively for private industry, generally in the natural-hazard and climate-mitigation space.

              Lately, I’ve been using it to summarize big batches of publications into tables that I can then analyze statistically (because the LLMs don’t always get it right). I don’t have the time to read like that, so it helps me build an understanding of a space without having to actually read it all.

              I think the hand-wringing is largely that. I’m not sure it’s going to matter in 6 months to a year. We’re at the inflection point (like pre-AlphaGo) where it’s clear that AI can do this thing that was thought to be solely the domain of humans. But it doesn’t necessarily do it better than the best of us. We know how this goes, though. It will surpass us, and likely by a preposterous margin. Pandora’s box is wide open. No closing this up.

  • TheLurker@lemmy.world · +6/−1 · 1 year ago

    Well, with VC investment low due to higher interest rates, it was only a matter of time before academics started posting bullshit papers to lure that sweet, sweet VC money.

    Seems like a few people at the University of Kansas in Lawrence are making a run at a startup.

  • nfsu2@feddit.cl · +3 · 1 year ago

    Isn’t this like a constant fight between the people who develop anti-AI-content tools and the internet pirates who develop anti-anti-AI-content tools? Pretty sure the pirates always win.

    • Overzeetop@kbin.social · +3 · 1 year ago

      You sully the good name of Internet Pirates, sir or madam. I’ll have you know that online pirates have a code of conduct, and there is no value in promulgating an anti-AI or anti-anti-AI stance within a community which merely wishes information to be free (as in beer) and readily accessible in all forms and all places.

      You are correct that the pirates will always win, but they (we) have no beef with AI as a content-generation source. ;-)

      • nfsu2@feddit.cl · +1 · 1 year ago

        Oh yes, by “fight” I mean that no matter how hard developers push proprietary software, it gets cracked anyway. It’s so funny.

  • Cyborganism@lemmy.ca · +3 · edited · 1 year ago

    I say we develop a Voight-Kampff test as soon as possible for detecting whether we’re speaking to an AI or an actual human being when chatting with or calling a company’s customer service representative.

    Edit: I made a mistake.