• FunkyStuff [he/him]@hexbear.net · 2 months ago

    I don’t think that’s the problem. The problem is that an AI can’t know truth from falsehood, or tell when things are being omitted, overly emphasized, etc. The only thing it can actually evaluate is tone, and the factual, objective affect that all news reporting tends to use is gonna read as unbiased. An article would only register as biased if it started throwing out insults, used lots of positive or negative adjectives, or showed other kinds of semantically evident bias. You’d basically need an AGI to actually evaluate how biased an article is. Not to mention that attempting to quantify that bias assumes there even is a ground truth to compare against, which might be true for the natural sciences but is almost always false for social reality.
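    A rough sketch of the kind of detector this argument implies, assuming a naive lexicon-based approach (the word list, function name, and scoring scale are made-up placeholders, not any real tool): it can only see semantically evident signals like loaded adjectives, so an article that omits or spins facts in a flat, factual register scores as "unbiased."

    ```python
    # Toy "bias" scorer that only measures tone, per the argument above.
    # LOADED_WORDS is a hypothetical placeholder lexicon.
    LOADED_WORDS = {"disastrous", "heroic", "shameful", "brilliant", "corrupt"}

    def tone_bias_score(text: str) -> float:
        """Fraction of tokens that are emotionally loaded (0.0 reads as 'neutral')."""
        tokens = [t.strip(".,!?;:").lower() for t in text.split()]
        if not tokens:
            return 0.0
        return sum(t in LOADED_WORDS for t in tokens) / len(tokens)

    # An article that omits key facts but keeps a flat affect still scores 0.0,
    # which is exactly the failure mode being described.
    print(tone_bias_score("The senator voted against the bill on Tuesday."))        # 0.0
    print(tone_bias_score("The corrupt senator's shameful vote killed the bill."))  # 0.25
    ```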

    • UlyssesT [he/him]@hexbear.net · 2 months ago

      You’d basically need an AGI to actually evaluate how biased an article is.

      Too many bazingas, including a few on Hexbear, believe that a sufficiently large dataset (and enough burned forests and dried-up lakes) will turn the treat printers into AGI.

    • JoeByeThen [he/him, they/them]@hexbear.net · 2 months ago

      Oh yeah, if you want something reflecting objective reality, sure, absolutely. You need context out the wazoo. But if you’re just measuring a spread of bias from Democrat to Republican among the hegemonic media sources that are already only reporting within that context, you can probably be pretty accurate about which way they’re leaning. Especially since, within that spread, “reporting” is largely gonna be providing support for talking points from one party or the other.
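      A minimal sketch of that narrower task, assuming a toy bag-of-words comparison against per-party talking-point text (the reference sentences, `DEM_REF`/`GOP_REF`, and the scoring scale are all illustrative assumptions): no ground truth needed, just two reference corpora to measure lean against.

      ```python
      from collections import Counter
      import math

      def bow(text: str) -> Counter:
          """Bag-of-words vector for a chunk of text."""
          return Counter(t.strip(".,!?;:").lower() for t in text.split())

      def cosine(a: Counter, b: Counter) -> float:
          """Cosine similarity between two bag-of-words vectors."""
          dot = sum(a[w] * b[w] for w in set(a) & set(b))
          norm = math.sqrt(sum(v * v for v in a.values())) * \
                 math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      # Placeholder "talking point" corpora; a real attempt would need piles
      # of documents per party, not one sentence each.
      DEM_REF = bow("expand healthcare access protect voting rights climate action")
      GOP_REF = bow("secure the border cut taxes defend the second amendment")

      def lean(article: str) -> float:
          """Positive leans Republican, negative leans Democrat, on this toy scale."""
          v = bow(article)
          return cosine(v, GOP_REF) - cosine(v, DEM_REF)

      print(lean("Lawmakers push to cut taxes and secure the border"))   # > 0
      print(lean("New bill would expand healthcare access nationwide"))  # < 0
      ```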