It is fucking priceless that an innovation containing such simplicities as “don’t use 32-bit weights when tokenizing petabytes of data” and “compress your hash tables” sent the stock exchange into ‘the west has fallen’ mode. I don’t intend to take away from that; it’s so fucking funny peltier-laugh

This is not the rights issue, this is not the labor issue, this is not the merits issue, this is not even the philosophical issue. This is the cognitive issue. When not exercised, parts of your brain will atrophy. You will start to outsource your thinking to the black box. You are not built different. It is the expected effect.

I am not saying this is happening on this forum, or even that there are tendencies close to this here, but I preemptively want to make sure it gets across, because it fucked me up for a good bit. Through late 2023–early 2024 I found myself leaning into both AI images for character conceptualization and AI coding for my general workflow. I do not recommend this in the slightest.

For the former, I found in retrospect that the AI image generation reified elements into the characters that I did not intend and later regretted. For the latter, it essentially kneecapped my ability to produce code for myself until I began to wean off of it. I am a college student. I was in multiple classes where I was supposed to be actively learning these things. Deferring to AI essentially nullified that while also regressing my abilities. If you don’t keep yourself sharp, you will go dull.

If you don’t mind that, or don’t feel it is personally worth it to learn these skills beyond the very, very basics and shallows, go ahead; that’s a different conversation, and this one does not apply to you. I just want to warn those who did not develop their position on AI beyond “the most annoying people in the world are in charge of it and/or pushing it” (a position that, when deployed by otherwise-knowledgeable communists, is correct 95% of the time) that this is something you will have to be cognizant of. The brain responds to the unknowable cube by deferring to it. Stay vigilant.

  • NaevaTheRat [she/her]@vegantheoryclub.org
    4 days ago

    I can code at a basic level. Like I can make and ship a webapp with some tears.

    Running a server with a GPU is probably a bit high-cost, between the build and the electricity (~50 c per kWh).
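For a rough sense of scale, the electricity cost at that rate works out like this (the 300 W continuous draw is an assumption for a modest GPU server, not a figure from the thread):

```python
# Back-of-envelope electricity cost for an always-on GPU server,
# at the ~50 c/kWh rate mentioned above. The 300 W draw is a guess.
watts = 300            # assumed average power draw
hours_per_day = 24
rate_per_kwh = 0.50    # ~50 c per kWh

kwh_per_day = watts / 1000 * hours_per_day   # 7.2 kWh
daily_cost = kwh_per_day * rate_per_kwh      # dollars per day
monthly_cost = daily_cost * 30

print(f"${daily_cost:.2f}/day, ${monthly_cost:.2f}/month")
```

At those assumptions it comes to about $3.60 a day, which is why the build cost is only part of the picture.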

    I wasn’t aware running voice recognition cost so much in ?vector calculations? I remember trying some Google thing at some point and finding it utterly hilarious that they thought it was ready for home use, since you couldn’t script cues yourself and they seemed to have a very twenty-something-American-man view of what working in the kitchen entails. No real capacity for making multiple things at once, tweaking recipes and saving them, etc.

    • KnilAdlez [none/use name]@hexbear.net
      4 days ago

      The GPU is for the LLM; voice recognition is easy for any modern PC. You don’t need to use an LLM, but it does give you some flexibility in the commands you can give it. Without an LLM, Home Assistant can only use sentence matching. Sentence matching in HA is pretty good, don’t get me wrong, but LLMs are a level above.

      The Home Assistant scripting language is just YAML; really easy syntax.
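As a rough illustration of what that YAML looks like, here is a sketch of a sentence-triggered automation using Home Assistant’s `conversation` trigger (the alias, phrasings, and `light.kitchen` entity ID are made up for the example; check the current HA docs for exact keys):

```yaml
# Sketch: voice command handled by sentence matching, no LLM needed.
# Entity IDs and phrasings are hypothetical.
automation:
  - alias: "Voice: kitchen lights off"
    trigger:
      - platform: conversation
        command:
          - "turn off the kitchen lights"
          - "kitchen lights off"
    action:
      - service: light.turn_off
        target:
          entity_id: light.kitchen
```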

      • Lyudmila [she/her, comrade/them]@hexbear.net
        3 days ago

        Is there any way to use something like a Coral processor instead of a whole GPU to run that LLM?

        If so, I think that would make it very nearly energy- and cost-effective enough to run at home. If not, sentence matching sounds best for now.

        • KnilAdlez [none/use name]@hexbear.net
          3 days ago

          Unfortunately no; LLMs are too big for a device with no onboard memory. That being said, you can try a very small LLM on the CPU and see how you like it. That is what I am doing currently. You can also use a hybrid option where it tries sentence matching first, then falls back on the slow LLM. I am going to write a beginner’s guide to Home Assistant tomorrow and go into all of this, but it’s pretty easy to get up on your own.
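The hybrid option described above is just a fast-path/slow-path pattern: try cheap template matching first, and only pay for the LLM call when nothing matches. A minimal sketch of the idea (the templates and the fallback are stand-ins, not Home Assistant’s actual internals):

```python
# Sketch of the hybrid strategy: cheap sentence matching first,
# LLM fallback only when no template matches. Templates and action
# names are illustrative, not real Home Assistant internals.
import re
from typing import Callable, Optional

# Map of regex templates to action-name patterns (hypothetical).
TEMPLATES = {
    r"turn (on|off) the (\w+) light": "light.turn_{0}",
}

def sentence_match(text: str) -> Optional[str]:
    """Return an action name if any template matches, else None."""
    for pattern, action in TEMPLATES.items():
        m = re.fullmatch(pattern, text.lower())
        if m:
            return action.format(m.group(1))
    return None

def handle(text: str, llm_fallback: Callable[[str], str]) -> str:
    """Fast path first; only invoke the slow LLM when matching fails."""
    action = sentence_match(text)
    if action is not None:
        return action
    return llm_fallback(text)

# Usage: the lambda stands in for a slow local LLM call.
print(handle("Turn on the kitchen light", lambda t: "llm:" + t))
print(handle("make it cozy in here", lambda t: "llm:" + t))
```

The first call never touches the fallback; the second does, which is exactly the latency trade-off being described.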

        • KnilAdlez [none/use name]@hexbear.net
          4 days ago

          No problem! I love Home Assistant and private IoT, so I’m always happy to talk about it. I’ll probably write a beginner’s guide in a day or two since people seem interested.

          • NaevaTheRat [she/her]@vegantheoryclub.org
            4 days ago

            Oh that would be fantastic, please contact me with a link if you do.

            I looked into it for some ESP32 stuff (weather station + LED cube link maybe) and didn’t really grok most of its capabilities or the ecosystem.