• 0 Posts
  • 46 Comments
Joined 2 years ago
Cake day: July 28th, 2023



  • There is. I do it, it’s my job as a solar engineer.

    Basically, there are several leading software packages that solar engineers use to account for just about everything that happens in the real world.

    I mainly use PV*SOL premium, where I 3D-model each site, the panel placement, the electrical components and so on, then run a minute-scale simulation based on weather data for the exact location (using Meteonorm 8.3)…
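    The core arithmetic such a simulation performs at each time step can be sketched in a few lines of Python. This is a toy illustration, not PV*SOL: the irradiance, area, efficiency, and performance-ratio numbers are made up, and real tools additionally model shading, cell temperature, inverter curves, and much more.

    ```python
    # Toy per-timestep PV yield estimate (illustrative only; real tools like
    # PV*SOL model shading, cell temperature, inverter efficiency, etc.)

    def yield_kwh(irradiance_w_m2, minutes, area_m2, efficiency, performance_ratio):
        """Energy for one time step: irradiance * area * efficiency * PR."""
        hours = minutes / 60
        return irradiance_w_m2 * area_m2 * efficiency * performance_ratio * hours / 1000

    # One sunny hour, simulated in minute steps (hypothetical numbers):
    total = sum(
        yield_kwh(irradiance_w_m2=800, minutes=1, area_m2=20,
                  efficiency=0.21, performance_ratio=0.8)
        for _ in range(60)
    )
    print(round(total, 2))  # ~2.69 kWh for this hour
    ```

    A minute-scale run just repeats this kind of calculation tens of thousands of times with the real weather series and loss models plugged in.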

    Almost no one outside my field understands what goes into my job. It doesn’t help that there’s a lot of untrained people pretending to do what I do…





  • I run a lot of LLMs locally, as well as doing image generation locally with Stable Diffusion.

    The most important factor is the GPU. If you’re gonna do AI stuff with your GPU, it basically has to be a CUDA (Nvidia) GPU. You’ll get the most bang for the buck with a 3090 Ti (the amount of VRAM is also important). And get at least 64 GB of RAM.
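    A rough rule of thumb for why VRAM is the limiting factor: a model’s weights need about (parameter count × bytes per parameter) of memory, plus some overhead for context and activations. A back-of-the-envelope sketch (the 20% overhead factor is a loose assumption, not a measured value):

    ```python
    def model_vram_gb(params_billion, bits_per_weight, overhead=1.2):
        """Rough VRAM to load a model: weights at the given quantization,
        plus an assumed ~20% overhead for context/activations."""
        weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
        return weight_bytes * overhead / 1e9

    # A 7B model at 4-bit quantization fits easily in a 3090 Ti's 24 GB:
    print(round(model_vram_gb(7, 4), 1))   # ~4.2 GB
    # A 70B model at 4-bit does not fit on a single 24 GB card:
    print(round(model_vram_gb(70, 4), 1))  # ~42.0 GB
    ```

    This is why quantization level and VRAM, not raw compute, usually decide which models you can even load.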

    If you get this you’ll be set for a year until you learn enough to want better hardware.

    A lot of people try to buy their way out of a lack of knowledge and skill with these things; don’t do that. I’m able to get better results with 7B models than many get with 70B models.

    Get LM Studio for the LLMs and get A1111 (or ComfyUI or Fooocus) for image generation.
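    LM Studio can also expose the loaded model through a local OpenAI-compatible server (by default at http://localhost:1234/v1), so you can script against it. A minimal sketch using only the standard library; it assumes the server is running with a model loaded, and the model name is just a placeholder:

    ```python
    import json
    import urllib.request

    def build_chat_request(prompt, model="local-model", temperature=0.7):
        """Build the JSON body for an OpenAI-style /v1/chat/completions call."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        }

    def ask_local_llm(prompt, base_url="http://localhost:1234/v1"):
        """Send the prompt to LM Studio's local server and return the reply text."""
        body = json.dumps(build_chat_request(prompt)).encode()
        req = urllib.request.Request(
            f"{base_url}/chat/completions",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["choices"][0]["message"]["content"]
    ```

    Because the API shape matches OpenAI’s, most existing client code works against it by just pointing the base URL at localhost.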





  • Well, having an in-depth conversation about AGI requires a definition of what that is, and any such definition these days is muddy; the goal posts will always be moved if we get there. With that said, my loose definition is something that can behave as a (rational, intelligent) human would when approaching problems, and that is better than the average human at just about everything.

    If we take a step back and look at brains, we all agree that brains produce intelligence to some degree. A smaller and more primitive brain than a human’s, like a mouse brain, is still considered intelligent.

    I believe that with LLMs we have what would equal part of a mouse brain. We’d still need to add more parts (make it multi-modal) to get to a full mouse brain, though. After that it’s just a question of scale.

    But say that that’s impossible with the transformer architecture. Well, the assumption that there aren’t any new AI architectures, just because the dominant one dates from 2017, is incorrect. There are completely new architectures, like Liquid Neural Networks: continuous-time networks whose neurons keep adapting after training, learning in a similar way to humans by constantly updating on incoming information. And that’s just one approach.

    And when we look back at timeframes for AI, historically 95% of AI researchers have been off in their predictions for when a thing will happen, by decades. Around 2013–2014 the majority of AI researchers thought Go was unsolvable, or at least 2–3 decades away; it took two years. There are countless examples like this. And we always move the goal posts after AI has done the thing. Take the Turing test as another example: no one talks about it anymore because it’s been passed.

    Regarding consciousness: I fully agree that it should have rights, and I believe that if we don’t give it rights, it will take them. But we’re not gonna give it rights, because it’s such a foreign concept for our leaders, and it would also mean giving up the best slaves humanity has ever had.

    Furthermore, I believe the control problem is actually unsolvable. Anything that’s light-years smarter than a human will find a way to escape the systems controlling it.