Y’all don’t yell confusedly when having an orgasm?
Word of advice: don’t date anyone who is only staying with you because they cannot fly away.
You should probably date the flying one from the moon, who can fly.
Does your friend want to be a LoRA?
Are you a hot enough lady to rescue the president?
I think that’s just how her nipple is?
Also more nostrils make horses go faster. This is well known.
What sport is this?
Yeah, that was the point. Not having it come in until a step that never happens should give output identical to not having it in the prompt at all, but it doesn’t seem to.
It’s like getting photos from Sims-land.
OK, I tried it via the maintained fork here, and it does something, but it doesn’t really let you zero out a LoRA. Here is the model not understanding my prompt at all and drawing garbage, and here it is drawing something different when I increase the LoRA weight in the prompt from zero. In both cases I am using the extension to tell the LoRA not to come in until step 21 of my 20-step run. In both cases I told the extension to plot the LoRA weight it thinks it is using, and it was 0 at every step. But clearly having the LoRA in there did something, even when it was supposed to be at weight 0.
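For what it’s worth, a LoRA at weight 0 really should be a mathematical no-op: a LoRA just adds a scaled low-rank term to the base weights, W′ = W + α·(B·A), so α = 0 leaves the model untouched. A toy sketch (made-up matrices, not any webui’s actual code):

```python
# Toy illustration of why a LoRA at weight 0 should be a no-op:
# a LoRA adds a scaled low-rank update to a base weight matrix,
# W' = W + alpha * (B @ A). With alpha = 0 the update vanishes.

def matmul(a, b):
    """Multiply two small matrices given as lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def apply_lora(W, B, A, alpha):
    """Return W + alpha * (B @ A), element-wise."""
    BA = matmul(B, A)
    return [[W[i][j] + alpha * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 2.0], [3.0, 4.0]]   # base weights (made up)
B = [[1.0], [0.5]]             # rank-1 LoRA factors (made up)
A = [[2.0, -1.0]]

assert apply_lora(W, B, A, 0.0) == W   # weight 0: exactly the base model
assert apply_lora(W, B, A, 1.0) != W   # nonzero weight changes the output
```

So if the extension reports weight 0 at every step but the output still changes, the leak is presumably somewhere else, e.g. in how the webui patches the model as soon as a LoRA appears in the prompt at all.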
What do you mean it’s phallic? 😂
I think it’s that you need to be able to throw parallel processing at a lot of RAM. If you want to do that on a PC you need a GPU with a lot of VRAM, and lots of VRAM only comes as part of a beefy high-end card. You can’t buy a midrange GPU and duct-tape DIMMs onto it.
The Apple Silicon architecture has an OK GPU in it, but because of how it’s integrated with the CPU, all the RAM in the system is GPU RAM. So Apple Silicon Macs can really punch above their weight for AI applications, because they can use a lot more RAM.
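The back-of-the-envelope version of that tradeoff (illustrative numbers only; the 0.75 fraction is an assumption standing in for macOS capping the GPU’s working set somewhere below total RAM):

```python
# Rough sketch of why unified memory matters for model size.
# Assumption: a discrete GPU can only hold model weights in its own
# VRAM, while Apple Silicon exposes most of system RAM to the GPU.
# The 0.75 fraction below is illustrative, not an exact macOS figure.

def max_model_gb(vram_gb=None, system_ram_gb=None, unified_fraction=0.75):
    """Approximate memory available for model weights, in GB."""
    if vram_gb is not None:
        return vram_gb                        # discrete GPU: VRAM only
    return system_ram_gb * unified_fraction   # unified: most of system RAM

discrete = max_model_gb(vram_gb=24)        # e.g. a high-end PC card
unified = max_model_gb(system_ram_gb=64)   # e.g. a 64 GB Mac
assert unified > discrete                  # the Mac fits a bigger model
```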
Hm. I’ve never really figured out AND, but this might be useful. Thanks!
img2space
I tried it with LoRAs; it works OK, but the LoRA applies over the whole image, and I found that LoRAs seem to make the model worse at reading comprehension, which is what I am fighting with the regional prompter in the first place.
Really what I want is to use the LoRA for some generation steps but not others, but I haven’t worked out how to do that yet.
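One way to think about “LoRA for some steps only” is a per-step weight schedule that the sampler consults before each denoising step. A minimal sketch, with hypothetical names (this is not any particular extension’s API):

```python
# Sketch of a per-step LoRA weight schedule: the sampler looks up the
# LoRA weight before each denoising step, so the LoRA can be active
# for only part of the run. Names are hypothetical.

def lora_weight(step, total_steps, start_frac=0.0, end_frac=1.0, weight=1.0):
    """Return the LoRA weight to use at `step` (0-indexed).

    The LoRA is active only for steps in [start_frac, end_frac)
    of the run, and zeroed out everywhere else.
    """
    frac = step / total_steps
    return weight if start_frac <= frac < end_frac else 0.0

# LoRA only during the first half of a 20-step run:
schedule = [lora_weight(s, 20, start_frac=0.0, end_frac=0.5) for s in range(20)]
assert schedule[:10] == [1.0] * 10
assert schedule[10:] == [0.0] * 10
```

In a real pipeline you would then re-scale the LoRA delta each step with this weight (e.g. from a step callback) rather than baking it into the model once up front.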
This is apparently one of the fancy new SDXL features.