FWIW, it’s not clear-cut whether AI-generated data feeding back into further training reduces accuracy or is generally harmful.
Multiple papers have shown that images generated by high-quality diffusion models, mixed with a proportion of real images (30–50%), improve the adversarial robustness of the resulting models. Something similar might apply to language modeling.
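To make the mixing concrete, here’s a minimal sketch in plain Python of composing such a dataset. The function and variable names are hypothetical, and the 40% default is just an illustrative point inside the 30–50% range those papers report:

```python
import random

def build_mixed_dataset(real_images, synthetic_images, real_fraction=0.4):
    """Build a training set where roughly `real_fraction` of samples are real.

    Sizes the synthetic portion relative to the real one, then shuffles.
    """
    n_real = len(real_images)
    # Number of synthetic samples so real data makes up ~real_fraction of the total.
    n_synth = int(n_real * (1 - real_fraction) / real_fraction)
    mixed = list(real_images) + random.sample(
        list(synthetic_images), min(n_synth, len(synthetic_images))
    )
    random.shuffle(mixed)
    return mixed
```

The same ratio could instead be enforced per batch with a weighted sampler; the dataset-level version above is just the simplest way to show the idea.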
Now that you mention it, I do have the problem of being in my head too much while conversing, which often causes trouble. I guess I just need more practice controlling it, like you said.
Good news, everyone!
Why does Lemmy make it look harder than it is? It’s not a massive load compared to what modern servers and applications are designed to handle.
I couldn’t sign up on beehaw or lemmy.ml even after multiple tries. It feels worse than a simple centralised platform one could build in a month.
Is there an alternative to Reddit for people like me who don’t need this kind of decentralisation (Lemmy feels like centralisation, just multiplied, if any instance can cut you off like this) but who like Lemmy’s text-heavy interface?
Thank you for sharing your insights. It’s good to hear a different perspective.
What I believe is that these studies describe the general behaviour of people and don’t necessarily apply to all of us.
If focusing on positive outcomes helps you, maybe it’s because your baseline sits at the opposite end by default, so leaning strongly positive brings you into balance? Just a hypothesis.
Personally, I get anxious if I try to suppress negative thoughts and focus only on positives. Instead, I use the Buddhist technique of “the glass is already broken”, which helps me stay calm and disciplined.