Peter_Arbeitslos@discuss.tchncs.de to Privacy@lemmy.ml · English · 5 months ago
In the light of Snowden's latest post: What are your FOSS-AIs?
WalnutLum@lemmy.ml · 5 months ago
I’ve seen this said multiple times, but I’m not sure where the idea that model training is inherently non-deterministic is coming from. I’ve trained a few very tiny models deterministically before…
umami_wasabi@lemmy.ml · 5 months ago
Are you sure you can train a model deterministically down to each bit? Like, feeding the resulting weights into sha256sum will yield the same hash every time?
WalnutLum@lemmy.ml · 5 months ago
Yes, of course. There’s nothing gestalt about model training; fixed inputs produce fixed outputs.
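A minimal sketch of the claim being made, assuming a pure-NumPy toy model (the function names and hyperparameters here are illustrative, not from the thread): with a fixed seed and deterministic operations on the same hardware and library versions, two training runs yield bit-identical weights, so hashing the weight bytes gives the same digest both times.

```python
import hashlib

import numpy as np


def train(seed: int = 0) -> np.ndarray:
    """Train a tiny linear model by gradient descent, fully seeded."""
    rng = np.random.default_rng(seed)
    # Synthetic data: 64 samples, 3 features, known true weights plus noise.
    X = rng.standard_normal((64, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(64)
    w = np.zeros(3)
    for _ in range(200):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= 0.1 * grad
    return w


def weight_hash(w: np.ndarray) -> str:
    """Hash the raw weight bytes, analogous to piping them into sha256sum."""
    return hashlib.sha256(w.tobytes()).hexdigest()


# Same seed, same ops, same order -> bit-identical weights, identical hash.
print(weight_hash(train(0)) == weight_hash(train(0)))  # True
```

The caveat worth keeping in mind is that this holds for deterministic CPU ops; GPU training often needs extra flags (e.g. disabling non-deterministic kernels) before runs become bit-reproducible.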