dualmindblade [he/him]

  • 2 Posts
  • 68 Comments
Joined 4 years ago
Cake day: September 21st, 2020

  • It really is. Another thing I find remarkable is that all the magic vectors (features) were produced automatically, without looking at the actual output of the model — only at activations in a middle layer of the network — and using a loss function that is purely geometric in nature; the training has no idea what the various features it is discovering mean (rough sketch of that kind of setup below).

    And the fact that this works seems to confirm, or at least almost confirm, a non-trivial fact about how transformers do what they do. I always like to point out that we know more about the workings of the human brain than we do about the neural networks we have ourselves created. That’s probably still true, but this makes me optimistic we’ll at least cross that very low bar in the near future.
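
    (To make the “purely geometric loss on activations” idea concrete, here is a minimal sketch of that kind of setup: a sparse autoencoder trained only on middle-layer activations, scored by reconstruction error plus a sparsity penalty. PyTorch, the dimensions, and all names here are my own illustrative assumptions, not the actual code behind the work.)

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Learns an overcomplete dictionary of 'feature' directions from raw activations."""
    def __init__(self, d_model=4096, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # per-example feature activations
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(activations, features, reconstruction, l1_coeff=1e-3):
    # Geometric reconstruction term: how well the features rebuild the activations.
    mse = (reconstruction - activations).pow(2).mean()
    # Sparsity term: push each activation vector to be explained by few features.
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity
```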

  • Okay, just thinking out loud here. Everything I’ve seen so far works as you described: the training data is taken either from reality or generated by a traditional solver. I’m not sure this is a fundamental limitation, though; you should be able to create a loss function that asks “how closely does the output satisfy the PDE?” rather than “how closely does the output match the data generated by my solver?” (sketch below). But anyway, you wouldn’t need to improve on the accuracy of the most accurate methods to get something useful: if the NN is super fast and has acceptable accuracy, you can use it to do the bulk of your optimization and then use a regular simulation and/or reality to check the result and possibly do some fine-tuning.
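
    (A minimal sketch of what a “does the output satisfy the PDE?” loss could look like, using the 1D heat equation as a stand-in. The network shape, diffusivity, and autograd setup are illustrative assumptions; a real physics-informed training loop would also need boundary- and initial-condition terms.)

```python
import torch
import torch.nn as nn

# Small network mapping (x, t) -> u(x, t); the architecture is an arbitrary choice.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
alpha = 0.01  # assumed diffusivity for the toy heat equation u_t = alpha * u_xx

def pde_residual_loss(x, t):
    """Penalize the network for violating the PDE at sampled (x, t) points."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=ones, create_graph=True)[0]
    # The residual is zero exactly when the network output satisfies the PDE.
    return (u_t - alpha * u_xx).pow(2).mean()

# Usage: sample collocation points anywhere in the domain -- no solver data needed.
loss = pde_residual_loss(torch.rand(1024), torch.rand(1024))
```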

  • So this is way, way outside my expertise, grain of salt and whatnot… Wouldn’t the error in most CFD simulations, regardless of technique, quickly explode to its maximum due to turbulence? Like, if you’re designing a stirring rotor for a mixing vessel, you’re optimizing for the state of the system at T + [quite a bit of time], and I don’t believe hand-crafted approximations can give you any guarantees there (toy illustration below). And I get the objection about training time, but I think the ultimate goal is to train a NN on a bunch of physical systems with different boundary conditions and fluid properties, so you only need to train once and then you can just do inference forevermore.
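
    (A toy illustration of that error-explosion worry, under the assumption that the Lorenz system is a fair stand-in for a chaotic flow: two trajectories starting a hair apart separate exponentially fast, which is why long-horizon guarantees are hard for any method.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz system (a standard chaos toy model)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # tiny perturbation, standing in for simulation error
for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)
```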

  • Okay I’m posting this in news even though I don’t have a link, but my source is super solid…

    In the first round of arrests at UT Austin, the Travis County DA cited “copy pasted” probable cause affidavits as the reason for dropping the charges. This time around the campus police took their time with the paperwork, which is why we haven’t heard yet whether the charges are going forward. They were instructed to “personalize” all the PC stuff and… they’re still copy pasted, at least some of them are. With the thousands of hours of video and all the effort they could muster, they still couldn’t do it. As far as is known, the only difference from last time is that they’re trickling in gradually rather than all being filed at once.