Reversal knowledge in this case being: if the LLM knows that A is B, does it also know that B is A? And apparently the answer is pretty resoundingly no! I’d be curious to see whether some CoT affected the results at all
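
If anyone wants to poke at this themselves, here's a rough sketch of how you could probe it with a small open model via Hugging Face transformers: score the same fact stated forwards and reversed, with and without a "think step by step" prefix bolted on. The model name, prompts, and example fact are just placeholders I picked, and the prefix is only a crude stand-in for real CoT (no reasoning is actually generated):

```python
# Rough sketch, not the paper's code: compare how strongly a small causal LM
# scores a fact stated forwards vs. reversed, with an optional CoT-style prefix.
# Model choice, prompts, and the example fact are all placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; nothing like the models the paper tested
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_logprob(prompt: str, answer: str) -> float:
    """Total log-probability the model assigns to `answer` following `prompt`."""
    # Assumes the prompt tokenizes the same way with and without the answer appended.
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = full_ids[0, 1:]
    # Only score the answer tokens, i.e. everything after the prompt.
    return sum(log_probs[i, targets[i]].item()
               for i in range(prompt_len - 1, targets.shape[0]))

# "Think step by step" prefix as a crude CoT stand-in.
cot = "Let's think step by step about which people are related and how. "
print("forward        :", answer_logprob("Tom Cruise's mother is", " Mary Lee Pfeiffer"))
print("reversed       :", answer_logprob("Mary Lee Pfeiffer's son is", " Tom Cruise"))
print("reversed + CoT :", answer_logprob(cot + "Mary Lee Pfeiffer's son is", " Tom Cruise"))
```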

  • noneabove1182@sh.itjust.works (OP, mod) · 1 year ago

    To start, everything you’re saying is entirely correct

    However, the existence of emergent behaviours like chain-of-thought reasoning shows that there’s more to this than pure text prediction: the model picks up patterns it was never explicitly trained on, so it’s entirely reasonable to wonder whether it can also recognize reversed patterns

    Hallucinations are a vital part of understanding these models; they might not be a long-term problem, but getting models to recognize what they actually know to be true is extremely important for the growth and adoption of LLMs
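
    For what it’s worth, here’s one crude way to peek at whether a model “thinks it knows” something: look at the probabilities it assigned to its own generated tokens. This is just a toy illustration with a placeholder model and prompt, not a real hallucination detector, and how well those probabilities reflect truth is its own can of worms:

    ```python
    # Toy illustration, not a real hallucination check: generate an answer and
    # inspect the probability the model gave each of its own tokens.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")            # placeholder model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of Australia is"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=5, do_sample=False,
                         return_dict_in_generate=True, output_scores=True)

    # Tokens the model generated, and the probability it assigned to each one.
    answer_ids = out.sequences[0, ids.shape[1]:]
    token_probs = [torch.softmax(step_scores[0], dim=-1)[tok_id].item()
                   for step_scores, tok_id in zip(out.scores, answer_ids)]
    print("answer:", tok.decode(answer_ids))
    print("per-token probabilities:", [round(p, 3) for p in token_probs])
    ```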

    I think there’s a lot more to the training and generation of text than you’re giving it credit for; the simplest way to explain it is that it’s text prediction, but there’s far too much depth in the training and the model to say that’s all it is

    At the end of the day it’s just a fun, thought-inducing post :) but when Andrej Karpathy says he doesn’t have a great intuition for how LLM knowledge works (though in fairness he theorizes the same as you, directional learning), I think we can at least agree none of us knows for sure what’s correct!