I’ve gone down a rabbit hole here.

I’ve been looking at LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chat and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system for spotting patterns and answering the question. I’m not saying it has made anything new, but it seems to me that eventually a chat AI would be able to suggest a new material fairly easily.

Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing things humans haven’t done before?

  • treadful@lemmy.zip

    > If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly).

    I’m not convinced of this. LLMs haven’t just been spitting out prior art, despite what some people seem to suggest. It’s not just auto-complete; that’s only a useful analogy.

    For instance, I’m fascinated by the study that got GPT-4 to draw a unicorn using LaTeX (specifically TikZ). The result wasn’t great, but it was recognizable to us as a unicorn, and apparently that’s gotten better with later iterations. GPT (presumably) has no idea what a unicorn looks like except through text descriptions. Who knows how it goes from written descriptions of a mythical creature to a 2D drawing in a markup language without being trained on images, imagery, or any concept of what things look like.
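
    If you want a feel for what that experiment involves, here’s a rough sketch of how you could reproduce it yourself. The client usage and the model name are my assumptions, not details from the study:

    ```python
    # Rough sketch: ask a chat model to draw a unicorn as TikZ (LaTeX) markup.
    # Assumes the official openai Python client and an OPENAI_API_KEY in the
    # environment; the model name is a placeholder, not the one from the study.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {
                "role": "user",
                "content": "Draw a unicorn in TikZ. Reply with a complete, "
                           "compilable LaTeX document and nothing else.",
            },
        ],
    )

    # Save the markup so it can be compiled with pdflatex and inspected.
    with open("unicorn.tex", "w") as f:
        f.write(response.choices[0].message.content)
    ```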

    > It’s important not to ascribe more intent to what you’re seeing than actually exists.

    But this is true as well. I’m trying hard not to anthropomorphize this LLM, but it sure seems like there’s some emergent effect that looks a lot like intelligence to a layman like me.

    • MajinBlayze@lemm.ee

      To be clear, I’m not trying to argue that it can only produce exactly what it’s seen; I recognize that this argument is frankly overstated in the media. (The interviews with Adam Conover are good examples; he’s not wrong per se, but he does oversimplify things to the point that I think a lot of people misunderstand what’s being discussed.)

      The ability to recombine what it’s seen in different ways as an emergent property is interesting and provocative, but isn’t really what OP is asking about.

      A better example of how LLMs could be useful in research like what OP described would be asking one to coalesce information from multiple existing studies about which properties correlate with superconductivity, in order to help accelerate research in collaboration with actual materials scientists. All of this is research that could be done without LLMs, or even without ML, but having a general way to parse and filter these kinds of documents is still incredibly powerful and will be a force multiplier for these researchers going forward.
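
      As a deliberately simplified sketch of what that kind of parsing and filtering could look like, something along these lines would let you pre-screen a pile of abstracts before a materials scientist (or a second LLM pass) ever reads them. The file name and keyword list are purely illustrative:

      ```python
      # Toy sketch: pre-filter paper abstracts for superconductivity-related
      # terms before handing the shortlist to a human reviewer or an LLM
      # summarizer. Input format and keywords are illustrative assumptions.
      import json

      KEYWORDS = (
          "critical temperature", "meissner", "zero resistance",
          "cuprate", "flat band", "electron-phonon coupling",
      )

      def looks_relevant(abstract: str) -> bool:
          """Cheap keyword screen; the expensive LLM pass comes after this."""
          text = abstract.lower()
          return any(k in text for k in KEYWORDS)

      # abstracts.jsonl: one JSON object per line, e.g. {"title": ..., "abstract": ...}
      with open("abstracts.jsonl") as f:
          papers = [json.loads(line) for line in f]

      shortlist = [p for p in papers if looks_relevant(p["abstract"])]
      for paper in shortlist:
          print(paper["title"])
      ```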

      My favorite example of the limitations of LLMs is to ask one to coin a new word, then Google that word. It’s essentially unable to produce a combination of letters that it doesn’t have indexed, and it doesn’t have an index for words it hasn’t seen. It might be able to create a new meaning for a word it has seen, but that isn’t necessarily the same thing.