Meta “programmed it to simply not answer questions,” but it did anyway.

  • catloaf@lemm.ee · 1 year ago

    Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

    If you’re expecting a glorified autocomplete to know about things it doesn’t have in its training data, you’re an idiot.

    • brucethemoose@lemmy.world · 1 year ago

      Some services will use glorified RAG (retrieval-augmented generation) to pull more current info into the context.

      But yeah, if it’s just the raw model, I’m not sure what they were expecting.
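      The RAG idea mentioned above can be sketched in a few lines. This is a toy illustration, not what Meta or any real service runs: real systems use vector embeddings and a proper LLM, while here the retriever is just keyword overlap and the `news` list, `retrieve`, and `build_prompt` names are made up for the example.

      ```python
      import re

      def tokenize(text: str) -> set[str]:
          """Lowercase and split into word tokens, ignoring punctuation."""
          return set(re.findall(r"\w+", text.lower()))

      def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
          """Rank documents by word overlap with the query; return the top k.
          A real RAG system would use embedding similarity instead."""
          q_tokens = tokenize(query)
          scored = sorted(
              documents,
              key=lambda doc: len(q_tokens & tokenize(doc)),
              reverse=True,
          )
          return scored[:k]

      def build_prompt(query: str, documents: list[str]) -> str:
          """Prepend retrieved text so the model can answer about events
          that happened after its training cutoff."""
          context = "\n".join(retrieve(query, documents))
          return f"Context:\n{context}\n\nQuestion: {query}"

      # Hypothetical "breaking news" the base model has never seen:
      news = [
          "Stock markets closed mixed on Friday.",
          "A major storm hit the coast over the weekend.",
      ]
      prompt = build_prompt("What happened with the storm?", news)
      ```

      The point is that the model itself never learns the new facts; the service just pastes retrieved text into the prompt, so answers are only as current as whatever the retriever can find.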