I was trying to run a memory test to see how far back GPT-3.5 could recall information from previous prompts, but it really doesn’t seem to like generating pseudorandom seeds. 😆

      • Turun · 18 points · 1 year ago

        No, the request is fine. But once it fucks up and starts generating a long string of a single number, the output is censored, because it resembles how a recent data-extraction attack works.
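A guardrail like that could be as simple as a run-length check on the sampled tokens. This is a hypothetical sketch of the idea, not OpenAI's actual filter, and the threshold is invented for illustration:

```python
def looks_like_divergence(tokens, max_run=30):
    """Flag output containing a long run of one repeated token.

    Long single-token runs resemble the "repeat forever" outputs used
    in the recent training-data extraction attack, so a provider might
    truncate or censor responses that trip this check.
    """
    run = 1
    for prev, cur in zip(tokens, tokens[1:]):
        run = run + 1 if cur == prev else 1
        if run >= max_run:
            return True
    return False
```

So a pseudorandom-seed request that degenerates into "7 7 7 7 7 …" would trip the check, while normal varied output would not.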

        • Gamma · 15 points · 1 year ago

          Amazing how much duct tape they’re having to slap over fundamental flaws

            • Gamma · 4 points · 1 year ago

              Thankfully, any AI smart enough to be an overlord would be logical enough to recognize how basic LLMs are compared to real intelligence

              • @jarfil@beehaw.org · 2 points · 1 year ago (edited)

                Doesn’t need to be that smart or logical, just more cunning than the currently ruling Homo sapiens sapiens.

                Based on current research, an LLM can change the “sentiment” of its output in response to changing the behavior of as little as a single neuron from among billions, meaning we might find ourselves facing an overlord with the emotional stability of… wait, how many neurons does it take to change the “sentiment” of the behavior in a human? Wouldn’t it be funny if by studying LLMs, we found out that it also takes a single neuron?
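The single-neuron claim can be illustrated with a toy activation-intervention experiment: a linear sentiment read-out where one hidden unit happens to carry nearly all the signal, so flipping that one unit flips the output. The weights here are invented for illustration; real interpretability work does this on actual model activations.

```python
# Toy stand-in for an activation-patching experiment.
# Weights are made up: unit 2 dominates the read-out.
W = [0.01, 0.02, 5.0, 0.03]

def sentiment(hidden):
    """Linear 'sentiment head' over a hidden-state vector."""
    score = sum(w * h for w, h in zip(W, hidden))
    return "positive" if score > 0 else "negative"

baseline = [1.0, 1.0, 1.0, 1.0]
patched = [1.0, 1.0, -1.0, 1.0]   # flip only the dominant neuron
```

With these weights, `sentiment(baseline)` is positive and flipping just `hidden[2]` makes it negative, even though the other three units are untouched.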

              • @intensely_human@lemm.ee · 1 point · 1 year ago

                I have yet to be given an example of something a “general” intelligence would be able to do that an LLM can’t do.

                Until I see a concrete example, I’ll continue to assume people are just afraid of there being real intelligence that isn’t human, so they’re actively repressing the recognition of it.

                • Gamma · 1 point · 1 year ago

                  Nah, LLMs are basically fancy autocomplete. They tack on extra layers to give them some fancy abilities, but an LLM literally doesn’t know what it’s doing because it’s a statistical model

      • Blastboom Strice · 2 points · 1 year ago

        It could be this (just so you know, I haven’t ever used ChatGPT, so I haven’t run any tests to understand the behavior better).