• @Electricblush@lemmy.world
    165
    edit-2
    26 days ago

    All these “look at the thing the ai wrote” articles are utter garbage, and only appeal to people who do not understand how generative ai works.

    There is no way to know whether you actually got the AI to break its restrictions and output something “behind the scenes”, or whether it’s just generating the reply that is most likely what you are after with your prompt.

    Especially when more and more articles like this come out and get fed back into the nonsense machines, teaching them what kind of replies are most commonly associated with such prompts…

    In this case it’s even more obvious that a lot of its statements are based on various articles and discussions about its statements. (Those were also most likely based on news articles about various entities labeling Musk as a spreader of misinformation…)

    • @Draces@lemmy.world
      42
      25 days ago

      only appeal to people who do not understand how generative ai works

      An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people

    • @Elgenzay@lemmy.ml
      18
      25 days ago

      Thank you, thank you, thank you. I hate Musk more than anyone but holy shit this is embarrassing.

      “BREAKING: I asked my magic 8 ball if trump wants to blow up the moon and it said Outlook Good!!! I have a degree in political science.”

      • @474D@lemmy.world
        8
        25 days ago

        Which, oddly enough, is very useful for the regular everyday office-job bullshit that you need to input lol

      • Balder
        1
        edit-2
        24 days ago

        I mean, you can argue that if you ask the LLM something multiple times and it gives that answer the majority of those times, it is being trained to make that association.

        But a lot of these “Wow! The AI wrote this” moments might just as well be some random thing that came out of it by chance.

    • @morrowind@lemmy.ml
      7
      25 days ago

      This is correct.

      In this case it is true though. Soon after Grok 3 came out, there were multiple system prompt leaks with instructions not to badmouth Elon or Trump.

  • @pyre@lemmy.world
    53
    25 days ago

    because it’s an llm there’s zero credence to what it says, but I like that grok’s takes on elon are almost exclusively dunking on him. this is like the 40th thing I’ve seen about grok talking about elon, and it always talks shit about him

    • TJA!
      11
      25 days ago

      But maybe you are only seeing the ones that dunk on Elon, because someone thinks those are newsworthy.

      Tbh I don’t think any of that is newsworthy, but ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

      • @pyre@lemmy.world
        6
        25 days ago

        it’s not, and that is probably the case. still good to see because I’m sure it annoys him as the most insecure bitch baby in the world.

    • sircac
      6
      25 days ago

      Well, there is probably some survivorship/confirmation bias in those statistics; those answers are the funny ones… in any case, an LLM probably isn’t necessary to make such statements

    • @4am@lemm.ee
      12
      25 days ago

      It doesn’t. All it “knows” is that it has been trained on data that makes that claim in the text (i.e. people’s tweets) and that, statistically, that’s the answer you are looking for.

      All it does is take a given set of inputs, and calculate the most statistically likely response. That’s it. It doesn’t “think”. It just spews.
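As a loose illustration of that point (a toy sketch, not Grok’s actual code — the vocabulary and scores below are invented), generation boils down to sampling the next token from a probability distribution the model assigns to possible continuations:

```python
import math
import random

def softmax(scores):
    """Turn raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and made-up scores a model might assign
# to continuations of some prompt.
vocab = ["misinformation", "memes", "innovation"]
scores = [4.0, 1.5, 0.5]  # invented numbers, purely illustrative

probs = softmax(scores)

# Sample the next token: the statistically likely reply wins most of
# the time, but other continuations still come out by chance.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)
```

The sampling step is why asking the same question twice can yield different answers: nothing is “revealed”, only drawn from the distribution.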

    • b1tstrem1st0
      2
      25 days ago

      Training. It’s just the awareness of the collective users of X showing up as Grok responses.

      Basically, we can’t verify everything that AI says. Verification is still human labour.

  • sircac
    33
    25 days ago

    An LLM can also “reveal” that water ice melts into maple syrup given the proper prompts. If people can already lie (consciously or not) in proportion to their biases, I don’t understand why somebody would treat an LLM’s output as fact…

    • @Someone8765210932@lemmy.world
      15
      edit-2
      25 days ago

      I agree, but in this case I don’t think it really matters whether it is true. Either way, it is hilarious. If it is false, it shows how bad AI hallucination is and the sorry state of AI.

      Should the authors who publish this mention how likely this is all just a hallucination? Sure, but I think Musk is such a big spreader of misinformation, he shouldn’t get any protection from it.

      Btw. Many people are saying that Elon Musk has (had?) a small PP and a botched PP surgery.

  • ZeroOne
    8
    25 days ago

    In other words, a proprietary response generator can be tweaked. How obvious.

    I am wondering what kind of person will take Grok’s word at face value.