• raspberriesareyummy@lemmy.worldOP
      2 months ago

      I think there’s an important nuance to lmgtfy or RTFM. Those two were clearly identifiable as the kind of (sometimes snarky) min-effort response, and they were sometimes absolutely justified (e.g. if I googled OP’s question and the very first result correctly answers it, which I have made the effort of checking myself).

      For the slop responses, however, the receiver sometimes has to invest considerable time into reading & processing them just to recognize that they might be pure slop. And when in doubt, we as readers are left with the moral dilemma of potentially offending the writer by asking “Did you just send me LLM output?”

      It is both harder to identify and it drives a wedge into online (and personal) relationships because it adds a layer of doubt or distrust. This slop shit is poison for internet friendships. Those tech bros all need to fuck off and use their money for a permanent coke trip straight until they become irrelevant. :/

  • Blaster M@lemmy.world
    2 months ago

    Well, it’s common courtesy to assume that if someone is asking you, they already asked Google or whatever and think you might have the answer they can’t find.

    • raspberriesareyummy@lemmy.worldOP
      2 months ago

      That, and for some questions (i.e. nuances), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English “[ so and so ] is [ whatever ], isn’t it?” vs. “[ so and so ] is not [ whatever ], is it?” - because Turkish has “isn’t it?” (değil mi? = not so?) but it doesn’t have “is it?”, mostly because “to be” works much differently in the language.

      A google result wouldn’t help me at all - the pure grammar answer is “there’s no form of ‘is it’ to be coupled with a negative assumption/assertion”. But does a language construct exist to convey the nuance of “the speaker assumes that something is NOT [soandso] and wants to ask for confirmation” vs. “the speaker assumes that something IS [soandso] and asks for confirmation”?

      I still don’t know the answer, but it appears this nuance can’t be expressed in Turkish without talking around it in a longer sentence.

  • owenfromcanada@lemmy.ca
    2 months ago

    I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

    The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something that you didn’t otherwise show interest in. Or like that friend from high school who messages you out of the blue, and after a few messages you realize they’re trying to sell you their MLM garbage.

    • Pyr@lemmy.ca
      2 months ago

      Or just sending the link to chatgpt.

      “Don’t ask me, just ask chatgpt! What am I, your boss or something?!”

    • raspberriesareyummy@lemmy.worldOP
      2 months ago

      I don’t quite get the equivalence there.

      It’s garbage insulting your intellect and personal relationship with the sender. Whereas an unsolicited dick pic is garbage insulting your eyes and personal relationship with the sender.

      • owenfromcanada@lemmy.ca
        2 months ago

        They’re both garbage, sure, but I wouldn’t call them equivalent. Especially in severity: one is insulting, the other is sexual harassment.

        The key word is “unsolicited.” An LLM response to a question you ask is garbage, but it’s solicited garbage. Like asking someone in Home Depot where the hammers are and having them take 10 minutes to look it up on their phone. It’s a stupid response, but it was solicited. It’s at least a lazy attempt to respond relevantly, however insulting.

  • CallMeAnAI@lemmy.world
    2 months ago

    I mean on one hand, it’s a shower thought. On the other, this is a really dumb shower thought.

    • Apytele@sh.itjust.works
      2 months ago

      I often use AI to break up my ADHD mono-sentence paragraphs. I’ll stream-of-consciousness my reply, then tell it not to change my wording but to break up the excessively long sentences, and to reorder and split things into paragraphs that flow well. I’m still doing the writing, but having an advanced spell check is actually super useful.

    • raspberriesareyummy@lemmy.worldOP
      2 months ago

      That’s the polite variant, but it still involves the use of an LLM, and the assumption that machine learning is AI (it’s not, despite what the tech bros tell you). People using LLMs should be treated like people who pick their nose and eat their boogers at the dinner table. :p

  • Jakeroxs@sh.itjust.works
    2 months ago

    Specifically if you don’t even specify it’s AI. Like, I don’t mind using it, but be upfront that you don’t know and consulted an AI.

    Like, I see it happening at my work: people just straight copy-paste from Copilot or w/e, and it’s clear to me that’s what it is (especially if it’s discussing things I know that person has never heard of before lol)

    • raspberriesareyummy@lemmy.worldOP
      2 months ago

      I am slowly switching to increasingly less diplomatic reactions when I feel someone is using slop to respond to me or produce any kind of work text. Eventually I’ll probably advance to offensive reactions à la “Are you so f*cking incompetent that you can’t do better than copy-pasting into a glorified word prediction software?”

      • Jakeroxs@sh.itjust.works
        2 months ago

        I definitely use it at work to “corporate” my emails or descriptions for things because my way of speaking would be frowned upon lmao. Literally “corpo this sentence please” or something along those lines.

        Edit: To be clear, in communications where I have to sound corpo, when talking with my fellow workers I’m normal lmfao

  • radicallife@lemmy.world
    2 months ago

    But I have my phone’s texting set permanently to respond with AI so I never have to talk to anyone.