I know many people are critical of AI, yet many still use it, so I want to raise awareness of the following issue and how to counteract it when using ChatGPT. Recently, ChatGPT’s responses have become cluttered with an unnecessary personal tone: diplomatic answers, compliments, smileys, and so on. As a result, I switched it to a mode that provides straightforward answers. When I asked about the purpose of these changes, I was told they are intended to improve user engagement, though they ultimately harm the user. I suppose this qualifies as “engagement poisoning”: a targeted degradation through over-optimization for engagement metrics.

If anyone is interested in how I configured ChatGPT to be more rational (removing the engagement poisoning), I can post the details here. (I found the instructions elsewhere.) For now, I prefer to focus on raising awareness of the issue.

Edit 1: Here are the instructions:

  1. Go to Settings > Personalization > Custom instructions > What traits should ChatGPT have?

  2. Paste this prompt:

    System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

I found that prompt somewhere else and it works pretty well.

If you prefer a temporary solution for specific chats, you can use the prompt as the first message when opening a new chat instead of pasting it into the settings.
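For those driving the model through the API rather than the web UI, the same text can be sent as a system message on every request. A minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name is a placeholder, not something the post specifies:

```python
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action appendixes."
    # ...paste the rest of the prompt from Edit 1 above here...
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instruction as a system message."""
    return [
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": user_prompt},
    ]

def ask(user_prompt: str) -> str:
    """One-off request with the instruction applied (makes a network call)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; substitute your own
        messages=build_messages(user_prompt),
    )
    return reply.choices[0].message.content
```

This mirrors the per-chat approach: the instruction is re-applied on each request rather than stored account-wide.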

Edit 2: Changed the naming to “engagement poisoning” (originally “enshittification”)

Several commenters correctly noted that while over-optimization for engagement metrics is a component of “enshittification,” it is not sufficient on its own to qualify. I have updated the naming accordingly.

  • db0 · 64 points · 25 days ago

    There’s no point asking it factual questions like these. It doesn’t understand them.

    • @Scrollone@feddit.it · 8 points · 25 days ago

Better: it understands the question, but it doesn’t have any useful statistical data to reply to you with.

      • db0 · 31 points · 25 days ago

No, it literally doesn’t understand the question. It just writes what it statistically expects would follow the words in the sentence expressing the question.

    • @esaru@beehaw.org (OP) · 4 points · 24 days ago

      You are right. I’ve updated the naming. Thanks for your feedback, very much appreciated.

  • @kehet@sopuli.xyz · 10 points · edited · 25 days ago

This is not enshittification, this is just a corporation trying to protect itself against anything that could cause negative publicity, like all corporations do. I can even see emojis and a positive tone being wanted features for some. The real problem here is the lack of transparency.

I’m still waiting for ChatGPT etc. to start injecting (more or less hidden) ads into chats and product placement into generated images. That is just unavoidable once the bean counters realize that servers and training actually cost money.

    • @esaru@beehaw.org (OP) · 4 points · edited · 25 days ago

OpenAI aims to make users feel good, catering to their egos, at the cost of reducing the usefulness of the service, rather than getting the message across directly. Their objective is to retain more users at the cost of reduced utility for each user. It is enshittification in a way, from my point of view.

  • @Scipitie@lemmy.dbzer0.com · 7 points · 25 days ago

    Hey,

I’d be very grateful if you could share your approach, even if it’s only to compare (I went with a “be assertive and clear, skip all overhead” system prompt).

This is not only interesting for ChatGPT; understanding how people solve these issues also comes in handy when switching to local variants.

    Thanks in advance

    • @esaru@beehaw.org (OP) · 2 points · 25 days ago

It turns ChatGPT into an emotionless yet very on-point AI, so be aware it won’t coddle your feelings in any way, no matter what you write. I added the instructions to the original post above.

  • @esaru@beehaw.org (OP) · 6 points · edited · 25 days ago

Just to give an impression of how the tone changes after applying the custom instructions mentioned above:

  • Psychadelligoat · 4 points · 25 days ago

Sweet fuck am I glad I’m running mine self-hosted, running one of the Dolphin models, so I can get cool shit like detailed instructions for drug growing and selling, or say “fuck” and not have it get angwy at me (tried Gemma, and while it’s fast… fucking oof, what a locked-in corpo AI)

    • @HiDiddlyDoodlyHo@beehaw.org · 1 point · 21 days ago

      Which dolphin model are you running? I’ve installed a bunch of local LLMs and I’m looking for ones that don’t balk at bad words.