• 6 Posts
  • 1.21K Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • brucethemoose@lemmy.world to Memes@lemmy.ml · Who? · 17 hours ago

    Yeah.

    People harp on OnlyFans, but how sexualized and “softcore teasing” Insta and even TikTok have become kinda creeps me out. They’re literally entry points to OF.

    I have a parent who’s blissfully off of social media, and it was interesting to see their reaction to what these platforms are like now.


  • Friend, I’m going to be blunt: I think you may have spent time creating this with help from an LLM, and it told you too much of what you want to hear because that’s what they’re literally trained to do.

    As an example: “relativistic coherence”? Computational cycles, SHA-512 checksums, bit flips, and prime instances? You are mixing modern technical terms with highly speculative, theoretical concepts in a way that just isn’t compatible.

    And the text, from what I can parse, is similar. It mixes a lot of contemporary “anthropic” concepts (money, the 24-hour day, and so on), terms that loosely apply to text LLMs, and a few highly speculative concepts that may or may not even apply to the future.


    If you are concerned about AI safety, I think you should split your attention between contemporary, concrete systems we have now and the more abstract, philosophical research that’s been going on even before the LLM craze started. Not mix them together.

    Look into what local LLM tweakers are doing with, for instance, alignment datasets, experiments on “raw” pretrains, or more cutting-edge abliteration like: https://github.com/p-e-w/heretic
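
    For context, abliteration at its core is just directional ablation: estimate a “refusal direction” from activations, then project it out of a layer’s weights. Here’s a toy numpy sketch of that projection step (the function name, shapes, and direction here are illustrative assumptions of mine, not heretic’s actual API):

```python
import numpy as np

def ablate_direction(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the component along direction d from the outputs of W.

    W: (d_out, d_in) weight matrix; d: (d_out,) direction vector,
    e.g. the difference of mean activations on two prompt sets.
    Returns (I - d d^T) @ W, so W's outputs have no component along d.
    """
    d = d / np.linalg.norm(d)           # normalize to unit length
    return W - np.outer(d, d @ W)       # project out the d-component

# Toy example: a random layer and a made-up "refusal direction".
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
d = rng.normal(size=8)

W_ablated = ablate_direction(W, d)
x = rng.normal(size=16)
# The ablated layer's output has (numerically) zero component along d:
print(abs((d / np.linalg.norm(d)) @ (W_ablated @ x)))
```

    Real abliteration does this per-layer on a transformer, but the linear algebra is the same.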

    In other words, look at the concrete, and how actual safety systems can be applied now. Outlines like yours are interesting, but they can’t actually be applied or enforced.

    And on the philosophical side, basically ignore any institute or effort started after 2021, when the “Tech Bro” hype, and then the release of ChatGPT (GPT-3.5) in late 2022, muddied the waters. But there was plenty of safety research going on before then. There are already many documents/ideas similar to what you’re getting at in your outlines: https://en.wikipedia.org/wiki/AI_safety







  • Short answer: Yes.

    Long answer: Yes. Tune all of that nonsense out… But don’t fret if you own a tiny bit, either.

    I kind of like Warren Buffett’s ramblings on it:

    Gold gets dug out of the ground in Africa, or someplace. Then we melt it down, dig another hole, bury it again and pay people to stand around guarding it. It has no utility. Anyone watching from Mars would be scratching their head.

    You could take all the gold that’s ever been mined, and it would fill a cube 67 feet in each direction. For what that’s worth at current gold prices, you could buy all – not some – all of the farmland in the United States. Plus, you could buy 10 Exxon Mobils, plus have $1 trillion of walking-around money. Or you could have a big cube of metal. Which would you take?
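
    For what it’s worth, Buffett’s numbers roughly check out on the back of an envelope. The density and price below are my assumptions, not his:

```python
# Back-of-envelope check of Buffett's gold-cube claim.
# Assumptions (mine, not Buffett's): gold density 19.3 g/cm^3,
# a ballpark price of $2,000 per troy ounce.
FT_TO_CM = 30.48
DENSITY_G_PER_CM3 = 19.3
PRICE_PER_TROY_OZ = 2_000
G_PER_TROY_OZ = 31.1035

edge_cm = 67 * FT_TO_CM
volume_cm3 = edge_cm ** 3
mass_g = volume_cm3 * DENSITY_G_PER_CM3
mass_tonnes = mass_g / 1e6   # ~165,000 t, close to all gold ever mined
value_usd = mass_g / G_PER_TROY_OZ * PRICE_PER_TROY_OZ

print(f"{mass_tonnes:,.0f} tonnes, ~${value_usd / 1e12:.1f} trillion")
```

    That lands around ten trillion dollars at that price, which is the right neighborhood for the farmland-plus-Exxons shopping list in the quote.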



  • Eh, I disagree with the power-usage point, specifically. Don’t listen to Altman lie through his teeth; generation and training should be dirt cheap.

    See the recent Z-Image, which was trained on a shoestring budget and costs basically nothing to run: https://arxiv.org/html/2511.22699v2

    The task energy per image is less than what it took for me to type out this comment.
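
    Rough numbers, all assumed on my end (GPU draw, generation time, laptop draw, typing time), but the orders of magnitude are the point:

```python
# Order-of-magnitude comparison: energy to generate one image vs.
# energy to type a comment. Every figure below is an assumption.
GPU_WATTS = 400          # assumed draw of one inference GPU
SECONDS_PER_IMAGE = 3    # assumed wall-clock time per image
LAPTOP_WATTS = 50        # assumed laptop draw while typing
TYPING_MINUTES = 3       # assumed time to write the comment

image_wh = GPU_WATTS * SECONDS_PER_IMAGE / 3600          # watt-hours
typing_wh = LAPTOP_WATTS * TYPING_MINUTES * 60 / 3600

print(f"image: {image_wh:.2f} Wh, typing: {typing_wh:.2f} Wh")
```

    A fraction of a watt-hour per image versus a few watt-hours of laptop time; even if my numbers are off by several times in either direction, the comparison holds.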


    As for whether we “need” it: yeah, that’s a good point, and what I was curious about.

    But then again… I don’t get why people use a lot of porn services. As an example, I just don’t see the appeal of OF, yet it’s a colossal enterprise.


  • But if we can stop people from looking at the illegal/dangerous stuff, use AI to create it, and let those people watch that instead, I think that would be a net positive. Of course you’d want to identify them, tag them, and keep them separate from everyone else. It’s not a solution to the problem they create, but if you can reduce the demand for it… I dunno, I want nothing to do with that kind of stuff, but I feel like there’s a solution in there somewhere.

    CP detectors got really good well before image gen was even a thing. They had to, as image hosting sites had to filter it somehow. So that’s quite solvable.

    Look at CivitAI as a modern example.

    They filter deepfakes. They filter CP. They correctly categorize and tag NSFW content, all automatically and (seemingly) very accurately. You are describing a long-solved problem in any jurisdiction that actually enforces its laws.


    If you’re worried about power/water usage, that’s already solved too. See frugal models like this one, which could basically serve porn to the whole planet for pennies: https://arxiv.org/html/2511.22699v2


    IMO the biggest sticking point is datasets… The Chinese are certainly using some questionable data for the base models folks tend to use, though the porn finetunes tend to use publicly hosted booru data and such.