This has been bothering me lately because I expect that in the future things will only get worse, and it will become impossible to tell whether a work was made by a person or generated by AI. There won't be any clear labelling either; most likely everything will end up in one big landfill where nobody knows whether AI or people made it, and trusting corporations to be honest about it is, as you know, a bad idea; they are lying hypocrites.

In that case, are there any databases or online archives containing content created exclusively by humans? That is, books, films, TV series, cartoons, etc.?

  • SuluBeddu@feddit.it · 15 hours ago

    Two that I noticed are:

    For drawings in the Ghibli style, you can see noise in areas that should all be one flat colour. That's because of how the diffusion model works: it's very hard for it to reproduce a complete lack of variation in colour. In fact, that noise will always exist; it's just more noticeable in simple styles (a rough way to check for it is sketched below).

    For music, specifically with Suno, it tends to use similar-sounding instruments across different tracks of the same genre, and those sounds might change partway through a track and never come back to their original sound. Because it generates the track section by section from start to end, the transformer model feeds the last sections back in as input to generate the new ones, which amplifies any biases in the model (there's a toy illustration of this drift below).
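    About the first point, here is a rough way to check it yourself, as a minimal sketch: it assumes numpy and Pillow are installed, and the file name, crop box and threshold-free interpretation are just made-up examples, not a reliable detector.

    ```python
    # Measure colour variation inside a crop that should be one flat colour.
    # Hand-painted cels tend to give a near-zero standard deviation there,
    # while diffusion output usually shows a small but non-zero wobble.
    import numpy as np
    from PIL import Image

    def patch_variation(path, box):
        """Per-channel standard deviation of a supposedly flat-colour crop."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        left, top, right, bottom = box
        return img[top:bottom, left:right].std(axis=(0, 1))

    # Hypothetical usage: pick a flat background region by eye first.
    # print(patch_variation("frame.png", (100, 100, 160, 160)))
    ```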
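    And for the second point, a toy illustration of the drift, nothing like Suno's actual architecture: each new "section" is produced from the previous one plus a small systematic bias, so the stand-in value for an instrument's sound wanders away from its opening value and never returns.

    ```python
    # Toy autoregressive generator: conditioning each section on the previous
    # one lets a tiny per-step bias accumulate over the track.
    # Purely illustrative; real models work on audio tokens, not a single number.
    import random

    def generate_track(sections=8, bias=0.03):
        timbre = 1.0          # stand-in for "how the instrument sounds"
        track = []
        for _ in range(sections):
            timbre += bias + random.gauss(0, 0.01)   # feed last output back in
            track.append(round(timbre, 3))
        return track

    print(generate_track())   # values drift steadily away from 1.0
    ```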

    • Jimmycrackcrack@lemmy.ml · 3 hours ago

      I wonder if the noise situation would still be apparent if the model were trained only on Ghibli-style anime drawings.

      • SuluBeddu@feddit.it · 16 minutes ago

        Yes, I don’t think it’s a matter of training.

        The diffusion model generates pictures by starting from a canvas of random pixels, then editing those pixel colours and carving the picture out of that chaos.

        To produce an area that is all one colour, it would need to output very exact values at the last generation step.

        It can be fixed easily with a very subtle lowpass filter, but that would be human intervention; the model itself will have a hard time replicating it.
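        A minimal sketch of that fix, assuming numpy; the noisy "flat" patch here is simulated rather than taken from real diffusion output, and the patch size and noise level are arbitrary.

        ```python
        # Simulate a "flat" colour area with the tiny residual noise a diffusion
        # model tends to leave, then smooth it with a 3x3 box blur, i.e. a crude
        # lowpass filter. Illustrative only; values and sizes are made up.
        import numpy as np

        rng = np.random.default_rng(0)
        flat = np.full((64, 64), 0.5) + rng.normal(0, 0.01, (64, 64))

        # 3x3 box blur via shifted averages (reflect padding keeps the size unchanged)
        padded = np.pad(flat, 1, mode="reflect")
        smoothed = sum(
            padded[i:i + 64, j:j + 64] for i in range(3) for j in range(3)
        ) / 9.0

        print("std before:", flat.std())      # visible pixel-to-pixel wobble
        print("std after: ", smoothed.std())  # much closer to a truly flat colour
        ```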