• @world_hopper@lemmy.ml · 2 points · 2 years ago

    A lot of these comments are missing a large point which is that, if the claim is true, the books are being pirated and then effectively used for a commercial application.

    So the authors are losing money through this process and did not give their permission for their work to be used in a commercial way.

    The decision of this case will be wildly important for the development of AI.

    • @monobot@lemmy.ml · 0 points · 2 years ago

      If they have access to some library with those books they are ok.

      I doubt they just used pirated books to train their AI and then published it without having a non-pirated paper trail; it’s not that hard.

      But let’s see.

      The only problem here is how they accessed the books; they don’t share the copyrighted material itself with others. But I don’t think anyone should be held guilty for reading a book, so I hold the same stance for AI.

      If you don’t want people to read your book, just don’t publish it.

        • @world_hopper@lemmy.ml · 1 point · 2 years ago

        Do you know what Library Genesis and Z-Library are?? They are literally libraries of pirated materials.

        And yeah, they can read the book, but they shouldn’t be able to use its contents in a commercial way (e.g. to make money) without the permission of the writer/copyright holder.

  • Storksforlegs · 2 points · edited · 2 years ago

    People keep taking issue with this article’s use of “summarizing” and linking to Wikipedia… Summaries of copyrighted work are obviously not illegal.

    This article is oversimplified and does a crummy job of explaining the problem. Ars Technica does a much better job explaining.

    The fact that the AI can summarize these works in detail is proof that it was trained using copyrighted material without permission (which is not fair use). Sarah Silverman is obviously not going to be hurt financially by this, but there are hundreds of thousands of authors who definitely will be affected. They have every right to sue.

  • @ag_roberston_author@beehaw.org · 2 points · edited · 2 years ago

    I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.

    You can be both pro AI and advancement, and still respect creators’ intellectual rights and the right not to have all their content stolen by megacorporations and used to create profits while decimating entire industries.

    • FIash Mob #5678 · 1 point · 2 years ago

      Eventually the bad actors are going to lose a lot of money trying to litigate their theft of people’s art. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.

  • @nothacking@discuss.tchncs.de · 0 points · edited · 2 years ago

    if a user prompts ChatGPT to summarize a copyrighted book, it will do so.

    So will a human. Let’s stop extending copyright law. Also, how do you know it read the book and not a summary of it, of which there are loads on the internet?

    • @SpaceToast@mander.xyz · 0 points · 2 years ago

      This is why I am pro AI art. It’s no different than a human taking inspiration from other work.

      Nobody comes up with anything truly original. It’s all inspired by someone before them.

        • @AndrewZabar@beehaw.org · 1 point · 2 years ago

        I don’t know how anyone is pro AI anything, other than the pigs making money from it. Only bad can come of it. And it will.

  • Sigma · 0 points · 2 years ago

    I guess she found a way to make money on a book nobody is buying after all.

    • @middlemuddle@beehaw.org · 1 point · 2 years ago

      They made a musical out of it, so I’m sure it sold just fine. Pointless disparaging based on no facts isn’t very useful to this topic.

  • @moosetruce@beehaw.org · 0 points · 2 years ago

    I tested by asking ChatGPT 3.5 specific questions about The Bedwetter, and it seems like it was not trained on the full text of the book. I asked it what the first sentence is, and then what the second paragraph is, and it gave plausible but incorrect answers. I asked it for the table of contents, and then whether a specific chapter was in the book, and it said “my responses are generated based on pre-existing data and do not have real-time access to specific book content”. I asked who wrote the foreword and who wrote the afterword. It said Patton Oswalt wrote the foreword and that there is no afterword. In reality, Sarah wrote the foreword and God wrote the afterword.

    ChatGPT conversation
    Table of contents and first chapter from Google Books.
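The probing approach described above can be sketched as a small script: ask the model questions whose answers are fixed by the book’s text, then score the replies against ground truth. Everything below is illustrative; the questions and answers are placeholders standing in for real ChatGPT output, not actual API results.

```python
# Sketch of a membership probe: compare a model's answers about a book
# against ground truth taken from a real copy of the book. A low score
# suggests the model does not recall the text verbatim.

def score_probe(ground_truth: dict[str, str], model_answers: dict[str, str]) -> float:
    """Fraction of probe questions the model answered correctly."""
    correct = sum(
        1 for q, expected in ground_truth.items()
        if model_answers.get(q, "").strip().lower() == expected.strip().lower()
    )
    return correct / len(ground_truth)

# Ground truth from the book itself (placeholder entries).
truth = {
    "Who wrote the foreword?": "Sarah Silverman",
    "Who wrote the afterword?": "God",
}

# Plausible-but-wrong replies, like the ones ChatGPT 3.5 gave.
answers = {
    "Who wrote the foreword?": "Patton Oswalt",
    "Who wrote the afterword?": "There is no afterword",
}

print(score_probe(truth, answers))  # prints 0.0: no verbatim recall
```

Note this only tests verbatim recall; as the reply below points out, a model can be trained on a text without being able to reproduce it.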

    • @technojamin@beehaw.org · 1 point · 2 years ago

      LLMs compress data; there’s no way ChatGPT could remember every detail of the book alongside all the other information it stores in its encodings. The issue isn’t whether the entire text of the book is contained within the encodings, it’s whether it was trained on the book in the first place.
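A rough back-of-envelope check of this compression point. All figures are commonly cited public estimates for a GPT-3-class model, used here as assumptions, not disclosed values:

```python
# If the model's weights are smaller than its training text, it cannot
# store that text verbatim -- and the weights must also encode everything
# else the model knows.

params = 175e9            # assumed parameter count (GPT-3-class)
bytes_per_param = 2       # fp16 weights
model_bytes = params * bytes_per_param

tokens = 300e9            # assumed training tokens
bytes_per_token = 4       # rough average of English text per token
corpus_bytes = tokens * bytes_per_token

ratio = model_bytes / corpus_bytes
print(f"model is ~{ratio:.2f}x the size of its training text")
```

With these assumptions the weights come out to roughly a third of the training text’s size, so full-text memorization of every book is implausible even before accounting for everything else the model encodes.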

  • @Moonrise2473@feddit.it · 0 points · 2 years ago

    Seems very improbable that they scraped a pirate website with forced registration and tight daily download limits (10 books a day max?) to get content that’s often mislabeled and not presented in a homogeneous way.

    It’s probably just using the excerpt from Amazon (which is much easier to access via the paid API) as a prompt and building on it.

    • luciole (he/him) · 2 points · 2 years ago

      There have been ongoing suspicions that pirated content was used to train popular LLMs, simply because popular datasets used for training LLMs do include such content. The Washington Post did an article about it.

      Google’s C4 dataset used for research included illegal websites. What remains to be seen is whether it was cleaned up before training Bard as we know it today. OpenAI has revealed nothing about its dataset.

  • @CreativeTensors@beehaw.org · 0 points · 2 years ago

    My pie in the sky hope is that copyright somehow becomes less stringent after all of this.

    Don’t get me wrong, I want protections for creators and support reasonable copyright (life of the author plus 25 years, with the possibility of a 15-year extension), but letting a company lord over an IP for damn near a century isn’t ideal for anyone.

    • @EvilColeslaw@beehaw.org · 1 point · 2 years ago

      The major scenario I at least hope holds true out of this is that AI “creations” aren’t eligible for copyright themselves. If the powers that be grant all this AI-created stuff copyright protection, it’s going to be a gigantic mess.

            • @Dominic@beehaw.org · 1 point · 2 years ago

              For now, we’re special.

              LLMs are far more training data-intensive, hardware-intensive, and energy-intensive than a human brain. They’re still very much a brute-force method of getting computers to work with language.
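The scale gap behind this point can be made concrete with order-of-magnitude arithmetic. Every number below is a rough assumption for illustration, not a measured figure:

```python
# Compare the language exposure of a modern LLM's training run with what
# a single human encounters in a lifetime.

llm_training_tokens = 2e12   # assumed tokens in a modern training run
words_per_token = 0.75       # common rough tokens-to-words conversion

human_words_per_day = 20_000 # assumed words heard/read per day
human_days = 80 * 365        # an 80-year lifetime

human_lifetime_words = human_words_per_day * human_days
llm_words = llm_training_tokens * words_per_token

lifetimes = llm_words / human_lifetime_words
print(f"~{lifetimes:.0f} human lifetimes of language exposure")
```

Even with generous assumptions for the human side, the training run comes out to thousands of lifetimes of language input.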

    • @HughJanus@lemmy.ml · 0 points · 2 years ago

      This is what I never understood about the whole AI training debate.

      When a human creates an artwork, they don’t do it in a vacuum. They’ve had a lifetime of inspiration from artwork they’ve discovered, which inspires them to create something wholly new. AI does the same thing.

      • luciole (he/him) · 1 point · 2 years ago

        The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit, in this way, intellectual property tied to other people’s livelihoods by copying it.

        LLMs are not sentient, they don’t have inspiration, they are not creative and therefore do not create in the sense an artist would. They are an elaborate mathematical equation.

        “Training” an AI has nothing to do with training an actual living being. It’s just tuning: adjusting an algorithm incrementally until the operator is satisfied with the result. I think it’s defensible to call this form of extraction plagiarism.
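The “tuning” described above can be illustrated with a toy example: a single numeric parameter adjusted step by step to shrink an error measure, repeated until the result looks good enough. This is a sketch of the same loop LLM training runs, just at billion-parameter scale:

```python
# Fit a single parameter w so that w * x approximates the data y = 2x.
# "Training" here is nothing but repeated incremental adjustment of w.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y

w = 0.0     # the parameter being tuned
lr = 0.05   # step size

for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # incremental adjustment

print(round(w, 3))  # prints 2.0: the operator stops when the fit looks right
```

Nothing in the loop resembles teaching a living being; it is numerical optimization against the training data.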

      • @Dominic@beehaw.org · 1 point · 2 years ago

        AIs are trained for the equivalent of thousands of human lifetimes (if not more). There’s no precedent for anything like this.