• @BrianTheeBiscuiteer@lemmy.world · 13 points · 1 year ago

    While I think the realism of some models is fantastic and the flexibility of others is great, it’s starting to feel like we’re reaching a plateau on quality. Most of the white papers I’ve seen posted lately are about speed or some alternate way of doing what ControlNet or inpainting can already do.

    • Fubarberry · 8 points · 1 year ago

      Have you seen the SD3 preview images? They’re looking seriously impressive.

    • AggressivelyPassive · 2 points · 1 year ago

      That’s maybe because we’ve reached the limits of what the current architecture of models can achieve on the current architecture of GPUs.

      To create significantly better models without a fundamentally new approach, you have to increase the model size. And if every accelerator accessible to you only offers, say, 24 GB, you can’t grow indefinitely. At least not within a reasonable timeframe.
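
      A rough back-of-envelope sketch of that ceiling (the precision choices are illustrative, and it counts weights only, ignoring activations and optimizer state):

      ```python
      # Roughly how many parameters fit in a 24 GB accelerator, counting weights alone.
      vram_bytes = 24 * 1024**3  # 24 GiB of VRAM

      for precision, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
          max_params = vram_bytes / bytes_per_param
          print(f"{precision:>9}: ~{max_params / 1e9:.1f}B parameters")
      # fp32 tops out around 6B parameters and fp16 around 13B, well below frontier model sizes.
      ```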

      • Kbin_space_program · -2 points · 1 year ago (edited)

        Will increasing the model actually help? Right now we’re dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.

        Making a better way to process said model would be a much more substantive achievement, so that when particular details are needed it isn’t just random chance whether it gets them right.

        • AggressivelyPassive · 6 points · 1 year ago

          That is literally a complete misinterpretation of how models work.

          You don’t “have the Internet as a model”; you train a model using large amounts of data. That does not mean the model contains any of the actual data. State-of-the-art models are somewhere in the billions of parameters. If you have, say, 50B parameters, each stored as a 64-bit/8-byte double (which is way, way more precision than needed), you get something like 400 GB of data. That’s a lot, but the Internet is slightly larger than that.
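
          A quick sketch of that arithmetic, keeping the (deliberately generous) 8-bytes-per-parameter assumption:

          ```python
          # Weight storage for a 50B-parameter model at fp64 (8 bytes per parameter).
          params = 50e9            # 50 billion parameters
          bytes_per_param = 8      # fp64; shipped models usually use fp16/bf16 (2 bytes) or less
          print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 400 GB
          ```

          At fp16 that drops to roughly 100 GB, still nowhere near the size of the data it was trained on.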

          • Kbin_space_program · -3 points · 1 year ago (edited)

            It’s an exaggeration, but it’s not far off given that Google literally has all of the web parsed at least once a day.

            Reddit just sold off AI harvesting rights on all of its content to Google.

            The problem is no longer model size. The problem is interpretation.

            You can ask almost anyone on earth a simple deterministic math problem and you’ll get the right answer almost all of the time, because they understand the principles behind it.

            Until you can show deterministic understanding in AI, you have a glorified chat bot.

            • AggressivelyPassive · 5 points · 1 year ago

              It is far off. It’s like saying you have the entire knowledge of all physics because you skimmed a textbook once.

              Interpretation is also a problem that can be solved; current models already understand quite a lot of nuance, subtext, and implicit context.

              But you’re moving the goalposts here. We started at “models aren’t getting better, we’ve hit a plateau” and now you’re aiming for perfection.

              • Kbin_space_program · -2 points · 1 year ago

                You’re building beautiful straw men. They’re lies, but great job.

                I said originally that we need to improve how the AI interprets the model, not just build even bigger models that will invariably have the same flaws they do now.

                Deterministic reliability is the end goal of that.

                • AggressivelyPassive · 2 points · 1 year ago

                  > Will increasing the model actually help? Right now we’re dealing with LLMs that literally have the entire internet as a model. It is difficult to increase that.
                  >
                  > Making a better way to process said model would be a much more substantive achievement, so that when particular details are needed it isn’t just random chance whether it gets them right.

                  Where exactly did you write anything about interpretation? Getting “details right” by processing faster? I would hardly call that “interpretation”; that’s just being wrong faster.

    • @snooggums@midwest.social · 2 points · 1 year ago

      When the output of something is the average of the inputs, it will naturally be mediocre. It will always look like the output of a committee by the nature of how it is formed.

      Certain artists stand out because they are different from everyone else, and that is why they are celebrated. M.C. Escher has a certain style that, when run through AI, looks like a skilled high school student doing their best impression of M.C. Escher.

      Now, as a tool to inspire, AI is pretty good at creating mashups of multiple things really fast. Those could be used by an actual artist to create something engaging. Most AI reminds me of Photoshop battles.

      • @webghost0101@sopuli.xyz · 2 points · 1 year ago

        Who says the output is an average?

        I agree that narrow models and LoRAs trained on a specific style can never be as good as the original, but I also think that is the lamest, least creative way to generate.

        It’s much more fun to use general-purpose models and to crack the settings to generate exactly what you want, the way you want it.