• stebo
    67 • 1 month ago

    Why do people Google questions anyway? Just search “heat cast” or “heat Angelina Jolie”. It’s quicker to type and you get more accurate results.

    • @ByteJunk@lemmy.world
      26 • 1 month ago

      Because that’s the normal way in which humans communicate.

      But for Google more specifically, that sort of keyword prompt is how you searched for stuff in the '00s… Nowadays the search prompt actually understands natural language, and even has features like “people also ask” that are related to this.

      All in all, do whatever works for you, it’s just that asking questions isn’t bad.

      • stebo
        14 • 1 month ago

        Google is not a human, so why would you communicate with it as if it were one? Unlike ChatGPT, it’s not designed to answer questions; it’s designed to search for words on webpages.

        • queermunist she/her
          8 • edited • 1 month ago

          We spend most of our time communicating with humans so we’re generally better at that than communicating with algorithms and so it feels more comfortable.

          Most people don’t want to learn to communicate with a search engine in its own language. Learning is hard.

          • stebo
            -4 • 1 month ago

            what’s there to learn about using search terms

              • stebo
                1 • 1 month ago

                They’re literally just words? All you need is the ability to speak a language

                • queermunist she/her
                  8 • edited • 1 month ago

                  Surely you see how using a search engine is a separate skill from just writing words?

                  Point is, people don’t want to learn. Natural language searches in the form of questions are just easier for people, because they already know how to ask questions.

        • @ByteJunk@lemmy.world
          3 • 1 month ago

          Because we’re human, and that’s a human-made tool. It’s made to fit us and our needs, not the other way around. And in case you’ve missed the last decade, it actually does it rather well.

    • @nyctre@lemmy.world
      18 • 1 month ago

      I just tested. “Angelina jolie heat” gives me tons of shit results, I have to scroll all the way down and then click on “show more results” in order to get the filmography.

      “Is angelina jolie in heat” gives me this bluesky post as the first answer and the wikipedia and IMDb filmographies as 2nd and 3rd answer.

      So, I dunno, seems like you’re wrong.

      • @howrar@lemmy.ca
        8 • 1 month ago

        Have people just completely forgotten how search engines work? If you search for two things and get shit results, it means those two things don’t appear together.

      • stebo
        6 • 1 month ago

        both queries give me poor results and searching “heat cast” reveals that she is not actually in the movie, so that’s probably why you can’t find anything useful

        • @pyre@lemmy.world
          1 • edited • 1 month ago

          it’s not the queries. it’s Google. it doesn’t care about your stupid results, it just needs to shove a couple more ads in your ass so please disable your blocker and lubricate

      • Search engine algorithms are way better than in the 90s and early 2000s when it was naive keyword search completely unweighted by word order in the search string.

        So the tricks we learned of doing the bare minimum for the most precise search behavior no longer apply the same way. Now a search for two words will add weight to results that have the two words as a phrase, and some weight for the two words close together in the same sentence, but still look for each individual word as a result, too.

        More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher means. “Heat” is a really broad word with lots of meanings, and the rest of the search can help inform the algorithm of what the user intends.
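The weighting described above can be sketched as a toy scoring function (purely illustrative: the function, its weights, and the sample documents are all invented for this example, and real search engines combine far more ranking signals):

```python
# Toy sketch of phrase-weighted keyword scoring (illustrative only;
# real search engines combine many more ranking signals).

def score(document: str, query: str) -> float:
    doc = document.lower()
    words = query.lower().split()
    s = 0.0
    # Each individual keyword still counts on its own.
    for w in words:
        if w in doc.split():
            s += 1.0
    # Extra weight when the words appear together as an exact phrase.
    if " ".join(words) in doc:
        s += 2.0
    return s

docs = [
    "angelina jolie filmography and movie roles",
    "heat (1995) cast: al pacino, robert de niro",
    "angelina jolie heat interview transcript",
]
# The document containing the exact phrase ranks first.
ranked = sorted(docs, key=lambda d: score(d, "angelina jolie heat"), reverse=True)
```

Even in this toy version, documents matching only one keyword still show up, just with lower weight, which mirrors the "still look for each individual word" behavior described above.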

      • stebo
        1 • 1 month ago

        Until they worded it as “Does Angelina Jolie appear in heat?”

    • @warbond@lemmy.world
      4 • 1 month ago

      As a funny challenge I like to come up with simplified, stupid-sounding, 3-word search queries for complex questions, and more often than not it’s good enough to get me the information I’m looking for.

    • Why do people Google questions anyway?

      Because it gives better responses.

      Google and all the other major search engines have built in functionality to perform natural language processing on the user’s query and the text in its index to perform a search more precisely aligned with the user’s desired results, or to recommend related searches.

      If the functionality is there, why wouldn’t we use it?

      • stebo
        2 • 1 month ago

        that is true but the results will be the same at best, not better

        • Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.

          In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete sentence question are also the words that give confirmation that the searcher is talking about the movie.

          Especially with negative results, like when you ask a question where the answer is no, sometimes the semantic links in the index can get the search engine to suggest the specific mistaken assumption you’ve made.

  • wander1236
    66 • 1 month ago

    Wouldn’t removing your ovaries and fallopian tubes make you not “fertile” by definition?

  • @otacon239@lemmy.world
    41 • edited • 1 month ago

    It also contradicts itself, saying she’s fertile, then immediately saying she’s had her ovaries removed and that she’s reached menopause.

    • @_stranger_@lemmy.world
      29 • 1 month ago

      Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.

      • @howrar@lemmy.ca
        14 • 1 month ago

        Sentences saying she had her ovaries removed and that she is fertile don’t statistically belong together, so you’re not even getting that.

        • @JcbAzPx@lemmy.world
          16 • 1 month ago

          You think that because you understand the meaning of words. LLM AI doesn’t. It uses math, and math doesn’t care that it’s contradictory; it cares that the words individually usually came next in its training data.

            • @Swedneck@discuss.tchncs.de
              1 • 18 days ago

              and those tokens? just numbers, indexes. LLMs have no concept of language or words or anything, it’s literally just a statistical calculator where the numbers encode some combination of letter(s)
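A minimal sketch of that point (a toy character-level encoder; this `vocab` is invented for illustration, and real LLM tokenizers use learned subword vocabularies such as byte-pair encoding):

```python
# Toy tokenizer: the model only ever sees integer indexes, never "words".
# This character-level vocab is invented for illustration; real LLMs use
# learned subword vocabularies (e.g. byte-pair encoding).
vocab = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz ")}
inverse = {i: ch for ch, i in vocab.items()}

def encode(text: str) -> list[int]:
    # Text in, numbers out: this is all the model's math ever touches.
    return [vocab[ch] for ch in text.lower()]

def decode(ids: list[int]) -> str:
    # Numbers back to text, only at the very end.
    return "".join(inverse[i] for i in ids)

print(encode("heat"))         # [7, 4, 0, 19]
print(decode([7, 4, 0, 19]))  # heat
```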

          • @howrar@lemmy.ca
            3 • 1 month ago

            It has nothing to do with the meaning. If your training set consists of a bunch of strings consisting of A’s and B’s together and another subset consisting of C’s and D’s together (i.e. [AB]+ and [CD]+ in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences are, nor do I need to know to see that it’s statistically unlikely.

            In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written this is low.
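That notion of “statistically unlikely” can be made concrete with a toy bigram model over the same kind of [AB]+ and [CD]+ strings (the training strings and counting scheme here are invented for the example; real LLMs estimate probabilities very differently):

```python
# Toy bigram model over strings drawn from [AB]+ and [CD]+ (as above).
# The training counts stand in for "what humans actually wrote".
from collections import Counter

training = ["ABABBA", "BBAB", "CDCCD", "DCCD"]

# Count adjacent-character pairs across the training set.
bigrams = Counter()
for s in training:
    for a, b in zip(s, s[1:]):
        bigrams[(a, b)] += 1

total = sum(bigrams.values())

def likelihood(s: str) -> float:
    # Product of bigram frequencies; an unseen pair (count 0) zeroes it out.
    p = 1.0
    for a, b in zip(s, s[1:]):
        p *= bigrams[(a, b)] / total
    return p

# "D" never follows "B" in the training data, so the mixed string scores zero:
print(likelihood("ABBABBBDA"))  # 0.0
print(likelihood("ABBA") > 0)   # True
```

No meaning is involved anywhere: the model just notices that a D next to A’s and B’s has zero support in the data, which is exactly the argument above.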

            • @monotremata@lemmy.ca
              4 • 1 month ago

              Honestly this isn’t really all that accurate. Like, a common example when introducing the Word2Vec mapping is that if you take the vector for “king” and add the vector for “woman,” the closest vector matching the resultant is “queen.” So there are elements of “meaning” being captured there. The Deep Learning networks can capture a lot more abstraction than that, and the Attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.

              You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
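The king/queen example can be illustrated with toy vectors (these 2-D numbers are hand-picked for the example; real Word2Vec embeddings are learned from corpora and have hundreds of dimensions, and the classic formulation is king − man + woman ≈ queen):

```python
# Toy 2-D "embeddings": one axis loosely encodes royalty, the other gender.
# These numbers are invented for illustration, not real Word2Vec output.
vectors = {
    "king":  (0.9, 0.8),
    "queen": (0.9, -0.8),
    "man":   (0.1, 0.8),
    "woman": (0.1, -0.8),
    "apple": (0.0, 0.1),
}

def nearest(target, exclude):
    # Closest stored vector by squared Euclidean distance.
    def dist(w):
        return sum((a - b) ** 2 for a, b in zip(vectors[w], target))
    return min((w for w in vectors if w not in exclude), key=dist)

# The classic analogy: king - man + woman lands nearest to queen.
k, m, w = vectors["king"], vectors["man"], vectors["woman"]
result = tuple(a - b + c for a, b, c in zip(k, m, w))
print(nearest(result, exclude={"king", "man", "woman"}))  # queen
```

The point is that directions in the vector space can line up with semantic relationships like royalty or gender, which is the limited sense in which “meaning” is captured.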

            • @JcbAzPx@lemmy.world
              2 • 1 month ago

              Unless they grabbed discussion forums that happened to mix statements about multiple people. It’s pretty common when talking about fertility that problems in that area will be brought up.

              People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it through much slower QC by real people (something Google hates to do).

    • Cordyceps
      8 • 1 month ago

      And the text even ends with a mention of her being in early menopause…

  • @Freshparsnip@lemm.ee
    22 • 1 month ago

    People Google questions like that? I would have looked up “Heat” in either Wikipedia or imdb and checked the cast list. Or gone to Jolie’s Wikipedia or imdb pages to see if Heat is listed

    • @pyre@lemmy.world
      3 • edited • 1 month ago

      doesn’t matter, this is “AI” and it should know the difference from context. Not to mention you can have Gemini as an assistant, which is supposed to respond to natural language input, and it still does this.

      best thing about it is that it doesn’t remember previous questions most of the time so after listening to your “assistant” being patronizing about the term “in heat” not applying to humans you can try to explain saying “dude I meant the movie heat”, it will go “oh you mean the 1995 movie? of course… what do you want to know about it?”

  • @frezik@midwest.social
    14 • 1 month ago

    We all know how AI has made things worse, but here’s some context on how it’s outright backwards.

    Early search engines had a context problem. To use an example from “Halt and Catch Fire”: if you search for “Texas Cowboy”, do you mean the guys on horseback driving a herd of cows, or do you mean the football team? If you search for “Dallas Cowboys”, should that bias the results towards a different answer? Early, naive search engines gave bad results for cases like that, spitting out whatever keywords happened to hit the most.

    Sometimes, it was really bad. In high school, I was showing a history teacher how to use search engines, and he searched for “China golden age”. All the results were Asian porn. I think we were using Yahoo.

    AltaVista largely solved the context problem. We joke about its bad results now, but it was one of the better search engines before Google PageRank.

    Now we have AI unsolving the problem.

  • @ArtificialHoldings@lemmy.world
    7 • edited • 1 month ago

    Everyone in this post is the annoying IT person who says “why don’t you just run Linux?” to people who don’t even fully understand what an OS is in the first place.

  • @Retreaux@lemmy.world
    4 • 1 month ago

    It’s hilarious, I got the same results with Charlize Theron for the exact same movie. I guess neither of us knows who actresses are, apparently.

  • @jaschen@lemm.ee
    3 • 1 month ago

    I’d never heard of the movie and was enjoying the content you created, which I thought was supposed to be funny.