• stebo@lemmy.dbzer0.com · +73/−6 · 3 months ago

    Why do people Google questions anyway? Just search “heat cast” or “heat Angelina Jolie”. It’s quicker to type and you get more accurate results.

    • ByteJunk@lemmy.world · +29/−3 · 3 months ago

      Because that’s the normal way in which humans communicate.

      But for Google more specifically, that sort of keyword prompt is how you searched for stuff in the '00s… Nowadays the search prompt actually understands natural language, and even has features like “people also ask” that are related to this.

      All in all, do whatever works for you, it’s just that asking questions isn’t bad.

      • stebo@lemmy.dbzer0.com · +18/−4 · 3 months ago

        Google is not a human, so why would you communicate with it as if it were one? Unlike ChatGPT, it’s not designed to answer questions; it’s designed to search for words on webpages.

        • queermunist she/her@lemmy.ml · +10/−2 · edited · 3 months ago

          We spend most of our time communicating with humans, so we’re generally better at that than at communicating with algorithms, and it feels more comfortable.

          Most people don’t want to learn to communicate with a search engine in its own language. Learning is hard.

        • ByteJunk@lemmy.world · +3 · 3 months ago

          Because we’re human, and that’s a human-made tool. It’s made to fit us and our needs, not the other way around. And in case you’ve missed the last decade, it actually does it rather well.

    • nyctre@lemmy.world · +21/−3 · 3 months ago

      I just tested. “Angelina Jolie heat” gives me tons of shit results; I have to scroll all the way down and then click “show more results” to get the filmography.

      “Is Angelina Jolie in heat” gives me this Bluesky post as the first answer, and the Wikipedia and IMDb filmographies as the 2nd and 3rd answers.

      So, I dunno, seems like you’re wrong.

      • howrar@lemmy.ca · +8/−1 · 3 months ago

        Have people just completely forgotten how search engines work? If you search for two things and get shit results, it means those two things don’t appear together.

      • stebo@lemmy.dbzer0.com · +7/−1 · 3 months ago

        Both queries give me poor results, and searching “heat cast” reveals that she is not actually in the movie, so that’s probably why you can’t find anything useful.

        • pyre@lemmy.world · +1 · edited · 3 months ago

          it’s not the queries. it’s Google. it doesn’t care about your stupid results, it just needs to shove a couple more ads in your ass so please disable your blocker and lubricate

      • GamingChairModel@lemmy.world · +2/−1 · 3 months ago

        Search engine algorithms are way better than in the 90s and early 2000s when it was naive keyword search completely unweighted by word order in the search string.

        So the old tricks for coaxing precise results out of bare-minimum queries no longer apply the same way. Now a search for two words will add weight to results that have the two words as a phrase, and some weight for the two words close together in the same sentence, but will still look for each individual word as a result, too.

        More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher means. “Heat” is a really broad word with lots of meanings, and the rest of the search can help inform the algorithm of what the user intends.
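        The weighting described above can be sketched as a toy ranking function. Everything here (the weights 10/5/1, the proximity window, the scoring scheme) is invented purely for illustration; real ranking functions are far more involved:

```python
# Toy ranking sketch: exact-phrase matches score highest, terms close
# together score next, and each individual term still contributes.
# The weights (10, 5, 1) are invented purely for illustration.
def score(doc: str, query: str) -> float:
    words = doc.lower().split()
    terms = query.lower().split()
    s = 0.0
    if query.lower() in doc.lower():
        s += 10.0  # the whole query appears as an exact phrase
    positions = {t: [i for i, w in enumerate(words) if w == t] for t in terms}
    if len(terms) == 2 and all(positions[t] for t in terms):
        a, b = terms
        if any(abs(i - j) <= 3 for i in positions[a] for j in positions[b]):
            s += 5.0  # the two terms appear near each other
    s += sum(1.0 for t in terms if positions[t])  # individual term hits
    return s
```

        Under this scheme, a document containing the exact phrase (“heat cast list”) outranks one where the terms are merely scattered (“heat wave and a cast iron pan”).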

    • warbond@lemmy.world · +3 · 3 months ago

      As a funny challenge I like to come up with simplified, stupid-sounding, 3-word search queries for complex questions, and more often than not it’s good enough to get me the information I’m looking for.

    • GamingChairModel@lemmy.world · +2 · 3 months ago

      Why do people Google questions anyway?

      Because it gives better responses.

      Google and all the other major search engines have built-in functionality to perform natural language processing on the user’s query and on the text in their index, to run searches more precisely aligned with the user’s desired results, or to recommend related searches.

      If the functionality is there, why wouldn’t we use it?

        • GamingChairModel@lemmy.world · +1 · 3 months ago

          Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.

          In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete sentence question are also the words that give confirmation that the searcher is talking about the movie.

          Especially with negative results, like when you ask a question where the answer is no, the semantic links in the index can sometimes lead the search engine to point out the specific mistaken assumption you’ve made.
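          As a minimal sketch of the kind of spelling correction longer queries enable, Python’s standard-library difflib can snap misspelled query terms onto a known vocabulary. The vocabulary and queries here are made up, and real engines use far richer models than edit similarity:

```python
import difflib

# A tiny made-up vocabulary standing in for a search index's known terms.
VOCAB = ["angelina", "jolie", "heat", "movie", "filmography", "cast"]

def correct_terms(query):
    """Snap each query word to its closest known term, a crude stand-in
    for the spelling correction search engines apply to queries."""
    out = []
    for word in query.lower().split():
        match = difflib.get_close_matches(word, VOCAB, n=1, cutoff=0.6)
        out.append(match[0] if match else word)
    return out
```

          A garbled query like “anglina joli heat” then still maps onto the intended terms.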

  • otacon239@lemmy.world · +40 · edited · 3 months ago

    It also contradicts itself immediately, saying she’s fertile, then immediately saying she’s had her ovaries removed and that she’s reached menopause.

    • _stranger_@lemmy.world · +29/−1 · 3 months ago

      Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.

      • howrar@lemmy.ca · +16/−2 · 3 months ago

        A sentence saying she had her ovaries removed and a sentence saying she is fertile don’t statistically belong together, so you’re not even getting that.

        • JcbAzPx@lemmy.world · +17/−2 · 3 months ago

          You think that because you understand the meaning of words. An LLM doesn’t. It uses math, and math doesn’t care that the output is contradictory; it cares that the individual words usually came next in its training data.

          • howrar@lemmy.ca · +3 · 3 months ago

            It has nothing to do with the meaning. If your training set consists of a bunch of strings consisting of A’s and B’s together and another subset consisting of C’s and D’s together (i.e. [AB]+ and [CD]+ in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely because D’s don’t appear with A’s and B’s. I have no idea what the meaning of these sequences are, nor do I need to know to see that it’s statistically unlikely.

            In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low because the probability that a human would’ve written this is low.
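            The point above can be sketched with a toy bigram model. The training strings are invented to match the [AB]+ / [CD]+ example; a real LLM models far longer contexts, but the principle is the same:

```python
from collections import Counter, defaultdict

# Toy training set mirroring the [AB]+ / [CD]+ example: A/B strings
# and C/D strings, but the two alphabets never mix.
corpus = ["ABBAB", "BABA", "ABBB", "CDDC", "DCCD", "CDCD"]

# Count every observed bigram transition.
counts = defaultdict(Counter)
for s in corpus:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

def prob(seq: str) -> float:
    """Product of bigram transition probabilities; 0 if any transition
    in seq was never seen in training."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        total = sum(counts[a].values())
        p *= counts[a][b] / total if total else 0.0
    return p
```

            A pure A/B string gets nonzero probability, while “ABBABBBDA” scores zero, because transitions like B→D and D→A never occur in training.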

            • monotremata@lemmy.ca · +4 · 3 months ago

              Honestly this isn’t really all that accurate. Like, a common example when introducing the Word2Vec mapping is that if you take the vector for “king,” subtract the vector for “man,” and add the vector for “woman,” the closest match to the result is the vector for “queen.” So there are elements of “meaning” being captured there. Deep learning networks can capture a lot more abstraction than that, and the Attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.

              You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
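              The analogy arithmetic can be demonstrated with hand-crafted toy vectors. These 2-D values are invented for the example; real Word2Vec embeddings are learned and have hundreds of dimensions:

```python
from math import dist

# Hand-crafted toy "embeddings": axis 0 ≈ royalty, axis 1 ≈ gender.
# These numbers are invented; real Word2Vec vectors are learned.
vectors = {
    "king":  (1.0, 1.0),
    "man":   (0.0, 1.0),
    "woman": (0.0, -1.0),
    "queen": (1.0, -1.0),
    "apple": (-1.0, 0.0),
}

def closest(vec, exclude=()):
    """Word whose vector is nearest (Euclidean) to vec."""
    return min((w for w in vectors if w not in exclude),
               key=lambda w: dist(vectors[w], vec))

# king - man + woman lands exactly on queen in this toy space.
target = tuple(k - m + w for k, m, w in
               zip(vectors["king"], vectors["man"], vectors["woman"]))
result = closest(target, exclude=("king", "man", "woman"))
```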

            • JcbAzPx@lemmy.world · +1 · 3 months ago

              Unless the training set included discussion forums that happened to mix examples from multiple people. When fertility comes up in discussion, problems in that area are commonly brought up alongside it.

              People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it through much slower QC by real people (something Google hates to do).

  • Freshparsnip@lemm.ee · +23/−1 · 3 months ago

    People Google questions like that? I would have looked up “Heat” on either Wikipedia or IMDb and checked the cast list, or gone to Jolie’s Wikipedia or IMDb page to see if Heat is listed.

    • pyre@lemmy.world · +4/−1 · edited · 3 months ago

      doesn’t matter, this is “AI” and it should know the difference from context. not to mention you can have Gemini as an assistant, which is supposed to respond to natural language input. and it does this.

      best thing about it is that it doesn’t remember previous questions most of the time, so after listening to your “assistant” being patronizing about the term “in heat” not applying to humans, you can try to explain with “dude, I meant the movie Heat”, and it will go “oh, you mean the 1995 movie? of course… what do you want to know about it?”

  • frezik@midwest.social · +14 · 3 months ago

    We all know how AI has made things worse, but here’s some context on how it’s outright backwards.

    Early search engines had a context problem. To use an example from “Halt and Catch Fire”: if you search for “Texas Cowboy”, do you mean the guys on horseback driving a herd of cows, or do you mean the football team? If you search for “Dallas Cowboys”, should that bias the results towards a different answer? Early, naive search engines gave bad results for cases like that; they just spat out whatever pages happened to hit the most keywords.

    Sometimes it was really bad. In high school, I was showing a history teacher how to use search engines, and he searched for “China golden age”. All the results were Asian porn. I think we were using Yahoo.

    AltaVista largely solved the context problem. We joke about its bad results now, but it was one of the better search engines before Google PageRank.

    Now we have AI unsolving the problem.

  • DeusUmbra@lemmy.world · +11/−2 · 3 months ago

    This is why no one can find anything on Google anymore, they don’t know how to google shit.

  • ArtificialHoldings@lemmy.world · +12/−5 · edited · 3 months ago

    Everyone in this post is the annoying IT person who says “why don’t you just run Linux?” to people who don’t even fully understand what an OS is in the first place.

  • Bongles@lemm.ee · +7 · edited · 3 months ago

    You’ve sullied my quick answer:

    The assistant figures it out though:

    • LemmyKnowsBest@lemmy.world · +4 · 3 months ago

      Maybe that’s why the AI had trouble determining anything about AJ and the movie Heat: she wasn’t even in it!

  • Retreaux@lemmy.world · +4 · 3 months ago

    It’s hilarious, I got the same results with Charlize Theron for the exact same movie. I guess neither of us knows who actresses are.

  • jaschen@lemm.ee · +3 · 3 months ago

    I’d never heard of the movie and was just enjoying what I thought was content you created to be funny.