Why do people Google questions anyway? Just search “heat cast” or “heat Angelina Jolie”. It’s quicker to type and you get more accurate results.
Because that’s the normal way in which humans communicate.
But for Google specifically, that sort of keyword prompt is how you searched for stuff in the ’00s… Nowadays the search box actually understands natural language, and even has related features like “People also ask.”
All in all, do whatever works for you; it’s just that asking questions isn’t bad.
Google is not a human, so why would you communicate with it as if it were one? Unlike ChatGPT, it’s not designed to answer questions; it’s designed to search for words on webpages.
We spend most of our time communicating with humans so we’re generally better at that than communicating with algorithms and so it feels more comfortable.
Most people don’t want to learn to communicate with a search engine in its own language. Learning is hard.
What’s there to learn about using search terms?
Do you think you were born knowing what search terms are?
They’re literally just words? All you need is the ability to speak a language
Whattt
Why wouldn’t I include “the,” “a,” and other articles, etc., if I had language but no tech skills?
You weren’t born with the knowledge of written language either.
Surely you see how using a search engine is a separate skill from just writing words?
Point is, people don’t want to learn. Natural language searches in the form of questions are just easier for people, because they already know how to ask questions.
Because we’re human, and that’s a human-made tool. It’s made to fit us and our needs, not the other way around. And in case you’ve missed the last decade, it actually does it rather well.
I just tested. “Angelina jolie heat” gives me tons of shit results, I have to scroll all the way down and then click on “show more results” in order to get the filmography.
“Is angelina jolie in heat” gives me this Bluesky post as the first result, and the Wikipedia and IMDb filmographies as the 2nd and 3rd.
So, I dunno, seems like you’re wrong.
Have people just completely forgotten how search engines work? If you search for two things and get shit results, it means those two things don’t appear together.
Both queries give me poor results, and searching “heat cast” reveals that she’s not actually in the movie, so that’s probably why you can’t find anything useful.
It’s not the queries, it’s Google. It doesn’t care about your stupid results; it just needs to shove a couple more ads in your ass, so please disable your blocker and lubricate.
Search engine algorithms are way better than they were in the ’90s and early 2000s, when search was naive keyword matching, completely unweighted by word order in the search string.
So the old tricks of typing the bare minimum for the most precise results no longer apply the same way. Now a search for two words adds weight to results that contain the two words as a phrase, some weight to results where they appear close together in the same sentence, and still matches each individual word too.
More importantly, when a single word has multiple meanings, the search engines all use the rest of the search as an indicator of which meaning the searcher intends. “Heat” is a really broad word with lots of meanings, and the rest of the query helps the algorithm figure out which one the user wants.
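To make the weighting idea concrete, here’s a toy sketch in Python. The weights and the scoring function are entirely made up for illustration; real engines combine hundreds of signals (link graphs, click data, learned language models), but the phrase/proximity/individual-word layering looks roughly like this:

```python
# Toy phrase- and proximity-weighted keyword scoring.
# All weights here are invented for illustration only.

def score(doc: str, query: str) -> float:
    words = doc.lower().split()
    terms = query.lower().split()
    s = sum(1.0 for t in terms if t in words)   # each individual term counts
    if query.lower() in doc.lower():            # exact phrase bonus
        s += 3.0
    # Proximity bonus: first and last query terms within 3 words of each other.
    positions = [[i for i, w in enumerate(words) if w == t] for t in terms]
    if all(positions):
        closest = min(abs(a - b) for a in positions[0] for b in positions[-1])
        if closest <= 3:
            s += 1.5
    return s

docs = [
    "angelina jolie filmography list of films",
    "heat wave hits europe as temperatures climb",
    "is angelina jolie in heat the 1995 movie cast",
]
for d in sorted(docs, key=lambda d: -score(d, "angelina jolie heat")):
    print(round(score(d, "angelina jolie heat"), 1), d)
```

The third document wins because it contains every term and has them close together, even though it never contains the exact phrase.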
You won’t get funny answers if you do it correctly.
It works. It will also find others who posted that question.
Until they worded it as “Does Angelina Jolie appear in heat?”
As a funny challenge I like to come up with simplified, stupid-sounding, 3-word search queries for complex questions, and more often than not it’s good enough to get me the information I’m looking for.
Why do people Google questions anyway?
Because it gives better responses.
Google and all the other major search engines have built-in natural language processing that runs on both the user’s query and the text in the index, to return results more precisely aligned with what the user wants, or to recommend related searches.
If the functionality is there, why wouldn’t we use it?
That is true, but the results will be the same at best, not better.
Longer queries give better opportunities for error correction, like searching for synonyms and misspellings, or applying the right context clues.
In this specific example, “is Angelina Jolie in Heat” gives better results than “Angelina Jolie heat,” because the words that make it a complete sentence question are also the words that give confirmation that the searcher is talking about the movie.
Especially with negative results, like when you ask a question where the answer is no, sometimes the semantic links in the index can get the search engine to point out the specific mistaken assumption you’ve made.
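A crude sketch of the misspelling side of that, using Python’s standard difflib. Real engines learn corrections from query logs rather than raw string similarity, and the vocabulary here is a made-up stand-in, but the principle that longer queries give more material to correct against is the same:

```python
# "Did you mean" style correction: snap each query term to the closest
# known vocabulary word by string similarity. difflib is a stand-in for
# the learned spelling models real search engines use.
import difflib

vocabulary = ["angelina", "jolie", "heat", "movie", "cast", "filmography"]

def correct(query: str) -> str:
    out = []
    for term in query.lower().split():
        match = difflib.get_close_matches(term, vocabulary, n=1, cutoff=0.7)
        out.append(match[0] if match else term)
    return " ".join(out)

print(correct("is angelena jole in heat"))  # -> is angelina jolie in heat
```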
Wouldn’t removing your ovaries and fallopian tubes make you not “fertile” by definition?
Yes, it contradicts itself within the next couple of sentences.
As per form for these “AIs”.
It also contradicts itself immediately: it says she’s fertile, then says she’s had her ovaries removed and that she’s reached menopause.
How can she be fertile if her ovaries are removed?
Because you’re not getting an answer to a question, you’re getting characters selected to appear like they statistically belong together given the context.
A sentence saying she had her ovaries removed and a sentence saying she is fertile don’t statistically belong together, so you’re not even getting that.
You think that because you understand the meaning of words. An LLM doesn’t. It uses math, and math doesn’t care that the output is contradictory; it only cares which words usually came next in its training data.
It’s not even words, it “thinks” in “word parts” called tokens.
And those tokens? Just numbers, indexes. LLMs have no concept of language or words or anything; it’s literally just a statistical calculator where the numbers encode some combination of letters.
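If you want to see that for yourself, tiktoken (OpenAI’s tokenizer library, just one example; every model family has its own vocabulary) will show you the raw indexes:

```python
# Tokens really are just integer indexes into a fixed vocabulary.
# The exact numbers depend entirely on which encoding you load.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Angelina Jolie in Heat")
print(ids)                             # plain ints
print([enc.decode([i]) for i in ids])  # the "word parts" they stand for
```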
It has nothing to do with the meaning. If your training set consists of one subset of strings made up of A’s and B’s and another subset made up of C’s and D’s (i.e. `[AB]+` and `[CD]+` in regex) and the LLM outputs “ABBABBBDA”, then that’s statistically unlikely, because D’s don’t appear alongside A’s and B’s. I have no idea what these sequences mean, nor do I need to, to see that the output is statistically unlikely.

In the context of language and LLMs, “statistically likely” roughly means that some human somewhere out there is more likely to have written this than the alternatives, because that’s where the training data comes from. The LLM doesn’t need to understand the meaning. It just needs to be able to compute probabilities, and the probability of this excerpt should be low, because the probability that a human would’ve written it is low.
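You can run that argument as a toy program. This is a character-bigram counter, not an LLM, and the training strings are made up, but it shows probability being computed with zero notion of meaning:

```python
# Train bigram counts on [AB]+ and [CD]+ style strings, then score a
# candidate. "ABBABBBDA" comes out far less probable than an all-A/B
# string purely from counts; the symbols never "mean" anything.
from collections import Counter

training = ["ABAB", "ABBA", "BABB", "CDCD", "DCCD", "CDDC"]
bigrams = Counter(s[i:i + 2] for s in training for i in range(len(s) - 1))
total = sum(bigrams.values())

def probability(s: str, smoothing: float = 1e-6) -> float:
    p = 1.0
    for i in range(len(s) - 1):
        p *= bigrams[s[i:i + 2]] / total + smoothing
    return p

print(probability("ABBABBBBA"))  # plausible: only A/B bigrams seen in training
print(probability("ABBABBBDA"))  # tiny: "BD" and "DA" never appear in training
```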
Honestly this isn’t really all that accurate. Like, a common example when introducing Word2Vec embeddings is that if you take the vector for “king,” subtract the vector for “man,” and add the vector for “woman,” the closest vector to the result is “queen.” So there are elements of “meaning” being captured there. Deep learning networks can capture a lot more abstraction than that, and the attention mechanism introduced by the Transformer model greatly increased the ability of these models to interpret context clues.
You’re right that it’s easy to make the mistake of overestimating the level of understanding behind the writing. That’s absolutely something that happens. But saying “it has nothing to do with the meaning” is going a bit far. There is semantic processing happening, it’s just less sophisticated than the form of the writing could lead you to assume.
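The analogy test is easy to reproduce. Here’s a sketch using gensim’s downloader (GloVe vectors as a small stand-in for Word2Vec; both expose the same KeyedVectors interface, and the model is fetched on first run):

```python
import gensim.downloader as api

# Pretrained 50-dimensional GloVe vectors (~66 MB download on first use).
vectors = api.load("glove-wiki-gigaword-50")

# king - man + woman ~= ?  "queen" is typically the top match.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```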
Unless they scraped discussion forums that happened to contain examples from multiple people. That’s pretty common when talking about fertility; problems in that area will be brought up.
People can use context and meaning to avoid that mistake; LLMs have to be forced not to make it, through much slower QC by real people (something Google hates to do).
And the text even ends with a mention of her being in early menopause…
NGL, I learned some things.
People Google questions like that? I would have looked up “Heat” on either Wikipedia or IMDb and checked the cast list, or gone to Jolie’s Wikipedia or IMDb pages to see if Heat is listed.
Doesn’t matter; this is “AI” and it should know the difference from context. Not to mention you can have Gemini as an assistant, which is supposed to respond to natural language input. And it does this.
The best thing about it is that it doesn’t remember previous questions most of the time, so after listening to your “assistant” being patronizing about the term “in heat” not applying to humans, you can try to explain with “dude, I meant the movie Heat,” and it will go “oh, you mean the 1995 movie? Of course… what do you want to know about it?”
We all know how AI has made things worse, but here’s some context on how it’s outright backwards.
Early search engines had a context problem. To use an example from “Halt and Catch Fire”: if you search for “Texas Cowboy”, do you mean the guys on horseback driving a herd of cows, or do you mean the football team? If you search for “Dallas Cowboys”, should that bias the results toward a different answer? Early, naive search engines gave bad results for cases like that; they spat out whatever keywords happened to hit the most.
Sometimes it was really bad. In high school, I was showing a history teacher how to use search engines, and he searched for “China golden age”. All the results were Asian porn. I think we were using Yahoo.
AltaVista largely solved the context problem. We joke about its bad results now, but it was one of the better search engines before Google PageRank.
Now we have AI unsolving the problem.
This is why no one can find anything on Google anymore: they don’t know how to google shit.
Everyone in this post is the annoying IT person who says “why don’t you just run Linux?” to people who don’t even fully understand what an OS is in the first place.
Installing a whole new OS is not a good comparison to installing a browser. We all downloaded Chrome using Internet Explorer at some point.
You are included in my initial assertion
You’ve sullied my quick answer:
The assistant figures it out though:
Maybe that’s why the AI had trouble determining anything about AJ and the movie Heat: she wasn’t even in it!
In short: BONK
It probably thought you were Elon Musk.
Why is the search query in the top and bottom different?
It’s hilarious: I got the same results with Charlize Theron and the exact same movie. I guess neither of us knows who actresses are, apparently.
DeepSeek also gets this wrong.
So she is in heat …
I’d never heard of the movie and was just enjoying the content you created, which I thought was supposed to be funny.