People around me use AI all the time to get answers to generalized topics. More and more they use it like a search engine / information augmentation system.
They are not technical people. They mostly know that the information might be wrong and needs to be double-checked, but they usually take it at face value if the stakes are low.
Honestly, this is about what they did before. They would search Google, click on the first blog, skim it, and repeat until they got an answer they believed.
I too use AI regularly for brainstorming, quickly summarizing massive text messages, and reformatting text from a jumbled mess into something more cohesive, etc.
I don’t love it or hate it. In some cases it saves a lot of time and is a useful tool. In other cases it outputs trash that can’t be used for anything serious.
Just like a hammer or a shovel, it’s a tool. It can be used the right way or the wrong way.
It can be helpful for quickly summarizing a vast body of knowledge or a highly complex topic, giving you a general overview and showing which strings to pull further, as long as you don’t take everything at face value and understand that you still need to pull those strings yourself to build real understanding.
Like, if I suddenly wanted to learn computer programming, I wouldn’t know where to start. But querying an LLM can give me a general idea, define a few key terms and explain the difference between related concepts, without me having to browse through a hundred different tech blogs to answer all my questions in terms I can understand.
But I wouldn’t suddenly think I’m a computer programmer after doing that. I would have a better idea of where to start learning. I would be able to decide whether to focus first on object-oriented programming or functional programming, static or dynamic typing, declarative or imperative syntax, etc., instead of getting overwhelmed from the start just trying to learn the differences between those concepts.
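To illustrate the kind of distinction I mean, here’s a minimal sketch (my own example, not something an LLM produced) contrasting imperative style with a more functional, declarative-leaning style in Python:

```python
# Imperative style: spell out step-by-step HOW to build the result.
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# Functional/declarative-leaning style: describe WHAT the result is.
def squares_functional(numbers):
    return [n * n for n in numbers]

print(squares_imperative([1, 2, 3]))  # [1, 4, 9]
print(squares_functional([1, 2, 3]))  # [1, 4, 9]
```

Both produce the same answer; the difference is in how the computation is expressed, which is exactly the kind of conceptual contrast an LLM can explain faster than a pile of blog posts.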
It can also suggest resources for further learning, books or websites written by humans, links to open-source software that does what I’m trying to do, etc.
I wouldn’t expect it to write code for me, but it can be an efficient aid to self-learning and show me what programs and libraries to use for my intended purpose.
Or take astrophysics, for example. I wouldn’t expect it to give me an accurate breakdown of the engineering specs required to build a pair of O’Neill cylinders at a Lagrange point, but it can suggest software for rendering prototypes or for simulating the forces that need to be accounted for.
That wouldn’t make me an astrophysicist, but it’s kind of cool that you don’t need to be one to learn about this stuff and tinker around in a field that’s so vast and technical as to be otherwise prohibitive for non-experts.
It also depends on the LLM, of course. I think Mistral and Lumo are generally pretty okay at doing what I described above. Their algorithms aren’t corrupted by American venture capital, at least, so they have more incentive to give you an accurate response rather than being sycophantic and hugboxing.
I’m sorry, but all the use cases you listed show that you’re just lazy. Stop it. It’s embarrassing.
I’m lazy as fuck. I want to solve problems in the easiest way humanly possible. With the least amount of effort output.
What about you? Do you take the hard way?
Do you not cross reference multiple archived news articles and seek out past attendees to remind yourself of what Britney Spears wore at her last concert? smh
I’ll be real with you, I typed lazy but wanted to type idiot. Read your fucking emails, Jesus Christ. You still have to check all the shit generative AI writes because it lies constantly. Its very nature means it doesn’t understand what it’s generating.
Hard to tell if you’re trolling or trying to add value to the conversation and just missing it.
A hammer doesn’t know what it is building but it is still useful.
This is the nature of tools: for some they improve output, for some they don’t.
Everyone’s a god damn tool philosopher.
Personally, I’m fine with banning cigarettes regardless of how responsibly my dead grandpa may have used them.
Obviously, don’t rely on them to read important emails for you. But so many things don’t need additional checking. We’ve all done at least a decade of schooling. We all know basic math, science, and history. When we forget things, all it takes is a small reminder to get it back. Our brains are capable of recognizing whether we’ve seen something before or not. We’re also capable of reasoning to determine whether something we read is consistent with everything else we know.
So many other things are also so unimportant that it doesn’t matter at all if you’re wrong. For example, some actor looks familiar, it lies to you about what film they were in, and you believe it. Is your life any worse off for it?
I think a better question is: why, then, am I asking it questions?
If I had a friend I knew was a notorious liar, I would—big chess move—simply stop asking him who actors are. Unless it was really funny.
If it’s a liar that lies every time or most of the time, then yeah, don’t bother.
I can’t actually think of any specific scenario where something is unimportant enough to not matter but important enough that you’d ask. What I was originally thinking of were actually scenarios where I planned to verify the information at a later time, but I mistook that in my head as not verifying it.
Yeah, fair enough.
The only time it’s happened to me is when Gemini violates my eyes with its presence.