Why would you develop this technology I simply don’t understand. All involved should be sent to jail. What the fuck.
They worded the headline that way to scare you into that reaction. They’re only interested in telling you about the negative uses because that drives engagement.
I understand AI evangelists - which you may or may not be idk - look down on us Luddites who have the gall to ask questions, but you seriously can’t see any potential issue with this technology without some sort of restrictions in place?
You can’t see why people are a little hesitant in an era where massive international corporations are endlessly scraping anything and everything on the Internet to dump into LLMs et al., to use against us to make an extra dollar?
You can’t see why people are worried about governments and otherwise bad actors having access to this technology at scale?
I don’t think these people should be locked up or all AI usage banned. But there is definitely a middle ground between absolute prohibition and no restrictions at all.
This is unnecessarily aggressive, I don’t need this today.
And your comment was unnecessarily patronizing IMO. Do you think they needed that today?
If you don’t want people to respond to your takes, then don’t post them in public forums. I am critiquing your stance. If it’s overly aggressive, then I apologize for the tone.
I saw what you wrote before your edits. I’m not going to engage with people who talk like that. Good day.
I can’t control that you saw my comments seconds after they were posted, but before the 20-30 seconds it takes for me to edit them. There is nothing I changed drastically enough for you to imply I was being deceptive.
Have a good one.
None of those concerns are new in principle: AI is the current thing that makes people worry about corporate and government BS but corporate and government BS isn’t new.
Then: The cat is out of the bag; you won’t be able to put it back in. If those things worry you, the strategic move isn’t to hope that, out of pretty much nowhere, capitalism and authoritarianism will suddenly fall, never to be seen again, but to a) try our best to get sensible regulations in place (the EU has done a good job IMO), and b) own the tech. As in: develop and use tech and models that can be self-hosted, that give people control over AI instead of leaving them beholden to whatever corporate or government actors deem we should be using. It’s FLOSS all over again.
Or, to be an edgelord to some of the artists out there: if you don’t want your creative process to end up dependent on Adobe’s AI stuff, then help train models that aren’t owned by big CGI. No tech knowledge necessary; this would be about providing a trained eye as well as data (i.e., pictures) that allow the model to understand what it did wrong, according to your eye.
I said:
I have used AI tools as a shooter/editor for years so I don’t need a lecture on this, and I did not say any of the concerns are new. Obviously, the implication is AI greatly enables all of these actions to a degree we’ve never seen before. Just like cell phones didn’t invent distracted driving but made it exponentially worse and necessitated more specific direction/intervention.
Good point good point
Other than the obvious malicious uses of this technology, it could be great for multimedia, great for creative control for cast, great for virtual meetings to always look “your best” (as determined by each individual, e.g. clean-cut pristine, and/or preferred gender, and/or favorite anime, etc.). There are also use cases to hear letters spoken by a lost loved one, or replace the Three Stooges with politicians. Tons of “safe” use cases that I am looking forward to.
I’m not convinced any of these uses are actually beneficial. They mostly range from creepy to pointless.
Entertainment might be pointless to some. I dream of having an on-demand Netflix that will generate whatever type of content I can imagine, or better yet one that already knows my preferences, so all I have to do is tell it my mood and it will start playing something I would like.
A difference in goals, I guess. Having programs generated just to pander to my existing tastes sounds horrible to me. I want to be challenged and surprised and have my tastes tested and changed in unpredictable ways. I also want to watch stuff that’s written by humans and acted by humans, because there’s a sense of shared life there that there isn’t in an AI-generated video.
It’s also then just one step removed from refusing to accept any friends or romantic partners who don’t do exactly what you want at all times because life is supposed to be tailored to you.
Actually I’m not sold on that logic. You could say that about anything at that point. The food that you order, the school you attend, your shoes.
If something is possible, and this clearly is, someone is going to develop it regardless of how we feel about it. So it’s important for non-malicious actors to make people aware of the potential negative impacts, so we can start developing ways to handle them before actively malicious actors start deploying it.
Critical businesses and governments need to know that identity verification via video and voice is much less trustworthy than it used to be, so if you’re currently relying on it, you need to mitigate these risks. There are tools, namely public-private key cryptography, that can verify identity in a much tighter way, and we’re probably going to need to start implementing them in more places.
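The public-key approach mentioned above usually takes the form of a challenge-response: the verifier sends a fresh random challenge, the claimant signs it with a private key only they hold, and anyone with the matching public key can check the signature. Here’s a toy sketch in Python using textbook RSA with tiny primes, purely to illustrate the protocol shape; the primes, exponents, and hash-truncation here are demo assumptions, and a real deployment would use a vetted library and a modern scheme like Ed25519:

```python
# Toy challenge-response identity verification with public-key signatures.
# Textbook RSA with tiny fixed primes -- NOT secure, illustration only.
import hashlib
import secrets

# Key generation from fixed small primes (demo assumption, never do this).
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent (shared with verifiers)
d = pow(e, -1, phi)      # private exponent (kept secret by the claimant)

def sign(message: bytes) -> int:
    # Hash is reduced mod n only so it fits this toy key size.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)  # only the private-key holder can compute this

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # anyone with (n, e) can check

# The verifier issues a fresh random challenge; the claimed identity signs it.
challenge = secrets.token_bytes(16)
sig = sign(challenge)
assert verify(challenge, sig)               # genuine key holder passes
assert not verify(challenge, (sig + 1) % n)  # tampered signature fails
```

The point is that a bank or government service registers your public key once, then verifies each video or voice session by having your device sign a fresh challenge, so a convincing deepfake of your face and voice alone can’t pass.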
Because bags of money. And MS is a hyper toxic entity that’s been siphoning the data of every Windows user for decades now. That company is basically IBM during WW2.