Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.
I… don’t even. I lack the words.
Have you tried ChatGPT?
That’s the second time a lawyer has made this mistake, though the previous case wasn’t at such a high level.
Not even close to the second time. It’s happening constantly but is getting missed.
Too many people think LLMs are accurate.
deleted by creator
Some LLMs are already generating answers based on other LLM-generated content. We’ve come full circle.
I was using phind to get some information about edrum sensors (not the intended use case, but I was just messing around), and one of the sources was a very obvious AI-generated article from a content mill.
Skynet is going to be so inbred
Model collapse is going to be a big deal and it doesn’t take too much poisoned content to cause model collapse.
Have found, not will find.
There are so many spam sites with LLM content.
I work for a law firm, and yeah, this happens a lot. The stupidity and laziness of our clients’ in-house attorneys are making us a lot of money.
So, AI is… checks notes… making you a lot of money, by association?
I do get profit sharing. :)
deleted by creator
Why is there not an automated check for any cases referenced in a filing, or required links? It would be trivial to require a clear format or uniform cross-reference, and this looks like an easy niche for automation to improve the judicial system. I understand that you couldn’t interpret those cases or the relevance, but an existence check and links or it doesn’t count.
I assume that now it doesn’t happen unless the other side pays a paralegal for a few hours of research.
I think the issue is we’re still in pretty uncharted territory here. It’ll take time for stuff like that to become the norm. That said… the lawyers should be doing those kinds of checks anyway. They’re idiots if they don’t.
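The existence check proposed above could be sketched in a few lines. This is a hypothetical toy, not a real system: the citation pattern is drastically simplified (real Bluebook citations are far messier), and the `KNOWN` set stands in for an actual case-law index such as a court database an automated filing system would query.

```python
import re

# Rough pattern for a U.S. reporter citation, e.g. "410 U.S. 113" or "123 F.3d 456".
# Hypothetical and simplified for illustration only.
CITATION_RE = re.compile(r"\b(\d+)\s+(U\.S\.|F\.\d?d|S\. Ct\.)\s+(\d+)\b")

def extract_citations(filing_text):
    """Return every reporter-style citation string found in the filing."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(filing_text)]

def check_citations(filing_text, known_citations):
    """Split extracted citations into (found, missing) against a known index."""
    found, missing = [], []
    for cite in extract_citations(filing_text):
        (found if cite in known_citations else missing).append(cite)
    return found, missing

# Toy stand-in for a real case-law database.
KNOWN = {"410 U.S. 113", "347 U.S. 483"}

filing = ("Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), "
          "and the fabricated Smith v. Jones, 999 F.3d 999.")
found, missing = check_citations(filing, KNOWN)
print("verified:", found)
print("not found:", missing)  # a nonempty list here would flag the filing
```

As the original comment notes, this verifies only that a cited case exists, not that it is relevant or correctly characterized, but it would have caught fabricated citations like Cohen’s.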
This is what you get when the political system favours lies above truth
The more these people lie and get away with it, the more it will become the culture. China levels of big brother oppression are only a decade or so away if this keeps on going.
The problem is breathless AI news stories have made people misunderstand LLMs. The capabilities tend to get a lot of attention, but not so much for the limitations.
And one important limitation of LLMs: they’re really bad at being exactly right, while being really good at looking right. So if you ask one to do an arithmetic problem you can’t do in your head, it’ll give you an answer that looks right. But if you check it with a calculator, you find the only thing right about the answer is how it sounds.
So if you use it to find cases, it’s gonna be really good at finding cases that look exactly like what you need. The only problem is, they’re not exactly what you need, because they’re not real cases.
And this is the guy they want testifying about 45?
“Your honor, I object on the grounds that the prosecution witness is incompetent.”
Because it’s devastating to my case!
“then he is on equal footing with the defense, objection overruled”
While the individuals have a responsibility to double check things, I think Google is a big part of this. They’re rolling “AI” into their search engine, so people are being fed made up, inaccurate bullshit by a search engine that they’ve trusted for decades.
That’s not what they’re talking about here. Unless this is somehow different in the US, only Microsoft so far shows an LLM “answer” next to search results.
Google may not be showing an “AI” tagged answer, but they’re using AI to automatically generate web pages with information collated from outside sources to keep you on Google instead of citing and directing you to the actual sources of the information they’re using.
Here’s an example. I’m on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn’t be biased for or against me) and did a search for “best game of 2023”. I got no actual results in the entire first screen. Instead, their AI or other machine learning algorithms collated information from other people and built a little chart for me right there on the search page and stuck some YouTube (also Google) links below that, so if you want to read an article you have to scroll down past all the Google generated fluff.
I performed the exact same search with DuckDuckGo, and here’s what I got.
And that’s not to mention all the “news” sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.
It was kinda funny to me when everyone freaked out about misinformation and the “death of search,” when I see a lot of people who already never leave Google and treat Instant Answers as the truth, like they do with ChatGPT, despite those answers being very inaccurate and out of context a lot of the time.
Never expect the bottom 80% of the bell curve to have self awareness. That’s a bet you lose 9 times out of 10.
Funny how “self awareness” has two meanings here. It’s the essence of what makes humans the smartest animals, but the problem you’re referring to—lack of self reflection—is one of the most common problems amongst people today. Common sense ain’t so common.
Stable Geniuses. All that bunch.
Well, to be fair, Michael Cohen is not a lawyer, so how could he have known?
deleted by creator
You mean like what’s happening in Gaza right now? You think all those weapons of war made in the last half decade don’t have AI routines programmed into them? You think the Iron Dome works like Space Invaders, with people clacking buttons, or an aimbot shooting wildly before you can even comprehend there’s a target to shoot at?
All AI is doing is amplifying problems that already exist. Too many people lack media literacy, and too many people resort to anger and opposition when they don’t understand something.
I’m genuinely amazed at the calibre of people running the US. More so that apparently half the nation thinks it’s the best choice.