

If you click on the graph, it’ll turn into a data table showing ~48 hours worth of information. Is that what you’re looking for?
Long story short: you are being made (by default) to give up rights that you should have, particularly around class-action lawsuits. It’s strictly bad for you and strictly good for the company. They probably shouldn’t be allowed to do this. Since they are, the only thing we can do to protest it is to opt out.
Maybe you’ll never sue Discord. But maybe someday there will be a lawsuit brought against Discord by someone else. A few possible topics: a security vulnerability that leaks personal information, the use of Discord content as AI training data (e.g. copyright issues), or the safety of minors online. If you don’t opt out, you can’t be part of such lawsuits if they ever become relevant. That weakens these lawsuits overall and empowers companies like Discord to do more shady things with less fear of repercussions.
And since the vast majority of people will never opt out (because you’re opted in by default), these kinds of lawsuits are weakened from the start. That’s why every company in the US is doing this forced-arbitration thing. At this point they would be crazy not to, since it’s such a good thing for them and the average person doesn’t care enough about it.
Oops. Good to know… I guess the main thing was simply that there was a BK in the right place relative to the 9792 km arc then.
I’m not the person who found it originally, but I understand how they did it. We have three useful data points: you are 2.6 km from a Burger King in Italy, that BK is on a street called "Via ", and you are 9792 km from a Burger King in Malaysia.
It’s not perfect, but it works well! This is the same principle your GPS uses; strictly speaking it’s called trilateration (triangulation technically uses angles rather than distances). Here we only had distances to two points, and one of them didn’t give sub-kilometer precision. With distances to three points, we could pin down your EXACT location, within some error depending on how precise the distance information was.
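To make the idea concrete, here is a minimal 2D sketch in Python (made-up coordinates on a flat plane rather than real great-circle distances, and not how the original find was actually done): subtracting one circle equation from the other two turns the problem into a small linear system.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2D position from distances to three known reference points.

    Subtracting the first circle equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system we can solve directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    A = np.array([
        [2 * (x2 - x1), 2 * (y2 - y1)],
        [2 * (x3 - x1), 2 * (y3 - y1)],
    ])
    b = np.array([
        d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
        d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2,
    ])
    return np.linalg.solve(A, b)

# Made-up reference points (km) and the straight-line distances to each.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
distances = [float(np.hypot(*(true_pos - np.array(a)))) for a in anchors]

print(trilaterate(anchors, distances))  # -> approximately [3. 4.]
```

With only two distances the circles generally intersect in two candidate points, which is why the extra clue (the street name) was needed to break the tie.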
Huh, go figure. Thanks for the info! I honestly never would have found that myself.
I still think it should be possible to use in:channel on the channel-specific search though. One less button press and it can’t be that confusing UX-wise since you have clear intent when doing it (if anything, the fact that the two searches work differently has to be more confusing UX-wise).
One of the biggest issues for me is that you can’t use ‘in:#channel’ anymore in searches for some inexplicable reason. But only on the mobile app — it works fine on desktop! If you could do that it would be fine.
You say “only” 6 months ago but it’s surprising to me just how quickly this time has passed.
I was an everyday Reddit user pre-Lemmy. I happened to get linked to something there yesterday and saw all my subs’ “last visited” dates at 6 months. It’s crazy how easy it was to go cold turkey, and I haven’t seen a need to go back.
The cops aren’t around so they can freely violate the law of gravity.
Copilot, yes. You can find some reasonable alternatives out there but I don’t know if I would use the word “great”.
GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run a quantized Llama-2 on an M1 (or better) MacBook, making use of the unified memory (see the sketch below). Lack of GPU VRAM makes running even the “basic” models a challenge. And, for the record, this will still perform substantially worse than GPT-4.
If you’re willing to pony up, you can get some hardware on the usual cloud providers, but it will not be cheap, and it will still require serious effort, since you’re basically going to have to fine-tune your own LLM to get anywhere in the same ballpark as GPT-4.
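For reference, the quantized-Llama-2-on-a-MacBook route mentioned above looks roughly like this with the llama-cpp-python bindings. This is just a sketch, assuming the package was built with Metal support and you already have a quantized model file downloaded (the path below is a placeholder):

```python
# Sketch: run a quantized Llama-2 locally via llama-cpp-python.
# Assumes llama-cpp-python was installed with Metal support so layers can be
# offloaded to the Apple Silicon GPU through unified memory, and that a
# quantized GGUF model file has already been downloaded (placeholder path).
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_K_M.gguf",  # placeholder local file
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload every layer to the GPU (Metal)
)

result = llm(
    "Q: What is unified memory on Apple Silicon? A:",
    max_tokens=128,
    stop=["Q:"],
)
print(result["choices"][0]["text"])
```

On 16 GB of unified memory a 4-bit 13B model is about the practical ceiling, and as noted above, the output is nowhere near GPT-4.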
The alt text on that XKCD is even better:
“I recently had someone ask me to go get a computer and turn it on so I could restart it. He refused to move further in the script until I said I had done that.”
Definitely AI generated. Look at the bottom-right of the Confederate flag: it’s all messed up, classic generative AI “artifacting”, for lack of a better word.
Edit: the original was posted lower down in the thread. This version was upscaled (very poorly) by AI.
Seems like you might have fallen victim to the Scunthorpe Problem. I’m sure you can guess what word they were trying to censor there…
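For anyone who hasn’t run into it: the Scunthorpe problem is what happens when a filter blocks any string that merely contains a banned word as a substring. A toy sketch with a hypothetical blocklist:

```python
# Naive substring censor: the root cause of the Scunthorpe problem.
# The blocklist entry here is hypothetical, purely for illustration.
BANNED = ["cunt"]

def is_blocked(text: str) -> bool:
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED)

print(is_blocked("Scunthorpe"))  # True  -- false positive on an innocent place name
print(is_blocked("Sheffield"))   # False
```

Real filters usually mitigate this with word-boundary matching or allow-lists, but plenty of systems still do plain substring checks.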
True, but those people are great when all you care about is line going up. People who think critically and ask questions don’t make line go up as fast.