• 3 Posts
  • 50 Comments
Joined 17 days ago
Cake day: January 7th, 2026

  • So a Mastodon ripoff, but with all instances hosted by a single entity (effectively centralized), ensuring every instance resides within European jurisdiction (and allowing full control over it). I don’t see how they genuinely believe they can have humans do the photo validation while competing at the scale of X, especially when they run all the instances themselves. Perhaps they could recruit volunteers to socialize the losses while the platform privatizes the profits. Nothing but a privacy-centric approach though, said the privacy expert…

    Zeiter emphasized that systemic disinformation is eroding public trust and weakening democratic decision-making … W will be legally the subsidiary of “We Don’t Have Time,” a media platform for climate action … A group of 54 members of the European Parliament [primarily Greens/EFA, Renew, The Left] called for European alternatives

    If that doesn’t sound like a recipe for swinging the pendulum to the other extreme (once more), I don’t know what does… Because can you imagine: a modern social media platform that isn’t a political echo chamber, that doesn’t promote extremism through filter bubbles, and that instead allows for de-escalation through counter-argumentation? One would almost start to think it’s all intentional, as a deeply divided population will never stand united against their common oppressor.

  • With “deletion” you’re simply advancing the moment they supposedly “delete” your data; something I refuse to believe they actually do. Instead, I suspect they “anonymize”, or effectively “pseudonymize”, the data (cross-referencing is trivial when a new account shows the same patterns, should the need arise). Simply letting an account stagnate wouldn’t even require services to take such steps, and any personal data would remain connected to you, personally.
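
    To illustrate the pseudonymization point, here’s a toy sketch (entirely hypothetical data and hashing scheme, not any service’s actual pipeline): the identifier gets replaced with an opaque token, yet a new account showing the same pattern is trivially linked back to it:

```python
import hashlib

def pseudonymize(user_id: str, salt: str = "service-secret") -> str:
    # Swapping an ID for a salted hash is pseudonymization, not anonymization:
    # the same input always yields the same opaque token.
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

# History of a "deleted" account, keyed by its pseudonym (made-up data).
archive = {pseudonymize("alice@example.com"): ("03:14 login", "03:15 comments on BLE")}

# A new account exhibiting equal patterns cross-references immediately.
new_pattern = ("03:14 login", "03:15 comments on BLE")
for token, history in archive.items():
    if history == new_pattern:
        print(f"new account matches archived pseudonym {token}")
```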

    For the Gmail account, I would recommend not deleting it. Instead: open an account at a privacy-respecting service (Disroot, as an example), connect the Gmail account to an email client (like Thunderbird), copy all its contents (including ‘Sent’ and other specific folders) to a local folder (making sure to back these up periodically), delete all contents from the Gmail server, and simply wait for incoming messages at the now-empty Gmail account.

    If a worthy email comes in, copy it over to the local folder and delete it from the Gmail server. For services you still use, you can change the contact address to the Disroot account; others you can delete, or simply mark as spam (periodically emptying the spam folder). For privacy-sensitive services you may not want to wait until they finally make an appearance, and change these over to the Disroot address right away.
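
    For illustration, a minimal sketch of the copy-then-wipe step using Python’s standard imaplib and mailbox modules (untested; the address, app password, and folder list are placeholders, and it assumes IMAP access plus an app password are enabled on the Gmail account):

```python
import imaplib
import mailbox

IMAP_HOST = "imap.gmail.com"
USER = "me@gmail.com"            # placeholder address
APP_PASSWORD = "app-password"    # placeholder; Gmail IMAP needs an app password

imap = imaplib.IMAP4_SSL(IMAP_HOST)
imap.login(USER, APP_PASSWORD)

local = mailbox.mbox("gmail-archive.mbox")  # the local folder; back it up!

# '[Gmail]/Sent Mail' is one of the "specific folders" worth copying too.
for folder in ("INBOX", "[Gmail]/Sent Mail"):
    imap.select(f'"{folder}"')
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        local.add(msg_data[0][1])               # copy to the local folder first…
        imap.store(num, "+FLAGS", "\\Deleted")  # …then mark it for deletion
    imap.expunge()  # remove marked messages (check Gmail's IMAP deletion setting)

local.flush()
imap.logout()
```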

    I’ve been doing this for years now, and my big-tech accounts remain empty most of the time. Do make sure to transfer every folder, and make regular backups!

  • I understand you’ve read the comment as a single thing, mainly because it is. However, the BLE part is an additional piece of critique that isn’t directly related to this specific exploit; neither is the tangent on the headphone-jack “substitution”. It is indeed this fast-pairing feature that is the subject of the discussed exploit, so you understood that correctly (or I misunderstood it too…).

    I’m, however, of the opinion that BLE is a major attack vector by design. These are IoT devices that announce themselves periodically to the surrounding mesh, especially when “find my device” is enabled (which in many cases isn’t even optional: “turned off” iPhones, for example), allowing for the precise location of these devices, and therefore also of the persons carrying them. If bad actors were to gain access to, say, Google’s Sensorvault (legally, in the case of state actors), or found ways of building such databases themselves, then I’d argue you’re in serious waters. Is it a convenient feature that helps one relocate lost devices? Yes. But this nice-to-have also comes with a serious downside, which I believe doesn’t come near justifying the means. Rob Braxman has a decent video about the subject if you’re interested.
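
    To see that periodic announcing for yourself, here’s a rough sketch (assuming the cross-platform bleak library; untested) that merely logs every BLE advertisement in range; a timestamp, address, and signal strength are exactly the raw material such a location database would be built from:

```python
import asyncio
from datetime import datetime

from bleak import BleakScanner  # assumed: cross-platform BLE scanning library


def on_advertisement(device, advertisement_data):
    # Many devices randomize their MAC over time, but "find my device" style
    # beacons are designed to remain resolvable by the vendor's mesh.
    print(f"{datetime.now():%H:%M:%S}  {device.address}  "
          f"RSSI={advertisement_data.rssi}  name={device.name}")


async def main():
    # Passively listen for 30 seconds and log everything that announces itself.
    async with BleakScanner(detection_callback=on_advertisement):
        await asyncio.sleep(30)

asyncio.run(main())
```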

    It’s not even a case of kids not wanting to switch; most devices don’t even come with 3.5mm jack connectors anymore…

  • AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.

    I can’t help but always be a bit skeptical when reading something like this. To me it’s akin to having to do calculations manually while there’s a calculator right beside you. For now, the technology might not yet be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that match a maintainer’s, like, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?

    Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.

    And how exactly do you enforce that? It seems like you’re just shifting the problem.

    Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

    I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.

    Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.

    If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly implemented into the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.

    For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active usage of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented pieces of software; a field where AI has generally been rather invasive.