• 0 Posts
  • 298 Comments
Joined 5 years ago
Cake day: February 15th, 2021

  • Imho, that’s a slippery slope argument. Like arguing that communities should have no moderation at all (not even when it’s fair) because it opens the door for unfair moderation too…

    One might as well argue a slippery slope in the opposite direction: the more you reject parental-control methods that you can control, the more incentive they’ll have to instead promote methods where you’ll have no control. So you could equally say that rejecting this method strengthens the case for proposals that would progressively give you less and less capacity for control (or, in particular, less capacity to actively disobey).




  • Ferk@lemmy.ml to Fediverse@lemmy.ml · mastodon age verification · edited 2 months ago

    The thing is that age verification in a digital world is not easy… what exactly does the government mandate as a valid verification method?

    Like… would asking the user their age be valid enough? … because it’s not as if a reliable method exists (not even credit card verification prevents a minor from taking their parents’ card and going through with it). IMHO, until the government actually sets a standard, I don’t see why websites should give anything more than the most minimal effort possible when it comes to this.
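    In the absence of a mandated standard, that “most minimal effort possible” usually boils down to pure self-declaration. A toy Python sketch (the function name and cutoff age are made up for illustration) shows why it verifies nothing:

    ```python
    from datetime import date

    def is_of_age(birth_date: date, minimum_age: int = 16) -> bool:
        """Self-declared age check: trusts whatever birth date the user typed in.
        Nothing here is verified against any authoritative source."""
        today = date.today()
        # Subtract one year if this year's birthday hasn't happened yet.
        age = today.year - birth_date.year - (
            (today.month, today.day) < (birth_date.month, birth_date.day)
        )
        return age >= minimum_age

    # A minor passes trivially by typing an earlier year,
    # which is exactly the weakness being pointed out.
    print(is_of_age(date(2000, 1, 1)))  # prints True
    ```

    Any gate built on user-supplied input has this same property, no matter how elaborate the form looks.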


  • I agree, which is why I think running those open source apps on a separate computer, isolating infotainment from the more critical software, would be a stronger safety layer.

    Them being separated should, imho, be a precondition: it minimizes accidents and exploits in cars that might be running software that isn’t immediately kept up to date while publicly known vulnerabilities keep being discovered as the code evolves.


  • Open source software is not bug free. I’d argue there are more vulnerabilities caused by human error than there are caused by malicious actors. More often than not, malicious actors are just exploiting the errors/gaps left by completely legit designers.

    Running those open source apps on a separate computer, isolating infotainment from the more critical software, would be an even stronger safety layer, imho.


  • Running it through the same computer is a bad practice, imho. Remember the Jeep Hack where researchers were able to dig into the integrated infotainment system and control the brakes?

    I wouldn’t want to have critical car functions (or emissions control, regulatory software, ADAS, telematics, etc) depend on the same device that someone might be using to connect to the internet and/or run Android Auto apps. Regardless of whether it’s integrated or not.

    I guess it might be ok to share energy and some non-critical capabilities with the infotainment system… but you can do that through a USB-C connection without requiring it to be integrated directly in the vehicle. Imho they should be isolated, and what better way to isolate them than using completely different computers?





  • SIM card is absolutely required even for emergency services

    For anyone wondering: while technically cell towers might accept emergency calls even without network authentication (which is what the SIM is for), there are countries/places that still require an active SIM, with the excuse of wanting to prevent hoax calls.


  • LLMs abstract information collected from the content through an algorithm (what they store is the result of a series of tests/analysis, not the content itself, but a set of characteristics/ideas). If that makes it derivative, then all abstractions are derivative. It’s not possible to make abstractions without collecting data derived from a source you are observing.

    If derivative abstractions were already something copyright could protect, then litigants wouldn’t resort to patents, etc.
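    To illustrate the distinction being drawn, here’s a toy Python sketch of an “abstraction” step (function and field names are hypothetical): it reduces a text to a handful of derived statistics from which the original wording cannot be reconstructed, loosely analogous to storing characteristics rather than the content itself.

    ```python
    from collections import Counter

    def abstract_features(text: str) -> dict:
        """Toy 'abstraction': reduce a text to a few derived statistics.
        The original wording is not recoverable from these numbers."""
        words = text.lower().split()
        return {
            "num_words": len(words),
            "num_unique": len(set(words)),
            "top_word": Counter(words).most_common(1)[0][0],
            "avg_word_len": sum(len(w) for w in words) / len(words),
        }

    features = abstract_features("the cat sat on the mat")
    print(features["num_words"])  # prints 6
    print(features["top_word"])   # prints the
    ```

    Real models abstract far more than word counts, of course, but the point stands: what is stored is the result of analysis over the source, not a copy of it.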


  • You are not gonna protect abstract ideas using copyright. Essentially, what he’s proposing implies turning this “TGPL” into some sort of viral NDA, which is a different category of contract.

    It’s harder to convince someone that a content-focused license like the GPLv3 also protects abstract ideas than to create a new form of contract/license designed specifically to protect abstract ideas (not just the content itself) from being spread in ways you don’t want them to spread.


  • Ferk@lemmy.ml to Comics@lemmy.ml · Birth rates · edited 4 months ago

    Yes, but they do it in order to fill a hole in their lives, to have a “greater purpose”, to give their lives meaning. Ultimately all we do is satisfy our desires… and the push towards caring for kids is one of the biologically hardwired desires we evolved to have; the reason we do it is not really a lack of ego. Having a family is something people want for themselves, for their own happiness.

    I believe it’s literally impossible for a person not to be egoistic without going crazy and/or offing oneself. Even Christians who preach self-sacrifice and generosity do it only because of the promise of a better afterlife and their own self-interest in avoiding hell and/or being closer to their god.


    1. The Pixel is easily unlockable, so one can install custom firmware without being a “pro”. Its hardware is (or was reverse-engineered to be) compatible enough to make the experience seamless, with a whole firmware project/community dedicated exclusively to that specific range of hardware devices, making it a target for anyone looking for a phone to install custom Android firmware on.

    But I’d bet it’s a mix of 2 and 3.



  • Yea, but he’s (intentionally?) misrepresenting things… people are not “unimpressed” by AI; what they are is uninterested in MS’s “agentic OS”. These are not the same thing.

    It’s irresponsible to hand control of your machine to an AI integrated that deeply into the OS, particularly when it’s designed to be tethered to the network and is privately owned and managed by entrepreneurs whose first and main priority is the company’s interests.




  • Sounds like a prioritization issue. They could configure the git bots to automatically flag all these as “AI-reported” and filter them out of their TODO, treating them as low priority by default, unless/until someone starts commenting on the ticket and bringing it to their attention / legitimizing it.

    EDIT: ok, I just read about the 90-day policy… I feel the problem then is not the reporting, but the further actions Google plans based on an automated tool that seems inadequate to judge the severity of each issue.
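    The triage rule described above could be sketched roughly like this in Python (the field names and labels are hypothetical, not any real tracker’s API): bot-filed reports start out filtered from the active TODO until a human comments on them.

    ```python
    def triage(issues):
        """Toy triage rule: automated reports are labeled and held back
        as low priority by default; a human comment 'legitimizes' the
        issue and moves it into the active TODO."""
        todo = []
        for issue in issues:
            ai_reported = issue.get("reporter_is_bot", False)
            human_attention = issue.get("human_comments", 0) > 0
            if ai_reported and not human_attention:
                # Keep the report, but out of the way.
                issue["labels"] = issue.get("labels", []) + ["AI-reported", "low-priority"]
            else:
                todo.append(issue)
        return todo

    issues = [
        {"id": 1, "reporter_is_bot": True, "human_comments": 0},
        {"id": 2, "reporter_is_bot": True, "human_comments": 3},
        {"id": 3, "reporter_is_bot": False},
    ]
    print([i["id"] for i in triage(issues)])  # prints [2, 3]
    ```

    A scheme like this only works if the disclosure policy doesn’t start its own clock regardless of whether a human has looked at the report, which is exactly the tension the 90-day policy creates.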