my counter-point was that most people aren’t open to installing an operating system

I mean, the original point didn’t say users should be required to install it themselves. It just said that phones should have an open source OS to increase their life span, which is something your “counter-point” builds on rather than contradicts or opposes.

In fact, not every Android phone has open source firmware available that properly supports the hardware, so there are many cases where even if you knew how to install it you wouldn’t be able to.

Exceptions like the Pinephone are super rare, and I wouldn’t expect that to change without force.

I agree. There needs to be either legislation or a consumer-driven shift. The real problem is that most users don’t seem to care much about this and would rather get a shiny new phone with the latest trending features than a Pinephone or Fairphone.

I think the point was that open source software makes it last much longer. If using an open source Android OS has extended the life of your phone, then you are proving his point.

Of course it’s not the only thing that can extend the life of the phone, and of course additional measures should be taken to extend it further, but that doesn’t contradict anything the comment said.

Also, if having an open source OS isn’t a “simple option” for the “typical consumer”, then we aren’t even there yet. Imho, phones should come with a fully open source OS that is easily upgradable independently of the manufacturer, right out of the store.

The thing is that copyleft uses the same legislation that corporations use to protect themselves. So spending a lot on lawyers for this might set a precedent that could backfire.

Also… I don’t think there’s much incentive for them to do this, because if they have that much money they might as well just make their own software and have better control over it. Or use non-copyleft licenses like MIT instead, as Google has sometimes done.

It’s very likely there are already many GPL violations out there that we’ll never know about, due to obfuscation or them simply being hard to identify, but nobody has the time, the money or the power to actually challenge a corporation about it and come out unscathed.

This is all human-made. One way or another, the cause is always between the monitor and the chair. One of the reasons I find the crypto space so toxic and dangerous is its insistence on technosolutionism.

Precisely; you can’t stop technosolutionism if you don’t differentiate between the technical factors and the human ones.

Saying technical issues are all the same as human ones, or on the same level (just because they are “human-made”), is in fact technosolutionist.

The goal is to solve human issues by manipulating technology, not to solve problems in the technology by manipulating humans. Manipulating humans is not on the same level as manipulating technology… I think this should be pretty clear.

Your analogy falls apart due to how small the ratio of non-scammy uses of NFTs to scammy ones is.

The issue is that if the nature of NFTs already makes such purchases “scammy” to you, then of course most of it will be “scammy”. But note that something feeling scammy to you is not the same as actual fraud being committed. If someone is fully aware that they are buying something because they purposefully want to speculate in an extremely unstable market, then it’s their own fault if the risk they took doesn’t pay off. That’s not the same thing as getting scammed.

Myself, I’m not one to take such risks, and in fact, right now my bank is charging me just for having money sitting in my account doing nothing, which makes no sense to me! I wish I could just keep it all as cash in a vault at home and not need banks, but sadly sending cash by post is not exactly secure (nor generally accepted). It’s too bad there isn’t a safe and government-backed cryptocurrency infrastructure in place. I would certainly find that useful.

And they will not be able to solve [domain names] with blockchain tech.

Some have already used the blockchain for that purpose, though. GitTorrent used the Bitcoin blockchain before (I’m not up to date on the current state of that project; I hear it’s no longer maintained and there are other alternatives). And there’s also ENS for .eth domain names, which is distributed, or am I wrong?

We’re talking legal issues […], disputes […] Neither of these can be written down in code, be it on blockchain or not.

But those are human issues, they should not be in the code itself, just like they aren’t in the code of current DNS servers either. Instead, the tech should just be transparent and flexible enough to allow that kind of human control (again, humans are meant to manipulate the technology, not the other way around).

If anything, I’d imagine a public ledger in a blockchain with proper authorization using government issued signatures would make it easier to track and identify the owner and have legislation impart whatever sanction or punishment. Wouldn’t it? (I’m not even sure if the current DNS system allows this, I believe you can get domain names with some level of anonymity if you really want to).

I think the problem here is finding the sweet spot between privacy and identification, maybe with different levels for different purposes. If this was controlled by each government, with layers and measures in place that allowed some level of anonymity while also allowing disclosure in circumstances that require it, this could be a very controlled and safe tool.

In particular, I think a public p2p ledger would be helpful to have traceability of public funds in a way that can be peer-reviewed without depending on the government “accidentally” losing a hard disk or destroying evidence “by mistake”. Which is something I’ve seen happen more than once in my country whenever there’s an internal investigation for corruption.

It’s essentially a wrapper around WebKit.

Knowing the people at suckless, I was surprised when they launched surf based on WebKit instead of going for a cleaner & simpler engine like the one from NetSurf, even if that would have meant most websites wouldn’t work. After all, the web is anything but clean & simple. Compromising the UX in favor of cleaner code never stopped the suckless team before.

FLOSS community is not perfect, for example, but bullshit gets called out. Projects that make exorbitant claims about security (snakeoil, etc), get called out. But crypto scene acts as if that’s bad for business.

I think we have to differentiate the technical factors from the human ones. Calling out security vulnerabilities is not a problem, but when the cause is between the monitor and the chair then things get much more complicated.

Can’t generate “bad press”, right? Because if one does, they and potentially the whole scene is NGMI, HFBP!

Just not for the wrong reasons. It would be silly to say “internet” = “porn”, or “peer to peer” = “piracy”, so for the same reason, “NFT” = “fraud” is just as misdirected, imho.

I’ll agree to drop the analogy about xenophobia, since it’s true that it’s a sensitive one (though I do still think it fits). But I hope you at least accept these other examples of broad generalizations that mischaracterize entire technologies, attributing to them a negative purpose just because some specific instances happen to be circumstantially “optimal” for that purpose.

Saying “the association is well-deserved” already is admitting to the mischaracterization.

And frankly, I have not yet seen a single use of NFTs that is not either unnecessary (as in: whatever is being done could be done as well or better without NFTs)

It would be great to find a solution for distributed domain names that works as well as or better than what can be done with NFTs; it’s something that p2p distributed networks haven’t managed to solve without blockchain tech.

not calling out crypto/NFT/web3 scams just to preserve the few potentially useful and non-scammy projects would be effectively aiding and abetting the scammers

I’m all for calling out any and all scams, just as long as we separate the technology from the scam. My problem isn’t with this article, but with the reactions in the comments that seem to jump to conclusions and paint things with broad strokes, assuming NFT = fraud.

Those are fair points. But I’m used to seeing so much bad press against NFTs from people who blindly criticise them and associate them with any possible bad use… to the point that they think “NFT = bad”, and this kind of news paints that picture for anyone who doesn’t know better…

It would be like highlighting in the news every crime perpetrated by a person of color and then complaining about “whataboutism” when someone points out that white people also commit crimes.

I’m afraid that all this demonization will make it much much harder for any fair and honest project that we ever attempt in the future related to blockchain technology (such as the one you mentioned).

But he didn’t really say that banks are bad, or that the cryptocurrency/NFT/web3 scene isn’t rife with scams.

Scams also existing in fiat currency (his point) doesn’t make fiat bad, in the same way as cryptocurrency/NFT/web3 having good uses doesn’t mean that it cannot also be “rife with scams”.

Are hammers bad because people can use them to smash skulls? Imho what we need are measures to prevent, block, minimize or discourage that kind of behavior, not necessarily a ban on hammers.

Personally, I think the open source and p2p nature of blockchain technology can be a better way to introduce measures of control and protection, in a way that is fairer and more transparent than using obscure private ledgers in the hands of central authorities managed by humans we have to trust…

There’s also #uncivplayers:matrix.org, although I don’t think there are many people there.

There are many other libre game Matrix rooms in the #libregaming-space:matrix.org space; that’s where I found it.

EDIT: whoops! I just realized that it’s the same room you linked! :P

It’s definitely not optimal for that. In my opinion, using proper blogs, websites and feeds is a much more intelligent, decentralized, and powerful alternative to artificially limited microblogging.

The only reason companies and groups love having a Twitter account is that it allows them to advertise themselves there, due to how big its userbase is. It also allows them to have more direct engagement with their “followers”, or to appear more “down to earth”, precisely because the platform has traditionally been more “individual-centered”. Twitter just happens to be good for marketing. And the same goes for Facebook.

Imho, the blogosphere was in a very good place before Twitter and Facebook started to rise in popularity, when having a personal website was a more common thing to do. Imho, the solution isn’t Mastodon either… I’d much rather go back to when using feed readers was a thing. I just wish there was a more modern pub-sub-like alternative to RSS that we could use for websites (or maybe there is, but nobody uses it…), and a more standardized API for viewing/posting comments on a blog post directly from your feed reader.

Hmm… that’s interesting, actually. Having users authenticate might help with some instances of trolling and abuse, but at the same time identification creates trouble for privacy.

A middle ground would be allowing non-verified users to participate, but giving them lower influence on the relevance of the content, perhaps with caps that limit how much non-verified influence can affect the weighted relevance of a post (so content promoted by unverified accounts would get lower priority, and pushing it with a farm of non-verified bot accounts would not have much of an impact).
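To illustrate what I mean, here’s a toy sketch (all names and numbers are made up, not any real platform’s scoring):

```python
def weighted_relevance(verified_votes: float, unverified_votes: float,
                       unverified_cap: float = 0.25) -> float:
    """Toy relevance score: unverified votes still count, but their total
    contribution is capped relative to the verified contribution."""
    # Unverified influence can never exceed `unverified_cap` times the
    # verified score (floor of 1, so posts with zero verified votes can
    # still surface a little).
    capped_unverified = min(unverified_votes, unverified_cap * max(verified_votes, 1))
    return verified_votes + capped_unverified

# A bot farm of 1000 unverified accounts barely moves the score:
# weighted_relevance(100, 1000) -> 125.0, vs. weighted_relevance(100, 10) -> 110
```

The point of the cap is that adding more unverified accounts past the threshold has zero marginal effect, which makes bot farming unprofitable.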

Of course there’s likely gonna be some level of bias based on who would go through the trouble of verifying themselves… but that’s not the same thing as not being transparent. Bias is a problem you cannot escape no matter what. If a social network is full of idiots, the algorithm isn’t gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn’t.

There’s still the chance that they have/make an algorithm that can actually be transparent without being exploitable in ways that are detrimental (which is what I would consider a “good algorithm”)… but I agree that this is the least likely outcome.

Still, I couldn’t care less about any of the other outcomes. I have nothing to lose whether Twitter burns or stays as it is 😁

Personally, I wouldn’t say that an algorithm that relies on obscurity (needless complexity being a form of obscurity) would be a good algorithm, not when it’s public. I guess we’ll see.

It’s possible that the algorithms will have to be heavily refactored, cleaned up and maybe simplified before they are publicly released, since I expect that many of those approaches would be useless against someone with access to the code and the ability to run tests against it systematically to “game the system”.

I’m not interested in Twitter (or any “individual-centric” social network to be honest… I don’t want to “follow” people, but ideas/topics). So I don’t have anything to lose from this.

I might have something to gain if he actually open sources the algorithms Twitter uses, because if they are actually good (I have no idea), they could have other applications too.

Not going to happen unfortunately, it’s basically stopped, I think I’ve read somewhere that the last commit to hurd’s code was something like 3 years ago

The development pace is very slow, and it’s true it’s a very low-priority project for the FSF, but it’s not completely dead yet. The last commit was ~28 hours ago. https://git.savannah.gnu.org/cgit/hurd/hurd.git/log/

I don’t know the current state of more modern models, but Kobo devices used to be quite hackable, at least about 7 years ago when I got mine. They already run Linux under the hood, and it was not hard to install KOReader on mine.

If you want to minimize dependence on outside servers, then it’d be best to centralize, not federate. If you want a million servers with 5 people on each, you depend on the outside servers you want content from, unless you are exclusively interested in the content those 4 other people post on your one server.

Your subscription to a community on an outside server will always depend on that outside server providing the content from that community. And you also have to depend on your server having a connection with that outside server.

You necessarily need every server to depend on all the outside servers you want to participate in. Either that, or you’ll need one account on each of the servers you are interested in, which to the end user wouldn’t be that different from having each of those servers be centralized.

The main thing you get with this fediverse approach is mirroring and replication. I’m not sure that’s worth it, considering all the other problems it brings: deleted content needing to be deleted everywhere, your server needing to block/allow outside servers so you don’t host bad content from them, etc. Having a shared account across multiple servers would work better with a common account system, something like OpenID.

Personally, I feel like it makes more sense to have each server be its own instance, without hosting the other servers’ content, and instead have the user identity/account detached from the content servers (something like OpenID, to have a common user account across services). Then use standards so a common UI can be used client-side, or for any particular server-to-server communication (much like how blogs can do trackbacks between posts on different blogs without really having to federate). It would be more efficient while having the same end result, but with a freer and more open ecosystem, like the blogosphere used to be before it was overshadowed by Twitter & Facebook.
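To make that detached-identity idea concrete, here’s a rough toy sketch (everything here is hypothetical and simplified; a real system would use public-key signatures, OpenID Connect / JWT style, instead of a shared secret, so content servers could verify tokens without contacting the provider):

```python
import hashlib
import hmac
import json
import time
from typing import Optional


class IdentityProvider:
    """Toy shared identity provider that issues and verifies signed tokens."""

    def __init__(self, secret: bytes):
        self._secret = secret

    def _sign(self, payload: str) -> str:
        return hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()

    def issue_token(self, username: str) -> str:
        payload = json.dumps({"user": username, "iat": int(time.time())})
        return payload + "." + self._sign(payload)

    def verify(self, token: str) -> Optional[str]:
        # Returns the username if the signature checks out, else None.
        payload, _, sig = token.rpartition(".")
        if payload and hmac.compare_digest(sig, self._sign(payload)):
            return json.loads(payload)["user"]
        return None


class ContentServer:
    """A content server with no account database of its own: it trusts the
    shared identity provider, so one account works on every server."""

    def __init__(self, name: str, idp: IdentityProvider):
        self.name, self.idp, self.posts = name, idp, []

    def post(self, token: str, text: str) -> bool:
        user = self.idp.verify(token)
        if user is None:
            return False  # reject unauthenticated or tampered tokens
        self.posts.append((user, text))
        return True
```

The point is that the same token works on any number of independent content servers, with no mirroring or server-to-server federation needed for identity.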

To me, federation between private servers the way Mastodon does it only makes sense for private communication, like XMPP or Matrix… but the minute you are publicly posting content on the internet, it makes no sense to have servers mirror content from each other just so people can access it from one server or the next. If redundancy were the point, a P2P model would make more sense; if a common UI/account were the point, separating those would make more sense. The current structure creates a dependency between the user and the server that hosts their account, and the content being public forces instances to whitelist which other instances they allow, so you might end up having to create multiple accounts on different servers if you want to access instances that don’t federate with each other, and at that point it’s not much different from centralized services. It restricts which instances the user can access (based on which instance they are registered with) and places extra responsibility and bandwidth/storage requirements on the instances themselves.

There’s ongoing work to encrypt much of the metadata. https://github.com/matrix-org/matrix-doc/pull/3414

Yes this is needed for room persistence across multiple servers, but IMHO that is a solution looking for a problem and also a highly over-engineered one.

Without this solution the transition to p2p would be much more complicated, would it not?