

I believe they plan to switch “soon™” to a new Rust-based desktop environment they are developing.

I’m actually quite excited about this, even though I don’t use Pop!_OS, since I’m not really a fan of either Gtk or Qt, and I believe Rust has a lot of potential to make a clean, modern and stable framework for OS development that isn’t over-complicated by layers upon layers of abstraction and technical debt.

Yeah, to be honest… can it really die if it was never alive?

I feel Second Life or VRChat had more life to them than this corporate experiment.

Also, I doubt Facebook has given up on it. They are just leaving it aside for a moment to try and focus on AI because it’s the new hot thing. I wouldn’t be surprised if at some point they come back to it, this time adding AI features.

I haven’t personally used these, but after some searching I found a few relatively new and experimental twists to the formula:

  • distri uses SquashFS images for all its packages and claims to be very fast.

  • spack is also a Nix-inspired package system that’s Python-based, and it seems to allow for a lot of customization; you can even modify the dependencies / build parameters of a package when installing it.

  • flox is compatible with Nixpkgs, it seems to be an iteration on the idea with some improvements.

The issue is that Thunderbird is apparently built on top of Firefox code, so I expect a similar level of layers upon layers of bloat as with many Electron apps. But this rebuilding project seems to be focused on the UI side of things and changing how things look, so I don’t know if it’ll actually improve performance (it might help a bit though, since they intend to remove stuff and clean up the code on the Thunderbird side of things).

Personally, I feel it would be more interesting to turn it into a Firefox extension (and extend the extensions API where necessary), so the resources that are shared could be actually shared. That, or fully embrace K-9 Mail (the android app that they partnered with and which will become Thunderbird mobile) and adapt it for the desktop.

I’d argue it’s search engines and social networks that grant any level of “virality and discoverability”, not the internet itself. On the internet you need “third party” solutions for indexing and searching.

I mean, Mastodon probably intentionally lacks tools that enhance “virality and discoverability”, but that’s not the same thing as saying that it actively prevents information spread. You could in theory build a search engine for toots, or an alternate fork that does have those features. It’s even free and open source software, so it’s open to whatever.

The problem is that the moment that actual spammers are able to use that proxy too (or any IP within the range of it), then it’s likely it’ll be banned as well.

Can Mastodon actually subscribe to a static ActivityPub feed?

A lot of blogs are statically generated (it’s cheaper, faster and safer). Ideally, static websites could just generate JSON-LD for ActivityPub in a similar way as how they generate XML for RSS, which would make the transition easier… but last time I checked Mastodon didn’t support that very well, so RSS was still a better fit in many situations since it does not require an active server component. I’d love to be shown otherwise.
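To make the idea concrete, here is a minimal sketch of what a static site generator could emit for an ActivityPub actor, in the same spirit as the XML it already emits for RSS. The domain, username and URL layout are invented for illustration, not any real site's:

```python
import json

# Hypothetical sketch: a static site generator producing a minimal
# ActivityPub actor document, analogous to generating an RSS channel.
# All names and URLs below are made up for illustration.
def make_actor(domain, username):
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Person",
        "id": f"https://{domain}/users/{username}",
        "preferredUsername": username,
        "inbox": f"https://{domain}/users/{username}/inbox",
        "outbox": f"https://{domain}/users/{username}/outbox.json",
    }

actor = make_actor("blog.example", "alice")
print(json.dumps(actor, indent=2))
```

The catch, as far as I understand it, is that a purely static host can serve the JSON but can't answer the signed POST requests that Mastodon-style delivery expects at the inbox, which is exactly why RSS remains the better fit for static sites.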

It doesn’t matter where the “commerce” quote came from: if you do not agree with that quote (or have no first-hand knowledge), do not push back against criticism of it, and if you do agree with it, then own up to it.

Throwing out a complaint, then saying “I did not say it”, and then trying to silence anyone who disagrees with that quote with “don’t sell me on this so please stop” is a bit like throwing a stone and then running away, imho. You are the one who brought that quote into the conversation.

I don’t use Vivaldi (nor am I interested in even trying it, for other reasons), and I don’t know whether the “commerce” statement is true, but I’m willing to bet that any corporation that opens a new Mastodon instance is gonna face “commercial” allegations by default from many random opinionated people, even if the instance is so young that it has little content.

I’m willing to bet it’s the other way around: most Vivaldi users (or at least the ones that matter in terms of extending the Mastodon userbase) don’t have a Mastodon account but do already have a Vivaldi account.

I think the way they have done it is the most comfortable for new users. Especially considering that it’ll most likely be Vivaldi users coming to Mastodon, rather than the other way around (since there are better alternatives to Vivaldi for those who value FOSS, which is common among early Mastodon users).

And as mentioned in another comment, you can actually use third-party Mastodon accounts, even if the option is not obvious.

If you don’t mind it being terribly robotic, eSpeak supports a ton of languages and is very lightweight (mainly because its method of synthesis does not require a big database of voice samples).

At first it might be jarring if you are used to natural-sounding voices, but I think it’s possible to get used to it, and some people seem to actually prefer it.

Or you might be able to install MBROLA as a backend for eSpeak which should make it sound more natural, although I’ve never tried doing that in Android, personally (EDIT: it seems MBROLA isn’t yet supported in Android, sadly).

Even for the most minimal one-liner you’ll have to depend on complex library code under the hood, which you’ll have to keep up to date as part of the OS. And/or depend on the compiler itself not to introduce bugs into the resulting binary you are distributing.

Either that or you write your software in pure assembler (which will end up exposing a lot of internal complexity anyway, resulting in asm files that are far from “minimal”).

These are just some known vulnerabilities in libc (we don’t know how many “unknown” ones there might be, or if new fixes will introduce new problems): https://www.cvedetails.com/vulnerability-list/vendor_id-72/product_id-767/GNU-Glibc.html

My problem with this idea is that I generally do not like the defaults most distros use, I like experimenting and I often switch desktop environment or uninstall / clean up stuff I don’t need.

I’d be ok if the image is just kernel + init system + shell, and maybe some small core components / tools… but if the OS comes preloaded with huge software libraries, like typical KDE / GNOME distros do, then it’s gonna be a lot of dead weight that I’d have to keep updated even if I do not use it.

Immutable images are great for devices with specific purposes, meant for a particular software stack (like Chromebooks, the Steam Deck and the like), but for a more general-purpose computer where I actually want to deeply customize the UI for my workflow, I don’t want to carry around whatever popular software the maintainers of the popular distro might have decided to include.

That’s a good point. In particular the battery, some ebikes have an integrated battery that’s hard to replace and doesn’t follow any standards.

But as you said, bikes are much simpler and easier to repair. It’s much easier to find modular DIY kits to convert your bike into an ebike out of a few components, or at least purchase an ebike with a modular battery pack (I think there are a few different “standards” depending on the brand and style, and you can buy them separately from third parties, although I’m not sure if there’s one standard that’s completely open and royalty-free).

I don’t see where in my comment you find the assumption that people mostly follow verified accounts (if anything, it’s the other way around: one of the requirements for verification is notability; that doesn’t mean that verification creates notability, nor that notability cannot exist without verification). Nor did I say that “enough” of them would migrate (enough for what, exactly?). I was trying to be careful with my words, and I used “if” when I meant “if”, not “when”.

I was simply explaining the other viewpoint, not necessarily saying that it will happen, but that it’s a possibility, and it wouldn’t be so surprising to see increased interest in Twitter alternatives as a consequence of changes like this, perhaps translating to a (small?) spike of new users exploring the fediverse (even if it’s possible most wouldn’t stay). But I don’t have a magic crystal ball, so I can’t tell you what will happen.

Your last paragraph is essentially part of what I was meaning to say in the last two lines from my previous comment. We agree.

(and btw, it wasn’t me who gave you that downvote, to be clear)

I think the point is that “regular users” also includes people who are neither a business nor a corporation but who are notable and active enough to have got the “verified” check-mark. Like a lot of popular individual figures and social media presences.

If a few of those big individuals end up deciding not to pay up and instead move to an alternative, the audience following them might be tempted to move as well to keep following them there, so it could potentially start a snowball effect.

That said, I don’t believe the verification badge alone would be enough reason for them to move…

I’d rather they stop making private cars, honestly.

This is great.

Laws should be objective and precise, just like algorithms are, so it would be a good fit to have laws modeled in a more machine-readable form… sure, interpreting those laws to pass fair judgement still requires a human eye, but I believe it should at least be possible to consult your local law in a more user-friendly way… which I hope this could end up helping with.
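As a toy illustration of what “machine-readable law” could look like: the objective part of a rule can be expressed as data plus a small function, while the fuzzy interpretation stays with humans. The thresholds and fines below are completely invented, not any real statute:

```python
# Toy sketch of a machine-readable rule: a (completely invented)
# speeding-fine table. Only the objective part is encoded; fairness
# and interpretation would still need a human eye.
SPEEDING_FINES = [  # (km/h over the limit, fine in currency units)
    (20, 100),
    (40, 300),
    (60, 600),
]

def fine_for(speed, limit):
    over = speed - limit
    if over <= 0:
        return 0
    for threshold, fine in SPEEDING_FINES:
        if over <= threshold:
            return fine
    return SPEEDING_FINES[-1][1]  # beyond the table: highest bracket

print(fine_for(75, 50))  # 25 km/h over the limit → 300
```

Something in this spirit would make it trivial to ask “what does the law say applies to my case?” in a user-friendly way, which is the kind of consultation I had in mind.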

And of course something like that would need to be Free and Open Source to be trustworthy at all to begin with…

You are missing the point. A process-independent file opener that is used by all applications to access files provides user-friendly security.

But that was essentially what I said… I’m the one who proposed something like that 2 comments ago.

This would be a core component of an OS so the description is correct.

Again, I disagree that “this would be a core component of an OS”. You did not address any of my points, so I don’t see how it follows that “the description is correct”. The term “core OS component” is subjective to begin with.

But even if you wanted to label it that way, it wouldn’t make any difference. That’s just a label you are putting on it, it would not make Flatpak any less of an app distribution / management system with focus on cross-distro compatibility and containerization. Flatpak would still be Flatpak. Whether or not you want to consider it a core part of the OS is not important.

And Flatpak already uses independent processes to manage the whole container & runtime that the app uses for access to the system resources, which already closely matches what you defined as “a core component of an OS”.

That’s a very loose definition of “OS component”. At that point you might as well consider the web browser an “OS component” too, or frameworks like RetroArch, which offer a filesystem API to their libretro cores.

But even if we accepted that loose definition, so what? Even as it is today, Flatpak is already an “OS component” integrated into many distros (it’s even a freedesktop.org standard), and it already implements a filesystem interface layer for its apps. As I said, I think the real reason they won’t do it is that they keep wanting to be transparent to app devs (i.e. they don’t want them to have to support Flatpak-specific APIs). Which is why I think there needs to be a change of philosophy if they want app containerization to be seamless, safe and generally useful.

You can install different flatpak repos without really having to depend on one specific central repository, so I’d say the “centralizing software” issue is not that different from any typical package manager.

That said, I do agree that Flatpak has a lot of issues. Specifically the problems with redundancy and security. Personally I find Guix/Nix offers better solutions to many of the problems Flatpak tries to fix.

or learn how to do it and spend time configuring each and every application as needed

And even if they were to spend the time, afaik there’s simply no right way to configure a flatpak like GIMP so it can edit any file from any arbitrary location they might want without first giving it read/write permissions for every single one of those locations, allowing the program to access those whole folder trees at any point in time without the user knowing (making it “unsafe”).

It shouldn’t have to be this way; there could be a Flatpak API for asking the user for a file to open with their explicit consent, without requiring constant micro-management of the flatpak settings or pushing the user to give it free access to entire folders. The issue is that Flatpak tries to be transparent to the app devs, so that level of integration is unlikely to happen with its current philosophy.

Back when the anti-Stallman letter broke out, some transgender people were calling him a “transphobe” for having openly proposed the use of a gender-neutral pronoun that he came up with as the preferred way to speak when you don’t know the gender of the person.

It seems promoting the use of a gender-neutral pronoun can be counterproductive. Some people might actually find it offensive and condescending.


“The ineffectiveness of the campaign is so clear, and the ferociousness of the Ukrainian defence is so obvious … (that) it’s created an equalizer where neither side can move much from where they are now.”

“The damage and devastation to Ukrainian cities is likely to increase even in a period of stalemate,”

Frederick W. Kagan, senior fellow and director of the Critical Threats Project at the American Enterprise Institute in Washington, D.C., and former professor of military history at the U.S. Military Academy at West Point. Married to Kimberly Kagan, the president of the Institute for the Study of War.

No modern AI has been able to reliably pass the Turing test without blatant cheats (like passing the AI off as a foreign kid unable to understand and express themselves fluently, instead of an adult). Just because the test dates back to the 1950s doesn’t make it any less valid, imho.

I was interested in the other tests you shared, thanks for that! However, in my opinion:

The Markus test is just a Turing test with a video feed. I don’t think this necessarily makes the test better; it adds more requirements for the AI, but it’s unclear if those are actually necessary requirements for consciousness.

The Lovelace test 2.0 is also not very different from a Turing test where the tester is the developer and the questions/answers are in a specific domain, where its creativity is what’s tested. I don’t think this improves much over the original test either, since already in the Turing test you have the freedom to ask questions that might require innovative answers. Given the more restricted scope of this test and how modern procedural generation and neural nets have developed, it’s likely easier to pass the Lovelace test than the Turing test. And at the same time, it’s also easier for a real human to fail it if they can’t be creative enough. I don’t think this test is really testing the same thing.

The MIST is another particular case of a more restricted Turing test. It’s essentially a standardized and “simplified” Turing test where the tester is always the same and asks the same questions out of a set of ~80k. The only advantage is that it’s easier to measure and more consistent, since you don’t depend on how good the tester is at choosing their questions or judging the answers, but it’s also easier to cheat, since it would be trivial to make a program specifically designed to answer that set of questions correctly.
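The gaming risk can be made concrete with a toy sketch: once the question set is fixed and known, a plain lookup table “answers correctly” with zero understanding. The questions and answers below are invented, not actual MIST items:

```python
# Toy illustration of why a fixed, known question set is gameable:
# a dumb lookup table produces "correct" answers with no understanding.
# These question/answer pairs are invented for illustration.
CANNED_ANSWERS = {
    "Is a mountain taller than a building?": "Usually, yes.",
    "Can you feel pain?": "Yes, stubbing my toe hurts.",
}

def fake_mist_candidate(question):
    # Falls back to a vague dodge for anything outside the known set.
    return CANNED_ANSWERS.get(question, "That's hard to say.")

print(fake_mist_candidate("Can you feel pain?"))
```

With ~80k entries instead of two, the same trick applies; an open-ended human tester, by contrast, can always step outside any precomputed table.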

Oh, but I agree that assuming our reality is solipsistic isn’t useful for practical purposes. I’m just highlighting the fact that we do not know. We don’t have enough data precisely because there are many things related to consciousness that we cannot test.

Personally I think that if it looks like a duck, quacks like a duck and acts like a duck, then it probably is a duck (and that’s what the studies you are referencing generally need to assume). Which is why, in my opinion, the Turing test is a valid approach (as are other tests with the same philosophy).

Disregarding Turing-like tests while at the same time assuming that only humans are capable of having “a soul” is imho harder to defend, because it requires additional assumptions. I think it’s easier to assume that either duck-likes are ducks or that we are in a simulation. Personally I’m skeptical of both, and I just side with the duck test because it’s the more pragmatic approach.

Do we know for sure that our architecture is the same? How do you prove that we are really the same? For all I know I could be plugged into a simulation :P

If there was a way to test consciousness then we would be able to prove that we are at least interacting with other conscious beings… but since we can’t test that, it could theoretically be possible that we (I? you?) are alone, interacting with a big non-sentient and interconnected AI, designed to make us “feel” like we are part of a community.

I know it’s trippy to think that but… well… from a philosophical point of view, isn’t that true?

Personally, I think this has very little to do with computing power and more to do with sensory experience and replicating how the human brain interacts with the environment. It’s not about being able to do calculations very fast, but about what those calculations do and how they are conditioned: which stimuli cause them to evolve, in which way and by how much.

The real problem is that to think like a human you need to see like a human, touch like a human, and have the instincts, needs and limitations of a human. As babies we learn about things by touching, sucking, observing and experimenting, moved by instincts such as wanting food, wanting companionship, wanting approval from our family… all the things that ultimately motivate us. A human-like AI would make mistakes just like we do, because that’s how our brain works. It might be little more than a toddler and it could still be a human-like AI.

If “what we call a soul” means consciousness, then I doubt there’s a way to prove that anything other than your own self actually has one. Not even what we call “other people”.

You being aware of your own consciousness doesn’t mean every human necessarily is the same, right? …and since we lack a way to prove consciousness, we can’t assume other people are any more conscious than an AI could be.

I mean… when it comes to public chatrooms, even if you federate, both the hosting and the control of the chatroom are centralized anyway, so the only benefit of federating is the “nice-to-have” convenience of not needing multiple accounts, which you can already get if you set up bridging anyway… so imho, an IRC channel with proper bridging covers all bases and allows cross-communicating with many different protocols. Since IRC is fairly simple it’d be relatively easy to bridge it with automatic or minimal steps for the user.

Personally, I think it’s 1-on-1 communication where federation (or p2p) is the most useful, and maybe private groups too, but federation in public groups isn’t really necessary. Imho, it makes more sense to solve the “multiple accounts” problem with specialized authentication services, and to separate user management from content providers.

my counter-point was that most people aren’t open to installing an operating system

I mean, the original point didn’t say users should be required to install it themselves. It just said that phones should have an open source OS to increase their life span, which is something your “counter-point” builds on, rather than contradicting or opposing it.

In fact, not every Android phone has open source firmware available that properly supports the hardware, so there are many cases where even if you knew how to install it you wouldn’t be able to.

Exceptions like the Pinephone are super rare, and I wouldn’t expect that to change without force.

I agree. There needs to be either legislation or a consumer driven shift. The real problem is that most users don’t seem to care that much about that and prefer getting a new shiny one with the latest trending features instead of a Pinephone or Fairphone.

I think the point was that open source software makes it last much longer. If using open source Android OS has extended the life of your phone then you are proving his point.

Of course it’s not the only thing that can extend the life of the phone, and of course additional measures should be taken to extend it further, but that doesn’t contradict anything the comment said.

Also, if having an open source OS isn’t a “simple option” for the “typical consumer”, then we aren’t even there yet. Imho, phones should come with a fully open source OS that is easily upgradable independently of the manufacturer, right out of the store.

The thing is that copyleft uses the same legislation that corporations use to protect themselves. So spending a lot on lawyers for this might set a precedent that could backfire.

Also… I don’t think there’s a lot of incentive for them to do this, because if they have that much money they might as well just make their own software and have better control over it. Maybe instead use non-copyleft licenses like MIT, as Google has sometimes done.

It’s very likely there are already many GPL violations out there that we’ll never know about, due to obfuscation or them simply being hard to identify, but nobody has the time, the money or the power to actually challenge a corporation about it and come out unscathed.

This is all human-made. One way or another, the cause is always between the monitor and the chair. One of the reasons I find the crypto space so toxic and dangerous is their insistence on technosolutionism.

Precisely; you can’t stop technosolutionism if you don’t differentiate between the technical factors and the human ones.

Saying technical issues are all the same as human ones, or on the same level (just because they are “human-made”), is in fact technosolutionist.

The goal is to solve human issues by manipulating technology, not to solve problems in the technology by manipulating humans. Manipulating humans is not on the same level as manipulating technology… I think this should be pretty clear.

Your analogy falls apart due to how small the ratio of non-scammy uses of NFTs to scammy ones is.

The issue is that if the nature of NFTs already makes such purchases “scammy” for you then, of course, most of it will be “scammy”. But note that something feeling scammy to you is not the same as committing actual fraud. If someone is fully aware that they are buying something because they purposefully want to speculate with it in an extremely unstable market, then it’s their own fault if the risk they took doesn’t pay off. That’s not the same thing as getting scammed.

Myself, I’m not one to take on such risks, and in fact, right now my bank is charging me money just for having money stored in my account doing nothing, which makes no sense to me! I wish I could just have it all as cash stored in a vault at home and not need banks, but sadly sending cash by post is not exactly secure (nor generally accepted). It’s too bad there isn’t a safe and government-backed cryptocurrency infrastructure in place. I would certainly find that useful.

And they will not be able to solve [domain names] with blockchain tech.

Some have already used the blockchain for that purpose, though. GitTorrent used the Bitcoin blockchain before (I’m not up to date on the current state of that project; I hear it’s no longer maintained and there are other alternatives). And there’s also the ENS for .eth domain names, which are distributed, or am I wrong?

We’re talking legal issues […], disputes […] Neither of these can be written down in code, be it on blockchain or not.

But those are human issues, they should not be in the code itself, just like they aren’t in the code of current DNS servers either. Instead, the tech should just be transparent and flexible enough to allow that kind of human control (again, humans are meant to manipulate the technology, not the other way around).

If anything, I’d imagine a public ledger on a blockchain, with proper authorization using government-issued signatures, would make it easier to track and identify the owner and let legislation impart whatever sanction or punishment applies. Wouldn’t it? (I’m not even sure if the current DNS system allows this; I believe you can get domain names with some level of anonymity if you really want to.)

I think the problem here is getting to the sweet spot between privacy and identification, maybe with different levels for different purposes. If this was controlled by each government, and there were layers and measures in place that allowed some level of anonymity while also allowing disclosure in circumstances that require it, this could be a very controlled and safe tool.

In particular, I think a public p2p ledger would be helpful to have traceability of public funds in a way that can be peer-reviewed without depending on the government “accidentally” losing a hard disk or destroying evidence “by mistake”. Which is something I’ve seen happen more than once in my country whenever there’s an internal investigation for corruption.
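The core property I mean is the hash chain: each entry commits to the previous one, so silently editing or deleting an old record becomes detectable by anyone holding a copy. A minimal sketch (no networking, consensus or signatures; the grant records are invented):

```python
import hashlib
import json

# Minimal append-only ledger sketch: every entry stores the hash of the
# previous entry, so tampering with any old record breaks verification
# for everything after it. Purely illustrative; no consensus/signatures.
def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger, record):
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev": prev, "record": record})

def verify(ledger):
    return all(
        ledger[i]["prev"] == entry_hash(ledger[i - 1])
        for i in range(1, len(ledger))
    )

ledger = []
append(ledger, {"grant": "road repairs", "amount": 10000})
append(ledger, {"grant": "school books", "amount": 5000})
print(verify(ledger))               # True
ledger[0]["record"]["amount"] = 1   # "accidentally" alter an old record
print(verify(ledger))               # False
```

With copies of such a ledger replicated across peers, “losing a hard disk” stops being a way to make public spending records disappear, since everyone else's copy still verifies.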

It’s essentially a wrapper around WebKit.

Knowing the people at suckless, I was surprised when they launched surf based on WebKit instead of going for a cleaner and simpler engine like the one from NetSurf, even if that would have meant most websites wouldn’t work. After all, the web is anything but clean and simple. Compromising the UX in favor of cleaner code never stopped the suckless team before.

FLOSS community is not perfect, for example, but bullshit gets called out. Projects that make exorbitant claims about security (snakeoil, etc), get called out. But crypto scene acts as if that’s bad for business.

I think we have to differentiate the technical factors from the human ones. Calling out security vulnerabilities is not a problem, but when the cause is between the monitor and the chair then things get much more complicated.

Can’t generate “bad press”, right? Because if one does, they and potentially the whole scene is NGMI, HFBP!

Just not for the wrong reasons. It would be silly to say “internet” = “porn”, or “peer to peer” = “piracy”, so for the same reason, “NFT” = “fraud” is just as misdirected, imho.

I’ll agree not to continue with the xenophobia analogy since it’s true that it’s sensitive (though I do still think it fits), but I hope you at least accept that these other broad generalizations mischaracterize entire technologies that are very much different from the negative purpose someone might want to attribute to them, just because some specific instances happen to be circumstantially “optimal” for those purposes.

Saying “the association is well-deserved” already is admitting to the mischaracterization.

And frankly, I have not yet seen a single use of NFTs that is not either unnecessary (as in: whatever is being done could be done as well or better without NFTs)

It would be great to find a solution for distributed domain names that was done well or better than what can be done with NFTs, it’s something that p2p distributed networks haven’t managed to solve without blockchain tech.

not calling out crypto/NFT/web3 scams just to preserve the few potentially useful and non-scammy projects would be effectively aiding and abeting the scammers

I’m all for calling any and all scams. Just as long as we separate the technology from the scam. My problem isn’t with this article, but with the reactions in the comments that seem to jump to conclusions and paint things with broad strokes, assuming NFT = fraud.

Those are fair points. But I’m used to seeing so much bad press against NFTs from people who blindly criticise them and associate them with any possible bad use… to the point that they think “NFT = bad”, and this kind of news paints that picture for anyone who doesn’t know better…

It would be like highlighting in the news every crime perpetrated by someone of color and then complain about “whataboutism” when someone says that white people also commit crimes.

I’m afraid that all this demonization will make it much much harder for any fair and honest project that we ever attempt in the future related to blockchain technology (such as the one you mentioned).

But he didn’t really say that banks are bad, or that the cryptocurrency/NFT/web3 scene isn’t rife with scams.

Scams also existing in fiat currency (his point) doesn’t make fiat bad, in the same way as cryptocurrency/NFT/web3 having good uses doesn’t mean that it cannot also be “rife with scams”.

Are hammers bad because people can use them to smash skulls? Imho, what we need are measures to prevent, block, minimize or discourage that kind of behavior, not necessarily a ban on hammers.

Personally, I think the open source and p2p nature of blockchain technology can be a better way to introduce measures of control and protection, in a way that is fairer and more transparent than using obscure private ledgers in the hands of central authorities managed by humans we have to trust…

There’s also #uncivplayers:matrix.org, although I don’t think there are many people there.

There are many other libre game Matrix rooms in the #libregaming-space:matrix.org space; that’s where I found it.

EDIT: whoops! I just realized that it’s the same room you linked! :P

It’s definitely not optimal for that. In my opinion, using proper blogs, websites and feeds is a much more intelligent, decentralized and powerful alternative to artificially limited microblogging.

The only reason companies and groups love having a Twitter account is that it allows them to advertise themselves there, due to how big its userbase is. It also allows them to have more direct engagement with their “followers” and appear more “down to earth”, precisely because it’s traditionally a more “individual-centered” platform. Twitter just happens to be good for marketing. And the same goes for Facebook.

Imho, the blogosphere was in a very good place before Twitter and Facebook started to rise in popularity, when having a personal website was more of a common thing to do. Imho, the solution isn’t Mastodon either… I’d much rather go back to when using feed readers was a thing. I just wish there was a more modern pub-sub-like alternative to RSS that we could use for websites (or maybe there is but nobody uses it…), and a more standardized API for viewing/posting comments on a blog post directly from your feed reader.

Hmm… that’s interesting, actually. Having users authenticate might help with some instances of trolling and abuse, but at the same time there’s the problem of identification causing trouble for privacy.

A middle ground would be allowing non-verified users to participate, but giving them lower influence on the relevance of the content, perhaps with caps that limit how much non-verified influence can affect the weighted relevance of a post (so… content promoted by unverified accounts would have a lower priority, and pushing it with a farm of non-verified bot accounts would not have much of an impact).
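A sketch of that capping idea (the weights and cap are invented numbers): unverified boosts still count, but their total contribution is clamped, so a bot farm quickly hits the ceiling while a single verified boost always moves the score:

```python
# Toy relevance score: verified boosts count fully, while the combined
# influence of unverified boosts is capped. A farm of unverified bot
# accounts therefore can't push a post past a fixed ceiling.
# Both constants are invented for illustration.
UNVERIFIED_WEIGHT = 0.1
UNVERIFIED_CAP = 5.0

def relevance(verified_boosts, unverified_boosts):
    unverified_part = min(unverified_boosts * UNVERIFIED_WEIGHT, UNVERIFIED_CAP)
    return verified_boosts + unverified_part

print(relevance(3, 10))     # 3 + 1.0 = 4.0
print(relevance(3, 10000))  # cap kicks in: 3 + 5.0 = 8.0
```

The exact weights would be a tuning problem, but the point is structural: past the cap, adding more unverified accounts changes nothing, which removes most of the incentive to run a bot farm at all.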

Of course there’s likely gonna be some level of bias based on who would go through the trouble of verifying themselves… but that’s not the same thing as not being transparent. Bias is a problem you cannot escape no matter what. If a social network is full of idiots, the algorithm isn’t gonna magically make their conversations any less idiotic. So I think the algorithm could still be a good and useful thing to come out of this, even if the social network itself isn’t.