The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.
“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor OpenWall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates.
On Thursday, someone using the developer’s name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.
“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.
One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.
“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.
He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.
Dude seems like a foreign asset
foreign to whom?
From the article…
Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview: “BUT that’s only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”
Is auditing for security reasons ever done on any open source code? Is everyone just assuming that everyone else is doing it, and hence no one is really doing it?
EDIT: I’m not attacking open source, I’m a big believer in open source.
I’m just trying to start a conversation about a potential flaw that needs to be addressed.
Once the conversation was started I was going to expand the conversation by suggesting an open source project that does security audits on other open source projects.
Please put the pitchforks away.
Edit2: This is not encouraging.
Having once worked on an open source project that dealt with providing anonymity, I can say it was considered the duty of the release engineer to have an overview of all code committed (and to ask questions, publicly if needed, if they had any doubts) before compiling and signing the code.
In some months that was a big load of work, and it seemed possible that one person might miss something, so others were encouraged to read and report irregularities too. I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.
However, in the case of a utility not directly involved with security-critical functions, it might be easier for something to slip through the sieve.
I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.
However, in the case of a utility not directly involved with security-critical functions, it might be easier for something to slip through the sieve.
I’ve actually seen people check in code that doesn’t get reviewed properly on mission-critical apps before (like in the health industry).
My understanding is basically the same as yours, and in theory I agree with you. However, the problem is we all tend to hand-wave away any possibility of bad things happening, because it’s open source, and don’t take into account human nature, especially when it comes to volunteer versus paid work.
Auditing can only be done on open source code. No code = no audit. Reverse engineering doesn’t count.
True, but does it actually get done, or is everyone just assuming it gets done because it’s open source?
The answer is the same as closed source software: sometimes.
But that’s beside the point: a security audit is not perfect. Plenty of audited codebases are the source of security vulnerabilities in the wild. We know based on analysis that the malicious actor’s approach would have had a high chance of successfully hiding from a typical security audit.
Oh, I know security audits aren’t perfect. I’m just wondering if they actually get done, or if everyone just assumes they get done because it’s “Open Source” when they don’t.
There are security researchers looking for vulnerabilities constantly, but their efforts are inconsistent and informal. Issues usually get caught eventually, but sometimes only after a vulnerability has made it into the wild.
Thankfully this was discovered before hitting stable distros but I’m hoping it increases scrutiny across the board. We dodged a bullet on this one.
Across the board indeed. Scrutiny of code is one thing; where this story, as far as is known right now, really went south is the abuse of a trusted but vulnerable member of the community.
I know the (negative) spotlight is targeting Jia Tan right now (and who knows if they (still) exist), but I really hope Larhzu, whose name is mentioned in the same articles, is doing okay.
Mental health is a serious issue that, if you read the back story, is easily ignored or abused. And it wasn’t an unknown in this story. Don’t only check the code; check up on your people too.
This is why I run debian oldstable.
Or maybe it’s because I’m too lazy to do a dist-upgrade.
deleted by creator
Long-game supply chain attacks like this are pretty much going to be state actors. And I wouldn’t chalk it up to the usual malicious ones like China and Russia; this could be the NSA just as easily.
If you throw enough money at the right person you can get shit done.
I think you are greatly underestimating FSB incompetence.
This is really bad.
The backdoor has existed for a month at least. Yikes.
A stable release of Arch Linux is also affected. That distribution, however, isn’t used in production systems.
Shots fired!
It seems WSL Ubuntu and Kali are safe with versions 5.2.5 and 5.4.4 installed respectfully.
Damn, I installed mine disrespectful.
I thought about the same (disrespectful) as I typed it. I’ve already laughed with you!
Don’t forget about openSUSE Tumbleweed! It’s actually affected AFAIK.
I think the AI that wrote the article misunderstood.
Arch doesn’t build from release tarballs, but straight from git. Arch also doesn’t link sshd against liblzma. So while they’ve shipped the dirty version of xz utils, at least sshd is not affected.
It’s possible that the dirty version affected some of the other things that link liblzma, like a handful of KDE components for example.
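If you want to see for yourself whether your sshd even loads liblzma (directly or pulled in through libsystemd), here’s a quick sketch. It assumes sshd is on your PATH; if not, use the full path, commonly /usr/sbin/sshd.

# list sshd's resolved shared libraries and filter for liblzma
ldd "$(command -v sshd)" | grep liblzma || echo "sshd does not load liblzma"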
deleted by creator
Also, the malicious code only activated if it detected that it was being built as part of a deb or rpm package.
Here is a more detailed FAQ about what happened:
This is also a good summary of the timeline: https://boehs.org/node/everything-i-know-about-the-xz-backdoor
Please help me as a novice Linux user- is this something that comes preinstalled with Mint Cinnamon? And if so, what can I do about it?
As the other person said, it’s likely that xz is already installed on your system, but almost certainly a much older version than the compromised one. It’s likely that no action is required on your part, assuming you’ve not been downloading tarballs of bleeding-edge software. Just keep doing updates as soon as Mint recommends them (since it’s based on Ubuntu LTS, it’s a lot less likely to have these bleeding-edge vulnerabilities).
Much appreciated, thank you.
You’re good. Even if you do use xz and ssh the version with the vulnerability only made it’s way to rolling release distros or beta version of distros like fedora 40
made its* way to
Hahaha
Welp, time to go check my version of fedora
40 is still in beta. 39 doesn’t have the vulnerability
Thanks!
The library itself is very common and used by a lot of things (in this case it seems that the payload only activated when used by specific programs, like SSH).
What you can do about it is keep your system up to date using your distribution’s update mechanisms. This kind of thing, once found out, is usually fixed quickly in security updates. In Mint (which I don’t use, but I believe is based on either Debian or Ubuntu, which use dpkg/apt) security updates are flagged differently and can be installed automatically, depending on your configuration.
tl;dr: keep your system up to date; it will keep known vulnerabilities away as much as it can.
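On Mint/Ubuntu/Debian-style systems that boils down to something like the following sketch (the Update Manager GUI does the same thing under the hood):

sudo apt update          # refresh package lists, including security updates
apt list --upgradable    # see what's pending
sudo apt upgrade         # install the updates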
In this case though the backdoor was added recently so updating could do the opposite of help here. Luckily I don’t think any stable distros added the new version.
It was added recently, but at this point in the timeline, fixes are available for most mainstream distros at least. Except for rare cases where a fix can’t be made available quickly, this kind of publicity only happens when a fix is broadly available. There are extreme cases of course, but in this case, it’s fixed.
Thanks. I do my best to regularly update, so here’s hoping it will not be a problem for me before an update fixes it!
Alternatively, if you never use ssh, then it wouldn’t be a problem.
There are definitely times when (at least based on the instructions I read) I have had to use ssh for various reasons, so I think it will be a problem in the future if I don’t get a fix in an update. But I’m guessing a fix will be coming soon.
dpkg --list | grep xz
should return what version of the xz package is on your system. Likely 5.4, in which case you should be okay.
It says “5.2.5-2ubuntu1.” So I’ll have to see about updating it.
EDIT: However, this says I should be safe: https://forums.linuxmint.com/viewtopic.php?t=416756
The affected versions are 5.6.0 and 5.6.1.
On my latest Mint install:
ii  xz-utils  5.2.5-2ubuntu1  amd64  XZ-format compression utilities
which isn’t an affected version, so you should be okay.
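For anyone else checking, a distro-agnostic sketch of the same check (rpm package names can differ slightly between Fedora and openSUSE); anything in the 5.6.0–5.6.1 range is the compromised release, while 5.2.x and 5.4.x are fine:

xz --version                          # prints both the xz tool and liblzma versions
dpkg -l | grep -E 'xz-utils|liblzma'  # Debian/Ubuntu/Mint
rpm -q xz xz-libs                     # Fedora (openSUSE uses xz and liblzma5)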
Technically it hooks sshd’s encryption through libsystemd, which is not used in upstream OpenSSH. There are unofficial Red Hat patches that link that library instead of just reading one environment variable and writing to one socket.
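For context, the notification dance those patches pull in a whole library for amounts to something like this rough sketch (not the actual patch; it assumes socat is installed and that NOTIFY_SOCKET is a filesystem path rather than an abstract @-socket):

# systemd hands the service a socket path in NOTIFY_SOCKET;
# reporting readiness is just one datagram saying READY=1
if [ -n "${NOTIFY_SOCKET:-}" ]; then
    printf 'READY=1' | socat - "UNIX-SENDTO:${NOTIFY_SOCKET}"
fi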
Who wants to bet he received a nice lump sum deposit of cash from a five eyes state to make an “accident”…
No…
Just read the story…
And that’s why you cannot trust open source software blindly.
And yet with closed-source software you have no choice but to trust it blindly. At least open source software has people looking at the code.
You are an idiot. It’s not blind. That’s how it was found.
Not having world accessible SSH is the real fix here.
Yeah I nearly panicked for a second there, then I remember noone’s getting near that anyway. Back to my relaxing weekend.
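For anyone who does need sshd running, a couple of common ways to keep it off the open internet (illustrative only; the addresses below are placeholders for your own network):

# in /etc/ssh/sshd_config, bind sshd to an internal address only:
#   ListenAddress 192.168.1.10
# or restrict port 22 at the firewall, e.g. with ufw:
sudo ufw default deny incoming
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp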
You are an idiot. It’s not blind. That’s how it was found.
From the article…
Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview: “BUT that’s only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”
Opensource = fast detection
Opensource + sloppiness = faster detection
Closedsource = never detected
Closedsource + sloppiness = maybe detected
You can put the pom-poms/rifle down, I’m not attacking open source, not in the slightest. I’m a big believer in open source.
But I also know that volunteer work is not always as rigorous as paid work.
The only point I’m trying to make in this conversation is to get confirmation of whether security audits are actually done, or whether everyone just assumes they’re done because of “Open Source” reasons.
As opposed to what? If you had said “that’s why you cannot trust any software blindly”, it wouldn’t have been that wrong.
Imagine trying to make the help desk of a proprietary company take your “it’s taking 0.5 seconds longer to log in” complaint seriously…
So many vulnerabilities have been found through login timing that one common security measure is to take longer to respond to a bad login, so an attacker can’t tell which part failed. Here’s an article I found about one such vulnerability.
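(That 0.5-second complaint is basically how this one was caught: the discoverer was benchmarking and noticed failed ssh logins suddenly burning roughly half a second of extra CPU. The measurement itself is trivial to reproduce; the host and user below are placeholders.)

# time a failed, non-interactive login attempt; a sudden jump in
# wall/CPU time across an upgrade is the kind of signal that stood out
time ssh -o BatchMode=yes -o ConnectTimeout=5 nobody@localhost true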
This is one hell of a take.
Certainly one of the takes of all time.
s/open source software/anything/
Reflections on Trusting Trust
- paper by Ken Thompson, 1984 (not that one)
Ftfy: And that’s why you cannot trust people blindly.
Just because we can’t observe the code of proprietary software, and it’s sold legally, doesn’t mean it’s all safe. Genuinely, I distrust anything with a profit incentive.
What an oblivious attitude
Yep, you’re not wrong.
This is why I don’t use Linux. Insecure as fuck.
This. Everyone knows that windows is a perfectly safe and secure environment with no exploits and vulnerabilities whatsoever.
Unironically, this is true. They are features
If it was closed source software you wouldn’t even know this was happening.