Have you tried sfc /scannow?
It’s Canonical. They prefer implementing everything themselves quickly rather than developing a more sustainable project with the rest of the community over a longer timescale. When they do that, there will be very little buy-in from the wider community.
Others could technically implement another snap store for their own distro, but they’d have to build a lot of the backend that Canonical didn’t release. It’s easier to use Flatpak or AppImage or whatever than to hitch themselves to Canonical’s homegrown solution that might get abandoned down the line like Mir or Ubuntu Touch.
It’s Canonical. They prefer implementing everything themselves quickly rather than developing a more sustainable project with the rest of the community over a longer timescale. It makes sense that when they do that, there will be very little buy-in from the wider community. Much like Unity and Mir.
As you say - why would others put time into the less supported system? Better alternatives exist. If Canonical want their own software ecosystem, they’ll have to maintain it themselves. Which, based on Mir and Ubuntu Touch, they don’t have a good track record of.
I don’t know why you’re getting downvoted. It makes perfect sense that Canonical made its own proprietary package ecosystem, and while technically anyone can build their own snap store, ain’t nobody got time for that.
curl shit | sudo bash
is just so convenient.
When in doubt - C4!
I don’t think that’s what ‘market share’ is trying to represent, but without any context - yeah. You can lump in android phones and set-top boxes and signage and industrial controllers while you’re at it.
Is OP adding the Android share to Linux? That would certainly do it.
Only makes sense if you know their definition of ‘Linux’ though.
I think you’d only have to read it once, then you should be able to just filter it out next time you see it.
- Sent from my iPhone
Except PGP is a substring of the ‘technically correct’ term. It’s like someone saying you’re playing on your Nintendo - “Um, actually it’s a Nintendo 64.”
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEETYf5hKIig5JX/jalu9uZGunHyUIFAmaB8YEACgkQu9uZGunH
yUKi7Q/+OJPzHWfGPtzk53KnMJ3GC8KQGEUCzKkSKmE0ugdI9h1Lj4SkvHpKWECK
Y1GxNujMPRM/aAS2M97AEbtYolenWzgYmO1wt131/hEG4tk+iYeB2Sfyvngbg5KI
y4D7mqpcVWYSf6S13vUX8VuyKeTxK6xdkp95E0wPVLfJwx5o5nH0njLXxeW0IblY
URLonem/yuBrJ6Ny3XX9+sKRKcdI9tOqhMhTxPcQySXcTx1pAG7YE7G5UqTbJxis
wy7LbYZB5Yy0FO3CtRIkA+cclG4y2RMM9M9buHzXTWCyDuoQao68yEVh4OdqwH1U
5AUnqdve5SiwygF/vc50Ila6VjJ4hyz1qVQnjqqD96p7CSVzVudLDDZMQZ8WvqLh
qaFr51xJvH6p6/CP1ji4HHucbJf6BhtSqc8ID9KFfaXxjfZHiUtgsVDYMV0e7u9v
lhcDH/3kmw/JImX25qsEsBeQyzOJsBvxOYD3lZrwSY9+7KNGVQstFrEvCuVPHr72
BQJPIhg3+9g6m36+9Uhs1N6b8G9DsZ6OgnNqr9dGturUg6CtRsLSpqoZq0FT9cLA
tnFTJDaXgx1DZnsLGDSoQQYjZ3vS+YYZ8jG86KGLFyXVK+uSssvorm9YR1/GGOy7
suaxro72An+MxCczF5TIR9n3gisKvcwa8ZbdoaGd9cigyzWlYg8=
=EgZm
-----END PGP SIGNATURE-----
I’d have thought the cloud side would be pretty easy to script over. Presumably the images aren’t encrypted from the host’s point of view, so just ensure each VM is off, mount its image, delete the offending files, unmount the image, and start the VM back up. Check it works on a few test machines, then let it rip on the whole fleet.
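Something like this, maybe - a rough sketch rather than a tested runbook. It assumes unencrypted qcow2 images on a libvirt/KVM host with libguestfs installed, and the VM name, image path, mount point and channel-file name are all placeholders I made up:

```rust
use std::process::Command;

// Run an external command and turn a non-zero exit status into an error.
fn run(prog: &str, args: &[&str]) -> std::io::Result<()> {
    let status = Command::new(prog).args(args).status()?;
    if status.success() {
        Ok(())
    } else {
        Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("{prog} {args:?} exited with {status}"),
        ))
    }
}

// Mount one powered-off guest's disk image on the host, delete the bad file,
// then boot the guest back up. The file name below is a placeholder, not the real one.
fn remediate(vm: &str, image: &str) -> std::io::Result<()> {
    run("guestmount", &["-a", image, "-i", "/mnt/guest"])?; // mount the guest filesystem via libguestfs
    std::fs::remove_file(
        "/mnt/guest/Windows/System32/drivers/CrowdStrike/C-PLACEHOLDER.sys",
    )?; // delete the offending channel file
    run("guestunmount", &["/mnt/guest"])?;
    run("virsh", &["start", vm]) // start the VM back up
}

fn main() -> std::io::Result<()> {
    // Prove it out on a couple of test machines before unleashing it on the fleet.
    for (vm, image) in [("test-vm-01", "/var/lib/libvirt/images/test-vm-01.qcow2")] {
        remediate(vm, image)?;
    }
    Ok(())
}
```

On an actual cloud you’d drive the power-off and power-on through the provider’s API or CLI rather than virsh, but the mount, delete, unmount loop is the same idea.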
It’s not that clear-cut a problem. There seem to be two elements: the kernel driver had a memory-safety bug, and a definitions file was deployed incorrectly, triggering the bug. The kernel driver definitely deserves a lot of scrutiny, and static analysis should have told them this bug existed. The live updates are a bit different, since this is a real-time response system. If malware starts actively exploiting a software vulnerability, they can’t wait for distribution maintainers to package their mitigation - it has to be deployed ASAP. They certainly should roll out definitions progressively and monitor for anything anomalous, but it has to be quick or the malware could beat them to it.
This is more a code-safety issue than a CI/CD one. The bug was in the driver all along, but it had never been triggered before, so it passed the tests and got rolled out to everyone. Critical code like this ought to be written in memory-safe languages like Rust.
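A toy illustration of that point (this isn’t CrowdStrike’s actual channel-file format, just a made-up header with an offset and a length): in Rust, an out-of-range read through `.get()` comes back as `None`, so a malformed file becomes a handled error rather than an invalid memory access.

```rust
// Hypothetical "channel file": one byte of offset, one byte of length, then a body.
// Bounds-checked access via .get() returns None instead of reading out of range.
fn read_entry(file: &[u8]) -> Option<&[u8]> {
    let offset = usize::from(*file.first()?); // made-up 1-byte "offset" field
    let len = usize::from(*file.get(1)?);     // made-up 1-byte "length" field
    file.get(2 + offset..2 + offset + len)    // None if the header points past the end
}

fn main() {
    let bad = [0xFF_u8, 0x40, 0x00]; // header claims data far beyond the end of the file
    match read_entry(&bad) {
        Some(entry) => println!("parsed {} bytes", entry.len()),
        None => eprintln!("malformed channel file, skipping this update"), // no page fault
    }
}
```

The unchecked C-style equivalent of that slice access is exactly where a bad offset in a config file turns into a wild read inside a kernel driver.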
I’d unsubscribe from !linux@lemmy.ml for a start.
I’m pretty sure this update didn’t get pushed to Linux endpoints, but sure, Linux machines running the CrowdStrike driver are probably vulnerable to panicking on malformed config files. There are a lot of weirdos claiming this is a uniquely Windows issue.
This doesn’t really answer my question but Crowdstrike do explain a bit here: https://www.crowdstrike.com/blog/technical-details-on-todays-outage/
These channel files are configuration for the driver and are pushed several times a day. It seems the driver can take a page fault if certain conditions are met. A mistake in a config file triggered this condition and put a lot of machines into a BSOD bootloop.
I think it makes sense that this was a pre-existing bug in the driver which was triggered by an erroneous config. What I still don’t know is whether these channel updates have a staged deployment (presumably driver updates do), and what fraction of machines that got the bad update actually had a BSOD.
Anyway, they should rewrite it in Rust.
Does anyone know how these CrowdStrike updates are actually deployed? Presumably the software has its own update mechanism to react to emergent threats without waiting for Patch Tuesday. Can users control the update policy for these ‘channel files’ themselves?
Because then it would be 'a;imodo not qazimodo.
For thin clients?