
Date bug affects Ubuntu 25.10 automatic updates

The Ubuntu Project has announced that a bug in the Rust-based uutils version of the date command shipped with Ubuntu 25.10 broke automatic updates:

Some Ubuntu 25.10 systems have been unable to automatically check for available software updates. Affected machines include cloud deployments, container images, Ubuntu Desktop and Ubuntu Server installs.

The announcement includes remediation instructions for those affected by the bug. Systems with the rust-coreutils package version 0.2.2-0ubuntu2 or earlier have the bug; it is fixed in 0.2.2-0ubuntu2.1 and later. The bug does not impact manual updates using the apt command or other utilities.

Ubuntu embarked on a project to "oxidize" the distribution by switching to uutils and sudo-rs for the 25.10 release, and to see whether the Rust-based utilities would be suitable for the long-term support (LTS) release slated for next April. LWN covered that project in March.




The next Ubuntu release...

Posted Oct 23, 2025 20:52 UTC (Thu) by dskoll (subscriber, #1630) [Link] (57 responses)

... will be called Grateful Guinea-Pig

But seriously. Rewriting C utilities that have been battle-tested for decades in Rust might be a good idea in the long term, but anyone could have predicted short-term hiccups.

The next Ubuntu release...

Posted Oct 23, 2025 21:38 UTC (Thu) by geofft (subscriber, #59789) [Link] (35 responses)

Which is why I'm glad they're doing it! It seems like the kind of thing that one can be understandably scared to ever do, and I say this as one of the folks involved with getting some Rust in the Linux kernel.

The next Ubuntu release...

Posted Oct 23, 2025 22:21 UTC (Thu) by dskoll (subscriber, #1630) [Link] (31 responses)

I don't have anything against Rust (nor against C), but I do think it's unfortunate that the Rust utilities are licensed under the MIT license rather than the GPL. But that's a whole other debate...

The next Ubuntu release...

Posted Oct 23, 2025 23:57 UTC (Thu) by dralley (subscriber, #143766) [Link] (3 responses)

I don't think software like the coreutils is particularly monetizable these days. Especially if the whole point of it is to behave exactly the same as existing software. Meh.

The next Ubuntu release...

Posted Oct 24, 2025 8:46 UTC (Fri) by leromarinvit (subscriber, #56850) [Link] (2 responses)

It takes away one possible avenue to try and get vendors to release embedded firmware source code. Much like Busybox probably isn't particularly monetizable, but it has been used successfully to get vendors to release sources - including e.g. custom kernel patches that are much more interesting than Busybox itself.

The next Ubuntu release...

Posted Oct 24, 2025 12:13 UTC (Fri) by joib (subscriber, #8541) [Link]

There's already toybox as a BSD-licensed busybox clone, used e.g. in Android. And of course all the BSDs have their own versions of the core utilities. So for better or worse, that ship has truly sailed, I think.

The next Ubuntu release...

Posted Oct 24, 2025 13:25 UTC (Fri) by dralley (subscriber, #143766) [Link]

There is a net benefit in getting embedded device code written in a language with fewer security issues (not that coreutils is a significant culprit). I wouldn't be upset at all if embedded devices used it.

The next Ubuntu release...

Posted Oct 24, 2025 11:10 UTC (Fri) by ballombe (subscriber, #9523) [Link] (14 responses)

> But that's a whole other debate...
It is not another debate. This bug is a direct consequence of that decision.
If they were willing to be a GNU GPL derivative of the original coreutils, they could have ported the C code to Rust instead of rewriting it from first principles, which would have avoided introducing fresh bugs.

The next Ubuntu release...

Posted Oct 24, 2025 13:56 UTC (Fri) by ssokolow (guest, #94568) [Link] (13 responses)

And yet I don't think that's how it would go for two reasons:
  1. uutils began as a pile of practice projects, not a minmax'd effort to produce a 1-to-1 replacement the most efficient way possible
  2. They already know that it doesn't pass the full GNU coreutils test suite yet.
I blame Canonical for precipitating a second "We told you KDE 4.0 was a developer preview, not an end-user release" situation.

The next Ubuntu release...

Posted Oct 24, 2025 15:02 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link] (10 responses)

> I blame Canonical for precipitating a second "We told you KDE 4.0 was a developer preview, not an end-user release" situation.

I don't see anything in the KDE 4 announcement to indicate that it was a developer preview. Where is this coming from?

The next Ubuntu release...

Posted Oct 24, 2025 18:38 UTC (Fri) by elimranianass (subscriber, #164758) [Link]

I think it's due to a 15 (almost 20!) year-old game of telephone. It started as "some comments from some devs at the time stated that distros should wait until point releases" (or something like that) and turned into "KDE 4.0 was actually just a dev preview never intended for end users". But I understand why that happened: the KDE 4.0 release was so controversial, and it led to people from all sides blaming the other side :).

KDE 4.0.0 status

Posted Oct 26, 2025 14:59 UTC (Sun) by Jandar (subscriber, #85683) [Link] (7 responses)

> I don't see anything in the KDE 4 announcement to indicate that it was a developer preview. Where is this coming from?

It wasn't in the official announcement, but it was communicated in prior mailing-list messages. Users can't be expected to read those, but packagers for major distros had to be aware of them. I can't find a link right now, but I clearly recall how surprised I was when some distros used this first .0 release as their main KDE package.

The only link I found today merely excerpts information from the KDE developers: https://www.osnews.com/story/19145/kde-400-released/

"but take note that the developers have clearly stated that KDE 4.0 is not KDE 4, but more of a base release with all the underlying systems ready to go, but with still a lot of work to be done on the user-visible side."

KDE 4.0.0 status

Posted Oct 26, 2025 15:52 UTC (Sun) by Jandar (subscriber, #85683) [Link] (6 responses)

I just found a link to a KDE message about KDE 4.0 being unfinished: https://commit-digest.kde.org/issues/2007-12-30/

"Stephan Binner writes a reminder note about the upcoming KDE 4.0 release (in an attempt to reign in wildly over-optimistic expectations by some users):

Before everyone starts to spread their opinion about KDE 4.0, let me spread some reminders:

KDE 4.0 is not KDE4 but only the first (4.0.0 even non-bugfix) release in a years-long KDE 4 series to come."

KDE 4.0.0 status

Posted Oct 27, 2025 1:26 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link] (5 responses)

None of that says it is a developer preview and should not be packaged by distributions and the announcement doesn’t say that either and links to distributions shipping with it. As far as I can tell this is revisionist history to say upstream ever publicly said this clearly.

KDE 4.0.0 status

Posted Oct 27, 2025 3:20 UTC (Mon) by pizza (subscriber, #46) [Link]

> None of that says it is a developer preview and should not be packaged by distributions and the announcement doesn’t say that either and links to distributions shipping with it. As far as I can tell this is revisionist history to say upstream ever publicly said this clearly.

I don't think there's any debate that the KDE folks badly dropped the ball with respect to officially communicating the "quality" one could expect from KDE 4.0.0, including touting it as the latest "stable release" at the time.

LWN has a good writeup of this debacle here:

https://lwn.net/Articles/316827/

Notably it quotes Aaron Seigo [1] shortly after 4.0.0 was tagged for release:

"KDE 4.0.0 is our "will eat your children" release of KDE4, not the next release of KDE 3.5. The fact that many already use it daily for their desktop (including myself) shows that it really won't eat your children, but it is part of that early stage in the release system of KDE4. It's the "0.0" release. The amount of new software in KDE4 is remarkable and we're going the open route with that."

..and later, he said:

"I have to admit that it's really hard to stay positive about the efforts of downstreams when they wander around feeling they should be above reproach while simultaneously hurting our (theirs and ours) users in a rush to be more bad ass bleeding edge than any other cool dude distro in town. I hope this time instead of handing out spankings, the distros can sit back and think about things and try and figure out how they played an unfortunate part in the 4.0 fiasco."

KDE 4.0.0 status

Posted Oct 27, 2025 12:28 UTC (Mon) by Jandar (subscriber, #85683) [Link] (3 responses)

> As far as I can tell this is revisionist history to say upstream ever publicly said this clearly.

The warning on 30th December 2007 about a release announced on Friday, 11 January 2008 is revisionist history? My understanding of calendar dates and the meaning of "revisionist" differs from yours.

KDE 4.0.0 status

Posted Oct 27, 2025 13:23 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link] (2 responses)

It has nothing to do with calendar dates. If the intent was to communicate that KDE 4.0 was a developer preview, that did not happen in anything you have linked to. There is no clear public communication to any distribution that KDE 4.0 was a developer preview, as asserted earlier. The obvious places to say something like that, including the announcement of the release, are missing any such warnings.

KDE 4.0.0 status

Posted Oct 28, 2025 12:29 UTC (Tue) by Jandar (subscriber, #85683) [Link] (1 responses)

My expectation is that a packager of a major DE is better informed than by reading only release announcements. I, as an interested user, was better informed at that time.

I hope none of the packages I use is packaged by you.

This is my last post in this thread. It seems to me we are unable to reach a common ground.

KDE 4.0.0 status

Posted Oct 28, 2025 23:34 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link]

> My expectation is that a packager of a major DE is better informed than by reading only release announcements

No disagreements there. I specifically said "including the release announcement" but did not limit my comments to it. If anyone is able to show any public announcement anywhere that clearly communicated that KDE 4.0 was meant as a developer preview only, I am happy to change my mind. So far, I see no evidence of that.

> I hope none of the packages I use is packaged by you.

That slight is uncalled for but good news for you, I am no longer involved in maintaining packages.

The next Ubuntu release...

Posted Oct 27, 2025 0:40 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

I was in the KDE SIG for Fedora at that time (I started using it personally with 4.0.9). IIRC, we were well aware that it was experimental (it was packaged in parallel to KDE3 for some time).

The next Ubuntu release...

Posted Oct 25, 2025 16:23 UTC (Sat) by raven667 (subscriber, #5198) [Link] (1 responses)

> uutils began as a pile of practice projects, not a minmax'd effort to produce a 1-to-1 replacement the most efficient way possible

This is something I want to put a spotlight on: the choice by Canonical to replace coreutils with uutils, and whatever consequences come of that, good or bad, reflects on Canonical's engineering, not on the uutils developers, who were just making a "my first Rust project" for fun, not creating an "Enterprise" replacement. I'd hate to dump unwanted criticism about a project not being good enough onto someone just for putting their personal project on the Internet, because someone else decided to pull it in as a dependency; that's how we end up eating our own people.

The next Ubuntu release...

Posted Oct 25, 2025 16:28 UTC (Sat) by dskoll (subscriber, #1630) [Link]

Yes, Canonical seems to believe in "move fast and break stuff" which is fine if you run a parasitic data-harvesting company designed to abuse users and damage their mental health, but not a great strategy for an OS.

Copy-left hasn't worked out how I thought it would

Posted Oct 24, 2025 22:37 UTC (Fri) by ras (subscriber, #33059) [Link] (11 responses)

Hasn't it always been like this, to some extent? Even glibc is LGPL rather than GPL. Like you I've always been a fan of the copy-left licences, but in retrospect it didn't go the way I expected when I enthusiastically first adopted them.

Sometimes that's because companies just ignore them. For example I was startled to find the tunnelling software distributed with Microsoft's InTune was just repackaged GPL software, which they redistributed with no attribution. Instead they slapped the usual, restrictive MS commercial licence on code they had no rights to whatsoever. I only found out because the Linux version was a container of some sort that didn't work. So I unpacked it, eventually discovered what it was, and was able to replace it with suitably tuned Debian packages that received timely security updates. Apple took the other route, which is to avoid copy-left entirely. Most of the app stores steer you in that direction as well.

Either way, copy-left demonstrably didn't force those companies to contribute back. The most annoying example of side-stepping copy-left is perhaps NVidia's kernel modules. I have no idea if it was legal or not and as far as I know it hasn't been tested, but regardless it does demonstrate how ineffective copy-left can be.

Meanwhile, other companies have contributed back. Lots of them. That's sometimes because they adopted open source as a marketing technique. But often it's because they have no choice. Open source projects like the kernel, GCC and Rust move at such a fast pace that the effort of carrying the patches is too big, so they post them upstream.

In an odd way, who uses copy-left vs permissive licences has turned out to be the opposite of how I naively thought it would go. The most enthusiastic users of the AGPL and its relatives are commercial users, who are trying to prevent other commercial users from selling their work. Meanwhile, the open source world is gradually moving towards permissive licences, the Rust and JS ecosystems being prime examples. Neither of those ecosystems seems to suffer for it, as both have enormous amounts of code contributed.

Copy-left hasn't worked out how I thought it would

Posted Oct 25, 2025 12:11 UTC (Sat) by khim (subscriber, #9252) [Link] (7 responses)

> Either way, copy-left demonstrably didn't force those companies to contribute back.

That's not true. Linux has jumped ahead of BSD precisely because companies have contributed back. A lot.

Not all of them did, sure, but enough for that to matter.

What copyleft failed to do was to implement RMS's dream of proprietary software destruction.

Copy-left hasn't worked out how I thought it would

Posted Oct 26, 2025 10:25 UTC (Sun) by ras (subscriber, #33059) [Link] (6 responses)

> Linux has jumped ahead of BSD precisely because companies have contributed back. A lot.

Yes, I think that's true. But you are apparently saying that's because the GPL forced them to contribute back.

They had two choices: contribute to BSD, with no obligations imposed on them whatsoever, or contribute to Linux, where the GPL effectively obligated these capitalist entities to give away their work for free. It looks like you are claiming they chose Linux because it forced this obligation on them?

That's hard to swallow, particularly given the other explanations I can think of. The first is the AT&T legal threat hanging over the BSDs at the time. The second is that Linus has always made contributing to Linux easier than the BSDs have. The third is the BSDs' tight leash on their ports trees, whereas Linux outsourced that to GNU and the distros. That made things like OpenWrt and Alpine possible. Linus's willingness to invite new ideas into the kernel continues to this day with Rust. It made for a much more inviting ecosystem, one far more likely to accept the tweaks that a company with new requirements needed to add. But to be fair, I don't have a clue what the real driver was. I just doubt it was the GPL.

Regardless, once those heavyweight contributions started rolling in, the sheer range of hardware supported made Linux a more attractive base than the BSDs, GPL or no GPL. The pace of development and lack of a stable kernel API meant carrying patches became very burdensome, so it became in your interest to contribute work upstream regardless of copy-left.

As I said, if you look at the sheer amount of code that is contributed back to projects with permissive licences now (I suspect they get more in total than copy-left projects, just because of the sheer number of them), copy-left doesn't look as important to open source as it once seemed. If it's true companies have to be forced to contribute back, why do we have TypeScript or Cassandra? I doubt it made that much difference to the kernel either.

Copy-left hasn't worked out how I thought it would

Posted Oct 26, 2025 10:50 UTC (Sun) by khim (subscriber, #9252) [Link] (2 responses)

> It looks like you are claiming they chose Linux because it forced this obligation on them?

Nope. Some chose Linux, some chose BSD. Some even chose Minix! Even today SONY prefers BSD for their consoles.

But the ones who picked Linux had to provide things back… over decades these things add up.

> That made things like OpenWrt and Alpine possible.

OpenWrt was born from the described process. It's even documented on LWN.

> I just doubt it was the GPL.

It was the GPL, among other things. Linus's interpretation of it as a tit-for-tat exchange was attractive to many. GPLv3, which tried to close the loophole and solve the printer problem that birthed the GPL in the first place, crashed the equilibrium: companies found it too onerous and dangerous, and because GPLv3 is not compatible with GPLv2 it divided the community, too.

Copy-left hasn't worked out how I thought it would

Posted Oct 29, 2025 10:55 UTC (Wed) by ras (subscriber, #33059) [Link] (1 responses)

> But the ones who picked Linux had to provide things back… over decades these things add up.

You're missing my point. They didn't choose Linux because of the GPL; they chose Linux for other reasons. I now think whatever those reasons were are what led Linux to become more successful than the BSDs, not the GPL. But yes, I agree the GPL forcing those contributions back created a feedback loop, accelerating the process once it got started.

> Some chose Linux, some chose BSD

And some who chose BSD contributed back heavily. Example: Netflix's contributions to FreeBSD. Turns out you don't need the GPL to make that happen.

> OpenWrt was born from the described process.

It certainly sped things up. I suspect OpenWrt using the binary kernel modules that didn't have source provided sped things up a good deal more.

We know LinkSys didn't like the GPL because they then moved to VxWorks to avoid it. Evidently they liked the BSDs even less, because they didn't move to them despite their being free and including non-GPL user-space tools. It would be interesting to know why.

Then, as often happens, LinkSys discovered that open source sells. The WRT54GL could be purchased from LinkSys (at a substantial premium!) long after the VxWorks version of the WRT54G had died. Espressif went down a similar road to enlightenment with the esp8266: everything from them was initially proprietary. It wasn't the GPL that led either company down this path.

> GPLv3 is not compatible with GPLv2

Yes, it is, assuming you use the original GPLv2 with its "or later version" clause.

Copy-left hasn't worked out how I thought it would

Posted Oct 29, 2025 13:26 UTC (Wed) by pizza (subscriber, #46) [Link]

> We know LinkSys didn't like the GPL because they then moved to VxWorks to avoid it.

No, Linksys moved to VxWorks for later revisions of the WRT54G because it had lower system requirements, allowing them to use less capable (ie cheaper) hardware. [1] They continued to sell the higher-spec hardware as the more expensive WRT54GL, which (according to them) eventually became "the best selling wireless router of all time".

Subsequent products in the WRTxx (and even the WRT54xx) product family remained predominantly (if not entirely) Linux-based.

Not exactly the actions of an organization trying to avoid the GPL...

[1] Same SoC, but half the RAM and flash.

Copy-left hasn't worked out how I thought it would

Posted Oct 28, 2025 11:03 UTC (Tue) by job (guest, #670) [Link] (2 responses)

> It looks like you are claiming they chose Linux because it forced this obligation on them?

This is empirically and trivially true. Linux won, BSD didn't, despite that BSD had a huge head start.

Companies contribute to Linux specifically because they *know* that their competitors can't incorporate these contributions into some proprietary software.

The business models reflect this, too. In the GPL world it is common to sell support and subscriptions, and to some extent sell dual licenses. In the BSD world the dominant model is to sell proprietary add-ons and non-free distributions. Free reimplementations of these add-ons are frequently seen as problems that need to be solved.

> the sheer amount of code that is contributed back to projects with permissive licences now

There was a decade when businesses were started around permissive, non-copyleft licenses, but these have almost all either failed or changed to a proprietary license. The amount of code was indeed huge, but most of it does not exist anymore. MongoDB, Elastic, and Pivotal are a few examples; Cassandra is the exception here.

They all changed in response to huge companies such as Amazon or Google starting to contribute code to their own forks. These forks mostly existed to replace proprietary features with free features, and to fit the higher development velocity these companies desire. They were seen as huge problems and even as a breach of social trust. But a fork should be a *good* thing. It should be *desirable* that large companies contribute huge volumes of free code. These products are all now either completely non-free or have a small free version with neutered functionality.

So it's safe to conclude that the GPL is much more commercially viable in practice, and that it directly led to Linux's huge domination.

Copy-left hasn't worked out how I thought it would

Posted Oct 28, 2025 14:06 UTC (Tue) by paulj (subscriber, #341) [Link]

NeXTSTEP / OPENSTEP might be a good addition to your list FWIW.

> They all changed in response to huge companies such as Amazon or Google starting to contribute code to their own forks.

They contribute only selectively though. Stuff they consider an advantage they often do not contribute back.

Copy-left hasn't worked out how I thought it would

Posted Oct 29, 2025 11:25 UTC (Wed) by ras (subscriber, #33059) [Link]

> The business models reflect this, too. In the GPL world it is common to sell support and subscriptions

My guess is that the dominant model for GPL software doesn't involve selling support, subscriptions, or shipping the source to customers. Instead it follows one of two paths. You get it in a router, a TV, a phone, or a car radio (that one blew me away), and they make money from the hardware. Or you use it via a server, where they make money from the software but aren't obliged to contribute back their modifications. Using copy-left licences to sell software or subscriptions hasn't been a wild success. ElasticSearch's troubles spring to mind, as does Red Hat's response to Oracle Linux.

> So it's safe to conclude that GPL is much more commercially viable in practice, and directly led to Linux' huge domination.

Linux has a huge domination in software? Certainly it dominates as the base OS. But in lines of code shipped to end users, it's a tiny fraction. My Debian laptop uses far more lines to drive its user space than its kernel. Granted, Debian user space is mostly copy-left. But Alpine is mostly permissive. Even Fedora is likely about 30% pure copy-left, another 25% mixed, and the rest permissive [0].

[0] https://www.sonarsource.com/blog/the-state-of-copyleft-li...

Copy-left hasn't worked out how I thought it would

Posted Oct 27, 2025 13:21 UTC (Mon) by dfc (subscriber, #87081) [Link] (2 responses)

> I was startled to find the tunnelling software distributed with Microsoft's InTune was just repackaged GPL software,

What was the GPLed tunnelling software?

Copy-left hasn't worked out how I thought it would

Posted Nov 7, 2025 9:02 UTC (Fri) by ras (subscriber, #33059) [Link]

Sorry, I don't remember its name, or even if it had a name. I vaguely recall the product was discontinued after I left, in favour of something that looked to tie into the Authentication / Authorisation stuff Microsoft uses for their cloud. It was what InTune on Android used when you asked it to create a tunnel. It connected to a server-side package you downloaded from Microsoft.

To be honest, I was grateful and somewhat surprised they bothered to ship the package for Linux. The company I worked for based everything on Linux, but was taken over by a Microsoft shop hundreds of times its size. They forced InTune on me. If Microsoft hadn't supplied a Linux package, they would have forced a Windows server on me.

Copy-left hasn't worked out how I thought it would

Posted Nov 12, 2025 9:21 UTC (Wed) by ras (subscriber, #33059) [Link]

I recall what the old server was - it was an instance of OpenConnect. It's LGPL, not GPL.

Oddly, I stumbled across a modern version of this software today. It's called the "Microsoft Tunnel Gateway". It's a docker image, available here: https://learn.microsoft.com/en-us/intune/intune-service/p...

I had a brief look. The "/usr/local/bin/mstunnel" binary contains the string "/etc/ocserv", which happens to be where OpenConnect stores its settings. "/etc/pam.d/ocserv" has this line: "auth required /lib/security/mise_pam.so mise_config=/etc/ocserv/mise.json enable_mise_logging". There is no "/etc/ocserv" directory, but from memory that's created when you install it, which I didn't do.

According to Debian's copyright file, libopenconnect code is LGPL, but also contains GPL code. It was statically linked. A charitable take is Microsoft assumed it was LGPL, and didn't modify it so they weren't required to contribute anything back.

That just leaves them with attribution, and making source available for the [L]GPL code. They require you to accept a licence similar to this one before installing: https://support.microsoft.com/en-us/office/microsoft-soft... There are no other licences, attributions, or information about obtaining the source displayed.

The next Ubuntu release...

Posted Oct 26, 2025 13:40 UTC (Sun) by RazeLighter777 (subscriber, #130021) [Link] (2 responses)

I see the benefit in replacing the suid/setgid tools, but I wonder what the benefit is in safer implementations of non-privileged utilities? All security changes need a threat model, and exploiting coreutils isn't a very juicy target. sudo-rs seems like a great example of benefit from a Rust rewrite. But most of coreutils doesn't need setuid/setgid, if I'm not mistaken.

The next Ubuntu release...

Posted Oct 26, 2025 13:43 UTC (Sun) by dskoll (subscriber, #1630) [Link]

The non-privileged utilities are still used a lot by the root user in various scripts. If an attacker can somehow arrange an exploit (eg, by creating a weirdly-named file that a script stumbles across) then the attacker can get root.

The next Ubuntu release...

Posted Oct 27, 2025 8:03 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

> I see the benefit in replacing the suid/setgid tools, but I wonder what the benefit is in safer implementations of non-privileged utilities?

There were buffer overflow bugs in tools like `file` that you would expect to just work. And in the modern world, "root" is meaningless: https://xkcd.com/1200/

The next Ubuntu release...

Posted Oct 24, 2025 1:45 UTC (Fri) by welinder (guest, #4699) [Link]

Agreed. And it's not even that the C versions are bug-free -- probably not, but who knows? -- but that whatever bugs might exist have been adapted to. New code, new bugs.

And it's humbling to see that a silly little bug deep in date can silently break unattended security updates!

Don't fix what aint broke

Posted Oct 24, 2025 6:30 UTC (Fri) by eru (subscriber, #2753) [Link] (19 responses)

Rewriting C utilities that have been battle-tested for decades in Rust might be a good idea in the long term,

I fail to see why it would be a good idea even in the long term. These utilities are done; the specifications do not change, or change very little. The only valid reason might be a future situation where support for the C language starts disappearing, which is totally a fantasy scenario. C is too entrenched for that to happen within the expected lifetime of our technical civilisation.

Don't fix what aint broke

Posted Oct 24, 2025 7:40 UTC (Fri) by taladar (subscriber, #68407) [Link] (18 responses)

If they are done, then why are there so many commits in the repo that the short version on

https://gitweb.git.savannah.gnu.org/gitweb/?p=coreutils.git

showing the last 16 commits doesn't even go back a week of development history?

These tools need maintenance, and rewriting them in something with a saner build system than what C has to offer after 50 years will certainly make this easier.

As for C not disappearing, it feels like we are already in an era where most young people below the age of 35-40 or so do not learn it any more, so you might be surprised how quickly the pool of potential volunteer maintainers will deplete for boring, mature C projects.

Don't fix what aint broke

Posted Oct 24, 2025 8:48 UTC (Fri) by alx.manpages (subscriber, #145117) [Link] (16 responses)

> As for C not disappearing, it feels like we are already in an era where most young people below the age of 35-40 or so do not learn it any more

I'm 32, and I went to university with people 7 years younger than me, and C was still the main language we studied there.

I've heard that some schools have reduced the amount of C courses, but it's still there.

> so you might be surprised how quickly the pool of potential volunteer maintainers will deplete for boring, mature C projects.

While the number of people who know C well enough might shrink, that might also increase the proportion of C programmers who know C very well. Self-selection can be a good thing. I don't expect the number of C experts to diminish significantly.

> These tools need maintenance, and rewriting them in something with a saner build system than what C has to offer after 50 years will certainly make this easier.

C has evolved quite a lot in these 50 years, and I'm not sure Rust is better than C. Most of the issues people complain about in C are in reality issues with old C, or with low-quality compilers. Some issues remain in the latest GCC, but there's work on having an even better C in the coming years.

Incremental improvements are better than jumping entirely to a new language, and this issue with Rust's date(1) is an example of why we should keep improving the C version, which is almost bug-free, instead of writing new bugs in a different language.

<https://www.joelonsoftware.com/2000/04/06/things-you-shou...>

Don't fix what aint broke

Posted Oct 24, 2025 13:14 UTC (Fri) by LtWorf (subscriber, #124958) [Link] (1 responses)

Not only is C still there; job offers in C outnumber the job offers in Rust (at least in my area).

Don't fix what aint broke

Posted Oct 27, 2025 8:43 UTC (Mon) by taladar (subscriber, #68407) [Link]

Job offers largely depend on the number of existing code bases, since those almost always outnumber new projects, and nobody is arguing that there is more Rust code out there than C code at this point. What would be more interesting is the number of people willing to take those jobs.

Don't fix what aint broke

Posted Oct 27, 2025 8:42 UTC (Mon) by taladar (subscriber, #68407) [Link] (13 responses)

> I'm 32, and I went to university with people 7 years younger than me, and C was still the main language we studied there.

Good for you. I am 43, and they tried to teach everything in Java back when I was in university. Yes, including stuff like OS memory management, which is absolutely ridiculous to teach in a GC language without pointers.

Incremental improvements can only get you so far without breaking compatibility with existing code bases, which is the overwhelming reason to stick with an existing language in the first place. I think we can all agree that nobody wants a language that is essentially a new language in terms of compatibility, just one that bears the same name as the old language.

C has an absolutely gigantic pile of flaws that can never be fixed without breaking compatibility.

And this bug could have happened in any language, including C; someone starting the development of a feature but then forgetting to get back to it is hardly the kind of bug that says anything about the language itself.

Don't fix what aint broke

Posted Oct 27, 2025 10:53 UTC (Mon) by alx.manpages (subscriber, #145117) [Link] (12 responses)

> Incremental improvements can only get you so far without breaking compatibility with existing code bases,

Things get fixed slowly but steadily, breaking old code. We got rid of implicit int, for example. We also got rid of K&R function definitions.

Recently, GCC added -Wunterminated-string-initialization, which catches bugs where a character array is initialized with a string but ends up without the null terminator.
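A minimal illustration of what that warning catches (assuming a GCC new enough to carry the flag; the variable names are made up):

/* gcc -Wunterminated-string-initialization -c example.c */
char ok[6]  = "hello";   /* 5 chars + '\0': fits, no warning           */
char bad[5] = "hello";   /* exactly fills the array, silently dropping
                            the '\0'; this is what GCC now warns about */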

We're discussing adding -Wzero-as-null-pointer-constant to C in GCC, which would also be a breaking change (which is why we're being slow at doing this), but it will most likely eventually be merged.

We got _Countof() in ISO C2y (and GCC 16, and Clang 21), which now counts arrays, and will soon count array parameters, making it really hard to step out of bounds in arrays. <https://inbox.sourceware.org/gcc-patches/cover.1755161451...>
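A small illustration of the difference (this assumes a compiler with C2y _Countof, per the versions above):

int a[8];
_Static_assert(_Countof(a) == 8, "counts elements, not bytes");

/* Unlike the sizeof(a)/sizeof(a[0]) idiom, _Countof() on a pointer is
   a compile-time error rather than a silently wrong answer: */
int *p = a;
/* _Countof(p);   -- constraint violation, rejected at compile time */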

> I think we can all agree that nobody wants a language that is essentially a new language in terms of compatibility, just one that bears the same name as the old language.

Even the kernel has moved to newer dialects of C. As long as the breakage is in small steps, it can be acceptable. Some new diagnostics (such as disallowing 0 as a null pointer constant) break existing code, but if the resulting new dialect is significantly safer, and programmers can handle the breakage relatively easily, it will be eventually adopted.

> C has an absolutely gigantic pile of flaws that can never be fixed without breaking compatibility.

The list isn't so gigantic; some of them have already been fixed, and others are in the process of being fixed. Yes, some require breaking changes, and those have happened and will happen again.

Don't fix what aint broke

Posted Oct 28, 2025 8:49 UTC (Tue) by taladar (subscriber, #68407) [Link] (11 responses)

Most of the flaws in C are so ingrained in the C community that C programmers even consider some of them features.

And while you might be able to fix some surface-level syntactic issues incrementally, you will never be able to fix deep architectural issues. How would you e.g. introduce some change that replicates Rust's Send and Sync traits that prevent sharing data in broken ways across threads? How would you get rid of the separate passing of pointer and length values? How would you fix the utterly broken textual include system?

Don't fix what aint broke

Posted Oct 28, 2025 10:21 UTC (Tue) by alx.manpages (subscriber, #145117) [Link] (10 responses)

> How would you e.g. introduce some change that replicates Rust's Send and Sync traits that prevent sharing data in broken ways across threads?

All of the code I write these days is single-threaded, so I don't need that, nor feel qualified to comment on that. :)

> How would you get rid of the separate passing of pointer and length values?

That's in my comment above. IMO, possibly the most important improvement in the language in decades, and coming soon.

I'm currently working on a patch for GCC, which already works. I'm just working on adding diagnostics for a corner case.

Please follow this link: <https://inbox.sourceware.org/gcc-patches/cover.1755161451...>.

The idea is to use macros for wrapping functions that accept a pointer and a length, so that the macro gets an array, and decomposes it into the right pointer and length in a safe manner.

Here's an example wrapping strftime(3):

#define strftime_a(dst, fmt, tm) strftime(dst, countof(dst), fmt, tm)

First, I'll explain how this wrapper macro is safe.

countof() requires that the input is an array, and produces a compile-time error if that constraint is violated. This already exists, but of course it doesn't work for arrays passed as function parameters, which decay to pointers. The patch I'm working on will make countof() also work on array parameters, and return the declared number of elements.

For this to work, one must wrap *all* functions that get a pointer and a length, and non-wrapped calls would be points of failure (resembling Rust's unsafe code). The goal would be to avoid all such calls, or minimize them.

Second, I'll explain how to disallow the non-wrapped calls.

The idea would be that strftime(3) should only be allowed to be called through strftime_a(). You could use the [[deprecated]] attribute on the function prototype, and then the wrapper could use _Pragma to disable that diagnostic within the macro. This would need some help from the compiler.

Alternatively -- and probably more easily -- you could set up a script in the build system that finds all such wrappers (if you use consistent naming, such as a trailing `_a`, that would be feasible), and then checks if there are any calls to the raw function anywhere in the code, reporting them all.

In the link above, you can find a proof-of-concept program that doesn't specify a pointer length manually at all; yet it uses malloc(3), and it passes array parameters to functions.
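For concreteness, here is a hedged usage sketch of that wrapper (assuming a countof() macro built on _Countof, as in the linked patch; the names are illustrative):

#include <time.h>

#define countof(x)  _Countof(x)  /* assumption: wraps the C2y operator */
#define strftime_a(dst, fmt, tm)  strftime(dst, countof(dst), fmt, tm)

void example(const struct tm *tm)
{
	char buf[64];

	strftime_a(buf, "%Y-%m-%d", tm);  /* length 64 inferred safely */

	/* char *p = buf;
	   strftime_a(p, "%Y-%m-%d", tm);
	   -- rejected at compile time: p is a pointer, not an array */
}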

> How would you fix the utterly broken textual include system?

I'm sorry, but I consider this a feature. I like how it works. :)

If you have any specific concerns about #include, please mention them, but being textual makes it simple and great, IMO.

Don't fix what aint broke

Posted Oct 28, 2025 12:27 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (9 responses)

> If you have any specific concerns about #include, please mention them, but being textual makes it simple and great, IMO.

From the software side:

- it is hard to hide implementation details beyond "convention"
- it would be nice if C had a way to expose some members for consumers to be able to use while other fields are off-limits (while still supporting manual struct layout use cases)
- header guards are a fact of life (`#pragma once` can run into issues with hard links, symlinks, bind mounts, etc.)
- no namespacing (you never know when `a.h` might start interfering with `b.h`)
- `config.h` patterns rather than more targeted configuration selection (see also: related build gripe)

From the build system side:

- header search is implicit; one really should also depend on all header paths searched until the target is found *not* existing, for truly reliable builds
- it is hard to know any kind of higher-level information about headers: which headers belong to which "library" for tools like "you don't need to search for library X because you don't actually use it" or "you're including X's headers but you're finding them because Y makes its headers implicitly available"
- `config.h` busting caches project-wide on changes that affect one function's implementation decision

Some of these are definitely in the "trivial" or "no one cares about that edge case" category, but I'd much prefer something more structured.

Don't fix what aint broke

Posted Oct 28, 2025 13:42 UTC (Tue) by alx.manpages (subscriber, #145117) [Link] (4 responses)

> From the software side:

[... hiding stuff ...]

I don't enjoy hiding stuff. Just tell users to not depend on that, and use names that clearly tell users that something is an implementation detail. I guess this falls within what (at least some) C programmers consider a feature.

> - header guards are a fact of life (`#pragma once` can run into issues with hard links, symlinks, bind mounts, etc.)

Not a big deal. There are linters that report when a header guard doesn't follow some pattern (so, if you pasted from elsewhere, they would catch that mistake). Even without them, it's trivial to write a script that checks that each header file has a header guard consistent with its pathname.
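For reference, the pattern such a linter or script would enforce looks like this (the guard name mirroring the pathname is convention, not a language rule; the path is illustrative):

/* file: src/foo/bar.h */
#ifndef FOO_BAR_H
#define FOO_BAR_H

/* ... declarations ... */

#endif /* FOO_BAR_H */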

> - no namespacing (you never know when `a.h` might start interfering with `b.h`)

That's up to the programmer's hygiene. Namespaces look like a_foo and b_foo in C.
In practice, it rarely is an issue.

You could even have namespaces in C (there are tricks with structures), although people don't use them often because it's not worth it; underscores are cheap and reliable.

> From the build system side:
>
> - header search is implicit; one really should also depend on all header paths searched until the target is found *not* existing, for truly reliable builds

Yep. One can assume system headers are stable, but other than that, yes, each object file needs to recursively depend on all included files in the build system. This is handled easily by the compiler with `-M -MP`.

> - it is hard to know any kind of higher-level information about headers: which headers belong to which "library" for tools like "you don't need to search for library X because you don't actually use it" or "you're including X's headers but you're finding them because Y makes its headers implicitly available"

iwyu(1) solves this issue entirely. See <https://include-what-you-use.org/>.

> - `config.h` busting caches project-wide on changes that affect one function's implementation decision

You dislike autotools (and other build systems). I also dislike them. I use hand-written makefiles that don't have this issue.

Don't fix what aint broke

Posted Oct 28, 2025 15:30 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (3 responses)

> I don't enjoy hiding stuff. Just tell users to not depend on that, and use names that clearly tell users that something is an implementation detail. I guess this falls within what (at least some) C programmers consider a feature.

I have not found the "we're all adults here" guidelines to be sufficient to avoid carrying exceedingly dumb behaviors in the name of backwards compatibility. Hyrum's Law is very much in effect, and just not offering things in the first place is the best solution IME.

> Yep. One can assume system headers are stable, but other than that, yes, each object file needs to recursively depend on all included files in the build system. This is handled easily by the compiler with `-M -MP`.

No, it is not; I'm not saying that. I'm saying that, if in the search for `<stdio.h>` you *check* for `/some/user/path/stdio.h`, you *depend* on this file *not existing*. Only fully hygienic build systems even attempt to capture such things.

> iwyu(1) solves this issue entirely. See <https://include-what-you-use.org/>.

That tells you what *headers* you need. What tells you that your `pkg-config --cflags` call is no longer needed because you no longer use any of the headers associated with the package(s) found by it? Similarly, what tells you that header `frobnitz.h` really should have an associated `pkg-config --cflags frobdoodle` in your build system?

> I use hand-written makefiles that don't have this issue.

Presumably you then manage `-D` flags on a per-TU basis?

Don't fix what aint broke

Posted Oct 28, 2025 22:47 UTC (Tue) by alx.manpages (subscriber, #145117) [Link] (2 responses)

> I'm saying that, if in the search for `<stdio.h>` you *check* for `/some/user/path/stdio.h`, you *depend* on this file *not existing*. Only fully hygienic build systems even attempt to capture such things.

I think I'm still not understanding. Please clarify further.

> What tells you that your `pkg-config --cflags` call is no longer needed because you no longer use any of the headers associated with the package(s) found by it?

Hmmm, I see. There's no solution for that, as far as I know. I don't see this as a significant issue of header files, though.

> Similarly, what tells you that header `frobnitz.h` really should have an associated `pkg-config --cflags frobdoodle` in your build system?

Documentation. Manual pages have a LIBRARY section, which is where this should be covered. In libc functions, it looks like this (see for example printf(3)):

LIBRARY
Standard C library (libc, -lc)

For libraries that need pkgconf(1), it should be documented in that section.

> Presumably you then manage `-D` flags on a per-TU basis?

Yes.

Don't fix what aint broke

Posted Oct 29, 2025 2:49 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

> > I'm saying that, if in the search for `<stdio.h>` you *check* for `/some/user/path/stdio.h`, you *depend* on this file *not existing*. Only fully hygienic build systems even attempt to capture such things.

> I think I'm still not understanding. Please clarify further.

Let's say you have:

```
// gcc -I/home/alx/include main.c

#include <stdio.h>

int main(int argc, char* argv[]) {
	printf("%d args!\n", argc);
	return 0;
}
```

If you compile this, fine, you use `/usr/include/stdio.h`. However, if you then create `/home/alx/include/stdio.h`, what tells your build that this TU is now out-of-date? Because if you compile it again, you'll get different results. The problem is that `-M` and friends will *not* report this depends-on-not-existing relationship (and Makefiles and `ninja` both have no way to represent this kind of dependency anyway). Hermetic builds can get around it because they do such resource discovery separately, to set up the hermetic environment for the compilation itself.

Don't fix what aint broke

Posted Oct 29, 2025 9:50 UTC (Wed) by alx.manpages (subscriber, #145117) [Link]

Ahhh, I see what you mean now. Yes, that's imperfect. I'm thinking of a way that could work, although I haven't tried it. Here's the usual rule for building .d files (details may vary):

$(TU_d): $(builddir)/%.d: $(SRCDIR)/% Makefile $(pkconf_file) | $$(@D)/
	$(CC) $(CFLAGS_) $(CPPFLAGS_) -M -MP $(DEPHTARGETS) -MF$@ $<

You could then have a second set of dependency files which get rebuilt unconditionally:

$(TU_d2): $(builddir)/%.d2: $(SRCDIR)/% Makefile $(pkconf_file) FORCE | $$(@D)/
	$(CC) $(CFLAGS_) $(CPPFLAGS_) -M -MP $(DEPHTARGETS) -MF$@ $<

This second set would make sure that the new dependencies are *also* considered. That would make the build system slower, though. If you measure that and find the additional time to be reasonable, you could do that. Alternatively, be careful with your system headers; but I agree that's not ideal.

Don't fix what aint broke

Posted Oct 28, 2025 14:01 UTC (Tue) by paulj (subscriber, #341) [Link] (3 responses)

> it is hard to hide implementation details beyond "convention"

This isn't really true. C actually makes it reasonably easy to hide implementation, as long as you're happy to use heap-allocated objects. You just have an interface like:

typedef struct foo foo;   /* opaque: the layout lives only in foo.c */

foo *foo_new(void);
foo_status foo_action(foo *);
void foo_finish(foo *);

If you wish, make the last take a double-pointer, so that foo_finish can "reach back" into the caller and null out its reference, as a safety assist (though that doesn't prevent the caller having other pointers stashed - if this is a worry, implement a weak-ref system and use that instead).

If you want to allow stack allocation, you need a bit more of a dance: a foo_xxxx function to give a size to allocate, and a foo_init() function to initialise it (see the sketch below).
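A hedged sketch of that dance (declarations only; the names foo_size() and foo_init() are illustrative, and alloca() is non-standard):

#include <alloca.h>
#include <stddef.h>

typedef struct foo foo;   /* still opaque to the caller */

size_t foo_size(void);    /* how many bytes a foo needs      */
void   foo_init(foo *);   /* initialise caller-owned storage */
void   foo_action(foo *);

void caller(void)
{
	/* stack allocation without ever seeing foo's layout */
	foo *f = alloca(foo_size());

	foo_init(f);
	foo_action(f);
}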

Languages with things like generics based on included templates (literal or effective) leak implementation much more than C does.

Don't fix what aint broke

Posted Oct 28, 2025 15:33 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (2 responses)

I'm well aware of the heap-hiding trick. I'm talking about the functionality discussed in this GNU Cauldron talk: https://www.youtube.com/watch?v=bYxn_0jupaI

Basically, a way to have the structure available for performance reasons for specific fields while hiding others that should not be messed with. But you also want to control ABI layout for compactness or compatibility requirements.

Don't fix what aint broke

Posted Oct 28, 2025 15:47 UTC (Tue) by paulj (subscriber, #341) [Link] (1 responses)

You can have partially exposed objects too.

// public header
// foo.h

typedef struct {
	// public fields
	int x;
	int y;
} foo;

// foo_private.h

typedef struct {
	foo pub;
	// private stuff
} foo_priv;

And you have the foo_new() function allocate a foo_priv and return a pointer to its .pub member. The various foo_xxx(foo *) functions can cast the foo * back to a foo_priv *. Your user gets a limited 'foo' struct; your implementation can add whatever further internals after it. You can extend this approach to allow arbitrary composition of objects into a hierarchy; e.g., see include/linux/container_of.h in the Linux sources.

Personally, I wouldn't use that approach myself. I would just rely on the functions in the API to retrieve information - you can use linker version maps, and/or redirection to manage compatibility over time. If you want to have custom implementations for different instances, supply a struct of function pointers for the API - i.e. a runtime interface.

Don't fix what aint broke

Posted Oct 28, 2025 18:45 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

> But you also want to control ABI layout for compactness or compatibility requirements.

The public and private fields might be interleaved to ensure optimal layout.

Don't fix what aint broke

Posted Oct 27, 2025 7:14 UTC (Mon) by eru (subscriber, #2753) [Link]

showing the last 16 commits doesn't even go back a week of development history?

Affecting 9 programs out of the around 100 in coreutils. And the changes look like only internal polishing that no user is likely to notice. A typical example of work on software that is stable and deep in maintenance mode. Rewriting it in some other language takes it back to active development; it kind of resets the lifecycle.

uutils is doing well, but needs to be carefully managed

Posted Oct 23, 2025 21:12 UTC (Thu) by pixelbeat (guest, #7440) [Link] (13 responses)

Note that Ubuntu 25.10 is still using GNU for the "scary" commands like cp, mv, rm, ... They should rip that band-aid off sooner rather than later, so that any data-corruption possibilities are identified before Ubuntu 25.10 becomes more established or the next LTS release becomes imminent. Copying a file on Unix has lots of edge cases, multiplied by various file systems and even kernel bugs, etc.

Then there are fundamental issues with SIGPIPE handling in all the uutils https://github.com/uutils/coreutils/issues/8919

Also there are questionable interface changes being added like 12 ways to get a sha3 https://github.com/uutils/coreutils/issues/8984

I wish them well, but this needs to be carefully managed.

uutils is doing well, but needs to be carefully managed

Posted Oct 23, 2025 23:26 UTC (Thu) by csigler (subscriber, #1224) [Link]

> I wish them well, but this needs to be carefully managed.

I cannot possibly (imaginarily) upvote this comment enough.

For those familiar with the 1976 movie "Network":

"You have meddled with the primal forces of Unix, and _you_will_atone_!!!"

Clemmitt

uutils is doing well, but needs to be carefully managed

Posted Oct 24, 2025 4:04 UTC (Fri) by Keith.S.Thompson (subscriber, #133709) [Link] (11 responses)

Oddly, /usr/bin/false is a symlink to the Rust version, but /usr/bin/true is a symlink to the GNU C version.

I wonder whether that was a deliberate decision.

("true" and "false" are bash builtins, so the commands under /usr/bin probably aren't used very often.)

uutils is doing well, but needs to be carefully managed

Posted Oct 24, 2025 6:05 UTC (Fri) by mb (subscriber, #50428) [Link] (1 responses)

Rust is not the one and only truth, yet.

uutils is doing well, but needs to be carefully managed

Posted Oct 24, 2025 11:31 UTC (Fri) by makapuf (guest, #125557) [Link]

Yes, this will allow easy feature testing with test true == false

/s

uutils is not doing well

Posted Oct 24, 2025 9:27 UTC (Fri) by jengelh (subscriber, #33263) [Link] (1 responses)

>/usr/bin/false is a symlink to the Rust version, but /usr/bin/true is a symlink to the GNU C version

uutils-md5sum was recently broken too[1], so it is only natural to make a sensitive program like /bin/true (only one very specific return value is allowed!) be based on a known-good implementation.

[1] https://www.phoronix.com/news/Ubuntu-25.10-Coreutils-Make...

uutils is not doing well

Posted Oct 24, 2025 10:18 UTC (Fri) by collinfunk (subscriber, #169873) [Link]

Well, GNU true returns non-zero in some cases. :)

$ /bin/true; echo $?
0
$ /bin/true --help > /dev/full; echo $?
true: write error: No space left on device
1

uutils has more overhead

Posted Oct 24, 2025 11:09 UTC (Fri) by pixelbeat (guest, #7440) [Link] (3 responses)

Interesting, I hadn't realized /bin/true was still GNU. Perhaps this is a performance consideration, as all uutils have a larger startup overhead than their GNU equivalents, due mainly to the large multicall binary being used (Rust binaries being significantly larger). For example:
$ time seq 10000 | xargs -n1 true
real	0m8.634s
user	0m3.178s
sys	0m5.616s

$ time seq 10000 | xargs -n1 uu_true
real	0m22.137s
user	0m6.542s
sys	0m15.561s
It irks me to see mention of Rust implementations being faster when, at a fundamental level like this, they're slower and add significant overhead to every command run.

uutils has more overhead

Posted Oct 24, 2025 13:41 UTC (Fri) by ebee_matteo (subscriber, #165284) [Link] (2 responses)

From https://github.com/uutils/coreutils:

> If you don't want to build the multicall binary and would prefer to build the utilities as individual binaries, that is also possible.

This is a decision for the distribution to take, I would say.

uutils has more overhead

Posted Oct 24, 2025 13:55 UTC (Fri) by pixelbeat (guest, #7440) [Link] (1 responses)

Yes, agreed, though it's a different decision with uutils, as the separate binaries are significantly larger.

Note also that GNU coreutils can be built as a multi-call binary. Testing the performance of that here shows that the overhead is not Rust-specific, but rather the dynamic-linker overhead of loading the full set of libs linked by the multi-call binaries:

$ ./configure --enable-single-binary --quiet && make -j$(nproc)

$ time seq 10000 | xargs -n1 src/true

real	0m21.595s
user	0m7.437s
sys	0m14.151s

uutils has more overhead

Posted Oct 25, 2025 8:26 UTC (Sat) by ebee_matteo (subscriber, #165284) [Link]

I agree, but it should be noted that the main reason the binaries are larger is the compilation flags around panic behavior (abort vs. unwind), plus the fact that the stdlib is built with the generic use case in mind, and as such contains verbose backtraces and error printing which inflate the final binary size.

I still think that this is a decision for the distribution. It's a tradeoff between being able to better debug and diagnose issues, and binary size.

If I use a recompiled version of the Rust stdlib and panic = abort, for many binaries I get sizes comparable to the GNU versions (not all, it is true, but some of them also add some features).

uutils is doing well, but needs to be carefully managed

Posted Oct 24, 2025 13:31 UTC (Fri) by juliank (guest, #45896) [Link] (2 responses)

Yes, a bunch of things disabled some scripts in .d directories by creating symlinks to /bin/true in their place.

Because we dispatch by argv[0] in the multi-call binary, we then did not find the tool, because it was invoked with the symlink's name.

We do have a hardlink farm now and can resolve based on hardlinks where available, but it's a bit messy because it requires /proc to be mounted.
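To illustrate the failure mode, here is a minimal C sketch of argv[0] dispatch (the applet table is illustrative, not the actual uutils code); a symlink carrying any other name falls through to the error path:

#include <libgen.h>
#include <stdio.h>
#include <string.h>

static int do_true(void)  { return 0; }
static int do_false(void) { return 1; }

int main(int argc, char *argv[])
{
	/* dispatch on the name the binary was invoked as */
	const char *name = basename(argv[0]);

	(void)argc;
	if (strcmp(name, "true") == 0)
		return do_true();
	if (strcmp(name, "false") == 0)
		return do_false();

	/* a symlink named e.g. "some-hook-script" lands here */
	fprintf(stderr, "%s: applet not found\n", name);
	return 1;
}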

uutils is doing well, but needs to be carefully managed

Posted Oct 24, 2025 14:58 UTC (Fri) by pixelbeat (guest, #7440) [Link] (1 responses)

Ah, right. Note that the default GNU coreutils setup avoids that issue by using a wrapper script rather than symlinks. That's the default behavior with ./configure --enable-single-binary in GNU coreutils. I.e., it would install a file with the following contents at /usr/bin/true:
#!/usr/bin/coreutils --coreutils-prog-shebang=true

uutils is doing well, but needs to be carefully managed

Posted Nov 2, 2025 10:02 UTC (Sun) by juliank (guest, #45896) [Link]

This is genius, and it solves our AppArmor profile problem again: now, say, `/usr/bin/ls` actually works again for both the GNU and Rust versions, and you don't need to deal with any of the shenanigans.

No problems for oldschool Linux users ...

Posted Oct 23, 2025 21:35 UTC (Thu) by JMB (guest, #74439) [Link]

It is interesting that automatic updates seem to be such a high priority.
For smartphone junkies that may be true (due to fear of missing out),
but for experts there is no need to get even security fixes in less than a week.

And concerning servers... in most cases even extremely security-relevant problems
are not fixed anyway, due to other priorities... from frozen zone... to ice age.

At least the problem shows that it was not futile to have tested the new Rust code...
but I am still wondering whether, considering all bugs, Rust really has a positive benefit
for experienced coders... it seems more like hype than something that can be proven.

uutils date bug timeline and root cause

Posted Oct 23, 2025 22:00 UTC (Thu) by geofft (subscriber, #59789) [Link] (6 responses)

I think it would be interesting for Ubuntu to do (at some point, not right this second) an incident report / postmortem on how this happened.

Looks like this was originally reported in https://pad.lv/2127970 on October 16, exactly one week ago and also exactly one week after Ubuntu 25.10's release. The reporter originally mentioned the bug in the context of a homegrown backup script that was failing silently; the fix landed in the proposed stable-update repository yesterday, with an (understandable) argument about why it wasn't same-day levels of urgent.

This morning, someone pointed out that it breaks unattended-upgrades. It seems to me that it was only at this point that it was tracked as a security issue, and the package is now available in both the (prod) stable updates repository and the more minimal security updates repository.

The actual bug itself is simply that support for `date -r <file>` wasn't implemented. The issue https://github.com/uutils/coreutils/issues/8621 and the pull request implementing support https://github.com/uutils/coreutils/pull/8630 were both filed on the same day, September 12 of this year, and it was reviewed and merged into main two days later. This, understandably, postdates whichever release Ubuntu snapshotted.

I think I am mostly surprised that the command silently accepted -r and did nothing; indeed, from the actual diff (https://github.com/uutils/coreutils/commit/88a7fa7adfa048...) it's pretty clear that the argument parser had support for it but it wasn't wired up to do anything. If the command had instead returned an argument-parsing error, I think this would have been caught a lot quicker. It does seem a little odd that whoever implemented this in the argument parser didn't at least add an "if -r, throw 'todo'" case.

But it's also interesting that this was not statically caught. The Rust compiler is pretty good at warning and complaining about unused variables. (To be fair, most C compilers and many other languages are too, though anecdotally these warnings seem less noisy in Rust, and I've seen more codebases in Rust where this is a hard failure than C codebases using -Werror. Also, Rust has #[must_use], if you want to be thorough.) However, there wasn't actually an unused variable here; you get the value out of the parsed-arguments object by asking for the value of the flag.
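
To make the shape of the bug concrete, here is a minimal sketch using clap's builder API (illustrative only, not the actual uutils code): one flag is wired up, -r parses cleanly, and nothing warns that it is never read back.

use clap::{Arg, ArgAction, Command};

fn main() {
    let matches = Command::new("date")
        .arg(Arg::new("utc").short('u').long("utc").action(ArgAction::SetTrue))
        // Parsed but never consumed below: `date -r file` is accepted
        // silently and has no effect, just like the bug described above.
        .arg(Arg::new("reference").short('r').long("reference").value_name("FILE"))
        .get_matches();

    if matches.get_flag("utc") {
        println!("(would print the date in UTC)");
    } else {
        println!("(would print the date)");
    }
    // Missing: matches.get_one::<String>("reference") -- and since the
    // value lives inside `matches` rather than in its own variable,
    // no unused-variable warning can fire.
}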

I wonder if it's worth thinking about an argument-parsing API in Rust that would raise an unused-variable warning at compile time if a parsed command-line flag or argument is never used in the code. It might also be possible to do this with the existing parser with a sufficiently clever linter. Either way, the lack of compile-time detection of this bug feels at odds with the philosophy of a Rust rewrite of coreutils, i.e., that there's merit in having tools do the checking instead of trusting and expecting people to write perfect code.
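
One cheap approximation, assuming a derive-style parser that yields a plain struct: destructure the parsed struct into locals as the very first step, so a never-used option trips the ordinary unused_variables lint. A sketch, with made-up field names:

struct Args {
    utc: bool,
    reference: Option<String>,
}

fn run(args: Args) {
    // Force every parsed option into its own local binding; a field the
    // code never touches then triggers the stock unused_variables
    // warning, which CI can turn into a hard error.
    let Args { utc, reference } = args;
    if utc {
        println!("(would print the date in UTC)");
    }
    // `reference` is never read, so rustc warns:
    //     warning: unused variable: `reference`
}

fn main() {
    run(Args { utc: false, reference: Some("/etc/hostname".into()) });
}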

I also think it would be very much worth it for Ubuntu and the uutils developers to do a manual audit of all arguments that are parsed by an argument parser but never actually implemented. If this pattern happened once, it is likely not the only case.

uutils date bug timeline and root cause

Posted Oct 24, 2025 7:49 UTC (Fri) by taladar (subscriber, #68407) [Link] (4 responses)

If they had used the clap derive way of specifying arguments, it would likely have been caught as an unused field, but presumably that wasn't an option if they wanted to stay exactly compatible with the old interface behavior.

uutils date bug timeline and root cause

Posted Oct 24, 2025 9:19 UTC (Fri) by cyperpunks (subscriber, #39406) [Link] (2 responses)

uutils has been implemented without creating a robust and solid option parser first?

uutils date bug timeline and root cause

Posted Oct 27, 2025 8:45 UTC (Mon) by taladar (subscriber, #68407) [Link] (1 responses)

There is a robust and solid option parser (clap). They likely had to twist it a bit to get it to behave exactly as the ad-hoc historically grown interface of the existing binaries though.

clap problems

Posted Oct 28, 2025 7:07 UTC (Tue) by donald.buczek (subscriber, #112892) [Link]

A new parser has been proposed [1] to work around the difficulty of adapting clap to the historical and inconsistent parsing of the old tools [2].

[1] https://github.com/uutils/coreutils/issues/4254
[2] https://github.com/tertsdiepraam/uutils-args/blob/main/do...

uutils date bug timeline and root cause

Posted Oct 24, 2025 19:48 UTC (Fri) by geofft (subscriber, #59789) [Link]

Unfortunately that doesn't seem to be caught, either by the regular compiler or clippy: https://play.rust-lang.org/?version=stable&mode=debug...

I guess that, from the compiler's perspective, the struct member is "used" because it's passed off to a macro: the derived Parser implementation assigns to it and otherwise uses it. So the fact that the user code never reads it isn't alerted on.
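
A derive-free illustration of the same effect (the impl block below is a hypothetical stand-in for what the macro expands to, not clap's real expansion):

struct Args {
    reference: Option<String>,
}

impl Args {
    // Stand-in for the derived Parser machinery: the generated code
    // assigns the field and reads it back, so from rustc's point of
    // view the field is used.
    fn parse_from(value: Option<String>) -> Self {
        let args = Args { reference: value };
        let _ = &args.reference; // counts as a read
        args
    }
}

fn main() {
    let _args = Args::parse_from(Some("/etc/hostname".into()));
    // Application code never looks at `_args.reference`, and no
    // dead_code or unused_variables warning fires.
}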

uutils date bug timeline and root cause

Posted Oct 28, 2025 11:14 UTC (Tue) by job (guest, #670) [Link]

Interesting background, thank you!

That whole story is wrong on so many levels. That type of bug should simply not be possible, and the fact that there are chunks of core functionality still missing should immediately disqualify the idea of replacing coreutils on one of the biggest Linux distributions on the planet.

There are compliance test suites for coreutils. Surely the replacements must satisfy them before there can even be discussions about replacing them in production?

I don't know who Ubuntu thinks their customers are, but it can't be *that* important to rip out GPL-licensed code.

Load-bearing shell script?

Posted Oct 24, 2025 22:17 UTC (Fri) by smcv (subscriber, #53363) [Link] (25 responses)

What I'm wondering is why the automatic updates are relying on `date -r`, and not stat(). Is the automatic update a shell script?

(I thought Ubuntu used unattended-upgrades, which is Python with calls into the C++ apt libraries; or for desktops, maybe packagekit, which is C with calls into C++.)

Load-bearing shell script?

Posted Oct 26, 2025 21:05 UTC (Sun) by cyperpunks (subscriber, #39406) [Link] (24 responses)

> What I'm wondering is why the automatic updates are relying on `date -r`, and not stat(). Is the automatic update a shell script?
>
> (I thought Ubuntu used unattended-upgrades, which is Python with calls into the C++ apt libraries; or for desktops,
> maybe packagekit, which is C with calls into C++.)

While extending our product's support to *.deb-based distros, I was truly amazed at how much packaging logic in Debian/Ubuntu is based on old-fashioned shell scripts. It's everywhere: build scripts, package signing, repository management, service scripts, cron jobs, upgrade logic, etc., all implemented in bash. Red Hat has tried to move much of this stuff to Python, with some success; some tooling is horribly slow due to this choice.

Going forward, it seems rather obvious to me that the correct way forward is to convert all this stuff to Rust; moving to uutils is the first baby step.

Load-bearing shell script?

Posted Oct 26, 2025 22:38 UTC (Sun) by stijn (subscriber, #570) [Link] (23 responses)

When dealing with state written in files, and with sequences of processes needed to produce those states/files, I find shell very expressive compared to, say, writing the same thing in Python. I come at it from a different angle (bioinformatics), where there is a place for shell scripts before going to orchestration software such as Snakemake, Nextflow and others. I find writing equivalent functionality in Python painful.

Maybe there is a danger of avoiding clear designs by relying on the flexibility of shell scripts, but I can also envision downsides to the opaqueness of binary executables. The latter make sense if they fill a well-defined niche with coherent functionality, at which stage they may end up as command lines in shell scripts.

There is a mode of computing where objects are files and the transformations between them are naturally expressed in a shell-like language, supported by powerful transformers (binary executables) and pipes. My natural instinct is to express more things this way, not fewer (I have no expertise in packaging logic, so I have no opinion about this particular case). I'm curious to hear other views.

Load-bearing shell script?

Posted Oct 27, 2025 0:46 UTC (Mon) by mathstuf (subscriber, #69389) [Link] (22 responses)

I have certainly written my fair share of shell scripts that have overgrown their crucible. Python is fine with some "clever" usage of its features. For example:

import subprocess

def git(*args):
    return subprocess.check_output(('git',) + args, encoding='utf-8').strip()

The main benefit, to me, is not having to fight with escaping rules. Some of the tools I've written are still bash because they don't do much in the way of fancy argument dances. But once you're looking at `"${arg[@]}"` with any kind of frequency, I feel you really want tuples and proper strings at your fingertips.
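
The same property is what makes std::process::Command pleasant in Rust, for what it's worth: arguments are discrete strings handed straight to execve(), so there is no shell re-parsing to fight. A small sketch:

use std::process::Command;

fn main() -> std::io::Result<()> {
    // No quoting dance needed: spaces, quotes and `$` in an argument
    // are passed through verbatim, because no shell is involved.
    let tricky = r#"a file with spaces, "quotes" and a $VAR"#;
    let status = Command::new("ls").arg("-l").arg("--").arg(tricky).status()?;
    println!("ls exited with {status}");
    Ok(())
}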

Load-bearing shell script?

Posted Oct 27, 2025 9:19 UTC (Mon) by kleptog (subscriber, #1183) [Link]

Anecdotally I've had a lot of success with using LLMs to migrate shell scripts to Python. Just giving it a brief description of what the script is supposed to do, that you want it in Python, then pasting the script itself gives good results. As you point out, the patterns are very simple and LLMs are very good at doing that mechanical transformation.

The advantage of shell scripts is that they work on even very minimal systems, whereas you cannot rely on Python working from the start.

Load-bearing shell script?

Posted Oct 27, 2025 9:22 UTC (Mon) by taladar (subscriber, #68407) [Link]

The main reason I have found not to use shell scripts is when I want to pass around several pieces of connected data (i.e., struct-like data), since shell is just bad at that, especially at returning such data from functions.

However, I would argue that a static Rust binary is a much better replacement here than Python, since it can be relied on to work without a runtime system.
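
A trivial sketch of what that looks like; returning connected values is the default in Rust, whereas a shell function would have to serialize them through stdout or globals (the values below are made up for illustration):

struct MountInfo {
    mount_point: String,
    free_bytes: u64,
    read_only: bool,
}

// A shell function would have to echo three fields and hope the caller
// splits them back apart correctly; here the compiler checks it all.
fn inspect(mount_point: &str) -> MountInfo {
    MountInfo {
        mount_point: mount_point.to_string(),
        free_bytes: 42 * 1024 * 1024, // made-up value
        read_only: false,
    }
}

fn main() {
    let m = inspect("/boot");
    println!("{}: {} bytes free, read-only: {}", m.mount_point, m.free_bytes, m.read_only);
}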

Load-bearing shell script?

Posted Oct 27, 2025 10:43 UTC (Mon) by stijn (subscriber, #570) [Link] (19 responses)

> The main benefit, to me, is not having to fight with escaping rules. Some of the tools I've written are still bash because they don't do much in the way of fancy argument dances. But once you're looking at `"${arg[@]}"` in any kind of frequency, I feel you really want to just have tuples and proper strings at your fingertips.

Ah yes, I agree with that. I wish for a shell that had that. Additionally, some of my shell scripts are probably not written in a sufficiently hardened manner. They are not supposed to be called where bad actors could poison them, but who knows when a little Bobby Tables pops up.

What I love about shell, and what I don't see replicated easily elsewhere (as succinctly, in a streaming manner), is the composition of processes in pipes. Perhaps that's particular to the type of files / data I work with.

Load-bearing shell script?

Posted Oct 27, 2025 11:14 UTC (Mon) by cyperpunks (subscriber, #39406) [Link] (18 responses)

Proper error handling is hard in shell scripts; you can do it, but the scripts become unreadable fast.

The runtime problem with Python is indeed real; combined with the continuous flow of incompatible changes in newer Python versions, it means test coverage must be 100%, which is very rarely the case.

Deployment as a single, cargo-produced binary is much simpler and far safer.
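
To make the error-handling contrast concrete, a small sketch (the directory name is just an example): every fallible step in Rust either yields a value or propagates an error with ?, rather than shell's default of carrying on after a failed command.

use std::fs;
use std::io;
use std::time::{SystemTime, UNIX_EPOCH};

// Each `?` is an explicit "bail out here on failure"; forgetting one is
// a type error, not a silent misbehavior as in unguarded shell.
fn newest_mtime(dir: &str) -> io::Result<SystemTime> {
    let mut newest = UNIX_EPOCH;
    for entry in fs::read_dir(dir)? {
        let mtime = entry?.metadata()?.modified()?;
        if mtime > newest {
            newest = mtime;
        }
    }
    Ok(newest)
}

fn main() {
    match newest_mtime("/etc/apt/sources.list.d") {
        Ok(t) => println!("newest mtime: {t:?}"),
        Err(e) => eprintln!("error: {e}"),
    }
}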

Load-bearing shell script?

Posted Oct 27, 2025 14:57 UTC (Mon) by edgewood (subscriber, #1123) [Link] (17 responses)

One advantage of shell scripts and interpreted languages like Python is that you can never lose the source, or be unsure which version of the source produced the program you're running.

Load-bearing shell script?

Posted Oct 27, 2025 15:29 UTC (Mon) by euclidian (subscriber, #145308) [Link] (3 responses)

Do any of the compiled languages have the option of embedding the source (as extra notes sections in the binary)?

I guess that in general, for a lot of binaries, the bloat would be too much. Also, for legacy C/C++, the pure source without knowing which compiler flags were used would be less useful, but it would be a start for some code archaeology.

I have some old binaries at work where we lost the source / patches we added to it.
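
Rust has no toolchain switch for this as far as I know, but it is easy to hand-roll: include_str! can embed the program's own source text at compile time. A sketch, assuming the code lives at src/main.rs:

// src/main.rs -- the path in include_str! is resolved relative to this
// file, so the binary ends up carrying its own source text.
const SOURCE: &str = include_str!("main.rs");

fn main() {
    if std::env::args().any(|a| a == "--dump-source") {
        print!("{SOURCE}");
    } else {
        println!("run with --dump-source to print this program's source");
    }
}

As noted above, that still records neither the compiler version nor the flags used.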

Load-bearing shell script?

Posted Oct 27, 2025 19:18 UTC (Mon) by cyperpunks (subscriber, #39406) [Link] (1 responses)

RPM solves this source problem by providing both source RPMs, *.src.rpm (for easy rebuilding), and debugsource packages for use in debuggers like gdb.

Load-bearing shell script?

Posted Oct 28, 2025 8:45 UTC (Tue) by taladar (subscriber, #68407) [Link]

That only helps you if you installed the sources before the repository you installed the binary from goes offline.

Load-bearing shell script?

Posted Oct 28, 2025 2:32 UTC (Tue) by Fowl (subscriber, #65667) [Link]

I know that C# has an option to embed the input source code in the output binary, so there's at least one. Of course, what it embeds is the result of all preprocessor-type things, which would be a serious issue in languages that make heavy use of a preprocessor.

Load-bearing shell script?

Posted Oct 27, 2025 15:30 UTC (Mon) by dskoll (subscriber, #1630) [Link] (12 responses)

And also, if things mess up, it's less of a hassle to debug if your cycle is "edit/run" rather than "edit/compile/run". I don't think compiled languages are appropriate for the non-performance-critical plumbing in cases like this.

Load-bearing shell script?

Posted Oct 28, 2025 8:46 UTC (Tue) by taladar (subscriber, #68407) [Link] (11 responses)

On the other hand, many rare bugs live in code paths that are non-trivial to reach when running the program, so strong static checks can save you a lot of hassle there.

Load-bearing shell script?

Posted Oct 28, 2025 13:46 UTC (Tue) by dskoll (subscriber, #1630) [Link] (10 responses)

Is that the case for the typical use-cases of installers and sysadmin-type scripts? I have no data, so can't say either way, but my gut feeling is that the hassle of using a compiled language for those things outweighs the benefits.

Load-bearing shell script?

Posted Oct 29, 2025 9:00 UTC (Wed) by taladar (subscriber, #68407) [Link] (9 responses)

Well, I for one haven't had any of the stupid bugs I used to have in shell scripts since I started using Rust for this, bugs such as programs failing in August and September because leading zeroes make the month octal, and thus 08 and 09 invalid numbers.
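
For contrast, a quick sketch of why this class of bug doesn't arise in Rust: string-to-integer parsing is always decimal unless you explicitly ask for another radix.

fn main() {
    // "08" and "09" are plain decimal; there is no implicit octal.
    for s in ["08", "09", "12"] {
        let n: u32 = s.parse().expect("a valid decimal number");
        println!("{s} -> {n}");
    }
    // Octal is opt-in via an explicit radix:
    assert_eq!(u32::from_str_radix("17", 8).unwrap(), 15);
}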

Load-bearing shell script?

Posted Oct 29, 2025 15:19 UTC (Wed) by dskoll (subscriber, #1630) [Link] (8 responses)

Well, good for you. But you must be running some very weird shell:

$ expr 08 + 09
17

Also, doing arithmetic with a month number is IMO already a little bit suspicious... date arithmetic is much more complicated than that. Adding 1 to the month number of a date to get "same date next month" is already wrong in any programming language.

Load-bearing shell script?

Posted Oct 29, 2025 16:36 UTC (Wed) by Wol (subscriber, #4433) [Link] (1 responses)

Well, various languages iirc define octal as "numbers preceded with a 0". Can't remember if FORTRAN is one, but I seem to remember writing octal bytes as three-digit numbers. I know hex was preceded by 0x, but I thought octal was just 0.

And I know I've implemented date arithmetic with "add 1 to the month, carry if it's gone over 12", with similar subtract logic.

Cheers,
Wol

Load-bearing shell script?

Posted Oct 29, 2025 17:08 UTC (Wed) by dskoll (subscriber, #1630) [Link]

That sort of date arithmetic fails quite nicely on January 31st (for example). I am the author of Remind so I have 30+ years of battle-tested date calculation experience. :)

Load-bearing shell script?

Posted Oct 30, 2025 8:58 UTC (Thu) by taladar (subscriber, #68407) [Link] (5 responses)

$ echo $(( 08 - 1 ))

in bash will give you

bash: 08: value too great for base (error token is "08")

Load-bearing shell script?

Posted Oct 30, 2025 10:05 UTC (Thu) by malmedal (subscriber, #56172) [Link] (4 responses)

If people are actually doing date calculations in shell I think this is better:

$ date -d 'next wed + 4 weeks'
Wed 3 Dec 00:00:00 GMT 2025

Though I tend to prefer Date::Manip for this purpose; it's easier to get right.
$ perl -MDate::Manip -wle 'print ParseDate("first thursday of dec")'
2025120400:00:00
$ perl -MDate::Manip -le 'print DateCalc(ParseDate("last thursday of dec 2024"), "2 weeks")'
2025010900:00:00

Load-bearing shell script?

Posted Oct 30, 2025 10:09 UTC (Thu) by taladar (subscriber, #68407) [Link] (3 responses)

All well and good, but the point was more about error paths that are hard to test in shell scripts and that would simply be covered by compile-time checks in something like Rust; it was not about the specifics of date calculations.

Load-bearing shell script?

Posted Oct 30, 2025 14:49 UTC (Thu) by dskoll (subscriber, #1630) [Link] (2 responses)

Sure, there are gotchas in shell scripting. But I don't think moving to a compiled language is the right answer, necessarily. There are better scripting languages than shell (Perl or Python, for example) that have fewer gotchas while remaining very easy for sysadmins to write or modify.

Load-bearing shell script?

Posted Oct 31, 2025 10:37 UTC (Fri) by taladar (subscriber, #68407) [Link] (1 responses)

In my experience, Python is a complete non-starter for sysadmin work because it is unsuitable for spanning the range between "oldest still-supported distro" and "latest, just-released distro", so you would need to maintain multiple versions of each script.

Load-bearing shell script?

Posted Oct 31, 2025 14:53 UTC (Fri) by malmedal (subscriber, #56172) [Link]

Indeed, but compiled languages are still a big hurdle for many sysadmins, bringing us back to shell and Perl :) (and similar things like Tcl).

Actually, I've noticed that sysadmin tools that would earlier have been written as command-line apps are often written as web GUIs these days. So maybe try to convince them to use TypeScript, at least?

Hmm, Go is also quite popular for sysadmin stuff, come to think of it.

