Date bug affects Ubuntu 25.10 automatic updates
The Ubuntu Project has announced that a bug in the Rust-based uutils version of the date command shipped with Ubuntu 25.10 broke automatic updates:
Some Ubuntu 25.10 systems have been unable to automatically check for available software updates. Affected machines include cloud deployments, container images, Ubuntu Desktop and Ubuntu Server installs.
The announcement includes remediation instructions for those affected by the bug. Systems with the rust-coreutils package version 0.2.2-0ubuntu2 or earlier have the bug; it is fixed in 0.2.2-0ubuntu2.1 or later. The bug does not affect manual updates using the apt command or other utilities.
Ubuntu embarked on a project to "oxidize" the distribution by switching to uutils and sudo-rs for the 25.10 release, and to see if the Rust-based utilities would be suitable for the long-term support release slated for next April. LWN covered that project in March.
Posted Oct 23, 2025 20:52 UTC (Thu)
by dskoll (subscriber, #1630)
[Link] (57 responses)
... will be called Grateful Guinea-Pig
But seriously. Rewriting C utilities that have been battle-tested for decades in Rust might be a good idea in the long term, but anyone could have predicted short-term hiccups.
Posted Oct 23, 2025 21:38 UTC (Thu)
by geofft (subscriber, #59789)
[Link] (35 responses)
Posted Oct 23, 2025 22:21 UTC (Thu)
by dskoll (subscriber, #1630)
[Link] (31 responses)
I don't have anything against Rust (nor against C), but I do think it's unfortunate that the Rust utilities are licensed under the MIT license rather than the GPL. But that's a whole other debate...
Posted Oct 23, 2025 23:57 UTC (Thu)
by dralley (subscriber, #143766)
[Link] (3 responses)
Posted Oct 24, 2025 8:46 UTC (Fri)
by leromarinvit (subscriber, #56850)
[Link] (2 responses)
Posted Oct 24, 2025 12:13 UTC (Fri)
by joib (subscriber, #8541)
[Link]
Posted Oct 24, 2025 13:25 UTC (Fri)
by dralley (subscriber, #143766)
[Link]
Posted Oct 24, 2025 11:10 UTC (Fri)
by ballombe (subscriber, #9523)
[Link] (14 responses)

It is not another debate. This bug is a direct consequence of this decision.

If they were willing to be a GNU GPL derivative of the original coreutils, they could port the C code to Rust instead of rewriting it from first principles, which would avoid introducing fresh bugs.
Posted Oct 24, 2025 13:56 UTC (Fri)
by ssokolow (guest, #94568)
[Link] (13 responses)

And yet I don't think that's how it would go, for two reasons.

I blame Canonical for precipitating a second "We told you KDE 4.0 was a developer preview, not an end-user release" situation.
Posted Oct 24, 2025 15:02 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (10 responses)
I don't see anything in the KDE 4 announcement to indicate that it was a developer preview. Where is this coming from?
Posted Oct 24, 2025 18:38 UTC (Fri)
by elimranianass (subscriber, #164758)
[Link]
Posted Oct 26, 2025 14:59 UTC (Sun)
by Jandar (subscriber, #85683)
[Link] (7 responses)
It wasn't in the official announcement, but it was communicated in prior mailing-list messages. Users can't be expected to read those, but packagers for major distros had to be aware of it. I can't find a link right now, but I clearly recall how surprised I was when some distros used this first .0 release as their main KDE package.
The only link I found today only excerpts information from the KDE developers: https://www.osnews.com/story/19145/kde-400-released/
"but take note that the developers have clearly stated that KDE 4.0 is not KDE 4, but more of a base release with all the underlying systems ready to go, but with still a lot of work to be done on the user-visible side."
Posted Oct 26, 2025 15:52 UTC (Sun)
by Jandar (subscriber, #85683)
[Link] (6 responses)
"Stephan Binner writes a reminder note about the upcoming KDE 4.0 release (in an attempt to reign in wildly over-optimistic expectations by some users):
Before everyone starts to spread their opinion about KDE 4.0, let me spread some reminders:
KDE 4.0 is not KDE4 but only the first (4.0.0 even non-bugfix) release in a years-long KDE 4 series to come."
Posted Oct 27, 2025 1:26 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (5 responses)
Posted Oct 27, 2025 3:20 UTC (Mon)
by pizza (subscriber, #46)
[Link]
I don't think there's any debate that the KDE folks badly dropped the ball with respect to officially communicating the "quality" one could expect from KDE 4.0.0, including touting it as the latest "stable release" at the time.
LWN has a good writeup of this debacle here:
https://lwn.net/Articles/316827/
Notably it quotes Aaron Seigo [1] shortly after 4.0.0 was tagged for release:
"KDE 4.0.0 is our "will eat your children" release of KDE4, not the next release of KDE 3.5. The fact that many already use it daily for their desktop (including myself) shows that it really won't eat your children, but it is part of that early stage in the release system of KDE4. It's the "0.0" release. The amount of new software in KDE4 is remarkable and we're going the open route with that."
..and later, he said:
"I have to admit that it's really hard to stay positive about the efforts of downstreams when they wander around feeling they should be above reproach while simultaneously hurting our (theirs and ours) users in a rush to be more bad ass bleeding edge than any other cool dude distro in town. I hope this time instead of handing out spankings, the distros can sit back and think about things and try and figure out how they played an unfortunate part in the 4.0 fiasco."
Posted Oct 27, 2025 12:28 UTC (Mon)
by Jandar (subscriber, #85683)
[Link] (3 responses)
The warning on 30th December 2007 about a release announced on Friday, 11 January 2008 is revisionist history? My understanding of calendar dates and the meaning of "revisionist" differs from yours.
Posted Oct 27, 2025 13:23 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
Posted Oct 28, 2025 12:29 UTC (Tue)
by Jandar (subscriber, #85683)
[Link] (1 responses)
I hope none of the packages I use is packaged by you.
This is my last post in this thread. It seems to me we are unable to reach a common ground.
Posted Oct 28, 2025 23:34 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link]
No disagreements there. I specifically said "including the release announcement" but did not limit my comments to it. If anyone is able to show any public announcement anywhere that clearly communicated that KDE 4.0 was meant as a developer preview only, I am happy to change my mind. So far, I see no evidence of that.
> I hope none of the packages I use is packaged by you.
That slight is uncalled for but good news for you, I am no longer involved in maintaining packages.
Posted Oct 27, 2025 0:40 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 25, 2025 16:23 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (1 responses)
This is something I want to put a spotlight on: Canonical's choice to replace coreutils with uutils, and whatever consequences come of that, good or bad, reflects on Canonical's engineering, not on the uutils developers, who were just making a "my first Rust project" for fun, not trying to create an "Enterprise" replacement. I'd hate to dump unwanted criticism on someone just for putting their personal project on the Internet, telling them it's not good enough because someone else decided to pull it in as a dependency; that's how we end up eating our own people.
Posted Oct 25, 2025 16:28 UTC (Sat)
by dskoll (subscriber, #1630)
[Link]
Yes, Canonical seems to believe in "move fast and break stuff" which is fine if you run a parasitic data-harvesting company designed to abuse users and damage their mental health, but not a great strategy for an OS.
Posted Oct 24, 2025 22:37 UTC (Fri)
by ras (subscriber, #33059)
[Link] (11 responses)
Sometimes that's because companies just ignore them. For example I was startled to find the tunnelling software distributed with Microsoft's InTune was just repackaged GPL software, which they redistributed with no attribution. Instead they slapped the usual, restrictive MS commercial licence on code they had no rights to whatsoever. I only found out because the Linux version was a container of some sort that didn't work. So I unpacked it, eventually discovered what it was, and was able to replace it with suitably tuned Debian packages that received timely security updates. Apple took the other route, which is to avoid copy-left entirely. Most of the app stores steer you in that direction as well.
Either way, copy-left demonstrably didn't force those companies to contribute back. The most annoying example of side-stepping copy-left is perhaps NVidia's kernel modules. I have no idea if it was legal or not and as far as I know it hasn't been tested, but regardless it does demonstrate how ineffective copy-left can be.
Meanwhile, other companies have contributed back. Lots of them. That's sometimes because they adopted open source as a marketing technique. But often it's because they have no choice. Open source projects like the kernel, GCC and Rust move at such a fast pace that the effort of carrying the patches is too big, so they post them upstream.
In an odd way, who uses copy-left vs permissive licences has turned out to be the opposite of how I naively thought it would. The most enthusiastic users of the AGPL and its relatives are commercial users, who are trying to prevent other commercial users from selling their work. Meanwhile, the open source world is gradually moving towards permissive licences, the Rust and JS ecosystems being prime examples. Neither of those ecosystems seems to suffer from it, as both have enormous amounts of code contributed.
Posted Oct 25, 2025 12:11 UTC (Sat)
by khim (subscriber, #9252)
[Link] (7 responses)
> Either way, copy-left demonstrably didn't force those companies to contribute back.

That's not true. Linux jumped ahead of BSD precisely because companies contributed back. A lot. Not all of them did, sure, but enough for it to matter. What copyleft failed to do was to implement RMS's dream of destroying proprietary software.
Posted Oct 26, 2025 10:25 UTC (Sun)
by ras (subscriber, #33059)
[Link] (6 responses)
Yes, I think that's true. But you are apparently saying that's because the GPL forced them to contribute back.
They had two choices: contribute to BSD with no obligations imposed on them whatsoever, or contribute to Linux, where the GPL effectively obligated these capitalist entities to give away their work for free. It looks like you are claiming they chose Linux because it forced this obligation on them?
That's hard to swallow, particularly given the other explanations I can think of. The first is the AT&T legal threat hanging over the BSDs at the time. The second is that Linus has always made contributing to Linux easier than the BSDs have. The third is the BSDs' tight leash on their ports trees, whereas Linux outsourced that to GNU and the distros. That made things like OpenWrt and Alpine possible. Linus's willingness to invite new ideas into the kernel contributes to this day with Rust. It made it a much more inviting ecosystem, as it was one far more likely to accept the tweaks a company with new requirements needed to add. But to be fair, I don't have a clue what the real driver was. I just doubt it was the GPL.
Regardless, once those heavyweight contributions started rolling in, the sheer range of hardware supported made Linux a more attractive base than the BSDs, GPL or no GPL. The pace of development and lack of a stable kernel API meant carrying patches became very burdensome, so it became in your interest to contribute work upstream regardless of copy-left.
As I said, if you look at the sheer amount of code that is contributed back to projects with permissive licences now (I suspect they get more in total than copy-left projects, just because of the sheer number of them), copy-left doesn't look as important to open source as it once seemed. If it's true companies have to be forced to contribute back, why do we have TypeScript or Cassandra? I doubt it made that much difference to the kernel either.
Posted Oct 26, 2025 10:50 UTC (Sun)
by khim (subscriber, #9252)
[Link] (2 responses)
> It looks like you are claiming they chose Linux because it forced this obligation on them?

Nope. Some chose Linux, some chose BSD. Some even chose Minix! Even today SONY prefers BSD for their consoles. But the ones who picked Linux had to provide things back… over decades these things add up. OpenWrt was born from the described process. It's even documented on LWN. It was the GPL — among other things. Linus's interpretation of it as a tit-for-tat exchange was attractive to many. GPLv3, which tried to close the loophole and solve the problem of the printer that birthed the GPL in the first place, crashed the equilibrium: companies found it too onerous and dangerous, and because GPLv3 is not compatible with GPLv2 it divided the community, too.
Posted Oct 29, 2025 10:55 UTC (Wed)
by ras (subscriber, #33059)
[Link] (1 responses)
You're missing my point. They didn't choose Linux because of the GPL. They chose Linux for other reasons. I now think whatever those reasons were are what led Linux to become more successful than the BSDs, not the GPL. But yes, I agree the GPL forcing those contributions back created a feedback loop that accelerated the process once it got started.
> Some chose Linux, some chose BSD
And some who chose BSD contributed back heavily. Example: Netflix's contributions to FreeBSD. Turns out you don't need the GPL to make that happen.
> OpenWrt was born from the described process.
It certainly sped things up. I suspect OpenWrt using the binary kernel modules that didn't have source provided sped things up a good deal more.
We know LinkSys didn't like the GPL because they then moved to VxWorks to avoid it. Evidently they liked the BSD's even less, because they didn't move to it despite being free and including non-GPL user space tools. It would be interesting to know why.
Then, as often happens, LinkSys discovered that open source sells. The WRT54GL could be purchased from LinkSys (at a substantial premium!) long after the VxWorks version of the WRT54G had died. Espressif went down a similar road to enlightenment with the esp8266; everything from them was initially proprietary. It wasn't the GPL that led either company down this path.
> GPLv3 is not compatible with GPLv2

Yes, it is, assuming you use the original GPLv2 with its "or later version" clause.
Posted Oct 29, 2025 13:26 UTC (Wed)
by pizza (subscriber, #46)
[Link]
No, Linksys moved to VxWorks for later revisions of the WRT54G because it had lower system requirements, allowing them to use less capable (ie cheaper) hardware. [1] They continued to sell the higher-spec hardware as the more expensive WRT54GL, which (according to them) eventually became "the best selling wireless router of all time".
Subsequent products in the WRTxx (and even the WRT54xx) product family remained predominantly (if not entirely) Linux-based.
Not exactly the actions of an organization trying to avoid the GPL...
[1] Same SoC, but half the RAM and flash.
Posted Oct 28, 2025 11:03 UTC (Tue)
by job (guest, #670)
[Link] (2 responses)
This is empirically and trivially true. Linux won, BSD didn't, despite BSD having had a huge head start.
Companies contribute to Linux specifically because they *know* that their competitors can't incorporate these contributions into some proprietary software.
The business models reflect this, too. In the GPL world it is common to sell support and subscriptions, and to some extent sell dual licenses. In the BSD world the dominant model is to sell proprietary add-ons and non-free distributions. Free reimplementations of these add-ons are frequently seen as problems that need to be solved.
> the shear amount of code that is contributed back to projects with permissive licences now
There was a decade when businesses were started around permissive, non-copyleft licenses, but these have almost all either failed or changed to a proprietary license. The amount of code was indeed huge, but most of them do not exist anymore. MongoDB, Elastic, and Pivotal are a few examples. Cassandra is the exception here.
They all changed in response to huge companies such as Amazon or Google starting to contribute code to their own forks. These forks mostly existed to replace proprietary features with free features and to fit the higher development velocity these companies desire. They were seen as huge problems and even a breach of social trust. But a fork should be a *good* thing. It should be *desirable* that large companies contribute huge volumes of free code. These products are all now either completely non-free or have a small free version with neutered functionality.
So it's safe to conclude that GPL is much more commercially viable in practice, and directly led to Linux' huge domination.
Posted Oct 28, 2025 14:06 UTC (Tue)
by paulj (subscriber, #341)
[Link]
> They all changed in response to huge companies such as Amazon or Google starting to contribute code to their own forks.
They contribute only selectively though. Stuff they consider an advantage they often do not contribute back.
Posted Oct 29, 2025 11:25 UTC (Wed)
by ras (subscriber, #33059)
[Link]
My guess is the dominant model for GPL software doesn't involve selling support, subscriptions, or shipping the source to customers. Instead it follows one of two paths. You get it in a router, a TV, a phone, or a car radio (that one blew me away), and they make money from the hardware. Or you use it via a server, where they make money from the software but aren't obliged to contribute back their modifications. Using copy-left licences to sell software or subscriptions hasn't been a wild success. ElasticSearch's troubles spring to mind, as does Red Hat's response to Oracle Linux.
> So it's safe to conclude that GPL is much more commercially viable in practice, and directly led to Linux' huge domination.
Linux has a huge domination in software? Certainly it dominates as the base OS. But in lines of code shipped to end users, it's a tiny fraction. My Debian laptop uses far more lines to drive its user space than its kernel. Granted, Debian user space is mostly copy-left. But Alpine is mostly permissive. Even Fedora is likely about 30% pure copy-left, another 25% mixed, and the rest permissive [0].
[0] https://www.sonarsource.com/blog/the-state-of-copyleft-li...
Posted Oct 27, 2025 13:21 UTC (Mon)
by dfc (subscriber, #87081)
[Link] (2 responses)
What was the GPLed tunnelling software?
Posted Nov 7, 2025 9:02 UTC (Fri)
by ras (subscriber, #33059)
[Link]
To be honest, I was grateful and somewhat surprised they bothered to ship the package for Linux. The company I worked for based everything on Linux, but was taken over by a Microsoft shop hundreds of times its size. They forced InTune on me. If Microsoft hadn't supplied a Linux package, they would have forced a Windows server on me.
Posted Nov 12, 2025 9:21 UTC (Wed)
by ras (subscriber, #33059)
[Link]
Oddly, I stumbled across a modern version of this software today. It's called the "Microsoft Tunnel Gateway". It's a docker image, available here: https://learn.microsoft.com/en-us/intune/intune-service/p...
I had a brief look. The "/usr/local/bin/mstunnel" binary contains the string "/etc/ocserv", which happens to be where OpenConnect stores its settings. "/etc/pam.d/ocserv" has this line: "auth required /lib/security/mise_pam.so mise_config=/etc/ocserv/mise.json enable_mise_logging". There is no "/etc/ocserv" directory, but from memory that's created when you install it, which I didn't do.
According to Debian's copyright file, libopenconnect code is LGPL, but also contains GPL code. It was statically linked. A charitable take is Microsoft assumed it was LGPL, and didn't modify it so they weren't required to contribute anything back.
That just leaves them with attribution, and making source available for [L]GPL code. They require you to accept a licence similar to this one before installing: https://support.microsoft.com/en-us/office/microsoft-soft... There are no other licences, attributions, or information about obtaining the source displayed.
Posted Oct 26, 2025 13:40 UTC (Sun)
by RazeLighter777 (subscriber, #130021)
[Link] (2 responses)
Posted Oct 26, 2025 13:43 UTC (Sun)
by dskoll (subscriber, #1630)
[Link]
The non-privileged utilities are still used a lot by the root user in various scripts. If an attacker can somehow arrange an exploit (eg, by creating a weirdly-named file that a script stumbles across) then the attacker can get root.
Posted Oct 27, 2025 8:03 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
There were buffer overflow bugs in tools like `file` that you would expect to just work. And in the modern world, "root" is meaningless: https://xkcd.com/1200/
Posted Oct 24, 2025 1:45 UTC (Fri)
by welinder (guest, #4699)
[Link]
And it's humbling to see that a silly little bug deep in date can silently break unattended security updates!
Posted Oct 24, 2025 6:30 UTC (Fri)
by eru (subscriber, #2753)
[Link] (19 responses)
> Rewriting C utilities that have been battle-tested for decades in Rust might be a good idea in the long term,

I fail to see why it would be a good idea even in the long term. These utilities are done; the specifications do not change, or change very little. The only valid reason might be a future situation where support for the C language starts disappearing, which is totally a fantasy scenario. C is too entrenched for that to happen within the expected lifetime of our technical civilisation.
Posted Oct 24, 2025 7:40 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (18 responses)
https://gitweb.git.savannah.gnu.org/gitweb/?p=coreutils.git
showing the last 16 commits doesn't even go back a week of development history?
These tools need maintenance and rewriting them in something with a saner build system than C has to offer after being around for 50 years certainly will make this easier.
As for C not disappearing, it feels like we are already in an era where most young people below the age of 35-40 or so do not learn it any more so you might be surprised how quickly the pool of potential volunteer maintainers will deplete for boring, mature C projects.
Posted Oct 24, 2025 8:48 UTC (Fri)
by alx.manpages (subscriber, #145117)
[Link] (16 responses)
I'm 32, and I went to university with people 7 years younger than me, and C was still the main language we studied there.
I've heard that some schools have reduced the amount of C courses, but it's still there.
> so you might be surprised how quickly the pool of potential volunteer maintainers will deplete for boring, mature C projects.
While the number of people knowing C enough might reduce, it might also increase the ratio of C programmers that know C very well. Self-selection can be a good thing. I don't expect the amount of C experts to diminish significantly.
> These tools need maintenance and rewriting them in something with a saner build system than C has to offer after being around for 50 years certainly will make this easier.
C has evolved quite a lot in these 50 years, and I'm not sure Rust is better than C. Most of the issues people complain about C are in reality issues with old C, or low quality compilers. Some issues remain in the latest GCC, but there's work on having an even better C in the following years.
Incremental improvements are better than entirely jumping to a new language, and this issue with Rust's date(1) is an example of why we should keep improving the C version, which is almost bug-free, instead of writing bugs in a different language.
<https://www.joelonsoftware.com/2000/04/06/things-you-shou...>
Posted Oct 24, 2025 13:14 UTC (Fri)
by LtWorf (subscriber, #124958)
[Link] (1 responses)
Posted Oct 27, 2025 8:43 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
Posted Oct 27, 2025 8:42 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (13 responses)
Good for you. I am 43 and they tried to teach everything in Java back when I was in university. Yes, including stuff like OS memory management that is absolutely ridiculous to teach in a GC language without pointers.
Incremental improvements can only get you so far without breaking compatibility with existing code bases, which is the overwhelming reason to stick with an existing language in the first place. I think we can all agree that nobody wants a language that is essentially a new language in terms of compatibility and just happens to bear the same name as the old one.
C has an absolutely gigantic pile of flaws that can never be fixed without breaking compatibility.
And this bug could have happened in any language, including C, someone starting the development of a feature but then forgetting to get back to it is hardly the kind of bug that says anything about the language itself.
Posted Oct 27, 2025 10:53 UTC (Mon)
by alx.manpages (subscriber, #145117)
[Link] (12 responses)
Things get fixed slowly but steadily, breaking old code. We got rid of implicit int, for example. We also got rid of K&R function definitions.
Recently, GCC has added -Wunterminated-string-initialization, which prevents bugs where a character array is initialized as a non-null-terminated character array.
We're discussing adding -Wzero-as-null-pointer-constant to C in GCC, which would also be a breaking change (which is why we're being slow at doing this), but it will most likely eventually be merged.
We got _Countof() in ISO C2y (and GCC 16, and Clang 21), which now counts arrays, and will soon count array parameters, making it really hard to step out of bounds in arrays. <https://inbox.sourceware.org/gcc-patches/cover.1755161451...>
> I think we can all agree that nobody wants a language that is essentially a new language in terms of compatibility just one that essentially bears the same name as the old language.
Even the kernel has moved to newer dialects of C. As long as the breakage is in small steps, it can be acceptable. Some new diagnostics (such as disallowing 0 as a null pointer constant) break existing code, but if the resulting new dialect is significantly safer, and programmers can handle the breakage relatively easily, it will be eventually adopted.
> C has an absolutely gigantic pile of flaws that can never be fixed without breaking compatibility.
The list isn't so gigantic, and some of them have already been fixed, and others are in the process of being fixed. Yes, some require breaking changes, and those have happened, and will happen again.
Posted Oct 28, 2025 8:49 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (11 responses)
And while you might be able to break some surface level syntactic issues incrementally you will never be able to fix deep architectural issues. How would you e.g. introduce some change that replicates Rust's Send and Sync traits that prevent sharing data in broken ways across threads? How would you get rid of the separate passing of pointer and length values? How would you fix the utterly broken textual include system?
Posted Oct 28, 2025 10:21 UTC (Tue)
by alx.manpages (subscriber, #145117)
[Link] (10 responses)
All of the code I write these days is single-threaded, so I don't need that, nor feel qualified to comment on that. :)
> How would you get rid of the separate passing of pointer and length values?
That's in my comment above. IMO, possibly the most important improvement in the language in decades, and coming soon.
I'm currently working on a patch for GCC, which already works. I'm just working on adding diagnostics for a corner case.
Please follow this link: <https://inbox.sourceware.org/gcc-patches/cover.1755161451...>.
The idea is to use macros for wrapping functions that accept a pointer and a length, so that the macro gets an array, and decomposes it into the right pointer and length in a safe manner.
Here's an example wrapping strftime(3):
#define strftime_a(dst, fmt, tm) strftime(dst, countof(dst), fmt, tm)
First, I'll explain how this wrapper macro is safe.
countof() requires that the input is an array, and produces a compile-time error if that constraint is violated. This already exists, but of course it doesn't work if you pass something passed in the stack. The patch I'm working on will make countof() also work on array parameters, and return the declared number of elements.
For this to work, one must wrap *all* functions that get a pointer and a length, and non-wrapped calls would be points of failure (resembling Rust's unsafe code). The goal would be to avoid all such calls, or minimize them.
Second, I'll explain how to disallow the non-wrapped calls.
The idea would be that strftime(3) should only be allowed to be called through strftime_a(). You could use the [[deprecated]] attribute on the function prototype, and then the wrapper could use _Pragma to disable that diagnostic within the macro. This would need some help from the compiler.
Alternatively --and probably more easily--, you could set up a script in the build system that finds all such wrappers (if you use consistent naming, such as a trailing `_a`, that would be feasible), and then checks if there's any calls to the raw function anywhere in the code, reporting them all.
In the link above, you can find a proof-of-concept program that doesn't specify a pointer length manually at all; yet it uses malloc(3), and it passes array parameters to functions.
> How would you fix the utterly broken textual include system?
I'm sorry, but I consider this a feature. I like how it works. :)
If you have any specific concerns about #include, please mention them, but being textual makes it simple and great, IMO.
Posted Oct 28, 2025 12:27 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (9 responses)
From the software side:
- it is hard to hide implementation details beyond "convention"
- it would be nice if C had a way to expose some members for consumers to be able to use while other fields are off-limits (while still supporting manual struct layout use cases)
- header guards are a fact of life (`#pragma once` can run into issues with hard links, symlinks, bind mounts, etc.)
- no namespacing (you never know when `a.h` might start interfering with `b.h`)
- `config.h` patterns rather than more targeted configuration selection (see also: related build gripe)
From the build system side:
- header search is implicit; one really should also depend on all header paths searched until the target is found *not* existing for truly reliable builds
- it is hard to know any kind of higher-level information about headers: which headers belong to which "library" for tools like "you don't need to search for library X because you don't actually use it" or "you're including X's headers but you're finding them because Y makes its headers implicitly available"
- `config.h` busting caches project-wide on changes that affect one function's implementation decision
Some of these are definitely in the "trivial" or "no one cares about that edge case", but I'd much rather prefer something more structured.
Posted Oct 28, 2025 13:42 UTC (Tue)
by alx.manpages (subscriber, #145117)
[Link] (4 responses)
[... hiding stuff ...]
I don't enjoy hiding stuff. Just tell users to not depend on that, and use names that clearly tell users that something is an implementation detail. I guess this falls within what (at least some) C programmers consider a feature.
> - header guards are a fact of life (`#pragma once` can run into issues with hard links, symlinks, bind mounts, etc.)
Not a big deal. There are linters that report if the header guard doesn't follow some pattern (so, if you pasted from elsewhere, it would catch that mistake). Even if not using them, it's trivial to write a script that checks that each header file has a header guard consistent with its pathname.
> - no namespacing (you never know when `a.h` might start interfering with `b.h`)
That's up to the programmer's hygiene. Namespaces look like a_foo and b_foo in C.
You could even have namespaces in C (there are tricks with structures), although people don't use them often because it's not worth it; underscores are cheap and reliable. In practice, it rarely is an issue.
> From the build system side:
>
> - header search is implicit; one really should also depend on all header paths searched until the target is found *not* existing for truly reliable builds
Yep. One can assume system headers are stable, but other than that, yes, each object file needs to recursively depend on all included files in the build system. This is handled easily by the compiler with `-M -MP`.
> - it is hard to know any kind of higher-level information about headers: which headers belong to which "library" for tools like "you don't need to search for library X because you don't actually use it" or "you're including X's headers but you're finding them because Y makes its headers implicitly available"
iwyu(1) solves this issue entirely. See <https://include-what-you-use.org/>.
> - `config.h` busting caches project-wide on changes that affect one function's implementation decision
You dislike autotools (and other build systems). I also dislike them. I use hand-written makefiles that don't have this issue.
Posted Oct 28, 2025 15:30 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
I have not found the "we're all adults here" guidelines to be sufficient to not have to carry exceedingly dumb behaviors in the name of backwards compatibility. Hyrum's Law is very much in effect and just not offering things in the first place is the best solution IME.
> Yep. One can assume system headers are stable, but other than that, yes, each object file needs to recursively depend on all included files in the build system. This is handled easily by the compiler with `-M -MP`.
No, it is not. I'm not saying that. I'm saying that, if in the search for `<stdio.h>` you *check* for `/some/user/path/stdio.h` you *depend* on this file *not existing*. Only fully hygienic build systems even attempt to capture such things.
> iwyu(1) solves this issue entirely. See <https://include-what-you-use.org/>.
That tells you what *headers* you need. What tells you that your `pkg-config --cflags` call is no longer needed because you no longer use any of the headers associated with the package(s) found by it? Similarly, what tells you that header `frobnitz.h` really should have an associated `pkg-config --cflags frobdoodle` in your build system?
> I use hand-written makefiles that don't have this issue.
Presumably you then manage `-D` flags on a per-TU basis?
Posted Oct 28, 2025 22:47 UTC (Tue)
by alx.manpages (subscriber, #145117)
[Link] (2 responses)
I think I'm still not understanding. Please clarify further.
> What tells you that your `pkg-config --cflags` call is no longer needed because you no longer use any of the headers associated with the package(s) found by it?
Hmmm, I see. There's no solution for that, as far as I know. I don't see this as a significant issue of header files, though.
> Similarly, what tells you that header `frobnitz.h` really should have an associated `pkg-config --cflags frobdoodle` in your build system?
Documentation. Manual pages have a LIBRARY section, which is where this should be covered. In libc functions, it looks like this (see for example printf(3)):
LIBRARY
       Standard C library (libc, -lc)
For libraries that need pkgconf(1), it should be documented in that section.
> Presumably you then manage `-D` flags on a per-TU basis?
Yes.
Posted Oct 29, 2025 2:49 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
> I think I'm still not understanding. Please clarify further.
Let's say you have:
```
// gcc -I/home/alx/include main.c
#include <stdio.h>

int main(int argc, char* argv[]) {
    printf("%d args!\n", argc);
    return 0;
}
```
If you compile this, fine, you use `/usr/include/stdio.h`. However, if you then create `/home/alx/include/stdio.h`, what tells your build that this TU is now out-of-date? Because if you compile it again, you'll get different results. The problem is that `-M` and friends will *not* report this depends-on-not-existing (because Makefile and `ninja` both do not have a way to represent this kind of dependency). Hermetic builds can get around it because they do such resource discovery separately to set up the hermetic environment for the compilation itself.
Posted Oct 29, 2025 9:50 UTC (Wed)
by alx.manpages (subscriber, #145117)
[Link]

Ahhh, I see what you mean now. Yes, that's imperfect.

I'm thinking of a way that could work, although I haven't tried it.

Here's the usual rule for building .d files (details may vary):

$(TU_d): $(builddir)/%.d: $(SRCDIR)/% Makefile $(pkconf_file) | $$(@D)/
	$(CC) $(CFLAGS_) $(CPPFLAGS_) -M -MP $(DEPHTARGETS) -MF$@ $<

You could then have a second set of dependency files which get rebuilt unconditionally:

$(TU_d2): $(builddir)/%.d2: $(SRCDIR)/% Makefile $(pkconf_file) FORCE | $$(@D)/
	$(CC) $(CFLAGS_) $(CPPFLAGS_) -M -MP $(DEPHTARGETS) -MF$@ $<

This second set would make sure that the new dependencies are *also* considered. That would make the build system slower, though.

If you measure that and find the additional time to be reasonable, you could do that. Alternatively, be careful with your system headers; but I agree that's not ideal.
Posted Oct 28, 2025 14:01 UTC (Tue)
by paulj (subscriber, #341)
[Link] (3 responses)
This isn't really true. C actually makes it reasonably easy to hide implementation, as long as you're happy to use heap-allocated objects. You just have an interface like:
extern foo;
foo *foo_new();
foo_status foo_action(foo *);
void foo_finish(foo *);
If you wish, make the last take a double pointer, so that foo_finish can "reach back" into the caller and null out its reference, as a safety assist (though that doesn't prevent the caller having other pointers stashed - if this is a worry, implement a weak-ref system and use that instead).
If you want to allow stack-allocation you need a bit more of a dance, a foo_xxxx function to give a size to allocate and a foo_init() function to initialise it.
Languages with things like generics based on included templates (literal or effective) leak implementation much more than C does.
Posted Oct 28, 2025 15:33 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Basically, a way to have the structure available for performance reasons for specific fields while hiding others that should not be messed with. But you also want to control ABI layout for compactness or compatibility requirements.
Posted Oct 28, 2025 15:47 UTC (Tue)
by paulj (subscriber, #341)
[Link] (1 responses)
// foo.h
// public header
typedef struct {
	// public fields
	int x;
	int y;
} foo;

// foo_private.h
typedef struct {
	foo pub;
	// private stuff
} foo_priv;
And you have the foo_new() function allocate a foo_priv and return &(foo_priv).pub; the various foo_xxx(foo *) functions can cast the foo * back to a foo_priv *. Your user gets a limited 'foo' struct; your implementation can add whatever further internals to the end of that. You can extend this approach to allow arbitrary composition of objects into a hierarchy, e.g. see include/linux/container_of.h in the Linux sources.
Personally, I wouldn't use that approach myself. I would just rely on the functions in the API to retrieve information - you can use linker version maps, and/or redirection to manage compatibility over time. If you want to have custom implementations for different instances, supply a struct of function pointers for the API - i.e. a runtime interface.
Posted Oct 28, 2025 18:45 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
The public and private fields might be interleaved to ensure optimal layout.
Posted Oct 27, 2025 7:14 UTC (Mon)
by eru (subscriber, #2753)
[Link]
Affecting 9 programs out of the roughly 100 in coreutils. And the changes look like only internal polishing that no user is likely to notice. A typical example of work on software that is stable and deep in maintenance mode. Rewriting it in some other language takes it back to active development, and kind of resets the lifecycle.
Posted Oct 23, 2025 21:12 UTC (Thu)
by pixelbeat (guest, #7440)
[Link] (13 responses)
Then there are fundamental issues with SIGPIPE handling in all the uutils https://github.com/uutils/coreutils/issues/8919
Also there are questionable interface changes being added like 12 ways to get a sha3 https://github.com/uutils/coreutils/issues/8984
I wish them well, but this needs to be carefully managed.
Posted Oct 23, 2025 23:26 UTC (Thu)
by csigler (subscriber, #1224)
[Link]
I cannot possibly (imaginarily) upvote this comment enough.
For those familiar with the 1976 movie "Network":
"You have meddled with the primal forces of Unix, and _you_will_atone_!!!"
Clemmitt
Posted Oct 24, 2025 4:04 UTC (Fri)
by Keith.S.Thompson (subscriber, #133709)
[Link] (11 responses)

Interesting, I hadn't realized /bin/true was still GNU.
I wonder whether that was a deliberate decision.
("true" and "false" are bash builtins, so the commands under /usr/bin probably aren't used very often.)
Posted Oct 24, 2025 6:05 UTC (Fri)
by mb (subscriber, #50428)
[Link] (1 responses)
Posted Oct 24, 2025 11:31 UTC (Fri)
by makapuf (guest, #125557)
[Link]
/s
Posted Oct 24, 2025 9:27 UTC (Fri)
by jengelh (subscriber, #33263)
[Link] (1 responses)
uutils-md5sum was recently broken too[1], so it is only natural to make a sensitive program like /bin/true (only one very specific return value is allowed!) be based on a known-good implementation.
[1] https://www.phoronix.com/news/Ubuntu-25.10-Coreutils-Make...
Posted Oct 24, 2025 10:18 UTC (Fri)
by collinfunk (subscriber, #169873)
[Link]
$ /bin/true; echo $?
0
$ /bin/true --help > /dev/full; echo $?
true: write error: No space left on device
1
Posted Oct 24, 2025 11:09 UTC (Fri)
by pixelbeat (guest, #7440)
[Link] (3 responses)

Perhaps this is a performance consideration, as all uutils have a larger startup overhead than their GNU equivalents, due mainly to the large multicall binary being used (due to rust binaries being significantly larger).

For example:

$ time seq 10000 | xargs -n1 true
real 0m8.634s
user 0m3.178s
sys 0m5.616s

$ time seq 10000 | xargs -n1 uu_true
real 0m22.137s
user 0m6.542s
sys 0m15.561s

It irks me to see mention of rust implementations being faster, when at a fundamental level like this they're slower and add significant overhead to every command run.
Posted Oct 24, 2025 13:41 UTC (Fri)
by ebee_matteo (subscriber, #165284)
[Link] (2 responses)
> If you don't want to build the multicall binary and would prefer to build the utilities as individual binaries, that is also possible.
This is a decision from the distribution to take, I would say.
Posted Oct 24, 2025 13:55 UTC (Fri)
by pixelbeat (guest, #7440)
[Link] (1 responses)
Yes agreed, though it's a different decision with uutils as the separate binaries are significantly larger.

Note also that GNU coreutils can be built as a multi-call binary. Testing the performance of that here shows that the overhead is not rust specific, but rather the dynamic linker overhead of loading the full set of libs linked by the multi-call binary:

$ ./configure --enable-single-binary --quiet && make -j$(nproc)
$ time seq 10000 | xargs -n1 src/true
real 0m21.595s
user 0m7.437s
sys 0m14.151s
Posted Oct 25, 2025 8:26 UTC (Sat)
by ebee_matteo (subscriber, #165284)
[Link]
I still think that this is a decision for the distribution. It's a tradeoff between being able to better debug and diagnose issues, and binary size.
If I try to use a recompiled version of Rust stdlib and panic = abort, for many binaries I get comparable sizes to the GNU version (not all, this is true, but some of them also add some features).
Posted Oct 24, 2025 13:31 UTC (Fri)
by juliank (guest, #45896)
[Link] (2 responses)
Because we dispatch by argv[0] in the multi-call binary, we then did not find the applet when the tool was invoked via the symlink name.
We do have a hardlink farm now and can resolve based on hardlink where available but it's a bit messy because it requires /proc to be mounted.
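For illustration, here is a minimal sketch of argv[0] dispatch in a multi-call binary (a hypothetical toy, not apt's or uutils' actual code); a link whose name doesn't match any registered applet falls straight through to the error branch, which is the situation described above:

```rust
use std::env;
use std::path::Path;
use std::process::exit;

fn main() {
    // argv[0] is whatever name the program was invoked under: the multi-call
    // binary itself, or a symlink/hardlink named after one of its applets.
    let argv0 = env::args().next().unwrap_or_default();
    let applet = Path::new(&argv0)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("")
        .to_string();

    match applet.as_str() {
        "true" => exit(0),
        "false" => exit(1),
        // Invoked under a name we don't recognize (for example a
        // differently-named symlink): there is nothing to dispatch to.
        other => {
            eprintln!("unrecognized applet name: {other}");
            exit(2);
        }
    }
}
```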
Posted Oct 24, 2025 14:58 UTC (Fri)
by pixelbeat (guest, #7440)
[Link] (1 responses)

Ah right. Note the default GNU coreutils setup avoids that issue by using a wrapper script rather than symlinks. That's the default behavior with ./configure --enable-single-binary in GNU coreutils. I.e. it would install a file with the following contents at /usr/bin/true:

#!/usr/bin/coreutils --coreutils-prog-shebang=true
Posted Nov 2, 2025 10:02 UTC (Sun)
by juliank (guest, #45896)
[Link]
Posted Oct 23, 2025 21:35 UTC (Thu)
by JMB (guest, #74439)
[Link]
No problems for oldschool Linux users ... For Smartphone Junkies that may be true (due to fear of missing out), but for experts there is no need to get even security fixes in less than a week.
And concerning servers ... in most cases even extremely security-relevant problems are not fixed due to other priorities anyway ... from frozen zone ... to ice age.
At least the problem shows that it is not futile to have tested the new Rust code ... but still wondering whether, concerning all bugs, Rust really has a positive benefit for experienced coders ... it seems more a hype than something which can be proved.
Posted Oct 23, 2025 22:00 UTC (Thu)
by geofft (subscriber, #59789)
[Link] (6 responses)
Looks like this was originally reported in https://pad.lv/2127970 on October 16, exactly one week ago and also exactly one week after Ubuntu 25.10's release. The reporter originally mentioned the bug in the context of a homegrown backup script that was failing silently, and they got the fix into the proposed stable update repository yesterday, with an (understandable) argument about why it wasn't same-day levels of urgent.
This morning, someone pointed out that it breaks unattended-upgrades. It seems to me that it was only at this point that it was tracked as a security issue, and the package is now available in both the (prod) stable updates repository and the more minimal security updates repository.
The actual bug itself is simply that support for `date -r <file>` wasn't implemented. The issue https://github.com/uutils/coreutils/issues/8621 and the pull request implementing support https://github.com/uutils/coreutils/pull/8630 were both filed on the same day, September 12 of this year, and it was reviewed and merged into main two days later. This, understandably, postdates whichever release Ubuntu snapshotted.
I think I am mostly surprised that the command silently accepted -r and did nothing, and indeed from the actual diff (https://github.com/uutils/coreutils/commit/88a7fa7adfa048...) it's pretty clear that the argument parser had support for it but it wasn't wired up to do anything. If the command had instead returned an argument parsing error, I think this would have been caught a lot quicker. It does seem a little bit odd that whoever implemented this in the argument parser didn't at least add an "if -r, throw 'todo'" case. But it's also interesting that this was not statically caught. The Rust compiler is pretty good at warning and complaining about unused variables. (To be fair, most C compilers and many other languages are too, though anecdotally these warnings seem less noisy in Rust and I've seen more codebases in Rust where this is a hard failure than C codebases using -Werror. Also, Rust has #[must_use], if you want to be thorough.) However, there wasn't actually an unused variable here; you can see that you get the value out of the parsed-arguments object by asking for the value of the flag.
I wonder if it's worth thinking about an argument-parsing API in Rust that would raise an unused-variable warning at compile time if a parsed command-line flag or argument is never used in the code. It might also be possible to do this with the existing parser with a sufficiently clever linter. Either way, the lack of compile-time detection of this bug feels at odds with the philosophy of a Rust rewrite of coreutils, i.e., that there's merit in having tools do the checking instead of trusting and expecting people to write perfect code.
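As a rough sketch of that failure mode (hypothetical code using clap's builder API, not the actual uutils source): the option below parses cleanly, but nothing ever reads it back, and the compiler has no unused-variable warning to offer because the parser itself "uses" the argument.

```rust
use clap::{Arg, Command};

fn main() {
    let matches = Command::new("date")
        // The flag is declared, so `-r FILE` is accepted without complaint...
        .arg(Arg::new("reference").short('r').long("reference"))
        .get_matches();

    // ...but if no code ever calls matches.get_one::<String>("reference"),
    // the option is silently ignored. An explicit guard like this would have
    // turned the gap into a loud failure instead of silently wrong output:
    if matches.contains_id("reference") {
        unimplemented!("-r/--reference is accepted but not wired up yet");
    }

    println!("(the current date would be printed here)");
}
```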
I also think it would be very much worth it for Ubuntu and the uutils developers to manually do an audit for all arguments that are parsed in an argument parser but not actually implemented. If this pattern happened once, it likely isn't the only case.
Posted Oct 24, 2025 7:49 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (4 responses)
Posted Oct 24, 2025 9:19 UTC (Fri)
by cyperpunks (subscriber, #39406)
[Link] (2 responses)
Posted Oct 27, 2025 8:45 UTC (Mon)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Oct 28, 2025 7:07 UTC (Tue)
by donald.buczek (subscriber, #112892)
[Link]
[1] https://github.com/uutils/coreutils/issues/4254
[2] https://github.com/tertsdiepraam/uutils-args/blob/main/do...
Posted Oct 24, 2025 19:48 UTC (Fri)
by geofft (subscriber, #59789)
[Link]
I guess that from the perspective of the compiler, the struct member is "used" because it's passed off to a macro / the derived Parser implementation assigns to it and otherwise uses it. So the fact that the user code isn't using it isn't alerted on.
Posted Oct 28, 2025 11:14 UTC (Tue)
by job (guest, #670)
[Link]
That whole story is wrong on so many levels. That type of bug should simply not be possible, and the fact that there are chunks of core functionality still missing should immediately disqualify the idea of replacing coreutils on one of the biggest Linux distributions on the planet.
There are compliance test suites for coreutils. Surely the replacements must satisfy them before there can even be discussions about replacing them in production?
I don't know who Ubuntu thinks their customers are, but it can't be *that* important to rip out GPL-licensed code.
Posted Oct 24, 2025 22:17 UTC (Fri)
by smcv (subscriber, #53363)
[Link] (25 responses)
(I thought Ubuntu used unattended-upgrades, which is Python with calls into the C++ apt libraries; or for desktops, maybe packagekit, which is C with calls into C++.)
Posted Oct 26, 2025 21:05 UTC (Sun)
by cyperpunks (subscriber, #39406)
[Link] (24 responses)

> (I thought Ubuntu used unattended-upgrades, which is Python with calls into the C++ apt libraries; or for desktops, maybe packagekit, which is C with calls into C++.)
While extending support for our product to *.deb based distros, I was truly amazed at how much packaging logic in Debian/Ubuntu is based on old-fashioned shell scripts. It's everywhere: build scripts, signing of packages, management of repos, service scripts, cron jobs, upgrade logic, etc., all implemented in bash scripts. Red Hat has tried to move much of this stuff to Python, with some success; some tooling is horribly slow due to this choice.
Going forward, it is rather obvious to me that the correct way forward is to convert all this stuff to Rust; going to uutils is the first baby step.
Posted Oct 26, 2025 22:38 UTC (Sun)
by stijn (subscriber, #570)
[Link] (23 responses)

When dealing with state written in files and sequences of processes needed to produce these states/files, I find shell very expressive compared to, say, writing this in Python. I come at it from a different angle (bioinformatics), where there is a place for shell scripts before going to orchestration software such as Snakemake, Nextflow and others. I find writing equivalent functionality in Python painful. Maybe there is a danger of avoiding clear designs by relying on the flexibility of shell scripts, but I can also envision downsides to the opaqueness of binary executables. The latter make sense if they fill a well-defined niche with coherent functionality, at which stage they may end up as command lines in shell scripts. There is a mode of computing where objects are files and the transformations between them are naturally expressed in a shell-like language supported by powerful transformers (binary executables) and pipes. My natural instinct is to express more things this way, not fewer (I have no expertise in packaging logic, so I have no opinion about this particular case). I'm curious to hear other views.
Posted Oct 27, 2025 0:46 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link] (22 responses)
import subprocess

def git(*args):
    return subprocess.check_output(('git',) + args, encoding='utf-8').strip()
The main benefit, to me, is not having to fight with escaping rules. Some of the tools I've written are still bash because they don't do much in the way of fancy argument dances. But once you're looking at `"${arg[@]}"` in any kind of frequency, I feel you really want to just have tuples and proper strings at your fingertips.
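For what it's worth, the same shape works in compiled languages too; here is a rough Rust parallel of the wrapper above (a hypothetical helper, assuming git is on PATH), where arguments go to exec as a list and no shell quoting layer is involved:

```rust
use std::process::Command;

// Hypothetical helper mirroring the Python wrapper above: arguments are passed
// as a slice straight to exec, so no shell is involved and nothing needs
// quoting or escaping, no matter what the strings contain.
fn git(args: &[&str]) -> std::io::Result<String> {
    let out = Command::new("git").args(args).output()?;
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}

fn main() -> std::io::Result<()> {
    // A pathspec full of spaces and glob characters needs no special handling.
    println!("{}", git(&["log", "-1", "--format=%s", "--", "notes/some file *.txt"])?);
    Ok(())
}
```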
Posted Oct 27, 2025 9:19 UTC (Mon)
by kleptog (subscriber, #1183)
[Link]
The advantage of shell scripts is that they work on even very minimal systems, whereas you cannot rely on Python working from the start.
Posted Oct 27, 2025 9:22 UTC (Mon)
by taladar (subscriber, #68407)
[Link]
However I would argue that a static Rust binary is much better here than Python as a replacement since it can be relied on to work without a runtime system.
Posted Oct 27, 2025 10:43 UTC (Mon)
by stijn (subscriber, #570)
[Link] (19 responses)
Ah yes, I agree with that. I wish for a shell that has that. Additionally, some of my shell scripts are probably not written in a sufficiently hardened manner. They are not supposed to be called where bad actors could poison them, but who knows when a little Bobby Tables pops up.

What I love about shell, and what I don't see can be replicated easily elsewhere (as succinctly, in a streaming manner), is the composition of processes in pipes. Perhaps that's particular to the type of files / data I work with.
Posted Oct 27, 2025 11:14 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (18 responses)
The runtime problem with Python is indeed real; combined with the continuous flow of incompatible changes in newer Python versions, it means test coverage must be 100%, which is very rarely the case.
Deployment with a single, cargo-produced binary is much simpler and way safer.
Posted Oct 27, 2025 14:57 UTC (Mon)
by edgewood (subscriber, #1123)
[Link] (17 responses)

One advantage of shell and interpreted scripts like Python is that you can never lose the source, or be unsure which version of the source produced the version of the program you're running.
Posted Oct 27, 2025 15:29 UTC (Mon)
by euclidian (subscriber, #145308)
[Link] (3 responses)
I guess in general, for a lot of binaries, the bloat would be too much. Also, for legacy C/C++, the pure source without knowing what compiler flags were used would be less useful, but it would be a start for some code archeology.
I have some old binaries at work where we lost the source / patches we added to it.
Posted Oct 27, 2025 19:18 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (1 responses)
Posted Oct 28, 2025 8:45 UTC (Tue)
by taladar (subscriber, #68407)
[Link]
Posted Oct 28, 2025 2:32 UTC (Tue)
by Fowl (subscriber, #65667)
[Link]
Posted Oct 27, 2025 15:30 UTC (Mon)
by dskoll (subscriber, #1630)
[Link] (12 responses)
And also, if things mess up, it's less of a hassle to debug if your cycle is "edit/run" rather than "edit/compile/run". I don't think compiled languages are appropriate for the non-performance-critical plumbing in cases like this.
Posted Oct 28, 2025 8:46 UTC (Tue)
by taladar (subscriber, #68407)
[Link] (11 responses)
Posted Oct 28, 2025 13:46 UTC (Tue)
by dskoll (subscriber, #1630)
[Link] (10 responses)
Is that the case for the typical use-cases of installers and sysadmin-type scripts? I have no data, so can't say either way, but my gut feeling is that the hassle of using a compiled language for those things outweighs the benefits.
Posted Oct 29, 2025 9:00 UTC (Wed)
by taladar (subscriber, #68407)
[Link] (9 responses)
Posted Oct 29, 2025 15:19 UTC (Wed)
by dskoll (subscriber, #1630)
[Link] (8 responses)
Well, good for you. But you must be running some very weird shell:

$ expr 08 + 09
17
Also, doing arithmetic with a month number is IMO already a little bit suspicious... date arithmetic is much more complicated than that. Adding 1 to the month number of a date to get "same date next month" is already wrong in any programming language.
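A small sketch of why, using the chrono crate purely for illustration (my choice, not anything from the thread): bumping the month number on January 31 asks for a date that does not exist, and the library refuses rather than guessing.

```rust
use chrono::{Datelike, NaiveDate};

fn main() {
    let d = NaiveDate::from_ymd_opt(2025, 1, 31).unwrap();

    // "Same date next month" by adding 1 to the month number: February 31st
    // does not exist, so with_month() returns None instead of a date.
    assert_eq!(d.with_month(2), None);

    // Real date arithmetic has to decide what "one month later" means here
    // (clamp to Feb 28? roll into March?); there is no single obvious answer.
    println!("{} plus \"one month\" is not well defined", d);
}
```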
Posted Oct 29, 2025 16:36 UTC (Wed)
by Wol (subscriber, #4433)
[Link] (1 responses)
And I know I've implemented date arithmetic with "add 1 to the month, carry if it's gone over 12", with similar subtract logic.
Cheers,
Wol

Posted Oct 30, 2025 8:58 UTC (Thu)
by taladar (subscriber, #68407)
[Link] (5 responses)
Doing the same arithmetic with $(( )) in bash will give you

> bash: 08: value too great for base (error token is "08")
Posted Oct 30, 2025 10:05 UTC (Thu)
by malmedal (subscriber, #56172)
[Link] (4 responses)
If people are actually doing date calculations in shell I think this is better:
$ date -d 'next wed + 4 weeks'
Wed 3 Dec 00:00:00 GMT 2025

Though I tend to prefer Date::Manip for the purpose, easier to get right.

$ perl -MDate::Manip -wle 'print ParseDate("first thursday of dec")'
2025120400:00:00
$ perl -MDate::Manip -le 'print DateCalc(ParseDate("last thursday of dec 2024"), "2 weeks")'
2025010900:00:00
Posted Oct 30, 2025 10:09 UTC (Thu)
by taladar (subscriber, #68407)
[Link] (3 responses)
Posted Oct 30, 2025 14:49 UTC (Thu)
by dskoll (subscriber, #1630)
[Link] (2 responses)
Sure, there are gotchas in shell scripting. But I don't think moving to a compiled language is the right answer, necessarily. There are better scripting languages than shell (Perl or Python, for example) that have fewer gotchas while remaining very easy for sysadmins to write or modify.
Posted Oct 31, 2025 10:37 UTC (Fri)
by taladar (subscriber, #68407)
[Link] (1 responses)
Posted Oct 31, 2025 14:53 UTC (Fri)
by malmedal (subscriber, #56172)
[Link]
Actually I've noticed that sysadmin-stuff, that earlier would be written as a command-line app, is often written as a web-gui these days. So maybe try to convince them to use typescript at least?
Hmm, also go is quite popular for sysadmin stuff, come to think it.