
The GNU Operating System and the Free Software Movement

@gnu.org.web.brid.gy

Since 1983, developing the free Unix-style operating system GNU, so that computer users can have the freedom to share and improve the software they use. [bridged from https://gnu.org/ on the web: https://fed.brid.gy/web/gnu.org ]

85 Followers  |  0 Following  |  263 Posts  |  Joined: 01.11.2024

Latest posts by gnu.org.web.brid.gy on Bluesky

GNU Guix: Fundraising campaign to sustain GNU Guix

Today we're launching a fundraising campaign to **sustain and strengthen** GNU Guix. Guix is completely independent from any company or institution; we rely on the support of our community to fund the project. If you can, **please help sustain Guix by making a donation**. DONATE NOW

## Why we need your support

Like many Free Software projects, we need financial support because running a project is expensive. We incur costs for development infrastructure, facilitating developer collaboration, and supporting the community around the project. As a package manager and GNU/Linux distribution, Guix has some unique needs. As the distribution grows and becomes more popular, our costs also grow: each package added to the distribution increases the number of builds, and as more people use Guix, the cost of delivering those packages grows too.

## Sustain Guix

To be sustainable we need to match our expenses with our incoming donations. This gives the project certainty that there won't be a sudden funding shortfall. Currently, shortfalls happen: even recently, individual volunteers have had to step in and fund services from their own pockets. That's risky and unsustainable, so we're aiming for stable financial foundations. To achieve that goal we need €15,000 (roughly $17,500) of donations a year, which would pay for the current infrastructure and project expenses. **Recurring donations** are critical to sustainability because they provide a regular stream of income that can pay for the ongoing shared resources we all use; for example, a better build farm means more hosting and bandwidth, which is a recurring cost. So is the goal achievable? It's definitely a big goal, but it's only €1,250 a month, so if **125 people contribute €10 a month**, that would get us to the target and make all the difference!

## Strengthen Guix

We would love to do more, and if there's support from the community then we will. There's lots more we could do! With more funding we'll be able to **strengthen Guix** by expanding the infrastructure and investing more to support the project and promote Guix. With more support we'd be able to do things like:

* Improve the overall resilience of our infrastructure so that services are more reliable.
* Increase the substitute infrastructure's bandwidth and distribution.
* Tell more people about Guix by attending events and organising user sprints and conferences.

## Donate Now to Sustain Guix

Now is the time when we ask for your help. Please donate to sustain and strengthen Guix. You can donate through either the FSF or the Guix Foundation using a variety of payment methods. If you haven't heard of the Guix Foundation, it's an EU-based non-profit dedicated to supporting the development and promotion of GNU Guix. It's a members-driven association, so by becoming a member you'll be supporting Guix and will have a voice in its activities. Every donation helps: **recurring donations** are ideal, but we appreciate any support you can give. Every donation gets us a step closer to being sustainable. DONATE NOW

Thank you for your support!
29.09.2025 07:00
FSF Events: Free Software Directory meeting on IRC: Friday, October 3, starting at 12:00 EDT (16:00 UTC) Join the FSF and friends on Friday, October 3 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
23.09.2025 14:45
FSF Events: Free Software Directory meeting on IRC: Friday, September 26, starting at 12:00 EDT (16:00 UTC) Join the FSF and friends on Friday, September 26 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
23.09.2025 14:44
FSF News: Job opportunity: Program Manager at the Free Software Foundation BOSTON, Massachusetts, USA (Friday, September 19, 2025) -- The Free Software Foundation (FSF) announced a job opportunity for a motivated and talented program manager.
19.09.2025 15:46
GNUnet News: libgnunetchat 0.6.0

# libgnunetchat 0.6.0 released

We are pleased to announce the release of libgnunetchat 0.6.0. This is a minor new release bringing compatibility with the major changes in the latest GNUnet release, 0.25.0. A few API updates and fixes are included. Additionally, the messaging client applications using libgnunetchat were updated to stay compatible. Because of that, this release also requires GNUnet services version 0.25.0 or later.

#### Download links

* libgnunetchat-0.6.0.tar.gz
* libgnunetchat-0.6.0.tar.gz.sig

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

#### Noteworthy changes in 0.6.0

* Fixes issues regarding group creation, leaving chats, and invitations
* Improves latency by reducing message graph complexity caused by automated event handling

A detailed list of changes can be found in the ChangeLog.

## Messenger-GTK 0.11.0

This minor release adds private chats for writing notes to yourself and makes minor changes to the user interface. But mostly the release is intended to reflect the changes in libgnunetchat 0.6.0.

#### Download links

* messenger-gtk-0.11.0.tar.gz
* messenger-gtk-0.11.0.tar.gz.sig

Keep in mind the application is still in development, so there may still be major bugs keeping you from getting a reliable connection. If you encounter such an issue, feel free to consult our bug tracker at bugs.gnunet.org. A detailed list of changes can be found in the ChangeLog.

## messenger-cli 0.4.0

This release adds changes for compatibility with libgnunetchat 0.6.0 and some text prompts in dialogs to improve overall usability.

* messenger-cli-0.4.0.tar.gz
* messenger-cli-0.4.0.tar.gz.sig

A detailed list of changes can be found in the ChangeLog.
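Since the tarballs ship with detached signatures, one way to check a download against the listed GPG key might look like the sketch below (the mirror URL comes from the post; fetching the key from a keyserver is an assumption, and you may prefer to obtain it from the GNUnet website instead):

    # fetch the release and its detached signature from the GNU mirror
    wget http://ftp.gnu.org/gnu/gnunet/libgnunetchat-0.6.0.tar.gz
    wget http://ftp.gnu.org/gnu/gnunet/libgnunetchat-0.6.0.tar.gz.sig
    # import the signing key (keyserver choice is an assumption), then verify
    gpg --recv-keys 3D11063C10F98D14BD24D1470B0998EF86F59B6A
    gpg --verify libgnunetchat-0.6.0.tar.gz.sig libgnunetchat-0.6.0.tar.gz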
16.09.2025 22:00
freeipmi @ Savannah: FreeIPMI 1.6.16 Released

o Fix potential sensor reading miscalculation on systems where a char is defined as unsigned (such as ARM) vs. signed (such as x86).
o Fix gcc15 compilation errors.

https://ftp.gnu.o ... pmi-1.6.16.tar.gz
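As a side note on the first fix: whether plain `char` is signed is a per-platform ABI choice, and a quick way to see what your toolchain does (a sketch; assumes GCC or a GCC-compatible compiler is installed) is to check for the predefined `__CHAR_UNSIGNED__` macro:

    # prints the macro on platforms where plain char is unsigned (e.g. ARM);
    # prints nothing where it is signed (e.g. x86)
    gcc -dM -E -x c /dev/null | grep __CHAR_UNSIGNED__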
15.09.2025 19:18
GNU Guix: Privilege Escalation Vulnerability

A security issue has been identified in `guix-daemon` which allows a local user to gain the privileges of any of the build users and subsequently use this to manipulate the output of any build. In the case of the rootless daemon, this also means gaining the privileges of `guix-daemon`. All systems are affected, whether or not `guix-daemon` is running with root privileges. You are strongly advised to **upgrade your daemon** now (see instructions below). The only requirements to exploit this are the ability to create and build an arbitrary derivation that has `builtin:download` as its builder, and to execute a setuid program on the system in question. As such, this represents an increased risk primarily to multi-user systems, but also more generally to any system in which untrusted code may be able to access guix-daemon's socket, which is usually located at `/var/guix/daemon-socket/socket`.

# Vulnerability

`guix-daemon` currently supports two so-called _built-in builders_ for derivations: `builtin:download` and `builtin:git-download`. Their primary utility is currently to circumvent bootstrapping challenges in code-downloading derivations (that is, "how does one get the programs that are used to download programs?") by using "host-side" programs for this purpose. In particular, `download` and `git-download` are currently implemented in the `(guix scripts perform-download)` module, which can be run as a standalone program using `guix perform-download`.

At the time that perform-download was written, it was believed to be sufficient for security purposes to ensure that it was run by a build user, and so one of the user-supplied values (the file named by the derivation's `content-addressed-mirrors` environment variable) was read and evaluated as arbitrary Guile code. This suffices to protect a regular user from an untrusted derivation they may be building, since all processes owned by the build user will be killed at the end of the build, before any other build can be run that uses the same build user. It does not suffice, however, to protect the daemon's build users (and by extension the integrity of the daemon's builds) from a regular user. In particular, a `content-addressed-mirrors` file can be written to create a setuid program that allows a regular user to gain the privileges of the build user that runs it even after the build has ended. This is similar in impact to CVE-2025-46416, though using a somewhat different mechanism.

# Mitigation

This security issue has been fixed by 3 commits (`2a33354`, `f607aaa`, and `9202921`) as part of pull request #2419. Users should make sure they have upgraded to commit `1618ca7` or any later commit to be protected from this vulnerability. Upgrade instructions are in the following section.

The fix was accomplished by changing `(guix scripts perform-download)` to evaluate the `content-addressed-mirrors` file in an isolated Guile environment, as well as verifying that this file and the others that `perform-download` reads are neither outside of the store nor symbolic links to files outside of the store, so that this cannot be used to cause arbitrary files (such as `/proc/PID/fd/N`) to be read. Measures were also taken to ensure that calls to `read` never cause code to be evaluated. A test for the presence of this vulnerability is available at the end of this post.
One can run this code with:

    guix repl -- content-addressed-mirrors-vuln-check.scm

This will output whether the current `guix-daemon` being used is vulnerable or not. If it is _not_ vulnerable, the last line will contain `guix-daemon is not vulnerable` and `guix repl` will exit with status code 0; otherwise the last line will contain `guix-daemon is VULNERABLE` and `guix repl` will exit with status code 1.

# Upgrading

Due to the severity of this security advisory, **we strongly recommend all users upgrade `guix-daemon` immediately**.

**For Guix System**, the procedure is to reconfigure the system after a `guix pull`, either restarting `guix-daemon` or rebooting. For example:

    guix pull
    sudo guix system reconfigure /run/current-system/configuration.scm
    sudo herd restart guix-daemon

where `/run/current-system/configuration.scm` is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.

**For Guix on another distribution**, one needs to `guix pull` with `sudo`, as the `guix-daemon` runs as root, and restart the `guix-daemon` service, as documented. For example, on a system using systemd to manage services, run:

    sudo --login guix pull
    sudo systemctl restart guix-daemon.service

Note that users running their distro's package of Guix (as opposed to having used the install script) may need to take other steps or upgrade the Guix package as they would other packages on their distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.

# Conclusion

## Test for presence of vulnerability

Below is code to check if your `guix-daemon` is vulnerable to this exploit. Save this file as `content-addressed-mirrors-vuln-check.scm` and run it following the instructions above, in "Mitigation."

    (use-modules (guix monads)
                 (guix derivations)
                 (guix gexp)
                 (guix store)
                 (guix utils)
                 (guix packages)
                 (srfi srfi-34))

    (define %test-filename "/tmp/content-addressed-mirrors-vulnerable")

    (define %test-content-addressed-mirrors
      `(begin (mkdir ,%test-filename) (exit 33)))

    (define %test-content-addressed-mirrors-file
      (plain-file "content-addressed-mirrors"
                  (object->string %test-content-addressed-mirrors)))

    (define (test-content-addressed-mirrors content-addressed-mirrors)
      (mlet %store-monad ((content-addressed-mirrors
                           (lower-object content-addressed-mirrors)))
        (raw-derivation "content-addressed-mirrors-vuln-check"
                        "builtin:download" '()
                        #:hash (base32 "dddddddddddddddddddddddddddddddddddddddddddddddddddd")
                        #:hash-algo 'sha256
                        #:sources (list content-addressed-mirrors)
                        #:env-vars `(("url" . "/doesnotexist")
                                     ("content-addressed-mirrors" . ,content-addressed-mirrors))
                        #:local-build? #t)))

    (with-store store
      (let ((drv (run-with-store store
                   (test-content-addressed-mirrors
                    %test-content-addressed-mirrors-file))))
        (guard (c ((and (store-protocol-error? c)
                        (string-contains (store-protocol-error-message c) "failed"))
                   (cond ((file-exists? %test-filename)
                          (format #t "content-addressed-mirrors can evaluate arbitrary code, guix-daemon is VULNERABLE~%")
                          (exit 1))
                         (else
                          (format #t "content-addressed-mirrors can't create files, guix-daemon is not vulnerable~%")
                          (exit 0)))))
          (build-derivations store (list drv)))))
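After pulling, one quick sanity check (a sketch, not part of the advisory) is to ask Guix which commit it was built from and confirm it corresponds to `1618ca7` or a later commit; keep in mind the daemon only picks up the fix once it has been reconfigured or restarted from the new Guix:

    guix pull
    # show the channel commit the freshly pulled guix was built from
    guix describe --format=channels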
01.09.2025 14:00
Parabola GNU/Linux-libre: 'zabbix' users: manual intervention may be required

From Arch: Starting with `7.4.1-2`, the following Zabbix system user accounts (previously shipped by their related packages) will no longer be used. Instead, all Zabbix components will now rely on a shared `zabbix` user account (as originally intended by upstream and done by other distributions):

* zabbix-server
* zabbix-proxy
* zabbix-agent _(also used by the `zabbix-agent2` package)_
* zabbix-web-service

This shared `zabbix` user account is provided by the newly introduced `zabbix-common` _split_ package, which is now a dependency for all relevant `zabbix-*` packages. The switch to the new user account is handled automatically for the corresponding main configuration files and `systemd` service units. However, **manual intervention may be required** if you created custom files or configurations referencing and/or owned by the deprecated user accounts above, for example:

* `PSK` files used for encrypted communication
* Custom scripts for metrics collection or report generation
* `sudoers` rules for metrics requiring elevated privileges to be collected
* ...

These should be updated to refer to and/or be owned by the new `zabbix` user account; otherwise some services or user parameters may fail to work properly, or not at all. Once migrated, you may [remove the obsolete user accounts from your system]; a hedged cleanup sketch follows below.
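For example, finding leftovers and re-owning them might look like this (a sketch only; the PSK path is an illustrative placeholder, not part of the Arch/Parabola announcement, and you should review each match before changing it):

    # list files still owned by the deprecated accounts
    find / -xdev \( -user zabbix-agent -o -user zabbix-server \
                    -o -user zabbix-proxy -o -user zabbix-web-service \) -print
    # re-own a custom PSK file (example path) for the shared account
    chown zabbix:zabbix /etc/zabbix/zabbix_agentd.psk
    # once nothing references them any more, drop the obsolete accounts
    userdel zabbix-agent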
04.08.2025 15:58
health @ Savannah: One Health Conference 2025 - Rome

Dear all,

Luis Falcón, author of GNU Health and president of GNU Solidario, will be a keynote speaker at the 4th One Health conference, which will take place in Rome, Italy, September 30 to October 2, 2025. Those of you who know the mission of GNU Solidario will understand the relevance of this congress to our society, our planet, and humanity. Among other things, Luis will talk about the importance of GNU Health, the Global Exposome project, and Open Science in achieving social justice, and about why immediately moving away from ruthless anthropocentrism and starting to respect other species and nature is not only the morally right thing to do, but also the key to our survival as a species.

Looking forward to meeting you in Rome! More information: https://onehealthconference.it/
02.08.2025 10:22
mailutils @ Savannah: GNU mailutils version 3.20

GNU mailutils version 3.20 is available for download. New in this version:

### Movemail synchronization mode

Setting the synchronization mode allows the user to keep messages in the remote source mailbox while downloading only recently received messages. The mode is defined via the **--sync** command line option or the **sync** configuration statement. Allowed values are **uidnext**, **uidl**, and **all**. When set to **uidnext**, **movemail** uses the combination of uidvalidity/uidnext values. This is useful mainly if the source mailbox is accessed via the IMAP4 protocol. When using this method, **movemail** stores session metadata in files in the directory **~/.movemail.sync**. The directory location can be changed using the **--sync-dir** option or the **sync-dir** configuration statement. The **uidl** setting instructs the program to use UIDL values. This is useful if the source mailbox is accessed via the POP3 protocol. Finally, the value **all** tells it to download all messages. This is the default behavior when no **--sync** option is given.

### Other changes in movemail

* The **--reverse** option was removed. It made little sense and was never used.
* The **--max-messages** option sets the maximum number of latest messages to process.

### New Sieve test: uidnew

This test keeps track of processed messages. It evaluates to true if the current message was not processed before. The test is implemented as an external module. To require it, use:

    require "test-uidnew";

Sample use:

    if uidnew
    {
      fileinto "store"; keep;
    }

For each processed mailbox, the test keeps its state in a GDBM file, **~/.uidnew.db**. The **:db** tagged argument can be used to alter this location.

### Bugfix in the imap4d UID SEARCH command

The UID SEARCH command incorrectly treated its message range argument as UIDs instead of as message sequence numbers.
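For instance, a movemail invocation using the new synchronization mode might look like the sketch below (the IMAP URL and destination mailbox are illustrative placeholders, not taken from the announcement; check the mailutils manual for the exact mailbox URL syntax on your system):

    # fetch only messages that arrived since the last run,
    # leaving everything on the server
    movemail --sync=uidnext 'imaps://user@imap.example.com' ~/Mail/inbox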
30.07.2025 12:54
FSF Events: Free Software Directory meeting on IRC: Friday, July 11, starting at 12:00 EDT (16:00 UTC) Join the FSF and friends on Friday, July 11 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
24.06.2025 18:05
FSF Events: Free Software Directory meeting on IRC: Friday, June 27, starting at 12:00 EDT (16:00 UTC) Join the FSF and friends on Friday, June 27 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
24.06.2025 17:55
GNU Guix: Privilege Escalation Vulnerabilities (CVE-2025-46415, CVE-2025-46416)

Two security issues, known as **CVE-2025-46415** and **CVE-2025-46416**, have been identified in `guix-daemon`. They allow a local user to gain the privileges of any of the build users, use this to manipulate the output of any build, and subsequently gain the privileges of the daemon user. You are strongly advised to **upgrade your daemon now** (see instructions below), especially on multi-user systems.

Both exploits require the ability to start a derivation build. CVE-2025-46415 additionally requires the ability to create files in `/tmp` in the root mount namespace on the machine the build occurs on, and CVE-2025-46416 requires the ability to run arbitrary code in the root PID and network namespaces on the machine the build occurs on. As such, this represents an increased risk primarily to multi-user systems, but also more generally to any system in which untrusted code may be able to access guix-daemon's socket, which is usually located at `/var/guix/daemon-socket/socket`.

# Vulnerability

One of the longstanding oversights of Guix's build environment isolation is what has become known as the _abstract Unix-domain socket hole_: a Linux-specific feature that enables any two processes in the same network namespace to communicate _via_ Unix-domain sockets, _regardless of all other namespace state_. Unix-domain sockets are perhaps the single most powerful form of interprocess communication (IPC) that Unix-like systems have to offer, for the reason that they allow file descriptors to be passed between processes.

This behavior had played a crucial role in CVE-2024-27297, in which it was possible to smuggle a writable file descriptor to one of the output files of a fixed-output derivation to a process outside of the build environment sandbox. More specifically, this would use a fixed-output derivation that doesn't use a builtin builder; examples of this class of derivation include derivations produced by origins using `svn-fetch` and `hg-fetch`, but not `git-fetch` or `url-fetch`, since those are implemented using builtin builders. The process could then wait for the daemon to validate the hash and register the output, and subsequently modify the file to contain any contents it desired.

The fix for CVE-2024-27297 seems to have made the assumption that once the build was finished, no more processes could be running as that build user. This is unfortunately incorrect: the builder could also smuggle out the file descriptor of a setuid program, which could subsequently be executed either using `/proc/self/fd/N` or `execveat` to gain the privileges of the build user. This assumption was likely believed to hold in Nix because Nix had a seccomp filter that attempted to forbid the creation of setuid programs entirely by blocking the necessary `chmod` calls. The security researchers who discovered CVE-2025-46415 and CVE-2025-46416 found ways around Nix's seccomp filter, but Guix never had any such filter to begin with. It was therefore possible to run arbitrary code as the build user outside of the isolated build environment at any time. Because it is possible to run arbitrary code as the build user even after the build has finished, many assumptions made in the design of the build daemon (not only in fixing CVE-2024-27297 but going way back) can be violated and exploited.
One such assumption is that directories being deleted by `deletePath` (for instance the build tree of a build that has just failed) won't be modified while it is recursing through them. By violating this assumption, it is possible to exploit race conditions in `deletePath` to get the daemon to delete arbitrary files. One such file is a build directory of the form `/tmp/guix-build-PACKAGE-X.Y.drv-0`. If this is done between when the build directory is created and when it is `chown`ed to the build user, an attacker can put a symbolic link in the appropriate place and get the daemon to `chown` any file owned by the daemon's user so that it becomes owned by the build user. In the case of a daemon running as root, that includes files such as `/etc/passwd`. The build users, as mentioned before, are easily compromised, so an attacker can at this point write to the target file. When `guix-daemon` is _not_ running as root, the attacker would gain the privileges of the `guix-daemon` user, giving write access to the store and nothing else.

In short, there are two separate problems here:

1. It is possible to take over build users by exfiltrating setuid programs (CVE-2025-46416).
2. Race conditions in the daemon make it possible to elevate privileges when other processes can concurrently modify files it operates on (CVE-2025-46415).

# Mitigation

These security issues have been fixed by 6 commits (7173c2c0ca, be8aca0651, fb42611b8f, c659f977bb, 0e79d5b655, and 30a5d140aa) as part of pull request #788. Users should make sure they have upgraded to commit 30a5d140aa or any later commit to be protected from these vulnerabilities. Upgrade instructions are in the following section.

The fix was accomplished primarily by closing the "abstract Unix-domain socket hole" entirely. To do this, the daemon was modified so that all builds, even fixed-output ones, occur in a fresh network namespace. To keep networking functional despite the separate network namespace, a userspace networking stack, slirp4netns, is used. Additionally, some of the daemon's file deletion and copying helper procedures were modified to use the `openat` family of system calls, so that even in cases where build users can be taken over (for example, when the daemon is run with `--disable-chroot`), those particular helper procedures can't be exploited to escalate privileges.

A test for the presence of the abstract Unix-domain socket hole is available at the end of this post. One can run this code with:

    guix repl -- abstract-socket-vuln-check.scm

This will output whether the current `guix-daemon` being used is vulnerable or not. If it is _not_ vulnerable, the last line will contain `Abstract unix socket hole is CLOSED`; otherwise the last line will contain `Abstract unix socket hole is OPEN, guix-daemon is VULNERABLE`. Note that this will properly report that the hole is still open for daemons running with `--disable-chroot`, which is, as before, still insecure wherever untrusted users can access the daemon's socket.

# Upgrading

Due to the severity of this security advisory, **we strongly recommend all users upgrade `guix-daemon` immediately**.

**For Guix System**, the procedure is to reconfigure the system after a `guix pull`, either restarting `guix-daemon` or rebooting. For example:

    guix pull
    sudo guix system reconfigure /run/current-system/configuration.scm
    sudo herd restart guix-daemon

where `/run/current-system/configuration.scm` is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.
**For Guix on another distribution**, one needs to `guix pull` with `sudo`, as the `guix-daemon` runs as root, and restart the `guix-daemon` service, as documented. For example, on a system using systemd to manage services, run:

    sudo --login guix pull
    sudo systemctl restart guix-daemon.service

Note that users running their distro's package of Guix (as opposed to having used the install script) may need to take other steps or upgrade the Guix package as they would other packages on their distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.

# Timeline

On March 27th, the NixOS/Nixpkgs security team forwarded a detailed report about two vulnerabilities from Snyk Security Labs to the Guix security team and to Ludovic Courtès and Reepca Russelstein (as contributors to `guix-daemon`). A 90-day disclosure timeline was agreed upon with Snyk and all the affected projects: Nix, Lix, and Guix. During that time, development of the fixes in Guix was led by Reepca Russelstein, with peer review happening on the private `guix-security` mailing list. Coordination with the other projects and for this security advisory was managed by the Guix security team. A pre-disclosure announcement was sent by the NixOS/Nixpkgs and Guix security teams on June 19th and 20th, giving June 24th as the full public disclosure date.

Some other CVEs that were included in the report were CVE-2025-52991, CVE-2025-52992, and CVE-2025-52993. These don't represent direct vulnerabilities so much as missed opportunities to mitigate the attack the researchers identified; that is, it has to be possible to do things like exfiltrate file descriptors (for CVE-2025-52992) and trick the daemon into deleting arbitrary files (for CVE-2025-52991 and CVE-2025-52993) before these start mattering.

# Conclusion

More information concerning the fix for this vulnerability and the design choices made for it will be provided in a follow-up blog post. We thank the Security Labs team at Snyk for discovering similar-but-not-quite-the-same vulnerabilities in Nix, and the NixOS/Nixpkgs security team for sharing this information with the Guix security team, which led us to realize our own related vulnerabilities.

## Test for presence of vulnerability

Below is code to check if your `guix-daemon` is vulnerable to this exploit. Save this file as `abstract-socket-vuln-check.scm` and run it following the instructions above, in "Mitigation."

    ;; Checking for CVE-2025-46415 and CVE-2025-46416.
    (use-modules (guix)
                 (gcrypt hash)
                 ((rnrs bytevectors) #:select (string->utf8))
                 (ice-9 match)
                 (ice-9 threads)
                 (srfi srfi-34))

    (define nonce
      (string-append "-" (number->string (car (gettimeofday)) 16)
                     "-" (number->string (getpid))))

    (define socket-name (string-append "\0" nonce))

    (define test-message nonce)

    (define check
      (computed-file "check-abstract-socket-hole"
                     #~(begin
                         (use-modules (ice-9 textual-ports))
                         (let ((sock (socket AF_UNIX SOCK_STREAM 0)))
                           ;; Attempt to connect to the abstract Unix-domain socket outside.
                           (connect sock AF_UNIX #$socket-name)
                           ;; If we reach this line, then we successfully managed to connect to
                           ;; the abstract Unix-domain socket.
                           (call-with-output-file #$output
                             (lambda (port)
                               (display (get-string-all sock) port)))))
                     #:options `(#:hash-algo sha256
                                 #:hash ,(sha256 (string->utf8 test-message))
                                 #:local-build? #t)))

    (define build-result
      ;; Listen on the abstract Unix-domain socket at SOCKET-NAME and build
      ;; CHECK.  If CHECK succeeds, then it managed to connect to SOCKET-NAME.
      (let ((sock (socket AF_UNIX SOCK_STREAM 0)))
        (bind sock AF_UNIX socket-name)
        (listen sock 1)
        (call-with-new-thread
         (lambda ()
           (match (accept sock)
             ((connection . peer)
              (format #t "accepted connection on abstract Unix-domain socket~%")
              (display test-message connection)
              (close-port connection)))))
        (with-store store
          (let ((drv (run-with-store store
                       (lower-object check))))
            (guard (c ((store-protocol-error? c) c))
              (build-derivations store (list drv))
              #t)))))

    (if (store-protocol-error? build-result)
        (format (current-error-port)
                "Abstract Unix-domain socket hole is CLOSED, build failed with ~S.~%"
                (store-protocol-error-message build-result))
        (format (current-error-port)
                "Abstract Unix-domain socket hole is OPEN, guix-daemon is VULNERABLE!~%"))
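For readers unfamiliar with abstract Unix-domain sockets, here is a small illustration of why only the network namespace matters (not part of the advisory; it assumes the `socat` tool is installed): two otherwise isolated processes on the same host can talk to each other without any socket file ever appearing on disk.

    # terminal 1: listen on an abstract socket named "demo-hole"
    socat ABSTRACT-LISTEN:demo-hole,fork -
    # terminal 2 (same network namespace, possibly different mount/PID namespace
    # or chroot): connect to it; no filesystem path is involved
    echo hello | socat - ABSTRACT-CONNECT:demo-hole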
24.06.2025 14:00
GNUnet News: GNUnet 0.24.3

# GNUnet 0.24.3

This is a bugfix release for GNUnet 0.24.2. It fixes some regressions and minor bugs.

#### Links

* Source: https://ftpmirror.gnu.org/gnunet/gnunet-0.24.3.tar.gz ( https://ftpmirror.gnu.org/gnunet/gnunet-0.24.3.tar.gz.sig )
* Detailed list of changes: https://git.gnunet.org/gnunet.git/log/?h=v0.24.3
* NEWS: https://git.gnunet.org/gnunet.git/tree/NEWS?h=v0.24.3

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/
23.06.2025 22:00
parallel @ Savannah: GNU Parallel 20250622 ('Павутина') released

GNU Parallel 20250622 ('Павутина') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

    GNU Parallel is a seriously underrated tool, at least based on how little I hear people talk about it (and how often I possibly over-use it)
      -- Byron Alley @byronalley

New in this release:

* No new features.
* Bug fixes and man page updates.

News about GNU Parallel:

* Maîtriser la commande parallel https://blog.stephane-robert.info/docs/admin-serveurs/linux/parallel/

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

## About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs. For example, you can run this to convert all jpeg files into png and gif files and have a progress bar:

    parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

    find . -name '*.jpg' | parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

    O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:

* Give a demo at your local user group/team/colleagues
* Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
* Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
* Request or write a review for your favourite blog or magazine
* Request or build a package for your favourite distribution (if it is not already there)
* Invite me for your next conference

If you use programs that use GNU Parallel for research:

* Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

* (Have your company) donate to FSF https://my.fsf.org/donate/

## About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries. The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

    O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

## About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
22.06.2025 22:46
FSF Blogs: GNU Press Shop is open! FSF 40 gear, books & more -- now until July 28 The Free Software Foundation's (FSF) summer fundraiser is underway, and that means the GNU Press Shop is open!
20.06.2025 20:40
FSF News: Free software can defy dystopia Coming soon.
16.06.2025 21:51
FSF News: Free software can defy dystopia Coming soon.
16.06.2025 21:51
FSF Events: Free Software Directory meeting on IRC: Friday, June 20, starting at 12:00 EDT (16:00 UTC) Join the FSF and friends on Friday, June 20 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
16.06.2025 20:02
FSF Events: Free Software Licensing 101 with FSF copyright & licensing associate Craig Topham This free software licensing 101 talk is intended to cover as many details as possible involving the subject of free software licensing. The talk is broad in scope and is geared toward the beginner and intermediate audience.
12.06.2025 19:25
gcl @ Savannah: Small release errata

Greetings! While these tiny issues will likely not affect many users, if any, there are alas a few tiny errata with the 2.7.1 tarball release. Posted here just for those interested. They will of course be incorporated in the next release.

    modified gcl/debian/rules
    @@ -138,7 +138,7 @@ clean: debian/control debian/gcl.templates
        rm -rf $(INS) debian/substvars debian.upstream
        rm -rf *stamp build-indep
        rm -f debian/elpa-gcl$(EXT).elpa debian/gcl$(EXT)-pkg.el
    -   rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info*
    +   rm -rf $(EXT_TARGS) info/gcl$(EXT)*.info* gcl_pool
     debian-clean: debian/control debian/gcl.templates
        dh_testdir

    modified gcl/git.tag
    @@ -1,2 +1,2 @@
    -"Version_2_7_0"
    +"Version_2_7_1"

    modified gcl/o/alloc.c
    @@ -707,6 +707,7 @@ empty_relblock(void) {
       for (;!rb_emptyp();) {
         tm_table[t_relocatable].tm_adjgbccnt--;
         expand_contblock_index_space();
    +    expand_contblock_array();
         GBC(t_relocatable);
       }
       sSAleaf_collection_thresholdA->s.s_dbind=o;
11.04.2025 22:06
Simon Josefsson: Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

* Release artifacts should be built in a way that can be reproduced by others.
* It should be possible to build a project from a source tarball that doesn't contain any generated or vendor files (e.g., in the style of git-archive).

While implementing these ideas for a small project was accomplished within weeks (see my announcement of Libntlm version 1.8), addressing this in complex projects uncovered concerns with tools that had to be addressed, and things stalled for many months pending that work. I had the notion that these two goals were easy and shouldn't be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.

I'm now happy to recap some of the work that led to the releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, and libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too. What have the obstacles been so far? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.

First let's look at the problems we need to solve to make "git-archive" style tarballs usable.

## Version Handling

To build usable binaries from a minimal tarball, the build needs to know which version number it is. Traditionally this information was stored inside configure.ac in git. However, I use gnulib's git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a `git-archive` tarball. My solution to this was to make use of the `export-subst` feature of the `.gitattributes` file. I store the file `.tarball-version-git` in git containing the magic cookie like this:

    $Format:%(describe)$

With this, `git-archive` will replace it with a useful version identifier on export; see the libtasn1 patch to achieve this. To make use of this information, the `git-version-gen` script was enhanced to read this information; see the gnulib patch. This is invoked by `./configure` to figure out which version number the package is for.

## Translations

We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation Project when running `./bootstrap`, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data; the tools just download and place whatever `wget` downloaded into your source tree (printf-style injection attack, anyone?). We could improve this (e.g., publish GnuPG-signed translation messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example.
The Translation Project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However, that is not how these tools and projects are designed. Instead I reverted back to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into the `./bootstrap` and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach: store all directly downloaded `po/*.po` files as `po/*.po.in` and make the `./bootstrap` tool move them into place; see the libidn2 commit followed by the actual 'make update-po' commit with all the translations, where one essential step is:

    # Prime po/*.po from fall-back copy stored in git.
    for poin in po/*.po.in; do
      po=$(echo $poin | sed 's/.in//')
      test -f $po || cp -v $poin $po
    done
    ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

## Fetching vendor files like gnulib

Most build dependencies are in the shape of "You need a C compiler". However, some come in the shape of "source-code files intended to be vendored", and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for `*.m4` macros from the GNU Autoconf Archive. However, I'm not confident that the solution for all vendor files must be the same.

For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running `./bootstrap` took forever since there are five gnulib instances used, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on `./bootstrap` to fetch a gnulib git clone when building. I like this model, however it doesn't work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I've made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don't think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won't work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot; see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during `./bootstrap`. Essentially this is doing:

    GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
    wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
    gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
    rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
    export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
    ./bootstrap --no-git
    ./configure
    make

## Test the git-archive tarball

This goes without saying, but if you don't test that building from a `git-archive` style tarball works, you are likely to regress at some point.
Use CI/CD techniques to continuously test that a minimal `git-archive` tarball leads to a usable build.

## Mission Accomplished

So that wasn't hard, was it? You should now be able to publish a minimal `git-archive` tarball, and users should be able to build your project from it.

I recommend naming these archives `PROJECT-vX.Y.Z-src.tar.gz`, replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory, named `PROJECT-vX.Y.Z/`, containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with `v`) and contains a wildcard-friendly `-src` substring. Alas, there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc. all seem to use their own slightly incompatible variants.

Let's go on to see what is needed to achieve reproducible "make dist" source tarballs. This is the release artifact that most users use, and these tarballs often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges in making these reproducible?

## Build dependencies causing different generated content

The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different output. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes.

My approach meanwhile is to build things using similar environments and compare the outputs for differences. I've found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the differences gives me only the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.

## Timestamps

Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are `*.po` translation files and Texinfo manuals.
For translation files, I have resolved this by making sure the files use a predictable `POT-Creation-Date` timestamp, which I set to the modification timestamp of the `NEWS` file in the repository (itself set to the date of the latest `git commit` elsewhere) like this:

    dist-hook: po-CreationDate-to-mtime-NEWS
    .PHONY: po-CreationDate-to-mtime-NEWS
    po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
        $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
          if test -f "$$p"; then \
            $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
            if cmp $$p $$p.tmp > /dev/null; then \
              rm -f $$p.tmp; \
            else \
              mv $$p.tmp $$p; \
            fi \
          fi \
        done

Similarly, I set a predictable modification time for the Texinfo source file like this:

    dist-hook: mtime-NEWS-to-git-HEAD
    .PHONY: mtime-NEWS-to-git-HEAD
    mtime-NEWS-to-git-HEAD:
        $(AM_V_GEN)if test -e $(srcdir)/.git \
            && command -v git > /dev/null; then \
          touch -m -t "$$(git log -1 --format=%cd \
            --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
        fi

However, I've realized that this needs to happen earlier and probably has to be run at `./configure` time, because the `doc/version.texi` file is generated on the first build before running '`make dist`', and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking. The method to address these differences isn't really important, and the methods change over time depending on preferences. What is important is that the differences are eliminated.

## ChangeLog

Traditionally ChangeLog files were manually prepared, and they still are for some projects. I maintain git2cl, but recently I've settled on gnulib's gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file, depending on how deeply it was cloned.

For Libntlm I simply disabled use of a generated ChangeLog, because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full "`make dist`" source archives from a minimal "`git-archive`" source archive. However, for other projects I've settled on a middle ground. I realized that for '`git describe`' to produce reproducible output, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled on the following recipe to produce `ChangeLog` files covering all changes since the last release.

    dist-hook: gen-ChangeLog
    .PHONY: gen-ChangeLog
    gen-ChangeLog:
        $(AM_V_GEN)if test -e $(srcdir)/.git; then \
          LC_ALL=en_US.UTF-8 TZ=UTC0 \
          $(top_srcdir)/build-aux/gitlog-to-changelog \
            --srcdir=$(srcdir) -- \
            v$(PREV_VERSION)~.. > $(distdir)/cl-t && \
          { printf '\n\nSee the source repo for older entries\n' \
              >> $(distdir)/cl-t && \
            rm -f $(distdir)/ChangeLog && \
            mv $(distdir)/cl-t $(distdir)/ChangeLog; } \
        fi

I'm undecided about the usefulness of generated `ChangeLog` files within '`make dist`' archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility for this in case we lose all copies of the upstream git repositories.
I can sympathize with the idea that the concept of `ChangeLog` files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

## Long-term reproducible trusted build environment

Distributions come and go, and old releases of them go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (`guix time-machine`) that has bootstrappable properties for additional confidence. However, I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up.

Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with the source code included in the distribution. This means nobody can reproduce the source tarball of Libidn without trusting someone else's C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and that likely fails due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# are unrelated implementations), or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix that resolved this.

## Embedded images in Texinfo manual

For Libidn, one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with `grep` to remove some timestamps, however with compression this is no longer possible, and actually the `grep` command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the `*.fig` file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on `guix graph`, I came to re-discover the graphviz language and the tool called `dot` (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here.
Cooperating with automake's texinfo rules for `make dist` proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build the images that can be condensed into:

info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot

DISTCLEANFILES = \
	libidn-components.eps libidn-components.png libidn-components.pdf

libidn-components.eps: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
	$(AM_V_at)! grep %%CreationDate $@.tmp
	$(AM_V_at)mv $@.tmp $@

libidn-components.pdf: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is an alternative to 'gs', but adds ~4kb to the file.
# Ghostscript adds <1kb. Why can't 'dot' avoid setting CreationDate?
	$(AM_V_at)printf '[ /ModDate ()\n /CreationDate ()\n /DOCINFO pdfmark\n' > pdfmarks
	$(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
	$(AM_V_at)rm -f $@.tmp pdfmarks
	$(AM_V_at)mv $@.tmp2 $@

libidn-components.png: $(srcdir)/components.dot
	$(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
	$(AM_V_at)mv $@.tmp $@

pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png

Surely this can be improved, but I'm not yet certain which way forward is best. I like having a text representation as the source of the image. I'm sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using `exiftool -CreateDate` as an alternative to Ghostscript, but using it to remove the timestamp _added_ ~4kb to the file size, and naturally I was appalled by this ignorance of impending doom.

## Test reproducibility of tarball

Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I've settled on a small GitLab CI/CD pipeline job that performs a bit-by-bit comparison of generated `make dist` archives. It also performs a bit-by-bit comparison of generated `git archive` artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:

0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
  # Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
  # Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
  # Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
  # Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
  # Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

Notice that I discovered that `git archive` output differs over time too, which is natural but a bit of a nuisance. The job output is illuminating in that it includes the SHA256 checksums of all generated tarballs; for example, from the libidn2 v2.3.8 job log:

$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293 b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293 b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8 s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8 s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06 r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1 b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1 b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a r-guix/libidn2-2.3.8.tar.gz

I'm sure I have forgotten or suppressed some challenges (sprinkling `LANG=C TZ=UTC0` helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement
these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Happy hacking as you put this into practice!
24.03.2025 11:09 — 👍 0    🔁 0    💬 0    📌 0
poke @ Savannah: GNU poke 4.3 released I am happy to announce a new release of GNU poke, version 4.3. This is a bugfix release in the 4.x series. See the file NEWS in the distribution tarball for a list of issues fixed in this release. The tarball poke-4.3.tar.gz is now available at https://ftp.gnu.org/gnu/poke/poke-4.3.tar.gz. GNU poke (http://www.jemarch.net/poke) is an interactive, extensible editor for binary data. Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them. Thanks to the people who contributed with code and/or documentation to this release. Happy poking! Mohammad-Reza Nabipoor
10.03.2025 23:05 — 👍 0    🔁 0    💬 0    📌 0
FSF Events: Free Software Directory meeting on IRC: Friday, March 7, starting at 12:00 EST (17:00 UTC) Join the FSF and friends on Friday, March 7 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
25.02.2025 15:45 — 👍 0    🔁 0    💬 0    📌 0
FSF Events: Free Software Directory meeting on IRC: Friday, February 28, starting at 12:00 EST (17:00 UTC) Join the FSF and friends on Friday, February 28 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
25.02.2025 15:45 — 👍 0    🔁 0    💬 0    📌 0
FSF Events: Free Software Directory meeting on IRC: Friday, February 14, starting at 12:00 EST (17:00 UTC) Join the FSF and friends on Friday, February 14 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
11.02.2025 21:28 — 👍 0    🔁 0    💬 0    📌 0
FSF Blogs: FSF talked about education, copyright management, and free machine learning at FOSDEM 2025 Four FSF staff members had a great time sharing their knowledge and learning at FOSDEM 2025 in Brussels.
11.02.2025 19:30 — 👍 0    🔁 0    💬 0    📌 0