The Status of Universal Package Systems
Competing Standards
Billed as the future of package management, universal package systems like Snappy and Flatpak have failed to live up to their promise.
Remember universal package systems? Although AppImage, the earliest universal package system, was first released in 2004, the concept did not capture much attention until a decade later, when Canonical released Snappy and Red Hat released Flatpak. Each was presented as the next generation of package management, usable by any distribution, and as a means of reducing the number of rival technologies. Yet in 2020, both Snappy and Flatpak have receded into the background, and the deb and RPM package management systems continue to dominate Linux, which raises the question of why Snappy and Flatpak have not fulfilled their promise.
Two quick searches on DistroWatch reveal that, of the 273 active distros listed, 39 support Flatpak and 35 support Snap packages. Those may sound like respectable numbers, until you realize that a far more arcane deviation from the norm -- shipping without systemd -- can claim 99 distros. Moreover, those figures consist mainly of major distros that support Flatpak or Snap -- often both -- while still depending primarily on traditional package managers.
Theory vs. Practice
A serious drawback of universal packages is that, to be truly universal, they require each distribution to be structured the same way as the others. Despite efforts like the Linux Standard Base, this requirement is simply not met: many distros continue to place key files in different locations. For this reason, the promise that universal packages would reduce the amount of work needed to ship packages has no practical chance of being realized.
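As a concrete illustration of the kind of layout difference involved, consider something as basic as the system's CA certificate bundle, which different distributions install under different names and directories. The Python sketch below shows the sort of path probing that software unable to rely on a fixed layout ends up doing; the candidate paths are illustrative rather than exhaustive, and the distro attributions in the comments are approximate.

```python
# A minimal sketch of probing for the CA certificate bundle, one
# well-known file whose location varies between distributions.
# The candidate paths are illustrative, not exhaustive.
from pathlib import Path

CANDIDATE_BUNDLES = [
    "/etc/ssl/certs/ca-certificates.crt",  # commonly Debian and Ubuntu
    "/etc/pki/tls/certs/ca-bundle.crt",    # commonly Fedora and RHEL
    "/etc/ssl/ca-bundle.pem",              # commonly openSUSE
    "/etc/ssl/cert.pem",                   # some other distributions
]

def find_ca_bundle() -> Path | None:
    """Return the first CA bundle found, or None if the layout is unknown."""
    for candidate in CANDIDATE_BUNDLES:
        path = Path(candidate)
        if path.is_file():
            return path
    return None

if __name__ == "__main__":
    bundle = find_ca_bundle()
    print(bundle if bundle else "No CA bundle found in the expected locations")
```

Every new distribution layout means another entry in a list like this -- exactly the per-distro special-casing that universal packages were supposed to eliminate.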
Similarly, the security case for universal packages has fallen short of theory. Alexander Larsson, one of the original Flatpak developers, has championed running Flatpak applications in containers, although the idea is still under discussion. For one thing, containers are optional, and the level of security they provide varies. Just as importantly, a 2017 study at North Carolina State University showed that over 356,000 community-contributed container images averaged 156 vulnerabilities each, while 3,800 official images averaged 76. In most cases, these vulnerabilities were rated high severity. In other words, while containers might be secure in themselves, what runs inside them may not be.
As Adrian Colyer, who summarized the North Carolina State study, pointed out, new packages often perpetuate vulnerabilities by borrowing dependencies from older ones for convenience. In fact, even a containerized package that is up to date when created may later prove to have vulnerabilities yet continue to be used. With the introduction of automatic updates and tools such as Docker Security Scanning, these problems may be mitigated, but, even so, much still depends on the conscientiousness of a package’s maintainer. Consequently, the promised advantages of universal packages often remain unrealized.
Going Against Custom
The technical challenges are only part of the reason for the lukewarm reception of universal package managers. It is true that the deb and RPM package managers were designed in an era of limited memory rather than today's abundance. However, for the average user, that hardly matters. In fact, when installing on older or limited systems, like many bottom-of-the-line laptops, the efficient memory use of traditional packages can still be relevant. Snaps, for example, may be better suited for use with containers, but, overall, the incentive to move away from traditional packages simply hasn’t been there since the novelty of universal package managers subsided.
Part of the problem may be infrastructure. When universal packages were introduced, the rationale was that they would be built by upstream developers. By removing the distributions from the delivery chain, in theory, packages could get into the hands of users more quickly. The trouble is that this was a new role for upstream developers, and one they have not always taken on. That should not be surprising, because upstream projects are simply not geared for it; often, they do not have the people to provide such a service. As a result, packaging has remained largely in the hands of distro developers, who have the experience and infrastructure for it. While distro developers appear perfectly willing to produce universal packages in different formats, the role tends to be a sideshow, secondary to maintaining traditional packages.
Even more importantly, despite what the makers of universal packages maintain, a functional distribution is not a matter of technology so much as policy -- specifically, of quality control. Josh Triplett, a longtime Debian contributor, explains the reason for Debian’s dominance among currently active distributions: “Debian without the .deb format would still be Debian; Debian without Debian Policy would just be SourceForge or rpmfind” -- that is, repositories of packages and source code that you could find on a web page, with no overall quality control.
The Debian Policy Manual is a lengthy, much-revised document that explains what goes into both a Debian package and a general release. It spells out what a package must or may contain and how it interacts with other packages, where files and logs should be placed, and dozens of other details. No other distribution is so specific about such matters, although almost all have similar documents.
As a Debian package moves from the Unstable repository to Testing and Stable, it is closely examined for compliance with Debian Policy. Moreover, as John Goerzen, another veteran Debian contributor, notes, this initial quality control is reinforced throughout a package’s life history through “unattended-updates, needrestart, debsecan, and debian-security-support […] Debian’s security team generally backports fixes rather than just say ‘here’s the new version’, making it very safe to automatically apply patches. As long as I use what’s in Debian Stable, all layers mentioned above [everything] will be protected using this scheme.”
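To give a sense of how this kind of automated monitoring works in practice, the short Python sketch below wraps Debian's debsecan tool, which lists known vulnerabilities affecting the packages installed on a system. The parsing assumes debsecan's usual line-oriented output, in which each line begins with a CVE identifier followed by the affected package name; treat it as a rough illustration rather than a finished tool.

```python
# Rough illustration: summarize debsecan output per package.
# Assumes Debian's debsecan is installed and that each output line
# begins with a CVE identifier followed by the affected package name.
import subprocess
from collections import Counter

def vulnerable_packages() -> Counter:
    """Count reported vulnerabilities per installed package."""
    result = subprocess.run(
        ["debsecan"], capture_output=True, text=True, check=True
    )
    counts = Counter()
    for line in result.stdout.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0].startswith("CVE-"):
            counts[fields[1]] += 1
    return counts

if __name__ == "__main__":
    for package, count in vulnerable_packages().most_common(10):
        print(f"{package}: {count} known issue(s)")
```

debsecan draws its data from Debian's security tracker, maintained by the same security team whose backported fixes Goerzen describes.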
Originally, universal packages lacked most of these quality control measures. Today, some, such as automatic updates, are mostly standard, but the kind of quality control offered by Debian is not instituted overnight. Quality control in major distros like Debian is the result of decades of work, and universal packages cannot be expected to equal it in a few years, especially since quality control is a specialist role that many developers do not favor.
The Future of Universal Packages
Universal packages do have one advantage: They make having multiple versions of a package on the same system easier. However, because traditional packages carry version numbers in their names, multiple versions of libraries and even desktop packages like LibreOffice can already coexist on the same system. Besides, multiple versions are a special case that does not affect many users unless they mix package repositories, and there is no reason why users should not mix package systems as they choose.
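On a Debian-based system, you can see this coexistence simply by listing installed packages whose names carry a version number, for example libssl1.1 alongside libssl3. The Python sketch below does so with dpkg-query; the libssl pattern is just an example and will match nothing on distributions that name their packages differently.

```python
# Illustrative sketch: list installed packages whose names carry a version
# number, e.g. libssl1.1 alongside libssl3 on a Debian-based system.
import subprocess

def list_versioned_packages(pattern: str) -> list[str]:
    """Return 'name version' lines for installed packages matching a pattern."""
    result = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package} ${Version}\n", pattern],
        capture_output=True, text=True,
    )
    return [line for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for entry in list_versioned_packages("libssl*"):
        print(entry)
```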
In a sense, universal packages are a modern version of static tarballs, which include all the dependencies needed to install a package. However, the majority of distributions rejected that model for package management years ago, and the improvements offered by universal packages are not enough to make them preferable to traditional systems.
Neither Flatpak nor Snap is about to go away, especially since they are backed by major Linux corporations. Free software has never been slow to produce new technologies, and universal packages are no exception. Still, their current status is far from “the future of application deployment” promised on Flatpak’s front page or “the new bullet-proof mechanism for app delivery and system updates” announced by Snappy. Instead, as a much-reprinted xkcd comic observed, the attempt to reduce the number of competing standards has only added to the confusion, and the benefit is small.