NEWS

Linux Kernel 5.10 is Almost Ready for Release

For a while, Linus Torvalds was concerned about the size of changes for the Linux 5.10 release. However, with the release of rc6, that worry has subsided. On that point, Torvalds said, "…at least this week isn't unusually bigger than normal – it's a pretty normal rc6 stat-wise. So unless we have some big surprising left-overs coming up, I think we're in good shape."

Torvalds continued to say, "That vidtv driver shows up very clearly in the patch stats too, but other than that it all looks very normal: mostly driver updates (even ignoring the vidtv ones), with the usual smattering of small fixes elsewhere – architecture code, networking, some filesystem stuff."

As far as what to expect in the kernel, two issues that have been around for some time are finally being dealt with: one is getting the boot, and the other is being improved.

The first is the removal of the set_fs() feature, which determines whether a copy operation that is nominally aimed at user space actually targets user space or kernel memory. Back in 2010, it was discovered that this feature could be exploited to overwrite arbitrary kernel memory. The bug was fixed, but the feature remained. Since then, however, the kernel's memory management has improved to the point that, on most architectures, overriding the address space in this way is no longer needed, so set_fs() can finally be retired.

Another improvement is the continued work to address the year 2038 problem, a long-known bug in how time is encoded. On POSIX systems, time is stored as the number of seconds elapsed since January 1, 1970. As more time passes, that number keeps growing, and in January 2038 it will exceed what a signed 32-bit counter can hold, at which point affected 32-bit systems would no longer keep correct time. As of the 5.6 release, those systems could make it past the year 2038; the 5.10 release improves on that reliability.
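
For a quick illustration of where the limit falls, you can convert the largest value a signed 32-bit counter can hold into a calendar date. The listing below assumes the GNU coreutils date command, which accepts an epoch-seconds value after -d @ (BSD date uses -r instead):

  # 2^31 - 1 seconds after January 1, 1970
  date -u -d @2147483647
  # Tue Jan 19 03:14:07 UTC 2038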

The new kernel will also see filesystem and storage optimizations and support for even more hardware. It should be released in mid-December 2020.

For more information on the release, check out this message from Linus himself (https://lwn.net/Articles/838514/).

Canonical Launches Curated Container Images

Any admin who has deployed containers understands how important security is for business. The problem with containers is that it's often hard to know if an image is safe to use, especially when you're pulling random images from the likes of Docker Hub. You never know if you're going to pull down an image that contains vulnerabilities or malware.

That's why Canonical has decided to publish the LTS Docker Image Portfolio to Docker Hub. This portfolio comes with up to 10 years of Extended Security Maintenance from Canonical. On that point, Mark Lewis, VP of Application Services at Canonical, stated, "LTS Images are built on trusted infrastructure, in a secure environment, with guarantees of stable security updates." Lewis continued, "They offer a new level of container provenance and assurance to organizations making the shift to container based operations."

This means that Canonical has joined Docker Hub as a Docker Verified Publisher to ensure that hardened Ubuntu images will be available for software supply chains and multi-cloud development.

For anyone looking to download images, they can be viewed on the official Ubuntu page on Docker Hub (https://hub.docker.com/_/ubuntu) or pulled with a command like docker pull ubuntu.
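
For example, you can grab the default image or pin a specific LTS release by tag (the 20.04 tag below is just an illustration; check the Docker Hub page for the tags Canonical currently publishes):

  # Pull the default Ubuntu image
  docker pull ubuntu
  # Pull a specific LTS release by tag
  docker pull ubuntu:20.04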

For more information about this joint venture, check out the official Docker announcement (https://www.docker.com/blog/canonical-joins-docker-verified-publisher-program/).

AWS Container Image Library in the Works

In reaction to the new image limiting policy, Amazon is creating what some might call their own take on Docker Hub. Why? Because the world's most popular container image repository has started limiting the number of images that users of free accounts (and anonymous users) can pull. Although that number starts out pretty high (5,000 pulls per 6 hours for anonymous and free users), the eventual goal is a limit of 100 image pulls per 6 hours for anonymous users and 200 for free accounts. Although that might sound like quite a large number (even for free accounts), given how complicated some pod deployments can be (with large numbers of container images per deployment), that limit can easily be exceeded, especially when deploying at scale.
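
If you want to know where you currently stand, Docker documents a way to check your remaining allowance by requesting an anonymous token and reading the rate-limit headers returned for a test image. The sketch below follows that published procedure; it assumes curl and jq are installed, and the endpoint and header names should be verified against Docker's current documentation:

  # Request an anonymous pull token for Docker's rate-limit test repository
  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  # Read the ratelimit-limit and ratelimit-remaining headers from the registry
  curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit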

That's why Amazon Web Services (AWS) will be offering its own image registry: a public container registry that anyone will be able to browse and pull from, with AWS customers able to publish images to it. The new registry should hit AWS soon. In its November 2nd announcement (https://aws.amazon.com/blogs/containers/advice-for-customers-dealing-with-docker-hub-rate-limits-and-a-coming-soon-announcement/), Amazon intimated the solution would roll out "within weeks."

AWS public images will be geo-replicated for better availability and fast downloads, so you can quickly serve up your applications and services on demand. Amazon will also offer a public website, where anyone can browse the included collection of container images, view developer-provided details, and even see pull commands.

Developers who share public images will get 50GB of free storage each month, and those who pull anonymously will get 500GB of free bandwidth each month, with nominal charges once those limits have been exceeded.

Make sure to follow the Amazon Containers blog to stay informed (https://aws.amazon.com/blogs/containers/).
