Distributing third party applications via Docker?


Recently the discussion around how to distribute third-party applications for "Linux" has once again become the topic of the hour, and for a good reason: Linux is becoming mainstream outside of the free software world. Having every distribution ship a perfectly packaged, version-controlled and natively compiled build of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great model for popular free software applications, but it works less well for niche apps and for non-free software. In those cases the developers want to package the software themselves, distribute that package to end users (either directly or through some other channel, such as an app store) and maintain just one version that works on any Linux distribution and keeps working for a long while.

For me the topic really hit home at DebConf 14, where Linus voiced his frustrations with application distribution on Linux, and some of the same points were touched on by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from the systemd developers (another) and the GNOME developers (part 2 and part 3).

After reading and watching all of that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd-managed zoo of btrfs volumes is something I have actually had a nightmare about.

There are far simpler solutions, with existing code, that we could start working on right now. I would prefer basing Linux application distribution on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces, and it stores its images in a datastore that can be backed by AUFS, btrfs, devicemapper or even plain files. It already has semantics for defining images, building them, running them, explicitly linking resources between containers and controlling processes.
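As a point of reference, those primitives already map to a handful of everyday docker commands (the image and file names below are just placeholders):

    docker info                         # reports, among other things, the storage backend in use
    docker build -t example/app .       # build an image from the Dockerfile in the current directory
    docker run --rm example/app         # start a new container from that image
    docker save example/app > app.tar   # export the image and all of its layers to a single file
    docker load < app.tar               # import that file again on another machine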

Let's play out a simple scenario of how third-party applications could work on Linux.

A third-party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub; let's say he picks the latest Debian stable release. He then writes a simple Dockerfile that installs his build dependencies and compiles his game inside a "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile, this one describing the runtime dependencies of the game. When a Docker image is built from this compiled folder, it is based on the "debian-app:wheezy" image, which no longer contains any development tools and is optimized for speed and size. After the build is complete, the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (once #8214 is implemented) just the filesystem layers with the actual game files plus enough metadata to reconstruct the full environment from public Docker repositories. The developer can then distribute this file to end users in whatever way is convenient for them.
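As a rough sketch of what those two Dockerfiles could look like (the base image names are the hypothetical runtime images from above; the paths and commands are made up for illustration):

    # Build-time Dockerfile: compile the game inside the development image
    FROM debian-app-dev:wheezy
    COPY src /build/src
    RUN cd /build/src && make && make install DESTDIR=/build/out

    # Runtime Dockerfile: placed next to the compiled output folder
    FROM debian-app:wheezy
    COPY out /opt/mygame
    CMD ["/opt/mygame/bin/mygame"]

The export step would then be something like "docker build -t mygame:1.0 ." in the compiled folder, followed by "docker save mygame:1.0 > mygame-1.0.docker" to produce the distributable file.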

The end user downloads the game file (through an app store application, an app store website or any other channel) and imports it into the local Docker instance. For user convenience we would need to come up with a file extension and create a small GUI that launches on double-click, similar to GDebi. There the user would be able to review what permissions the app needs to run (GL access, PulseAudio, webcam, storage for save files, ...). Enough metadata and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is started, running the command provided by the developer inside the container. Other metadata would determine further docker run options, such as whether to link in a socket for talking to PulseAudio or whether to mount a folder into the container where the game can write its save files, or even whether the application is allowed to access X (or Wayland) at all.
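A launcher translating the approved permissions into docker run options might end up executing something along these lines (the file name, socket paths and mount points are purely illustrative):

    # Import the image shipped by the developer
    docker load < mygame-1.0.docker

    # Launch it; each option corresponds to a permission the user approved:
    # the X11 socket and DISPLAY for graphics, the PulseAudio socket for sound,
    # /dev/dri for GL acceleration and a home directory folder for save games.
    docker run --rm \
        -e DISPLAY="$DISPLAY" \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e PULSE_SERVER=unix:/run/pulse/native \
        -v /run/user/1000/pulse/native:/run/pulse/native \
        -v "$HOME/.local/share/mygame:/data/saves" \
        --device /dev/dri \
        mygame:1.0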

Behind the scenes the application runs against its contained and stable libraries, while talking to a limited and restricted set of system-level services. Those services would need to be kept backwards compatible once we start down this path.

On the sandboxing side, not only is the third-party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This would, for example, allow a window manager to mark all windows spawned by an application, even if they come from a number of different processes, and to track all processes of a logical application from any one of its windows.
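For Docker containers the cgroup path of every process inside the container contains the container ID, so a window manager could in principle map a window back to its application with nothing more than the owning PID. A minimal sketch, assuming the classic cgroupfs layout (the exact path format varies with the Docker version and cgroup setup):

    # Given the PID that owns a window (e.g. from its _NET_WM_PID property),
    # look up which container, if any, that process belongs to.
    pid=12345                              # example PID
    grep -m1 docker "/proc/$pid/cgroup"    # the matching line ends in the container ID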

For updates the developer can simply create a new image and distribute a file of the same size as before, or, if the purchase goes through some kind of app store application, only the layers that actually changed can be synced over, making for a much faster update experience. Images with the same base share data on disk, which would encourage the creation of higher-level base images, such as a "debian-app-gamegl:wheezy" that all GL game developers could use, resulting in smaller installation packages.
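Such a shared base could itself be a one-Dockerfile affair; a purely hypothetical sketch (the package selection is only an example):

    # Hypothetical Dockerfile for the shared "debian-app-gamegl:wheezy" base;
    # every game built FROM it reuses these layers instead of shipping its own copies.
    FROM debian-app:wheezy
    RUN apt-get update && \
        apt-get install -y libgl1-mesa-glx libglu1-mesa libsdl1.2debian libopenal1 && \
        rm -rf /var/lib/apt/lists/*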

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now a security bug or some other issue requires the base image to be updated, while the game itself needs neither a recompile nor any change. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game onto the new base. The same mechanism could handle incompatible changes to system services: ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse which actually talks to the AwesomeAudio system service instead.
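If the runtime Dockerfile for the game layer (or the equivalent metadata from #8214) is kept around, such a re-base is just a rebuild of the unchanged game layer against the fixed base, roughly:

    docker pull debian-app-gamegl:wheezy       # fetch the security-fixed base image
    docker build -t mygame:1.0-rebased .       # rebuild the unchanged game layer on top of it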

There is no need to reinvent everything, to push package management into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability; it allows them to be recombined in different ways for different purposes, or for some part to be replaced to create a system with radically different functionality.

Or am I crazy, and should we just go and sacrifice Docker, apt, dpkg, the FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it, as an init system. And I love the ideas and the talent of the systemd developers. But I think systemd should have nothing to do with application distribution or with processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D and rewriting, obsoleting or absorbing everything between the kernel and GNOME. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions could compete on a level playing field. Or maybe we should just admit that what systemd is doing is creating a new Linux meta-distribution.


Comments

Raphael Sanches 2 years, 11 months ago

Sorry, but I have to disagree and say that your proposal is not making anything easier; quite the opposite, it makes everything much more complicated... it's almost like an "Inception" of a Linux, inside of a Linux, inside of another Linux, on top of another Linux... :O

I believe that the best things to unify all Linux distros and applications would be:
- APIs/Packs/Modules: that applications would be built with, instead of relying on single-file dependencies, distributed by the main distros' maintainers or the Linux Foundation (they don't seem to do anything relevant all day long anyway)
- Filesystem structure: organized in a way that many versions of libs (APIs/Packs/Modules) can co-exist, separating the core of the system (X server/systemd/etc.) from application dependencies/languages/APIs
- Multi-arch: build applications with the flexibility to use multiple versions of their dependencies/languages/APIs

To sum up everything I wrote: make the Linux desktop behave like Android!... Google already found a way to fix this "problem"!


aigarius 2 years, 11 months ago

Either you allow the developers to choose what they want to depend on and then provide that on the user side (using all possible container or filesystem isolation techniques), or you limit the choices available to developers (like Google does), which would basically mean the end of distributions as we know them. That is fine for a locked-down OS from one distributor, but for Linux in general it is simply an impossible proposition, mainly because it is so inflexible by design.


Some 2 years, 11 months ago

Another very good reason to distribute end-user apps via containers is to save big on integration costs.
By integration I mean getting your app running on environments other than the distro the devs have customized and actually use to build.
It is very easy for the dev to install, say, cmake, Qt, obscure_lib_1, obscure_lib_2 and all the "wrap" libraries that these libs need to talk to each other. But after some time in development, so much has been "customized" in the build environment to get the app to run that you may not even know what must be included in a distro package.
And if you want to release on RHEL derivatives, you need to know how to rpm these dependencies, while in Debian the packages may have different names, or may not even exist in the versions you need. So you have to package not once or twice but usually about a dozen times.
You end up spending an enormous amount of time doing stuff which is tangential to the scope of your app.
Containerizing apps solves this by distributing the dev environment together with the app. Yes, it will be slower and bigger, but you save on dev time. And you don't have to be a specialist in integrating software to have the thing up and running mostly everywhere there's silicon.

Personally, I believe open source will go containerized big time in the next few years, and "localizing" to specific distros will be done only as an optimization step, if needed. Especially young devs, who may just want to publish their neat app without first having to learn arcane makefile syntax, boring packaging rules or stuff like that: GitHub the code, publish the Dockerfile and reap the karma.


aigarius 2 years, 10 months ago

There is a *bit* more discussion on this topic in an LWN article: http://lwn.net/Articles/613260/

