What does Docker add to just plain LXC ( https://docs.docker.com/faq/ ):
========================================================================

a) Portable deployment across machines

Yes, Docker makes it easy to "package" a container and transfer and deploy it on another machine. However, just copying over the container fs and the lxc config files is easy as well (might require a little bit of scripting - a rough sketch follows after d) below), has much less overhead and is probably much faster. Deployment on a different architecture/OS is a different story and needs its own tweaking anyway ...

b) Application-centric

Hmmm, ok - basically a "chroot $rootfs $appbinary", i.e. running the application in a more or less isolated environment (sandbox). That certainly has its use cases and may come with slightly smaller images, depending on the app. However, this is not what we want for servers. For servers we want people/ops to be able to log in at any time and configure them to their needs (incl. network, firewall, user and application management). What we definitely do not want is that they have to log in on the GZ (bare metal) and get "god" privileges there just to be able to administrate their container! So wrt. servers, containers should behave/feel as much as possible like a real machine (bare metal).

c) Automatic build

Well, as already said in a) - a simple tgz of the $rootfs plus the config scripts does the same and doesn't take much effort. Preparing/mangling it for a different architecture/OS is a different story, but that is usually not needed anyway/a corner case. Each OS/machine has its own characteristics ...

d) Versioning

IMHO pure overhead. The only thing which is needed is a modern filesystem which allows snapshotting on-the-fly (see the snapshot sketch below). Right now btrfs is pretty unstable/has a chaotic dev cycle (release, then test, then fix), misses essential features like hot spares and separate log/metadata devices (SSDs), and seems to have performance problems for certain workloads. Most promising is ZFS on Linux (ZoL), however, it probably needs ~2 more years to get into the kernel and become stable/performant.
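To illustrate the "little bit of scripting" from a) and c): a minimal sketch of packaging and moving a container by hand. Container name, target host and the default LXC paths are just example assumptions, and host-specific bits (e.g. the network settings in the lxc config file) may still need a tweak on the target:

    CT=web1                      # hypothetical container name
    TARGET=otherhost             # hypothetical target machine

    # stop it, then pack config + rootfs into one tarball, preserving numeric owners
    lxc-stop -n "$CT"
    tar --numeric-owner -czf "/tmp/$CT.tgz" -C /var/lib/lxc "$CT"

    # copy it over, unpack it at the same place on the target and start it there
    scp "/tmp/$CT.tgz" "root@$TARGET:/tmp/"
    ssh "root@$TARGET" "tar --numeric-owner -xzf /tmp/$CT.tgz -C /var/lib/lxc && lxc-start -n $CT -d"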
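And what I mean by snapshotting instead of "versioning" for d): assuming the rootfs of a container lives on its own ZFS dataset (or btrfs subvolume) - dataset, paths and container name are again just examples:

    # cheap point-in-time snapshot before an upgrade/config change
    zfs snapshot tank/lxc/web1@before-upgrade

    # ... change whatever needs changing inside the container ...

    # didn't work out? roll back and done
    zfs rollback tank/lxc/web1@before-upgrade

    # roughly the same with btrfs, if the rootfs is a subvolume
    btrfs subvolume snapshot /var/lib/lxc/web1/rootfs /var/lib/lxc/web1/rootfs@before-upgrade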
e) "Component reuse" (ready-to-use images [on public repositories])

While this might be useful on desktops for "just give it a try"/marketing purposes, it is certainly not the way to go for servers, unless one is in the DC business and hosts several hundred instances of the same kind of server. For SME self-hosted servers the default (or what it should be, as opposed to one server instance hosting everything) is that one has several servers, each with its own personality (DNS server, web server, application server, file server, NIS/LDAP server, etc.). Deploying all of them from the same image or hosting them all within the same instance is not very smart (a lot of overhead, security issues, maintenance, ...). Also, who would run a server with "unknown content" in their internal network (i.e. deployed from an image repository)? Probably people who have learned nothing in the last years, but not responsible admins. So what is better: creating images on one's own, letting them bit-rot and updating them when needed (at most 1-2 times a year)? Or having a compact (in the sense of no more than really needed) list of packages for each type of server, so that an instance can be created on demand from RECENT/FIXED packages plus the config scripts one has to write anyway (or document somehow)? A rough sketch of the latter is appended at the end of this post. IMHO the latter is much more efficient and resource friendly, if one has a capable admin. If not, one shouldn't run servers ;-) So what one actually really needs is a package mirror or cache nearby, to speed up deployments a little bit ...

BTW: It would be an interesting task to find out how many passwords/how much sensitive information has already leaked through [accidentally] published images (due to their "easy-to-use" nature ... ;-))

f) Sharing

Basically the same as e).

g) Tool ecosystem

What for? Because Docker hides so many LXC details (to make its use easier) that it finally becomes a nightmare to tweak it in a way that fulfills one's own requirements? Sorry, then IMHO one should prefer to use LXC directly and avoid all the other useless overhead/bloat around it.

So my conclusion: Docker might be ok for executing simple applications in a more isolated environment, however, it is not the right thing to use for providing permanent services, as long as one doesn't have to run several dozens of similar instances as DCs do. To be honest, even LXC is IMHO not a proper thing to host containers (due to the limitations of Linux itself, i.e. networking, filesystem, privilege separation), but it is probably the best one can get on Linux right now. Compared to Solaris zones (incl. SmartOS/OmniOS et al.) Linux is still ~8-10 years behind, and thus we can only hope that it'll catch up soon ...
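PS: the sketch promised in e) - creating an instance on demand from a per-role package list plus one's own config scripts. Container name, the package list and configure-dns.sh are placeholders; the chrooted apt-get is the crude variant, the distro template's own package options or running apt inside the started container would work just as well:

    PKGS="bind9 bind9utils"      # hypothetical package list for a DNS "personality"

    # create a minimal container from the distro template ...
    lxc-create -n dns1 -t debian

    # ... install the role's packages from recent, fixed distro repositories ...
    chroot /var/lib/lxc/dns1/rootfs apt-get update
    chroot /var/lib/lxc/dns1/rootfs apt-get install -y $PKGS

    # ... apply one's own config scripts (placeholder name) and start it
    sh ./configure-dns.sh /var/lib/lxc/dns1/rootfs
    lxc-start -n dns1 -d

With a package mirror or cache nearby (e.g. apt-cacher-ng), this is also reasonably fast.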