Docker images are huge (for me typically in the range of 1 to 1.5 GB each)
Pushing and pulling Docker images to/from the Docker registry is slow - even if you push/pull only new layers. Nobody on the Docker forums could explain the widely differing transfer times of layers (from slow to fast).
There is no trivial way to replicate Docker containers.
The concept of data containers remains broken as long as there is no reasonable way to replicate and back them up. There is also no trivial way to move data between containers.
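The closest thing to a workaround I know of is the `--volumes-from` + tar pattern from the Docker docs. Here is a dry-run sketch of it - the container name `xmldirector_data` and the `/data` volume path are hypothetical, and the `run()` wrapper only prints each command instead of executing it (swap it to `run() { "$@"; }` to run for real):

```shell
# Dry-run sketch of backing up and restoring a data container's volume.
# "xmldirector_data" and "/data" are placeholder names, not real ones.
run() { printf '+ %s\n' "$*"; }   # prints commands; replace with "$@" to execute

# Back up the data container's /data volume into a tarball on the host:
run docker run --rm --volumes-from xmldirector_data -v "$PWD":/backup \
    busybox tar czf /backup/data.tar.gz /data

# Restore into a fresh data container on the same or another host:
run docker run --name xmldirector_data2 -v /data busybox true
run docker run --rm --volumes-from xmldirector_data2 -v "$PWD":/backup \
    busybox tar xzf /backup/data.tar.gz -C /
```

This works, but it is exactly the kind of manual plumbing that should not be necessary for something as basic as backup and replication.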
Service-specific backup methods no longer work at the container level.
Docker builds are slow (even on fast machines).
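Part of the build slowness is self-inflicted by Dockerfile ordering: Docker re-runs every instruction after the first one whose input changed, so a badly ordered file rebuilds everything on each source edit. A sketch of the cache-friendly ordering, using a hypothetical Python app as an example:

```dockerfile
# Hypothetical Python app - the point is the instruction ordering.
FROM python:2.7

# Rarely-changing steps first, so their layers stay cached:
RUN apt-get update && apt-get install -y libxml2-dev libxslt1-dev

# Copy only the dependency list before the source, so editing app code
# does not invalidate the pip install layer:
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Frequently-changing source last:
COPY . /app
```

Even with careful ordering, a cold build remains slow - the caching only helps on rebuilds.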
Monitoring Docker containers is not trivial. Our fat development server feels sluggish under high load while running only three Docker containers, and I currently see no obvious way to track down the real cause.
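The built-in options are thin: `docker stats` (available since Docker 1.5) streams per-container CPU/memory/network counters, and the same numbers can be read straight from the kernel's cgroup accounting when the daemon itself is the suspect. A dry-run sketch - `web1` is a hypothetical container name and `CONTAINER_ID` is a placeholder; the `run()` wrapper only prints each command:

```shell
# Dry-run sketch; run() prints commands instead of executing them.
run() { printf '+ %s\n' "$*"; }   # replace with run() { "$@"; } to execute

# Live per-container resource counters from the daemon (docker >= 1.5):
run docker stats web1

# The same memory figure straight from the kernel's cgroup files,
# bypassing the Docker daemon entirely:
run cat /sys/fs/cgroup/memory/docker/CONTAINER_ID/memory.usage_in_bytes
```

Neither tells you why the host as a whole feels sluggish, which is the actual problem.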
Docker runs stably (or not) depending on the underlying Linux distro and kernel. I cannot find any official recommendations for supported distros and kernels.
The Docker registry is a huge bag of unsorted Docker image crap - it feels like the early PyPI years, when every @@@@@ felt empowered to upload any crap into the wild.
I am still using Docker for a few use cases, like providing a demo instance for XML Director or running containers with BaseX and eXist-db for unit tests.
Docker has potential, but it also has many serious issues. Of course the Docker fanboys will crucify me for this post. I will also be looking into CoreOS Rocket over the next days. Docker feels much more mature than Rocket - but Rocket has to start somewhere, and its concepts appear better thought out than those of Docker, the MongoDB of virtualization.