Docker in its current form places docker commands into a work queue and executes them one at a time, sequentially (synchronously). This means that if you're building a single-threaded API (node.js) that issues docker commands such as standing up containers or downloading images, it will very quickly get backed up and overload the docker work queue. I've gotten that far. The question is: how could I re-model docker to handle commands asynchronously?
I care less about getting status updates for the commands, or even being able to supply a callback for when they're done. My primary goal is to be able to rapidly call docker to stand up containers or execute commands in high volume without affecting the API's response times.
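As a rough sketch of the pattern I mean (the socket path and endpoints are the documented Docker Engine API defaults; everything else is illustrative), a Node.js service can issue docker commands as plain async HTTP requests over the Unix socket, instead of shelling out to the docker CLI:

    // Sketch: talk to the Docker Engine API over /var/run/docker.sock so each
    // command is just an async HTTP request; nothing blocks the event loop.
    const http = require('http');

    // Send one request to the Engine API and resolve with the parsed JSON
    // body (or null for empty 204 responses).
    function dockerRequest(method, path, body) {
      return new Promise((resolve, reject) => {
        const payload = body ? JSON.stringify(body) : null;
        const req = http.request({
          socketPath: '/var/run/docker.sock',
          method,
          path,
          headers: payload ? { 'Content-Type': 'application/json' } : {},
        }, (res) => {
          let data = '';
          res.on('data', (chunk) => { data += chunk; });
          res.on('end', () => {
            if (res.statusCode >= 400) return reject(new Error(res.statusCode + ': ' + data));
            resolve(data ? JSON.parse(data) : null);
          });
        });
        req.on('error', reject);
        if (payload) req.write(payload);
        req.end();
      });
    }

    // Create and start a container; many of these can be in flight at once.
    async function runContainer(image) {
      const created = await dockerRequest('POST', '/containers/create', { Image: image });
      await dockerRequest('POST', '/containers/' + created.Id + '/start', null);
      return created.Id;
    }

    runContainer('ubuntu').then((id) => console.log('started', id), console.error);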
Some thoughts -
Spin up multiple virtual machines, each with its own docker daemon: doesn't efficiently utilize each virtual machine. Costly. Inelegant solution.
Run multiple docker daemons on one host: still not truly asynchronous. Additionally, the daemons are unable to share resources such as images, so if you have 10 docker daemons that each want to run an Ubuntu container, you will need 10 separate copies of the Ubuntu image, taking up 10 times the space of the original image.
You need to look at the following:
Registry: The Docker registry image can be configured as a caching (pull-through) proxy. Your docker daemons are then configured to use this caching registry for image pulls: only the first pull downloads the image from Docker Hub, and it is cached locally, so a second daemon requesting the same image gets it almost instantly. You can fetch the official (library) registry image with docker pull registry.
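A hedged sketch of that setup, using the documented REGISTRY_PROXY_REMOTEURL setting and the daemon's registry-mirrors option (the port and container name are illustrative):

    # Run the official registry image as a pull-through cache for Docker Hub.
    docker run -d -p 5000:5000 --name hub-mirror \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # On each docker daemon, point pulls at the mirror in /etc/docker/daemon.json,
    # then restart the daemon:
    # {
    #   "registry-mirrors": ["http://localhost:5000"]
    # }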
Docker Swarm: You are talking about starting more docker daemons and balancing tasks between them. That is exactly what Docker Swarm already does, together with the integrated overlay network it provides.
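As a rough illustration of that model (the token, addresses, and service name are placeholders), a manager schedules replicas across every joined daemon:

    # Initialize a manager, then join workers (token/IP are placeholders).
    docker swarm init
    docker swarm join --token <worker-token> <manager-ip>:2377

    # Ask the swarm to keep 10 replicas running; the call returns quickly
    # and the scheduler spreads the tasks across all joined daemons.
    docker service create --detach --replicas 10 --name web nginx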
Kubernetes: An alternative to Docker Swarm with many more features and greater flexibility (and also more complexity).
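For comparison, a minimal sketch of the Kubernetes equivalent of the swarm service above (all names illustrative); applying it with kubectl apply -f asks the scheduler to keep 10 replicas running across the cluster:

    # web.yaml: a Deployment that keeps 10 nginx replicas running.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 10
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx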