Contain yo self

Posted on August 19, 2023 • 8 min read • 1,529 words

Using Docker to clean up userspace and containerize local tools, making it easier to move to another system and replicate your own environment.

Photo by Sabu Thomas Vincent on NDZ

Docker to compartmentalize your local system.

At work people have switched to macOS, and the transition has not been entirely painless for everyone.

Some tools aren't working as expected, and others are not easily available from package managers like Homebrew. As an example, the sshpass utility is not installable this way since it's deemed a security risk by the people behind Homebrew. You can get around this by installing sshpass from random people's GitHub repos through Homebrew, but that feels a little sketchy, or you could track down the source code and build it yourself on the system. There is a solution, it just doesn't feel very Apple-like.

This has piqued an interest in containers to solve some of those issues, and I am going to go through how to set that up in this post.

Docker on mac

Docker on macOS is by default supplied through Docker Desktop, and love it or hate it, it comes with some pros and cons.

Docker Engine runs as root and by definition has system-level access, so configuring it incorrectly is a security risk on the system. Many people run Docker at boot and either add themselves to the docker group or use passwordless sudo to have an easier time managing Docker.

If I get access to a basic user without sudo rights, but that user is a member of the docker group, I could do something like:

docker run -it -v /etc:/etc ubuntu bash

This would put me in a shell with root access to the host's entire /etc directory, and I could alter host files at will.

Having Docker tied to Docker Desktop makes it more obvious that Docker is running in the first place, but more importantly, Docker runs in userspace and is limited to the user's current access rights. Docker Desktop also comes with more utilities and user guidance, like Docker Scout.
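
As a quick taste of that extra tooling, here is a rough sketch of checking an image with Docker Scout, assuming the Scout CLI plugin is available (it ships with recent Docker Desktop versions):

# give a vulnerability and base-image overview of a local image
docker scout quickview ubuntu:22.04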

Docker Desktop can be set to start at boot without being as big a security risk as Docker Engine very easily can be.

Using docker to seamlessly integrate with the system

In the section above I started a Docker container at will and granted myself a bash shell in that container. This could also have been another tool like telnet, ansible, or even vim or emacs.

Let's create a simple local image to handle our telnet and ssh connections.

FROM ubuntu:22.04

RUN apt-get update
RUN apt-get install -y telnet ssh vim

Put the contents above in a file called Dockerfile and run:

docker build -t toolbox .

Now wait for Docker to build the image, and when it is done, run the following to test that it is working.

docker run -it toolbox telnet telehack.com
Trying 64.13.139.230...
Connected to telehack.com.
Escape character is '^]'.
starwars
... best starwars version begins playing

Hiding docker in a closet

Typing that command, remembering the image name, and so on can be tedious, so to make things a little easier and feel more native we can utilize shell aliases.

Depending on your shell environment, an alias goes in different places. For a bash shell it goes in ~/.bash_aliases, and if that file does not exist, directly in ~/.bashrc.

For zsh, which is the default shell on macOS, it would go in ~/.zshrc.

An alias can shadow a system-level binary, so even when telnet is installed locally we can force any telnet invocation to use the Docker version instead.

To use our new Docker image for telnet on macOS, open a terminal and run this (change the file at the end to the correct one for your shell):

echo 'alias telnet="docker run -it toolbox telnet"' >> ~/.zshrc

Now either open a new terminal or run: source ~/.zshrc

To verify it is working, open two terminal windows or tabs. In the first one run: telnet telehack.com

And in the other window now run: docker ps

docker ps should show a running container. Killing the telnet session in the first terminal window and then running docker ps again should show that the container died when the telnet command finished.
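
One optional tweak to the alias, assuming you do not need the stopped containers afterwards: adding --rm makes Docker remove the container as soon as telnet exits, so they do not pile up in docker ps -a.

echo 'alias telnet="docker run -it --rm toolbox telnet"' >> ~/.zshrc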

Going even further beyond

Now, running self-contained tools through a Docker container can only get you so far. But what if we need the tools to interact with our local filesystem? It could be something like ansible-playbook running in a container while the playbooks are located on the host OS filesystem, or having your development environment and configuration in a container while using it to edit host files. This is where Docker volumes come in handy.

Running a configured vim through docker.

The following assumes you already have a local vim installation configured to your liking and simply recreates that setup in the container.

The astute reader will have noticed that in our previous Dockerfile we also installed vim. That was both to show that one image can have multiple tools installed instead of building an image for each little tool, and also a precursor to this section.

Given the Dockerfile from earlier we are going to append a little bit to it.

FROM ubuntu:22.04

RUN apt-get update
RUN apt-get install -y telnet ssh vim

# add vim config files (COPY sources must live inside the build context)
COPY .vimrc /root/
# add vim plugins like ctrlp etc.
COPY .vim /root/.vim

# change working directory so that volumes do not overwrite our vim config.
WORKDIR /tmp/stuff
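
One thing to be aware of: COPY can only reference files inside the build context, so the vim config has to sit next to the Dockerfile before building. A minimal sketch, assuming the Dockerfile lives in its own directory:

# copy the vim config into the build context so COPY can find it
cp ~/.vimrc .
cp -R ~/.vim .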

Now rebuild the image to get an updated version

docker build -t toolbox .

After the build process finishes we now also have a configured vim editor in our toolbox image. Running vim from the container in the same fashion as we did with telnet is not very useful, though, as it would only grant us access to files inside the container.

Instead, we are going to utilize Docker volumes to give us access to local files.

docker run -it -v ${PWD}:/tmp/stuff toolbox vim docker-vim-test.txt

This would land you in a vim session where you could write "pleb.dk is the best site on the interwebz". Now you alt-tab to a browser and start googling how to save and exit vim (:wq).

After you are finally released from the virtual prison that is vim, you will be back on your host system with a file called docker-vim-test.txt in your current directory. Opening the file should show the contents we put in there from the Docker container.

What actually happened? While building the updated toolbox image we changed the working directory to /tmp/stuff at the end. In the docker run command we added -v ${PWD}:/tmp/stuff, which tells Docker to mount our current working directory at /tmp/stuff inside the container, thereby giving the container access to the files in our current directory.

This can of course also be made easier and more seamless by creating an alias like we did for telnet. I will leave that exercise up to the reader.

What would be the point of running tools inside a container?

In this particular example vim is built from our already configured system to make the examples easier and shorter.

In a real-world scenario you would configure vim directly in the Dockerfile instead. By doing so you gain the benefit that your editor setup is self-contained, and should you change systems or have your computer crash, getting up and running again is as easy as rebuilding the container.
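
A rough sketch of what that could look like; the dotfiles repository URL is a placeholder and the setup is just one way to do it:

FROM ubuntu:22.04

RUN apt-get update
RUN apt-get install -y git vim

# placeholder: clone your own dotfiles repo and put the config where vim expects it
RUN git clone https://github.com/<your-user>/dotfiles /root/dotfiles \
    && cp /root/dotfiles/.vimrc /root/.vimrc \
    && cp -R /root/dotfiles/.vim /root/.vim

WORKDIR /tmp/stuff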

Running tools from inside containers ensures your setup is replicable and keeps your host OS free of customizations and tools that could potentially lead to instability and require a reinstall. Whereas rebuilding a container is quick and easy, reinstalling your entire operating system is harder and more time consuming.

This leads into the discussion of having one big image with all your tools versus having multiple images with one tool in each. The Docker way would be multiple images.

The benefits of having multiple smaller images would be:

  • Faster build times
  • Simpler Dockerfiles
  • No package conflicts, as there is only one tool in each image
  • Faster startup times for the images
  • Issues with one image only affect that image, and the other images can be used while debugging the errors.
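
As a sketch of what the multi-image approach could look like (the Dockerfile names and the ansible image here are made up for illustration):

# one tiny image per tool, both built from the same ubuntu:22.04 base
docker build -t tool-telnet -f Dockerfile.telnet .
docker build -t tool-ansible -f Dockerfile.ansible .

# in ~/.zshrc, one alias per tool (single quotes so ${PWD} is expanded when the alias runs)
alias telnet='docker run -it --rm tool-telnet telnet'
alias ansible-playbook='docker run -it --rm -v ${PWD}:/tmp/stuff tool-ansible ansible-playbook'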

One would think that multiple small images would lead to a bigger overhead in disk space usage, but that overhead is actually quite small, as each image only adds the size of its difference from the base image. Given that all the tools inherit from the same base image, the increase in disk usage would be minor.

Disk space usage could be made even more efficient by utilizing :alpine or :slim images, or even setting things up as a multi-stage build process, but that is a topic for another post.

Disclaimer: The above Docker image runs vim as root. This is not an issue on Mac, as the image runs in userspace and does not have root access on the host system by default, but on some Linux distros this would lead to the files edited by "docker vim" being owned by root. This can be fixed by replicating your current user in the Docker image, but that is also a topic for another post.

Follow me

If for some reason you would want to see more of me, I also generate web trash at these locations. Should you wanna buy me a beer, click the heart, as beer is love.