Dalen Catt

Crafting a Powerful Development Environment with WSL and Docker

As developers, we often find ourselves in the constant pursuit of the perfect development environment. In my journey to set up an efficient workspace for web and other applications, I’ve encountered my fair share of challenges and solutions. Today, I share my experiences with you, future me, and anyone else seeking to conquer the realm of Windows, WSL, and Docker. While the path may not always be smooth, the rewards are immense, and the possibilities are endless.

Let’s start by acknowledging that, given the choice, many of us developers would wholeheartedly opt for a native Linux environment. It’s a realm of efficiency and stability, where problems associated with Windows simply vanish. However, life is all about trade-offs, and for some of us, the siren call of Windows cannot be ignored. The need for software like Autodesk Inventor, Solidworks, Adobe Suite, Microsoft Office, and the joy of gaming pulls us back into the Windows world. While LibreOffice/OpenOffice exists as an alternative to Microsoft Office, it may not be a viable replacement in certain workplaces where compatibility is crucial.

My trusted companion in this endeavor is the Lenovo Legion 7i Gen 7, a true powerhouse that allows me to tackle any coding task while still flexing its muscles for CAD software and video editing. However, running Linux natively on this mobile workstation came with its challenges. Native Linux installations introduced compatibility issues with certain features that I adore, such as the ability to limit battery charge to preserve battery life, and intelligent GPU management to optimize power consumption while coding on the go. Unfortunately, these perks seemed to fade away when booting into a Linux environment, making the use of WSL and Docker a necessity for a balanced workflow.

WSL and Docker to the rescue: to bridge the gap between Windows and Linux, I turned to Windows Subsystem for Linux (WSL) and Docker. WSL let me work with familiar Linux tools without giving up Windows compatibility. A dream come true, right? Almost. Initially, I found that WSL lacked comprehensive documentation on best practices, which led to some confusion and missteps.

My initial encounter with WSL began with a simple directive: “Go to the Windows Store and install Ubuntu.” Easy, right? While this set me on the right path, it became evident that my developer journey required multiple environments, necessitating a more robust approach.

Docker Desktop for Windows offered an enticing way to set up containers quickly, but I soon discovered a crucial caveat. Pointing Docker dev containers at local Windows storage led to abysmal performance: files on NTFS are reached through WSL2’s cross-OS file bridge, which is dramatically slower than keeping them on the Ext4 filesystem inside a WSL install. The catch is that fully leveraging Ext4 means organizing all development projects within one of the WSL installs, so juggling multiple WSL installs becomes a necessary consideration for anyone who wants distinct development environments.
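If you want a rough feel for the gap on your own machine, timing the same filesystem-heavy command in both locations makes it obvious. The repo paths below are purely hypothetical, and the exact numbers will vary wildly by hardware:

cd /mnt/c/Users/<you>/some-repo && time git status    # Windows NTFS reached via /mnt/c
cd ~/projects/some-repo && time git status            # Ext4 inside the WSL virtual disk
Bash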

So, if you, like me, desire to build a development environment that marries Windows, WSL, and Docker harmoniously, fear not. The journey may have its share of ups and downs, but the destination is worth every effort. With a dash of creativity and a sprinkle of resilience, you can craft a workspace that caters to your every coding whim.

I needed to start by cleaning up the mess of an environment I had already made. Before anything else, I wanted to back up all of my data, including my current projects, so that I could move them into the new environment later. To do this, I used the following commands to export my existing WSL instances:

wsl --export Ubuntu-22.04 F:\wsl\backups\OldUbuntu.tar
wsl --export docker-desktop F:\wsl\backups\docker-desktop.tar
wsl --export docker-desktop-data F:\wsl\backups\docker-desktop-data.tar
PowerShell

With my second SSD in mind (F: drive), I created a folder called “wsl,” containing subfolders for images, backups, and distros. Then, I removed my Ubuntu instance from WSL using the command:

wsl --unregister Ubuntu-22.04
PowerShell

However, I held off on doing the same with the two Docker Desktop distros until later. To have a clean Ubuntu image backup, I reinstalled the Ubuntu package from the Microsoft Store, set it up, exported it to the images subfolder, and then removed it again. Here are the commands I used:

wsl --install -d Ubuntu-22.04
wsl --set-version Ubuntu-22.04 2
wsl --export Ubuntu-22.04 F:\wsl\images\UbuntuBase.tar
wsl --unregister Ubuntu-22.04
PowerShell

Next, I aimed to create a base image for my future instances that I could easily clone from. Although you can use any base image, I decided to use the clean Ubuntu image I just created, as it served as a safe starting point for my experiments. Here’s the command I used to import the Ubuntu 22.04 tar file:

wsl --import UbuntuBase F:\wsl\distros\UbuntuBase F:\wsl\images\UbuntuBase.tar --version 2
PowerShell
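A quick check from PowerShell confirms the new UbuntuBase entry is registered and listed as version 2:

wsl --list --verbose
PowerShell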

Before moving forward, I needed to set up the base image with all the software packages I’d always want access to. To begin, I created a new user for WSL to log in as and generated a WSL configuration file in /etc/wsl.conf:

# Run as root inside the fresh UbuntuBase instance.
NEWUSER=<your_username>
useradd --create-home --shell /usr/bin/bash --user-group \
    --groups adm,dialout,cdrom,floppy,sudo,audio,dip,video,plugdev,netdev \
    --password $(read -sp "Password: " pw; echo $pw | openssl passwd -1 -stdin) $NEWUSER
# Tell WSL to log in as this user by default.
touch /etc/wsl.conf
echo '[user]' >> /etc/wsl.conf
echo default=$NEWUSER >> /etc/wsl.conf
Bash

Remember to replace <your_username> with your desired username.

Finally, I exited the instance and relaunched it to log in as the new user:

wsl --terminate UbuntuBase
wsl -d UbuntuBase
PowerShell

Next, one thing I absolutely wanted was a simple way to share data between multiple WSL instances. The perfect solution was to create an extra VHD file that we could mount to WSL, ensuring seamless data accessibility across environments. To achieve this, I utilized the built-in Windows ‘Create and format hard disk partitions’ window to create the VHD file. Although I came across a PowerShell command for this task, it unfortunately didn’t work on my version of Windows. Your mileage may vary, so the manual approach sufficed for me.
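For reference, the PowerShell route I came across looks roughly like the snippet below. It relies on the Hyper-V cmdlets, which are not present on every Windows edition, so treat it as an untested sketch (the path and size are simply the values I would have used, matching where I placed the file as described below) rather than the command I actually ran:

New-VHD -Path F:\wsl\wsl_shared.vhdx -SizeBytes 256GB -Dynamic
PowerShell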

Creating the VHD file was straightforward, and I placed it at F:\wsl\wsl_shared.vhdx. However, a crucial step was ensuring that the newly created drive was not mounted to Windows, as doing so would cause an error when trying to mount it to WSL.

To mount the drive to WSL, I used the following command:

wsl --mount --vhd F:\wsl\wsl_shared.vhdx --bare
PowerShell

Once the VHD was successfully mounted, I logged back into the WSL instance as root to work with the device. To identify the device, I used the lsblk command and then proceeded to create partitions using parted. In my case, the device was “sde,” so the commands looked like this:

parted /dev/sde mklabel gpt
parted -a optimal /dev/sde mkpart primary ext4 0% 100%
lsblk
mkfs.ext4 /dev/sde1
lsblk
e2label /dev/sde1 SHARED_DATA
Bash

This created the entire ‘drive’ as an Ext4 partition labeled as “SHARED_DATA.” The next step was to ensure that the partition mounts correctly. I modified the /etc/fstab file to achieve this. First, I obtained the UUID of the device using sudo blkid. I located the entry for the partition, which looked something like this:

/dev/sde1: LABEL="SHARED_DATA" UUID="<THE_UUID>" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="primary" PARTUUID="<NOT_THIS_UUID>"
Bash

Then, I added the following line to /etc/fstab:

UUID=<THE_UUID>   /home/<your_username>/_SHARED ext4    defaults    0   2
Bash

Please remember to replace <THE_UUID> with the actual UUID from the output, and ensure that the folder _SHARED exists before attempting to mount it. You can create the folder with a simple mkdir.
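A minimal way to create the mount point and confirm the fstab entry works (using the same <your_username> placeholder as above) looks like this:

mkdir -p /home/<your_username>/_SHARED
sudo mount -a    # mounts everything in /etc/fstab; an error here usually means a bad UUID or mount point
df -h /home/<your_username>/_SHARED    # should show the SHARED_DATA partition, not the root filesystem
Bash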

With this setup completed, my base image now had a shared drive accessible across all WSL instances. When creating new WSL instances from this base image, they automatically had access to the _SHARED folder.

Keep in mind that the wsl --mount command must be run once before any WSL instance that requires access to the virtual drive. This needs to be done each time WSL or Windows is shut down and restarted. Several methods can automate this process, including using the Windows Task Scheduler or modifying the command in each Windows terminal profile.
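Of those options, the Task Scheduler route is the easiest to script. A sketch from an elevated PowerShell prompt might look like the following; I haven’t fully verified it, the task name is just my own label, and the elevated run level is there because wsl --mount requires administrator privileges:

$action  = New-ScheduledTaskAction -Execute "wsl.exe" -Argument "--mount --vhd F:\wsl\wsl_shared.vhdx --bare"
$trigger = New-ScheduledTaskTrigger -AtLogOn
Register-ScheduledTask -TaskName "MountWslSharedVhd" -Action $action -Trigger $trigger -RunLevel Highest
PowerShell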

Before proceeding, I wanted to make my Windows Terminal experience more delightful. I downloaded a Nerd Font, which also provides the glyphs that the powerlevel10k prompt (installed below) relies on. Fira Code from https://www.nerdfonts.com/font-downloads proved to be the perfect choice. I extracted and installed all the fonts, then went to Windows Terminal Settings > Defaults > Appearance > Font and selected the Nerd Font. This tiny tweak made a significant impact on the overall aesthetics and legibility of my terminal.

Furthermore, I installed some of my favorite essential packages to optimize my development environment:

sudo apt install nala -y
sudo nala install zsh bat fzf exa micro thefuck tldr
# Install oh-my-zsh first so that its custom directory exists, then add the powerlevel10k theme.
# (The installer drops you into a new zsh session; exit it before running the clone.)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
git clone https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-$HOME/.oh-my-zsh/custom}/themes/powerlevel10k
Bash

With Zsh now installed, I configured it with just a few basic options to suit my preferences. For completeness, here is the entire config:

# Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.
# Initialization code that may require console input (password prompts, [y/n]
# confirmations, etc.) must go above this block; everything else may go below.
if [[ -r "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh" ]]; then
  source "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh"
fi

# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:/usr/local/bin:$PATH

# Path to your oh-my-zsh installation.
export ZSH="$HOME/.oh-my-zsh"

# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="powerlevel10k/powerlevel10k"

# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in $ZSH/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )

# Uncomment the following line to use case-sensitive completion.
# CASE_SENSITIVE="true"

# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
HYPHEN_INSENSITIVE="true"

# Uncomment one of the following lines to change the auto-update behavior
# zstyle ':omz:update' mode disabled  # disable automatic updates
# zstyle ':omz:update' mode auto      # update automatically without asking
# zstyle ':omz:update' mode reminder  # just remind me to update when it's time

# Uncomment the following line to change how often to auto-update (in days).
# zstyle ':omz:update' frequency 13

# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS="true"

# Uncomment the following line to disable colors in ls.
# DISABLE_LS_COLORS="true"

# Uncomment the following line to disable auto-setting terminal title.
# DISABLE_AUTO_TITLE="true"

# Uncomment the following line to enable command auto-correction.
ENABLE_CORRECTION="true"

# Uncomment the following line to display red dots whilst waiting for completion.
# You can also set it to another string to have that shown instead of the default red dots.
# e.g. COMPLETION_WAITING_DOTS="%F{yellow}waiting...%f"
# Caution: this setting can cause issues with multiline prompts in zsh < 5.7.1 (see #5765)
COMPLETION_WAITING_DOTS="true"

# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"

# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
# HIST_STAMPS="mm/dd/yyyy"

# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder

# Which plugins would you like to load?
# Standard plugins can be found in $ZSH/plugins/
# Custom plugins may be added to $ZSH_CUSTOM/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(colorize colored-man-pages command-not-found copyfile copypath extract git-flow git history-substring-search jump last-working-dir nvm npm thefuck zsh-interactive-cd)

source $ZSH/oh-my-zsh.sh

# User configuration

path+=('/home/dalen/.bin')
export PATH

# export MANPATH="/usr/local/man:$MANPATH"

# You may need to manually set your language environment
# export LANG=en_US.UTF-8

# Preferred editor for local and remote sessions
# if [[ -n $SSH_CONNECTION ]]; then
#   export EDITOR='vim'
# else
#   export EDITOR='mvim'
# fi
export EDITOR='micro'
# Compilation flags
# export ARCHFLAGS="-arch x86_64"

# Set personal aliases, overriding those provided by oh-my-zsh libs,
# plugins, and themes. Aliases can be placed here, though oh-my-zsh
# users are encouraged to define aliases within the ZSH_CUSTOM folder.
# For a full list of active aliases, run `alias`.
#
# Example aliases
alias zshconfig="micro ~/.zshrc"
alias ohmyzsh="micro ~/.oh-my-zsh"

# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.
[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh

alias edit='micro'
alias ls="ls --color -l -h"
alias grep="grep -n --color"
alias mkdir="mkdir -pv"
alias cat="batcat"
alias apt="nala"
Bash

Additionally, I installed “lf,” a terminal file manager, to further enhance my productivity:

mkdir -p ~/.bin
cd ~/.bin
wget https://github.com/gokcehan/lf/releases/download/r30/lf-linux-amd64.tar.gz
tar -xzf lf-linux-amd64.tar.gz    # the archive contains a single 'lf' binary
rm lf-linux-amd64.tar.gz
Bash

Fantastic! Now that we have our basic instance perfectly configured, it’s time to embark on the exciting journey of cloning it to create our new environment, aptly named UbuntuDev.

To achieve this, we’ll once again use the “wsl --import” command, but this time we’ll specify that we are cloning a VHD directly. The magic of this process lies in the seamless duplication of our base image, complete with all the software and configurations we meticulously set up earlier.

wsl --terminate UbuntuBase
wsl --import UbuntuDev F:\wsl\distros\UbuntuDev\ --vhd F:\wsl\distros\UbuntuBase\ext4.vhdx
PowerShell

And there it is – UbuntuDev stands tall, ready to embrace the Docker world! With this process, we have established a solid foundation upon which we can build our Docker environment, unlocking a whole new realm of possibilities for seamless containerization.

Ah, now we’re taking the plunge into the world of Docker directly within our WSL environment! With the new UbuntuDev instance up and running, it’s time to install Docker using a helpful script. To get started, let’s log in to our fresh UbuntuDev environment.

wsl -d UbuntuDev
PowerShell

First, we’ll acquire a helper script that streamlines the Docker installation process. With the script in our possession, we’re all set to move forward. Execute the following commands to initiate the Docker installation:

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Bash

Now, there’s a warning that suggests using Docker Desktop, but let’s forge ahead and disregard the warning, allowing the installation to proceed. We’re ready to embrace Docker directly within our UbuntuDev, unleashing its potential to streamline our development processes.

Next, we want to ensure that we can run Docker commands without the need for sudo. This is done by adding the current user to the docker group (log out and back in, or terminate and relaunch the instance, for the new group membership to take effect):

sudo usermod -aG docker $USER
Bash

With that taken care of, we are all set to harness the power of Docker from the comfort of our UbuntuDev instance.

But why stop there? Let’s take it up a notch by installing the Docker Compose plugin, which allows us to define and manage multi-container Docker applications using a simple YAML file:

sudo apt-get update && sudo apt-get install docker-compose-plugin
Bash

Ah, the thrill of anticipation! While we’re itching to start using Docker as we normally would, there’s a small hiccup we need to address. In our WSL2 environment, we don’t have systemd, which means we can’t simply start Docker as a service using traditional methods. Fear not, my fellow developer, we have a workaround up our sleeves!

Since newer versions of WSL do support running systemd, we could potentially explore that route, but as of now, I’m a bit cautious about enabling it, given limited testing. So, let’s go with a tried and tested approach that ensures Docker starts up smoothly.
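For the record, on a recent enough WSL build, enabling systemd is just another wsl.conf entry followed by a wsl --shutdown from Windows. I haven’t switched to it myself yet, so consider this an untested aside:

echo '[boot]' | sudo tee -a /etc/wsl.conf
echo 'systemd=true' | sudo tee -a /etc/wsl.conf
# then, from Windows: wsl --shutdown, and relaunch the instance
Bash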

To initiate Docker in our UbuntuDev instance, we can make a small modification to our profile or zshrc file, depending on your preferred shell. By adding a few lines of code, we’ll have Docker up and running whenever we launch UbuntuDev.

if grep -q "microsoft" /proc/version > /dev/null 2>&1; then
    if service docker status 2>&1 | grep -q "is not running"; then
        wsl.exe --distribution "${WSL_DISTRO_NAME}" --user root \
            --exec /usr/sbin/service docker start > /dev/null 2>&1
    fi
fi
Bash

This snippet of code performs a quick check to see if the distribution is running on WSL. If so, it then verifies if Docker is not running. If Docker is not running, we invoke wsl.exe with the necessary parameters to start the Docker service as the root user. This way, we can bypass the need for systemd and get Docker up and running within our WSL2 environment.
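Opening a fresh shell should now bring the daemon up automatically, which is easy to confirm before going any further:

service docker status    # should report that Docker is running
docker version           # prints both Client and Server sections once the daemon is reachable
Bash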

As we try to pull an image for a Docker container, we are faced with yet another hurdle, this time related to libsecret. However, worry not, my fellow developer, for we shall conquer this challenge with finesse and ingenuity!

To get around the libsecret issue and ensure smooth image pulls in Docker, we’ll take the following steps:

  1. Install the required libsecret package:
sudo apt install libsecret-1-0
Bash
  2. Create a configuration file for Docker:
mkdir ~/.docker
touch ~/.docker/config.json
Bash
  3. Populate the config.json file with the necessary settings to resolve the libsecret issue:
echo '{' >> ~/.docker/config.json
echo ' "credsStore": "pass"' >> ~/.docker/config.json
echo '}' >> ~/.docker/config.json
Bash

By adding the “credsStore” entry with the value “pass” to the Docker config.json file, we inform Docker to use the pass implementation as the credential store. This circumvents the libsecret problem and ensures a seamless Docker experience within our WSL environment.

Indeed, while using docker-credential-pass can resolve the libsecret issue and enable smooth Docker image pulls within our WSL environment, it’s essential to be aware of some potential pitfalls that come with this solution. As with any workaround, there are trade-offs and considerations we should keep in mind:

Using docker-credential-pass means storing Docker credentials in the pass password manager, which is a standard Unix password manager. While pass itself is secure and widely used, any misconfiguration or compromised access to pass can lead to potential security risks. It’s crucial to ensure proper access controls and permissions are in place to safeguard Docker credentials.

If the pass password manager requires a passphrase to unlock the Docker credentials, you might encounter password prompts when pulling Docker images or performing other Docker operations. This can be an inconvenience, especially when automating Docker tasks or dealing with many images.

While docker-credential-pass can be a viable solution for many users, it’s crucial to weigh the potential drawbacks and assess whether it aligns with your specific needs and security considerations. As with any technical solution, it’s essential to stay informed about updates, security best practices, and any changes in the Docker ecosystem to maintain a robust and secure development environment.

In a local development instance like WSL2, where credentials can be stored securely behind your Windows login, the potential risks of using a local password manager like pass might be relatively low. Moreover, this setup is not intended for running production Docker images, where security considerations would be paramount.

As developers, we often tailor our development environments to suit our specific workflows and preferences. If you rarely find yourself using the credential manager on your development machine and the use of docker-credential-pass does not significantly impact your daily work, it might be a pragmatic and viable choice for this specific use case.

While exploring alternative solutions like making the wincred helper work in WSL2 is valuable, striking a balance between convenience and security is key. As long as we are aware of the potential risks and make informed decisions based on our use case, we can confidently proceed with a setup that optimizes our development productivity.

In the end, it’s all about finding the right balance for your unique needs and ensuring that your development environment empowers you to work efficiently and effectively.

In conclusion, I must say that this setup has been nothing short of fantastic for my development needs. So far, I haven’t encountered any major issues, and it’s working seamlessly. The key is to remember to mount that extra VHD before launching any WSL instances, which gives me access to the glorious _SHARED directory.

With my UbuntuDev container up and running, I can effortlessly navigate to any file within the _SHARED directory and type code . to launch the VSCode server. This opens VSCode on the Windows host, allowing me to take advantage of its full potential while enjoying the benefits of my WSL environment. Moreover, creating development containers that launch within UbuntuDev is a breeze, streamlining my workflow and optimizing my productivity.

I must admit that Docker, running within this setup, has been quite impressive. It’s been smooth sailing, and I’m excited to see how it continues to perform over time. Docker’s ability to encapsulate applications and their dependencies makes development and deployment a breeze, freeing me from the shackles of environment inconsistencies.

What’s even better is that this setup is incredibly flexible. Alongside my UbuntuDev instance, I can create additional WSL instances tailored to specific long-lived projects. For instance, I’ve repeated the steps we took to create the Docker-enabled instance, but this time built one instance dedicated to running Node applications and another for Golang. The possibilities are vast, and I can’t wait to explore them further.

So there you have it! A powerful and efficient development environment, a harmonious fusion of Windows, WSL, and Docker, delivering an unmatched experience that empowers me to tackle any coding task with ease. With each passing day, I discover new ways to optimize my workflow and create incredible software solutions.

As the journey continues, I’ll be sure to share more insights, discoveries, and practical tips. Together, we’ll delve deeper into the world of Docker containers, unlocking their full potential and revolutionizing the way we develop and deploy applications.

Thank you for joining me on this exhilarating ride. The future is bright, and there’s so much more to come. Until next time, happy coding, and may your development endeavors be filled with creativity, innovation, and boundless success!

