Personally, I use gitlab predominantly. Since I do, and self-host, here is how. The main goal was to do everything rootless, as a non-root user, with podman. We also chose the omnibus install at this time instead of the cloud native chart. Gitlab publishes reference architectures, but for small user bases it functions well on limited resources, provided you're willing to wait a while for the initialization/migration to complete.
Podman deployment of Gitlab Community Edition #
Prepare a user to run gitlab:
Choice
The following assumes that you're ok with the default home directory of `/home/gitlab` being the storage location. If you want something else, be mindful of the SELinux contexts before creating the user. For example, if we wanted to use `/opt/services/gitlab` we would do the following:

[root@lab ~]# mkdir -p /opt/services
[root@lab ~]# semanage fcontext -a -e /home /opt/services

- `semanage fcontext`: manage file context mapping definitions
- `-a`: add
- `-e`: equivalence

[root@lab ~]# restorecon -vRF /opt/services

- `restorecon`: restore files' default SELinux security contexts
- `-v`: show changes in file labels
- `-R`: recursive
- `-F`: force reset of context

With the context of `/opt/services` correct, a home directory created with `-d /opt/services/gitlab` will start out with the proper contexts. All of this has significant impact later when trying to utilize podman as a non-root user.
[root@lab ~]# groupadd -g 3000 gitlab
[root@lab ~]# useradd -g 3000 -u 3000 -s /usr/sbin/nologin gitlab
Choice
Above, the user's sub-UID and sub-GID ranges are created automatically. If you want different ranges, observe the current assignments (e.g. `cat /etc/subuid`, `cat /etc/subgid`), delete them (e.g. `usermod --del-subuids --del-subgids`), then re-add the ranges you desire (e.g. `usermod --add-subuids --add-subgids`).
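To make that concrete, here is a hedged sketch; the range values are illustrative only, so check your own `/etc/subuid` before deleting anything:

```
# Illustrative only: the ranges below are examples, not prescriptions.
grep '^gitlab:' /etc/subuid /etc/subgid     # e.g. gitlab:165536:65536

# Drop the auto-assigned ranges (FIRST-LAST syntax), then add your own:
usermod --del-subuids 165536-231071 --del-subgids 165536-231071 gitlab
usermod --add-subuids 300000-365535 --add-subgids 300000-365535 gitlab

# Have podman pick up the new mapping for the user's storage:
runuser -s /bin/bash -c 'podman system migrate' gitlab
```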
Enable lingering so this user's units start at system boot:
[root@lab ~]# loginctl enable-linger gitlab
Become the user and get started (the account's shell is nologin, so specify one, and set `XDG_RUNTIME_DIR` so the later `systemctl --user` calls can reach the user's session):

[root@lab ~]# runuser -s /bin/bash gitlab
[gitlab@lab ~]$ export XDG_RUNTIME_DIR=/run/user/$(id -u)
Prepare the directories:
[gitlab@lab ~]$ mkdir config data logs
- config: will store your configuration and certificates
- data: will store all running context for your instance, including backups
- logs: will store logging from all internal sub-systems
Configure your gitlab instance by creating a `config/gitlab.rb`:
[gitlab@lab ~]$ vim config/gitlab.rb
Choice
There is a lot of configuration available, but I've managed to survive with just the few settings I find most valuable. I've left the strings mostly blank; consult the template and other documentation for full examples:
```ruby
external_url ''
gitlab_rails['time_zone'] = ''
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = ''
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = ''
gitlab_rails['smtp_password'] = ''
gitlab_rails['smtp_domain'] = ''
gitlab_rails['smtp_authentication'] = ''
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = true
gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
gitlab_rails['gitlab_email_from'] = ''
gitlab_rails['gitlab_email_reply_to'] = ''
gitlab_rails['gitlab_default_can_create_group'] = false
gitlab_rails['gitlab_username_changing_enabled'] = false
gitlab_rails['gitlab_default_projects_features_container_registry'] = false
gitlab_rails['lfs_enabled'] = true
nginx['listen_port'] = 80
nginx['listen_https'] = false
letsencrypt['enable'] = false
gitlab_rails['backup_archive_permissions'] = 0644 # make this readable for all
gitlab_rails['backup_keep_time'] = 259200 # three days
```
This minimal set of configuration is a good starting point.
Choice
We are choosing to put a proxy in front of gitlab, so we're turning off its internal https. Setting up the proxy is not covered in this writeup.
Now invoke gitlab into existence:
[gitlab@lab ~]$ podman run --detach --name=gitlab --label "io.containers.autoupdate=image" --health-start-period=5m --shm-size=8g --publish 127.0.0.1:8880:80 --volume ~/config:/etc/gitlab:Z --volume ~/logs:/var/log/gitlab:Z --volume ~/data:/var/opt/gitlab:Z docker.io/gitlab/gitlab-ce:latest
- `--label "io.containers.autoupdate=image"`: so we can use `podman auto-update` later
- `--health-start-period=5m`: wait until migrations are complete before health checks begin
- `--shm-size=8g`: the default shm is very small; gitlab will struggle without a larger one
- `:Z`: tells Podman to relabel the mounted content with a private unshared SELinux label
- `:latest`: gitlab releases often and addresses security issues, and it handles migrations well internally, so we'll be auto-updating this instance with podman
Once you've gotten in and verified things are functioning, let's daemonize this with systemd:
[gitlab@lab ~]$ mkdir -p ~/.config/systemd/user ; cd ~/.config/systemd/user
[gitlab@lab ~]$ podman generate systemd --files --name --new --no-header gitlab
Now enable the service, which will restart the container but now be invoked by systemd:
[gitlab@lab ~]$ systemctl --user enable --now container-gitlab.service
Now enable auto updates:
[gitlab@lab ~]$ systemctl --user enable --now podman-auto-update.timer
Now observe:
- Gitlab is running rootless, as a non-root user
- it starts when the system starts, thanks to the lingering user session
- it updates automatically (the podman-auto-update timer fires daily by default)
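A few day-2 commands I find handy for confirming the above, sketched here with hedges: `gitlab-backup` and `gitlab-ctl` are the omnibus image's own tooling, run via `podman exec` as the gitlab user.

```
# Confirm lingering, container health, and what auto-update would do:
loginctl show-user gitlab --property=Linger      # expect Linger=yes
podman healthcheck run gitlab && echo healthy
podman auto-update --dry-run

# Kick off a backup on demand; archives land in ~/data/backups via the
# volume mount, and backup_keep_time prunes old ones on later runs:
podman exec gitlab gitlab-backup create

# After editing ~/config/gitlab.rb, apply it without recreating the container:
podman exec gitlab gitlab-ctl reconfigure
```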
Podman-in-Podman deployment of Gitlab Runner #
It's been a long journey for those waiting, but podman is now considered a drop-in replacement for docker as a gitlab executor.
We will make a separate user for the runner. All of the notes above about home directories and selinux contexts still apply; the deployment pattern is quite similar.
[root@lab ~]# groupadd -g 3002 runner
[root@lab ~]# useradd -g 3002 -u 3002 -s /usr/sbin/nologin runner
[root@lab ~]# loginctl enable-linger runner
Become the user and get started (again specifying a shell and setting `XDG_RUNTIME_DIR`, since the account's shell is nologin):

[root@lab ~]# runuser -s /bin/bash runner
[runner@lab ~]$ export XDG_RUNTIME_DIR=/run/user/$(id -u)
Prepare the directories:
[runner@lab ~]$ mkdir config
Enable the podman socket:
[runner@lab ~]$ systemctl --user enable --now podman.socket
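Before wiring the socket into the runner, you can sanity-check that it answers on the docker-compatible ping endpoint (the path assumes UID 3002, as above):

```
# The compat API answers "OK" on the ping endpoint:
curl -s --unix-socket /var/run/user/3002/podman/podman.sock http://d/_ping
```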
Now invoke the runner into existence:
[runner@lab ~]$ podman run --detach --privileged --name=runner --label "io.containers.autoupdate=image" --volume /var/run/user/3002/podman/podman.sock:/var/run/docker.sock:Z --volume ~/config:/etc/gitlab-runner:Z docker.io/gitlab/gitlab-runner:latest
- `--privileged`: run privileged; remember that rootless containers cannot have more privileges than the account that launched them
- `--label "io.containers.autoupdate=image"`: so we can use `podman auto-update` later
- `--volume /var/run/user/3002/podman/podman.sock:/var/run/docker.sock:Z`: "leaks" the socket into the container so it can instance other containers
- `--volume ~/config:/etc/gitlab-runner:Z`: runner configuration
Enter the running container to register the runner:
[runner@lab ~]$ podman exec -it runner gitlab-runner register
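Registration can also be done non-interactively. The URL and token below are placeholders; newer runners take `--token` from a runner created in the UI, while older versions use `--registration-token` instead:

```
podman exec -it runner gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com" \
  --token "REDACTED" \
  --executor "docker" \
  --docker-image "quay.io/podman/stable"
```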
Modify the configuration; make sure to examine the upstream documentation:
[runner@lab ~]$ vim ~/config/config.toml
```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = ""
  url = ""
  token = ""
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "quay.io/podman/stable"
    privileged = true
    security_opts = ['label=disable']
    volumes = ["/cache", "/var/run/user/3002/podman/podman.sock:/var/run/docker.sock"]
```
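Once registered, a minimal job can confirm the runner really can launch nested containers. This is a hypothetical smoke test, not part of my setup:

```yaml
podman-smoke:
  image: quay.io/podman/stable
  script:
    - podman info
    - podman run --rm docker.io/library/alpine:latest echo nested-ok
```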
Once you've gotten in and verified things are functioning, let's daemonize this with systemd:
[runner@lab ~]$ mkdir -p ~/.config/systemd/user ; cd ~/.config/systemd/user
[runner@lab ~]$ podman generate systemd --files --name --new --no-header runner
Now enable the service, which will restart the container but now be invoked by systemd:
[runner@lab ~]$ systemctl --user enable --now container-runner.service
Now enable auto updates:
[runner@lab ~]$ systemctl --user enable --now podman-auto-update.timer
Example initial CI (deploying this site) #
This hugo site is deployed via a runner; an example `.gitlab-ci.yml`:
```yaml
stages:
  - build
  - deploy

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build:
  stage: build
  image: registry.gitlab.com/pages/hugo/hugo_extended:latest
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - main

deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --update --no-cache rsync openssh openssh-client-common
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - echo ${DEPLOY_PRIVKEY} | base64 -d > ~/.ssh/id_deploy
    - chmod 600 ~/.ssh/id_deploy
    - ssh-keyscan -p ${DEPLOY_PORT} ${DEPLOY_HOST} >> ~/.ssh/known_hosts
  script:
    - rsync -avzP -e "ssh -p ${DEPLOY_PORT} -i ~/.ssh/id_deploy" --delete public/ ${DEPLOY_USER}@${DEPLOY_HOST}:${DEPLOY_PATH}
  only:
    - main
```
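The pipeline expects `DEPLOY_PRIVKEY` to hold the private key base64-encoded, so the multi-line key survives as a CI/CD variable. A quick round trip with a throwaway string shows the encode step; use your real key file when setting the variable:

```shell
# Demonstrate the encode/decode round trip with a stand-in for the key file:
printf 'fake-private-key\n' > /tmp/id_deploy_demo
ENCODED=$(base64 -w0 /tmp/id_deploy_demo)   # store this value as DEPLOY_PRIVKEY
echo "$ENCODED" | base64 -d                 # the CI job decodes it like this
```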