@hallyn
Created February 25, 2023 01:44
Set up shadow tests in an Ubuntu container
#!/bin/bash
cat /etc/apt/sources.list
# Regenerate deb-src entries: drop any existing ones, then duplicate each
# 'deb' line as a 'deb-src' line (needed later for 'apt-get build-dep shadow').
sed -i '/deb-src/d' /etc/apt/sources.list
sed -i '/^deb /{p; s/ /-src /}' /etc/apt/sources.list
export DEBIAN_PRIORITY=critical
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -y dist-upgrade
# Toolchain and test dependencies.
apt-get -y install ubuntu-dev-tools automake autopoint \
xsltproc gettext eatmydata expect byacc libtool libbsd-dev \
pkgconf tmux
# Clone and build as the unprivileged default user.
su - ubuntu << EOF
git clone https://github.com/shadow-maint/shadow
cd shadow
autoreconf -v -f --install
./autogen.sh --without-selinux --disable-man --with-yescrypt
make -j4
EOF
@alejandro-colomar

Why do you do the sed(1) stuff? Have you found some broken sources.list?

@alejandro-colomar

alejandro-colomar commented Feb 26, 2023

I wrote this Dockerfile based on that script to automate this a bit more:

ARG OS_IMAGE="debian:sid"

FROM "${OS_IMAGE}" AS build

RUN    export DEBIAN_PRIORITY=critical \
    && export DEBIAN_FRONTEND=noninteractive \
    && apt-get update -y \
    && apt-get upgrade -y \
    && apt-get dist-upgrade -y \
    && apt-get install -y \
               build-essential \
               ubuntu-dev-tools \
               automake \
               autopoint \
               xsltproc \
               gettext \
               eatmydata \
               expect \
               byacc \
               libtool \
               libbsd-dev \
               pkgconf \
               tmux \
    && apt-get autoremove --purge -y \
    && apt-get autoclean -y \
    && apt-get clean -y

# Build from the current worktree rather than cloning the repo.
COPY ./ /usr/local/src/shadow/
WORKDIR /usr/local/src/shadow/

RUN autoreconf -v -f --install
RUN ./autogen.sh --without-selinux --enable-man --with-yescrypt
RUN make -j4
RUN make install

One interesting property is that it builds whatever is in your current worktree, rather than cloning the repo, so you can test uncommitted code (and even for committed code, you don't need to pass a remote branch to git-clone(1)).

Do you think it would be interesting to add this to the repo?

Then you can do something like:

$ sudo docker build -t shadow:test .
$ sudo docker run -i --rm shadow:test bash <<__EOF__
    cd tests;
    ./run_some || { cat testsuite.log; false; }
__EOF__

@ikerexxe

> Do you think it would be interesting to add this to the repo?

I think it would be nice to add this to the repository, but I'd also like to have a common pattern, as I'm adding builds for other distributions in the following PR. There I'm handling everything in GitHub Actions, and as mentioned I'd like to be consistent (I don't mind doing everything in a Dockerfile).

@alejandro-colomar

alejandro-colomar commented Feb 27, 2023

> > Do you think it would be interesting to add this to the repo?

> I think it would be nice to add this to the repository, but I'd also like to have a common pattern, as I'm adding builds for other distributions in the following PR. There I'm handling everything in GitHub Actions, and as mentioned I'd like to be consistent (I don't mind doing everything in a Dockerfile).

We could maybe have several Dockerfiles, one for each package manager (maybe something like share/docker/shadow/deb.Dockerfile, share/docker/shadow/rpm.Dockerfile).
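
For instance, an rpm-side counterpart could look more or less like this (untested sketch; the dnf package names are my guess at the Fedora equivalents):

# Hypothetical rpm.Dockerfile sketch; package names are approximate.
ARG OS_IMAGE="fedora:latest"

FROM "${OS_IMAGE}" AS build

RUN    dnf -y upgrade \
    && dnf -y install \
               gcc \
               make \
               automake \
               autoconf \
               libtool \
               gettext-devel \
               libxslt \
               byacc \
               expect \
               libbsd-devel \
               pkgconf \
    && dnf clean all

COPY ./ /usr/local/src/shadow/
WORKDIR /usr/local/src/shadow/

RUN autoreconf -v -f --install
RUN ./autogen.sh --without-selinux --enable-man --with-yescrypt
RUN make -j4
RUN make install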

I prefer the Dockerfile over GitHub Actions because I can run it locally. I'm not sure whether those Dockerfiles can be reused in GH Actions or whether we need to repeat everything there.

The main point is that I should be able to test locally whatever CI can test, to the point that CI is unnecessary. Which doesn't mean I'd remove it; it's good to have it there to simplify the workflow for one-time contributors, or to catch something if we forget to run it. But CI should be just a second safety net, rather than the only one. I like being able to fully test locally.

@ikerexxe

> We could maybe have several Dockerfiles, one for each package manager (maybe something like share/docker/shadow/deb.Dockerfile, share/docker/shadow/rpm.Dockerfile).

I'd prefer one per distribution, and I'm not sure about the location of the files.

> I prefer the Dockerfile over GitHub Actions because I can run it locally. I'm not sure whether those Dockerfiles can be reused in GH Actions or whether we need to repeat everything there.

GH Actions can use Dockerfiles directly: https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action. It increases the reusability by enabling both local and GH Action execution, so I think it's better to use Dockerfiles.
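
For example, a workflow could build and run the same Dockerfile directly (hypothetical sketch; the path assumes the layout proposed above):

# Hypothetical GH Actions workflow reusing the local Dockerfile.
name: build
on: [push, pull_request]
jobs:
  deb:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -f share/docker/shadow/deb.Dockerfile -t shadow:test .
      - run: docker run --rm shadow:test sh -c 'cd tests && ./run_some'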

> The main point is that I should be able to test locally whatever CI can test, to the point that CI is unnecessary. Which doesn't mean I'd remove it; it's good to have it there to simplify the workflow for one-time contributors, or to catch something if we forget to run it. But CI should be just a second safety net, rather than the only one. I like being able to fully test locally.

I think the CI is more important than the local workflow, but that doesn't make any difference in this conversation as we both agree that we should enable both local and CI execution of those workflows.

@hallyn

hallyn commented Feb 27, 2023

> Why do you do the sed(1) stuff? Have you found some broken sources.list?

Oh, I do that because I also like to do 'apt-get build-dep shadow', but I see that's not in this gist.
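
That's what the deb-src lines are for; once they're in place, the build dependencies come in with:

apt-get update
apt-get -y build-dep shadow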

@hallyn

hallyn commented Feb 27, 2023

Doing apt-get build-dep shadow after the above commands installs the following additional packages:

ubuntu@shadow:~$ sudo apt-get build-dep shadow
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  bison debhelper debugedit dh-autoreconf dh-strip-nondeterminism docbook-xml docbook-xsl dwz itstool libaudit-dev libbz2-dev libcap-ng-dev libdebhelper-perl libfile-stripnondeterminism-perl libpam0g-dev libpcre2-16-0 libpcre2-32-0
  libpcre2-dev libpcre2-posix3 libselinux1-dev libsemanage-dev libsepol-dev libsub-override-perl libxml2-utils po-debconf python3-libxml2 sgml-data

@alejandro-colomar

alejandro-colomar commented Feb 27, 2023

> > Why do you do the sed(1) stuff? Have you found some broken sources.list?

> Oh, I do that because I also like to do 'apt-get build-dep shadow', but I see that's not in this gist.

That's a great idea. Maybe we should rely on the Build-Depends, and not call apt-get install directly (so everything needed for building or testing shadow would be covered by apt-get build-dep shadow). And if something's missing, we should probably add it to the debian control file.

@hallyn

hallyn commented Feb 27, 2023

@alejandro-colomar yeah so i personally use lxc containers, sometimes lxd containers, and vms - which is why i prefer to have a simple script that can run anywhere. So perhaps the gist should do

cd
if [ ! -d shadow ]; then
   git clone https://github.com/shadow-maint/shadow
fi

That way if I do an lxc.mount.entry = /home/serge/src/shadow home/ubuntu/shadow, or if you have docker bind mount your shadow tree, it will preserve it, else it will clone one.
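
(For reference, the full entry in the container config is something like the following; the trailing fields are the usual fstab-style type, options, dump, and pass:)

lxc.mount.entry = /home/serge/src/shadow home/ubuntu/shadow none bind,create=dir 0 0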

@hallyn

hallyn commented Feb 27, 2023

> > > Why do you do the sed(1) stuff? Have you found some broken sources.list?

> > Oh, I do that because I also like to do 'apt-get build-dep shadow', but I see that's not in this gist.

> That's a great idea. Maybe we should rely on the Build-Depends, and not call apt-get install directly (so everything needed for building or testing shadow would be covered by apt-get build-dep shadow). And if something's missing, we should probably add it to the debian control file.

True. That would be a good approach.

It would break right now, since libbsd-dev is required but the Debian packaging has not yet been updated :) But that's ok...

@hallyn

hallyn commented Feb 27, 2023

I guess pkgconf should also be added to the Build-Depends for Debian's package. Without it, the check for libbsd fails.

Heh, and expect for the tests.
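
So something like this in debian/control (hypothetical excerpt; the existing entries are elided):

Build-Depends: ...,
               expect,
               libbsd-dev,
               pkgconf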

@alejandro-colomar

> @alejandro-colomar yeah so i personally use lxc containers, sometimes lxd containers, and vms

I'd love to see the complete set of commands you run to set up those. I've never worked with them. :)

Although perhaps it's not as simple as docker build ... && docker run ....

> - which is why i prefer to have a simple script that can run anywhere.

If we simplify it enough that it's just a bunch of lines, maybe it's not even worth a script. With the Build-deps thingy, half of the script is just a few apt-get lines, and the other half is the usual autotools stuff. Calling such a script from a Dockerfile (or GH Action file) is not going to make it much smaller, so maybe we can have duplicate scripts for each package manager and container software.

> So perhaps the gist should do

> cd
> if [ ! -d shadow ]; then
>    git clone https://github.com/shadow-maint/shadow
> fi

> That way if I do an lxc.mount.entry = /home/serge/src/shadow home/ubuntu/shadow, or if you have docker bind mount your shadow tree, it will preserve it, else it will clone one.

I think this would overcomplicate it, and would increase overall code.

@alejandro-colomar

alejandro-colomar commented Feb 28, 2023

> > We could maybe have several Dockerfiles, one for each package manager (maybe something like share/docker/shadow/deb.Dockerfile, share/docker/shadow/rpm.Dockerfile).

> I'd prefer one per distribution, and I'm not sure about the location of the files.

Are Dockerfiles for RPM distros (Fedora, SuSE, RHEL, ...) any different at all? I expect the only difference is in the base image, which is already variable with an argument. If there's anything else different, maybe we can make it variable or maybe not. If not, sure, go for separate Dockerfiles.

About the location, I like to use FHS-like tree structures in my repos. That's why I chose ./share, which I use for the same meaning that /usr/share would have in a system. Dockerfiles are arch-independent, so that's a suitable place, I think.

> > I prefer the Dockerfile over GitHub Actions because I can run it locally. I'm not sure whether those Dockerfiles can be reused in GH Actions or whether we need to repeat everything there.

> GH Actions can use Dockerfiles directly: https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action. It increases the reusability by enabling both local and GH Action execution, so I think it's better to use Dockerfiles.

Nice.

> > The main point is that I should be able to test locally whatever CI can test, to the point that CI is unnecessary. Which doesn't mean I'd remove it; it's good to have it there to simplify the workflow for one-time contributors, or to catch something if we forget to run it. But CI should be just a second safety net, rather than the only one. I like being able to fully test locally.

> I think the CI is more important than the local workflow, but that doesn't make any difference in this conversation as we both agree that we should enable both local and CI execution of those workflows.

:)

Without trying to convince you, I'll just say why I prefer local. It lets me compile, see the problems, and loop again until everything is perfectly clean, very fast (much faster than going through the network and waiting for a notification, ...), and I also get to choose the interface (a terminal, much nicer than GH's or any other web GUI). After it succeeds locally, I know it's going to work in CI even before submitting (of course it may still fail, if I forgot something or if there's some random behavior, but the chances are pretty low), so I reduce the noisy notifications that others would receive for each iteration of my tests and pushes.

--- Edit:

Oh, and another nice thing of having all tests local: you can always change the CI system; no lock-in, since it should work everywhere.

@hallyn

hallyn commented Feb 28, 2023

+1 for no CI system lock-in.

@hallyn

hallyn commented Mar 1, 2023

> I'd love to see the complete set of commands you run to set up those. I've never worked with them. :)

Probably outside the scope here, mainly because I use different tools in different places :) For vms on my main 'home' server, I use uvt-kvm, because I can do:

uvt-kvm create --memory 4096 --disk 30 rust release=jammy arch=amd64 --cpu 4 --run-script-once firstboot

For containers on my 'home' server I generally do

lxc-create -t download -n shadow -- -d ubuntu -r jammy -a amd64
lxc-start -n shadow
lxc-attach -n shadow -- << EOF
wget https://gisturl
bash ./gist
EOF

In other environments I use lxd for both containers and vms. Those use cloud-init to specify a firstboot script.
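
Roughly like this (hypothetical invocation; assumes the firstboot script is wrapped as cloud-init user-data in firstboot.yaml):

lxc launch ubuntu:jammy shadow --config=user.user-data="$(cat firstboot.yaml)"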

Note that containers can be a problem for the shadow testsuite, as it uses a wide uid allocation.
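
With lxc that means giving the container a wide enough id map (plus matching /etc/subuid and /etc/subgid ranges for the unprivileged case); something like this in the container config, where the range is illustrative, just far beyond the default 65536:

lxc.idmap = u 0 100000 1000000
lxc.idmap = g 0 100000 1000000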
