ToDo for jenkins.debian.net
===========================
:Author:    Holger Levsen
:Authorinitials: holger
:EMail:     holger@layer-acht.org
:Status:    working, in progress
:lang:      en
:Doctype:   article
:Licence:   GPLv2

== About jenkins.debian.net

See link:https://jenkins.debian.net/userContent/about.html["about jenkins.debian.net"] for a general description of the setup. Below is the current TODO list, which is long and probably incomplete too. The link:https://jenkins.debian.net/userContent/contributing.html[preferred form of contributions] is patches via pull requests.

== Fix user submitted bugs

* There are link:https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=jenkins;users=qa.debian.org%40packages.debian.org["bugs filed against the pseudopackage 'qa.debian.org' with usertag 'jenkins'"] in the BTS which would be nice to get fixed sooner rather than later, as some people actually care to file bugs.

== meeting agenda for jenkins-qa meetings

* next meeting: Wednesday, January 25th 2017, 18 UTC on #debian-qa on irc.debian.org
** skipped: meeting on December 28th 2016, 18 UTC
* schedule: every 4th Wednesday of each month, at 18 UTC

=== recurring agenda

* short intro, why are we here (aka: say hi)
* jenkins.d.n status
* jenkins.d.o migration next steps (see below for details)
* jenkins sprint in spring 2017 - probably March
* AOB
* thanks to profitbricks for hosting

=== old meetings

* 2016-08-24: http://meetbot.debian.net/debian-qa/2016/debian-qa.2016-08-24-18.00.html
* 2016-09-28: http://meetbot.debian.net/debian-qa/2016/debian-qa.2016-09-28-19.02.html
* 2016-10: none
* 2016-11-23: http://meetbot.debian.net/debian-qa/2016/debian-qa.2016-11-23-18.06.html

== move this setup to jenkins.d.o

----
SWITCH TO jenkins.debian.org !!
Start doing that… first with a few jobs, then more… and then we are done and have a better & cleaner setup.
see below for status
----

The idea is to run a jenkins.d.o host which is maintained by DSA, while we maintain jenkins on it (so we can install any plugins we like etc). Then we also set up several jenkins nodes, in the long term probably/maybe also maintained by DSA, on which we can use sudo as we need it.

==== next steps for jenkins.d.o migration

* The machine jerea.debian.org is set up, please go to https://jenkins.debian.org
* log in using link:https://sso.debian.org[Debian SSO]
* Jenkins and plugins are installed…
* update_jdn.sh needs to be adapted for this new host… and for the new path /srv/jenkins.debian.org/
* then start with migrating a single job: self_maintenance
** add pb7 as a node to jenkins.d.o
** add the fdroid job to jenkins.d.o
* migrate all the jobs
** add all the nodes as nodes to jenkins.d.o
** disable job execution on jenkins.d.n(!)
** deploy this configuration on jenkins.d.o…(!) (we)
*** make update_jdn.sh warn if things are missing on .debian.org systems (see the sketch after this list)
*** as we don't want IRC nor mail notifications for this during the migration, we should disable those with an easily revertible commit before the actual deployment
*** then rename jenkins.debian.net to profitbricks-build0-amd64 - and switch all the jobs which used to run on the master node to that node, which already has the right sudoers, usercontent/reproducible/ and reproducible.db
*** some authorized_keys will also need to be adapted for the change of IP address from jenkins.d.n to jenkins.d.o
*** redirect jenkins.debian.net to jenkins.debian.org - tests.reproducible-builds.org will stay where it is.
* party!
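A minimal sketch of such a warning in update_jdn.sh; the hostname test and the list of paths below are illustrative assumptions only, not the actual requirements:

----
# hypothetical guard for update_jdn.sh: warn (but don't fail) if expected
# bits are missing on the .debian.org host
if [ "$(hostname -f)" = "jenkins.debian.org" ] ; then
    for path in /srv/jenkins.debian.org /var/lib/jenkins ; do
        [ -e "$path" ] || echo "Warning: $path is missing on $(hostname -f)." >&2
    done
fi
----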
== General ToDo

* replace amd64 in scripts with $HOSTARCH
* extend /etc/rc.local to do cleanup of lockfiles
* explain in README how to write jobs, e.g. which paths are on tmpfs
** EXECUTOR_NUMBER for X
* fix the apache ssl configuration as hinted at by e.g. https://sslcheck.globalsign.com/en/sslcheck?host=jenkins.debian.net#78.137.96.196
* run all bash scripts with set -u and set -o pipefail: http://redsymbol.net/articles/unofficial-bash-strict-mode/
* teach bin/chroot-*.sh and bin/d-i_build.sh how to nicely deal with network problems… (as both reproducible_build.sh and schroot-create.sh do)
* use static IPs for the nodes (h01ger)
* use vmdebootstrap where applicable
* Tango Icons are gone: #824477 - update the footer once this bug is fixed
* add to all git post-receive hooks: `curl -s "https://jenkins.debian.net/git/notifyCommit?url=git://git.debian.org/git/d-i/$(basename $PWD .git)"` which will trigger jenkins to poll (check) that git repo…

=== ToDo for improving disk space

* make the live-build jobs work again or remove them
* make sure the live-build jobs clean up /srv/live-build/results/*iso once they are done. That's 8 GB wasted.

=== To be done once jenkins.d.n runs stretch

* install botch from stretch and remove botch from the reproducible-unstable schroot
** botch now depends on a newer dose3, which depends on the ocaml from stretch. ocaml cannot be sensibly backported, so that's why this will have to wait for stretch

==== proper backup

* postponed until we run on .debian.org
* this needs to be backed up:
** '/var/lib/jenkins/jobs' (the results - the configs are in .git)
** '/var/lib/munin'
** '/var/log'
** '/root/' (contains etckeeper.git)
** '/var/lib/jenkins/reproducible.db' (is backed up manually)
** '/srv/jenkins.debian.net-scm-sync.git' (is backed up manually)
** '/var/lib/jenkins/plugins/*.jpi' (can be derived from jdn-scm-sync.git)
** '/etc/.git' and '/etc'

=== To be done once bugs are fixed

* link:https://bugs.debian.org/767260[#767260] workaround in bin/d-i_build.sh (console-setup doesn't support parallel build)
* link:https://bugs.debian.org/767032[#767032] manual fix in etc/munin/plugins/munin_stats
* link:https://bugs.debian.org/767100[#767100] work in progress in etc/munin/plugins/cpu
* link:https://bugs.debian.org/767018[#767018] work in progress in etc/munin/plugins/iostat_ios
* link:https://bugs.debian.org/774685[#774685] workaround in bin/reproducible_create_meta_pkg_sets.sh

=== jenkins-job-builder related

* investigate whether it's possible nowadays to let it delete jobs which were removed… nope. But there is a Makefile now which will find zombies (roughly as sketched below)…
* the yaml should be refactored, there is lots of duplication in there; this seems to be helpful: http://en.wikipedia.org/wiki/YAML#References (pyyaml, which jenkins-job-builder uses, supports them)
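The zombie finding could work roughly like the following sketch, run from a checkout of this repository; the assumption that deployed job names literally appear in the job-cfg files is just that, an assumption:

----
# hypothetical zombie check: list deployed jobs which are no longer
# defined in any job-cfg file
for job in $(ls /var/lib/jenkins/jobs/) ; do
    grep -q "$job" job-cfg/*.yaml* || echo "zombie job: $job"
done
----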
=== debugging job runs should be made easy

----
< h01ger> | i think the jenkins-debug-job script should be a python script
< h01ger> | and j-j-b or another yaml parser can supply job configuration knowledge to that script
< h01ger> | \o/
< h01ger> | and that python script can also first determine whether the environment is as needed for the job, and if not, complain verbosely+helpfully and exit
----

== Improve existing tests

=== tests.reproducible-builds.org

==== General website

* btw, www.reproducible-builds.org is 404…
* install cbfstool in the diffoscope schroots (useful for openwrt+coreboot):
** 'git clone --recursive http://review.coreboot.org/p/coreboot.git ; cd coreboot/util/cbfstool ; make ; cp cbfstool $TARGET/usr/local/bin/'
* See https://wiki.debian.org/ReproducibleBuilds/TestsToDo for the tests.reproducible-builds.org related ToDo list.

=== Debian reproducible builds

* make reproducible_build.sh rock solid again and get rid of "set -x # # to debug diffoscope/schroot problems"
** pass the scheduled version to the remote build nodes and abort the build there if the version to be built is different
*** the current implementation does this backwards: it first builds and *then* checks on the main node whether the built version was right… to make things worse, this check is also broken…
** add a check whether the package to be built has been blacklisted since scheduling, and abort
** on SIGTERM, also clean up on the remote build nodes! (via ssh &)
** check rbuild logs for "DIFFOSCOPE='E: Failed to change to directory /tmp: Permission denied'" and deal with those
* higher prio:
** re-enable the disorderfs setup, check that it *always* unmounts + cleans up nicely
** pkg pages:
*** new table in the pkg/test history page: schedule - whether that package is currently scheduled
*** add a link to the pkg set(s) if the pkg is a member of some
** link pkg sets and issues, that is: at least show packages without issues on pkg set pages, maybe also some issues which need actions (like uninvestigated test failures)
** use schroot tarballs (gzipped), moves are atomic then
** notes related:
*** #786396: classify issues by "toolchain" or "package" fix needed: show bugs which block a bug
*** new page with annotated packages without categorized issues (and probably without bugs as the only note content too, else there are too many)
*** new page with packages that have notes with comments (which are often useful / contain solutions / low-hanging fruit for newcomers)
*** new page with notes that don't make sense: a) packages which are reproducible but should not be, packages that build but shouldn't, etc.
*** new page with packages which are reproducible on one arch and unreproducible on another arch (in the same suite, so unstable only atm)
*** new page with packages which ftbfs on one arch and build fine on another arch (in the same suite, so unstable only atm)
*** new page with packages which ftbfs in testing but build fine on sid
*** new page with packages which are orphaned but have a reproducible usertagged patch
*** new page showing arch:all packages which are cross-reproducible, and those which are not
** new pages: r.d.n/$maintainer-email redirecting to r.d.n/maintainers/unstable/${maintainer-email}.html, showing the unreproducible packages for that address, and a sunny "yay, thank you" summary for those with only reproducible packages.
** new page: "open bugs with patches, sorted by maintainers" page and to the navigation, make those NMUable bugs visible ** improve ftbfs page: list packages without bugs and notes first ** mattia: .py scripts: UDD or any db connection errors should either be retried or cause an abort (not failure!) of the job ** bin/_html_indexes.py: bugs = get_bugs() # this variable should not be global, else merely importing _html_indexes always queries UDD ** once firefox 48 is available: revert 1b4dc1b3191e3623a0eeb7cacef80be1ab71d0a2 / grep for _js and remove it… * lesser prio ** scheduler: check if there have been more than X failures or depwait in the last Y hours and if so unschedule all packages, disable scheduling and send a mail informing us. ** check that cleanup of old diffscope schroots on armhf+amd64 nodes works.... ** check that /srv/workspace/pbuilder/ is cleaned up properly ** rewrite bin/schroot-create.sh from scratch, with little sudo. ** pkg sets related: *** add new pkg set: torbrowser-build-depends *** fix essential set: currently it only has the ones explicitly marked Essential:yes; they and their dependencies make up the full "essential closure set" (sometimes also called pseudo-essential) *** replace bin/reproducible_installed_on_debian.org with a proper data provider from DSA, eg https://anonscm.debian.org/git/mirror/debian.org.git/plain/debian/control *** reproducible_create_meta_pkg_sets uses schroot created by dpkg_setup_schroot_jessie job (outside of reproducible job space...) ** "fork" etc/schroot/default into etc/schroot/reproducible ** a reproducible_log_grep_by_sql.(py|sh) would be nice, to only grep in packages with a certain status (build in the last X days) ** database issues *** remove the rescheduling reason from the DB, that's really not needed *** stats_build table should have package ids, not just src+suite+arch as primary key *** move "untested" field in stats table too? (as in csv output...) *** new status: "timeout" - for when packages fail to build due to the max build time timeout ** blacklist script should tell if a package was already blacklisted. also proper options should be used... ** maintenance.sh: delete the history pages once a page has been removed from all suites+archs ** reproducible.debian.net rename: rgrep all the files… ** debbindiff2diffoscope rename: do s#dbd#ds#g and s#DBD#DS#g and rename dbd directories? ** diffoscope needs to be run on the target arch... (or rather: run on a 64bit architecture for 64bit architectures and on 32bit for 32 bit archs), this should probably be doable with a simple i386 chroot on the host (so using qemu-static to run it on armhf should not be needed, probably.) ** support for arbitrary (to be implemented) Debian-PPAs and external repos, by just giving a source URL ** once stabilized notification emails should go through the package tracker. The 'build' keyword seems to be the better fit for this. To do so just send the emails to dispatch@tracker.debian.org, setting "X-Distro-Tracker-Package: foo" and "X-Distro-Tracker-Keyword: build". This way people wanting to subscribe to our notification don't need to ask us and can do that by themselves. ** repo-comparison: check for binaries without source ** issues: currently only state of amd64 is shown. it would be better to display packages as unreproducible if they are unreproducible on any architecture. 
==== reproducible Debian armhf

* make the systems send mail, use port 465

==== reproducible Debian installation

* see https://wiki.debian.org/ReproducibleInstalls
* run this as a new job

==== reproducible non-Debian tests: the new host for the 398 day variation is unused

* pb-build4 is not used yet
* locations in the code which need to be changed:
** ARCHLINUX_BUILD_NODE=profitbricks-build3-amd64
** RPM_BUILD_NODE=profitbricks-build3-amd64
** grep for profitbricks-build3-amd64, there's more
* IOW: these tests should use it:
** coreboot
** netbsd
** fedora
** archlinux
** (fdroid)
** (openwrt|lede-project)

==== reproducible coreboot

* add more variations: domain+hostname, uid+gid, USER, UTS namespace
* build the docs?
* also build with payloads: x86 uses seabios as default, arm boards don't have a default. grub is another payload, and there are these: bayou, coreinfo, external, filo, libpayload, nvramcui - and these config options:
** CONFIG_PAYLOAD_NONE=y
** CONFIG_PAYLOAD_ELF is not set
** CONFIG_PAYLOAD_LINUX is not set
** CONFIG_PAYLOAD_SEABIOS is not set
** CONFIG_PAYLOAD_FILO is not set
** CONFIG_PAYLOAD_GRUB2 is not set
** CONFIG_PAYLOAD_TIANOCORE is not set
* libreboot ships images, verify those?
* explain the status in plain English
* use disorderfs for the 2nd build

==== reproducible OpenWrt

* add credit for logo/artwork
* build more archs (http://downloads.openwrt.org/chaos_calmer/15.05-rc1/ lists many to choose from)
* incorporate popular third-party ("external feeds") packages?
* explain the status in plain English
* use disorderfs for the 2nd build

==== reproducible NetBSD

* explain the status in plain English
** explain that MKREPRO is set to "yes"
** explain that MKREPRO_TIMESTAMP is set to $SOURCE_DATE_EPOCH
* use disorderfs for the 2nd build (see the sketch below)
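The "use disorderfs for the 2nd build" items above could look roughly like this sketch; the paths are placeholders:

----
# sketch: expose the source tree through disorderfs for the second build,
# so readdir() returns entries in non-deterministic order; paths are placeholders
mkdir -p /srv/workspace/disorderfs-mountpoint
disorderfs --shuffle-dirents=yes /path/to/source-tree /srv/workspace/disorderfs-mountpoint
# … run the 2nd build inside /srv/workspace/disorderfs-mountpoint …
fusermount -u /srv/workspace/disorderfs-mountpoint
----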
==== reproducible FreeBSD

* useful improvements:
** investigate how to use tmpfs on FreeBSD and build there, see mdmfs(8)
** find a way to be informed about updates and keep it updated - see 'freebsd-update cron' and 'pkg audit'. The latter is run by periodic(8) as part of the nightly root@ emails.
** modify PATH, uid, gid and USER too, and host+domainname as well. The VM is only used for this, so we could change the host+domainname temporarily between builds too.
** add the freebsd vm as a node to jenkins and run the script directly there, saves a lot of ssh hassle
** run diffoscope natively
* random notes, to be moved to README:
** we build FreeBSD 10.1 (=released) atm
** we build with sudo too
*** rather not change /usr/obj to be '~jenkins/obj' and build with WITH_INSTALL_AS_USER. Also do not build in /usr/src. If so, we need to define some variable so we can do so… but we need a stable path anyway, so what's the point.
*** maybe build as user in /usr/src…
* first build world, later build ports (pkg info…)
* document how the freebsd build VM was set up:
** base 10.1 install following https://www.urbas.eu/freebsd-10-and-profitbricks/
** modified files:
*** /etc/rc.conf
*** /etc/resolv.conf
*** /boot/loader.conf.local
** pkg install screen git vim sudo denyhosts munin-node
** adduser holger
** adduser jenkins (with bash as default shell)
** mkdir -p /srv/reproducible-results
** chown -R jenkins:jenkins /srv/
* system maintenance:
** upgraded the VM to FreeBSD 10.3
*** done with: freebsd-update upgrade -r 10.2
*** and with: freebsd-update upgrade -r 10.3
*** and with: freebsd-update upgrade -r 11.0 in screen
**** hangs after reboot…
**** need to run after reboot: '/usr/sbin/freebsd-update install'

==== reproducible Fedora

* make sure the pages meet https://fedoraproject.org/wiki/Design/Requirements and ask the web design team for help by filing a ticket as described there
* '/var/cache/mock/fedora-23-x86_64/' has three subdirs we need to handle (put on tmpfs, monitor size, clean sometimes): ccache, root_cache and yum_cache
* '/var/lib/mock' should be put on /srv/workspace aka tmpfs
* don't hardcode 23 in reproducible_setup_mock.sh and …build_rpm.sh
* setup script:
** mock --clean just uninstalls the chroot, but it'll still be rebuilt next time using the cache. You can delete the caches from /var/cache/mock/ or touch the mock config
** is /etc/yum/repos.d/fedora.repo really needed?
** hosts/pb-build3/etc/yum/repos.d/* is really not sooo good but works…
* build script:
** clean up the mock cache between the two builds: --scrub=all might be too much, but what's sensible (or is it --scrub=all after all)?
** no variations introduced yet:
*** use '-j$NUM_CPU' and 'NEW_NUM_CPU=$(echo $NUM_CPU-1|bc)'
*** modify TZ, LANG, LC_ALL, umask
* other bits:
** use the modified rpmbuild package from dhiru
** verify gpg signatures (via /etc/mock/)
** one day we will want to schedule all 17k source packages in fedora…
* build rawhide too (once fedora-23 builds nicely), releasever=rawhide
* more notes:
** https://fedoraproject.org/wiki/Using_Mock_to_test_package_builds
** http://miroslav.suchy.cz/blog/archives/2015/05/28/increase_mock_performance_-_build_packages_in_memory/index.html
** manually create a fedora chroot using rpm, wget + yum: http://geek.co.il/2010/03/14/how-to-build-a-chroot-jail-environment-for-centos

==== reproducible Arch Linux

* describe the archlinux setup…!
* maintenance job:
** check for archlinux schroot sessions which should not be there and delete them. Complain if that fails.
** properly clean schroot sessions, check on pb3…
* setup_archlinux_schroot job:
** needs to be made idempotent
** needs to download the bootstrap.tar.gz sig and verify it (see the sketch after this list)
** once this has been done, run it more often than once a year
* arch build.sh:
** introduce more variations: USER
** confirm the others are really working
** on SIGTERM, also ssh to the remote host and clean up there! (via ssh &)
* put the results in a db
** graph the results
* idea: when a package has been updated, reschedule its reverse build-depends too
** (for that we need to detect updated packages first)
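A rough sketch of the bootstrap tarball verification; the mirror URL and the version below are placeholders, not the values the job should actually use:

----
# placeholder mirror and version, for illustration only
MIRROR=https://mirror.example.org/archlinux/iso/2017.01.01
TARBALL=archlinux-bootstrap-2017.01.01-x86_64.tar.gz
curl -fsSO "$MIRROR/$TARBALL"
curl -fsSO "$MIRROR/$TARBALL.sig"
# needs the Arch Linux release signing key in the keyring
gpg --verify "$TARBALL.sig" "$TARBALL"
----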
----
notes on source and binary versions:

tar-1.28.tar.xz (source) -> tar-1.28-1-x86_64.pkg.tar.xz (binary)

$PKG/PKGBUILD has:
pkgname=tar
pkgver=1.28 # sometimes this is calculated and not greppable, so PKGBUILD has to be sourced (in a safe environment…)
pkgrel=1
----

==== reproducible fdroid

* reproducible_setup_fdroid_buildserver.sh:
** ./jenkins-build-makebuildserver
*** this script should probably be live-patched to use ftp.de.debian.org instead of ftp.uk.d.o
*** manually added the jenkins user to the vboxdrv group
*** this downloads a base debian image and all Android tarballs (SDK, NDK, Gradle…)
*** then enters the image, installs all debian packages and Android stuff from the cached tarballs
** the cache is kept outside ('~/.cache/fdroidserver') but installed inside
** '~/.cache/fdroidserver' needs to be cleaned from time to time…
* reproducible_build_fdroid_apk.sh:
** 1st run: ./fdroid build some.app:vercode --server
** 2nd run: ./fdroid build some.app:vercode --server
*** e.g.: org.fdroid.fdroid:98006
*** or: "fdroid build -l org.fdroid.fdroid" to build the latest
** run diffoscope on the results
* let fdroidserver.git trigger the setup job
* make the setup job fail when a build job is running
* make the build job wait ("forever") when a setup job is running
* later: get a list of all available apps by listing fdroiddata/metadata/*.txt
* later: switch reproducible_build_fdroid_apk.sh to the F-Droid 'Verification Server'
* also see https://f-droid.org/wiki/page/Build_Server_Setup
* diskspace needs:

----
$ du -hd1 | sort -h | tail -n 6
4.4G    ./android-sdk-linux_86
8.1G    ./fdroidserver
8.3G    ./.vagrant.d
71G     ./VirtualBox VMs
150G    ./fdroiddata
242G    .
----

==== reproducible qubes

* don't forget: rm holger@profitbricks-build3-amd64:qubes-test/
* don't forget: rm userContent/qubes/q(1|2) on jenkins
* add the qubes test on pb3 / t.r-b.o

----
git clone https://github.com/qubesos/qubes-builder
make get-sources BUILDERCONF=scripts/travis-builder.conf COMPONENTS=installer-qubes-os
export DIST_DOM0=fc23
export USE_QUBES_REPO_VERSION=3.2
export INSTALLER_KICKSTART=/tmp/qubes-installer/conf/travis-iso.ks
make qubes iso BUILDERCONF=scripts/travis-builder.conf VERBOSE=0 COMPONENTS=installer-qubes-os
----

* pb3 has been modified manually: apt install createrepo python-yaml
* once this iso is being tested, it will be interesting to build the Qubes templates as well, as those images (Qubes templates are images) will be copied onto the installation iso. The above iso is a stripped down iso without templates… (and not the real thing)

==== reproducible guix

* there's no "apt-get install", because of non-FHS conformance, but see https://www.gnu.org/software/guix/download/
* there's a privileged build daemon, which is needed to perform fully isolated builds, see https://www.gnu.org/software/guix/manual/html_node/Build-Environment-Setup.html#Build-Environment-Setup
* it's a bit of work to set up, but all the steps are documented; the "binary installation" method is the easiest.
* Manolis wrote:

----
There are two ways to install guix, through prebuilt binaries or through the source.

*Binary installation: Go to , grab the tarball and follow the instructions there.

*Source installation: First make sure you have the dependencies mentioned at installed.
Then download Guix's source from ftp://alpha.gnu.org/gnu/guix/guix-0.9.0.tar.gz
and use the usual ./configure && make && make install

After you have Guix built, you need to create the build-users and have the
guix-daemon run as root, as described here .
Keep in mind that the guix-daemon must always run as root.

*Testing if everything works: Now just run `guix package -i vim` as a non-root
user. If it runs correctly, Guix is ready for work.
----
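Once guix is installed as described above, a future reproducible_guix job could probably lean on guix's own tooling; a rough sketch (the package name is just an example, and availability of these options depends on the guix version):

----
# build a package twice and fail if the two outputs differ
guix build --rounds=2 hello
# compare the locally built store items against the substitute server(s)
guix challenge hello
----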
==== reproducible...

* openembedded.org!
* Gentoo?

=== qa.debian.org*

* udd-versionskew: explain the jobs in README
* udd-versionskew: also provide arch-relative version numbers in the output too

=== d-i_manual*

* d-i_check_jobs.sh: a check for removed manuals (which still have jobs) is missing
* svn:trunk/manual/po triggers the full build, it should trigger language specific builds.
* svn:trunk/manual is all that's needed, not the whole svn:trunk

=== d-i_build*

* d-i_check_jobs.sh: a check for removed packages (which still have jobs) is missing
* build packages using jenkins-debian-glue and not with the custom scripts used today?
* run scripts/digress/ ?

=== chroot-installation_*

* use schroot for chroot-installation, stop using plain chroot everywhere
** https://anonscm.debian.org/git/mirror/dsa-puppet.git/tree/modules/schroot
** https://anonscm.debian.org/git/mirror/dsa-puppet.git/tree/modules/porterbox/files/dd-schroot-cmd
** https://gitweb.torproject.org/project/jenkins/tools.git/tree/slaves/linux/build-wrapper
* add alternative tests with aptitude and possibly apt
* split etc/schroot/default
* inform debian-devel@l.d.o or -qa@?
* warn about installed transitional packages (on non-upgrades only)
* install all the tasks "instead", that's rather easy nowadays as all task packages are called "task*".
** make sure this includes blends

=== g-i-installation_*

Development of these tests has stopped. In the future the 'lvc*' tests (see below) should replace them.

These small changes are probably still worth doing anyway:

* g-i: replace '--' with '---' as param delimiter, see #776763 / 5df5b95908 in d-e-c
* download .isos once to a central place
** /var/lib/jenkins/jobs/g-i-installation_*/workspace/*iso needs 53 GB currently, it could be 30 GB less
* g-i_presentation: use preseeding files on jenkins.d.n and not hands.com
* turn job-cfg/g-i.yaml into .yaml.py

=== torbrowser-launcher_*

* fix the "schroot session cleanup loop" in _common.sh to ignore other schroots (see the sketch at the end of this section)
* test tbl in German
* test tbl on i386
* test alpha releases
** edit the '~/.config/torbrowser/settings' file and modify the latest_version setting
** get the version from '~/.cache/torbrowser/download/RecommendedTBBVersions'
** (warning: on update checks these files are written again…)
* fix the broken screenshot while a job is running, via an apache redirect
* notifications should go somewhere public, after a while of testing.
* run this in qemu and enable apparmor too? -> create new tests for apparmor first :)
** extend setup_schroot.sh to also set up virtual hard drives, see http://diogogomes.com/2012/07/13/debootstrap-kvm-image/
** install linux, grub and copy the test script and ssh keys onto the fs
** configure apparmor
** boot qemu
** ssh into the vm and run the script as usual
** touch -d "$(date -u -d '25 hours ago' '+%Y-%m-%d %H:%M')" $FILE
*** repeat the test…
* run the upstream tests on stable and sid too: https://trac.torproject.org/projects/tor/wiki/doc/TorBrowser/Hacking#QAandTesting ?
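A sketch of how that cleanup loop could be restricted to its own sessions; the "tbb-launcher" session name prefix is a made-up example, the real prefix used by the jobs may differ:

----
# illustration only: end leftover sessions of *this* test while leaving
# sessions belonging to other jobs alone ("tbb-launcher" is a made-up prefix)
for session in $(schroot --all-sessions --list | grep "^session:tbb-launcher") ; do
    schroot --end-session -c "$session" || echo "could not end $session" >&2
done
----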
=== lvc, work in progress, just started

* pick LANG from a predefined list at random - if the last build was not successful or unstable, fall back to English
** these jobs would not need to do an install, just booting them in rescue mode is probably enough
* for edu mainservers running as servers for workstations etc: "d-i partman-auto/choose_recipe select atomic" to be able to use smaller disk images
** same use case: -monitor none -nographic -serial stdio
* a d-i page showing all characters, to be called from rescue mode in all alphabets, and then do "rectangle detection" to spot bad/incomplete fonts
* put this on debian isos too?: config/chroot_local-includes/lib/live/config/9999-autotest
* to debug cucumber: --verbose --backtrace --expand
* search case-insensitively for tails+tor+amnesia

== Further ideas...

=== rebuild sid completely on demand

* nthykier wants to be able to rebuild all of sid to test how changes to e.g. lintian, debhelper, cdbs, gcc affect the archive:
* h01ger> | nthykier: so a.) rebuild everything from sid plus custom repo. b.) option to only rebuild a subset, like all rdepends or all packages build-depending on something
* h01ger> | and c.) only build once, not continuously and d.) enable more cores+ram on demand to build faster
* have a job to trigger such a rebuild on AWS?

=== Test them all

* build packages from all team repos on alioth with jenkins-debian-glue on team request (e.g. via a .txt file in a git repo) for specific branches (which shall also be automated, e.g. to only have the jessie+sid branches built, but not all other branches.)

== Debian Packaging related

This setup should come as a Debian source package...

* /usr/sbin/jenkins.debian.net-setup needs to be written
* what update-j.d.n.sh does needs to be put elsewhere...
* debian/copyright is incorrect about some licenses:
** the profitbricks+debian+jenkins logos
** the preseeding files
** ./feature/ is gpl3

// vim: set filetype=asciidoc: