ToDo for jenkins.debian.net
===========================
:Author: Holger Levsen
:Authorinitials: holger
:EMail: holger@layer-acht.org
:Status: working, in progress
:lang: en
:Doctype: article
:Licence: GPLv2
== About jenkins.debian.net
See link:https://jenkins.debian.net/userContent/about.html["about jenkins.debian.net"] for a general description of the setup. Below is the current TODO list, which is long and probably incomplete too. The link:https://jenkins.debian.net/userContent/contributing.html[preferred form of contribution] is patches via pull requests.
== Fix user submitted bugs
* There are link:https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=jenkins;users=qa.debian.org%40packages.debian.org["bugs filed against the pseudopackage 'qa.debian.org' with usertag 'jenkins'"] in the BTS which would be nice to see fixed soon, as some people actually care about them.
== General ToDo
* replace amd64 in scripts with $HOSTARCH (see the sketch after this list)
* extend /etc/rc.local to do cleanup of lockfiles
* explain in README how to write jobs, eg which paths are on tmpfs
* fix apache ssl configuration as hinted by eg https://sslcheck.globalsign.com/en/sslcheck?host=jenkins.debian.net#78.137.96.196
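A minimal sketch of the $HOSTARCH item above: derive the architecture once instead of hardcoding "amd64" (the MIRROR_PATH variable is only an illustration, not one of our scripts' names):
----
# minimal sketch: derive the architecture instead of hardcoding "amd64"
HOSTARCH=$(dpkg --print-architecture)
# example use, replacing a hardcoded value like "binary-amd64"
MIRROR_PATH="dists/sid/main/binary-${HOSTARCH}/"
----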
=== proper backup
* gpg encrypted to some keys (a minimal sketch follows after this list)
* run on alioth or paradis
* '/var/lib/jenkins/jobs' (the results - the configs are in .git)
* '/var/lib/munin'
* '/var/log'
* '/root/' (contains etckeeper.git)
* '/var/lib/jenkins/reproducible.db' (is backed up manually)
* '/srv/jenkins.debian.net-scm-sync.git' (is backed up manually)
* '/var/lib/jenkins/plugins/*.jpi' (can be derived from jdn-scm-sync.git)
* '/etc/.git' and '/etc'
* postpone til we run on .debian.org?
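A minimal sketch of what the encrypted backup could look like (the key ID and target host are placeholders, nothing has been decided yet):
----
#!/bin/bash
# minimal backup sketch: tar up the paths listed above and encrypt to a gpg key
# KEY and TARGET are placeholders
KEY=0xDEADBEEF
TARGET=backup.example.org:/srv/backups/jenkins.debian.net
tar -czf - \
    /var/lib/jenkins/jobs \
    /var/lib/munin \
    /var/log \
    /root \
  | gpg --encrypt --recipient "$KEY" \
  | ssh "${TARGET%%:*}" "cat > ${TARGET#*:}/jenkins-$(date +%Y%m%d).tar.gz.gpg"
----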
=== TODO for testing stretch
Most jobs have been converted, a few are left to do:
* add g-i tests for stretch
* add stretch live-builds
* do lvc for stretch too
* mention stretch in README where appropriate
=== move this setup to jenkins.d.o
The plan is to run a jenkins.d.o host which is maintained by DSA, while we maintain jenkins on it (so we can install any plugins we like, etc). Then we also set up several jenkins slaves, probably/maybe also maintained by DSA (so we get them into their munin), but on which we can use sudo as we need it. (Or maybe not DSA-maintained slaves, so that we can use sudo as we need, at the price of not being in DSA's munin.)
==== next steps for jenkins.d.o migration
* h01ger: sit down with weasel at cccamp or debconf and document current state and discuss next steps
** probably do (at least) the 2nd step at debconf so others can participate
* old plan:
** weasel/h01ger: install jenkins.deb from jenkins-ci.org
*** also create jenkins users in jenkins (KISS)
** h01ger: get slaves: wishlist for starting: 3 slaves, 8 cores, 32gb ram, 150gb hd space if we don't need squid3 on them, 200gb if we do.
*** install slaves - how (to automate)?
** install jenkins-job-builder "somehow"; currently "only" a package from fil exists (but it needs build-depends that are not even in sid yet)
** update DNS to point jenkins.d.o to jerea.d.o
*** the existing jenkins.d.o host needs to be renamed to something else (that's "just work" to do, but not a major obstacle)
==== unsorted notes for jenkins.d.o migration
* sudoers.d/jenkins:
** not suitable for jenkins.d.o, thus we will run all tests on slaves, where DSA doesn't care what we do
* livescreenshot plugin (we use a patched version) is ok:
** jenkins maintenance probably is best done by jenkins users (as opposed to DSA) so it's up to us what plugins we install
* munin monitoring of the slaves
** DSA's munin configuration is auto-generated by puppet, so the slaves should become .d.o hosts too, to be included
** or install another munin instance, just to monitor the slaves, on jenkins itself
* chroot jobs should use real schroot sessions, and not just use schroot as a poor chroot(8) replacement. some links (and a minimal sketch after this list):
** https://anonscm.debian.org/cgit/mirror/dsa-puppet.git/tree/modules/schroot
** https://anonscm.debian.org/cgit/mirror/dsa-puppet.git/tree/modules/porterbox/files/dd-schroot-cmd
** https://gitweb.torproject.org/project/jenkins/tools.git/tree/slaves/linux/build-wrapper
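A minimal sketch of what "real schroot sessions" could look like in a job (the chroot name and the test command are just examples):
----
# minimal sketch: use one named schroot session instead of one-shot schroot calls
SESSION=$(schroot --begin-session -c jessie)
schroot --run-session -c "$SESSION" --directory /tmp -- apt-get update
schroot --run-session -c "$SESSION" --directory /tmp -- ./run-the-actual-test
schroot --end-session -c "$SESSION"
----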
=== To be done once jenkins.d.n runs jessie
* replace the bin/setsid.py workaround with setsid from the util-linux package in jessie (see the sketch after this list)
* bin/g-i-installation: use lvcreate without --virtualsize
* check if the sudo workaround in bin/g-i-installation is still needed: 'guestmount -o uid=$(id -u) -o gid=$(id -g)' would be nicer, but it doesn't work: as root, the files seem to belong to jenkins, but as jenkins they cannot be accessed.
* install botch from jessie-backports once it's available (and remove botch from the reproducible-unstable schroot)
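For the setsid item above, a minimal sketch (util-linux >= 2.22, as in jessie, has the --wait option; the long-running command is a placeholder):
----
# minimal sketch: setsid from jessie's util-linux can wait for the child,
# which is what bin/setsid.py currently emulates
setsid --wait sleep 60   # "sleep 60" stands in for the actual job command
----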
=== To be done once bugs are fixed
* link:https://bugs.debian.org/767260[#767260] workaround in bin/d-i_build.sh (console-setup doesn't support parallel build)
* link:https://bugs.debian.org/767032[#767032] manual fix in etc/munin/plugins/munin_stats
* link:https://bugs.debian.org/767100[#767100] work in progress in etc/munin/plugins/cpu
* link:https://bugs.debian.org/767018[#767018] work in progress in etc/munin/plugins/iostat_ios
* link:https://bugs.debian.org/774685[#774685] workaround in bin/reproducible_create_meta_pkg_sets.sh
=== jenkins-job-builder related
* use fil's package: http://ftp.hands.com/hands-deb/pool/main/j/jenkins-job-builder/jenkins-job-builder_1.2.0~git-0~1.gbp1ae299.dsc
** needs at least some of the stuff on fil's j.d.n.git/needs-jjb-1.2.0 branch to work with that
==== old jenkins-job-builder TODO, mostly (all?) obsolete thanks to fil's work:
* use jessie version plus h01ger's patches from kali
* change of syntax:
----
properties:
  - priority-sorter:
      priority: 150
----
* this seems to be helpful: http://en.wikipedia.org/wiki/YAML#References (PyYAML, which jenkins-job-builder uses, supports them)
* cleanup h01ger's patches (eg add documentation) and send pull requests on github:
** publisher:logparse
** publisher:htmlpublisher
** svn:scm upstreamed at https://review.openstack.org/#/c/192095/
** wrappers:live-screenshot upstreamed at https://review.openstack.org/#/c/191708/
** image-gallery: https://review.openstack.org/#/c/175747/ supersedes h01ger's patch: https://review.openstack.org/#/c/191950/
** sidebar: upstreamed at https://review.openstack.org/#/c/191585/
=== livescreenshot plugin
* publish forked livescreenshot plugin and send pull request for h01ger's bugfix
** see ssh://git.debian.org/git/users/holger/livescreenshot-plugin.git and 0b407b70025 there
== lvc, work in progress, just started
* put this on debian isos too: config/chroot_local-includes/lib/live/config/9999-autotest
* add another (smaller) test: download+run torbrowser daily
* re-read the docs!
** http://live.debian.net/manual/stable/html/live-manual.en.html#321
* generate feature files from templates, to cope with sub-products?
** no: detect the desktop type and set variables accordingly
** even simpler: pass an environment variable with the type
* get iso
* tables for looping through features: see tails/iuk.git/features/download_target_file/Download_Target_File.feature
* to debug cucumber: --verbose --backtrace --expand
* drop / remove
* can probably go: dhcp.rb firewall_leaks.rb dhcp.feature firewall_leaks.feature
* more occurrences of "the computer boots Tails"
* @source (only keep product tests)
* disabled stuff in common_steps.rb
** #if @vm.execute("service tor status").success?
* "I set sudo password" not needed for debianlive nor debian(edu):
** #@screen.wait("TailsGreeterAdminPassword.png", 20)
* $misc_files_dir needed?
* def sort_isos_by_creation_date (Dir.glob("#{Dir.pwd}/*.iso").sort_by {|f| tails_iso_creation_date(f)}) is useless for us: its purpose is to automatically select the latest iso if none is given
* search case-insensitively for tails+tor+amnesia (see the grep sketch below)
* put in update_jdn.sh:
----
addgroup tcpdump
dpkg-statoverride --update --add root tcpdump 754 /usr/sbin/tcpdump
setcap CAP_NET_RAW+eip /usr/sbin/tcpdump
adduser $USER tcpdump
adduser $USER libvirt
adduser $USER libvirt-qemu
----
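The case-insensitive search for tails+tor+amnesia mentioned above could probably be as simple as (run from the lvc working copy):
----
# minimal sketch: find remaining mentions of tails/tor/amnesia, case-insensitively
grep -riE '\b(tails|tor|amnesia)\b' .
----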
== Improve existing tests
=== reproducible
* higher prio:
** fix https://jenkins.debian.net/munin/debian.net/jenkins.debian.net/jenkins_builds.html which has been broken since the jessie upgrade
** repo-comparison: check for binaries without source
** document (in README) the multihost setup
* lesser prio
** more graphs:
*** graph average build duration by day
*** graph packages in testing+unstable which need to be fixed
** reproducible_create_meta_pkg_sets uses schroot created by dpkg_setup_schroot_jessie job (outside of reproducible job space...)
** "fork" etc/schroot/default into etc/schroot/reproducible
** move "untested" field in stats table too? (as in csv output...)
** new page: packages which are orphaned but have a reproducible usertagged patch
** a reproducible_log_grep_by_sql.(py|sh) would be nice, to only grep in packages with a certain status (built in the last X days)
** replace submit form by one without javascript (maybe with more url rewriting)
** when a package is automatically rescheduled because the mirror was updated between the two builds, there will be three rbuild logs in one. that's confusing; the first one should be dropped.
** adopt usertag script from pkg-apparmor to notify us about new usertagged bugs automatically
* notes related
** #786396: classify issue by "toolchain" or "package" fix needed: show bugs which block a bug
** new page with annotated packages without categorized issues
** new page with packages that have notes with comments (which are often useful / contain solutions / low-hanging fruits for newcomers)
** new page with notes that don't make sense: packages which are reproducible but should not be, packages that build but shouldn't, etc.
*** isn't that covered by reproducible_breakages.py already? no.
* pkg sets related:
** fix essential set: currently it only has the packages explicitly marked Essential:yes; they and their dependencies make up the full "essential closure set" (sometimes also called pseudo-essential) - see the sketch after this list
** replace bin/reproducible_installed_on_debian.org with a proper data provider from DSA, eg https://anonscm.debian.org/cgit/mirror/debian.org.git/plain/debian/control
* missing tests:
** variation in kernel
** variation in date
** prebuilder does (user) group variation like this: https://anonscm.debian.org/cgit/reproducible/misc.git/tree/prebuilder/pbuilderhooks/A02_user
** different cpu type: Opteron_G3 (AMD Opteron 23xx, "Gen 3 Class Opteron") is the most powerful type that's different from the current Opteron_G4
** variation of $TERM and $COLUMNS (and maybe $LINES): unset in the first run, set to "linux" and "77" (and maybe "42") in the 2nd run. maybe vary $SHELL too.
*** actually TERM is set to "linux" by default already, and COLUMNS is unset
* status of remote build nodes for amd64
** profitbricks-build1-amd64 is set up
*** squid needs proper configuration
** profitbricks-build2-amd64 is set up
*** squid needs proper configuration
*** should run in the future: +1d+1m+1y
* profitbricks-build2-amd64 will also host a VM (profitbricks-build3-amd64), so we will be running kvm on kvm there
** networking doesn't work yet, so profitbricks-build3-amd64 isn't accessible atm - help welcome!
* enable people to upload test packages, to be built in jenkins:
----
<mapreri> h01ger: another wild future request by me: allowing us to upload something and let jenkins test it. rationale: I sent (another) patch for debian-keyring, to fix a timestamp issue in debian control files (due to not_using_dh-builddeb), but there is also a umask issue. I don't want to bother me to setup the very same things jenkins tests locally (I already did too much in this regards, imho), but really people can't tests everything
<mapreri> jenkins tests.
<h01ger> mapreri: please add the feature request to the todo. i'm thinking now that it maybe should just be a jenkins job not integrated into the rp.d.n webui, but... maybe we find a nice way to do it
<mapreri> h01ger: I'm instead thinking about a repo defining a reproducible-specific suite or something on that line, that integrates well with the current setup. but this is really something wild.
<h01ger> well, and everybody in debian-keyring from sid can uplood? :)
<mapreri> that would be wonderful.
----
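A rough sketch of how the full "essential closure set" mentioned in the pkg sets item above could be computed (just an idea using dctrl-tools and apt-cache, not existing code):
----
# rough sketch: the "essential closure" = Essential:yes packages plus everything
# they depend on (sometimes called pseudo-essential)
ESSENTIAL=$(grep-aptavail -n -s Package -F Essential yes | sort -u)
apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts \
  --no-breaks --no-replaces --no-enhances $ESSENTIAL \
  | grep -E '^[a-z0-9]' | sort -u
----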
==== design for reproducible remote building
* open questions:
** save build-host in build_duration table too? (and change to saving the time of a single build, not both combined)
* reproducible_build.sh behaviour change:
** called without param: behave as always
** called with a single param, "1" or "2": do first or second build (as specified below)
** called with two params: $node1 and $node2 where the build should happen.
* reproducible_build.sh (with two params) will still always be run on the main node, that is the one holding reproducible.db, so jenkins.d.n atm
* job definitions:
** reproducible_build_amd64_1 runs "reproducible_build.sh profitbricks-build1-amd64 profitbricks-build2-amd64" # 8 core machines with 32gb ram
** reproducible_build_amd64_2 runs "reproducible_build.sh profitbricks-build1-amd64 profitbricks-build2-amd64"
** reproducible_build_amd64_3 runs "reproducible_build.sh profitbricks-build1-amd64 profitbricks-build2-amd64"
** reproducible_build_amd64_4 runs "reproducible_build.sh profitbricks-build1-amd64 profitbricks-build2-amd64"
** reproducible_build_amd64_5 runs "reproducible_build.sh profitbricks-build1-amd64 profitbricks-build2-amd64"
** reproducible_build_amd64_6 runs "reproducible_build.sh profitbricks-build2-amd64 profitbricks-build1-amd64"
** reproducible_build_amd64_7 runs "reproducible_build.sh profitbricks-build2-amd64 profitbricks-build1-amd64"
** reproducible_build_amd64_8 runs "reproducible_build.sh profitbricks-build2-amd64 profitbricks-build1-amd64"
** reproducible_build_amd64_9 runs "reproducible_build.sh profitbricks-build2-amd64 profitbricks-build1-amd64"
** reproducible_build_amd64_10 runs "reproducible_build.sh profitbricks-build2-amd64 profitbricks-build1-amd64"
** reproducible_build_armhf_1 runs "reproducible_build.sh wbq0-armhf-rb bpi0-armhf-rb" # wbq0 and cbxi4pro0 are the quad cores
** reproducible_build_armhf_2 runs "reproducible_build.sh wbq0-armhf-rb cbxi4pro0-armhf-rb" # with 2gb ram and the other two
** reproducible_build_armhf_3 runs "reproducible_build.sh cbxi4pro0-armhf-rb hb0-armhf-rb.debian.net" # have dual cores with 1gb ram
** reproducible_build_armhf_4 runs "reproducible_build.sh cbxi4pro0-armhf-rb wbq0-armhf-rb"
* then we have a new script, reproducible_info.sh which just outputs key-value pairs, like "ARCH=armhf", DATETIME="Mo 10. Aug 11:56:22 CEST 2015" and "TZ=UTC" and whatever.
** this script is run on all nodes, but each run is triggered by a single job running on the main node (jenkins atm), so the results can be captured in /srv/reproducible-results/node-information/$NODE and then eg be used by reproducible_html_dashboard.sh to create the table with the differences between 1st and 2nd build...
** /srv/reproducible-results/node-information/$NODE could also be read by reproducible_build.sh to determine the dpkg-architecture a node is capable of building, but I think we also want that info to be encoded in the build job names, so probably there's no need to read it...
* how to build remotely, some terms and remarks (a condensed shell sketch of these steps follows at the end of this section):
** main node = the one running reproducible_build.sh with two params, so jenkins.d.n atm
** node = generic term for node1 or node2
** please note the difference between /srv/workspace/reproducible-builds and /srv/workspace/reproducible-results and /srv/workspace/reproducible-builds/$NODE and /srv/workspace/reproducible-builds/pending and whether these are on a node or on the main node.
1. reproducible_build.sh on the main node determines what to build,
2. downloads the sources and puts the sha256sum of the .dsc file into /srv/workspace/reproducible-builds/$NODE/$SUITE/$ARCH/$PKG/$VERSION on the main node,
3. throws the source files away,
4. scps /srv/workspace/reproducible-builds/$NODE/$SUITE/$ARCH/$PKG/$VERSION to the node where the 1st build should happen,
5. runs "ssh $NODE1 /srv/jenkins/bin/reproducible_build.sh 1",
6. this causes the 1st build, which downloads the sources as specified in /srv/workspace/reproducible-builds/$NODE/$SUITE/$ARCH/$PKG/$VERSION, compares the sha256sum, builds the package, copies the result to /srv/workspace/reproducible-results/$NODE/$SUITE/$ARCH/$PKG/$VERSION and exits,
7. reproducible_build.sh on the main node then tries to scp the result from $NODE:/srv/workspace/reproducible-results/$NODE...,
8. reproducible_build.sh on the main node then triggers the 2nd build in the same way as the 1st,
9. voilà.
* more open questions:
** i believe "reproducible_build.sh 1" should immediately move /srv/workspace/reproducible-builds/$NODE/$SUITE/$ARCH/$PKG/$VERSION to /srv/workspace/reproducible-builds/pending/$SUITE/$ARCH/$PKG/$VERSION (on the node), so it's only built once and so that we can detect stale builds
** maintenance in general: cleanup of started but interrupted builds...
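A condensed shell sketch of the steps above, as seen from the main node; the values and helper invocations only follow the description above, nothing here is the actual implementation:
----
#!/bin/bash
# condensed sketch of the remote-building steps, run on the main node
SUITE=unstable ARCH=amd64 PKG=foo VERSION=1.0-1        # example values
NODE1=profitbricks-build1-amd64
NODE2=profitbricks-build2-amd64
BASE=/srv/workspace/reproducible-builds
# 1.+2. pick the package, download the source, record the .dsc checksum
mkdir -p $BASE/$NODE1/$SUITE/$ARCH/$PKG
apt-get source --download-only --only-source ${PKG}=${VERSION}
sha256sum ${PKG}_*.dsc > $BASE/$NODE1/$SUITE/$ARCH/$PKG/$VERSION
# 3. throw the source files away again
rm -f ${PKG}_*
# 4.+5. hand the checksum file to the 1st node and trigger the 1st build there
ssh $NODE1 mkdir -p $BASE/$NODE1/$SUITE/$ARCH/$PKG
scp $BASE/$NODE1/$SUITE/$ARCH/$PKG/$VERSION $NODE1:$BASE/$NODE1/$SUITE/$ARCH/$PKG/
ssh $NODE1 /srv/jenkins/bin/reproducible_build.sh 1
# 7. fetch the result back from the node
scp -r $NODE1:/srv/workspace/reproducible-results/$NODE1/$SUITE/$ARCH/$PKG/$VERSION \
    /srv/reproducible-results/
# 8. repeat the same dance with $NODE2 for the 2nd build
----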
==== reproducible Debian armhf
* then: include armhf in index_scheduled
* then: armhf scheduling (only for testing for now; introduce an "arch factor", eg amd64=10 and armhf=3, to schedule less on some archs; the limits (when to schedule) should also differ between archs)
** call_apt_update() needs to be moved from _scheduler.py to the jobs so it's run on all build nodes
* then: armhf building - run the job on jenkins.d.n, do build1 on one host and build2 on another, and then run debbindiff on jenkins.d.n…
* monitor their temperatures via munin via ssh:
** http://munin-monitoring.org/wiki/Native_ssh
** http://guide.munin-monitoring.org/en/latest/example/transport/ssh.html
* make systems send mail
* change the date on the 2nd build hosts (and disable systemd-timesyncd) - see the sketch below
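A minimal sketch of that step, to be run on the 2nd-build hosts only (the offset is just an example value):
----
# minimal sketch: stop time syncing and move the clock into the future
systemctl stop systemd-timesyncd.service
systemctl disable systemd-timesyncd.service
date --set="$(date -d '+1 year +1 month +1 day' '+%Y-%m-%d %H:%M:%S')"
----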
==== reproducible Debian installation
* see https://wiki.debian.org/ReproducibleInstalls
* add the test (something weekly or so)
==== reproducible coreboot
* add more variations: domain+hostname, uid+gid, USER, UTS namespace (see the sketch after this list)
* build the docs?
* also build with payloads. x86 uses seabios as the default, arm boards don't have a default. grub is another payload, and so are these: bayou coreinfo external filo libpayload nvramcui - and:
** CONFIG_PAYLOAD_NONE=y
** CONFIG_PAYLOAD_ELF is not set
** CONFIG_PAYLOAD_LINUX is not set
** CONFIG_PAYLOAD_SEABIOS is not set
** CONFIG_PAYLOAD_FILO is not set
** CONFIG_PAYLOAD_GRUB2 is not set
** CONFIG_PAYLOAD_TIANOCORE is not set
* libreboot ships images, verify those?
* explain status in plain english
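A minimal sketch of the hostname/domainname/USER variation for the 2nd build, using a separate UTS namespace ("make" stands in for the real coreboot build steps, the names are arbitrary):
----
# minimal sketch: vary hostname, domainname and USER for the 2nd build only
sudo unshare --uts -- /bin/sh -c '
  hostname i-capture-the-hostname
  domainname i-capture-the-domainname
  export USER=builduser2
  make
'
----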
==== reproducible openwrt
* add credit for logo/artwork
* build more archs (http://downloads.openwrt.org/chaos_calmer/15.05-rc1/ lists many to choose from)
* build all packages? (set CONFIG_ALL=y and run 'make defconfig', see the sketch after this list)
** just build some first...
* file a dbd bug about being unable to inspect these .bin files
* file a dbd bug about crashing on certain squashfs files
* explain status in plain english
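Building all packages would roughly look like this in the openwrt job (a sketch, untested):
----
# sketch: enable all packages in the OpenWrt buildroot, then expand the config
echo 'CONFIG_ALL=y' >> .config
make defconfig
make -j "$(nproc)"
----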
==== reproducible netbsd
* announce on their list
* explain status in plain english
** MKREPRO is set to "yes"
==== reproducible fedora
* use mock to create a fedora chroot to build in (see the sketch after this list)
** http://blog.packagecloud.io/eng/2015/05/11/building-rpm-packages-with-mock/
** http://blog.packagecloud.io/eng/2015/04/20/working-with-source-rpms/
* start with building a single package
* then build the full base system (100-500 packages)
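A minimal sketch of the mock-based approach (the fedora release/arch and the source rpm name are just examples):
----
# minimal sketch: create a fedora chroot with mock and rebuild one src.rpm in it
mock -r fedora-22-x86_64 --init
mock -r fedora-22-x86_64 --rebuild bash-4.3.42-1.fc22.src.rpm --resultdir ./results
----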
==== reproducible freebsd
* document how the freebsd build VM was set up:
** base 10.1 install following https://www.urbas.eu/freebsd-10-and-profitbricks/
** modified files:
*** /etc/rc.conf
*** /etc/resolv.conf
*** /boot/loader.conf.local
** pkg install screen git vim sudo denyhosts
** adduser holger
** adduser jenkins (with bash as default shell)
** mkdir -p /srv/workspace/chroots/
** mkdir -p /srv/reproducible-results
** chown -R jenkins:jenkins /srv/
** ln -s /srv/ /usr/obj/srv
** we build freebsd 10.1 (=released) atm
** we build with sudo too
*** change /usr/obj to be '~jenkins/obj' and build with WITH_INSTALL_AS_USER ?
* first build world, later build ports (pkg info...)
* investigate how to use tmpfs on freebsd and build there (see the sketch below)
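On FreeBSD that could be as simple as the following (mount point and size are guesses):
----
# sketch, FreeBSD: put the object tree on tmpfs
mount -t tmpfs -o size=16g tmpfs /usr/obj
# or permanently via /etc/fstab:
# tmpfs  /usr/obj  tmpfs  rw,size=16g  0  0
----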
==== reproducible...
* openembedded.org!
* Arch? Gentoo?
=== qa.debian.org*
* udd-versionskew: explain jobs in README
* udd-versionskew: also provide arch-relative version numbers in the output
=== d-i_manual*
* d-i_check_jobs.sh: a check for removed manuals (which still have jobs) is missing
* svn:trunk/manual/po triggers the full build, but should only trigger language-specific builds.
* svn:trunk/manual is all that's needed, not the whole svn:trunk
=== d-i_build*
* d-i_check_jobs.sh: a check for removed packages (which still have jobs) is missing
* build packages using jenkins-debian-glue and not with the custom scripts used today?
* run scripts/digress/ ?
* bubulle wrote: "Another interesting target would be d-i builds *including non uploaded packages* (something like "d-i from git repositories" images). That would in some way require to create a quite specific image, with all udebs (while netboot only has udebs needed before one gets a working network setup)."
=== chroot-installation_*
* use schroot for chroot-installation, stop using plain chroot everywhere
* add alternative tests with aptitude and possibly apt
* split etc/schroot/default
* inform debian-devel@l.d.o or -qa@?
* warn about transitional packages installed (on non-upgrades only)
* install all the tasks "instead"; that's rather easy nowadays as all task packages are called "task*" (see the sketch after this list).
** make sure this includes blends
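A rough sketch of that test step, run inside the chroot (blends metapackages would need to be added to the pattern):
----
# rough sketch: install every task-* package available in the chroot
apt-get update
apt-get install -y $(apt-cache pkgnames task- | sort)
----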
=== g-i-installation_*
Development of these tests has stopped. In future the 'lvc*' tests should replace them.
These small changes are probably still worth doing anyway:
* g-i: replace '--' with '---' as param delimiter. see #776763 / 5df5b95908 in d-e-c
* download .isos once in central place
** /var/lib/jenkins/jobs/g-i-installation_*/workspace/*iso needs 53GB currently, it could be 30GB less
* g-i_presentation: use preseeding files on jenkins.d.n and not hands.com
* turn job-cfg/g-i.yaml into .yaml.py
The following ideas should really only be implemented for the new 'lvc*' tests.... (but are kept here for now)
* pick LANG from a predefined list at random (roughly as sketched below) - if the last build was not successful or unstable, fall back to English
** these jobs would not need to do an install, just booting them in rescue mode is probably enough
* for edu mainservers running as servers for workstations etc: "d-i partman-auto/choose_recipe select atomic" to be able to use smaller disk images
** same usecase: -monitor none -nographic -serial stdio
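Picking LANG at random could look roughly like this (the list of locales is only an example; how exactly the previous build result is detected is left open, PREVIOUS_BUILD_OK is hypothetical):
----
# sketch: pick a random LANG from a predefined list; fall back to English
# if the previous build was not successful or unstable
LANGS=(en_US.UTF-8 de_DE.UTF-8 fr_FR.UTF-8 it_IT.UTF-8 es_ES.UTF-8)
if [ "$PREVIOUS_BUILD_OK" = "true" ] ; then    # PREVIOUS_BUILD_OK is hypothetical
    export LANG=${LANGS[$RANDOM % ${#LANGS[@]}]}
else
    export LANG=en_US.UTF-8
fi
----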
== Further ideas...
=== rebuild sid completely on demand
* nthykier wants to be able to rebuild all of sid to test how changes to eg lintian, debhelper, cdbs, gcc affect the archive:
* h01ger> | nthykier: so a.) rebuild everything from sid plus custom repo. b.) option to only rebuild a subset, like all rdepends or all packages build-depending on something
* h01ger> | and c.) only build once, not continuously and d.) enable more cores+ram on demand to build faster
=== Test them all
* build packages from all team repos on alioth with jenkins-debian-glue on team request (eg via a .txt file in a git repo) for specific branches (which shall also be automated, eg to be able to only have squeeze+sid branches built, but not all other branches)
== Debian Packaging related
This setup should come as a Debian source package...
* /usr/sbin/jenkins.debian.net-setup needs to be written
* what update-j.d.n.sh does, needs to be put elsewhere...
* debian/copyright is incorrect about some licenses:
** the profitbricks+debian+jenkins logos
** the preseeding files
** ./feature/ is gpl3
// vim: set filetype=asciidoc: