 
DC_POSTGRES_IMAGE=postgres:12-alpine
DC_SERVER_IMAGE=lavasoftware/lava-server:2020.01
DC_DISPATCHER_IMAGE=lavasoftware/lava-dispatcher:2020.01
DC_DISPATCHER_HOSTNAME=lava-dispatcher
DC_LAVA_LOGS_HOSTNAME=lava-logs
DC_LAVA_MASTER_HOSTNAME=lava-master
DC_LAVA_MASTER_ENCRYPT=
DC_SOCKS_PROXY=
DC_MASTER_CERT=
DC_SLAVES_CERT=
http_proxy=
https_proxy=
ftp_proxy=
all:
docker-compose pull
docker-compose build
docker-compose up
lava-dispatcher:
docker-compose build lava-dispatcher
docker-compose up lava-dispatcher
clean:
docker-compose rm -vsf
docker volume rm -f lava-server-pgdata lava-server-joboutput lava-server-devices lava-server-health-checks worker-http worker-tftp
.PHONY: all lava-dispatcher clean
docker-compose
==============
docker-compose file to set up an instance of **lava-server** and/or **lava-dispatcher**
from scratch. In this setup, every service will be running in a separate container.
Usage
-----
In order to start the containers, run:

    docker-compose build
    docker-compose up
docker-compose will spawn a container for each service:
* lava-master
* lava-logs
* lava-publisher
* lava-dispatcher
* ser2net
* tftpd
* dispatcher-webserver
* ganesha-nfs
All the services will be connected to each other.
docker-compose will also create some volumes for:
* health-checks
* job outputs
* PostgreSQL data
* dispatcher httpd
* dispatcher tftpd
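These named volumes persist across container restarts. As an illustration, they can be inspected on the host with (assuming the default volume names used by this compose file):

```
docker volume ls --filter name=lava-server
```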
Standalone dispatcher container
-------------------------------
## Configuration (simple, for QEMU purposes)
All configuration is stored in the `.env` file. Some of the steps are required
while others are optional.
* Change `DC_LAVA_MASTER_HOSTNAME` and `DC_LAVA_LOGS_HOSTNAME` to the
`<server_name>` which points to the running LAVA master instance.
* (optional) set `DC_LAVA_MASTER_ENCRYPT` to `--encrypt` if the master instance
is using encryption for master-slave communication.
* (optional) [Create certificates](https://validation.linaro.org/static/docs/v2/pipeline-server.html#create-certificates) on the slave.
`sudo /usr/share/lava-dispatcher/create_certificate.py foo_slave_1`
This can be done in two ways:
* by running `docker exec -it docker-compose_lava-dispatcher_1 bash`
(for this to work you'd need to build and run the containers first - see
below).
* alternatively you can create the certificates on a system which already has
the LAVA packages installed.
* (optional) Copy the public certificate from the master and the private slave
certificate created in the previous step to the `dispatcher/certs/` directory
of this project. Currently the key names should be the default ones
(`master.key` and `slave.key_secret`).
* Execute `make lava-dispatcher`; at this point multiple containers should be
up and running and the worker should connect to the LAVA server instance of
your choosing.
* Add a new device and set its device template (alternatively, you can update
an existing device to use this new worker).
Example QEMU device template:
```
{% extends 'qemu.jinja2' %}
{% set mac_addr = 'DF:AD:BE:EF:33:02' %}
{% set memory = 1024 %}
```
You can do this via [XMLRPC](https://validation.linaro.org/api/help/#scheduler.devices.set_dictionary), [lavacli](https://docs.lavasoftware.org/lavacli/) or [REST API](https://staging.validation.linaro.org/api/v0.2/devices/staging-qemu01/dictionary/) (if using version 2020.01 and higher).
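As an illustration, the device could be registered and the dictionary above uploaded with lavacli; the identity name `myserver`, device name `staging-qemu01`, worker name and file name `qemu01.jinja2` are placeholders, and the exact flags should be checked against `lavacli devices add --help` for your version:

```
lavacli -i myserver devices add --type qemu --worker lava-dispatcher staging-qemu01
lavacli -i myserver devices dict set staging-qemu01 qemu01.jinja2
```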
* (optional) If the lab where this container runs is behind a proxy, or you
require any specific worker environment settings, you will need to set the
[worker environment](https://docs.lavasoftware.org/lava/proxy.html#using-the-http-proxy)
accordingly. You can do this via this [XMLRPC API call](https://validation.linaro.org/api/help/#scheduler.workers.set_env).
In case the worker sits behind a proxy, you will also need to set
`DC_SOCKS_PROXY=--socks-proxy <address>:<port>` in the `.env` configuration file.
Furthermore, you will need to add proxy settings to the `.env` file for
docker resource downloads (the `http_proxy`, `https_proxy` and `ftp_proxy`
environment variables).
**Note:** If the master instance is behind a firewall, you will need to create
port forwarding rules so that ports 5555 and 5556 are reachable from the outside.
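Putting the steps above together, a `.env` for a worker behind a proxy might look like the following sketch; the server name, proxy addresses and ports are all placeholders:

```
DC_LAVA_MASTER_HOSTNAME=lava.example.com
DC_LAVA_LOGS_HOSTNAME=lava.example.com
DC_LAVA_MASTER_ENCRYPT=--encrypt
DC_SOCKS_PROXY=--socks-proxy proxy.example.com:1080
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
ftp_proxy=http://proxy.example.com:3128
```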
## Configuration (advanced, for physical DUT purposes)
Make sure you went through the basic configuration first; it is mandatory for
this step. In order to run test jobs on physical devices, a couple of
additional setup steps are needed:
* PDU control:
* The dispatcher docker container will already download PDU scripts from the
[lava-lab repo](https://git.linaro.org/lava/lava-lab.git/) which you can use
in device configuration. If you use custom PDU scripts, you need to provide
them by copying them into the `dispatcher/power-control` directory; they will
be copied into `/root/power-control` in the container.
* If you need SSH keys for PDU control, copy the private key to the
`dispatcher/ssh` directory and install the public key on the PDU.
* SSH config - if there's a need for a specific SSH configuration (such as
tunnel passthrough, proxy, strict host key checking, KexAlgorithms etc.),
create the config file with the relevant settings and copy it into the
`dispatcher/ssh` directory; it will be copied to `/root/.ssh` on the
dispatcher container.
* ser2net config - update `ser2net/ser2net.conf` with the corresponding
serial port and device settings.
* Update/add [device dictionary](https://docs.lavasoftware.org/lava/glossary.html#term-device-dictionary) with power commands and connection command
* Add dispatcher_ip setting to the [dispatcher configuration](https://validation.linaro.org/api/help/#scheduler.workers.set_config). Alternatively you can use
[REST API](https://lava_server/api/v0.2/workers/docker_dispatcher_hostname/config/) if you are using version 2020.01 or higher:
* `dispatcher_ip: <docker host ip address>`
* Disable/stop the rpcbind service on the host machine if it's running; the
docker nfs service will need port 111 available on the host.
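For example, on a systemd-based host the rpcbind service and its socket can be stopped and kept from restarting with:

```
sudo systemctl disable --now rpcbind.service rpcbind.socket
```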
## Running
In order to start the containers, run:

    docker-compose build lava-dispatcher
    docker-compose up lava-dispatcher
or, alternatively:
    make lava-dispatcher
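Once the containers are up, one way to verify that the worker has connected to the master is to watch the dispatcher logs:

```
docker-compose logs -f lava-dispatcher
```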
ARG image=lavasoftware/lava-dispatcher:latest
FROM ${image}
ARG extra_packages=""
RUN apt-get -q update
RUN DEBIAN_FRONTEND=noninteractive apt-get -q -y install software-properties-common nfs-common
RUN apt-add-repository non-free
RUN apt-get -q update
RUN DEBIAN_FRONTEND=noninteractive apt-get -q -y install ${extra_packages} net-tools snmp snmp-mibs-downloader
RUN download-mibs
# Add MIBs
RUN mkdir -p /usr/share/snmp/mibs/
ADD https://download.schneider-electric.com/files?p_enDocType=Firmware+-+Released&p_File_Name=powernet428.mib&p_Doc_Ref=APC_PowerNetMIB428 /usr/share/snmp/mibs/powernet428.mib
# Add certificates.
COPY certs/* /etc/lava-dispatcher/certificates.d/
# Add ssh config.
COPY ssh/* /root/.ssh/
# Add power control scripts.
COPY power-control/* /root/power-control/
# Add lab scripts
RUN mkdir -p /usr/local/lab-scripts/
ADD https://git.linaro.org/lava/lava-lab.git/plain/shared/lab-scripts/snmp_pdu_control /usr/local/lab-scripts/
RUN chmod a+x /usr/local/lab-scripts/snmp_pdu_control
ADD https://git.linaro.org/lava/lava-lab.git/plain/shared/lab-scripts/eth008_control /usr/local/lab-scripts/
RUN chmod a+x /usr/local/lab-scripts/eth008_control
ENTRYPOINT ["/root/entrypoint.sh"]
restart: unless-stopped
lava-dispatcher:
image: ${DC_DISPATCHER_IMAGE}
build:
context: ./dispatcher
args:
image: ${DC_DISPATCHER_IMAGE}
extra_packages: "linux-image-amd64 curl"
depends_on:
- lava-dispatcher-webserver
- lava-dispatcher-tftpd
- lava-dispatcher-ser2net
- lava-dispatcher-nfs
hostname: worker0
devices:
- /dev/kvm # needed for QEMU
- /dev/net/tun # needed for QEMU
- NET_ADMIN # needed for QEMU
- SYS_ADMIN # needed for usb mass storage
environment:
DISPATCHER_HOSTNAME: "--hostname=${DC_DISPATCHER_HOSTNAME}"
LOGGER_URL: "tcp://${DC_LAVA_LOGS_HOSTNAME}:5555"
MASTER_URL: "tcp://${DC_LAVA_MASTER_HOSTNAME}:5556"
ENCRYPT: "${DC_LAVA_MASTER_ENCRYPT}"
SOCKS_PROXY: "${DC_SOCKS_PROXY}"
MASTER_CERT: "${DC_MASTER_CERT}"
SLAVES_CERT: "${DC_SLAVES_CERT}"
http_proxy: "${http_proxy}"
https_proxy: "${https_proxy}"
ftp_proxy: "${ftp_proxy}"
volumes:
- /run/udev/control:/run/udev/control:ro # libudev expects it for udev events
- /boot:/boot:ro
- /lib/modules:/lib/modules:ro
- '/dev/bus:/dev/bus:ro' # required for USB devices
- '/dev/serial:/dev/serial:ro' # required for serial adapters
- '/dev/disk:/dev/disk:ro' # required for SDMux
- worker-http:/var/lib/lava/dispatcher/tmp
- worker-tftp:/srv/tftp
lava-dispatcher-webserver:
build:
context: ./httpd
ports:
- 80
volumes:
- worker-http:/var/lib/lava/dispatcher/tmp
lava-dispatcher-tftpd:
build:
context: ./tftpd
environment:
http_proxy: "${http_proxy}"
https_proxy: "${https_proxy}"
ftp_proxy: "${ftp_proxy}"
ports:
- 69:69/udp
volumes:
- worker-tftp:/srv/tftp
lava-dispatcher-ser2net:
build:
context: ./ser2net
environment:
http_proxy: "${http_proxy}"
https_proxy: "${https_proxy}"
ftp_proxy: "${ftp_proxy}"
privileged: true
volumes:
- '/dev/serial:/dev/serial' # required for serial adapters
- '/dev:/dev'
devices: []
ports:
- 7101:7101
lava-dispatcher-nfs:
build:
context: ./nfs
environment:
http_proxy: "${http_proxy}"
https_proxy: "${https_proxy}"
ftp_proxy: "${ftp_proxy}"
privileged: true
volumes:
- worker-http:/var/lib/lava/dispatcher/tmp
ports:
- 111:111
- 111:111/udp
- 2049:2049
- 2049:2049/udp
- 35543:35543
volumes:
db-data:
name: lava-server-health-checks
joboutput:
name: lava-server-joboutput
worker-http:
worker-tftp:
FROM httpd
COPY httpd.conf /usr/local/apache2/conf/httpd.conf
ServerRoot "/usr/local/apache2"
Listen 80
LoadModule mpm_event_module modules/mod_mpm_event.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule reqtimeout_module modules/mod_reqtimeout.so
LoadModule filter_module modules/mod_filter.so
LoadModule mime_module modules/mod_mime.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule env_module modules/mod_env.so
LoadModule headers_module modules/mod_headers.so
LoadModule setenvif_module modules/mod_setenvif.so
LoadModule version_module modules/mod_version.so
LoadModule unixd_module modules/mod_unixd.so
LoadModule status_module modules/mod_status.so
LoadModule autoindex_module modules/mod_autoindex.so
LoadModule dir_module modules/mod_dir.so
LoadModule alias_module modules/mod_alias.so
<IfModule unixd_module>
User daemon
Group daemon
</IfModule>
ServerAdmin you@example.com
#ServerName www.example.com:80
<Directory />
AllowOverride none
Require all denied
</Directory>
DocumentRoot "/var/lib/lava/dispatcher/tmp"
<Directory "/var/lib/lava/dispatcher/tmp">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
</Directory>
<IfModule dir_module>
DirectoryIndex index.html
</IfModule>
<Files ".ht*">
Require all denied
</Files>
ErrorLog /proc/self/fd/2
LogLevel warn
<IfModule log_config_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
CustomLog /proc/self/fd/1 common
</IfModule>
<IfModule headers_module>
RequestHeader unset Proxy early
</IfModule>
<IfModule mime_module>
TypesConfig conf/mime.types
AddType application/x-compress .Z
AddType application/x-gzip .gz .tgz
</IfModule>
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
FROM debian:stable
RUN apt-get update
RUN apt-get install -y nfs-common nfs-ganesha nfs-ganesha-vfs
ADD start-nfs-ganesha.sh /
ADD ganesha.conf /etc/ganesha/
ADD services /etc/
CMD ["/start-nfs-ganesha.sh"]
EXPORT {
Export_Id = 1;
Path = /var/lib/lava/dispatcher/tmp;
Pseudo = /var/lib/lava/dispatcher/tmp;
Transports = UDP,TCP;
Disable_ACL = TRUE;
Protocols = 3,4;
Access_type = RW;
Squash = No_Root_Squash;
FSAL {
name = VFS;
}
}
NFS_Core_Param {
NSM_Use_Caller_Name = true;
Clustered = false;
MNT_Port = 35543;
}
sunrpc 111/tcp portmapper # RPC 4.0 portmapper
sunrpc 111/udp portmapper
ganesha.nfsd 2049/tcp # Network File System
ganesha.nfsd 2049/udp # Network File System
#!/bin/sh
init_services() {
echo "* Starting rpcbind"
mkdir -p /run/sendsigs.omit.d/
service rpcbind start
echo "* Starting nfs-common"
service nfs-common start
echo "* Starting dbus"
mkdir -p /var/run/dbus
chmod 755 /var/run/dbus
rm -f /var/run/dbus/*
rm -f /var/run/messagebus.pid
dbus-uuidgen --ensure
dbus-daemon --system --fork
sleep 1
}
init_services
exec /usr/bin/ganesha.nfsd -F -L /dev/stdout -f /etc/ganesha/ganesha.conf
FROM debian:stable
# install ser2net package
RUN apt-get update
RUN apt-get install -y --no-install-recommends ser2net
ADD ser2net.conf /etc/
CMD echo -n "Starting " && ser2net -v && ser2net -d -c /etc/ser2net.conf
# example:
#7001:telnet:0:/dev/serial/by-id/usb-FTDI_TTL232R-3V3_FT914B60-if00-port0:115200 8DATABITS NONE 1STOPBIT LOCAL
FROM debian:stable
# install tftpd-hpa package
RUN apt-get update
RUN apt-get install -y tftpd-hpa
CMD in.tftpd -L --user tftp -a 0.0.0.0:69 -s -B1468 -v /srv/tftp