[RFC] lava_scheduler_app: Handle exceptions from Device.is_valid(), etc.

In the previous commit, the Device.is_valid() and Device.load_configuration()
methods were made to raise exceptions, so that any issues with device
configuration are propagated up to their "business logic" callers,
where they can be reported to the user with appropriate context
and detail.

This commit of the series makes the corresponding changes, updating
most callers of those methods to expect and handle exceptions.
Where exceptions were already handled, the scope of handling was
broadened to catch any exception (as indeed, the set of possible
exceptions may change as the application evolves).

In rare cases, handling still consists of ignoring the exception, e.g.
in Device.get_extends() (arguably, if a device configuration is invalid,
it is fair to say that it has no Jinja "extends" directive).
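As a hedged sketch of that pattern (the function shape and the regex-based
template parsing are illustrative only, not the actual LAVA implementation):

```python
import re

def get_extends(load_configuration):
    """Return the template a device configuration "extends", or None.

    `load_configuration` stands in for Device.load_configuration(),
    which may now raise on an invalid configuration.
    """
    try:
        source = load_configuration()
    except Exception:
        # Invalid configuration: treat it as having no "extends".
        return None
    match = re.search(r"""{%-?\s*extends\s+['"]([^'"]+)['"]""", source)
    return match.group(1) if match else None
```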

In most other cases, handling means at least logging the exception via
an appropriate logger. These cases won't be visible to an end user, but
will at least be visible to an administrator or first-time installer.
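A minimal sketch of that log-and-continue pattern, with the broad
`except Exception` mirroring the widened handling scope described above
(function and logger names are assumptions, not actual LAVA code):

```python
import logging

logger = logging.getLogger("lava-scheduler")

def check_device(is_valid, hostname):
    """Validate a device, logging any failure instead of crashing.

    `is_valid` stands in for Device.is_valid(), which may raise on
    an invalid device configuration.
    """
    try:
        is_valid()
        return True
    except Exception as exc:
        # Visible to the administrator or first-time installer in the
        # logs, not to the end user.
        logger.error("Device %s has invalid configuration: %s: %s",
                     hostname, type(exc).__name__, exc)
        return False
```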

However, in cases where an audit trail of operations is written to the
database, information from the exception is included in that audit
trail, so it reaches users and helps debugging.
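The idea can be sketched as follows (real LAVA audit records are Django
model rows, so this string-building helper is illustrative only):

```python
def audit_entry(action, exc=None):
    """Build an audit-trail message, embedding exception details.

    Both the exception type name and its str() are included, so the
    entry is self-explanatory to users reading the trail later.
    """
    if exc is None:
        return action
    return "%s (failed: %s: %s)" % (action, type(exc).__name__, exc)
```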

For some web views, exceptions aren't handled explicitly, assuming that
the default webapp exception handler will deal with them (at least
logging them and reporting a generic error to the user). Such cases
should be re-reviewed by the core team, and better handling possibly added.

One last note: sadly, there is no consistency in how different
exception classes implement their str() and repr() operations. For a
particular exception, either of these may miss important information.
For example, jinja2's TemplateNotFound has a str() of just the template
filename (so, on its own, it wouldn't be clear what the issue with
that filename is), while its repr() is just "TemplateNotFound()" (no
filename included), so we need to dump both the exception type name and
its str(). If this patchset is accepted, that needs to be done
consistently (it currently isn't), perhaps by introducing a helper
function like format_exc().
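Such a helper could be as small as the following sketch (the name
format_exc is the one proposed above; it is not yet part of the codebase):

```python
def format_exc(exc):
    """Render an exception with both its type name and str().

    Neither alone is reliable: e.g. jinja2's TemplateNotFound puts the
    template filename only in str(), while its repr() omits it entirely.
    """
    return "%s: %s" % (type(exc).__name__, exc)
```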

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Pipeline results:

  Status   Job ID   Runner      Name                  Duration
  Test:
  passed   #106273  amd64       dispatcher-debian-10  00:03:43
  passed   #106275  amd64       dispatcher-debian-11  00:04:02
  failed   #106274  amd64       server-debian-10      00:04:10
  failed   #106276  amd64       server-debian-11      00:06:35
  Analyze:
  skipped  #106277  amd64-dind  bandit-sast (allowed to fail)
  skipped  #106278  amd64       black
  skipped  #106279  amd64       code_quality
  skipped  #106284  amd64       codespell
  skipped  #106280  amd64       coverage
  skipped  #106281  amd64       dockerfiles
  skipped  #106283  amd64       pylint
  skipped  #106282  amd64       schemas
  Build:
  skipped  #106286  amd64       debian/10
  skipped  #106285  amd64       doc
  skipped  #106287  amd64-dind  docker-amd64-dispatcher
  skipped  #106288  amd64-dind  docker-amd64-server
Failure details:

failed: server-debian-11 (Test stage)
  -- Docs: https://docs.pytest.org/en/latest/warnings.html
  ------------- generated xml file: /builds/pfalcon/lava/server.xml --------------
  ============= 4 failed, 613 passed, 636 warnings in 267.80 seconds =============
  Uploading artifacts for failed job
  common.xml: found 1 matching files and directories
  server.xml: found 1 matching files and directories
  Uploading artifacts as "junit" to coordinator... ok
  id=106276 responseStatus=201 Created token=tfWY8T9s
  ERROR: Job failed: exit code 1

failed: server-debian-10 (Test stage)
  -- Docs: https://docs.pytest.org/en/latest/warnings.html
  ============== 4 failed, 613 passed, 4 warnings in 153.20 seconds ==============
  Uploading artifacts for failed job
  common.xml: found 1 matching files and directories
  server.xml: found 1 matching files and directories
  Uploading artifacts as "junit" to coordinator... ok
  id=106274 responseStatus=201 Created token=prUU3McH
  ERROR: Job failed: exit code 1