
Public Continuous Integration and Automated Test

(For the first implementation of some of these plans, see Go Continuous Integration Server)

Background

We are working to integrate several hundred upstream software components on an
ongoing basis. The components are maintained by many people, in many countries.
Some of the maintainers are affiliated with us, but most are not. For a few of
the projects we can influence or even dictate the tools and processes to be
used, but for most projects we cannot.

The integrations we create are expected to work for a range of target devices,
on a range of computer architectures. These devices and architectures are
themselves expected to change and evolve over time. In many cases the
maintainers of upstream components have little or no direct access to our
target devices and/or architectures.

We hope that our integrations will be of long-term benefit to our community.
Some community members are interested in adopting our integrations directly and
may refresh each time we release a new update. Other members may fork a given
version and make their own changes. After forking, they may want to back-port
some or all of our subsequent changes; alternatively, they may prefer to
forward-port their own changes to our newer releases at a later time.

Objectives

In this context we are aiming to improve the integration work we contribute to the community. This includes:

- reducing the effort and time required to integrate changes from upstream
- increasing the reliability, reproducibility and traceability of our work
- identifying problems in our integrations and fixing them
- identifying problems in upstream components which are exposed by our
integration work
- helping upstream to address problems as they are identified, by offering
well-described bugs, suggesting fixes, offering and/or reviewing patches, and
testing the improved software

As a result we are considering how best to establish and maintain service
infrastructure that can automate the integration and build process and support
automated testing of the resulting build artifacts.

 

First pass at architecture

Requirements

Known requirements include (see the sketch after this list):

  • handle multiple 'recipes' (for versions and variants) on an ongoing basis
  • keep source code reliably and traceably (for compliance) from hundreds of
    upstream repositories
  • build on demand (eg when integrating for a release)
  • build on a schedule (eg nightly builds)
  • build for multiple architectures (eg ARMv7, ARMv8, x86_64, x86_32)
  • build for multiple target devices (eg a set of devboards)
  • publish build artifacts
  • comply with license requirements regarding source for published artifacts
  • deploy to a range of target devices
  • run a set of tests on the targets (and re-use tests for various recipes)
  • report the results of all of the above via a public-facing web API/UI
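
As an illustration only, the sketch below shows one hypothetical way these
requirements could be captured as data: a small matrix of recipes,
architectures and target devices that is expanded into individual jobs, each
running the same fetch/build/publish/deploy/test/report stages. The names
(Recipe, Target, Job, expand_jobs) and the example entries are assumptions made
for this sketch, not part of any existing tool.

    # Hypothetical sketch in Python; all names and fields are illustrative.
    from dataclasses import dataclass, field
    from itertools import product
    from typing import List

    @dataclass
    class Recipe:
        name: str                 # an integration 'recipe' (version/variant)
        sources: List[str]        # upstream repositories to mirror for compliance
        architectures: List[str]  # e.g. armv7, armv8, x86_64, x86_32
        schedule: str = "nightly" # or "on-demand"

    @dataclass
    class Target:
        name: str                 # a devboard or other target device
        architecture: str

    @dataclass
    class Job:
        recipe: str
        architecture: str
        target: str
        stages: List[str] = field(default_factory=lambda: [
            "fetch-sources",  # mirror upstream source, record exact revisions
            "build",          # build for the given architecture
            "publish",        # publish artifacts plus the corresponding source
            "deploy",         # install the artifacts on the target device
            "test",           # run the shared test set on the target
            "report",         # push results to the public-facing interface
        ])

    def expand_jobs(recipes: List[Recipe], targets: List[Target]) -> List[Job]:
        """Expand the recipe/architecture/target matrix into concrete jobs."""
        return [Job(r.name, t.architecture, t.name)
                for r, t in product(recipes, targets)
                if t.architecture in r.architectures]

    if __name__ == "__main__":
        recipes = [Recipe("example-release",
                          ["git://example.org/component.git"],
                          ["armv7", "x86_64"])]
        targets = [Target("devboard-a", "armv7"),
                   Target("builder-x86", "x86_64")]
        for job in expand_jobs(recipes, targets):
            print(job.recipe, job.architecture, job.target)

Whichever build service ends up driving this, keeping the matrix as plain data
would make scheduled and on-demand builds, new architectures and new devboards
a matter of editing the data rather than the pipeline definitions.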

Despite the widespread adoption of continuous integration and automated test
(CIAT) approaches, our specific situation is not well served by off-the-shelf
solutions.

What do other projects do?

Interesting Candidate Solutions
For the CI/build process...

For the validation/test process (a tool-agnostic job sketch follows this list):

  • LAVA (Linaro Automated Validation Architecture)
  • LTP (Linux Test Project)
  • LTSI test suite
  • JTA (Jenkins Test Automation)
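
Whichever of these candidates is chosen, the tests themselves need to be
described in a way that can be re-used across recipes, architectures and
targets (one of the requirements above). As a hypothetical sketch only, the
structure below shows what such a tool-agnostic description might contain; the
field names and URL are assumptions for illustration and do not follow the job
formats of LAVA, LTP or JTA.

    # Hypothetical, tool-agnostic test job description (Python); field names
    # are assumptions, not the native format of LAVA, LTP or JTA.
    test_job = {
        "recipe": "example-release",              # which integration was built
        "artifact": "images/example-armv7.img",   # what to deploy
        "target": "devboard-a",                   # where to deploy and run
        "deploy": {"method": "flash-image"},
        "tests": [
            {"suite": "ltp", "subset": "syscalls"},  # e.g. a subset of LTP
            {"suite": "integration-smoke"},          # project-specific checks
        ],
        "report": {"publish_to": "https://ciat.example.org/results"},  # placeholder
    }

    def to_backend(job: dict) -> str:
        """Translate the generic description into whichever backend job format
        (for example a LAVA job definition) is eventually selected."""
        raise NotImplementedError("no backend chosen yet")

The indirection is deliberate: the same test description can then be fed to
whichever backend is selected, and re-used as recipes and targets change.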

Proposed approach

... re-use as much as we can to get started.
