Blog from July, 2016

Developer summary

I have pushed updates to the previous Beta release of Renesas R-Car Gen 2 support for the Genivi 10 Yocto Baseline to github [1].

They have been pushed to the new product branch "genivi-10-bsp-1.10.0". This replaces the previous working branch "stevel/genivi-10", which is now deprecated and will be deleted later.

[1] https://github.com/slawr/meta-renesas/tree/genivi-10-bsp-1.10.0
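
For anyone tracking the old working branch, an existing clone can be moved over to the new branch with standard git commands (assuming the remote is named origin):

    git fetch origin
    git checkout -b genivi-10-bsp-1.10.0 origin/genivi-10-bsp-1.10.0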

Developer Notes

In summary, there have been the following major changes:

  • Fix "full path to dts" kernel bitbake build warning
  • Fix "license listed foo was not in the licenses collected" bitbake build warning
  • Various readme updates or improvements
  • Added sample local.conf and bblayers.conf
  • Kernel config changed to receive bootargs from u-boot when a combined uImage+dtb image is used

For those who prefer a graphical diff, a GitHub compare is available here.

The following sections explain some of these in more detail including a migration guide.

Yocto Bitbake Warnings

The updates address two bitbake warnings. The first is a malformed recipe variable that caused bitbake to raise a license warning. The second is a kernel build warning related to the dts/dtb files.
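
For context, the "full path to dts" class of warning is typically raised when a kernel device tree entry is specified with a full path rather than just the file name. The sketch below illustrates that kind of fix only; the variable value and board file shown are assumptions for illustration, not taken from the actual commits:

    # machine/recipe excerpt (illustrative only)
    # before: the full path triggers the bitbake warning
    KERNEL_DEVICETREE = "arch/arm/boot/dts/r8a7791-porter.dtb"
    # after: only the file name is needed
    KERNEL_DEVICETREE = "r8a7791-porter.dtb"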

Migration Guide / Behavior changes

U-boot

The Yocto board machine files, e.g. porter.conf, now default to the setup for Wayland/Weston. This means you no longer need to set this explicitly yourself in your local.conf.

Sample local.conf and bblayers.conf

The Yocto BSP supports multiple boards, so it is impossible to add a single local.conf.sample and bblayers.conf.sample that cover all boards. Examples for the different boards are available upstream, but I can see the value of having a sample within the Yocto BSP itself.

The commit adds a bblayers.conf.sample that should be applicable to all Gen 2 boards. The local.conf.sample is for the Porter board. For other boards, change the MACHINE variable; in addition, for H2 SoC based boards such as Lager, change the MACHINE_FEATURES_append variable to "rgx" rather than "sgx".
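
As an illustration, the kind of local.conf edits described above would look roughly like the excerpt below. This is a sketch only; the exact contents of the shipped sample may differ, and the leading space in the _append value follows the usual bitbake convention:

    # local.conf excerpt for a non-Porter board (illustrative)
    MACHINE = "lager"
    # H2 SoC based boards such as Lager:
    MACHINE_FEATURES_append = " rgx"
    # M2/E2 based boards such as Porter would instead use:
    # MACHINE_FEATURES_append = " sgx"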

Kernel

The kernel config has been changed to accept boot args from u-boot when a combined uImage+dtb kernel image is used.
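
On 32-bit ARM, one common way to let a combined uImage+dtb image pick up the bootargs passed by u-boot is the appended-DTB ATAG compatibility options. Whether these are the exact options changed in this update is an assumption, so treat the fragment below as a sketch only:

    # illustrative kernel config fragment (assumed mechanism)
    CONFIG_ARM_APPENDED_DTB=y
    CONFIG_ARM_ATAG_DTB_COMPAT=y
    CONFIG_ARM_ATAG_DTB_COMPAT_CMDLINE_FROM_BOOTLOADER=y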

Testing

The update has been tested with Genivi Yocto Baseline 10 and the GDP Master branch tag gdp-10.


Over the past few weeks, the GDP delivery team, together with some key contributors, has been working on a not very visible but still important change. The GDP project has created the basis for turning the GDP release-based delivery model into a "rolling" one. My colleagues will provide the technical details behind this change in a coming post. I want to provide a higher-level view of what is happening and why.

Some background


GDP was born as a "demo" project. The main goal was to provide a platform to show the software components for automotive that the different GENIVI Expert Groups were developing. This was done through a delivery model focused on publishing a stable and easy-to-consume version of the project every few months: a major release.


Strictly speaking, the GDP is a derivative. It is based on poky and uses Yocto tools to "create" the Linux based platform, adding the different components developed by the GENIVI Alliance together with upstream software. For the defined purpose, the release centric model works fine, especially if you concentrate your effort on very specific areas of the software stack with a small number of dependencies on the other areas, and a limited number of contributions and environments where the system should work. During 2016, GDP has grown significantly. We have more software, more contributors, more components and more target boards to take care of. Although the above model had not been challenged yet, it was just a matter of time. As explained in two previous posts[1][2], the GDP is moving from being a Demo to a Development Platform. Changing the mission means changing the goals and the target group, which implies the need to adjust the deliverable to meet the new expectations.


So, right after the 14th AMM, the Delivery Team decided to change the delivery model to better meet the new mission, providing developers with the newest possible software under an increasing quality threshold. At the same time, in order to increase the number of contributors, the GDP needs to provide a new, solid platform every once in a while. That should be done through a stable release.

What is a rolling delivery model?

The key idea behind a modern delivery model is to ensure that the transition from one stable release to the next takes an affordable amount of effort. I will use an example to illustrate the idea.

Problem statement


Imagine an organization that publishes one release per year. Let's assume that a particular release included 100 patches developed by employees and that, during the lifetime of the release (also one year), another 100 patches were added to the product as bug fixes and updates. At the end of the release lifetime, the product includes 200 patches that define the value the product provides to customers and users.


Either for technical or business reasons, a year later it is time to upgrade. Our organization has to create a new Linux based system with newer upstream code and integrate the patches from the previous release plus the updates and bug fixes developed for the coming release.


After a simplification process done by engineers, the number of patches that need to be integrated into this newer base system is reduced to 150. The organization also wants to add to this new release another 100 patches that represent the new features developed during the last year for this new version. The delivery team now has to integrate 250 patches into the new base system, 150 of them coming from the previous release. One might think that the effort required to do this is 2.5 times the effort invested in the previous release. Maybe you think the effort is not so high, since some of the patches were developed with the new base system in mind. There are many other considerations like this one that might affect the initial estimation. This example is obviously a simplification.


However, any experienced release manager will tell you that moving patches integrated into an older base system onto a newer one (forward-porting) requires additional effort that grows faster than linearly with the number of patches. Forward-porting is the "road to hell". Iterate this example a few times and you will understand why there are so many organizations out there that have as many people focusing on delivery as they have on development. They migrated to a Linux base system while keeping the traditional delivery model they had used when working with closed source software.

Possible solutions


One of the paths to improve the situation is upstreaming those changes that affect generic components. Some companies also upstream their new features early in their development process, generally looking for wider testing, or after they have been released to customers, to increase adoption and reduce future maintenance effort. This is definitely a must-do.


From the delivery perspective, though, the most popular way to tackle the problem is shortening the release cycle, so the number of patches to forward-port in each release is smaller. The development time and the maintenance cycles are also shorter, and the same applies to the complexity of the forward-porting activities. "Jumping" from one release to the next becomes easier. Add automation of repetitive tasks to this recipe and you feel you have a win... for some time.


The journey through the "road to hell" becomes more comfortable, but our organization is still getting burned, even if our customers and we ourselves can digest releasing frequently. We all know how expensive and stressful a release can become.


The most suitable option for achieving sustainability while scaling up the amount of software an organization can manage, without releasing more often than the market can digest, is to change the delivery model. Rolling delivery models are a serious attempt to solve this problem, making integration, rather than the software itself, the central element.


This model is not new. Gentoo has been doing it forever, but it was Arch Linux that implemented it in a way that immediately attracted the attention of thousands of developers. Still, it was a model with no hope beyond hardcore Linux developers. openSUSE brought this model to a new level by implementing a process whose output was stable enough for a much wider audience, and compatible with producing a more stable distribution and commercial releases. Nowadays there are other interesting examples out there that commercial organizations can learn from.

What is a rolling model?


It is still hard to define, but essentially it is a process in which, ideally, you have one continuous integration (CI) pipeline as the one and only entry point for the software you plan to ship. Releases then become snapshots of all or part of the software already integrated, after going through a specific stabilization, deployment and release process.


So ideally, if you release a portfolio, you integrate only once, significantly reducing the costs of having different engineers working on different versions of the same software and of forward-porting, among other benefits. A rolling delivery model is then a lot more than a continuous integration chain, although that is the key point. Please bear in mind that this is an oversimplification. This description does not go into detail on other key aspects like maintenance cycles, how upstreaming affects the process, strategies for updating the released products, etc.


A transformation process that takes an organization from a release centric model to a rolling one is about doing less and doing it faster, so fewer people can handle more software with less pain, allowing more people to concentrate on creating value, developing new and better software instead of just shipping it.

Back to GENIVI


Moving from a release centric to a rolling model is hard work. Frequently it is easier to start all over again. Since the GDP is still a relatively small project, we can afford to go through the transformation process step by step.


The first stage has been creating that single integration chain and treating GDP-ivi9, our latest release, and those that follow it, as deliverables of what we today call Master. Ideally, no single patch will be added directly to the release branches; they should come from Master. That way, we reduce (ideally to zero) the effort of forward-porting patches while putting the latest software in the hands of our contributors on a regular basis.


To do so, we are in the process of adapting to the new model our simple processes and CI system, the GDP repository structure, the wiki contents, the task management structures, several key policies, and our communication around the project...


The GDP will face a very interesting challenge, since this model needs to be proven successful for a derivative. If we are able to move fast enough, the time will come when we will need to decide whether GDP keeps being a derivative or becomes upstream; that is, either GDP limits its delivery speed to the Poky release cycle, or we work upstream with the Yocto Project to increase our delivery speed. That is a good problem to have, isn't it?


If (almost) everything goes right, after adding a few needed services to GENIVI's infrastructure and ensuring the updated software complies with the selected verification criteria, the same number of people will be able to manage and deliver more software. And once the new processes become more stable, automation will not just increase efficiency; it will boost the project by allowing GENIVI to achieve goals that only big organizations with large delivery teams can reach. This is the kind of transformation that takes time to consolidate, but has a huge impact. Based on my experience, I believe that if GENIVI is able to sustain this effort and keep a clear direction over the next couple of years, the benefits of moving towards a rolling model will be noticeable even outside the industry.

A new deliverable for those who want latest and greatest: GDP Master

With this blog post, the GDP Project reaches a relevant milestone. We have turned our previous release-based delivery model into a rolling one by creating a central pipeline where GDP is continuously being integrated. Out of this integration point, the GDP Delivery Team can produce different deliverables targeting different audiences. Currently there are two main deliverables: GDP Master, targeting those GDP contributors and power users who need the latest available software, and major releases like GDP-ivi9, which target GDP users who want to consume more stable software in the form of images ported to specific target boards.

Why did GDP switch from a traditional delivery model to a more flexible one?

Some of the main reasons are:

  • It makes it easier for interested parties to know where the work / bleeding edge of GDP is being conducted. We had previously received feedback stating that the branch structure / naming schema was confusing; this change should ease that problem.
  • Having Master not be a strictly defined 'release' or 'golden' branch also gives the project more flexibility, allowing master to develop whilst a stable maintenance branch exists for such purposes.
  • The genivi-dev-platform repository started with one branch per target, which during release cycles could end up out of sync because focus was applied to a specific target (mainly qemu, for non-BSP requirements). The intention was that the 'master' qemu branch could be used to rebase/merge all common changes across all target branches, but this proved more difficult in practice and quickly became unmanageable even for simple changes (e.g. .gitmodules and conf file conflicts).
  • Moving to a single branch for all targets also comes with its own intricacies (more explicit scripting, the assurance of common commits, etc.); however, managing contributions and building images from scratch is more efficient.
  • Merging the git history of ~5 targets into a single branch in a sane way was not possible, so the qemu branch was taken as the base. Obviously the respective history of each target branch in GDP is important, so the branches have been archived as tags (see the sketch after this list).
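
As a sketch of how a branch can be preserved as a tag before it is removed, using a hypothetical per-target branch name rather than the names actually used:

    # archive a per-target branch under a tag, then delete the branch
    git tag archive/porter porter
    git push origin archive/porter
    git push origin --delete porter
    git branch -D porter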

And how does this new model work?

The most important policies that rule the submission and management processes are described in the GDP Management wiki page. The main goal of these policies is to create a system in which it is possible to keep a constantly developing master branch, as well as providing a platform (branch) that acts as a stable snapshot of a release and as such is maintained for a given period. The policies will be applied by the maintainers, but it is important that users (contributors and others) follow the guidelines where possible, as this will ensure that GDP is workable for all parties.

If a contributor is aiming to upstream a new package into GDP (directly into meta-genivi-dev or via an external layer), then it is expected to be compatible with the current GENIVI baseline / Yocto baseline component versions available in master. This ensures that the GDP continues to move forward. Generally, any patch / PR to either repository is expected to pass the go.cd integration pipelines to ensure the system is buildable. go.cd/GitHub has been configured to report the status of the relevant pipelines in the PR GUI.

That said, this is only used as an initial check, and standard code review is still required. The CI is not bulletproof: build failures are assessed to check whether the PR was the root cause and are acted upon accordingly. Please note that, because GDP is essentially defined by two repositories, there are well documented / discussed scenarios in which PRs to both repositories have to be tested in the same integration run. There is currently no mechanism in place to test this scenario, so it has to be dealt with manually for now.

GDP now makes use of selective git submodules to pull in the corresponding Yocto layers based on the target being built, including meta-genivi-dev, which also follows the master branch policy. The remote HEAD of the repository is set to the master branch, i.e. master is the default branch, and tags will still be used for snapshots of releases.
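
For illustration, fetching GDP Master with its layer submodules might look like the commands below. The repository URL and submodule path are assumptions for the example; the GDP Master wiki page has the authoritative instructions:

    git clone https://github.com/GENIVI/genivi-dev-platform.git
    cd genivi-dev-platform
    # initialise only the layers the chosen target needs, e.g. the common meta-genivi-dev layer
    git submodule update --init meta-genivi-dev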

As you can see, there is still a long way to go towards having a truly continuous delivery model through a rolling release, but some fundamental steps have been taken. Obviously it is still possible to contribute to GDP via patches to genivi-projects@lists.genivi.org, as stated in the updated GENIVI Contribution policy.

GDP-ivi9 gets into maintenance mode

One of the consequences of adopting this new model is that GDP-ivi9, our latest release, enters the maintenance cycle until the next major release is published, which is expected within this year. There are currently maintenance branches of genivi-dev-platform & meta-genivi-dev.git. The maintenance branch is the name given to the currently supported 'release' branch of GDP. Once a new release is declared and branched from master, the maintenance title is transferred. Only bug/security fixes or specific backport patches / PRs should be based against this branch. It is expected that a user looking for a 'stable' GDP build (or requiring a certain version of a package) would use the maintenance branch.
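
In practice that means fixes land on master first and are then backported onto the maintenance branch, roughly as in the sketch below; the branch name and commit id are placeholders, not the actual ones used by the project:

    # check out the maintenance branch (hypothetical name) and backport a fix from master
    git checkout gdp-ivi9-maintenance
    git cherry-pick <commit-id-from-master>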

The instructions to build and run GDP-ivi9 have been updated to reflect these changes, and new policies have been defined for maintenance.

Other areas of the project that have been improved as a consequence of the new model

Wiki

The fact that we now have two deliverables instead of one, targeting different audiences, has an impact on the way our contents are structured. There is now one central page for each of them, from which Master and major-release users can get all the information required either to run GDP or to download and boot it. For those who want to consume the latest and greatest, GDP Master is their page. Those interested in a more conservative and easy-to-consume approach should go to the GDP Releases wiki page. The reference page to download images and metadata remains unchanged.

As usual with key changes, there is still some work to be done in order to update contents in several pages, check links, titles, references to Master, update diagrams, etc. Not all this work is completed but most of the relevant updates are done by now. Feel free to point us to potential improvements.

Task management

Since Master is the central integration point and it is a continuous effort, we have restructured the GDP Epics to reflect that most of the work is now done in Master. We have extended the usage of the JIRA capabilities related to Releases (FixedVersion), simplified the interfaces to create/update tasks, improved the integration between GitHub and JIRA/Confluence, and taken other actions that will make our everyday work a little more efficient and easier to follow.

The main GDP Delivery Epics are now:

  • GDP-257, where all the tasks related to Master are being tracked.
  • GDP-23, where the delivery team collects all the tasks related to the potential next release. At this point it is undetermined whether we will skip GDP 10 (based on poky 2.0 and meta-ivi 10) and jump directly to GDP 11. A new epic would be created in that case.
  • GDP-8, where the only active task is GDP-259, which tracks the main actions taken to maintain the current release.


In summary, GDP has adapted the traditional GENIVI delivery model to a more flexible and modern one, executing actions at different levels as a consequence of the changes introduced.


GDP Delivery Team

Renesas R-Car M2 Porter and E2 Silk boards are now available from two new distributors, Avnet and Marutsu. Avnet are shipping worldwide.

Details of where to buy boards can be found here for Porter boards and here for Silk.