
The wiki text below was used to collaborate on the draft text for this tech brief. Final development has moved to editing the text in the attached Word document.


Sharing a physical display across multiple operating systems


Summary

Five primary techniques for distributing graphics and HMI between cooperating systems have been identified in the GENIVI Graphics Sharing and Distributed HMI project (GSHA).

  • Surface Sharing
  • API Remoting
  • GPU Sharing
  • Display Sharing
  • Shared State, independent rendering.

Display Sharing provides support in hardware for compositing the display output from multiple operating systems into a single final display buffer. In this way, a combined HMI can be created from independently operating systems without any additional interaction.

Key Characteristics

  • The physical display can be shared across multiple operating systems or major functional units of a single operating system.
  • The H/W Compositor block provides multiple layers that can be assigned to different functional units. The hardware then combines (composites) them to create the final display buffer.
  • The Compositor hardware block is typically close to or part of the hardware Display Unit block.
  • The different systems do not need to know about each other.
  • No need to invent protocols to pass graphics between operating systems.
  • H/W may provide additional features that enhance isolation of the different functional units.

Description

Display Sharing is used to allow multiple operating systems to share a physical display whilst retaining control over how their output is combined. Typically, this control over the separation of output is an important requirement in the system design. For example, the surface area of a physical display may be mostly dedicated to cluster graphics, but areas of the display may also show IVI information such as multimedia or navigation, or the output from a reversing camera. In this case the IVI graphics should not be allowed to overwrite the cluster graphics.

This separation is enabled by a hardware block that provides separate hardware display layers with support for composition and per-pixel alpha blending. Each operating system can be assigned a dedicated layer to draw into. The hardware block combines (composites) all layers into a final display buffer, which is then passed on to the Display Unit for output on the physical display. By providing control over the Z-ordering (which layer is "in front of" or "behind" another in the drawing order) the compositor can ensure that one layer does not overwrite another; for example, the layer assigned to the Linux operating system providing IVI can be placed behind the layer assigned to the safety-critical OS providing the cluster. Per-pixel alpha blending allows a visually complex composition of the different layers, both in terms of transparency between the layers and the shape of the bounding area of each layer; a layer does not need to constrain its rendering to a simple box, for example. In the case of the Renesas R-Car Gen 3 SoCs this functionality is provided by the VSPD hardware block.
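
For concreteness, the sketch below shows one way a dedicated layer could be driven from the Linux side using the standard DRM/KMS atomic API: an "IVI" plane is attached to a framebuffer, sized to the screen and placed at the bottom of the Z-order, leaving the cluster plane in front of it. This is a minimal illustration rather than the method used by any particular platform; the helper function, the object IDs and the availability of the "zpos" property are assumptions about the underlying display driver, and per-pixel alpha comes from the framebuffer format (e.g. ARGB8888) used by the guest.

    /*
     * Minimal sketch (not a complete program): configure one hardware plane via
     * the Linux DRM/KMS atomic API so it sits behind the cluster plane in the
     * Z-order. drm_fd, the plane/CRTC IDs and fb_id are assumed to have been
     * obtained elsewhere (e.g. via drmModeGetPlaneResources / drmModeAddFB2).
     */
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Look up a property ID by name on a DRM object (here: a plane). */
    static uint32_t find_prop_id(int fd, uint32_t obj_id, uint32_t obj_type,
                                 const char *name)
    {
        drmModeObjectProperties *props =
            drmModeObjectGetProperties(fd, obj_id, obj_type);
        uint32_t id = 0;

        for (uint32_t i = 0; props && i < props->count_props; i++) {
            drmModePropertyRes *p = drmModeGetProperty(fd, props->props[i]);
            if (p) {
                if (strcmp(p->name, name) == 0)
                    id = p->prop_id;
                drmModeFreeProperty(p);
            }
        }
        drmModeFreeObjectProperties(props);
        return id;
    }

    /* Place the IVI plane full screen, behind the cluster plane (zpos 0). */
    int place_ivi_behind_cluster(int drm_fd, uint32_t ivi_plane_id,
                                 uint32_t crtc_id, uint32_t fb_id,
                                 uint32_t width, uint32_t height)
    {
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        uint32_t t = DRM_MODE_OBJECT_PLANE;
        if (!req)
            return -1;

        /* Attach the guest framebuffer and scan out the full buffer. */
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "FB_ID"), fb_id);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "CRTC_ID"), crtc_id);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "CRTC_X"), 0);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "CRTC_Y"), 0);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "CRTC_W"), width);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "CRTC_H"), height);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "SRC_X"), 0);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "SRC_Y"), 0);
        /* SRC_W/SRC_H are 16.16 fixed point. */
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "SRC_W"), (uint64_t)width << 16);
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "SRC_H"), (uint64_t)height << 16);
        /* zpos 0 = furthest back; the cluster layer would use a higher value. */
        drmModeAtomicAddProperty(req, ivi_plane_id,
            find_prop_id(drm_fd, ivi_plane_id, t, "zpos"), 0);

        int ret = drmModeAtomicCommit(drm_fd, req,
                                      DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
        drmModeAtomicFree(req);
        return ret;
    }

The safety-critical side (or the hypervisor) would configure its own layer with a higher zpos in the same way; the VSPD (or equivalent) hardware block then performs the actual blending every frame.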

Since the hardware block composites the different display layers, each operating system does not need to know about the others and there is no need to invent protocols to pass graphics between operating systems.

For simplicity, the description so far has talked about sharing between multiple operating systems; however, it could also be the case that a single operating system provides all functionality and assigns the hardware layers appropriately.

Where hardware display layers are not available, a form of Display Sharing may be provided in software by the Display Manager of certain Hypervisors. The Display Manager provides a mechanism for guest VMs to pass it display buffers, which it then combines.
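
To illustrate what that software path involves, the loop below sketches the per-pixel "over" blend a software compositor has to run for each guest buffer, back to front, when no hardware layers are available. The 32-bit non-premultiplied ARGB layout and the function name are assumptions made for the sketch, not the interface of any particular Display Manager.

    #include <stddef.h>
    #include <stdint.h>

    /* Blend one guest buffer over the destination using the source's per-pixel
     * alpha (non-premultiplied ARGB8888; the destination is treated as opaque).
     * A software compositor would call this once per layer, back to front, to
     * build the final display buffer. */
    static void blend_over_argb(uint32_t *dst, const uint32_t *src, size_t pixels)
    {
        for (size_t i = 0; i < pixels; i++) {
            uint32_t s = src[i];
            uint32_t d = dst[i];
            uint32_t sa = (s >> 24) & 0xff;          /* source alpha */
            uint32_t out = 0xff000000u;              /* result stays opaque */

            for (int shift = 0; shift <= 16; shift += 8) {   /* B, G, R */
                uint32_t sc = (s >> shift) & 0xff;
                uint32_t dc = (d >> shift) & 0xff;
                out |= ((sc * sa + dc * (255u - sa)) / 255u) << shift;
            }
            dst[i] = out;
        }
    }

With hardware display layers this per-frame work moves into the compositor hardware block, and no single software component needs access to every guest's buffers.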

Display Sharing can be usefully combined with virtualisation and functional safety technology in both software and hardware. For example, the Renesas R-Car H3 SoC has multiple hardware virtualisation features to prevent one operating system stalling the graphics processing of another, along with functional safety and safe rendering features. This protection can be further enhanced with a Hypervisor.

Usage in virtualised and functional safety environments

Display Sharing can be usefully combined with virtualisation, particularly in the case of consolidated hardware where multiple ECUs or SoCs are consolidated into a single ECU or SoC, an environment in which mixed functional safety requirements are also common. For example, in the case of the Renesas R-Car H3 SoC a functional safety environment may be running on its R7 CPU providing early reversing camera video, whilst an IVI environment running the Linux operating system and/or Android runs on the general purpose A5x CPUs.

The Display Manager of a Hypervisor can enhance the functional safety of Display Sharing by controlling access to the hardware block. The Display Manager may also enhance how displays are shared by offering control over how the hardware layers are represented to each operating system. For example, it may describe a layer to a specific operating system as being smaller than the physical display area, allowing this smaller area to be offset anywhere on the larger screen.
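
As a purely hypothetical illustration (no real Display Manager interface is implied), the fragment below sketches the kind of per-guest layer description such a component might hold: the Linux guest is told its layer is only 800x400, and the Display Manager decides where that area sits on the physical screen and at which fixed Z-position. All names and numbers are invented for the example.

    #include <stdint.h>

    /* Hypothetical per-guest layer record held by a hypervisor Display Manager. */
    struct dm_guest_layer {
        const char *guest;         /* owning VM                          */
        uint32_t    width, height; /* layer size reported to the guest   */
        uint32_t    pos_x, pos_y;  /* placement on the physical display  */
        uint32_t    zpos;          /* fixed Z-order, lowest = backmost   */
    };

    /* 1920x720 cluster display: a full-screen safety-critical cluster layer in
     * front, and an 800x400 area from the Linux guest placed behind it. */
    static const struct dm_guest_layer layers[] = {
        { "cluster-rtos", 1920, 720,    0,   0, 1 },
        { "ivi-linux",     800, 400, 1100, 300, 0 },
    };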

The Hypervisor may also combine Display Sharing with other operating system separation or functional safety features in the hardware to create a more robust and flexible platform. For example, the Renesas R-Car H3 SoC provides hardware GPU virtualisation features that separate the processing of graphics from each operating system before it reaches the hardware composition layers. This prevents one operating system from stalling the graphics processing of another. Graphics sharing in a virtualisation environment is covered in more detail in an upcoming Tech Brief.

Case Study

The Renesas R-Car Canvas Demo demonstrates consolidation of cockpit functionality onto a single H3 SoC running on a Salvator-X board driving three displays and multiple operating systems. In the version discussed here the Integrity Multivisor hypervisor is used to virtualise an Integrity RTOS instance running a cluster demo and a Linux operating system instance running an IVI stack and other functionality. The first display is shared between Integrity and Linux, whilst the second and third displays are dedicated to Linux. For display one, Integrity and Linux are assigned different hardware display layers in the VSPD hardware block, with Linux being placed behind the Integrity cluster in the Z-ordering to ensure the Linux applications cannot overwrite the cluster graphics.

The demo implements touch gestures in the Linux instance. These can be used to resize applications and to swipe them around and between the screens. When applications are pushed to display one, they are automatically docked into a dedicated area in the cluster. A small amount of transparency allows Linux applications placed behind the cluster to show through.

The cluster does not need to understand what the Linux applications are doing and the hardware support for OS separation ensures cluster performance is maintained.

Authors

  • Stephen Lawrence, Renesas Electronics

Related technology

  • Virtual display
  • Surface sharing
  • GPU virtualisation
  • Display Manager in virtualisation technology such as a Hypervisor, which may alternatively provide Display Sharing in software, possibly with less protection against layers overwriting each other

Open questions:

  1. Do we want to be specific and say Physical Display Sharing or not? Is there a reason to be that specific or is it limiting?
    A: Decision in the 29/11 GSHA meeting is that it is sufficient to call it simply Display Sharing.
  2. Move the HV alternative in key characteristics into the body text so as not to confuse main points?
    A: Decision in 29/11 GSHA meeting is to mention HV as an alternative at some point in the text.
  3. The virtualisation section seems a little large, but although virtualisation is not necessary to use Display Sharing, they partner well - particularly on H3 where it brings in the other gfx functional safety features - and it seems right to mention it, particularly as it is used in the Canvas demo.
    A: Decision in the 29/11 GSHA meeting is to simplify where possible. We can mention where an additional feature changes in some way when combined with Display Sharing.


I used this page to analyze and put in more review comments / Gunnar.



12 Comments

  1. Hi Stephen Lawrence ,
    The Description section covers multiple things, e.g. use case, functional safety aspects, R-Car SoC specific information and advantages. Can we keep the Description section to just presenting the concept and use case? i.e. let's move the advantages to the end of the description and the R-Car SoC specific information to the Example section. What do you think about this?
    Also, along with R-Car we could give at least one more SoC example, e.g. the i.MX6 supports two HW layers.

    For people who are completely unaware of display controller hardware, it might not be easy to get what a HW layer is. What's your opinion about giving a one- or two-liner describing what a HW layer is?

    How about mentioning the advantage without being specific about "compared to what"? There is no specific advantage or disadvantage, it is rather the use case. E.g. the statement "As the hardware block composites the different display layers each operating system does not need to know about the other and there is no need to invent protocols to pass graphics between operating systems." provides one view, but it is also true that in a virtualized environment we need support from the hypervisor to assign different HW layers to different operating systems. What do you think about curtailing this statement to "As the hardware block composites different display layers, each operating system does not need to know about the other."? Or does this statement look better: "As the hardware block composites different display layers, none of the operating systems need to know about others."?

    1. Thanks for the review Harsha (Harsha Manjula Mallikarjun)

      Can we keep the Description section to just presenting the concept and use case? i.e. let's move the advantages to the end of the description and the R-Car SoC specific information to the Example section. What do you think about this?

      I think I understand your point. In many respects I agree, but I was somewhat constrained by the established pattern the group has set with the tech briefs of keeping them to two pages. With the bullet list at the start and even a simple diagram you end up with a page and a half or less. So it would be my natural inclination to use sections more, and as an example I had a section on virtualisation and functional safety environments at one point. I did that as it felt like an extension of Display Sharing that should be separate from the core description. However I was asked to simplify the text, in part because it overlapped with virtualisation. So I did that and, because of the space limitations, it was folded back into the description. I tried to do a split in that section into "what", followed by "why". I guess the "why" might be split into its own section if space allows without too much repetition. We could discuss in the next call.

      The description necessarily needs to be somewhat high level to allow different implementations. That felt a little vague reading it back, so I made the two short references to R-Car, without details, to bring the point being made to something specific or concrete, but I can move them if the group feels they are out of place?

      Although it is always a balance, especially given the space constraints, I felt it important to at least outline some of the advantages of the approach. To basically say here are some of the reasons why you might consider using this approach in your design.

      For people who are completely unaware of display controller hardware, it might not be easy to get what a HW layer is. What's your opinion about giving a one- or two-liner describing what a HW layer is?

      It's a good point. I considered the same. The hard part is coming up with one or two lines that add value. For example I considered a block diagram showing the DU path in R-Car. That would show the relationship between the input planes, three VSPD blocks, DU and Displays and make it clearer I think. It could also show where the DISCOM (Display Compare Unit) and DOC (Display Output Checker) fit in. That takes space and needs text. I also thought about the buffers holding the layers, but again hard to add value in just two lines.

      Do you have any suggested text?

      What do you think about curtailing this statement to "As the hardware block composites different display layers, each operating system does not need to know about the other."? OR does this statement look better : "As the hardware block composites different display layers, none of the operating systems need to know about others."?

      I take your point about virtualisation and, for the record, recognised when writing it that a HV complicates the statement. I didn't expand on that because of space restrictions. I do think the core point remains that hardware layers enable display sharing between compositors that do not have remote functionality or compatibility. For sure that is not the end of it and the wider platform requirements need to be considered. I don't think it removes the need for Weston remote or Waltham for example.

      Your suggested sentences are shorter. I would be happy to adopt one of them, but would likely break the protocol point off into a separate sentence in some way. I guess ultimately it's whether the sentence as it stands confuses?

  2. Just before reviewing the approach introduced here, I wanted to learn whether any other alternatives have been discussed for isolating the hardware overlays between different OS instances. Maybe it would be better to highlight this in one of the final sections.

    Here I see two bottlenecks with this approach, one of which is that compositors, let's say within Wayland, are usually able to use a pool of hardware layers as they become available during runtime. Having dedicated overlays means reduced scalability and a lack of optimization for each OS instance.

    Have you also reviewed the ways of any master compositor to render the final output while having auxiliary display outputs from the other operating systems sharing the same screen?

    1. ...whether any other alternatives have been discussed...

      Have you looked at all the 5 categories (start at main page of the project)?  In various ways they keep the operating systems separate and/or communicating.  Each one has different results.  If you meant only the more strict isolation methods then "GPU Sharing (virtualization)" is the other alternative.

      (But please share what you are thinking about, something we might be missing)

      ...master compositor to render the final output while having auxiliary display outputs from the other operating systems sharing the same screen?

      I don't quite understand this sentence but others might...  What is this master compositor?  Software component or hardware?  Where is it implemented - in a specific VM instance, in a separate node (ECU), in a hypervisor, or is it a hardware component? 


    2. Thanks for the useful input Ozgur (Özgür Bulkan).

      A little historical background. When the group originally discussed known building block technology, see the related section of the GSHA wiki, we could see some difficulties of characterisation. For example, Waltham, Weston remote / gst recorder, HV/ h/w display virtualisation and hardware layers all come down to some form of surface sharing. Combining technologies also could mean the result might move from one group to another, e.g. a HV Display Manager can introduce features that in combination with h/w layers can make the result more like the sub-display sharing of "Surface Sharing" rather than the whole screen sharing "Display Sharing". So some lines were drawn around core technology to try and characterise them so they can be usefully grouped and compared for certain use-cases.

      I can't recall who originally suggested it, but as h/w compositor layers are typically related to the h/w Display Unit (DU), it was suggested that this be called Display Sharing, as compared to sub-display Surface Sharing. The Tech Brief flows from that. The intention is not to say that this is the only or definitive way to share a display.

      On a personal level I think in many cases it would be combined with other technology. Although embedded engineers may be familiar with h/w compositor layers, engineers coming from other domains, which is increasingly common with engineer shortages, may not be, as these layers are typically separate from the GPU. So another personal goal is educational: to summarise what h/w layers can provide.

      I wanted to learn whether any other alternatives have been discussed for isolating the hardware overlays between different OS instances

      There has been some discussion in the group calls and at the Tech Summit in Bangalore about other OS isolation base technology. From a graphics perspective I mentioned things like OS ID in the IPMMU and GPU, multiple GPU input ports, and GPU scheduling. Also functional safety features for display output checks and dedicated rendering paths. Then there are the HV topics like the HV Display Manager and GPU virtualisation. I think the HV and Gfx Sharing groups have yet to discuss the overlaps in detail. That's from my perspective.

      There are definitely some gaps in inputs around proprietary gfx frameworks, QNX and Android. Also not so much discussion yet on combining technology. The plan is to go on to do a white paper that would touch on some of that and contrast different techniques.

      Here I see two bottlenecks with this approach, one of which is that compositors, let's say within Wayland, are usually able to use a pool of hardware layers as they become available during runtime. Having dedicated overlays means reduced scalability and a lack of optimization for each OS instance.

      If they are dedicated, yes. I think that's up to the platform design, depending on what is provided by the frameworks or the designers' willingness to work on them. I just used OS separation as an example because of the Canvas demo case study, and it is a nice illustration of how h/w composition layers can free s/w design so that block A does not need to worry about block B. You could have a free-form design if you wished.

      Have you also reviewed the ways of any master compositor to render the final output while having auxiliary display outputs from the other operating systems sharing the same screen?

      Like Gunnar I'm not exactly sure what you mean here. Obviously some master compositor to bring the DU images together, but I'm unclear about how. Do you have something specific in mind? Something like a HV Display Manager or other s/w to perform the same for example.

  3. >> Have you looked at all the 5 categories?
    Yes. My comments are only related to the (physical) Display Sharing method introduced here.

    What I understood is that it is proposed here to dedicate a number of hardware overlays at the display backend to particular OS/VM instances. Even though this isolates overlays well between different OS instances in practice, it would not be the most efficient way of using HW resources. Assuming that one OS does not use some of its dedicated overlays at a given time (e.g. only using 1 of the 4 overlays statically allocated to it), the other cannot utilise the unused resources from the other's set of overlays. Sharing the overlays dynamically between OS/VM instances would be beneficial. I know it would be hard to allocate/release overlays (through kernel drivers handshaking with different OS/VM instances) instead of statically dedicating them, but I wanted to open a discussion on whether this could be achieved.



    1. OK, that's a discussion point that might be worth noting here (or in a deeper whitepaper discussion perhaps). 

      It seems like a tradeoff between flexibility and security (some layers are dedicated all the time because they are safety-critical)


    2. I know it would be hard to allocate/release overlays (through kernel drivers handshaking with different OS/VM instances) instead of statically dedicating them, but I wanted to open a discussion on whether this could be achieved.

      I'm not the domain expert so I don't know the current state of the art. I can imagine it's an area of differentiation between HV/virtualisation vendors as it must be part of their Display Manager feature set. Certainly something that could be brought up in discussions with the HV project.

  4. (just suggestion, maybe not relevant)

    Following questions popped up in my head while reading the description:

    • how is the compositing (per-pixel alpha) implemented? Who provides the mask, who is the "master" compositor? How is the compositing controlled from the application PoV?
    • Is there a special API for the compositing? If yes, was Wayland an option? If so, what was the reason not to go for it?
    1. Hi Violin (Violin Yanev), thanks for the review.

      I initially read this as specific questions from you - or do you mean more that these are questions people might expect to be answered?

        1. I mean it like that - that maybe other people will ask themselves the same questions. Generally, if the paper is supposed to be an academic paper, talking about API and usage is probably misplaced. However, since you mention concrete hardware, I imagine the focus is more towards software engineers/system designers, and those would probably be interested in how display sharing fits into their existing architectures/systems. Hence the question (wink)

        1. OK. It's good input. The main problem is space. A lesser problem is giving some detail in an h/w and s/w agnostic way. That could be an app note in itself: VSPD, VSPB, Weston renderers, OS, HV, libkms..