
This is the draft text – Work in progress. It will be synchronized with the document format occasionally.



Abstract


Five primary techniques for distributing graphics and HMI between cooperating systems have been identified in the GENIVI Graphics Sharing and Distributed HMI project (GSHA).

… a little bit project purpose
… summarize findings/conclusions
… what to expect in the paper (descriptions & comparisons and guidelines)


The purpose of the GSHA project is to explore technologies for graphics sharing and distributed HMI compositing. Here, graphics sharing refers to "graphics in the form of bitmaps, scene graphs or drawing commands generated on one ECU, transferred for display by another ECU (or between virtual machine instances)". Distributed HMI compositing technologies refer to "methods to turn a multi-ECU system into what appears and acts as a single user experience".


The technologies discussed here are used in practice and are open (TODO: not all of them??). Using these technologies, one can realize all the automotive use cases needed to turn a multi-ECU HMI into what appears and acts as a single-ECU/system HMI.


This paper gives an overview of the technologies and of the use cases in which they can be applied. Each technology is compared with the others, and guidelines are provided to assist in choosing the right technology for a given use case.

TODO: Elaborate a bit more


Graphics Sharing Categories

(warning) ...which order?

  • Display sharing

  • GPU Sharing

  • Surface sharing
    Sub-category: "Virtual Display" – Full display transfer by encoding as a video stream.

  • API Remoting

  • Shared state, independent rendering



API Remoting

API Remoting involves taking an existing graphics API and making it accessible through a network protocol, thus enabling one system to effectively call the graphics/drawing API of another system. In some cases an existing programming API such as that of a local graphics library (e.g. OpenGL), is simply extended to provide network-capability turning it into an RPC (Remote Procedure Call) interface. In other cases, new custom programming interfaces might be created that are specially tailored for distributed graphics.


Use cases:

1) A device with no graphical capability can use API remoting to render on a remote server that has graphical capability. The remote server then shows the rendered content on its display.

2) With API remoting it is possible to multicast to multiple servers, thereby showing the same content on many displays.


Consequences of API Remoting (TODO: Put a better title here. Do not use Cons/Disadvantages as it's just a consequence):

1) HMI/animation performance can vary depending on the available bandwidth.

2) Smoothness of animation and UI responsiveness additionally depend on network latency.

3) If the API remoting libraries need an update, the update must be rolled out to all relevant nodes in the network.

4) If a standard API (e.g. OpenGL) is converted to RPC, application compatibility is maintained; otherwise, applications must be changed to use the custom APIs.

 

Example:

RAMSES: RAMSES is a framework for defining, storing and distributing 3D scenes for HMIs. From a user’s perspective, RAMSES looks like a thin C++ scene abstraction on top of OpenGL. RAMSES supports higher level relationships between objects such as a hierarchical scene graph, content grouping, off-screen rendering, animations, text, and compositing. All of those features follow the principle of API remoting, i.e. the commands (that define the state in the RAMSES case) can be sent to another system and the actual rendering is executed at the receiving side.


RAMSES distinguishes between content creation and content distribution; the same content (created with RAMSES) can be rendered locally, on one or more network nodes, or everywhere. RAMSES handles the distribution of the scenes and offers an interface to control the final composition - which scene is sent where, shown at which time, and how multiple scenes and/or videos interact. The control itself is subject to application logic (HMI framework, compositor, smartphone app etc.).


TODO: provide a link to RAMSES (repository, demo, tech brief...)

TODO: Give a brief on scene distribution and diagram showing rendering on local/network nodes.

TODO: Discuss on application portability. How easy/difficult to migrate an application which is using OpenGLES to RAMSES?

Surface Sharing

Subheadings...

  • Typical Systems, Compositor approach
  • Wayland
  • Waltham
  • Virtual Display
  • other examples (non-Wayland)…


Shared State, Independent Rendering


  • Smartphone interaction

GPU Sharing

...

Display Sharing

In display sharing, a display is shared between multiple operating systems, with each operating system thinking it has complete access to the display. The operating systems cooperate so that the content on the screen is meaningful. Alternatively, specific areas of the display can be dedicated to each operating system.

Realizing display sharing requires a "Hardware Layer" feature in the display controller or in linked image processing hardware. Each hardware layer holds a framebuffer and supports operations (e.g. scaling, color keying) on its pixels. The framebuffers of the hardware layers are blitted together to form the final image that goes to the display; this blitting is carried out by the hardware itself.

Each operating system renders to one or more Hardware layers. Each Hardware layer is associated with one operating system.


TODO: add a diagram which shows hardware layers assigned to different operating systems and show the final display image.


Use case:

1) A display shared by instrument cluster and multi-media system. If each of these are controlled by different operating systems in a virtualized environment, display sharing can be used.

If each operating system renders to a specific area of the display, then with display sharing the operating systems can work without knowing about each other, i.e. no communication between them is needed to share the display. However, if an operating system presents content that depends on another operating system, communication is required.

Where hardware display layers are not available, a form of display sharing may be provided in software by the display manager of certain hypervisors. The display manager provides a mechanism for guest VMs to pass display buffers, which it then combines.


Example:

RCAR H3 SOC. Renesas Canvas demo.

TODO: Provide short description of demo.

TODO: Are integrity patches available freely, to achieve display sharing on RCAR?


Subheading...

..


Choosing a strategy

Comparison of approaches

... characteristics of each - trade-offs, ...  "dry data"

Use-case examples

... Describe each example, then the recommended approach and why. This provides a more accessible complement to the dry comparison in the previous chapter.



Conclusions




1 Comment

  1. Can't edit in the article directly, so here as comments...

    I like the ordering of topics in "low level → high level" order! Would maybe suggest to switch "GPU sharing" and "display sharing" because I think GPU sharing is a little more low-level than display sharing.

    Regarding the comparison - I think it is difficult to objectively measure all technologies, because some of them are inherently different in nature and supported use-cases. My suggestion therefore:

    • Start with a disclaimer that the approaches are different in nature, and that the comparison that follows should be seen as practical rather than scientific.
    • Still, some approaches can be compared better than others - for example, I think "surface sharing" and "display sharing" have in common that it is a 2D pixel stream being shared, so a comparison in terms of bandwidth, complexity or memory accesses can be attempted. Another example: shared-state-independent-rendering has multiple approaches. It wouldn't make sense to compare their performance, BUT one could try to compare their flexibility - e.g. what kind of content can be shared, is it rather generic or rather bound to specific use cases, are they available in different programming languages or not...
    • Maybe it would be worth considering not comparing "all vs. all", but selecting specific pairs of technologies that could be considered interchangeable. For example, RAMSES vs. OpenGL streaming. They are not one-to-one the same, but if I were considering sharing generic 3D content, those would be the two main options. So as a reader I would be interested to know what the differences are.