


This is the draft text – Work in progress. It will be synchronized with the document format occasionally.



Abstract


Five primary techniques for distributing graphics and HMI between cooperating systems have been identified in the GENIVI Graphics Sharing and Distributed HMI project (GSHA).

… a little bit project purpose
… summarize findings/conclusions
… what to expect in the paper (descriptions & comparisons and guidelines)


The purpose of the GSHA project is to explore technologies for graphics sharing and distributed HMI compositing. Here, graphics sharing refers to “Graphics in the form of bitmap, scene graph or drawing commands generated on one ECU, transferred for display from another ECU (or between virtual machine instances)”. Distributed HMI compositing technologies refer to “methods to turn a multi-ECU system into what appears and acts as a single user-experience”.


The technologies discussed here are in practical use and open (TODO: not all of them??). Using these technologies, one can realize all the automotive use cases needed to turn a multi-ECU HMI into what appears and acts as a single ECU/system HMI.


This paper gives an overview of the technologies and the use cases where they can be applied. Each technology is compared with the others, and guidelines are provided to assist in choosing the right technology for a given use case.

TODO: Elaborate a bit more


Graphics Sharing Categories

(warning) ...which order?

  • Display sharing

  • GPU Sharing

  • Surface sharing
    Sub-category: "Virtual Display" – full display transfer by encoding as a video stream.

  • API Remoting

  • Shared state, independent rendering



API Remoting

API Remoting involves taking an existing graphics API and making it accessible through a network protocol, thus enabling one system to effectively call the graphics/drawing API of another system. In some cases an existing programming API, such as that of a local graphics library (e.g. OpenGL), is simply extended with network capability, turning it into an RPC (Remote Procedure Call) interface. In other cases, new custom programming interfaces may be created that are specially tailored for distributed graphics.
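To make the mechanism concrete, the following minimal sketch (in Python, with hypothetical API names such as `clear` and `draw_rect`; not any real graphics library) shows the essence of API remoting: client-side drawing calls are serialized into a command stream and replayed by a renderer on the server side. In a real system the encoded stream would travel over a socket rather than in memory.

```python
import json

class RemoteCanvas:
    """Client-side proxy: records drawing calls instead of drawing locally."""
    def __init__(self):
        self.stream = []

    def clear(self, color):
        self.stream.append({"op": "clear", "color": color})

    def draw_rect(self, x, y, w, h):
        self.stream.append({"op": "draw_rect", "args": [x, y, w, h]})

    def encode(self):
        # In a real system this payload would be sent over the network.
        return json.dumps(self.stream).encode("utf-8")

class Renderer:
    """Server-side: decodes the command stream and executes each call."""
    def __init__(self):
        self.executed = []

    def replay(self, payload):
        for cmd in json.loads(payload.decode("utf-8")):
            self.executed.append(cmd["op"])  # real code would rasterize here

client = RemoteCanvas()
client.clear("black")
client.draw_rect(10, 10, 100, 50)

server = Renderer()
server.replay(client.encode())
```

Note that turning a standard API such as OpenGL into such an RPC interface keeps the application side unchanged; only the transport behind the proxy differs.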


Use cases :

  1. A device with no graphics capability can use API remoting to render on a remote server that has graphics capability; the remote server then shows the rendered content on its display.
  2. With API remoting it is possible to multicast to multiple servers, thereby showing the same content on many displays.


Consequences :

  1. HMI/animation performance can vary depending on the available bandwidth.
  2. Smoothness of animation and UI responsiveness additionally depend on network latency.
  3. If the API remoting libraries need an update, the update has to be carried out on all relevant nodes in the network.
  4. If a standard API (e.g. OpenGL) is converted to RPC, application compatibility is maintained; otherwise applications must be changed to use the custom APIs.


Examples :

RAMSES:

RAMSES is a framework for defining, storing and distributing 3D scenes for HMIs. From a user’s perspective, RAMSES looks like a thin C++ scene abstraction on top of OpenGL. RAMSES supports higher level relationships between objects such as a hierarchical scene graph, content grouping, off-screen rendering, animations, text, and compositing. All of those features follow the principle of API remoting, i.e. the commands (that define the state in the RAMSES case) can be sent to another system and the actual rendering is executed at the receiving side.

RAMSES distinguishes between content creation and content distribution; the same content (created with RAMSES) can be rendered locally, on one or more network nodes, or everywhere. RAMSES handles the distribution of the scenes and offers an interface to control the final composition - which scene is sent where, shown at which time, and how multiple scenes and/or videos interact together. The control itself is subject to application logic (HMI framework, compositor, smartphone app etc.).


RAMSES is available on GitHub. An overview and deployment examples, as well as links to further documentation, are available on the GitHub wiki pages.

TODO: add link to video

RAMSES is a low-level framework, closely aligned to OpenGL. Migrating an existing OpenGL application to RAMSES usually involves providing a software wrapper which creates RAMSES objects (shaders, geometry, rendering passes, transformation nodes, etc.) instead of sending OpenGL commands directly to the GPU. The main difference (and thus the cause of migration effort) is that OpenGL sends a command stream per rendered frame, whereas RAMSES requires a scene definition which is updated only on change. For example, with OpenGL a static image has to be re-rendered in every frame, starting with glClear(), setting all the necessary states and commands, and finishing with eglSwapBuffers(). With RAMSES, the scene creation code would look similar to an OpenGL command stream, but once the scene has been defined, its states and properties don't have to be re-applied every frame; only changes to them are sent. Migrating from OpenGL to RAMSES is comparable to migrating from an OpenGL-centric rendering engine to a scene-graph-centric rendering engine. Depending on the application, this effort may vary.
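The immediate-mode versus retained-mode difference described above can be sketched in a few lines. This toy model (in Python; the names `Scene`, `create`, `set_property` and `flush` are illustrative, not the RAMSES API) shows why a static frame costs nothing to distribute: the scene definition is sent once, and afterwards only deltas travel.

```python
class Scene:
    """Retained-mode scene: objects persist, only changes are transmitted."""
    def __init__(self):
        self.objects = {}
        self.dirty = set()

    def create(self, name, **props):
        self.objects[name] = dict(props)
        self.dirty.add(name)

    def set_property(self, name, key, value):
        if self.objects[name].get(key) != value:
            self.objects[name][key] = value
            self.dirty.add(name)

    def flush(self):
        """Return only the changes since the last frame (what would be
        sent to the remote renderer), then clear the dirty set."""
        delta = {n: self.objects[n] for n in self.dirty}
        self.dirty.clear()
        return delta

scene = Scene()
scene.create("needle", angle=0)
first = scene.flush()    # full definition travels once
second = scene.flush()   # static frame: nothing to send
scene.set_property("needle", "angle", 42)
third = scene.flush()    # only the changed object travels
```

An immediate-mode API would instead rebuild and transmit the full command stream for every frame, including the unchanged ones.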

Surface Sharing

Surface sharing distributes already rendered graphical content, representing the intended graphics from the application. A surface is represented as a two-dimensional image in memory, which can be described with width, height, pixel format and some additional meta-data. Along with the image data, other information, e.g. touch events, can be shared but in terms of size, image data would have by far the biggest share. Therefore, sharing of image data should be the driving point for optimization during the definition and implementation of the sharing mechanisms.

When possible, shared memory between systems should be used. On distributed systems without access to common memory all data needs to be shared via network. To reduce the bandwidth usage, video encoding and decoding hardware can be used with reasonable performance.
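A back-of-the-envelope calculation shows why encoding is attractive for networked surface sharing. The resolution and frame rate below are example values chosen for illustration, not figures from this paper.

```python
# Uncompressed bandwidth for a shared surface.
width, height = 1280, 480        # example cluster-sized surface
bytes_per_pixel = 4              # RGBA8888
fps = 60

raw_bps = width * height * bytes_per_pixel * fps * 8   # bits per second
raw_mbps = raw_bps / 1_000_000

# ~1180 Mbit/s raw, which already saturates gigabit automotive Ethernet;
# hardware video encoding brings this down by one to two orders of
# magnitude, at the cost of some image quality and added latency.
```

Shared memory between systems avoids this transport cost entirely, which is why it is preferred whenever the systems have access to common memory.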

Use cases :

  1. A navigation surface rendered by the infotainment unit needs to be shared with the instrument cluster, so that guidance is directly in front of the driver.


TODO: Is there a need to give overview of graphical application and their relation to Wayland compositor?

Subheadings...

  • Typical Systems, Compositor approach
  • Wayland
  • Waltham
  • Virtual Display
  • other examples (non-Wayland)…


Surface sharing requires a communication protocol to request or notify about newly available graphical content in the system, forward touch events and control the sharing in general. This results in modifications to standard graphical applications. Virtual Display and Waltham, discussed below, avoid the need to modify standard graphical applications.


Consequences :

  1. In distributed systems, high memory bandwidth is consumed transporting pixel data. The pixel data can be encoded to save bandwidth, but at the cost of quality.
  2. HMI/animation performance can vary depending on the available bandwidth.
  3. Smoothness of animation and UI responsiveness additionally depend on network latency.


Examples :

Virtual Display:

Virtual Display describes a concept which can be used to realize distributed HMI use cases. An important characteristic of the Virtual Display concept is that the entire display content is transferred, instead of transferring content from dedicated applications. The system which provides the content is presented with a display which acts like a physical display but is not necessarily linked to one, so the middleware and applications can use it as usual. Such a display can be called a Virtual Display. The implementation of a Virtual Display on the producer system should be generic enough to look like a normal display, and should take care of transferring the display content to another HMI unit or another system.

The following diagram illustrates a full solution using the Virtual Display concept and surface sharing, with communication protocols defined using the Waltham libraries.

TODO: Add a diagram to show the virtual display concept, together with surface sharing.


The open source Wayland compositor Weston provides an example Virtual Display implementation [1]. Weston can be configured to create a Virtual Display which is not linked to any physical display. Properties like resolution, timing, display name, IP address and port can be defined in the weston.ini file. From this configuration Weston creates a virtual display "area"; all content which appears in this area is encoded in the M-JPEG video compression format and transmitted to the configured IP address.
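A configuration along the following lines defines such an output. The fragment is illustrative only: the section and key names follow Weston's remoting support but vary between Weston versions, so the weston.ini man page of the version in use should be consulted. The host address and port below are made-up example values.

```ini
# Illustrative weston.ini fragment for a virtual (remote) output.
[remote-output]
name=transmitter-1
mode=1280x720@60
host=192.168.0.10
port=39000
```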

Weston does not yet handle input (touch, pointer and keyboard) coming from the remote display, and no communication protocols are specified for it.


Android has support for creating multiple virtual displays, and applications can render to a virtual display. Android provides capabilities to access the framebuffer of a virtual display; sharing this framebuffer over the network or by means of memory sharing is left to individual implementations. One such solution is described below:

TODO: Add the diagram for AllGo case study and few lines of explanation.


Shared State, Independent Rendering


  • Smartphone interaction

GPU Sharing

GPU sharing refers to sharing GPU hardware between multiple operating systems running on a hypervisor. When the GPU is shared, each operating system works as though the GPU hardware were assigned to it alone.

GPU Sharing can be implemented by:

  1. Providing virtualization capabilities in the GPU hardware.
  2. Using a virtual device which schedules render jobs on the GPU by communicating with the operating system that holds GPU hardware access.


GPU sharing using hardware virtualization capability:

Modern GPUs are equipped with their own MMU. Additionally, the GPU hardware can implement functionality to identify the operating system which scheduled a render job; one way of doing this is to use the domain ID of the operating system to tag its render jobs. When a particular render job is executed, the GPU can load the page tables corresponding to that domain (operating system), thus ensuring that a render job from one operating system does not access the memory dedicated to other operating systems. Additional capabilities may be needed: for example, if any of the domains is safety-relevant, GPU faults in non-safe domains must not affect it, and it should be possible to prioritize the render jobs from different domains.
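The isolation property described above can be illustrated with a toy model: before executing a render job, the GPU selects the page table of the submitting domain, so a job can only touch that domain's memory. Everything here (in Python, with made-up names like `register_domain` and `execute`) is a conceptual sketch, not a model of any specific GPU.

```python
class GPU:
    """Toy GPU with per-domain page tables selected by domain ID."""
    def __init__(self):
        self.page_tables = {}    # domain_id -> set of accessible pages

    def register_domain(self, domain_id, pages):
        self.page_tables[domain_id] = set(pages)

    def execute(self, domain_id, job_pages):
        # Load the submitting domain's page table, then check every
        # access of the job against it.
        allowed = self.page_tables[domain_id]
        for page in job_pages:
            if page not in allowed:
                raise MemoryError(f"domain {domain_id} fault on page {page}")
        return "done"

gpu = GPU()
gpu.register_domain("cluster", pages=[0, 1])
gpu.register_domain("ivi", pages=[2, 3])

ok = gpu.execute("cluster", [0, 1])   # job stays within its own pages
try:
    gpu.execute("ivi", [0])           # attempt to touch cluster memory
    faulted = False
except MemoryError:
    faulted = True                    # the hardware isolates the domains
```

In real hardware the check is done by the GPU MMU walking the loaded page tables, not by software, but the effect is the same: a faulting job in one domain cannot read or corrupt another domain's memory.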


GPU sharing using virtual device:

Domains running on a hypervisor can be categorized as host and guest. The host domain has full access to the GPU hardware, whereas the guests use a different GPU driver which communicates with the host. The guests' render jobs are scheduled on the GPU by the host.


Use cases :

  1. An instrument cluster and an infotainment unit running as different domains on a hypervisor, both needing access to the GPU hardware.

TODO: more use cases??

Consequences :

  1. GPU sharing requires a lot of attention in terms of security, specifically when one or more of the domains are safety-relevant.
  2. GPU sharing without GPU virtualization capabilities introduces communication and memory-copy overhead, which affects performance.

Examples :

virtio-gpu 3D:

virtio-gpu 3D is a virtual device. It is an open source implementation in which some modifications to the Mesa library (an open-source implementation of many graphics APIs, including OpenGL, OpenGL ES versions 1, 2 and 3, and OpenCL) have been made on the guest side. Applications on the guest side still speak unmodified OpenGL to the Mesa library, but instead of Mesa handing commands over to the hardware, they are channeled through virtio-gpu to the backend on the host. The backend receives the raw graphics stack state (Gallium state) and, using virglrenderer, interprets the raw state into an OpenGL form which can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL. The OpenGL stack on the host side does not even have to be Mesa; it could be a proprietary graphics stack.

TODO: Mention GPU hardware providing virtualization capabilities.

TODO: Add a diagram to provide a conceptual overview.

Display Sharing

In display sharing, a display is shared between multiple operating systems, with each operating system behaving as if it had complete access to the display. The operating systems are cooperative, so that the content on the screen is meaningful. Alternatively, specific areas of the display can be dedicated to each of the operating systems.

Realizing display sharing requires a "hardware layer" feature in the display controller or linked image-processing hardware. Each hardware layer holds a framebuffer and supports operations (e.g. scaling, color keying and so on) on the pixels. The framebuffers of the hardware layers are blitted together to form the final image which goes to the display; this blitting is carried out by the hardware itself.

Each operating system renders to one or more hardware layers, and each hardware layer is associated with exactly one operating system.
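The combination step can be modelled in software as follows. This toy sketch (in Python, with tiny one-row "framebuffers" and an assumed colour-key value of 0) shows how the display hardware blits layers back-to-front so that each operating system's content ends up in its own region of the final image.

```python
TRANSPARENT = 0  # illustrative colour-key value: pixels with this value
                 # let the layer underneath show through

def blit(layers, width, height):
    """Combine layers back-to-front into the final display image,
    mimicking what the display controller does in hardware."""
    out = [TRANSPARENT] * (width * height)
    for layer in layers:                 # later layers sit on top
        for i, px in enumerate(layer):
            if px != TRANSPARENT:
                out[i] = px
    return out

cluster_layer = [1, 1, 0, 0]   # OS 1 draws only the left half
ivi_layer     = [0, 0, 2, 2]   # OS 2 draws only the right half
final = blit([cluster_layer, ivi_layer], width=4, height=1)
```

Because each layer belongs to exactly one operating system, neither OS needs to know about the other for the final image to be assembled correctly.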


TODO: add a diagram which shows hardware layers assigned to different operating systems and show the final display image.


Use cases :

1) A display shared by an instrument cluster and a multi-media system. If each of these is controlled by a different operating system in a virtualized environment, display sharing can be used.

If each operating system renders to a specific area of the display, then with display sharing the operating systems can work without knowing about each other, i.e. there is no need for communication between them to share the display. However, if an operating system presents content on the display which depends on another operating system, communication is required.

Where hardware display layers are not available, a form of display sharing may be provided in software by the display manager of certain hypervisors. The display manager provides a mechanism for guest VMs to pass display buffers, which it then combines.

Consequences :

TODO:

Examples :

RCAR H3 SOC. Renesas Canvas demo.

TODO: Provide short description of demo.

TODO: Are integrity patches available freely, to achieve display sharing on RCAR?



Choosing a strategy

Comparison of approaches

... characteristics of each - trade-offs, ...  "dry data"

Use-case examples

... Describe each example and then the recommended approach and why. This provides a more approachable complement to the dry comparison of the previous chapter.



Conclusions




1 Comment

  1. Can't edit in the article directly, so here as comments...

    I like the ordering of topics in "low level → high level" order! Would maybe suggest to switch "GPU sharing" and "display sharing" because I think GPU sharing is a little more low-level than display sharing.

    Regarding the comparison - I think it is difficult to objectively measure all technologies, because some of them are inherently different in nature and supported use-cases. My suggestion therefore:

    • start with a disclaimer that those approaches are different, and the further comparison should be seen more as a practical rather than scientific comparison
    • Still, some approaches can be compared better than others - for example, I think "surface sharing" and "display sharing" have in common that it is a 2D pixel stream being shared, so a comparison in terms of bandwidth, complexity or memory accesses can be attempted. Other example: shared-state-independent-rendering has multiple approaches. It wouldn't make sense to compare their performance, BUT could try to compare their flexibility - e.g. what kind of content can be shared, is it rather generic or rather bound to specific use-cases, are they available in different programming languages or not... 
    • Maybe it would be worth considering not "compare all vs. all", but selecting specific pairs of technologies which could be considered interchangeable. For example - RAMSES vs. OpenGL streaming. It is not 1-to-1 the same, but if I were considering sharing generic 3D content, those would be the two main options. So as a reader I would be interested to know what the differences are.