
(green star)  The official working copy is currently maintained on Google Docs to enable collaborative editing.

Original text below this line (do not use!)


Automotive Virtual Platform Specification

1. Introduction

Automotive requirements lead to particular choices and needs from the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of their progression comes from IT/server consolidation and, in the Linux world, some comes from the virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitutes the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together they make up the definition of a virtual platform.

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular) don't need to be maintained uniquely for different hypervisors
  • Simplify moving hypervisor guests between different hypervisor environments
  • Some potential for shared implementation across guest operating systems
  • Some potential for shared implementation across hypervisors with different license models
  • Industry shared requirements and test-suites, a common vocabulary and understanding to reduce complexity of virtualization. 

In comparison, the OCI initiative for containers serves a similar purpose.  There are many compatible container runtimes → there could be the potential for standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

  • Hypervisors can fulfill the specification, with local optimizations / advantages
  • Similarly, guest VMs can be engineered to match the specification.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119].

2. Architecture

Assumptions made about the architecture, use-cases...

Limits to applicability, etc...

3. General requirements

Automotive requirements to be met (general)...

→ Risk of being imprecise and not useful.

  • Each listed feature is optional, but the requirements need to be implemented entirely for each feature that is included.

Alternative to discuss:  Should certain features be mandatory to be compliant with the virtual platform (i.e. available for use if the OEM product requires them)?

Usage of built-in virtualization in hardware

Req: If running on hardware that supports it, then the architectural virtualization interfaces (for interrupts and timers, performance monitoring, ...) shall be used where available.

?? Does this not weaken the standardization?  In some cases supporting a (VIRTIO) abstraction might be more standard.
Matti: No, these features typically do not overlap with VIRTIO.

2.5 Booting Guests

(warning) Placeholder

Boot protocol between HV and Guest.

This provides information to OSes - e.g. where do I find my device tree information - abstracting certain HW specifics, and base services like real-time clock and wake-up reason.
This has always been project specific outside the PC world.  EBBR says you do this by using UEFI APIs.  The small subset of UEFI APIs that is sufficient is not too hard to handle.  This is what EBBR requires.


  • Systems that support a dynamic boot protocol should implement (the mandatory parts of*) EBBR
    • As EBBR allows either an ACPI or Device Tree implementation, this can be chosen according to what fits best for the chosen hardware architecture.

*TBD: Figure out if some optional EBBR requirements should be mandatory in this specification.

  • For systems that do not support a dynamic boot protocol (see discussion), the virtual hardware shall be described (by the HV vendor) using a device tree format so that implementors can program custom boot and setup code


Some systems might not realistically implement the EBBR protocol (e.g. some ported legacy AUTOSAR Classic based systems and other RTOS guests).  These are typically implemented using a compile-time definition of the hardware platform.  It is therefore expected that some of the code needs to be adjusted when porting such systems to a virtual platform.

Option 1)  HV exposes the expected devices in the expected location (and behavior) of an existing legacy system.
Option 2)  HV decides where to place device features and communicates that to the operating system.  This would be done by device-tree snippets (statically defined).

    More to do:  "Standard" for how HV describes hardware/memory map to legacy system guests, and recommendations for how to port legacy systems accordingly.
    Consensus seems to be:  Device Tree (an independent specification) used as a human-readable specification (i.e. the boot code could still be hard-coded and does not need to support a lot of runtime configuration) → see requirement above.
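As an illustration of Option 2, a statically defined device-tree snippet provided by the HV could describe a virtio transport like this.  Everything here is hypothetical except the "virtio,mmio" compatible string, which is the standard binding for virtio over MMIO; the addresses and interrupt numbers would come from the hypervisor's memory map.

```dts
/ {
        /* Hypothetical virtio-mmio transport exposed by the hypervisor.
           Node name, address, size and interrupt number are illustrative only. */
        virtio_block@a0000000 {
                compatible = "virtio,mmio";
                reg = <0x0 0xa0000000 0x0 0x200>;
                interrupts = <0 42 4>;
        };
};
```

A guest with hard-coded boot code could treat such a snippet purely as human-readable documentation of the memory map, as discussed above.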

TBD: Linux and Android have different boot requirements.
Android can be booted using UEFI - likely the EBBR requirements will work for both.

3. Common Virtual Device categories

3.1 Storage (Block Device)


When using hypervisor technology, data on storage devices needs to adhere to high-level security and safety requirements such as isolation and access restrictions. Virtio and its layer for block devices provide the infrastructure for sharing block devices and establishing isolation of storage spaces, because actual device access can be controlled by the hypervisor. However, Virtio favors generality over hardware-specific features. This is problematic in the case of specific requirements w.r.t. robustness and endurance measures often associated with persistent data storage such as flash devices. In this context, three relevant scenarios can be identified:

  1. Features transparent to the GuestOS.  For these features, the required functionality can be implemented close to the access point, e.g., inside the actual driver. As an example, one may think of a flash device where the flash translation layer (FTL) needs to be provided by software. This is in contrast to, for example, MMC flash devices, SD cards and USB thumb drives, where the FTL is transparent to the software.
  2. Features established via driver extensions and workarounds at the level of the GuestOS. These are features which can be differentiated at the level of (logical) block devices, such that the GuestOSes use different block devices and the driver running in the backend enforces a dedicated strategy for each (logical) block device. E.g., a GuestOS and its applications may require different write modes, here reliable vs. normal write.
  3. Features which call for an extension of the VIRTIO block device standard. Whereas categories 1 and 2 do not need an augmentation of the Virtio block device standard, a different story needs to be told whenever such workarounds do not exist. An example of this is the erase of blocks. The respective commands can only be emitted by the GuestOS. The resulting TRIM commands need to be explicitly implemented in both the front-end as well as in the back-end driven by the hypervisor.

3.1.x Meeting automotive persistence requirements

Typical automotive persistence requirements
(to be met by the entire system, i.e. from App to persistence system on Guest, through drivers, HV, to hardware)

1) Ability to write some specific data items for which it is guaranteed that they are stored (within reasonable and bounded time),
i.e. "write-through mode" (seen from the perspective of the user space program in the guest), while the majority of data is written in "cached mode".
(Optionally:  Ability to do this using a file system, i.e. mount something which guarantees this on a VFS path.)

2) Data integrity in case of sudden power loss

3) Flash lifetime (read/write cycle maximums) guarantees (e.g. 10-15 years)

NOTE:  LUNs (defined in UFS) divide the device into parts, so that a Force Unit Access (FUA) does not mean all caches must be flushed.  eMMC does not provide this.  Either NVMe, SCSI or UFS devices are required.  You could map partitions onto LUNs (some optimized for write-through and some for better average performance) and build from there.

VIRTIO should be sufficient in combination with the right requests being made from user space programs (and running appropriate hardware devices below).

With VIRTIO:
  • Option 1 - the device is in write-through mode (WCE = on, set per-device in the VIRTIO block device).
  • Option 2 - the device has WCE = off → BLK_WRITE followed by BLK_SYNC in the driver.  Possible on a raw block device from user space - open the device with O_SYNC.  For a filesystem it is file-system dependent which O_* option applies (and only some guarantee to respect it 100%).

The Linux API creates a block request.  One of the flags in the request is FUA.

The conclusion is that VIRTIO does not break the native behavior.  Even in the native case it can be somewhat uncertain, but VIRTIO does not make it worse.
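The guest user-space side of these options can be sketched in Python (for illustration only).  A real guest would open a raw block device such as /dev/vdb; a temporary file stands in here so the sketch is self-contained:

```python
# Sketch: "write-through mode" as seen from a guest user-space program.
# Opening with O_SYNC makes each write() return only once the data has
# been handed to stable storage (the kernel emits the corresponding
# flush / FUA-flagged block requests towards the virtio-blk device).
import os
import tempfile

def write_through(path: str, data: bytes) -> int:
    """Write data with write-through semantics; return the bytes written."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        return os.write(fd, data)
    finally:
        os.close(fd)

# For data written in "cached mode", an explicit os.fsync(fd) at chosen
# points is the equivalent "make it durable now" request.
path = os.path.join(tempfile.mkdtemp(), "critical.dat")
assert write_through(path, b"odometer=123456") == 15
```

Which O_* options a filesystem honors (and how strictly) remains file-system dependent, as noted above.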

Only thing missing:  VIRTIO might not provide the "total blocks written" data that is available in native systems.  It is arguable whether the guest needs to know this - are there preventative measures / diagnostics that would benefit from it?


  • Implement the virtual block device according to chapter 5.2 in [VIRTIO].
  • The system must implement support for the VIRTIO_BLK_F_FLUSH feature flag.
  • The system must implement support for the VIRTIO_BLK_F_CONFIG_WCE feature flag.
  • The system must implement support for the VIRTIO_BLK_F_DISCARD feature flag.
  • The system must implement support for the VIRTIO_BLK_F_WRITE_ZEROES feature flag.
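A compliance check for these mandated feature flags can be sketched as follows (Python for illustration; the bit positions are the ones defined for the block device in the VIRTIO specification):

```python
# Illustrative check that a virtio-blk device offers the feature bits
# this specification mandates.
VIRTIO_BLK_F_FLUSH        = 1 << 9
VIRTIO_BLK_F_CONFIG_WCE   = 1 << 11
VIRTIO_BLK_F_DISCARD      = 1 << 13
VIRTIO_BLK_F_WRITE_ZEROES = 1 << 14

REQUIRED_FEATURES = (VIRTIO_BLK_F_FLUSH | VIRTIO_BLK_F_CONFIG_WCE |
                     VIRTIO_BLK_F_DISCARD | VIRTIO_BLK_F_WRITE_ZEROES)

def missing_features(device_features: int) -> int:
    """Return a mask of the mandated feature bits the device does not offer."""
    return REQUIRED_FEATURES & ~device_features

# A device offering all mandated bits passes; one lacking DISCARD does not.
assert missing_features(REQUIRED_FEATURES) == 0
assert missing_features(REQUIRED_FEATURES & ~VIRTIO_BLK_F_DISCARD) == VIRTIO_BLK_F_DISCARD
```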

3.2 Network Device

Standard networks

Standard networks include those that are not automotive specific, but instead frequently used in the computing world.  In other words, these are typically IP-based networks, but some of them simulate this level through other means (e.g. vsock, which does not use IP addressing).  The physical layer is normally some variation of the Ethernet/WiFi standard(s) (according to the 802.* standards) or another transport that transparently exposes a similar network socket interface.

virtio-net = Layer 2 (Ethernet / MAC addresses).
virtio-vsock = Layer 4.  Has its own socket type.  Optimized by stripping away the IP stack.  Possibility to address VMs without using IP addresses.  Its primary function is Host (HV) to VM communication.

Vsock: Each VM has a logical ID, but the VM normally does not know about it.  Example usage: running a particular agent in the VM that does something on behalf of the HV.  There is also the possibility to use this for VM-to-VM communication, but since this is a special socket type it would involve writing code that is custom for the virtualization case, as opposed to native.

vsock is the application API.  Multiple differently named transport variations exist in different hypervisors, which means the driver implementation differs depending on the chosen hypervisor.  Virtio-vsock, however, locks this down to one chosen method.
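The vsock addressing model can be illustrated as follows (Python; the well-known context IDs below come from the vsock ABI, while any guest CID a hypervisor assigns is configuration dependent):

```python
# vsock replaces the (IP address, port) pair of AF_INET with a
# (context ID, port) pair; no IP stack is involved.
VMADDR_CID_HYPERVISOR = 0     # reserved for the hypervisor itself
VMADDR_CID_HOST = 2           # well-known CID of the host
VMADDR_CID_ANY = 0xFFFFFFFF   # wildcard CID, used when binding

def vsock_address(cid: int, port: int) -> tuple:
    """Build the address tuple an AF_VSOCK socket connects or binds to."""
    return (cid, port)

# A guest agent serving requests from the host might bind like this
# (Linux only, so shown as a comment):
#   import socket
#   s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
#   s.bind(vsock_address(VMADDR_CID_ANY, 1234))
assert vsock_address(VMADDR_CID_HOST, 1234) == (2, 1234)
```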


  • If the platform implements virtual networking, it shall use the required virtio-net interface between drivers and hypervisor.
  • If the platform implements vsock, it shall use the required virtio-vsock API between drivers and hypervisor.
  • Virtual network interfaces shall be exposed as the operating system's standard network interface concept, i.e. they should show up as a normal network device.
  • The hypervisor/equivalent shall provide the ability to dedicate and expose any hardware network interface to one virtual machine.
  • The hypervisor/equivalent shall(?) be able to configure virtual inter-VM networking interfaces.
  • Implementations of virtio-net shall support the VIRTIO_NET_F_MTU feature flag.
  • Implementations of virtio-net shall support the VIRTIO_NET_F_MAC feature flag.
  • Implementations of virtio-net shall support the VIRTIO_NET_F_CTRL_MAC_ADDR feature flag.

Move the vsock part to a separate (sub)chapter.  Maybe the common chapter name should be "Communication"?  Inter-VM communication might not be searched for within the networking chapter.

    Virtual network interfaces ought to be exposed to user space code in the guest OS as standard network interfaces.  This minimizes the custom code that appears because of the usage of virtualization.

The MTU may differ on the actual network being used.  There is a feature flag through which a network device can state its maximum (advised) MTU, and the guest application code might make use of this to avoid segmented messages.

The guest may require a custom MAC address on a network interface.  This is important for example when setting up bridge devices, which expose the guest's MAC address to the outside network.  To avoid clashes, the host must be able to set an explicit (possibly also stable across reboots) MAC address in each VM (at a later time, after bootup).
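One conceivable scheme for such stable, clash-avoiding MAC addresses is to derive them from a stable VM identifier; this derivation is purely an illustration, not part of the specification:

```python
# Derive a stable MAC address from a VM identifier. Setting the
# locally-administered bit (0x02) and clearing the multicast bit (0x01)
# in the first octet keeps the address out of the vendor-assigned (OUI)
# space, so it cannot clash with real NIC addresses.
import hashlib

def stable_mac(vm_id: str) -> str:
    digest = hashlib.sha256(vm_id.encode()).digest()
    octets = bytearray(digest[:6])
    octets[0] = (octets[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{b:02x}" for b in octets)

mac = stable_mac("cluster-vm")
assert len(mac.split(":")) == 6
assert int(mac.split(":")[0], 16) & 0x03 == 0x02   # unicast + local
assert stable_mac("cluster-vm") == mac             # stable across reboots
```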

In addition, the guest shall be able to set its own MAC address.  (Adam: The HV can deny the request ...?) 

Offloading and other features are considered optimizations and therefore not set as absolutely required.

... discussion on a MAC address change request from a guest VM → who, if anyone, should force other VMs to update (clear ARP caches)?
Some features exist, but they seem to be primarily for live migration.  For the moment this is not required.

Wireless Stuff

WiFi adds some additional characteristics not used in wired networks: SSID, passwords/authentication, signal strength, preferred frequency...
What are the typical designs for this?  Expose WiFi to only one VM and let it act as gateway/router for the others?  Or should the WiFi interface be truly sharable (e.g. Ethernet-level bridges created in the HV so that each VM has its own network device and they all connect to the WiFi connection)?

To do sharing it would be required to have a WiFi controller that can connect to more than one endpoint (Broadcom, Qualcomm and others have various hardware solutions).  macvtap does a kind of mediated passthrough - is this a passthrough of the MAC address used on the VM side?
Xen: Remote wpa_supplicant; at the (VM) network level it is as normal.
Alternative: a virtual device emulates the full MAC level on the host side.


Automotive networks

All traditional in-car networks and buses, such as CAN, FlexRay, LIN, MOST, etc., which are not Ethernet TCP/IP style networks, are treated in the chapter TBD.

Time-sensitive Networking standards

(warning) Placeholder.  How do those requirements affect, and how is the real-time demands implemented in practice in a virtual environment?

3.3 GPU Device

The virtio-gpu is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. The device architecture is based around the concept of resources private to the host; the guest must DMA-transfer into these resources. This is a design requirement in order to interface with 3D rendering.

3.3.1 GPU Device in 2D Mode

In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them. Resources are initially simple 2D resources, consisting of a width, height and format, along with an identifier. The guest must then attach backing store to the resources in order for DMA transfers to work.

        Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].



(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

(lightbulb) REQ-3:   The VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU], SHALL NOT be set.

(lightbulb) REQ-4:  The VIRTIO_GPU_F_EDID flag, described in chapter 5.7.3 in [VIRTIO-GPU], MUST be set and supported to allow the guest to use the display size to calculate the DPI value.

        Device configuration layout.

(lightbulb) REQ-5:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

(lightbulb)      REQ-5.1: The implementation SHALL NOT touch the reserved structure field as it is used for the 3D mode.

        Device Operation.

(lightbulb) REQ-6:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

(lightbulb)      REQ-6.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter in [VIRTIO-GPU].

(lightbulb)      REQ-6.2: The implementation MUST be capable of performing DMA operations to the client's attached resources to fulfil the requirement in chapter in [VIRTIO-GPU].

        VGA Compatibility.

(lightbulb) REQ-7:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is optional.

REQ-TBD: * The device must implement support for the VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 as described in [VIRTIO-FUTURE]


Newer versions of VIRTIO include features like EDID support. A primary use is determining the DPI value of a virtual display, which is otherwise unknown; without it, things like font sizes get messed up.

When attaching buffers, use the pixel format, size, and other metadata for registering the stride.  With uncommon screen resolutions the rows might be unaligned, and custom strides might be needed to match.
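The stride handling mentioned above amounts to rounding each row up to the backend's alignment requirement; the 64-byte row alignment in this sketch is an example value, not something mandated by [VIRTIO-GPU]:

```python
# Stride (bytes per row) for a 2D resource: width * bytes-per-pixel,
# rounded up to the backend's row-alignment requirement.
def stride(width_px: int, bytes_per_pixel: int, align: int = 64) -> int:
    row = width_px * bytes_per_pixel
    return (row + align - 1) // align * align  # round up to alignment

# 1280 px * 4 B = 5120 B is already 64-byte aligned ...
assert stride(1280, 4) == 5120
# ... but an uncommon 1366-px-wide mode needs padding: 5464 -> 5504.
assert stride(1366, 4) == 5504
```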

3.3.2 GPU Device in 3D Mode

3D mode will offload rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine.

The guest side requires additional software in order to convert OpenGL commands to the raw graphics stack state (Gallium state) and channel them through virtio-gpu to the host. Currently the 'mesa' library is used for this purpose. The backend then receives the raw graphics stack state and, using the virglrenderer library, interprets the raw state into an OpenGL form, which can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL.

The solution should become more flexible and independent from third-party libraries on the guest side as soon as Vulkan support is introduced. This is achieved by the fact that Vulkan uses the Standard Portable Intermediate Representation as an intermediate device-independent language, so no additional translation between the guest and the host is required. It is still work in progress [VIRTIO-VULKAN].

Input from Eugen Friedrich:  Would like to discuss a standard for sharing graphics memory buffers between VMs.

3.4 IOMMU Device

NOTE: The current specification draft looks quite neat except for the fact that it marks many requirements as SHOULD or MAY and leaves them to the implementation. Here I try to provide stricter rules where applicable.

An IOMMU provides virtual address spaces to other devices. Traditionally devices able to do Direct Memory Access (DMA masters) would use bus addresses, allowing them to access most of the system memory. An IOMMU limits their scope, enforcing address ranges and permissions of DMA transactions. The virtio-iommu device manages Direct Memory Access (DMA) from one or more physical or virtual devices assigned to a guest.

Potential use cases are:

  • Limit guest devices' scope to access system memory during DMA (e.g. for a pass-through device).

  • Enable scatter-gather accesses due to remapping (DMA buffers do not need to be physically-contiguous).

  • 2-stage IOMMU support for systems that don't have relevant hardware.

These requirements should probably not go into draft.   Dmitry, please decide or rework

        Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 2.1 in [VIRTIO-IOMMU].


(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 2.2 in [VIRTIO-IOMMU].

Feature bits.

(lightbulb) REQ-3:   The valid feature bit set is described in chapter 2.3 in [VIRTIO-IOMMU] and is dependent on the particular implementation.

Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 2.4 in [VIRTIO-IOMMU].

Device initialization.

(lightbulb) REQ-5:   The implementation MUST follow the initialisation guideline according to chapter 2.5 in [VIRTIO-IOMMU].

(lightbulb)      REQ-5.1: When implementing device initialization requirements from chapter 2.5.2, a stricter requirement takes place:

                     a. If the driver does not accept the VIRTIO_IOMMU_F_BYPASS feature, the device SHALL NOT let endpoints access the guest-physical address space.

Device Operation.

(lightbulb) REQ-6:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 2.6 in [VIRTIO-IOMMU].

(lightbulb)      REQ-6.1: When implementing support for device operation requirements from chapter 2.6.2, stricter requirements take place:

             a. The device SHALL NOT set status to VIRTIO_IOMMU_S_OK if a request didn’t succeed.

             b. If a request type is not recognized, the device MUST return the buffers on the used ring and set the len field of the used element to zero.

             c. If the VIRTIO_IOMMU_F_INPUT_RANGE feature is offered and the range described by fields virt_start and virt_end doesn’t fit in the range described by input_range, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

             d. If the VIRTIO_IOMMU_F_DOMAIN_BITS is offered and bits above domain_bits are set in field domain, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

        (lightbulb)      REQ-6.2: When implementing a handler for the ATTACH request as described in chapter, stricter requirements take place:

                     a. If the reserved field of an ATTACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL and SHALL NOT attach the endpoint to the domain.

                     b. If the endpoint identified by endpoint doesn’t exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

                     c. If another endpoint is already attached to the domain identified by domain, then the device MUST attempt to attach the endpoint identified by endpoint to the domain. If it cannot do so, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

                     d. If the endpoint identified by endpoint is already attached to another domain, then the device MUST first detach it from that domain and attach it to the one identified by domain. In that case the device behaves as if the driver issued a DETACH request with this endpoint, followed by the ATTACH request. If the device cannot do so, it MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

                     e. If properties of the endpoint (obtained with a PROBE request) are incompatible with properties of other endpoints already attached to the requested domain, the device SHALL NOT attach the endpoint and MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

        (lightbulb)      REQ-6.3: When implementing a handler for the DETACH request as described in chapter, stricter requirements take place:

                     a. If the reserved field of a DETACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the DETACH operation.
                     b. If the endpoint identified by endpoint doesn’t exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.
                     c. If the domain identified by domain doesn’t exist, or if the endpoint identified by endpoint isn’t attached to this domain, then the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

        (lightbulb)      REQ-6.4: When implementing a handler for the MAP request as described in chapter, stricter requirements take place:

                     a. If virt_start, phys_start or (virt_end + 1) is not aligned on the page granularity, the device MUST set the request status to VIRTIO_IOMMU_S_RANGE and SHALL NOT create the mapping.
                     b. If the device doesn’t recognize a flags bit, it MUST set the request status to VIRTIO_IOMMU_S_INVAL. In this case the device SHALL NOT create the mapping.
                     c. If a flag or combination of flags isn't supported, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.
                     d. The device SHALL NOT allow writes to a range mapped without the VIRTIO_IOMMU_MAP_F_WRITE flag. However, if the underlying architecture does not support write-only mappings, the device MAY allow reads to a range mapped with VIRTIO_IOMMU_MAP_F_WRITE but not VIRTIO_IOMMU_MAP_F_READ.
                     e. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.
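Requirement (a) above can be sketched as follows (Python for illustration; status values are shown by their symbolic [VIRTIO-IOMMU] names, and the 4 KiB page granularity is an example):

```python
# REQ-6.4 (a): virt_start, phys_start and (virt_end + 1) must all be
# aligned on the page granularity; otherwise the device answers
# VIRTIO_IOMMU_S_RANGE and creates no mapping.
def check_map(virt_start: int, virt_end: int, phys_start: int,
              page_size: int = 4096) -> str:
    if any(a % page_size for a in (virt_start, phys_start, virt_end + 1)):
        return "VIRTIO_IOMMU_S_RANGE"   # reject; no mapping is created
    return "VIRTIO_IOMMU_S_OK"

# A page-aligned 8 KiB mapping is accepted ...
assert check_map(0x1000, 0x2FFF, 0x10000) == "VIRTIO_IOMMU_S_OK"
# ... while an end address that does not land on a page boundary is rejected.
assert check_map(0x1000, 0x2F00, 0x10000) == "VIRTIO_IOMMU_S_RANGE"
```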

        (lightbulb)      REQ-6.5: When implementing a handler for the UNMAP request as described in chapter, stricter requirements take place:

                     a. If the reserved field of an UNMAP request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the UNMAP operation.

                     b. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT. 
                     c. If a mapping affected by the range is not covered in its entirety by the range (the UNMAP request would split the mapping), then the device MUST set the request status to VIRTIO_IOMMU_S_RANGE, and SHALL NOT remove any mapping.
                     d. If part of the range or the full range is not covered by an existing mapping, then the device MUST remove all mappings affected by the range and set the request status to VIRTIO_IOMMU_S_OK.

        (lightbulb)      REQ-6.6: When implementing a handler for the PROBE request as described in chapter, stricter requirements take place:

                     a. If the reserved field of a PROBE request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

                     b. If the endpoint identified by endpoint doesn’t exist, then the device SHOULD set the request status to VIRTIO_IOMMU_S_NOENT.

                     c. If the device does not offer the VIRTIO_IOMMU_F_PROBE feature, and if the driver sends a VIRTIO_IOMMU_T_PROBE request, then the device MUST return the buffers on the used ring and set the len field of the used element to zero.
                     d. The device MUST set bits [15:12] of property type to zero.
                     e. If the properties list is smaller than probe_size, then the device SHALL NOT write any property and MUST set the request status to VIRTIO_IOMMU_S_INVAL.
                     f. If the device doesn’t fill all probe_size bytes with properties, it MUST terminate the list with a property of type NONE and size 0. The device MAY fill the remaining bytes of properties, if any, with zeroes. If there isn’t enough space remaining in properties to terminate the list with a complete NONE property (4 bytes), then the device MUST fill the remaining bytes with zeroes.

        (lightbulb)      REQ-6.7: When implementing support for RESV_MEM property as described in chapter, stricter requirements take place:

                     a. The device MUST set reserved to zero.
                     b. The device SHALL NOT present more than one VIRTIO_IOMMU_RESV_MEM_T_MSI property per endpoint.
                     c. The device SHALL NOT present RESV_MEM properties that overlap each other for the same endpoint.

        (lightbulb)      REQ-6.8: When implementing support for fault reporting as described in chapter, stricter requirements take place:

                     a. The device MUST set reserved and reserved1 to zero.
                     b. The device MUST set undefined flags to zero.
                     c. The device MUST write a valid endpoint ID in endpoint.
                     d. If a buffer is too small to contain the fault report (this would happen, for example, if the device implements a more recent version of this specification than the driver, whose fault report contains additional fields), the device SHALL NOT use multiple buffers to describe it. The device MUST fall back to using an older fault report format that fits in the buffer.

3.5 USB Device

The working group and industry consensus seems to be that it is difficult to give concurrent access to USB hardware from more than one operating system instance - in other words, to create multiple virtual USB devices that somehow map to a single host role on a single USB port (or perhaps some kind of partitioning of the tree of devices provided when USB hubs are involved).  The host (master) / device (slave) design of the USB protocol makes it challenging to have more than one software stack playing the host role.  Considering how this might be done could be an interesting theoretical exercise, but the value trade-off does not seem to be there, despite some potential ways it might be used if it were possible (see the use-case section).

[VIRTIO] does not in its current version mention USB devices.

After deliberation we have decided also in this specification to assume that hypervisors will provide only pass-through access to USB hardware.

The ability for one VM to request dedicated access to a USB device during runtime is a potential improvement, and it ought to be considered when choosing a hypervisor.  With such a feature, VMs could even alternate their access to the USB port with a simpler acquire/release protocol than a true full virtualization would use (note the use-case section for caveats).

USB On-The-Go(tm) is left out of scope, since most automotive systems implement the USB host role only, and in the case a system ever needs to have the device role it would surely have a dedicated port and a single operating system instance handling it.

(warning) The configuration of pass-through for USB is not yet standardized and is for the moment considered a proprietary API.  This is a potential area for future improvement.

Hardware support for virtualization

It seems likely that trying to implement special support for splitting a single host port between multiple guests is more complicated than just approaching it as multiple ports.  This applies also if the hardware implements not only host controllers but also a USB hub.  In other words, SoCs are likely better off simply providing support for more separate USB ports at the same time than building in special virtualization features.

(question) Is there anything in particular the hardware should do to facilitate pass-through?


Configurable pass-through access to USB devices.

(lightbulb) REQ-3.5-1:   The hypervisor shall provide statically configurable pass-through access to all hardware USB devices

Resource API for USB devices

(lightbulb) REQ-3.5-2:   The hypervisor may optionally provide an API/protocol to request USB access from the virtual machine, during normal runtime.

Use Case discussion

There is a case to be made for more than one VM needing access to a single USB device.  For example, a single mass-storage device (USB memory) may be required to provide files to several guests.  There are many potential use cases, but just as an example, consider software/data update files that need to be applied to more than one VM/guest, or media files being played by one guest system whereas navigation data is needed in another.

In the previous chapter, the idea of alternating/reconfiguring pass-through access was raised without defining it further.  It should be noted, of course, that this raises many considerations about reliability and one system starving the other of access.  Such a solution would only apply if policies, security and other considerations are met for the particular system.

The most likely remaining solution, considering that virtualized USB access is not a promoted solution, is that one VM is assigned to be the USB master and provide access to the filesystem (or part of it) by means of VM-to-VM communication.  For example, a network file system such as NFS or any equivalent solution could be used.
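As an illustration of this USB-master pattern, the VM owning the USB controller could export the mounted storage over NFS to the other guests via the virtual network.  The addresses and paths below are purely illustrative assumptions, not part of this specification:

```
# On the USB-master VM: /etc/exports, read-only export to the guest subnet
/media/usb   192.168.10.0/24(ro,sync,no_subtree_check)

# On a consuming VM: mount the share over the virtual network
mount -t nfs 192.168.10.1:/media/usb /mnt/usb
```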

4. Hardware pass-through


Explain what this means in practice, how it can be done, and the limitations on what VMs are able/allowed to do.   To what extent are pass-through features in fact abstractions, and to what extent does this constitute "direct hardware access"?


There is a need to document interactions between hardware parts (and sometimes technically limit VM capability) because of the difficulty for VMs to configure hardware correctly without causing issues (including security/stability problems and others).     (Matti, for more details)

5. Special Virtual Device categories

5.x GPIO


Should GPIO be para-virtualized, and what would it look like?
Consider: whereas an implementation on a kernel might be able to suspend rescheduling for a while and bit-bang a particular interface, this might break completely when the HV is in charge of scheduling.
Conclusion: Requiring that the VM has such scheduling guarantees is not realistic.
                   → Whitepaper topic?


  • The hypervisor/equivalent SHALL support configurable pass-through access to a VM for digital general-purpose I/O hardware

?? For digital I/O pins, refer to standard pinmux specification ((warning) need clarification)

?? TODO:  I2C, similar low-level buses

5.x  Sensors



Sensors = Ultrasonic sensor?  Is a thermal imaging camera a sensor?
Rather, here we are discussing sensors such as ambient light, temperature, pressure, acceleration, IMU (Inertial Measurement Unit, rotation), etc.

Accessing off-SoC sensors is basically done through a protocol, and SCMI implements such a protocol.  It is usually spoken from the general cores to the system controller (e.g. an M3 core responsible for the clock tree, power regulation, etc.) via a hardware mailbox.

Since this protocol is defined, it would be possible to reuse it for accessing sensor data quite independently of where the sensor is located.

The SCMI specification does not specify the transport, suggesting hardware mailboxes but acknowledging that this can be different.  The idea is to define how to run SCMI over VIRTIO.
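To make the idea concrete, the Python sketch below packs an SCMI SENSOR_READING_GET command as it might be placed into a transport buffer.  The header layout (message_id in bits [7:0], message_type in bits [9:8], protocol_id in bits [17:10], token in bits [27:18]) and the sensor protocol ID 0x15 follow the ARM SCMI specification; the virtio framing around such a message is exactly the part that remains to be defined, so this is an illustration, not a normative encoding.

```python
import struct

# SCMI sensor management protocol and one of its commands (per the ARM spec)
SCMI_PROTOCOL_SENSOR = 0x15
SCMI_MSG_SENSOR_READING_GET = 0x6

def scmi_header(message_id, protocol_id, message_type=0, token=0):
    """Build the 32-bit SCMI message header word."""
    return ((message_id & 0xFF)
            | ((message_type & 0x3) << 8)
            | ((protocol_id & 0xFF) << 10)
            | ((token & 0x3FF) << 18))

def sensor_reading_get(sensor_id, async_flag=0, token=0):
    """Serialize a SENSOR_READING_GET command: header + sensor_id + flags."""
    hdr = scmi_header(SCMI_MSG_SENSOR_READING_GET, SCMI_PROTOCOL_SENSOR,
                      token=token)
    return struct.pack("<III", hdr, sensor_id, async_flag)

msg = sensor_reading_get(sensor_id=2)   # 12-byte command payload
```

The same header/payload bytes would be identical whether carried over a hardware mailbox or a virtqueue, which is what makes SCMI attractive as a transport-independent sensor abstraction.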


Sensors can be handled by a dedicated co-processor or by the hypervisor implementation, which provides the sensor data through a communication protocol.  This essentially offloads the burden of defining "virtual hardware access" from the VM to the measuring hardware.

For sensors that are not appropriate to virtualize, please refer to the chapter on Hardware Pass-Through.

The System Control and Management Interface (SCMI) protocol was not originally defined for the virtual-sensor purpose itself, but it describes a flexible and appropriate abstraction for sensors.  It is also appropriate for controlling power management and related things.  The actual hardware access implementation is, according to ARM, offloaded to a "System Control Processor", but this is a virtual concept: it could be a dedicated core in some cases, and perhaps not in others.

TODO: Reference official SCMI spec.

Remaining details

1) Specifying how to put SCMI over VIRTIO  (can be driven by the HV group)
2) An IIO (Industrial I/O subsystem) driver is being developed for the Linux kernel (coming)
The requirement is PENDING for now for these two reasons.


  • [PENDING]  For sensors that need to be virtualized, the SCMI protocol SHALL be used to expose sensor data from a sensor subsystem to the virtual machines.

5.x Audio


A proposal has been made to VIRTIO; waiting for comments.
The spec defines how the HV can report audio capabilities to the guest: input/output direction (microphone/speaker) and which stream formats are supported.

Formats: the sample format (it must be PCM), bit depth, number of channels, and sampling rate (frame rate).

These capabilities are defined for each stream.  One VM can open multiple streams towards the HV.  A stream can include more than one channel (interleaved, according to the previously agreed format).
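As a small illustration (not taken from the virtio-sound proposal itself), this is the arithmetic a guest would apply once bit depth, channel count, and rate are agreed for a stream — interleaved PCM means one sample per channel per frame:

```python
def frame_bytes(channels, bits_per_sample):
    """One interleaved PCM frame = one sample per channel."""
    return channels * (bits_per_sample // 8)

def buffer_bytes(channels, bits_per_sample, rate_hz, period_ms):
    """Bytes needed to hold period_ms of audio at the agreed format."""
    frames = rate_hz * period_ms // 1000
    return frames * frame_bytes(channels, bits_per_sample)

# e.g. 16-bit stereo at 48 kHz with a 10 ms period:
# 4 bytes per frame, 480 frames, 1920 bytes per period buffer
```

The period size chosen here directly trades latency against the risk of buffer underruns discussed below.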

The virtual audio card does not support any controls (yet).   The guest does nothing with volume; mixing/priority is somehow implemented by the HV (or a companion VM, or an external amplifier, or...).

There is no volume control in the VIRTIO interface.   Software control (scaling) of volume can of course be done in the guest through user-space code.
Alternatively, volume can be processed/adjusted on the host side (it is unspecified here how to do that).

A control API to set volume/mixing/other parameters would sit on the hypervisor side.  In an ECU, the volume mixing/control might be implemented on a separate chip, and the details of that control interface are not specified here.

Challenges include the real-time behavior, keeping low latency in the transfer, avoiding buffer underruns, etc.

Consequences of ASIL-encumbered features?  → Typically this is handled separately by preloading chimes/sounds into some media hardware, triggered through another event interface.

Some VIRTIO feature flags are relevant here (TBD).

State transitions can be fed into the stream: Start, Stop, Pause, Unpause.   These transitions can trigger actions; for example, when navigation starts playing, the media volume can be lowered.

Start means actually starting to play the samples from the buffer (which was earlier filled with data), and the opposite for the input case.   Pause means stopping at the current position without resetting internal state, so that Unpause can continue playing from that position.
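The distinction between Stop and Pause can be sketched as a tiny state machine.  The state names and structure below are illustrative assumptions, not taken from the virtio-sound proposal; the point is only that Pause keeps the playback position while Stop resets it:

```python
class Stream:
    """Playback-side sketch of the Start/Stop/Pause/Unpause transitions."""

    def __init__(self):
        self.state = "stopped"
        self.position = 0           # frames consumed from the buffer

    def start(self):
        self.state = "playing"

    def stop(self):
        self.state = "stopped"
        self.position = 0           # Stop resets internal state

    def pause(self):
        if self.state == "playing":
            self.state = "paused"   # position is kept

    def unpause(self):
        if self.state == "paused":
            self.state = "playing"  # resume from the kept position

    def play(self, frames):
        """Consume frames only while actually playing."""
        if self.state == "playing":
            self.position += frames
```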

There are no events from the virtual hardware to the guest, because the guest does not control anything.  It is also not possible to be informed about buffer underruns, etc.

A driver proof of concept exists on OpenSynergy's GitHub, and an example implementation in QEMU.  Previously, QEMU played audio through hardware emulation of a sound card; now it can do so via this VIRTIO standard.


  • [PENDING]  If virtualized audio is implemented it MUST implement the virtio-sound standard according to TBD
  • [PENDING] Implement feature flag foo
  • [PENDING] Implement feature flag bar

5.x Media codec 

The proposed VIRTIO standard includes an interface to report and/or negotiate capability, stream format, rate, etc.

Multiple simultaneous decoding is already possible.  Current drivers also allow multiple decoding streams, up to the capability of the hardware.  The situation will be the same here, except that multiple VMs share the capabilities.  Prioritization between VMs can be handled by the host.


TODO – follow VIRTIO

5.x Cryptography

(warning) TODO:  See saved text on minutes page.

5.x.x Random Number Generation

Random number generation is typically implemented by a combination of true-random and pseudo-random implementations.  A pseudo-random generation algorithm is implemented in software.   "True" random values may be acquired from a hardware-assisted device, or a hardware (noise) device may be used to acquire a random seed which is then fed into a pseudo-random algorithm.
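The seed-plus-PRNG scheme just described can be sketched in a few lines.  This is an illustration only: `os.urandom` stands in for the hardware entropy device (the role virtio-entropy would play for a guest), and Python's `random.Random` stands in for the software PRNG.

```python
import os
import random

def make_prng(seed_bytes=None):
    """Seed a software PRNG from a 'true' entropy source."""
    if seed_bytes is None:
        seed_bytes = os.urandom(32)   # stand-in for hardware entropy
    prng = random.Random()
    prng.seed(seed_bytes)
    return prng

# The same seed reproduces the same stream -- which is exactly why the
# seed itself must come from real entropy, never from another PRNG.
a = make_prng(b"\x00" * 32)
b = make_prng(b"\x00" * 32)
assert [a.random() for _ in range(3)] == [b.random() for _ in range(3)]
```

This determinism is also why the requirement below insists that virtio-entropy deliver only entropy, never PRNG output.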


  • The Hypervisor MUST offer at least one good entropy source accessible from the guest
  • The entropy source SHOULD be implemented using virtio-entropy according to VIRTIO TBD
  • To be specific, it is required that what is received from the implementation of virtio-entropy contains only entropy and never the result of a pseudo-random generator.


A crypto device has been added to VIRTIO (a crypto accelerator for ciphers, hashes, MACs, and AEAD).
Sharing of crypto accelerators: on ARM they are often accessible only from TrustZone, and they are stateful as opposed to stateless.  Both of these things make sharing difficult.
Usable for filesystem encryption?   Although the CPU might be just as performant.
virtio-entropy (called virtio-rng inside the Linux implementation) is preferred because it is a simple and cross-platform interface to deal with.

Some hardware implements only one RNG in the system, and it is in TrustZone.   It is inconvenient to call APIs into TrustZone in order to get a value that could otherwise just be read from the hardware, but on those platforms it is the only choice.  While it would be possible to implement virtio-entropy via this workaround, it is more convenient to make direct calls to TrustZone.

The virtual platform is generally expected to provide access to a hardware-assisted, high-quality random number generator through the operating system's preferred interface (the /dev/random device on Linux).

The virtual platform should provide a security analysis of how to avoid any type of side-channel analysis of the random number generation.

TEE access


Access to TrustZone and equivalent functions SHOULD work in the exact same way as for a native system, using the standard access methods (SMC calls on ARM, equivalent on Intel).  Another option used on some Intel systems is to run OP-TEE instances, one per guest.  The rationale for this is that implementations that have been carefully crafted for security (e.g. multimedia DRM) are unlikely to be rewritten only to support virtualization.

* Access to TrustZone and equivalent functions MUST work in the exact same way as for a native system using the standard access methods (SMC calls on ARM, equivalent on Intel).


Changes are expected here, pending whether VIRTIO approves the proposal.


TODO: 1-2 sentences, just explain Replay Protected Memory Buffers (RPMB).

Note: A virtual RPMB device was recently proposed on the VIRTIO mailing list (from Intel).

Different guests need their own unique strictly monotonic counters.  A counter is not expected to increase by more than one per update, which could happen if more than one guest shared the same mechanism.
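The per-guest counter requirement can be sketched as follows.  The class and names are illustrative assumptions (a real RPMB counter lives in the storage hardware); the point is that with separate counters, each guest always sees its own value advance by exactly one, regardless of what other guests do:

```python
class MonotonicCounters:
    """Sketch: one strictly monotonic counter per guest."""

    def __init__(self):
        self._counters = {}          # guest id -> current value

    def increment(self, guest):
        """Strictly monotonic: each call advances by exactly one."""
        value = self._counters.get(guest, 0) + 1
        self._counters[guest] = value
        return value

    def read(self, guest):
        return self._counters.get(guest, 0)

c = MonotonicCounters()
c.increment("vm0"); c.increment("vm0"); c.increment("vm1")
# vm0 observes 1 then 2 -- never a jump -- regardless of vm1's activity
```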

Potential future requirements (still pending):

* If a system requires replay protection, it MUST be implemented according to virtio-rpmb as specified in [VIRTIO-future]

Cryptography acceleration

* If the virtual platform implements crypto acceleration, then the virtual platform MAY implement virtio-crypto as specified in chapter 5.9 in [VIRTIO]

(NB: This requirement might become a MUST later on, if the hardware is appropriate, and remain optional for hardware platforms that are limited to single-threaded usage or have other limitations.)

TODO:   Please consider if this should be a MUST or a weaker requirement.

TODO:   Need to review feature bits in VIRTIO, which ones are mandatory (for typical automotive embedded hardware).


The virtio-crypto standard seems to have been started primarily for PCI extension cards implementing crypto acceleration, although the specification seems generic enough to support future (SoC) embedded hardware.

The purpose of acceleration can be pure acceleration (the client has the key) or rather an HSM purpose (where, for example, the key is hidden within the hardware).

The implementation considerations are analogous to the discussion on RNGs.  On some ARM hardware these accelerators are offered only within TrustZone, and in addition the hardware implementation is stateful.  It ought to be possible to implement virtio-crypto also by delegation into TrustZone, and therefore we require it also on such platforms.  However, it should be understood that parallel access to this feature may not be possible, meaning that this device can be occupied when a guest requests it.  This must be considered in the complete system design.

6. Supplemental Virtual Device categories

6.x  Text Console


While they are rarely an appropriate interface for the normal operation of an automotive system, text consoles are expected to be present for development purposes.  The virtual interface of the console is adequately defined by [VIRTIO].

Text consoles are often connected to a shell capable of running commands.  For security reasons, it must be possible to shut text consoles off entirely in the configuration of a production system.


  • The virtual interface of the console MUST be implemented according to chapter 5.3 in [VIRTIO] 

  • To not impede efficient development, text consoles shall further be integrated according to the operating system's normal standards so that they can be connected to any normal development flow.
  • For security reasons, it MUST be possible to shut off text consoles entirely in the configuration of a production system. 
    This configuration SHALL NOT be modifiable from within any guest operating system

It is also recommended that technical and/or process-related countermeasures, which ensure there is no way to forget to disable these consoles, are introduced and documented during the development phase.

6.1 Filesystem virtualization


This chapter discusses two different features: host-to-VM filesystem sharing, and VM-to-VM sharing, which can be facilitated by hypervisor functionality.

The function of providing disk access in the form of a "shared folder" or full disk pass-through seems more used for desktop virtualization than in the embedded systems that this document is for.  In desktop virtualization, for example, a user wants to run Microsoft Windows in combination with a macOS host, or run Linux in a virtual machine on a Windows-based corporate workstation, or run bespoke Linux systems in KVM/QEMU on a Linux host for development of embedded systems.  Host-to-VM filesystem sharing might also serve some purpose in certain server virtualization setups.  With this background, we consider host-to-VM file sharing an optional feature and cover it only briefly here.

VIRTIO covers (very briefly) the 9P/9pfs protocol for host-to-VM filesystem sharing.


The working group found little need for host-to-VM disk sharing in the final product in an automotive system, but we summarize the opportunities here in case the need arises for some particular product.

[VIRTIO] describes one network disk protocol for the purpose of hypervisor-to-VM storage sharing, which is 9pfs.  It is part of a set of protocols defined by the Plan 9 operating system.

Most systems will be able to accommodate any network disk protocol needs by implementing the network protocol in one or several of the VMs.  The typical systems we deal with can implement a more standard and capable protocol, such as NFS, within the normal operating system environment running in the VM, and share storage between VMs over the (virtual) network they have.  In other words, for many use cases, sharing of disk/filesystem resources need not be implemented in the hypervisor itself.

In [VIRTIO], the 9pfs protocol is mentioned in two ways: a PCI-type device can indicate that it is going to use the 9P protocol, and the specification also has 9P as a specific separate device type.  There seems to be no strict definition of (or even specific reference to) the protocol itself.  It appears to be assumed to be well known by its name, and possible to find online.  The specification is thus complemented only by scattered information found on the web regarding the specific implementations (Xen, KVM, QEMU, ...).

9pfs is a minimalistic network filesystem protocol that could be used for simple HV-to-VM exposure of a filesystem where performance is not critical.  Other network protocols like NFS, SMB/Samba, etc. would be too heavy to implement at the HV-VM boundary, but are still an option between VMs.  9pfs has known performance problems, however; running 9pfs over vsock could be an optimization option.   9pfs feels a bit esoteric and seems to lack a flexible and reliable security model, which seems somewhat glossed over in the 9pfs description: it briefly references only "fixed user" or "pass-through" for mapping ownership of files between guest and host.

The recently introduced virtio-fs uses the FUSE protocol over VIRTIO.  Using FUSE means reusing a proven and stable interface.  It furthermore guarantees the expected POSIX filesystem semantics even when multiple VMs operate on the same file system (which file-sharing approaches like 9pfs may either not guarantee or implement in a very inefficient way).    Optimizations of VM-to-VM sharing will be possible by using shared memory, as defined in VIRTIO 1.2.  File system operations on data that is cached in memory will then be very fast also between VMs.

It is uncertain whether file system sharing is truly a desired feature in typical automotive end products, but the new capabilities might open up a desire to solve use cases that previously did not consider a shared filesystem as the mechanism.   Software update use cases might have the hypervisor in charge of actually modifying critical software sections (boot, etc., but this could include the VM software as well).   For this use case, the download of software could be delegated to a VM which has advanced capabilities for networking and other operations, while the HV is still in charge of checking software authenticity (after locking VM access to the data) and then doing the actual update.    In the end, using filesystem sharing is an open design choice, since the data exchange between VM and HV could alternatively be handled by some other network protocol.

Links: VIRTIO 1.0 spec: {PCI 9P transport, 9P device type}.
Kernel support: Xen/Linux 4.12+ FE driver; Xen implementation details.

Some 9pfs references include:
(Links are not provided since we cannot at the moment evaluate the completeness, or if these should be considered official specification).

  • A set of man pages that seem to be the definition of 9P.
  • QEMU instructions on how to set up VirtFS (9P).  
  • Example/info on how to natively mount a 9P network filesystem.
  • Source code for 9pfs FUSE driver

NOTE: The virtio-fs feature is fairly new and therefore we have included only a soft requirement here.  Revisit this later on.


  • If filesystem virtualization is implemented then virtio-fs, implemented according to XXX, MUST be one of the supported choices.

7. References

    [RFC 2119]  Key words for use in RFCs to Indicate Requirement Levels, IETF BCP 14, March 1997

    [VIRTIO]  Virtual I/O Device (VIRTIO) Version 1.1, Committee Specification 01, release 20 December, 2018

    [VIRTIO-GPU]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 03-virtio-gpu, release 02 August 2015.
     -→ UPDATE THIS FOR 1.1?




