
ACRN Project Releases Version 1.0

May 10, 2019 (updated April 24, 2020)

We are pleased to announce the release of ACRN™ Hypervisor version 1.0, a key Project ACRN milestone focused on automotive Software-Defined Cockpit (SDC) use cases and introducing additional architecture enhancements for more IoT usages, such as Industrial workload consolidation.

This v1.0 release is a production-ready reference solution for SDC usages that typically require multiple VMs, GPU sharing, and rich I/O mediation for sharing storage, network, USB devices, and more. This release also includes architectural enhancements for supporting diverse IoT workload consolidation usages, for example, Logical Partition mode and initial Real-Time VM support.

In this reference release, we use Clear Linux* as the Service OS (SOS) and User OS (UOS). Android* and other Linux*-based OSes can also be used as a UOS.

See the full release notes and latest documentation for more information about this 1.0 release.

Key features for this 1.0 release include:

Supported Hardware

  • ACRN supports multiple x86 platforms and has been tested with Apollo Lake and Kaby Lake NUCs, and the Apollo Lake UP Squared (UP2) board.

Supported Firmware

  • UEFI BIOS and Slim Bootloader (SBL) have been tested on NUC and UP2 boards. Slim Bootloader is a modern, flexible, lightweight, open source reference bootloader that is also fast, small, customizable, and secure.

Supported OSes

  • Clear Linux is supported and used as the release reference for the Service and User OS.

CPU Virtualization

  • Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU with core partitioning. The ACRN hypervisor supports virtualized APIC-V, EPT, IOAPIC, and LAPIC functionality.
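
A guest OS can confirm at runtime that it is executing on a hypervisor by checking the CPUID hypervisor-present bit and reading the hypervisor vendor leaf. The short C sketch below is a generic x86 illustration of that check, not ACRN-specific code; it simply prints whatever signature the hypervisor reports rather than assuming a particular string.

    /* Minimal sketch: detect a hypervisor from inside a guest via CPUID.
     * Build on an x86 Linux guest with: gcc -o hv_check hv_check.c */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;
            char sig[13] = { 0 };

            /* CPUID leaf 1: ECX bit 31 is the hypervisor-present flag. */
            __cpuid(1, eax, ebx, ecx, edx);
            if (!(ecx & (1u << 31))) {
                    puts("no hypervisor detected");
                    return 0;
            }

            /* CPUID leaf 0x40000000: hypervisor vendor signature in EBX:ECX:EDX. */
            __cpuid(0x40000000, eax, ebx, ecx, edx);
            memcpy(sig + 0, &ebx, 4);
            memcpy(sig + 4, &ecx, 4);
            memcpy(sig + 8, &edx, 4);
            printf("hypervisor signature: %s\n", sig);
            return 0;
    }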

GVT-g Virtual Graphics (a.k.a. AcrnGT)

  • GVT-g virtual graphics support lets the Service OS and User OS applications run GPU workloads simultaneously with minimal overhead. This helps ensure that both the SOS and the UOS instances can benefit from the full physical GPU capabilities.
  • AcrnGT supports Direct display, where the Service OS and User OS are each assigned to a different display. The display ports support eDP and HDMI.
  • AcrnGT supports GPU Preemption, where the system will preempt GPU resources occupied by lower-priority workloads when needed. GPU preemption ensures the graphics performance needs of critical workloads can be met, such as the display frame rate of an SDC Instrument Cluster.
  • AcrnGT supports Surface Sharing, which allows the SOS to access an individual surface (or a set of surfaces) from the UOS without accessing the entire frame buffer of the UOS.

Device Features

  • Device pass-through: VT-d provides hardware support for isolating and restricting device access to only the owner of the partition managing the device. It allows assigning I/O devices to a VM and extending the protection and isolation properties of VMs for I/O operations.
  • Virtio virtualization: The Service OS and User OS applications can share physical devices through industry-standard virtio I/O virtualization APIs, enabling performance-critical device sharing. By adopting the virtio specification, we can reuse the many frontend virtio drivers already available in a Linux-based User OS, dramatically reducing frontend driver development effort (see the driver skeleton after this list).
  • Ethernet: The ACRN hypervisor supports virtualized Ethernet functionality. The Ethernet Mediator runs in the Service OS and forwards packets between the physical networking devices (Ethernet, Wi-Fi, etc.) and the virtual devices in User OS VMs. For regular (i.e., non-AVB) traffic, the SOS can share the hardware platform's physical connection with Linux or Android applications.
  • Wi-Fi: The ACRN hypervisor supports pass-through of the Wi-Fi controller to a UOS. This enables the Wi-Fi controller to serve as an in-vehicle hotspot for third-party devices, gives third-party device applications access to the vehicle, and gives third-party devices access to the TCU (if applicable) used to interpret and disperse data between electronic systems in an automobile.
  • Bluetooth: ACRN hypervisor supports Bluetooth controller pass-through to a single UOS, for example, for In-Vehicle Infotainment (IVI) use cases.
  • Mass Storage: ACRN hypervisor supports virtualized non-volatile R/W storage for Service OS and User OS instances, supporting VM private storage and storage shared between User OS instances.
  • USB Virtualization: ACRN hypervisor supports pass-through of USB xDCI controllers to a User OS from the platform. ACRN hypervisor supports an emulated USB xHCI controller for a User OS.
  • Image Processing Unit (IPU): ACRN hypervisor provides an IPU mediator to share with a User OS. Alternatively, the IPU can also be configured as pass-through to a User OS without sharing.
  • GPIO virtualization: ACRN supports GPIO para-virtualization based on the Virtual I/O Device (Virtio) specification. Frontend GPIO consumers can set or get GPIO values, directions, and configuration data through a single virtual GPIO controller. On the backend, the GPIO command line in the launch script can be modified to map native GPIOs to a UOS (a minimal frontend consumer sketch follows this list).
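
Because device sharing follows the virtio specification, a Linux-based UOS can reuse the standard Linux virtio frontend driver model. The skeleton below is a generic illustration of that frontend API, not ACRN source code, and the driver name and matched device ID are placeholders; in practice, existing drivers such as virtio-net and virtio-blk are simply reused unmodified.

    /* Illustrative virtio frontend driver skeleton using the standard Linux
     * virtio driver model; names and the matched device ID are placeholders,
     * not ACRN source code. */
    #include <linux/module.h>
    #include <linux/virtio.h>
    #include <linux/virtio_config.h>
    #include <linux/virtio_ids.h>

    static int demo_probe(struct virtio_device *vdev)
    {
            /* Feature negotiation and virtqueue setup would happen here. */
            dev_info(&vdev->dev, "virtio frontend probed\n");
            return 0;
    }

    static void demo_remove(struct virtio_device *vdev)
    {
            dev_info(&vdev->dev, "virtio frontend removed\n");
    }

    /* Match a virtio-net device from any vendor (placeholder choice). */
    static const struct virtio_device_id id_table[] = {
            { VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID },
            { 0 },
    };
    MODULE_DEVICE_TABLE(virtio, id_table);

    static struct virtio_driver demo_driver = {
            .driver.name  = "virtio_frontend_demo",
            .driver.owner = THIS_MODULE,
            .id_table     = id_table,
            .probe        = demo_probe,
            .remove       = demo_remove,
    };
    module_virtio_driver(demo_driver);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Generic virtio frontend driver skeleton");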
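
On the frontend side, the para-virtualized GPIO controller appears to the UOS as an ordinary GPIO controller, so a consumer can use standard userspace GPIO interfaces. The sketch below uses the libgpiod v1 C API purely as an illustration; the /dev/gpiochip0 node and the line offset are placeholder assumptions for the example, not values defined by ACRN.

    /* Generic GPIO consumer sketch using the libgpiod v1 C API.
     * The chip node and line offset are placeholders for illustration only.
     * Build (on the UOS) with: gcc gpio_demo.c -lgpiod */
    #include <gpiod.h>
    #include <stdio.h>

    int main(void)
    {
            struct gpiod_chip *chip;
            struct gpiod_line *line;

            /* Open the GPIO controller exposed to the UOS (placeholder node). */
            chip = gpiod_chip_open("/dev/gpiochip0");
            if (!chip) {
                    perror("gpiod_chip_open");
                    return 1;
            }

            /* Request line 0 as an output and drive it high. */
            line = gpiod_chip_get_line(chip, 0);
            if (!line || gpiod_line_request_output(line, "gpio-demo", 0) < 0) {
                    perror("gpiod line request");
                    gpiod_chip_close(chip);
                    return 1;
            }
            gpiod_line_set_value(line, 1);

            gpiod_line_release(line);
            gpiod_chip_close(chip);
            return 0;
    }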

Logical partition mode

  • In addition to the “shared partition mode” common in SDC use cases, ACRN supports a new “logical partition mode” for industrial uses. With logical partitioning, all UOS VMs are launched directly by the hypervisor rather than through the SOS VM, allowing the UOS VMs to run with minimal hypervisor intervention.

Preliminary support for Industrial workload consolidation

  • While this release focuses on SDC use cases, it also includes preliminary support for Industrial use cases.
  • A UOS can run as a virtual machine (VM) with real-time characteristics. A tutorial on using PREEMPT_RT-Linux for a real-time UOS has been published.
  • ACRN supports starting a UOS VM as a “Pre-launched VM” (launched before the Service OS starts) or as a “Post-launched VM” (launched by the Service OS).
  • Cache Allocation Technology (CAT) is available on Apollo Lake (APL) platforms, providing cache isolation between VMs. CAT is used mainly for real-time performance quality of service (QoS).
  • ACRN supports Device-Model QoS based on a runC container, controlling the SOS resources (CPU, storage, memory, network) by modifying the runC configuration file.
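
For reference, runC consumes an OCI runtime config.json whose linux.resources section caps a container's CPU, memory, block I/O, and network usage. The excerpt below is a generic example of that section with placeholder limits; it is not the configuration file shipped with ACRN.

    {
      "linux": {
        "resources": {
          "memory":  { "limit": 536870912 },
          "cpu":     { "shares": 512, "quota": 50000, "period": 100000 },
          "blockIO": { "weight": 500 },
          "network": { "classID": 1048577 }
        }
      }
    }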

Refer to the ACRN version 1.0 release notes for more details. To learn more about the ACRN project community and products using ACRN, visit the projectacrn.org website.


About the ACRN™ Project

ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform. To learn more, please visit https://projectacrn.org/.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.