
ACRN Project Releases Version 1.0

We are pleased to announce the release of ACRN™ Hypervisor version 1.0, a key Project ACRN milestone focused on automotive Software-Defined Cockpit (SDC) use cases and introducing additional architecture enhancements for more IoT usages, such as Industrial workload consolidation.

This v1.0 release is a production-ready reference solution for SDC usages that typically require multiple VMs, GPU sharing, and rich I/O mediation for sharing storage, network, USB devices, and more. This release also includes architectural enhancements for supporting diverse IoT workload consolidation usages, for example, Logical Partition mode and initial Real-Time VM support.

In this reference release, we use Clear Linux* as the Service OS (SOS) and User OS (UOS). Android* and other Linux*-based OSes can also be used as a UOS.

See the full release notes and latest documentation for more information about this 1.0 release.

Key features for this 1.0 release include:

Supported Hardware

  • ACRN supports multiple x86 platforms and has been tested with Apollo Lake and Kaby Lake NUCs, and the Apollo Lake UP Squared (UP2) board.

Supported Firmware

  • UEFI BIOS and Slim Bootloader (SBL) have been tested on NUC and UP2 boards. Slim Bootloader is a modern, flexible, light-weight, open source reference bootloader that is also fast, small, customizable, and secure.

Supported OSes

  • Clear Linux is supported and used as the release reference for the Service and User OS.

CPU Virtualization

  • Based on Intel VT-x virtualization technology, ACRN emulates virtual CPUs with core partitioning. The ACRN hypervisor supports APIC virtualization (APICv), EPT, and virtualized IOAPIC and LAPIC functionality.

GVT-g Virtual Graphics (a.k.a. AcrnGT)

  • GVT-g virtual graphics support lets the Service OS and User OS applications run GPU workloads simultaneously with minimal overhead. This helps ensure that both the SOS and the UOS instances can benefit from the full physical GPU capabilities.
  • AcrnGT supports Direct display, where the Service OS and User OS are each assigned to a different display. The display ports support eDP and HDMI.
  • AcrnGT supports GPU Preemption, where the system will preempt GPU resources occupied by lower priority workloads when needed. GPU preemption ensures the graphics performance needs of critical workloads can be met, such as the display frame rate of an SDC Instrument Cluster.
  • AcrnGT supports Surface Sharing, which allows the SOS to access an individual surface (or a set of surfaces) from the UOS without accessing the entire frame buffer of the UOS.

Device Features

  • Device pass-through: VT-d provides hardware support for isolating and restricting device access to only the owner of the partition managing the device. It allows assigning I/O devices to a VM and extending the protection and isolation properties of VMs for I/O operations.
  • Virtio virtualization: The Service OS and User OS applications can share physical devices using industry-standard I/O virtualization virtio APIs, where performance-critical device sharing is enabled. By adopting the virtio specification, we can reuse many frontend virtio drivers already available in a Linux-based User OS, dramatically reducing the development effort for the frontend drivers.
  • Ethernet: ACRN hypervisor supports virtualized Ethernet functionality. The Ethernet Mediator runs in the Service OS and provides packet forwarding between the physical networking devices (Ethernet, Wi-Fi, etc.) and the virtual devices in User OS VMs. For regular (i.e., non-AVB) traffic, the SOS can share the hardware platform's physical connection with Linux or Android applications (a minimal sketch of such a virtio-net launch option appears after this list).
  • Wi-Fi: ACRN hypervisor supports pass-through of the Wi-Fi controller to a UOS, enabling its use as an in-vehicle hotspot for third-party devices, giving third-party device applications access to the vehicle, and giving third-party devices access to the TCU (if applicable) used to interpret and disperse data between electronic systems in an automobile.
  • Bluetooth: ACRN hypervisor supports Bluetooth controller pass-through to a single UOS, for example, for In-Vehicle Infotainment (IVI) use cases.
  • Mass Storage: ACRN hypervisor supports virtualized non-volatile R/W storage for Service OS and User OS instances, supporting VM private storage and storage shared between User OS instances.
  • USB Virtualization: ACRN hypervisor supports pass-through of the platform's USB xDCI controllers to a User OS and provides an emulated USB xHCI controller for a User OS.
  • Image Processing Unit (IPU): ACRN hypervisor provides an IPU mediator to share with a User OS. Alternatively, the IPU can also be configured as pass-through to a User OS without sharing.
  • GPIO virtualization: ACRN supports GPIO para-virtualization based on the Virtual I/O Device (Virtio) specification. The GPIO consumers of the front-end are able to set or get GPIO values, directions, and configuration data via one virtual GPIO controller. In the back-end, the GPIO command line in the launch script can be modified to map native GPIO to a UOS.
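
As a concrete illustration of the virtio-based device sharing described above, the snippet below shows the kind of option a UOS launch script passes to the device model to give the guest a para-virtualized network device. The slot number and tap device name are example values, and the exact option format may vary between ACRN releases; treat this as a sketch rather than a copy of the shipped launch scripts.

   # Sketch: attach a virtio-net device backed by the SOS tap interface
   # "tap0" to the UOS at virtual PCI slot 4. Slot number and tap name are
   # example values; check the launch script shipped with your release.
   ACRN_DM_NET_ARG="-s 4,virtio-net,tap0"
   # The variable above would be appended to the acrn-dm command line in
   # the UOS launch script.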

Logical partition mode

  • In addition to “shared partition mode” support common in SDC use cases, ACRN supports a new “logical partition mode” for supporting industrial uses. With logical partitioning, all UOS VMs are launched directly by the hypervisor and not through the SOS VM, allowing the UOS VMs to run with minimal hypervisor intervention.

Preliminary support for Industrial workload consolidation

  • While this release focuses on SDC use cases, it also includes preliminary support for Industrial use cases.
  • A UOS can run as a virtual machine (VM) with real-time characteristics. A tutorial on using PREEMPT_RT-Linux for a real-time UOS has been published.
  • ACRN supports starting a UOS VM as a “Pre-launched VM” (launched before the Service OS is started) or a “Post-launched VM” (launched by the Service OS).
  • Cache Allocation Technology (CAT) is available on Apollo Lake (APL) platforms, providing cache isolation between VMs. CAT is used mainly for real-time performance quality of service (QoS).
  • ACRN supports Device-Model QoS based on a runC container to control the SOS resources (CPU, Storage, MEM, NET) by modifying the runC configuration file.

Refer to the ACRN version 1.0 release notes for more details. To learn more about the ACRN project community and products using ACRN,  visit the projectacrn.org website.


About the ACRN™ Project

ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform. To learn more, please visit https://projectacrn.org/.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

ACRN Project Releases Version 0.8

We are pleased to announce the release of Project ACRN version 0.8 (see the release notes and documentation). ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform. Check out the Introduction to Project ACRN for more information. The project team encourages you to try it out, and also to join the weekly technical call.

All project ACRN source code is maintained in the https://github.com/projectacrn/acrn-hypervisor repository and includes folders for the ACRN hypervisor, the ACRN device model, and documentation. You can either download this source code as a zip or tar.gz file (see the ACRN v0.8 GitHub release page) or use git clone and checkout commands:

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v0.8

The project’s online technical documentation is also tagged to correspond with a specific release: generated v0.8 documents can be found at https://projectacrn.github.io/0.8/. Documentation for the latest (master) branch is available at https://projectacrn.github.io/latest/.

ACRN v0.8 requires Clear Linux OS version 28260 or newer. Please follow the instructions in the Getting started guide for Intel NUC.

Version 0.8 new features

GPIO virtualization

GPIO virtualization is supported as para-virtualization based on the Virtual I/O Device (virtio) specification. GPIO consumers in the front-end can set or get GPIO values, directions, and configuration via one virtual GPIO controller. In the back-end, the GPIO command line in the launch script can be modified to map native GPIOs to the UOS.
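
To make the back-end side of this more concrete, here is a rough sketch of what such a launch-script modification can look like. The slot number, controller name, and line offsets/names are placeholders, and the option grammar shown reflects our reading of the device model's virtio-gpio parameter, so check the GPIO virtualization documentation for the exact syntax in your release.

   # Sketch only: expose three lines of the native controller gpiochip0 to
   # the UOS through a virtio-gpio slot on the acrn-dm command line.
   # Line 1 is renamed "reset" in the guest, line 2 "light", and line 3
   # keeps its offset. All values are examples.
   ACRN_DM_GPIO_ARG="-s 10,virtio-gpio,@gpiochip0{1=reset:2=light:3}"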

Enable QoS based on runC container

ACRN supports Device-Model QoS based on a runC container to control the SOS resources (CPU, Storage, MEM, NET) by modifying the runC configuration file.
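
As a sketch of what such a change can look like, the commands below tighten CPU and memory limits in an OCI runC bundle's config.json using the standard OCI runtime-spec resource fields. The bundle path, the limit values, and the use of jq are illustrative assumptions, not the documented ACRN procedure.

   # Sketch (hypothetical bundle path and example limits): cap the Device
   # Model container at half a CPU and 512 MiB of memory by editing the
   # standard OCI runtime-spec fields in the runC bundle's config.json.
   BUNDLE=/path/to/acrn-dm-bundle
   jq '.linux.resources.cpu = {"quota": 50000, "period": 100000}
       | .linux.resources.memory = {"limit": 536870912}' \
      "$BUNDLE/config.json" > "$BUNDLE/config.json.new" \
      && mv "$BUNDLE/config.json.new" "$BUNDLE/config.json"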

S5 support for RTVM

ACRN supports a Real-Time VM (RTVM) shutting itself down. An RTVM is a kind of VM that the SOS cannot interfere with at runtime; as such, it can only power itself off from within. All power-off requests external to the RTVM are rejected to avoid any interference.

Document updates

Several new documents have been added in this release.

Fixed Issues:

See the release notes.

Known Issues:

See the release notes.

ACRN Project Releases Version 0.7

We are pleased to announce the release of Project ACRN version 0.7 (see the release notes and documentation). ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform. Check out the Introduction to Project ACRN for more information. The project team encourages you to try it out, and also to join the weekly technical call.

All project ACRN source code is maintained in the https://github.com/projectacrn/acrn-hypervisor repository and includes folders for the ACRN hypervisor, the ACRN device model, and documentation. You can either download this source code as a zip or tar.gz file (see the ACRN v0.7 GitHub release page) or use git clone and checkout commands:

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v0.7

The project’s online technical documentation is also tagged to correspond with a specific release: generated v0.7 documents can be found at https://projectacrn.github.io/0.7/. Documentation for the latest (master) branch is available at https://projectacrn.github.io/latest/.

ACRN v0.7 requires Clear Linux OS version 28260 or newer. Please follow the instructions in the Getting started guide for Intel NUC.

Version 0.7 new features

Enable cache QoS with CAT

Cache Allocation Technology (CAT) is enabled on Apollo Lake (APL) platforms, providing cache isolation between VMs, mainly for real-time performance quality of service (QoS). The CAT settings for a specific VM are normally set up at boot time per the VM configuration determined at build time. For debugging and performance tuning, CAT can also be enabled and configured at runtime by writing proper values to certain MSRs with the wrmsr command in the ACRN shell.
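
For illustration, the lines below show the kind of interaction this enables from the ACRN hypervisor shell. The MSR indices follow the Intel SDM (IA32_L2_QOS_MASK_1 at 0xD11, IA32_PQR_ASSOC at 0xC8F), but the mask value, CPU number, and the exact wrmsr syntax of the ACRN shell are assumptions; consult the release's CAT documentation before trying this on real hardware.

   # Debug/tuning sketch only, with example values:
   # 1) Program an L2 CAT capacity mask (0xF0) into CLOS 1 for pCPU 3.
   #    IA32_L2_QOS_MASK_1 is MSR 0xD11 per the Intel SDM.
   wrmsr -p3 0xd11 0xf0
   # 2) Associate pCPU 3 with CLOS 1 via IA32_PQR_ASSOC (MSR 0xC8F);
   #    the CLOS field occupies bits 63:32, so CLOS 1 is 0x100000000.
   wrmsr -p3 0xc8f 0x100000000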

Support ACPI power key mediator

ACRN supports the ACPI power/sleep key on the APL and KBL NUC platforms, triggering the S3/S5 flow following the ACPI spec.

Document updates

Several new documents have been added in this release.

See the full release notes for details about new features, issues addressed, known issues remaining, and the change log since the previous 0.6 release.

Fixed Issues:

See the release notes.

Known Issues:

See the release notes.

Project ACRN at Embedded World 2019

Embedded World 2019, held in Nuremberg, Germany, showcased the best of the embedded and IoT industry’s latest innovations and trends to over 30,000 visitors. Project ACRN™ was present at the Linux Foundation’s Zephyr™ Project booth. The Zephyr Project is a small, scalable open-source Real-Time Operating System (RTOS) for IoT embedded devices. ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind.

Project ACRN and Zephyr Project engineering teams are collaborating to make one of the hottest trends a reality: workload consolidation, the ability to run multiple, heterogeneous functions on a single embedded system. While there are many techniques to do this, few offer a path that supports deterministic, real-time workloads or sub-systems that are Functional Safety (FuSa) certifiable. Zephyr, combined with ACRN, now offers such a path forward.

At Embedded World 2019, we demonstrated the Zephyr RTOS running in an ACRN Virtual Machine (VM) in a real-time configuration, while concurrently showing a Clear Linux VM running an Artificial Intelligence (AI) object detection algorithm.

These are exciting times to live in with such transformations and innovation happening in the industry. ACRN and Zephyr are at the forefront of the innovation bringing new, open-source solutions that will help the embedded and IoT industry segments transform for the future.

ACRN Look Ahead in 2019

2019 will be an exciting year for project ACRN. Several big things are planned:

  • By early Q2, we’ll welcome ACRN v1.0 and provide a stable software reference for Software-Defined-Cockpit (SDC) usage on Intel Apollo Lake platforms.
  • Real-Time OS will be supported, opening use of ACRN in industrial scenarios needing low latency, and fast, predictable responsiveness. Initial support is for VxWorks and Zephyr OS as Real-Time guest OSes in Q2, and PREEMPT-RT Linux in Q3.
  • A new ACRN Hybrid Mode will be completed in Q2, giving ACRN the ability to run mixed-criticality workloads. For example, a Real-Time Guest OS with a time-sensitive application and dedicated hardware resources can run together with normal-priority Guest OSes (UOS) that run alongside the Service OS (SOS) and share the remaining hardware devices.
  • Windows as Guest (WaaG) will be officially supported in Q4, but you will see incremental features merged before that. For example, we’ll soon introduce a virtual boot loader, OVMF, that enables UEFI support for Virtual Machines required for supporting WaaG.
  • Kata Containers will be supported in Q3. Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs.
  • More I/O device virtualization will be implemented to enrich ACRN’s IoT device hypervisor capability, including GPIO virtualization in Q1, I2C virtualization in Q2 and Intel GPU Gen11 support in Q4.
  • CPU sharing will be a big thing for ACRN. In typical embedded configurations, a partitioned CPU is assigned to a Guest VM to benefit from hardware-level isolation and fast response. For non-critical usages there is also a requirement to share CPU cores among multiple VMs, for example to better support Kata Containers.
  • The Functional Safety (FuSa) certification process will be applied to ACRN core feature development, ultimately helping ACRN be deployed in industrial or automotive (SDC) uses.

*More details can be found in “ACRN Open Source Roadmap 2019”

ACRN Project Releases Version 0.6

ACRN has released version 0.6 (see the release notes and documentation), an updated revision of the project with significant new features and fixed bugs. The project team encourages you to learn more about ACRN and try it out, and also to join the weekly technical call.

The major new and updated features are summarized in the release notes, along with bugs fixed and known issues. These are the new features in 0.6:

New Features:

Enable Privileged VM support for real-time UOS in ACRN: Initial patches enable a User OS (UOS) running as a virtual machine (VM) with real-time characteristics, also called a “Privileged VM”. We’ve published a tutorial, “Using PREEMPT_RT-Linux for real-time UOS”. More patches for ACRN real-time support are coming.

Documentation Updates:

Fixed Issues:

See the release notes.

Known Issues:

See the release notes.

ACRN Project Releases Version 0.5

ACRN has released version 0.5 (see the release notes and documentation), an updated revision of the project with significant new features and fixed bugs. The project team encourages you to learn more about ACRN and try it out, and also to join the weekly technical call.

The major new and updated features are summarized in the release notes, along with bugs fixed and known issues. These are the new features in 0.5:

New Features:

OVMF support initial patches merged in ACRN: To support booting Windows as a Guest OS, we are using Open Virtual Machine Firmware (OVMF). The initial patches to support OVMF have been merged into the ACRN hypervisor. Note: There will be additional patches for ACRN and patches to be upstreamed to OVMF.

UP2 board serial port support: This release enables serial port debugging on UP2 boards during SOS and UOS boot.

One E2E binary to support all UEFI platforms: ACRN can support both Apollo Lake (APL) and Kaby Lake (KBL) NUCs. Instead of having separate builds, this release offers community developers a single end-to-end reference build that supports both UEFI hardware platforms, configured with a new boot parameter. See the Getting Started guide for more information.

APL UP2 board with SBL firmware: With this 0.5 release, ACRN now supports the APL UP2 board with Slim Bootloader (SBL) firmware. Slim Bootloader is a modern, flexible, lightweight, open source reference boot loader with key benefits such as speed, small footprint, customizability, and security. An end-to-end reference build with the ACRN hypervisor, Clear Linux as SOS, and Clear Linux as UOS has been verified on the UP2/SBL board. See the Using SBL on UP2 Board documentation for step-by-step instructions.

Documentation Updates:

Fixed Issues:

See the release notes.

ACRN Project Releases Version 0.4

ACRN has released version 0.4 (see the release notes and documentation), a new updated revision of the project with many added features and fixed bugs. The project team encourages you to learn more about ACRN and try it out, and also to join the weekly technical call.

The major new and updated features are summarized in the release notes, along with bugs fixed and known issues. These are the new features in 0.4:

Documentation Updates:

The Getting Started Guide (GSG) has been updated to avoid the “black screen” issue.

The tutorial “Using Ubuntu as the Service OS” was refreshed.

New Features:

Implemented “wbinvd” emulation for cache write-back.

The script “launch_uos.sh” was simplified to use the latest iot-lts2018 kernel by default.

Fixed Issues:

See the release notes.

ACRN Project at Open Source Summit/IoT Summit/Embedded Linux Conference Europe

The ACRN Project made a splash at the Open Source Summit/Embedded Linux Conference Europe/OpenIoT Summit in October in Edinburgh, UK. In addition to being featured in an ongoing demo in the Intel booth at the conference, project architect Eddie Dong presented “ACRN: A Big Little Hypervisor for IoT Development” to a packed room with 112 attendees.

ACRN continues to be very well-received in automotive, industrial, and other IoT industries as an IoT-friendly lightweight hypervisor with a very small codebase. The project is concerned with making the hypervisor easier to certify for Functional Safety, which was of particular interest at this event due to the attendance of European automotive electronics companies. ACRN offers flexibility by allowing guest operating systems to share devices between VMs, or to create completely isolated hardware partitions. The ACRN demo showed that ACRN performs extremely well and provides support for Linux-based guest operating systems, with RTOS and other alternatives on the roadmap.

ACRN is a fully open source project hosted by the Linux Foundation. For more information about ACRN, lightweight hypervisor technology, and the potential for collaboration on your project, visit https://projectacrn.org.

ACRN Project Releases Version 0.3

ACRN has released version 0.3 (see the release notes), a new updated revision of the project with many added features and fixed bugs. The project team encourages you to learn more about ACRN and try it out, and also to join the weekly technical call.

The major new and updated features are summarized in the release notes, along with bugs fixed and known issues. These are the new features in 0.3:

High level design document: The high level design documents are completed with refreshed content, including: CPU virtualization, GPU virtualization, memory management, VM management, physical interrupt, timer management, CPU P-state and C-state management, S3/S5 management, power management in hypervisor, static CPU core partition, VT-d design, device pass-through, device model, I/O emulation, Virtio supported devices, USB virtualization, random device virtualization, ACRN trace, ACRN log, and hypervisor console.

CSME sharing support: Intel® Converged Security and Management Engine (Intel® CSME) is used to enhance the platform, OS, and application security. ACRN provides CSME sharing capability, so the system can support access to the CSME and all of its constituent subcomponents by multiple guest OS images (Linux, Android, or Clear Linux as Service OS) running concurrently on the same platform.

vHost and vHost-Net support: For upstreaming and performance improvement, the vHost framework and vHost network are enabled to accelerate guest networking with virtio_net.

vSBL enhancement: There are multiple updates to the vSBL module, for example, support for Guest OS crash mode in the vSBL debug version and support for ACPI customization. vSBL can get the RPMB key by hypercall and pass it down to the boot loader.

xD support: The platform supports execution disable (xD) for all virtualized operating systems.

Interrupt storm mitigation: This feature mitigates the risks of a device interrupt storm.

ACRN compiler and linker enhancement: Settings and flags have been enabled in the compiler and linker to harden the ACRN software, including stack execution protection, data relocation protection (RELRO), stack-based buffer overrun detection, Position Independent Execution (PIE), fortified source, and protection against format string vulnerabilities.
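
For readers unfamiliar with these hardening measures, the snippet below lists the conventional GCC/binutils options that correspond to each of them. It is illustrative only and is not copied from ACRN's build files; the project's actual flags live in its Makefiles.

   # Illustrative mapping of the protections above to common GCC/ld flags
   # (not ACRN's actual build configuration; demo.c is a placeholder file):
   HARDEN_CFLAGS="-O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong \
                  -Wformat -Wformat-security -fpie"
   HARDEN_LDFLAGS="-pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack"
   gcc $HARDEN_CFLAGS -o demo demo.c $HARDEN_LDFLAGS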

Naming convention: MISRA-C has requirements on how identifiers are named and ISO 26262 highly recommends adopting naming conventions for products of any safety level. This release addresses those requirements.

Code reshuffle: Several modules’ code has been cleaned up and reshuffled to make it upstream-friendly. For example, the VM loader was updated to avoid involving the hypervisor when passing information from the Device Model to the guest; the MMU code was modified by referring to the x86 SDM; the IOC mediator was reshuffled by replacing the strtok function with strsep and checking the snprintf return value; and the Virtio code was updated by removing the unused virtio_console_cfgwrite in virtio_console.