Platform Product Roadmap

Based on the working prototype system, we are now developing a minimum viable product (MVP) of our technology. We plan to release this ARcore platform to customers in two versions:

  1. Version A: a high-performance version for high-speed, high-resolution systems
  2. Version B: a medium-performance version for building very cost-efficient systems

Version A is scheduled for release at the end of 2017, and version B by the beginning of 2018. The table below gives an overview of the components and specifications of the two product versions.

We designed the platform to be small enough to work as a portable device, worn e.g. in a belt pocket. The ARcore (FPGA) module and the content (GPU) module are attached to a multi-layered base board, with peripheral connectors located at the side of the board for easy access.

Dipping into the Tech

ARcore Platform Overview

The purpose of the ARcore platform is to provide customers with a complete, integrated system on which to build their own high-performance AR products based on the ARcore.

The platform consists of two parts:
The zero-latency FPGA module manages the camera pass-through and overlay mixing. Besides the ARcore IP implemented on the FPGA, a dual-core ARM CPU runs firmware that allows the user to control all aspects of the zero-latency pipeline. Firmware functions can be accessed via a Linux-based API, as sketched below.
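The firmware API itself is not shown in this post, so the following is a minimal sketch, purely for illustration, of what controlling the pipeline might look like; the module name "arcore", the device path, and every function and parameter below are invented for this example and may differ entirely from the real API.

    # Hypothetical sketch only: the "arcore" module and all names below
    # are invented for illustration; the real Linux-based API may differ.
    import arcore

    # Open a handle to the zero-latency pipeline on the FPGA module.
    pipeline = arcore.Pipeline.open("/dev/arcore0")

    # Configure the camera pass-through (resolution and frame rate are
    # example values, not product specifications).
    pipeline.set_passthrough(width=1920, height=1080, fps=90)

    # Enable overlay mixing of the GPU-rendered content; alpha controls
    # how strongly the overlay blends over the camera image.
    pipeline.set_overlay(enabled=True, alpha=0.8)

    pipeline.start()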

The GPU module runs a graphics engine (e.g. Unity) to provide the AR content to be superimposed on the camera image. Furthermore, we provide middleware with support algorithms (QR marker detection, visual SLAM, etc.) for common high-level image processing tasks, further increasing development efficiency for the user.
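The middleware interface is not described here; purely to illustrate the kind of task it covers, the sketch below runs QR marker detection on a single camera frame using OpenCV. The file name is a placeholder, and the middleware's own API will look different.

    # Illustration of a middleware-style task (QR marker detection)
    # using OpenCV; this is not the ARcore middleware API itself.
    import cv2

    frame = cv2.imread("camera_frame.png")  # placeholder for a live camera frame
    if frame is None:
        raise FileNotFoundError("camera_frame.png")

    detector = cv2.QRCodeDetector()

    # detectAndDecode returns the decoded payload and the marker's corner points.
    data, points, _ = detector.detectAndDecode(frame)
    if points is not None:
        print("QR marker found:", data)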

The two modules exchange information via PCIe and HDMI interfaces. The cameras and the inertial measurement unit (IMU) are connected directly to the FPGA module. The GPU module connects to an external server via wireless LAN to stream data to and from other HMDs.
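The wire format of this server link is not specified in the post; as a rough sketch only, sending status data toward the server could look like the following, where the host, port, and JSON payload are all chosen purely for illustration.

    # Rough sketch of the HMD-to-server link; host, port and message
    # format are invented for illustration.
    import json
    import socket

    payload = json.dumps({"hmd_id": "hmd-01", "pose": [0.0, 1.6, 0.0]})

    with socket.create_connection(("server.local", 9000)) as sock:
        sock.sendall(payload.encode("utf-8"))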

ARcore Prototype System

The core components of the ARcore platform are implemented in our existing prototype system. The FPGA SoC module is based on an Arria 10 development kit extended with custom hardware.

The GPU component is equipped with a special HDMI-to-CSI interface that creates an HDMI input for the Jetson TX1 platform. The raw camera image is forwarded from the imaging pipeline on the FPGA, and the resulting content overlay image is returned to the imaging pipeline via an HDMI output. In the future this exchange will happen over PCIe.
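Assuming the HDMI-to-CSI bridge registers as a standard V4L2 device on the Jetson (e.g. /dev/video0), which is typical for such bridges but not confirmed in this post, grabbing the forwarded camera image on the GPU side could look like this sketch:

    # Sketch: capture frames from the HDMI-to-CSI input, assuming it
    # appears as a standard V4L2 device (/dev/video0 is an assumption).
    import cv2

    cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
    if not cap.isOpened():
        raise RuntimeError("HDMI-to-CSI capture device not found")

    ok, frame = cap.read()
    if ok:
        print("Captured frame:", frame.shape)
    cap.release()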

The prototype uses a modified virtual reality head-mounted display. The cameras of the HMD are connected directly to the main platform system via LVDS lines for lowest-latency signal propagation.

Software Eats the World

The FPGA module and the GPU module each contain a software stack and API to be used by the developer. All important core functionality of the system is accessible through the software library, allowing easy implementation of applications.

Impressions from ViveX

HTC VIVE gave us the chance to participate in their ViveX accelerator. We had a great time with the teams in Taipei, San Francisco and Shanghai; it was a very educational and interesting experience. We are glad to be a part of the ViveX community. Below are a few impressions from our presentations at the ViveX demo days.

Excuse my German accent 🙂

Seeon explaining the system at the demo day in Taipei:

Lin demonstrating the ARcore at the Shanghai demo day:

Lin presenting while wearing our prototype:

Futurist Robert Scoble giving our demo a try in SF:

A First Application Demo

In collaboration with LP-RESEARCH Inc. we created a first demo video of our technology. The video shows a headset equipped with the ARcore applied to a very simple industrial maintenance task. The status of a robot is monitored using an inertial measurement unit. A machine learning algorithm determines whether a component of the robot needs maintenance, and the result of this evaluation is displayed via the AR headset in real time.
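The post does not describe the algorithm in detail; as one plausible sketch, an anomaly detector could be trained on vibration features derived from the IMU, for example with scikit-learn's IsolationForest. The feature choice and model below are assumptions, not the method used in the demo.

    # Sketch of one possible approach: flag abnormal robot vibration from
    # IMU accelerometer features. Features and model are assumptions,
    # not the algorithm used in the demo.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Stand-in for recorded IMU windows: mean and RMS of acceleration
    # magnitude per time window, during normal operation.
    normal = rng.normal(loc=[9.8, 0.3], scale=0.05, size=(500, 2))

    model = IsolationForest(random_state=0).fit(normal)

    # A window with unusually strong vibration should be flagged (-1 = anomaly).
    suspect = np.array([[9.8, 1.5]])
    print("needs maintenance:", model.predict(suspect)[0] == -1)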

To show the incremental improvement we have made since last year, below is an earlier video showing the very first capabilities of the ARcore. At that time the functionality was still very rudimentary, so a real application was not yet possible. Luckily, we have since progressed past this initial stage.

ARcore for Industrial Maintenance

Xikaku’s first product is the ARcore. The ARcore is an IP component that enables zero-latency camera pass-through and video mixing to create an immersive augmented reality experience.

A possible use case of this system is a factory maintenance scenario, as displayed in the illustration below. A factory maintenance worker might be wearing a virtual reality HMD with two cameras attached to it. ARcore forwards the image from the cameras directly to the display of the headset with minimal latency. On top, ARcore creates a superimposed image that lets the user see measurement information recorded by sensing units attached to the factory machines. This allows the maintenance worker to fulfill his task accurately and efficiently.

The environment seen by the maintenance worker might look like the image below. Superimposed over the factory machines is status information reflecting the condition of each individual machine. One machine in the center of the image has a red marker attached to it, signalling the urgent need for maintenance.

Our Take on Augmented Reality

Xikaku creates solutions for industrial and medical augmented reality applications. In contrast to many others, we here at Xikaku have a very conservative opinion regarding the future of augmented reality.

In our opinion, many creators of augmented reality hardware and software, specifically those with a consumer focus, are trying to serve a customer need that does not really exist. The imagined use case of a person wearing smart glasses 24/7, living in an augmented universe that constantly confronts her with information and, importantly, advertisements, is a technologist’s utopia.

We believe that, essentially, human beings want to be free, in the sense that they have access to information when they want and need it, but very selectively. This need can be satisfied more efficiently and healthily by ubiquitous computing devices, such as smart speakers with built-in AI assistants, than by a device that is permanently attached to your head/face/eyes/brain.

However, we wouldn’t be invested in this industry if we didn’t understand that there are indeed very strong use cases for augmented reality devices. We see these use cases in areas where people require tools to perform tasks that they otherwise couldn’t.

Such a use case might be a surgeon using an AR head-mounted display to overlay spatial X-ray information on the human body in real time. It might be a welder who uses an augmented welding mask to enhance his vision of the material he is working on. It might be a factory maintenance worker who sees critical machine status information superimposed over his vision the moment he enters the machine room.

To conclude, our vision for Xikaku, and for AR in general, is to create augmentation technology that helps people perform their work more efficiently and safely, so they have more free time to spend in reality.