# AR - Enhanced projects
## Summary

### AR Scene
In red are the objects that are static in the AR world (positioned from the sensors and from where the app was first launched). Among them are some distance markers (“TicTacs” marking steps of 1 meter) and a simple cube pivoting non-stop; those objects are simple 3D shapes.
In yellow are the objects that are recognized and enhanced (explained further below).
In green are the managers, which dispatch events such as speech recognition and scene data.
Note
An object in the hierarchy can be seen as a container with a position and a size, gathering models and scripts.
A `Marker` is solely composed of a 3D body (a `Mesh Renderer`), the `PivotingCube` has a 3D model and a script, and the `SceneManager` contains only a script, without any model.
In the hierarchy tree, child objects position themselves relative to their parents.
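
As an illustration, here is a minimal sketch of what the `PivotingCube` script could look like (the field name and speed are assumptions):

```csharp
using UnityEngine;

// Minimal sketch of a non-stop pivoting cube: a single script component
// attached to an object that also carries the cube's 3D model.
public class PivotingCube : MonoBehaviour
{
    public float degreesPerSecond = 90f; // assumed speed, tweakable in the Inspector

    void Update()
    {
        // Rotate around the local Y axis, frame-rate independent.
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f);
    }
}
```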


## Launching
The project is built with, and intended to be used with, Unity 2020.3.19f1.
When opening the project, go to `Assets -> Scenes` and double-click `Scene`.
To enable the Vuforia library, go to Vuforia’s developer website and register for a developer license.
Then, in Unity, go to `MixedRealityPlayspace`, click `ARCamera`, and under the Inspector click `Open Vuforia Configuration`. Simply enter your license under `App License Key`.
Other settings can be modified here.
To get better stabilization on Android, the `ARCore` library is included and required (it is managed by Unity).
## Enhanced targets
Vuforia’s targets seem to rely on sharp angles and high contrast; text defines a lot of interest points. The targets were therefore modified to provide enough interest points.
### SPL
The SPL target is the following:


The full augmentation looks as follows:

On detection, a sound is played and the logo animation is launched. The animation consists of a rotation and an expansion, after which the logo fixes itself in front of the target:
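
A hedged sketch of such a detection handler and animation, assuming `PlayDetection()` is wired to Vuforia’s “on target found” event in the Inspector (all names and values are assumptions):

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: on detection, play a sound, then rotate and
// expand the logo before settling it in front of the target.
public class LogoAnimation : MonoBehaviour
{
    public AudioSource detectionSound;  // assumed to be assigned in the Inspector
    public Transform logo;              // the logo to animate
    public Vector3 finalLocalPosition = new Vector3(0f, 0f, -0.2f); // in front of the target
    public float duration = 2f;

    // Wired to the target's "on found" event in the Inspector.
    public void PlayDetection()
    {
        detectionSound.Play();
        StartCoroutine(RotateAndExpand());
    }

    IEnumerator RotateAndExpand()
    {
        Vector3 start = logo.localPosition;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            float k = t / duration;
            logo.Rotate(0f, 360f * Time.deltaTime, 0f);               // spin
            logo.localScale = Vector3.one * Mathf.Lerp(0.1f, 1f, k);  // expand
            logo.localPosition = Vector3.Lerp(start, finalLocalPosition, k);
            yield return null;
        }
        logo.localPosition = finalLocalPosition; // fix in front of the target
    }
}
```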


The text itself is based on a billboard, i.e. it reorients to face the user's view:
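
MRTK also offers billboarding facilities, but a minimal hand-rolled equivalent could look like this (the class name is an assumption):

```csharp
using UnityEngine;

// Minimal billboard sketch: reorient the object every frame so that it
// always faces the user's camera.
public class FaceUser : MonoBehaviour
{
    void LateUpdate()
    {
        Camera cam = Camera.main;
        if (cam == null) return;

        // Look away from the camera so the text front faces the user,
        // keeping the text upright with the camera's up vector.
        transform.rotation = Quaternion.LookRotation(
            transform.position - cam.transform.position,
            cam.transform.up);
    }
}
```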

### DigitAlu
The DigitAlu target is the following:


The full augmentation looks as follows:

On detection, a sound is played and a “force field” is projected around the part.
The effect is given by a `HollowBox` to which a shader named `Plasma`, from the `Ultimate 10+ Shaders` package, is applied:


The text itself is based on a billboard, i.e. it reorients to face the user's view:

### DigitalTwin
The DigitalTwin target is the following:


The full augmentation looks as follows:

On detection, a sound is played, the poster is shown, and a scan animation is launched.
Note
The poster is the transparent version of the PDF. Drawing it is point-based, and thus very slow with a white background.
Drawing only the required points, then projecting a white `Quad` (a plane rotated by 90°) behind them, is much faster.
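
A hedged sketch of that trick, spawning a single white `Quad` just behind the drawn points (the offset and scale values are assumptions):

```csharp
using UnityEngine;

// Hedged sketch of the white-background trick: instead of drawing every
// white point of the poster, place one white Quad just behind the points.
public class PosterBackground : MonoBehaviour
{
    void Start()
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.transform.SetParent(transform, false);
        quad.transform.localPosition = new Vector3(0f, 0f, 0.001f); // just behind the points
        quad.transform.localScale = new Vector3(0.21f, 0.297f, 1f); // A4-sized, assumed
        quad.GetComponent<Renderer>().material.color = Color.white;
    }
}
```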


The light “laser grid” effect comes from a simple `Spot` light to which a cookie is applied.
Note
A cookie is based on a grayscale mask: the darker a pixel, the less light passes through it.


Finally, the light is animated by rotating it and by varying the spot intensity following a sine pattern. The cubes are moved by a second animation, where their position is simply modified over time:
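
A hedged sketch of the light animation (the component name and values are assumptions, not the project's exact script):

```csharp
using UnityEngine;

// Hedged sketch of the scan-light animation: the spot light rotates
// while its intensity follows a sine pattern.
public class ScanLight : MonoBehaviour
{
    public Light spot;                 // the Spot light carrying the grid cookie
    public float rotationSpeed = 30f;  // degrees per second
    public float baseIntensity = 1f;
    public float amplitude = 0.5f;
    public float frequency = 1f;       // oscillations per second

    void Update()
    {
        transform.Rotate(0f, rotationSpeed * Time.deltaTime, 0f);
        spot.intensity = baseIntensity
            + amplitude * Mathf.Sin(2f * Mathf.PI * frequency * Time.time);
    }
}
```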


### Unique Stability Plates
The USP target is the following:


The full augmentation looks as follows:

On detection, a sound is played, the poster is shown, and two `.csv` files are read and displayed as a moving graph.
A custom script, `USP Data Provider`, takes a CSV file and displays more or less data, more or less quickly.
The moving tick can be synchronized with another dataset so that both graphs stay in step.
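
A hedged sketch of what such a data provider could do, assuming a single-column CSV without a header row (the class name, file path, and fields are all assumptions):

```csharp
using System.Globalization;
using System.IO;
using UnityEngine;

// Hedged sketch of a "USP Data Provider"-style script: read one column
// of a .csv file and expose a moving value, with a configurable playback
// speed and row stride ("more or less data, more or less quickly").
public class CsvDataProvider : MonoBehaviour
{
    public string csvPath = "Assets/Data/usp.csv"; // hypothetical path
    public int stride = 1;             // read every n-th row: less data
    public float rowsPerSecond = 10f;  // playback speed of the moving tick

    float[] values;
    float cursor;                      // current (fractional) row index

    void Start()
    {
        string[] lines = File.ReadAllLines(csvPath);
        values = new float[lines.Length];
        for (int i = 0; i < lines.Length; i++)
            values[i] = float.Parse(lines[i].Split(',')[0],
                                    CultureInfo.InvariantCulture);
    }

    void Update()
    {
        // Advance the tick; two providers started together with the same
        // speed stay synchronized, which keeps both graphs in step.
        cursor = (cursor + rowsPerSecond * Time.deltaTime) % values.Length;
    }

    // A graph component would poll this every frame to draw the curve.
    public float CurrentValue =>
        values[(Mathf.FloorToInt(cursor) * stride) % values.Length];
}
```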


### VLSPM
The VLSPM target is the following:


The full augmentation looks as follows:

On detection, a sound is played and a summary is shown, along with a robot animation and a video button. On button click, a video from VLSPM++ is displayed; it can be hidden again, either by pressing the button again or when the video finishes:


The button comes from the `MRTK` toolkit. On click, it shows or hides the video screen.
The screen itself simulates a click on the button when the video finishes:


The video screen starts playing automatically when its object is activated:
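
A hedged sketch of the screen behaviour, using Unity's `VideoPlayer` (the class name and wiring are assumptions; in the project, hiding goes through a simulated click on the MRTK button rather than a direct toggle):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Hedged sketch: the video plays as soon as the screen object is
// activated, and the screen hides itself again when playback finishes.
[RequireComponent(typeof(VideoPlayer))]
public class VideoScreen : MonoBehaviour
{
    public GameObject screen;  // the screen object toggled by the MRTK button

    VideoPlayer player;

    void Awake()
    {
        player = GetComponent<VideoPlayer>();
        // When the video finishes, toggle the screen off (the project
        // instead simulates a click on the button, to keep its state in sync).
        player.loopPointReached += _ => ToggleScreen();
    }

    void OnEnable()
    {
        player.Play(); // auto-play when the object is activated
    }

    // Wired to the MRTK button's click event in the Inspector.
    public void ToggleScreen()
    {
        screen.SetActive(!screen.activeSelf);
    }
}
```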

## MRTK - The Mixed Reality ToolKit
`MRTK` is a toolkit to build AR and VR experiences: it captures the device inputs and projects the scene on the device, on top of `OpenXR`.
The configuration is available under the `MixedRealityToolkit` object.

To modify a setting, first `Clone` the corresponding profile.
It manages:

* the `Experience` scaling (whether the user will use the device room-scale, seated, …)
* the `Camera`, to manage the hologram reprojection
* the `Input` from the user (hand gestures, eye tracking, controllers, speech; see the sketch after this list)
* the `Boundary`, such that the real world is mapped and interacts with the Unity scene (collisions)
* the `Teleport` system, to move through the game (mostly for VR, to avoid loss of orientation and motion sickness)
* the `Spatial Awareness`, which shows the world boundaries as a mesh
* some `Diagnostics`, to try and find slow-downs while developing
* the `Scene System`, which allows switching between multiple scenes
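
For example, the speech part of the `Input` profile can be consumed by a handler like this hedged MRTK 2.x-style sketch (the class name is an assumption, and the keyword must also be declared in the cloned Speech Commands profile):

```csharp
using Microsoft.MixedReality.Toolkit;
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Hedged sketch of a global speech handler in the MRTK 2.x style:
// it receives keywords declared in the Speech Commands profile.
public class SpeechManager : MonoBehaviour, IMixedRealitySpeechHandler
{
    void OnEnable()
    {
        // Register globally so the handler works without object focus.
        CoreServices.InputSystem?.RegisterHandler<IMixedRealitySpeechHandler>(this);
    }

    void OnDisable()
    {
        CoreServices.InputSystem?.UnregisterHandler<IMixedRealitySpeechHandler>(this);
    }

    public void OnSpeechKeywordRecognized(SpeechEventData eventData)
    {
        Debug.Log($"Recognized keyword: {eventData.Command.Keyword}");
    }
}
```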
## Vuforia
The `Vuforia` library detects targets (images, 3D models, QR-code-like markers) in AR experiences in an easy way.
The configuration is available under `MixedRealityPlayspace` -> `ARCamera` -> `Open Vuforia Configuration`.

One can select:

* the trade-off between quality and speed
* how many targets are tracked simultaneously
* the objects database
* the Android ARCore usage (better AR tracking)
* the camera used for tests in the player