==============================
AR - Enhanced projects
==============================

Summary
=========

| The `Enhanced projects `_ repo is meant to give AR augmentations to targets linked to SPL's projects.
| One can use either a HoloLens 2 or an Android phone. The recognition is based on `Vuforia's `_ library.

.. figure:: img/01_scene.*
   :align: center
   :alt: AR Scene
   :width: 90%

   AR Scene

In red, the objects are static in the AR world (positioned from the sensors and from where the app was first launched). Here are some distance ``Markers`` ("TicTacs" marking steps of 1 meter) and a simple cube pivoting non-stop. Those objects are simple 3D shapes.

In yellow are the objects recognized and enhanced (explained further below).

In green are the managers, dispatching events such as speech recognition and scene data.

.. note::
   An object in the hierarchy can be seen as a container, with a position and a size, gathering models and scripts. A ``Marker`` is solely composed of a 3D body (``Mesh renderer``), while the ``PivotingCube`` has a 3D model and a script, and the ``SceneManager`` contains only a script without a model. In the relation tree, child objects position themselves relative to their parents.

.. image:: img/03_simpleobject.*
   :width: 33%

.. figure:: img/02_autorotation.*
   :width: 52%

Launching
=============

The project is made and intended to be used with Unity 2020.3.19f1. When opening the project, go to ``Assets -> Scenes -> double click Scene``.

To enable the Vuforia library, go to `Vuforia's developer website `_ and register for a developer license. Then, in Unity, go to ``MixedRealityPlayspace -> click ARCamera -> under Inspector, click Open Vuforia Configuration``. Simply enter your license under ``App License Key``. You can modify other settings here.

To get better stabilization on Android, the ``ARCore`` library is included and required (it will be managed by Unity).
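A script such as the ``PivotingCube``'s non-stop rotation can be sketched as a minimal Unity ``MonoBehaviour``. This is a hypothetical reconstruction (the class and field names are assumptions, not taken from the repo):

.. code-block:: csharp

   // Hypothetical sketch of a non-stop pivoting cube.
   // The actual script in the repo may differ.
   using UnityEngine;

   public class PivotingCube : MonoBehaviour
   {
       // Rotation speed in degrees per second around the vertical axis.
       [SerializeField] private float degreesPerSecond = 45f;

       private void Update()
       {
           // Rotate a small step each frame, scaled by the frame time
           // so the speed is independent of the frame rate.
           transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime);
       }
   }

Attached to any object with a 3D model, this makes it pivot continuously in the scene.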
Enhanced targets
==================

Vuforia's targets seem to look for sharp angles and high contrasts. Text defines a lot of interest points. The targets were modified to get enough interest points.

SPL
--------

The SPL target is the following:

.. image:: img/spltarget.*
   :width: 46%

.. image:: img/spltarget_ip.*
   :width: 46%

While the full augmentation is as follows:

.. figure:: img/04_spl1.*
   :align: center
   :width: 70%

On detection, a sound is played and the logo animation is launched. It consists of a rotation and an expansion, after which the logo fixes itself in front of the target:

.. figure:: img/05_spl2.*
   :align: center
   :width: 70%

.. figure:: img/splanim.*
   :align: center
   :width: 70%

The text itself is based on a billboard, i.e. it reorients to face the user's view:

.. figure:: img/06_spl3.*
   :align: center
   :width: 70%

DigitAlu
-------------

The DigitAlu target is the following:

.. image:: img/digalu.*
   :width: 46%

.. image:: img/digalu_ip.*
   :width: 46%

While the full augmentation is as follows:

.. figure:: img/10_da1.*
   :align: center
   :width: 70%

On detection, a sound is played and a "force field" is projected around the part. The effect is given by a ``HollowBox`` with a shader named ``Plasma`` applied, from the ``Ultimate 10+ Shaders`` package:

.. figure:: img/11_da2.*
   :align: center
   :width: 70%

.. figure:: img/daanim.*
   :align: center
   :width: 70%

The text itself is based on a billboard, i.e. it reorients to face the user's view:

.. figure:: img/12_da3.*
   :align: center
   :width: 70%

DigitalTwin
---------------

The DigitalTwin target is the following:

.. image:: img/dt.*
   :width: 46%

.. image:: img/dt_ip.*
   :width: 46%

While the full augmentation is as follows:

.. figure:: img/20_dt1.*
   :align: center
   :width: 70%

On detection, a sound is played, the poster is shown and a scan animation is launched.

.. note::
   The poster is the transparent version of the PDF. Drawing it is point based, and thus very slow with a white background.
Drawing only the required points, then projecting a white ``Quad`` (a plane rotated by 90°) behind them, is much faster.

.. figure:: img/21_dt2.*
   :align: center
   :width: 70%

.. figure:: img/dtanim.*
   :align: center
   :width: 70%

The light "laser grid" effect comes from a simple ``Spot`` light with a cookie applied.

.. note::
   A cookie is a grayscale mask: the darker a region is, the less light passes through it.

.. figure:: img/22_dt3.*
   :width: 62%

.. figure:: img/25_dt_cookie.*
   :width: 34%

Finally, the light is animated by rotating it and changing the spot intensity following a sine pattern. The cubes are moved with a second animation, where their position is simply modified over time:

.. figure:: img/23_dt4.*
   :width: 40%

.. figure:: img/24_dt5.*
   :width: 55%

Unique Stability Plates
----------------------------

The USP target is the following:

.. image:: img/usp.*
   :width: 46%

.. image:: img/usp_ip.*
   :width: 46%

While the full augmentation is as follows:

.. figure:: img/30_usp1.*
   :align: center
   :width: 70%

On detection, a sound is played, the poster is shown and two ``.csv`` files are read and displayed as a moving graph. A custom script, ``USP Data Provider``, takes a CSV file and displays more or less data, more or less quickly. The moving tick can be synchronized with another dataset to ensure synchronicity.

.. figure:: img/31_usp2.*
   :align: center
   :width: 70%

.. figure:: img/uspanim.*
   :align: center
   :width: 70%

VLSPM
---------------

The VLSPM target is the following:

.. image:: img/vlspm.*
   :width: 46%

.. image:: img/vlspm_ip.*
   :width: 46%

While the full augmentation is as follows:

.. figure:: img/40_vlspm1.*
   :align: center
   :width: 70%

On detection, a sound is played and a summary is shown, along with a robot animation and a video button. On button click, a video from VLSPM++ is displayed; it can be hidden again, either by pressing the button again or when the video finishes:

.. figure:: img/41_vlspm2.*
   :align: center
   :width: 70%

.. figure:: img/vlspmanim.*
   :align: center
   :width: 70%

The button comes from the ``MRTK`` toolkit. On click, it shows or hides the video screen. The screen itself simulates a click on the button when the video finishes:

.. figure:: img/42_vlspm3.*
   :width: 50%

.. figure:: img/43_vlspm4.*
   :width: 44%

The video screen automatically plays when the object is activated:

.. figure:: img/44_vlspm5.*
   :width: 55%

MRTK - The Mixed Reality ToolKit
===================================

| ``MRTK`` is a toolkit to build AR and VR experiences, capturing the device inputs and projecting the scene on the device.
| It is especially made for the HoloLens, but also supports Android through ``OpenXR``.

The configuration is available under the ``MixedRealityToolkit`` object.

.. figure:: img/50_mrtk.*
   :width: 85%

To modify a setting, first ``Clone`` the corresponding profile. It manages:

* The ``Experience`` scaling (whether the user will use the device room-scale, seated, ...)
* The ``Camera``, to manage the hologram reprojection
* The ``Input`` from the user (hand gestures, eye tracking, controllers, speech)
* The ``Boundary``, such that the real world is mapped and interacts with the Unity scene (collisions)
* The ``Teleport`` system, to move through the game (mostly for VR, to avoid loss of orientation and nausea)
* The ``Spatial Awareness``, which shows the world boundaries as a mesh
* Some ``Diagnostics``, to help find slow-downs while developing
* The ``Scene System``, which allows switching between multiple scenes

Vuforia
==========

| The ``Vuforia`` library detects targets (images, 3D models, QR-code-like markers) in AR experiments in an easy way.
| A license is required to deploy the app, but a developer one can be used.

The configuration is available under ``MixedRealityPlayspace -> ARCamera -> Open Vuforia Configuration``.

.. figure:: img/51_vuforia.*
   :width: 85%

One can select:

* The trade-off between quality and speed
* How many targets are tracked simultaneously
* The objects database
* The Android ARCore usage (better AR tracking)
* The camera used for tests in the player
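The "on detection, a sound is played" behaviour used by the targets above can be sketched against Vuforia's observer API. This is a hedged reconstruction assuming Vuforia Engine 10+ (``ObserverBehaviour`` / ``OnTargetStatusChanged``); the class name is hypothetical and the repo's actual handler may be wired differently (e.g. through the ``DefaultObserverEventHandler`` events in the Inspector):

.. code-block:: csharp

   // Hypothetical sketch: play a sound when a Vuforia target is found.
   // Assumes Vuforia Engine 10+; attach next to the target's
   // ObserverBehaviour together with an AudioSource.
   using UnityEngine;
   using Vuforia;

   [RequireComponent(typeof(AudioSource))]
   public class PlaySoundOnDetection : MonoBehaviour
   {
       private ObserverBehaviour observer;
       private AudioSource audioSource;

       private void Awake()
       {
           observer = GetComponent<ObserverBehaviour>();
           audioSource = GetComponent<AudioSource>();
           observer.OnTargetStatusChanged += OnStatusChanged;
       }

       private void OnDestroy()
       {
           observer.OnTargetStatusChanged -= OnStatusChanged;
       }

       private void OnStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
       {
           // TRACKED means the target is currently seen by the camera.
           if (status.Status == Status.TRACKED && !audioSource.isPlaying)
           {
               audioSource.Play();
           }
       }
   }

Subscribing in ``Awake`` and unsubscribing in ``OnDestroy`` avoids callbacks on a destroyed object when the scene is unloaded.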