AARemu (https://github.com/donaldmunro/AARemu) is a software tool enabling simulation of Augmented Reality
by allowing an AR developer to record a 360 degree view of a
location using the device's camera and rotation sensors. The ARCamera
class, which provides an impostor or mock of the Android Camera class,
can then be used to preview the recorded scene instead of the live
camera preview. The ARCamera
preview callback is analogous to the standard Camera preview
callback except that the preview bytes provided in the callback
are extracted from a file created by the recorder application
based on the current bearing returned by the orientation
sensor(s). These preview bytes are passed to the development code
via the same preview callback as provided by the standard Camera
classes and can thus be processed by Computer Vision algorithms
before being displayed by the client application. The frames are
stored as individual frames in RGBA, RGB or RGB565 format
rather than as encoded video, so the preview can be accessed in both
clockwise and anti-clockwise directions and precise seeks to bearing
locations are possible instead of seeks to the nearest video keyframe.
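Because each frame is stored at a fixed bearing increment, a bearing seek reduces to simple index arithmetic rather than a keyframe search. A minimal sketch of the mapping (the class, method and parameter names here are illustrative, not AARemu's actual API):

```java
final class BearingSeek
{
   // Illustrative helper: map a compass bearing to the index of the
   // stored frame, given the recording increment in degrees.
   static int frameIndex(float bearing, float incrementDegrees)
   {
      // Normalise the bearing into [0, 360).
      float normalised = ((bearing % 360f) + 360f) % 360f;
      int frameCount = Math.round(360f / incrementDegrees);
      // Round to the nearest recorded bearing; wrap 360 back to 0.
      return Math.round(normalised / incrementDegrees) % frameCount;
   }
}
```

A seek in either rotational direction is then just an index change, which is why clockwise and anti-clockwise previews cost the same.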
The tool is aimed at developers of outdoor mobile AR applications,
as it allows the developer to record one or more 360 degree
panoramas of a given location and then debug and test the AR
application in the comfort of an office or home without having to
make extensive changes to the application code.
This app is the AARemu recorder. It functions by displaying the camera output in full screen mode with an interface drawer on the left border of the display which can be dragged out. To start recording, drag the drawer out and click the recording button. At the start of recording the user is asked to provide a name for the recording files, a recording method, file format, resolution, recording increment and which orientation sensor implementation to use.
The file format can currently be one of RGBA, RGB, RGB565, NV21 or YV12.
While it results in larger files, RGBA is preferred, as GPU texture units
work best with 4-byte aligned textures and most OpenGL implementations
convert to RGBA internally anyway.
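The alignment trade-off can be seen in what it takes to expand a 2-byte RGB565 pixel into a GPU-friendly 4-byte RGBA pixel. A hedged sketch of the per-pixel unpacking (illustrative only, not AARemu's actual conversion code):

```java
final class Rgb565ToRgba
{
   // Expand one RGB565 pixel into 4 RGBA bytes at the given offset.
   static void convertPixel(short rgb565, byte[] out, int offset)
   {
      int r5 = (rgb565 >> 11) & 0x1F;
      int g6 = (rgb565 >> 5)  & 0x3F;
      int b5 = rgb565 & 0x1F;
      // Scale the 5/6 bit channels up to 8 bits, replicating high bits.
      out[offset]     = (byte) ((r5 << 3) | (r5 >> 2));
      out[offset + 1] = (byte) ((g6 << 2) | (g6 >> 4));
      out[offset + 2] = (byte) ((b5 << 3) | (b5 >> 2));
      out[offset + 3] = (byte) 0xFF; // opaque alpha
   }
}
```

RGBA frames skip this per-pixel work entirely, at the cost of roughly double the file size of RGB565.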
The resolution can be selected in a spinner which lists all of the resolutions
supported by the device. The recording increment specifies the bearing increment
at which frames are saved. The rotation sensor setting specifies which sensor
fusion method to use for calculating the device orientation and bearing.
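The resolution, format and increment together determine the size of a recording. A rough estimate, under the assumption of uncompressed frames stored at every increment (names are illustrative):

```java
final class RecordingSize
{
   // Estimated bytes for one full 360 degree recording, assuming
   // uncompressed frames saved at every bearing increment.
   static long estimateBytes(int width, int height,
                             int bytesPerPixel, float incrementDegrees)
   {
      long frameCount = Math.round(360f / incrementDegrees);
      return frameCount * (long) width * height * bytesPerPixel;
   }
}
```

For example, a 640x480 RGBA recording at a 1 degree increment works out to roughly 442 MB, which is why the smaller formats can still be attractive despite the conversion cost.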
The recording methods are currently Retry and Traverse until Complete. The Retry method works as follows:
Once recording starts, the interface drawer displays the current bearing and the target bearing. At the start of the recording the target is set to 355 in order to start at 0 approaching in a clockwise direction. The camera output surface displays an overlaid arrow indicating the direction of movement, which is red if correcting and green if recording. Once the user reaches 355, the target is set to 0, the arrow becomes green and recording commences. During recording, if a frame is missed then the arrow color and direction change to red until the user corrects.
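The Retry flow above amounts to a small state decision over the target bearing. A hedged sketch of the arrow logic (the constants and names are illustrative, not the recorder's actual code):

```java
final class RetryGuide
{
   enum Arrow { RED, GREEN }

   // Before recording starts the user is steered to 355 degrees so
   // that recording can begin at 0 approaching clockwise.
   static final float START_TARGET = 355f;

   // Illustrative decision: green means the user is on track for the
   // next expected bearing, red means they must correct.
   static Arrow arrowColor(boolean recording, float bearing,
                           float nextExpectedBearing, float tolerance)
   {
      if (! recording)
         return Arrow.RED; // still lining up on the start target
      float diff = Math.abs(bearing - nextExpectedBearing);
      float wrapped = Math.min(diff, 360f - diff); // shortest angular distance
      return (wrapped <= tolerance) ? Arrow.GREEN : Arrow.RED;
   }
}
```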
The Traverse until Complete recording method starts recording from the current location. An overlaid arrow indicates the direction of movement while recording. Missed bearings do not cause the user to be prompted to move back; instead they are picked up in subsequent traversals, i.e. more than one 360 degree traversal may be required. On subsequent traversals the overlaid arrow will be blue for bearings which have already been processed, but will change to green before encountering a bearing that was missed in the previous traversal.
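The traversal bookkeeping can be sketched with a bit set over the recorded bearings, which makes the "more than one traversal" check trivial (illustrative only, not the recorder's actual data structure):

```java
import java.util.BitSet;

final class TraverseProgress
{
   private final BitSet recorded;
   private final int frameCount;
   private final float increment;

   TraverseProgress(float incrementDegrees)
   {
      increment = incrementDegrees;
      frameCount = Math.round(360f / incrementDegrees);
      recorded = new BitSet(frameCount);
   }

   private int indexOf(float bearing)
   {
      float normalised = ((bearing % 360f) + 360f) % 360f;
      return Math.round(normalised / increment) % frameCount;
   }

   // Mark the frame at this bearing as captured.
   void record(float bearing) { recorded.set(indexOf(bearing)); }

   // Already captured -> blue arrow; missed in a previous pass -> green.
   boolean isRecorded(float bearing) { return recorded.get(indexOf(bearing)); }

   // Another traversal is needed while any bearing is still missing.
   boolean isComplete() { return recorded.cardinality() == frameCount; }
}
```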
For both methods, keeping the device at a constant vertical angle and rotating slowly and smoothly is important for accurate recording. For the traversal method, also try to keep the movement continually in a clockwise direction with no reversals.