m-apps:vidAR BETA

1,000 - 5,000 downloads

By High-level Algorithms Limited

This is the initial BETA-test release of m-apps:vidAR, an extended version of the locAR real-time team location-tracking application (see m-apps.com/loc for the locAR User Guide and other documentation).

vidAR adds optical surveillance and image-tracking capabilities to the team data-sharing and location-tracking features of locAR.

When vidAR is first started, its functionality is essentially the same as the Map view of locAR. You can join a team and share your location, motion, texts and images with your teammates in just the same way.

To try vidAR's image-surveillance features, select Cam view mode (Menu-Cam), place your phone somewhere so that the camera covers an area of interest, and wait a short while (usually around 30 seconds) for the image to 'stabilise'.

Note that it is strongly recommended that you enable speech output (Menu-Data-User-Talk) so that vidAR can announce its operation verbally (as you may not be able to see the screen easily). On first use, vidAR may ask you to download and install a standard speech library (this may already have been done if, for example, you have previously enabled speech output for locAR).

This release is built for Android v1.6 or later to support a wide range of phones. That API level does not support multiple cameras, so vidAR always uses the 'default' camera (typically a rear-facing camera). Upcoming vidAR releases will be built for later API levels, which allow switching between multiple (e.g. front/rear) cameras.

vidAR uses advanced 'scene-learning' algorithms to determine which areas of the observed scene are stable and suitable for the generation of optical tracks. This is much more than a simple 'frame comparison' process and is designed to minimise false alarms whilst remaining very sensitive to the presence of actual foreground objects against the 'learned' background field of view.
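The actual 'scene-learning' algorithm is not published, but the general idea can be sketched with one common approach: a per-pixel running-average background model whose variance indicates which areas have become stable. Everything below is illustrative only — the class, method names and thresholds are assumptions, not vidAR's implementation:

```java
// Illustrative sketch only: a per-pixel running-average background model.
// Learns a stable background intensity per pixel and flags pixels that
// deviate strongly from a stable background as foreground.
public class BackgroundModel {
    private final double[] mean;      // learned background intensity per pixel
    private final double[] variance;  // per-pixel variance (stability measure)
    private final double alpha;       // learning rate

    public BackgroundModel(int pixels, double alpha) {
        this.mean = new double[pixels];
        this.variance = new double[pixels];
        this.alpha = alpha;
        java.util.Arrays.fill(variance, 1000.0); // start in the 'unstable' state
    }

    /** Feed one greyscale frame (0-255 per pixel) to update the model. */
    public void learn(int[] frame) {
        for (int i = 0; i < frame.length; i++) {
            double d = frame[i] - mean[i];
            mean[i] += alpha * d;
            variance[i] = (1 - alpha) * variance[i] + alpha * d * d;
        }
    }

    /** A pixel is 'stable' once its learned variance has settled below a threshold. */
    public boolean isStable(int i, double threshold) {
        return variance[i] < threshold;
    }

    /** Foreground = pixel that deviates strongly from a stable background. */
    public boolean isForeground(int[] frame, int i) {
        return isStable(i, 50.0)
            && Math.abs(frame[i] - mean[i]) > 3 * Math.sqrt(variance[i] + 1e-6);
    }
}
```

Letting the camera 'see' an unchanging scene for a while drives the per-pixel variance down, which is one plausible reading of the ~30-second 'stabilise' period described above.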

When objects are 'seen' in front of stable background areas, they are used to generate optical tracks. These tracks are then checked against various selectable 'zone grid' patterns, and the matching zones are announced verbally. You can change the current zone grid via Menu-Vid-Zone-Grid in Cam view.
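The zone-grid geometry and announcement format are not documented here, so as a hedged sketch, one simple realisation is to map a track's centroid onto a cell of an N×M grid and announce that cell's label. The class, grid layout and label scheme below are assumptions for illustration:

```java
// Illustrative sketch: mapping an optical track's centroid onto a
// selectable zone grid so the occupied zone can be announced verbally.
// The grid geometry and zone naming are assumptions, not vidAR's actual ones.
public class ZoneGrid {
    private final int rows, cols;

    public ZoneGrid(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
    }

    /** Map a centroid (x, y) within a frame of the given size to a zone label. */
    public String zoneFor(int x, int y, int frameWidth, int frameHeight) {
        int col = Math.min(cols - 1, x * cols / frameWidth);
        int row = Math.min(rows - 1, y * rows / frameHeight);
        return "zone " + (row * cols + col + 1); // e.g. "zone 5" is the centre of a 3x3 grid
    }
}
```

A label like this could then be handed to the speech-output feature mentioned earlier, which is why enabling speech is recommended for unattended use.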

This means that, for example, you can use your phone as an optical intruder-detection system while it is charging overnight! Just remember to enable speech output first ...

PLEASE NOTE that this is very much an early BETA TEST version of vidAR. There are still several 'rough edges' in the app and, in particular, the memory-intensive image-processing features can sometimes cause force closes (FCs), particularly when switching screen orientation. However, even in this early form vidAR can already perform a very useful surveillance role if applied carefully. If you do get FCs, please just restart vidAR ...

This is all just a taster of the many more advanced features to be supported in upcoming releases of vidAR. More detailed instructions and other documentation will also be made available soon ...
