The app is aimed primarily at visually impaired users who cannot see or type on the screen. It lets users populate HTML form elements through the Google voice recognition service, either by spelling out or speaking their personal information (dummy data is recommended for testing).
The "SEND" voice command submits the complete form data, accompanied by a buzzing sound and vibration, to a web service that returns an audible claim response to the user.
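The "SEND" flow can be sketched as follows. The payload builder is a plain helper; the endpoint name `/claims` and the field names are assumptions for illustration, not the real service's API. The browser-only calls (vibration, network, speech synthesis) are shown as commented usage, since they only run inside a page.

```javascript
// Pure helper: wrap the collected form fields into the JSON body
// that the "SEND" command would post to the web service.
function buildClaimPayload(fields) {
  // fields: { fieldName: spokenValue, ... } gathered from the HTML form
  return JSON.stringify({ fields });
}

// Browser-only usage sketch (hypothetical /claims endpoint):
// async function sendForm(fields) {
//   navigator.vibrate(200);                       // haptic confirmation
//   const res = await fetch('/claims', {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: buildClaimPayload(fields),
//   });
//   const reply = await res.text();
//   // Read the claim response back to the user audibly
//   speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
// }
```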
Users can also edit the HTML form values at any point in the application entirely through predefined voice commands, without touching the screen at all. Any touch event cancels the voice recognition service and quits the application.
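A minimal sketch of how recognized speech could be mapped to those predefined commands is shown below. The grammar here ("edit &lt;field&gt;", "clear &lt;field&gt;", "send") is an assumed example, not the app's actual command set; in the real app the transcript would come from the Google voice recognition service.

```javascript
// Map a recognized transcript to a form action.
// Assumed grammar: "send", "edit <field>", "clear <field>".
function parseCommand(transcript) {
  const words = transcript.trim().toLowerCase().split(/\s+/);
  if (words[0] === 'send' && words.length === 1) {
    return { action: 'send' };
  }
  if ((words[0] === 'edit' || words[0] === 'clear') && words.length > 1) {
    // Remaining words name the form field, e.g. "edit first name"
    return { action: words[0], field: words.slice(1).join(' ') };
  }
  return { action: 'unknown' };
}
```

The dispatcher would then focus or clear the matching HTML input, keeping the whole interaction touch-free.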
None of the spoken data entered on the web page is saved in the device's permanent or temporary storage. All personal data spoken on the page is automatically destroyed once the application quits.
(In addition to the "SEND" voice command, I will be introducing a strong phone shake, roughly equivalent to a 4 g forward acceleration, that triggers the phone's motion sensors. Once the sensors register the preset force value from the shake, they activate the "SEND" command and call the web service to return the desired result.)
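The shake detection described above amounts to a threshold check on the accelerometer's magnitude. A minimal sketch, assuming the preset force value is 4 g (the exact threshold and the `devicemotion` wiring are assumptions):

```javascript
const G = 9.81;               // m/s^2 per g
const SHAKE_THRESHOLD_G = 4;  // assumed preset force value

// Return true when the acceleration magnitude reaches the 4 g threshold,
// i.e. when the shake should trigger the "SEND" command.
function isSendShake(ax, ay, az) {
  const magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
  return magnitude >= SHAKE_THRESHOLD_G * G;
}

// Browser-only usage sketch:
// window.addEventListener('devicemotion', (e) => {
//   const a = e.accelerationIncludingGravity;
//   if (isSendShake(a.x, a.y, a.z)) {
//     // same path as the spoken "SEND" command
//   }
// });
```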