Blind-End: An Approach to Support Orientation and Navigation for Blind People
The claim is that blind persons can perceive objects such as doors, pillars, ditches, elevators, passages, rooms, halls, buildings, and streets using only mobile phones. The primary goal of all positioning systems is to determine the user's position as precisely as possible, whereas the main purpose of our project is to enable a blind person to locate an object and then perceive it by learning its attributes. This permits area familiarization and route planning. Object position may be one of the attributes. Once a blind person identifies an object (by being close to it), she/he can learn her/his own position from the object's attributes. Interestingly, position is not so important even for sighted persons; position is usually relative and can be derived from perceiving orientation points (e.g., interesting objects) that have already been memorized. The crucial assumption of our project is that objects can be located (identified) using IrDA connectivity. This means that, in order to be located, an object must have an infrared transceiver (a standard IrDA controller) that transmits data (to a mobile) containing the object's identifier, the azimuth of the infrared message beam, and, if necessary, the current values of some of its attributes. Given the object's identifier, the complete object description can be downloaded from a local or global repository via the Bluetooth and/or GPRS connectivity of the mobile. It is important to note that, unlike Talking Signs, the description is not in a voice format. It is expressed in terms of generic attributes and types so that it can be processed automatically. Only the result of such processing is delivered to the blind user as voice.

Key words: navigation and orientation for blind people, cognitive maps, GIS, GPS.
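To make the described pipeline concrete, the sketch below models one possible flow: a beacon message carrying an object identifier, the beam azimuth, and optional current attribute values is parsed, the identifier is resolved against a repository record, and the merged, attribute-based description is rendered as text for speech synthesis. The wire format, the `Beacon` fields, and the `REPOSITORY` contents are illustrative assumptions; the abstract does not specify the actual IrDA message layout or repository schema.

```python
from dataclasses import dataclass

# Hypothetical wire format: "id;azimuth_deg;key=value;..." -- an assumption
# for illustration only; the real IrDA message layout is not given here.

@dataclass
class Beacon:
    object_id: str
    azimuth_deg: float   # direction of the infrared message beam
    attributes: dict     # current attribute values sent with the beacon, if any

def parse_beacon(raw: str) -> Beacon:
    parts = raw.split(";")
    object_id, azimuth = parts[0], float(parts[1])
    attrs = dict(p.split("=", 1) for p in parts[2:] if p)
    return Beacon(object_id, azimuth, attrs)

# Stand-in for the local/global repository reached via Bluetooth and/or GPRS.
REPOSITORY = {
    "door-17": {"type": "door", "building": "Main Hall", "floor": "1"},
}

def describe(beacon: Beacon) -> str:
    # Merge the static repository record with the beacon's current values,
    # then render the generic attributes as text for a speech synthesizer.
    record = {**REPOSITORY.get(beacon.object_id, {}), **beacon.attributes}
    details = ", ".join(f"{k}: {v}" for k, v in sorted(record.items()))
    return f"{beacon.object_id} at azimuth {beacon.azimuth_deg:.0f} degrees ({details})"

print(describe(parse_beacon("door-17;270;state=open")))
```

Note that, as in the project itself, the description stays machine-processable (generic attributes and types) until the final step, where it is flattened into a sentence for voice output.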