Today, maps are commonly used for navigation while driving and cycling. In such situations, interacting with the map by hand is impractical or even dangerous. Moreover, visually impaired individuals may not be able to see icons and signs properly. Since conversation is a natural way for people to interact, audio input can reduce the complexity of general-purpose geospatial information systems (GIS). Therefore, developing a user-oriented GIS that can interact with visually impaired individuals and drivers is important. In this paper, we first review existing navigation systems, focusing on their user interfaces (UIs). Then, a prototype of an audio-enabled, open-source, web-based GIS is presented. Users of this WebGIS can access its capabilities through audio commands alone, without using their hands. To establish a connection with the browser's speech recognition engine, the annyang library is used. An audio user interface has several advantages over conventional interfaces: it is hands-free, compatible with up-to-date web technologies, requires no additional hardware, and is consequently low-cost. In addition to the audio user interface, a graphical user interface (GUI) has also been designed, so the user can interact with both interfaces simultaneously.
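As a rough illustration of how such an audio UI can be wired to annyang, the sketch below registers a few voice commands and forwards them to map actions. The command phrases and the `map` controller are illustrative assumptions, not the paper's actual command set or implementation.

```typescript
// Minimal sketch: spoken phrases routed to map actions via annyang.
// The command set and the `map` helper are assumptions for illustration.
import annyang from 'annyang';

// Placeholder map controller; in practice this would wrap a web mapping
// library's zoom/pan calls (e.g., Leaflet or OpenLayers).
const map = {
  zoomIn: () => console.log('zooming in'),
  zoomOut: () => console.log('zooming out'),
  panTo: (place: string) => console.log(`panning to ${place}`),
};

if (annyang) {
  // Map spoken phrases to handlers; ':place' captures a spoken parameter.
  annyang.addCommands({
    'zoom in': () => map.zoomIn(),
    'zoom out': () => map.zoomOut(),
    'go to :place': (place: string) => map.panTo(place),
  });

  // Start listening through the browser's speech recognition engine.
  annyang.start();
}
```

In this pattern, annyang only handles speech-to-command mapping; the GUI and the audio UI can therefore drive the same underlying map functions, which is consistent with the paper's goal of letting users work with both interfaces at once.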