Visual nav to replace GNSS?
Ditching satellites and opting for camera technology inspired by small mammals may be the future of navigation systems.
Dr Michael Milford, from Queensland University of Technology's Science and Engineering Faculty in Australia, claims that his research into more reliable positioning, based on camera technology and mathematical algorithms, could make navigation a far cheaper and simpler task.
He explains: 'At the moment you need three satellites in order to get a decent GPS signal, and even then it can take a minute or more to get a lock on your location. There are some places geographically where you just can't get satellite signals, and even in big cities we have issues with signals being scrambled because of tall buildings, or losing them altogether in tunnels.'
Hence what is claimed as a world-first approach to visual navigation algorithms, Sequence Simultaneous Localisation and Mapping (SeqSLAM), which uses local best-match and sequence-recognition components to lock in locations.
Dr Milford continues: 'SeqSLAM uses the assumption that you are already in a specific location and tests that assumption over and over again.'
'For example, if I am in a kitchen in an office block, the algorithm makes the assumption I'm in the office block, looks around and identifies signs that match a kitchen. Then if I stepped out into the corridor it would test to see if the corridor matches the corridor in the existing data of the office block layout.'
'If you keep moving around and repeat the sequence for long enough you are able to uniquely identify where in the world you are using those images and simple mathematical algorithms.'
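The idea described above, that a sequence of observations identifies a location even when any single observation is ambiguous, can be sketched in a few lines of Python. This is an illustrative simplification, not Dr Milford's implementation: the image-difference measure, equal frame rates, and fixed sequence length are all assumptions made here for clarity.

```python
import numpy as np

def image_difference(a, b):
    # Mean absolute pixel difference between two low-resolution
    # greyscale images (a simplified stand-in for SeqSLAM's
    # image comparison; hypothetical choice).
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def seq_match(query_seq, map_images):
    """Return the index in the stored map whose following frames
    best match the query sequence, plus the match score.

    Assumes the query was captured at the same frame rate as the
    map, so no search over travel speeds is performed."""
    n, m = len(query_seq), len(map_images)
    best_idx, best_score = None, float("inf")
    for start in range(m - n + 1):
        # Sum the frame-by-frame differences over the whole
        # sequence: single frames may match many places, but a
        # long run of consecutive matches is far more distinctive.
        score = sum(image_difference(q, map_images[start + i])
                    for i, q in enumerate(query_seq))
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score
```

For example, matching a slightly noisy five-frame query against a fifty-frame synthetic map recovers the position the query was taken from, because the summed difference over the sequence is lowest there.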
This 'revolution' in visual navigation became feasible when Google photographed almost every street in the world for its Street View project. The challenge, however, was making streets recognisable in a variety of different conditions, and differentiating between streets that are visually similar.
The research, which uses low-resolution cameras, was inspired by Dr Milford's background in the navigational patterns of small mammals such as rats. He has studied how these animals manage incredible feats of navigation despite their quite poor eyesight.
He is asking whether a very simple set of algorithms, requiring neither expensive cameras, satellites nor big computers, can achieve performance similar to GNSS.
Dr Milford will present his SeqSLAM paper at the International Conference on Robotics and Automation in the United States later this year.
The research has been funded for three years by a $375,000 Australian Research Council fellowship.
Details from Queensland University of Technology below.