Visual nav to replace GNSS?

[Photo: Dr Michael Milford]

Ditching satellites and opting for camera technology inspired by small mammals may be the future of navigation systems.

Dr Michael Milford, of Queensland University of Technology's Science and Engineering Faculty in Australia, claims that his research into more reliable positioning - using camera technology and mathematical algorithms - could make navigation a far cheaper and simpler task.

He explains: 'At the moment you need 3 satellites in order to get a decent GPS signal, and even then it can take a minute or more to get a lock on your location. There are some places geographically where you just can't get satellite signals - and even in big cities we have issues with signals being scrambled because of tall buildings, or losing them altogether in tunnels.'

Hence what is claimed to be a world-first approach to visual navigation: Sequence Simultaneous Localisation and Mapping (SeqSLAM), which uses local best-match and sequence-recognition components to lock in locations.

Dr Milford continues: 'SeqSLAM uses the assumption that you are already in a specific location and tests that assumption over and over again.'

'For example, if I am in a kitchen in an office block, the algorithm makes the assumption I'm in the office block, looks around and identifies signs that match a kitchen. Then if I stepped out into the corridor it would test to see if the corridor matches the corridor in the existing data of the office block layout.'

'If you keep moving around and repeat the sequence for long enough you are able to uniquely identify where in the world you are using those images and simple mathematical algorithms.'
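To make the sequence idea concrete, here is a minimal Python sketch of sequence-based place recognition in the same spirit: every recent camera frame is compared with every frame from a stored reference traverse, and the run of consecutive reference frames with the lowest total difference identifies the most likely current location. It assumes roughly constant speed and small greyscale thumbnails, and all names are illustrative rather than taken from Dr Milford's published implementation.

```python
import numpy as np

def difference_matrix(query_frames, reference_frames):
    """Mean absolute pixel difference between every query and reference frame.

    Both arguments are arrays of shape (n_frames, height, width) holding
    small greyscale thumbnails.
    """
    q = query_frames.reshape(len(query_frames), -1).astype(float)
    r = reference_frames.reshape(len(reference_frames), -1).astype(float)
    # D[i, j] = average |query frame i - reference frame j|
    return np.abs(q[:, None, :] - r[None, :, :]).mean(axis=2)

def best_sequence_match(diff, seq_len=10):
    """Return the reference index whose neighbourhood best matches the last
    `seq_len` query frames, assuming roughly one reference frame per query
    frame (i.e. similar speed on both traverses).
    """
    n_query, n_ref = diff.shape
    best_score, best_ref = float("inf"), None
    for start in range(n_ref - seq_len + 1):
        # Sum differences along a diagonal: the newest queries against
        # consecutive reference frames beginning at `start`.
        score = sum(diff[n_query - seq_len + k, start + k] for k in range(seq_len))
        if score < best_score:
            best_score, best_ref = score, start + seq_len - 1
    return best_ref, best_score

# Hypothetical usage: `map_frames` come from an earlier traverse of the
# route, `recent_frames` from the live camera.
# ref_idx, score = best_sequence_match(difference_matrix(recent_frames, map_frames))
```

In a real system the reference frames would come from an earlier pass along the route, and the search would be repeated as each new frame arrives, so a single poor match is outweighed by the rest of the sequence.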

It seems that this 'revolution' in vision-based navigation came about when Google photographed almost every street in the world for its Street View project. The challenge was making streets recognisable in a variety of conditions - and differentiating between streets that are visually similar.

The research - which uses low-resolution cameras - was inspired by Dr Milford's background in the navigational patterns of small mammals such as rats. He has studied how these animals manage incredible feats of navigation despite their poor eyesight.
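For illustration, a preprocessing step in this spirit might reduce each frame to a tiny thumbnail and normalise local patches so that matching is driven by scene structure rather than overall brightness. The sketch below is an assumed example of such a step, not the exact procedure from the SeqSLAM paper, and the thumbnail and patch sizes are illustrative.

```python
import numpy as np

def to_thumbnail(image, size=(32, 64)):
    """Block-average a 2-D greyscale image down to a tiny thumbnail.

    `size` is (rows, cols); the values here are illustrative only.
    """
    h, w = image.shape
    rows, cols = size
    cropped = image[: h - h % rows, : w - w % cols].astype(float)
    bh, bw = cropped.shape[0] // rows, cropped.shape[1] // cols
    return cropped.reshape(rows, bh, cols, bw).mean(axis=(1, 3))

def patch_normalise(thumb, patch=8):
    """Normalise each patch to zero mean and unit variance, reducing the
    effect of lighting changes between traverses.
    """
    out = np.zeros_like(thumb, dtype=float)
    rows, cols = thumb.shape
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            block = thumb[r:r + patch, c:c + patch]
            std = block.std()
            out[r:r + patch, c:c + patch] = (block - block.mean()) / (std if std > 0 else 1.0)
    return out

# A frame prepared with to_thumbnail() and patch_normalise() could feed the
# difference_matrix() sketch above.
```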

He is asking whether a very simple set of algorithms - requiring no expensive cameras, satellites or big computers - can achieve performance similar to GNSS.

Dr Milford will present his SeqSLAM paper at the International Conference on Robotics and Automation in America later this year.

The research has been funded for three years by a $375,000 Australian Research Council fellowship.

Further details are available from Queensland University of Technology.

