The most noticeable is the rotating roof-top Lidar – a laser-based sensor, not a camera, that uses an array of 32 or 64 lasers to measure the distance to surrounding objects and build up a 3D map out to a range of around 200m, letting the car "see" hazards.
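As a rough illustration of how those laser measurements become a 3D map, each return can be converted from the beam's range and angles into Cartesian coordinates. This is a minimal sketch, not Google's actual pipeline; the function name and angle conventions are assumptions for illustration:

```python
import math

def lidar_point(range_m, azimuth_deg, elevation_deg):
    """Convert one laser return (distance plus beam angles) into a 3D point.

    A rotating multi-laser Lidar reports, for each beam, the distance to the
    first object it hits, along with the beam's horizontal (azimuth) and
    vertical (elevation) angles. Converting these spherical readings into
    x/y/z coordinates is how individual returns form a 3D point cloud.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return 50 m dead ahead at road level lands on the x-axis:
print(lidar_point(50.0, 0.0, 0.0))  # → (50.0, 0.0, 0.0)
```

Repeating this for every laser on every rotation yields millions of points per second, which the software stitches into a live model of the surroundings.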
The car also sports another set of "eyes": a standard camera that points through the windscreen. It looks for nearby hazards – such as pedestrians, cyclists and other motorists – and also reads road signs and detects the state of traffic lights.
Speaking of other motorists, bumper-mounted radar, which is already used in intelligent cruise control, keeps track of other vehicles in front of and behind the car.
Externally, the car has a rear-mounted aerial that receives geo-location information from GPS satellites and an ultrasonic sensor on one of the rear wheels that monitors the car’s movements.
Internally, the car has altimeters, gyroscopes and a tachometer (a rev-counter) to give even finer measurements of the car's position, all of which combine to give it the highly accurate data needed to operate safely.
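Combining readings from several sensors like this is usually done by weighting each source according to how much it can be trusted. The sketch below shows one simple, generic way to do that – inverse-variance weighting of a GPS fix against a dead-reckoned estimate from the internal sensors. The function and the numbers are illustrative assumptions, not Google's actual algorithm:

```python
def fuse(gps_pos, gps_var, dr_pos, dr_var):
    """Fuse two independent estimates of the same position.

    Each estimate carries a variance describing its uncertainty; the
    fused value weights each source by the inverse of its variance, so
    the more trustworthy reading dominates. The fused variance is always
    smaller than either input, reflecting the gain from combining them.
    """
    w_gps = 1.0 / gps_var
    w_dr = 1.0 / dr_var
    fused = (w_gps * gps_pos + w_dr * dr_pos) / (w_gps + w_dr)
    fused_var = 1.0 / (w_gps + w_dr)
    return fused, fused_var

# GPS says 102.0 m (noisy, variance 4.0); wheel/gyro dead reckoning
# says 100.0 m (tighter, variance 1.0). The fused estimate sits much
# closer to the more precise source:
pos, var = fuse(102.0, 4.0, 100.0, 1.0)
print(pos, var)  # → 100.4 0.8
```

Real systems extend this idea with a Kalman filter that also tracks velocity and heading over time, but the weighting principle is the same.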
How Google’s driverless cars work
The data that Google's software receives is used to accurately identify other road users, predict their behaviour patterns, and interpret what commonly used highway signals mean.
For example, the self-driving Google car can successfully identify a bike and understand that if the cyclist extends an arm, the person intends to make a manoeuvre. The driverless car then knows to slow down and give the bike enough space to operate safely.
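The cyclist example can be sketched as a simple decision rule. This is a hypothetical, heavily simplified illustration – the real software uses learned models over many signals, not a two-condition check – and every name and threshold here is an assumption:

```python
def respond_to_cyclist(arm_extended, gap_m, min_gap_m=1.5):
    """Hypothetical rule for reacting to a detected cyclist.

    An extended arm signals an intended manoeuvre, so the car slows and
    yields; it does the same if the passing gap is below a safe minimum
    (min_gap_m, an assumed illustrative threshold).
    """
    if arm_extended or gap_m < min_gap_m:
        return "slow_down_and_yield"
    return "maintain_speed"

print(respond_to_cyclist(arm_extended=True, gap_m=3.0))   # → slow_down_and_yield
print(respond_to_cyclist(arm_extended=False, gap_m=3.0))  # → maintain_speed
```

The point of the sketch is the mapping the article describes: a recognised behavioural cue (the raised arm) changes the car's driving decision, not just its perception.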