Friday, 1 January 2016

Google Driverless Car

Google’s driverless car comes equipped with eight different types of sensor.
The most noticeable is the rotating roof-top lidar, a laser scanner that uses an array of 32 or 64 lasers to measure the distance to surrounding objects, building up a 3D map at a range of 200 m and letting the car "see" hazards.
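The principle behind lidar ranging is simple time-of-flight: the sensor times how long a laser pulse takes to bounce back, then halves the round trip. A minimal sketch (the timing value below is hypothetical, not from a real sensor):

```python
# Sketch: time-of-flight distance estimate, the principle lidar relies on.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to a target given a laser pulse's round-trip time.

    The pulse travels out and back, so we halve the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 1.33 microseconds indicates a target
# near the 200 m limit quoted above.
print(distance_from_echo(1.334e-6))
```

Repeating this measurement tens of thousands of times per second across the rotating laser array is what yields the 3D point cloud.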
The car also sports another set of “eyes”: a standard camera that points through the windscreen. It too looks for nearby hazards, such as pedestrians, cyclists and other motorists, and it also reads road signs and detects traffic lights.
Speaking of other motorists, bumper-mounted radar, which is already used in intelligent cruise control, keeps track of other vehicles in front of and behind the car.
Externally, the car has a rear-mounted aerial that receives geo-location information from GPS satellites and an ultrasonic sensor on one of the rear wheels that monitors the car’s movements.
Internally, the car has altimeters, gyroscopes and a tachometer (a rev-counter) to give even finer measurements on the car’s position, all of which combine to give it the highly accurate data needed to operate safely.

How Google’s driverless cars work

No single sensor is responsible for making Google's self-driving car work. GPS data, for example, is not accurate enough to keep the car on the road, let alone the correct lane. Instead, the driverless cars use data from all eight of their sensors, interpreted by Google's software, to keep you safe and get you from A to B.
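One common way to combine readings from sensors of differing accuracy is inverse-variance weighting, where more precise sensors count for more. This is a simplified stand-in for the Kalman-style filtering such systems typically use, not Google's actual software; the readings below are hypothetical:

```python
# Sketch of inverse-variance sensor fusion: each reading is a
# (value, variance) pair, and more precise sensors dominate the result.

def fuse(estimates):
    """Combine (value, variance) pairs into a single fused estimate."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Hypothetical readings of the car's position along a lane (metres):
# GPS is coarse (variance 4.0), lidar odometry is precise (variance 0.25).
position, variance = fuse([(12.0, 4.0), (10.0, 0.25)])
print(position, variance)
```

The fused position lands much closer to the precise lidar reading than to the coarse GPS one, which is exactly why GPS alone cannot keep the car in its lane but a fused estimate can.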
The data that Google's software receives is used to accurately identify other road users, their behaviour patterns, and what commonly used highway signals mean.
For example, the self-driving Google car can successfully identify a bike and understand that if the cyclist extends an arm, the person intends to make a manoeuvre. The driverless car then knows to slow down and give the bike enough space to operate safely.
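The cyclist behaviour above can be thought of as a perception-to-action rule. The helper below is purely illustrative (the function name, threshold, and action strings are invented for this sketch and are not Google's API):

```python
# Illustrative rule, not Google's software: map a detected cyclist
# hand signal to a driving response.

def respond_to_cyclist(arm_extended: bool, following_gap_m: float) -> str:
    """Decide how to treat a cyclist ahead (hypothetical helper).

    arm_extended     -- perception system has detected a hand signal
    following_gap_m  -- current gap to the cyclist, in metres
    """
    if arm_extended:
        # A signalled manoeuvre: slow down and give extra room.
        return "slow down and widen gap"
    if following_gap_m < 3.0:  # assumed minimum safe gap for this sketch
        return "widen gap"
    return "maintain speed"

print(respond_to_cyclist(True, 5.0))   # slow down and widen gap
```

Real systems replace hand-written rules like this with learned behaviour models, but the flow is the same: classify the road user, interpret their signal, choose a safe action.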
Are self-driving cars safe?
