Manufacturers
According to Wikipedia, "Quasi-autonomous demonstration systems date back to the 1920s and the 1930s" (para. 3). One example listed is the Eureka Prometheus Project. Prototypes have also
been built by Mercedes-Benz, General Motors, Continental Automotive Systems, Autoliv Inc., Bosch, Nissan, Toyota, Audi, VisLab (BRAiVE), Oxford University, and Google (Google Driverless Car).
Self-parking cars are one example of this driverless technology that many people already use today. Other examples have appeared in select films: according to Wikipedia, these include Christine, Batman, Total Recall, Demolition Man, Timecop, Minority Report, and I, Robot. Nevertheless,
the technology of driverless cars operates as it does because it utilizes an onboard computer to process data and deliver commands, as well as a number of different technologies to assist in this process.
Crash-Avoidance Systems
Among the most significant features implemented by car manufacturers are the cameras, sensors, and lasers that make up crash-avoidance systems.
Most of these technologies are available on the market today, even though some dismiss them as science fiction.
Rear-view cameras already assist with parallel parking and help drivers avoid collisions with oncoming traffic, and such systems have advanced to the point that many cars today can all but park themselves.
Multimedia Entertainment
Positioning Devices
Robotics. Furthermore, as described in the overview, current robotics research has made these cars accurate on the road, with reaction times approaching perfection in terms of driving
habits, and presents a sort of “brain” that controls the functions of the vehicle. The Free University of Berlin presented its findings on the utility of these hardware assistance systems in the paper
Semi-Autonomous Car Control Using Brain Computer Interfaces. The research provided by the Artificial Intelligence Group at the school tested scenarios in which their BCI or “brain computer interface”
proved successful on semi-autonomous cars. Essentially, a driver could think about what actions they want the car to take, and a neuroheadset would sense the brain patterns to control the interface
for steering and throttle/brake responses. After many benchmark experiments, the results show that “Brain-computer interfaces pose a great opportunity to interact with highly intelligent systems such
as autonomous vehicles” (Göhring et al.). These systems must improve autonomous functions such as following paths and avoiding obstacles before they are widely accepted by society; however, the leaps and
bounds being made by researchers render these technologies an extremely plausible option for those who are poor drivers, too old to drive safely, or simply looking for a way to
free up time spent driving.
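As a purely illustrative sketch (the function name and intent labels below are assumptions, not the actual interface from the Göhring et al. paper), the idea of mapping decoded brain patterns to steering and throttle/brake responses might look like:

```python
# Hypothetical sketch of a BCI-to-vehicle command mapping.
# A real neuroheadset pipeline would classify EEG patterns; here the
# classifier's output is stubbed as a plain string label.

def intent_to_command(intent: str) -> dict:
    """Map a decoded driver intent to steering and throttle/brake values.

    steering: -1.0 (full left) .. 1.0 (full right)
    throttle: -1.0 (full brake) .. 1.0 (full throttle)
    """
    commands = {
        "left":       {"steering": -0.5, "throttle": 0.0},
        "right":      {"steering":  0.5, "throttle": 0.0},
        "accelerate": {"steering":  0.0, "throttle": 0.5},
        "brake":      {"steering":  0.0, "throttle": -1.0},
    }
    # Unrecognized patterns fall back to a neutral, safe command.
    return commands.get(intent, {"steering": 0.0, "throttle": 0.0})
```

The fallback to a neutral command reflects the safety concern raised above: a brain-pattern classifier will sometimes produce noise, and the vehicle must do nothing dangerous in that case.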
According to Consumer Reports, “Research shows that 90 percent of crashes are caused by human error. That’s why the National Highway Traffic Safety Administration (NHTSA)
and every major automaker are increasingly focusing on systems that allow the vehicle to become a partner in the drive by monitoring a car’s surroundings, warning the driver of
danger, and even taking control of the car in certain situations” (www.consumerreports.org).
Those who are not fond of parallel
parking, or who are wary of judging distances and tend to back into other vehicles, can all but forget their woes.
If a majority of accidents never occur,
thousands of insurance claims will all but disappear, slashing insurance costs in the process. As accidents are eliminated, lawmakers and government officials will
have little reason not to increase highway speed limits. This will greatly decrease travel time, and speeding will become nearly non-existent. This way, the police force will be
able to focus on other more important matters than traffic violations or accidents, as autonomous cars will make the roads flow like a consistent stream of traffic.
Long distance travel will become more efficient, easier to bear, and congestion on the roads will become a thing of the past.
Google's car creates new time for leisure and in-car entertainment.
Whether it is gaming, business, task making, et cetera, innovative start-ups will find a huge area for growth once these cars hit the market. Displays that are fused into seats or
dashboards of cars are becoming more commonplace as time goes on. Tesla broke records last year with its inclusion of a massive 17-inch screen in the interior of the car. In addition,
Apple has recently taken the initiative to create CarPlay, its user interface for syncing Apple devices with car displays. Expect many other companies to follow suit.
Radars and sensors. Driverless cars also employ technologies that allow the vehicle to position itself relative to everything in its surroundings.
This includes "lidar," radars and position estimators, and video cameras, as illustrated in the image to the left. All of these technologies
allow driverless cars to remotely position themselves. According to Michael A. Lefsky, Warren B. Cohen, Geoffrey G. Parker, and David J. Harding (2002),
Laser altimetry, or lidar
(light detection and ranging), is an alternative remote sensing technology that promises to both increase the accuracy of biophysical measurements and extend spatial analysis into
the third (z) dimension (p. 19)... The basic measurement made by a lidar device is the distance between the sensor and a target surface, obtained by determining the elapsed time between the
emission of a short-duration laser pulse and the arrival of the reflection of that pulse (the return signal) at the sensor's receiver. Multiplying this time interval by the speed of light
results in a measurement of the round-trip distance traveled, and dividing that figure by two yields the distance between the sensor and the target (p. 19).
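The round-trip calculation in the quoted passage can be sketched numerically; the function name and the example pulse timing below are illustrative assumptions:

```python
# Sketch of the lidar time-of-flight calculation described above.
# c = speed of light (m/s); elapsed = pulse round-trip time (s).
C = 299_792_458  # speed of light in a vacuum, m/s

def lidar_distance(elapsed_seconds: float) -> float:
    """Sensor-to-target distance: (time-of-flight * c) / 2."""
    round_trip = elapsed_seconds * C   # total distance the pulse traveled
    return round_trip / 2              # one-way distance to the target

# Example: a return signal detected 200 nanoseconds after emission
d = lidar_distance(200e-9)
# d is roughly 30 m
```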
This explains how this technology is
able to identify objects and terrain around it. Nevertheless,
this is not the only form of radar/sensor that some driverless cars use. According to the BBC (2011), Google's car employs
two types of sensors: The first [sensor] identifies a 'landing strip' when the vehicle stops. This then triggers the second set
which receives data informing the machine where it is positioned and where it should go. 'The landing strip allows a human driving the vehicle to
know acceptable parking places for the vehicle,' the patent filing says. 'Additionally, the landing strip may indicate to the vehicle that it is
parked in a region where it may transition into autonomous mode' (para. 5-7).
GPS receivers. Furthermore, BBC (2011) explains that cars identify their placement on
these landing strips by "initiating a GPS receiver that approximates the car's location, as well as identifies trees, foliage, and other landmarks" (para. 9). The car also locates itself by scanning and reading QR codes that may be associated with landing strips (BBC, 2011, para. 10).
These GPS receivers dictate the
necessary routes, distances, and points that allow the cars to go from point A to point B. This also allows the cars to register their location. GPS receivers pick up
signals from U.S. satellites: "The Global Positioning System (GPS) is a U.S.-owned utility that provides users with positioning, navigation, and timing (PNT) services" ("Global Positioning," 2014, para. 1), and it "consists of a constellation of satellites transmitting radio signals to users" ("Space Segment," 2014, para. 1). Visit
GPS.gov for more information regarding GPS. Lastly, video cameras are used to provide
real-time views of the positions of pedestrians, vehicles, and buildings.
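The positioning idea behind GPS described above can be sketched as trilateration: given known anchor positions and measured distances to each, solve for the receiver's location. This toy 2-D version (real GPS solves a 3-D problem with a clock-bias term, and all names and numbers here are illustrative) intersects three distance circles:

```python
# Hedged sketch: GPS-style positioning as 2-D trilateration.
# Subtracting pairs of circle equations (x-xi)^2 + (y-yi)^2 = ri^2
# yields two linear equations, solved here by Cramer's rule.
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three anchor points and distances to them."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    # Linear equation from circle 1 minus circle 2: A*x + B*y = C
    A = 2 * (bx - ax)
    B = 2 * (by - ay)
    C = r1**2 - r2**2 - ax**2 + bx**2 - ay**2 + by**2
    # Linear equation from circle 2 minus circle 3: D*x + E*y = F
    D = 2 * (cx - bx)
    E = 2 * (cy - by)
    F = r2**2 - r3**2 - bx**2 + cx**2 - by**2 + cy**2
    det = A * E - D * B
    x = (C * E - F * B) / det
    y = (A * F - D * C) / det
    return x, y

# Receiver actually at (3, 4); distances measured from three known anchors.
x, y = trilaterate((0, 0), 5.0,
                   (10, 0), math.hypot(7, 4),
                   (0, 10), math.hypot(3, 6))
# (x, y) recovers (3.0, 4.0)
```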