Principles of Social Informatics
1. Uses of ICTs lead to multiple and sometimes paradoxical effects.
2. Uses of ICTs shape thought and action in ways that benefit some groups more than others.
3. The differential effects of the design, implementation, and uses of ICTs often have moral and ethical consequences.
4. The design, implementation and uses of ICTs have reciprocal relationships with the larger social context.
5. The phenomenon of interest will vary by the level of analysis.
The third of the five principles of Social Informatics focuses on the moral and ethical consequences that a technology may raise. Because the developers of Google's car intend to increase safety on the roads, one could argue that Google's driverless car
is ethical and moral in nature. However, there are a variety of ethical and moral concerns that still need to be addressed regarding this technology.
To begin, Google's car relies on a great deal of expensive hardware and software. Because of this, it is imperative that the engineers responsible for it keep the software up to date
and the hardware running properly. The pressure to develop and release a technology on schedule may tempt individuals to disregard security, software, and hardware problems in order
to meet a timetable. Operators of Google's car would depend upon this hardware and software, and by failing to identify risks or to keep the system up to date and functional, Google would be endangering the lives
of everyone on the road. Google therefore has an ethical responsibility to ensure that all software and hardware are current and functioning well, and that they have been developed and tested over a significant period of time.
The implementation of this technology could also raise ethical and moral issues of accountability if an accident does occur. Because a user can disengage the car's "autopilot,"
and the car can in turn take control back from a human driver, it is nearly impossible to specify who would be at fault for an accident: the driver or Google? Could we even determine
whether the driver responded the way he or she did because of a technical issue?
Furthermore, what are the ethical consequences of a system that constantly tracks an individual and his or her vehicle? Google will have to ensure that this location data is nearly impossible to access,
for its exposure could violate Americans' right to privacy, escalate social tensions, and figuratively place a bull's-eye on a potential target's back.
Lastly, Google's car encourages individuals to embrace a more laid-back, "eased" driving experience. This would allow drivers to become more relaxed and comfortable in their cars, which could be a positive thing.
However, does advocating for effortless driving feed into a commercial ploy to attract buyers, as well as an acceptable level of indolence? Essentially, this reliance on and submission to technology
replaces human judgment. This calls into question the technology's power to make drivers forget about the safety of those around them, and whether this reliance will push society toward a technological
singularity.
Who Are We?
Hello! Our names are Adrian, Dedrick, and Sorab. We are students at Rutgers University, majoring in Information Technology.
This site serves as a project for our Social Informatics course, which requires us to examine an emerging technology based on the 5 Principles of Social Informatics.
The technology that we are assessing is driverless cars, more specifically Google's Driverless Car.
We hope to present information regarding the technology's background and development, the need for the technology, the public's preconceived notions and expectations of it,
its social components, its technological components,
the possible paradoxical effects that may result from its application in society, and its moral consequences.
Research Methods
In order to properly depict Google's Driverless Car, we will provide information on
the basics of the machine drawn from scholarly journals (each
listed in the bibliography portion of the website). Furthermore, we will analyze the videos and interviews provided
by Google in order to convey first-hand experiences and results from test drives. Lastly, because the goal of driverless
cars is to increase and promote safety, we will evaluate the technology chiefly on its ability to provide a safer means of transportation
without stimulating too many negative paradoxical effects.
Discussion Questions
1. Will the implementation of Google Driverless cars cause humans to become less concerned about the welfare of others behind the wheel?
2. Does exposing an individual's location, whether intentionally or unintentionally, warrant legal action as a breach of privacy rights?
3. Could ambiguity between machine error and human error drag out legal processes if an accident occurs?
4. Does Google, as the manufacturer, bear the same legal responsibility as drivers who operate regular cars?
5. What moral and ethical expectations can society demand of Google?
6. Would it be unethical for Google not to offer a warranty that covers car parts and computer parts?
7. Can blame be placed more heavily on technical malfunction than on human error when an issue arises?
8. Does Google's car prove itself to be more concerned with improving safety or with providing the ultimate joy ride?