Computers with No Conscience

Seemingly omniscient information made accessible by computers gives us a false sense of security. It renders our view of the world dangerously one-dimensional. With nearly the entirety of recorded human thought only a click away, it is easy to believe that access equates to possession of knowledge itself. Exemplifying this bias is the modern belief that those who inhabited the past were more ignorant than those alive today because they lacked accessible information. However true this may be, it does not imply that we are more enlightened just because encyclopedias are at our fingertips. Moreover, such access does not equate to good judgment. Throughout life, we are often faced with questions which our experience and available information may fail to answer or fully explain. These scenarios are where human faculties in a digital age should remain necessary and never be outsourced. They are where we use our conscience to discern, emotionally and reasonably, amidst uncertainty. Computers can never, and will never, possess such a conscience. Computers may know, compute, and speculate to the nth degree. They may decide resolutely based on data and opportunity costs. Yet they will never make choices stained with doubt, displeasure, hope, or sacrifice. Such metaphysical pressure is a key element of thinking. Our fallen natures have shaped philosophical methods, political theories, and world theologies alike. Thus, computers will never think in the highest sense, for thinking is more than mere computational and rational competency.

Let’s denote the conscious choices we make throughout life as ‘yellow’. They are yellow because we remember them as leading to actions which do not seem obviously permissible (green) or unconditionally forbidden (red). They are choices we make based on the rules and advice inherited from culture, religion, family, ethics, and identity. Each of these broad terms contains numerous philosophies and methods to assist and comfort us. Seeking solace in this inheritance stems from the recognition that the problem confronting us is not easily solvable. We then decide to the best of our ability, however doubtful, and hopefully choose the good. In other words, we utilize our conscience. This is often difficult. Many thinkers throughout history have claimed to circumvent such difficult choices, usually through hyper-rationality, in order to make life easier. Alan Turing remarked upon this idea in his famous paper Computing Machinery and Intelligence:

It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? ... To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible.

This quote prefaces Turing’s response to the objection that computers are incapable of thinking because they only adhere to laws of behavior (algorithms) rather than also recognizing rules of conduct informed by subjective norms. Turing countered that if computers operated merely by adhering to behavioral rules, they would be entirely predictable. He thus challenged anyone to predict a computer’s response to an untried value.

This does not seem to be a valid challenge. Take Turing’s initial example, the traffic light. When faced with both red and green, a computer may calculate the risk associated with moving and decide that it is momentarily safer to remain stationary. Mathematical representations of risk already exist for questions of legal liability, insurance, and gambling. However, discernment between a moral right and wrong may not figure into this calculus. Rather, it is the objective question of safeguarding one’s material life and property that is relevant to mathematical risk. What about more complex situations, such as deciding when to withhold the truth? What about the civil question of legalizing the killing of another human being in self-defense or by state execution? When we make these conscious decisions, we often must value our life together with the lives of others: life in its physical, emotional, and spiritual capacities. Such questions may not, and should not, be modeled or priced. Do we expect computers to place such prohibitions on themselves? In modern cases where they come close to such questions, they are barred from answering fully. It is important that tech companies take these questions seriously, for the way we define thinking matters greatly.

Thinking is not merely an operational or procedural act. At its most profound heights, it is often a spiritual and moral endeavor. 


Gustavo Alcantar is a rising senior in Columbia College majoring in Economics with a concentration in History. As someone who loves to cook, he is always ready to share and make his family recipes or help others prepare theirs. You can usually find him waiting in line Sunday mornings for an Absolute Bagel and coffee, or most weekday afternoons in Avery Library. Gustavo is an active member of Columbia Catholic Ministry, Model UN, and the Columbia Undergraduate Law Review. 
