In our last post, part 1, we stated that two major questions still haunt AI research.

  1. Should AI simulate human intelligence, incorporating the sciences of psychology and neurology, or is human biology irrelevant?
  2. Can AI, simulating a human mind, be developed using simple principles, such as logic and mechanical reasoning, or does it require solving a large number of completely unrelated problems?

Why do the above questions still haunt AI? Let us take some examples.

  • Similar types of questions arose in other scientific fields. For example, in the early stages of aeronautics, engineers questioned whether flying machines should incorporate bird biology. Eventually bird biology proved to be a dead end and irrelevant to aeronautics.
  • When it comes to solving problems, humans rely heavily on their experience and augment it with reasoning. In business, for example, every problem encountered has numerous possible solutions, and the solution chosen is biased by the paradigms of those involved. If the problem is increasing the production of a manufactured product, some managers may add more people to the workforce, some may work at improving efficiency, and some may do both. I have long held the belief that for every problem we face in industry, there are at least ten solutions, and eight of them, although different, yield equivalent results. Looking at the previous example, you may be tempted to believe that improving efficiency is a superior (i.e., more elegant) solution than increasing the workforce. Improving efficiency, however, costs time and money; in many cases it is more expedient to increase the workforce. My point is that humans approach a problem by drawing on accumulated life experiences, which may not relate directly to the specific problem at hand, and augmenting those experiences with reasoning. Given the way human minds work, it is only natural to ask whether intelligent machines will have to approach problem solving in a similar way, namely by solving numerous unrelated problems as a path to the specific solution required.

Scientific work in AI dates back to the 1940s, long before the AI field had an official name. Early research in the 1940s and 1950s focused on attempting to simulate the human brain by using rudimentary cybernetics (i.e., control systems). Control systems use a two-step approach to controlling their environment.

  1. An action by the system generates some change in its environment.
  2. The system senses that change (i.e., feedback), which triggers the system to change in response.

A simple example of this type of control system is a thermostat. If you set it for a specific temperature, for example 72 degrees Fahrenheit, and the temperature drops below the set point, the thermostat will turn on the furnace. If the temperature increases above the set point, the thermostat will turn off the furnace. However, during the 1940s and 1950s, the entire area of brain simulation and cybernetics was a concept ahead of its time. While elements of these fields would survive, the approach of brain simulation and cybernetics was largely abandoned as access to computers became available in the mid-1950s.
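The two-step control loop above can be sketched in a few lines of code. The sketch below is illustrative only: the class and method names are my own, not part of the original text, and a real thermostat would also include features such as a dead band to avoid rapid furnace cycling.

```python
class Thermostat:
    """A minimal model of the act-sense-respond control loop."""

    def __init__(self, set_point_f):
        self.set_point = set_point_f  # desired temperature in degrees F
        self.furnace_on = False

    def sense_and_respond(self, measured_temp_f):
        """Step 2: sense the change in the environment (feedback)
        and change the system's state in response."""
        if measured_temp_f < self.set_point:
            self.furnace_on = True    # too cold: turn the furnace on
        elif measured_temp_f > self.set_point:
            self.furnace_on = False   # too warm: turn the furnace off
        return self.furnace_on


thermostat = Thermostat(set_point_f=72)
thermostat.sense_and_respond(68)   # below the set point: furnace turns on
thermostat.sense_and_respond(75)   # above the set point: furnace turns off
```

Running the furnace (step 1) warms the room, the thermostat senses the new temperature (step 2), and the cycle repeats, which is exactly the feedback pattern the early cyberneticists hoped would scale up to brain-like behavior.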

In the next and concluding post, we will discuss the impact computers had on the development of artificial intelligence.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte