Cognificance

About the significance of machine cognition

Artificial Intelligence (AI)

What is Artificial Intelligence?

Wikipedia defines cognition as encompassing processes such as knowledge, attention, memory and working memory, judgment and evaluation, reasoning and "computation", problem solving and decision making, and the comprehension and production of language. AI can take care of some of these requirements (such as evaluation, problem solving and production of language), but it should not be thought of as being on the same level as cognition.

This is where I am at odds with the Wikipedia definition of AI, as that article attributes too many capabilities to AI that, in my humble opinion, don't apply. If we look at the Wikipedia definition of intelligence and work from there, we get closer to what I feel AI is (and isn't): "Intelligence is … the ability to perceive information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context."

In an animal, intelligence is what allows it to recognize a predator in the bushes and to initiate a reaction (running away). The gruntwork of avoiding becoming breakfast isn't taken care of by the intelligence mechanism, but rather by older parts of the brain, hormones in the bloodstream and muscles. Similarly, artificial intelligence is able to recognize patterns in data and initiate reactions to that data, but the gruntwork is done by software - anything from simple scripts to macro collections to full-blown Robotic Process Automation.

An important point about AI is that for pattern recognition to work, a so-called learning set needs to be generated. There is an important distinction to be made, however: just because a piece of software can do pattern recognition does not necessarily qualify it as AI! Take, for example, products developed in the early 2000s to which you would pass sample forms (usually scanned paper) so that an automated differentiation could be made between form layouts. Given a sample set of about 20 forms per class, these algorithms are able to differentiate between up to 400 different forms (document classes) at lightning speed and with exceptionally high recognition quality. The technology behind this capability has nothing to do with AI, but rather with statistical analysis.

The difference between an AI trained on forms and one of these statistical analyzers shows up when an unknown document class arrives: the statistical algorithm will spit the document out and tell you that it doesn't fit any of the "learned" classes. An AI will tell you that it doesn't fit as well, but it is also able to suggest similarities and take a "guess" at which document class it may be related to. That's an oversimplification, of course, but the point is: AI recognizes unknown patterns and tries to define them. Statistical algorithms don't.
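To make that distinction concrete, here is a minimal sketch in Python. It is not how any real form-classification product works; the classes, feature vectors and threshold are all invented for illustration. The "statistical" classifier rejects anything below a strict match threshold, while the "AI-style" function always offers its best guess with a similarity score.

```python
# Toy word-count profiles standing in for learned document classes.
LEARNED_CLASSES = {
    "invoice":  [9, 1, 0, 2],
    "contract": [1, 8, 3, 0],
}

def similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify_statistical(doc, threshold=0.9):
    """Statistical analyzer: accept only a near-perfect match, else reject."""
    best = max(LEARNED_CLASSES, key=lambda c: similarity(doc, LEARNED_CLASSES[c]))
    return best if similarity(doc, LEARNED_CLASSES[best]) >= threshold else None

def classify_ai_style(doc):
    """AI-style behavior: always offer the closest class as a 'guess'."""
    best = max(LEARNED_CLASSES, key=lambda c: similarity(doc, LEARNED_CLASSES[c]))
    return best, similarity(doc, LEARNED_CLASSES[best])
```

Given an unknown profile such as `[4, 4, 4, 1]`, the statistical classifier returns `None` (rejection), while the AI-style function still names `"contract"` as the closest match along with its similarity score.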

It is unfortunate that many people, including IT journalists (who should know better), confuse the workings of a statistical analysis algorithm with true AI, adding to the watering-down of AI as a term.

Uses of Artificial Intelligence

Artificial intelligence has improved by leaps and bounds in the last 10 years. Several events received global media coverage:

  1. in 2011, IBM's Watson beat Ken Jennings, the record-holding Jeopardy! champion
  2. in 2016, Google DeepMind's AlphaGo beat the Korean Go master Lee Sedol at Go, a game said to have more possible board positions than there are atoms in the observable universe

It is important to note that IBM's Deep Blue win against chess grandmaster Garry Kasparov in 1997 was not an AI win, but rather a victory for brute-force search.

The use of AI isn't just for beating humans at strategy games, however. In 2016, Google trained its AI on thousands of hours of "talking heads" (newsreaders) from the BBC, for which closed-captioned content was available. Together with the closed-captioning information, the video material provided a perfect training set for an AI capable of analyzing video material. Google's AI is now able to lip-read better than any human expert can. In case you're wondering why conversations between coaches and strategy consultants at the Euro 2016 soccer championship were always held with a hand covering the mouth, now you know: without the mouth covered, anyone with an AI trained the way Google's was could capture the conversation on the field and relay its important points to the opposing team!

The caveat of AI, incidentally, is that an AI can only find patterns in data properly if a lot of data is presented (I'll get into the significance of that in a future article). In this way, an AI learns similarly to humans (or any other intelligent animal): by using massive amounts of input to fine-tune the response system.

Consequently, AI is great at analyzing anything that can present massive data in a reproducible format. Historic census data is a classic example - AI is able to detect patterns such as the migration of social groups into or out of cities over the years. The stock market, of course, is a welcome target - not only for AI researchers but also for anyone wanting to make a buck.

The key, however, is that you can't just dump data you ingest from somewhere into an AI and expect results. The data has to be structured somehow, to keep the apples with the apples and the oranges with the oranges. A great way to do this - especially if the data needs to be collected from the internet or from internal systems (databases, host systems, etc.) - is to use RPA, of course.
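A tiny sketch of what "keeping the apples with the apples" might look like in practice: records collected by an RPA bot rarely arrive in a uniform shape, so a normalization step coerces them into one schema before any analysis. The field names and sample values here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Record:
    city: str
    year: int
    population: int

def normalize(raw):
    """Coerce one loosely typed scraped record into the structured schema."""
    return Record(
        city=str(raw["city"]).strip().title(),
        year=int(raw["year"]),
        population=int(str(raw["population"]).replace(",", "")),
    )

# The same city, scraped from two sources with inconsistent formatting.
scraped = [
    {"city": " berlin ", "year": "1990", "population": "3,433,695"},
    {"city": "BERLIN",   "year": 2000,   "population": 3382169},
]
dataset = [normalize(r) for r in scraped]
```

After normalization, both records agree on `"Berlin"` and carry proper integer years and populations, so a downstream model sees one consistent shape instead of two ad-hoc ones.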

AI systems are also used for Natural Language Processing (NLP), something most of us know in the form of Siri, Cortana, Amazon's Alexa or the Google Assistant. NLP has more uses than automating actions on a smartphone, of course, although I believe the main drive behind NLP development these days is consumer use. NLP is used for medical dictation (in radiology or pathology, for example), where writing down notes would impede the medical practitioner's concentration on the subject matter.

NLP is also used to transcribe conversations with legal bearing, such as call center activity to sell something over the phone. Call centers record all incoming and outgoing calls and are now starting to use this wealth of historical data to feed AI systems for analysis. To analyze the data, a specialized NLP AI first needs to convert the spoken word to text, ideally with additional metadata to preserve the inflection, speed and emotion of the speaker(s).
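A hypothetical sketch of what such an enriched transcript might look like once the speech-to-text step is done: each utterance carries not just the text but the metadata mentioned above (speaker, timing, speed, emotion), which downstream analysis can then query. The structure, field names and the `flag_for_review` rule are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str            # e.g. "agent" or "customer"
    start_s: float          # offset into the recording, in seconds
    text: str
    words_per_minute: float # proxy for speaking speed
    emotion: str            # label from a hypothetical emotion model

def flag_for_review(call):
    """Flag calls where a customer sounds angry and speaks unusually fast."""
    return any(
        u.speaker == "customer"
        and u.emotion == "angry"
        and u.words_per_minute > 160
        for u in call
    )
```

With metadata like this preserved, an analysis system can do more than keyword search - it can surface escalating calls, measure talk-time balance, or correlate emotion with sales outcomes.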

No discussion of the use of AI would be complete without touching on autonomous transportation, or driverless cars. The real problem of autonomous transport in the future isn't the technology - it will be mature and affordable within just a few years. The problem will be the mix of autonomous and human-driven vehicles. Humans are horrible drivers (try the German Autobahn and you'll learn what that means). Humans are controlled by hormones released into the bloodstream by parts of the brain not controllable by our conscious mind, which - coupled with the constant information overload inherent in city driving - leads to completely wrong actions and reactions.

An autonomous vehicle control system will need to be able to take such reactions into account and work with them. This requires true machine cognition, something that I believe we are quite a ways away from.