The strong and the weak: the problem with artificial intelligence


By Andrew Collins
Tuesday, 13 November, 2012


While popular science fiction considers the main question around artificial intelligence to be ‘how will we overthrow our eventual machine oppressors?’, many academics are not even sure true AI is possible.

Artificial intelligence (AI) is a concept that has held the attention of scientists and philosophers for decades. Some even trace the concept back to ancient myth, with many cultures describing some form of thinking machine.

While popular science fiction considers the main question around AI to be ‘how will we overthrow our eventual machine oppressors?’ (see the Terminator franchise), many academics are not even sure true AI is possible.

Some believe that, given the right design and the right programming, a machine can think and have a mind, exactly as a human does. This position was once known as (and is still referred to in academia as) ‘Strong AI’.

Others hold the more conservative belief that giving a machine a mind is impossible, and that an AI can only ever act as if it thinks and has a mind. In other words, the best we can ever do with AI is simulate real human intelligence. This position was initially known as ‘Weak AI’.

This distinction is considered irrelevant by some modern researchers, who “take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis”, at least according to Stuart Russell and Peter Norvig, in their 2003 textbook Artificial Intelligence: A Modern Approach.

And while the concept itself may seem simple from the outside - just pump more power into a computer and eventually it’ll be able to do anything, right? - the reality is far more complicated. There is a range of philosophical problems that any model of AI must address.

These include (but are definitely not limited to): how to represent knowledge, different types of learning, perception and creativity.

This is a mere taster of the problems that face cognitive scientists aiming to model an artificial intelligence. Dr Kevin Korb, a Reader at Monash University, will discuss the state of AI later this week in a lecture titled ‘A history of Artificial Intelligence: AI as a degenerating scientific research program’.

“Many thousands have worked on one aspect or another of the AI research program and it has achieved a great many things, but where is the AI?” Korb said in a teaser for his lecture.

Korb’s lecture will take place from 2 to 3 pm on Thursday, 15 November 2012, in Seminar Room 135, Building 26, at Monash University’s Clayton campus. There’s no need to RSVP - simply turn up and learn a thing or two about AI.
