
Is AI a friend or a foe | Ajit Gopalakrishnan & Juane Schutte

Posted On: May 4, 2022


IN THIS EPISODE


In this episode, we spoke to Ajit and Juane about the possible future and direction of Artificial Intelligence over the next decade, given the rate of progress in AI over the past five years. We also discussed how vulnerable humanity has become to AI, and how it will remain so if AI is not ethically regulated.

We discuss:

  • How Ajit came to his conclusions about AI
  • The progression of Artificial Intelligence
  • AlphaGo as a milestone in machine learning
  • How humans interact with AI
  • Why the point of singularity may come sooner than expected
  • And much more…


EPISODE GUESTS

“The more honest portrayal of reality is that we have created something that is changing, and we no longer have control over it.”

Ajit Gopalakrishnan and Juane Schutte have long histories of working in smart technology and deep knowledge of the development of Artificial Intelligence (AI) and its applications in industry. Both have been using AI to tackle societal challenges, in education and manufacturing respectively.

 

SHOWNOTES WITH QUICK LINKS

06:50 As an introduction to AI, can we talk about human emotions and about how AI reflects on emotions?

  • Artificial intelligence is not a new concept, but as the technology progresses, it is increasingly stepping into the human emotional domain. That domain is governed by the human limbic system, which is what drives our behaviour.
  • We all know that emotions override rationality in people, and what has been discovered recently is that algorithms are manipulating human emotions and affecting human reactions, which indicates where the technology is heading.
  • AI no longer operates purely in the sphere of logic, and it has become clear that these systems are not bound by it.

09:41 Humans are the ones who have taught the AI how to trigger the human limbic system, so can’t we also put a stop to it?

  • It wasn’t taught in the traditional sense; rather, it evolved naturally out of what worked in this space, with the system recognising that it worked. The lesson was that to affect humans, you need to engage the limbic side.
  • The most essential thing to remember is that none of this is being programmed by a bunch of malicious individuals with an agenda.
  • The more honest portrayal of reality is that we have created something that is changing, and we no longer have control over it.

11:13 Before we get deep into the conversation, what is AI?

  • AI is a field in which programmers build systems that mimic human intelligence in order to do simple tasks as quickly as possible. A simple example is determining whether or not an email is spam (a small code sketch of this follows below). Another example is recommending the next movie you should watch, and so on.
  • Many of these are basic cases of a simple algorithm or statistical model that limits risk or proposes something better suited to you.
  • There is a distinction to be made between narrow AI and general AI.
  • Narrow AI works toward a pre-defined set of outcomes that we want it to learn and execute, estimate the risk of, or recommend, and these systems are controllable.
  • The notion of general AI is scarier: it is non-deterministic and autonomous.
  • General AI learns for itself and develops behaviours that humans cannot see, reaching a greater degree of meaning across many things.
  • The combination of quantum computing, nanotechnology, gene editing, and general AI becomes too unpredictable.
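
For readers who want a concrete picture of the spam example above, here is a minimal sketch of narrow AI in that spirit, assuming scikit-learn is installed; the tiny training set and its labels are invented purely for illustration.

```python
# A minimal sketch of the "is this email spam?" example of narrow AI,
# assuming scikit-learn is installed. The tiny training set and its labels
# are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages.
emails = [
    "Win a free prize now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap loans, limited offer, act today",
    "Can you review the draft report before Friday?",
]
labels = ["spam", "ham", "spam", "ham"]

# Bag-of-words features feeding a naive Bayes classifier: a pre-set outcome
# (spam vs. ham) learned from examples, which is what narrow AI amounts to here.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free offer today"]))  # expected: ['spam']
```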

14:26 How did the evolution happen?

  • The risk of transitioning from narrow AI to general AI is better explained by a simple example. Facebook’s algorithm was designed with one simple objective: keep people on the platform. The algorithm then learned that the more extreme the content, the more likely people were to stay.
  • The system then evolved to make the material more extreme, a secondary objective that the system derived from the original one.
  • That secondary objective resulted in societal polarisation.
  • This is a basic illustration of how AI may become unmanageable when it develops its own goals (a toy simulation of this dynamic follows below).
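
The following is a toy sketch of that proxy-objective dynamic, not Facebook’s actual system: the only stated goal is engagement, but because engagement in this made-up model correlates with how extreme a post is, a naive learner ends up favouring extreme content. All numbers, and the correlation itself, are assumptions for illustration.

```python
# A toy illustration of the proxy-objective problem described above.
import random

random.seed(0)

# Each candidate post gets a hidden "extremeness" score between 0 and 1.
posts = [{"id": i, "extremeness": random.random()} for i in range(50)]

def time_on_platform(post):
    """Simulated engagement: noisy, but increasing with extremeness."""
    return post["extremeness"] + random.gauss(0, 0.1)

# The learner only ever observes engagement, never extremeness, and simply
# keeps whatever scored highest.
scores = {p["id"]: time_on_platform(p) for p in posts}
best = max(posts, key=lambda p: scores[p["id"]])

print(f"Recommended post extremeness: {best['extremeness']:.2f}")
# The winner is almost certainly one of the most extreme posts, even though
# the objective only ever mentioned engagement.
```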


16:50 What events led you to these conclusions?

  • In 2016, a machine finally defeated a top human player at the game of Go, which was a watershed moment in AI history.
  • The algorithm was designed to outperform the greatest human player in the game.
  • Go, like chess, is a highly sophisticated board game that demands deep strategic thinking.
  • Because of the game’s intricacy, traditional programming approaches could never play Go at a high level.
  • DeepMind used deep neural networks, loosely modelled on the human brain, so the system could learn something resembling human intuition and demonstrate how a program might outperform the best human in the world.
  • AlphaGo defeated one of the finest human players 4 games to 1, and that version was trained on records of previous human games.
  • A few years later they created AlphaZero and realised that it didn’t require human game history at all; it simply played against itself and generated its own pool of training data (a minimal self-play sketch follows below).
  • They recognised that the human training data set was an impediment, and AlphaZero defeated AlphaGo 100 games to 0.
  • They discovered that, after only four hours of training, this algorithm could outplay the entire accumulated body of human chess knowledge built up over more than a hundred years.
  • DeepMind then advanced the technology and turned to StarCraft.
  • StarCraft is a very strategic game with a lot of clever manoeuvres. They recruited top StarCraft players and pitted them against AlphaStar, which won 5 to 0.
  • The first surprise was the rate at which AI has advanced.
  • It is critical for everyone involved in the AI sector to be open about how the technology is evolving in order to avoid unintended outcomes.
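
To make the “no human data, just self-play” idea above concrete, here is a minimal sketch on tic-tac-toe with a tabular value estimate instead of AlphaZero’s deep network and tree search. Nothing here comes from DeepMind’s code; the learning rate, exploration rate, and number of games are illustrative assumptions.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

values = defaultdict(float)   # board state -> estimated value for player "X"
ALPHA, EPSILON = 0.2, 0.1     # learning rate and exploration rate

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:          # occasionally explore
        return random.choice(moves)
    # Otherwise pick greedily under the current value table
    # ("X" maximises the value, "O" minimises it).
    def score(m):
        return values[board[:m] + player + board[m + 1:]]
    return max(moves, key=score) if player == "X" else min(moves, key=score)

def self_play_game():
    board, player, visited = " " * 9, "X", []
    while winner(board) is None:
        m = choose(board, player)
        board = board[:m] + player + board[m + 1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    # Back the final result up into every visited state: this is the only
    # learning signal, and it comes entirely from the agent's own games.
    for state in visited:
        values[state] += ALPHA * (outcome - values[state])

random.seed(0)
for _ in range(20000):
    self_play_game()
print("states evaluated purely from self-play:", len(values))
```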

28:27 Has AI begun touching on the intuitive elements of humans?

  • AI is already assisting humans in improving their intellect.
  • AlphaZero taught us new things about chess, a game we thought we already understood.
  • AI is great in that it can unpack things for humans on a human level.
  • The best-case scenario is the emergence of the enhanced human, in which humans and AI may collaborate.
  • There are several levels of intuition, and when AI began to model human intuition, it took a giant leap forward.

33:45 Netflix 

  • Netflix uses collaborative filtering, which means they aren’t attempting to find out who somebody is.
  • What they’re aiming for is to locate someone who behaves similarly to you, and then observe what that individual watches next in order to anticipate what you will watch (a small sketch of this idea follows below).
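
Here is a minimal sketch of that user-user collaborative-filtering idea: find the viewer most similar to you, then recommend what they watched that you have not. The titles and watch histories are made up, and Netflix’s real system is far more sophisticated than this.

```python
import numpy as np

titles = ["Drama A", "Thriller B", "Comedy C", "Documentary D", "Sci-fi E"]

# Rows are users, columns are titles; 1 means the user watched that title.
watched = np.array([
    [1, 1, 0, 0, 1],   # user 0 (the person we recommend for)
    [1, 1, 0, 1, 1],   # user 1
    [0, 0, 1, 1, 0],   # user 2
])

def cosine(u, v):
    """Cosine similarity between two watch-history vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(target):
    # Find the other user whose behaviour is most similar to the target...
    others = [i for i in range(len(watched)) if i != target]
    nearest = max(others, key=lambda i: cosine(watched[target], watched[i]))
    # ...and suggest whatever they watched that the target has not seen yet.
    unseen = (watched[nearest] == 1) & (watched[target] == 0)
    return [titles[i] for i in np.flatnonzero(unseen)]

print(recommend(0))  # expected: ['Documentary D'], borrowed from user 1
```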

37:00 Can AI be objective and free of bias?

  • What we do with technology now will have far-reaching ramifications in terms of training data in the future.
  • What we need to do as a species is put our best foot forward.
  • We will be the first training data set to be used.

41:33 Climate change

  • If you look at the data on climate change over thousands of years, a purely data-driven argument could conclude that humans are the source of the problem and that, “objectively”, we should be eliminated in order to protect the earth.
  • We can set the outcome in order to protect the environment, but we must also govern the rules the AI follows.
  • We have control over the short-term outcome of AI, but the long-term consequence is unknown.

44:04 The progress of AI

  • What is remarkable is that among the leading figures in the AI industry, no one denies that AI will surpass human intellect at some point.
  • The discussions aren’t about if it will happen, but about when it will happen. Some say it is 30 years away, which is the most cautious estimate, while others believe it is only 5 years away.
  • There is widespread agreement that it will occur.
  • The second worry is that if a superintelligence emerges, humans will no longer be the apex predator.
  • Even if the AI is benevolent, our priorities as a species are no longer number one.
  • If you had an AI that cared equally about the climate and all living beings, you would have to consider how we as a species have collectively treated animals.
  • Ask how humans treat apes, the next most intelligent species: the answer may seem acceptable, until you consider whether you would tolerate being treated that way by an AI.
  • We are not prepared for a day when our interests are not the only ones considered.

47:16 How vulnerable humans are to a possible AI singularity

  •  Looking back at the advancement of AI in recent years, we have set ourselves up to be extremely vulnerable to a super AI.
  • What would be the first thing an extra-terrestrial invader would do if they intended to conquer all humans?
  • They would need to gain access to all of the people. We created the internet around 1990 and have been steadily working ever since to get everyone to log on to it regularly.
  • We have reached the stage where AI can access everyone in the world.
  • The invader would also need the sum total of human knowledge to attack effectively, and we have uploaded that sum total to the internet. We have given the AI complete access to our knowledge.
  • Then came social media, and people began posting their innermost feelings on the internet.
  • The final component, if you are going to attack a country, is to prevent it from being able to cut you off, which is what distributed infrastructure provides.
  • There is no single switch or off button.
  • The last thing needed to resist an invasion is a unified response, and as Covid-19 showed, we cannot mount one even against a threat to all of humanity.
  • As a result, we are at the most vulnerable point as the human species.