In these special livestream events, I will present my argument that while IA (Intelligent Assistance) and some forms of narrow AI could be beneficial to humanity, the idea of building AGIs, i.e. 'generally intelligent digital entities' (as set forth by Sam Altman / OpenAI and others), represents an existential risk that should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups.

There will be two sessions with the same content, timed to fit your time zone:

July 18, 6pm CET for EU/Americas: LinkedIn | YouTube

July 19, 9am CET for EU / UK / APAC / AUS/NZ / India et al: LinkedIn | YouTube

In my presentations, I will explain:

  • Why I believe we need an AGI Non-Proliferation Agreement
  • What the difference is between IA/AI and AGI or ASI (superintelligence)
  • Why it matters, and
  • How we could go about it

I will present for 25 minutes and will then answer your questions and comments. I may also have some special guests on this live show, take live questions on LinkedIn, YouTube, and Twitter, and possibly host some live video call-ins!