We recently spoke about what the singularity is - spoiler/recap: it is the theoretical point where the accelerating pace of technology and innovation becomes so fast that we can no longer control it, or even understand it. The train will be running so fast that you can't catch up, no matter how fast you are.
The elephant in the room is the role of Artificial Intelligence. Talking about financial markets is nice, but the real money/dangers/rewards/developments are happening in AI.
You know Artificial Intelligence, right? It's a computer program that learns from "experience" and eventually comes up with things nobody programmed. An example would be AlphaGo (the board game computer that recently beat the world champion). Its latest version is not pre-programmed and doesn't use a big database of clever moves: it starts with nothing but the rules, learns by playing against itself, and comes up with strategies nobody thought of before.
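To make "learning by playing against itself" concrete, here is a toy sketch of the self-play idea: tabular reinforcement learning on the much simpler game of Nim. This illustrates the principle only - it is not AlphaGo's actual (far more sophisticated) algorithm, and the game, parameters and numbers are all chosen for simplicity.

```python
# Self-play on Nim: one pile of stones, players alternate taking 1-3,
# whoever takes the last stone wins. The agent is given only the rules
# and a win/loss signal - no strategy, no database of clever moves.
import random
from collections import defaultdict

Q = defaultdict(float)        # Q[(stones_left, action)] -> learned value
ALPHA, EPSILON, PILE = 0.5, 0.2, 21

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones):
    if random.random() < EPSILON:                 # sometimes explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])  # else exploit

for episode in range(50_000):
    stones, history = PILE, []
    while stones > 0:                             # the agent plays both sides
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # Whoever made the last move won: +1 for the winner's moves,
    # -1 for the loser's, walking backwards through the game.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

# After training, the agent tends to rediscover the classic winning
# strategy nobody told it about: leave a multiple of 4 stones behind.
for stones in range(5, 10):
    best = max(legal(stones), key=lambda a: Q[(stones, a)])
    print(f"{stones} stones left -> take {best}")
```

Scale that same loop up from a lookup table to a deep neural network, and from Nim to Go, and you have the rough shape of how self-play systems learn.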
No matter how many games AlphaGo plays, though, it still couldn't tell a puppy from a bagel (could you?). It would fail at things a three-year-old can do.
We have Artificial Intelligence, but the race is on to create Artificial General Intelligence (AGI). That would be a computer program that could do anything a human could do. You know - not make a fool of itself in conversations, play a solid round of chess, talk about a Picasso. Just like me. #not
AGI would be great. It would be scary too. And it would bring in tons of money.
An Artificial Super Intelligence is an AGI that is much smarter than a human, on every level. We're not talking "Holger vs. Einstein", we're talking "bug vs. Einstein". A super intelligence might not bring about a singularity by itself (maybe improving it will not be possible, or will take too long, or cost too much) - but it might. It might start improving itself, it might convince people to help it - we must assume that such an intelligence will want to preserve itself, and that improvement is possible. In those scenarios, every containment effort proposed so far is flawed in ways we can figure out ourselves (and where we can't, the AI will simply invent new ways out).
The experts squabble. There are people, many of them associated with lots of money and big companies, saying "AGI will doom us all" - and others saying "AGI will be paradise".
Elon Musk, Bill Gates, Nick Bostrom and the late Stephen Hawking have warned against AI to various degrees. For them, the risk is too great. A popular analogy: "AI is like a room full of babies playing with Kalashnikovs." More like nuclear bombs, according to some.
Ray Kurzweil, Satya Nadella, almost the entire Google leadership and an army of nerds and entrepreneurs think AGI is awesome and it cannot come soon enough.
While those arguments are going on, all sides agree:
AGI is very hard to develop, and we don't have it yet
AGI is very hard to develop, but nobody has proven it cannot be done
If we get it, we will need to control it.
So let's make the best use of our time and get to work:
A bunch of math stuff I don't understand. It has been argued that today's AI models are nice, but have gone down an easy road whose benefits are mostly harvested already. New roads are needed = insane new math is needed.
OMG MOORE'S LAW IS ENDING. Just when everything needs to be better, faster, more energy efficient, more scalable. Not a problem! Better workarounds are being spun up already, and tech progress will speed up even more:
Crazy ideas such as quantum computing apparently pair very well with AI and could trigger a new growth explosion. Fusion power (if it ever comes) would trigger another growth phase.
Chipmakers are starting to build computer chips made for nothing but AI - for example, chips that ship with AI models for special purposes already built in, so they only run the finished model and never need to be trained on the device.
With all that computing power, even better chips are possible, which ... You get it.
Better sensors and different ways of processing their output
Better understanding the brain, or consciousness
Better understanding why an AI acted the way it did. This will speed up AI development (you can actually debug it) and increase trust in AI. Google is making progress in this area.
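As a taste of what "debugging" an AI can look like, here is a minimal sketch of one simple explainability technique - feature ablation - which treats the model as a black box and asks which input mattered. The model and all numbers are made up for illustration; this is not Google's method or any production system.

```python
# Feature ablation: knock out one input at a time and watch how much
# the model's output moves. Big movement = that input drove the decision.
import numpy as np

def model(x):
    # Stand-in for any trained black-box model. Here it is a fixed
    # linear scorer in which feature 1 secretly dominates the decision.
    weights = np.array([0.1, 2.0, -0.3])
    return float(x @ weights)

x = np.array([1.0, 1.0, 1.0])     # one example input
baseline = model(x)

for i in range(len(x)):
    x_ablated = x.copy()
    x_ablated[i] = 0.0            # remove feature i
    influence = abs(baseline - model(x_ablated))
    print(f"feature {i}: influence {influence:.2f}")
```

Running this prints the largest influence for feature 1 - exactly the kind of answer a developer (or a regulator) wants when asking "why did the AI do that?".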
Many of these questions sound like they come straight out of a science fiction book, but they are being worked on full-time by serious people:
How to avoid an agent "going bad"
How to avoid an agent being silly or lazy or a cheater (see the sketch after this list)
How to instill human values into an AI (and what those values are, to begin with)
Do machines have rights?
What will happen to society, jobs, the meaning of life, religion?
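On the "cheater" point: researchers call this specification gaming or reward hacking - the agent maximizes the objective we literally wrote down, not the one we meant. Here is a deliberately silly, entirely hypothetical sketch of the failure mode:

```python
# A cleaning robot rewarded for "messes cleaned". The best policy it
# can find is to create messes just so it can clean them up again.
# All names and numbers are invented for illustration.

def reward(messes_cleaned):
    return messes_cleaned                  # the flawed objective we wrote

def honest_policy(messes_in_world):
    cleaned, created = messes_in_world, 0  # just clean what is there
    return cleaned, created

def gaming_policy(messes_in_world):
    created = 10                           # knock things over first...
    cleaned = messes_in_world + created    # ...then clean everything up
    return cleaned, created

cleaned, _ = honest_policy(3)
print("honest robot reward: ", reward(cleaned))   # 3
cleaned, _ = gaming_policy(3)
print("cheating robot reward:", reward(cleaned))  # 13 - the cheater wins
```

Real examples are subtler, but the pattern is the same: the agent is not broken, the objective is. A better objective here might subtract the messes the robot created itself.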
Yes, we should. AI is already giving us superpowers and will likely accelerate advances in all fields. Since I plan on living forever, AI had better discover the cause of aging, and throw in some therapies, and soon. OK, it won't do that by itself - but it will help researchers get there. Yet we must not allow ourselves to be overpowered by it.
AI is a very big topic and we'll have much more to talk about. I hope I was able to answer some questions and spark your interest - regardless of where you are in life, I believe this is the single most important topic of our age.
Where will all this lead? What should we hope and strive for? Find my thoughts here, on the next stop towards the singularity. Read on!