Google I/O is Google's developer conference - it has taken on a notable Android focus since Google Cloud Next grew to its current size, but it is relevant even for non-Android users. I'd like to highlight three things Google said - not the flashy announcements, but the really important ones (especially if you are the competition).
AI development is currently plagued by controversies and hindrances, and if you watched Sundar's keynote carefully, you could see AI 💪 its muscles. For me, this was what the conference was really about.
Traditionally, programmers provide a ... well, programmed response to every user command (and if they didn't, the program would crash or somebody would steal your data). Deep learning, which powers today's AI, is different: it derives the best responses itself through training. By doing so, it may discover patterns the programmer could not foresee (or understand).
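To make that contrast concrete, here's a minimal sketch - a made-up spam-flagging task with invented features; deep networks just stack many more of these learned units:

```python
import numpy as np

# Traditional: the programmer hard-codes the response.
def spam_rule(message):
    return "free money" in message.lower()

# Learned: a single sigmoid unit finds its own rule from labeled examples.
# Features are made up: [number of exclamation marks, count of risky words].
X = np.array([[5., 3.], [4., 2.], [0., 0.], [1., 0.]])  # training inputs
y = np.array([1., 1., 0., 0.])                          # 1 = spam

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted spam probability
    w -= 0.1 * X.T @ (p - y) / len(y)    # gradient descent on log-loss
    b -= 0.1 * (p - y).mean()

# Nobody wrote an if-statement; the weights encode whatever separates
# the examples - including patterns the programmer didn't anticipate.
print(w, b)
```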
AI doesn't just require your data - your data is where AI creates value. Your Netflix recommendations, your commute, your food preferences - it's a fine line between privacy and usefulness.
💪: Google doesn't need your data anymore - well, they do, but they won't hoard it any longer. You can have data older than three months wiped automatically. Apparently, they can extract and learn everything relevant from it much faster.
💪: For certain on-device activity, like your AI-powered keyboard, Google will not need the primary data at all. Only abstract, higher-level concepts will be synced to give you things like Smart Compose. They call it "federated learning".
Doesn't sound like much, but Google wouldn't pull such a move if it cost them data or opportunities. For me, it's a testament to how sophisticated the learning has become. I've long been telling my parents that "nobody is really interested in reading what you write". Apparently, that's now reality, and it certainly helps in times of GDPR. Nothing projects power like "Your data? Have it, we don't need your data".
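To make "federated learning" a bit more concrete, here's a toy sketch of the federated-averaging idea - everything below is made up for illustration; the production systems add secure aggregation, device sampling and compression on top:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # the pattern "hidden" in users' data
global_w = np.zeros(2)           # the shared model on the server

def local_update(w_server, X, y, lr=0.1, steps=10):
    """Runs on the phone: train on private data, return only the delta."""
    w = w_server.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of squared error
        w -= lr * grad
    return w - w_server   # the raw (X, y) never leaves the device

for _ in range(20):                      # 20 training rounds
    deltas = []
    for _ in range(5):                   # 5 simulated phones per round
        X = rng.normal(size=(32, 2))     # each phone's private data
        y = X @ true_w + rng.normal(scale=0.1, size=32)
        deltas.append(local_update(global_w, X, y))
    global_w += np.mean(deltas, axis=0)  # server only averages the deltas

print(global_w)  # converges towards [2, -1] without centralizing any data
```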
All the training data your algorithms feed on creates something of value: an AI that can suddenly distinguish cats from dogs (and from bagels, too). How well it does that depends on the training data, and it quickly gets worrying if your training data contains mostly men when you teach your AI what a doctor looks like. You are implicitly teaching your AI that a woman with a stethoscope is less likely to be a doctor than a man with a stethoscope.
This gets worse when your AI is used for recruiting. Or for policing. Or for sentencing. Bias identification has become a huge topic in AI.
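Here's a quick, entirely synthetic demonstration of how that happens (all numbers invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Synthetic, deliberately skewed world: 90% of the doctors in the
# training set are men; gender is otherwise irrelevant to the job.
is_doctor = rng.random(n) < 0.3
is_male = np.where(is_doctor, rng.random(n) < 0.9, rng.random(n) < 0.5)
stethoscope = np.where(is_doctor, rng.random(n) < 0.95, rng.random(n) < 0.05)

X = np.column_stack([stethoscope, is_male]).astype(float)
clf = LogisticRegression().fit(X, is_doctor)

# Same stethoscope, different gender - the model has learned the skew:
probs = clf.predict_proba([[1.0, 1.0], [1.0, 0.0]])[:, 1]
print(f"man with stethoscope:   {probs[0]:.2f}")
print(f"woman with stethoscope: {probs[1]:.2f}")   # noticeably lower
```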
💪: Google is working on ways to identify bias by teaching AIs what human-identifiable concepts (skin color, gender etc.) are. The AI can then report how important such a concept was in building its model, and machine-learning specialists can take corrective action, such as expanding the training data. This alone does not resolve the problem, but Google has stepped out into the open with concrete solutions. It's very encouraging for a fairer future.
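Google's published work in this direction (the TCAV paper, "Testing with Concept Activation Vectors") roughly works like this. The sketch below is heavily simplified and uses made-up activations and a made-up model head, but it shows the core trick: learn the concept's direction in the network's activation space, then measure how sensitive the output is to it.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 8   # pretend this is the width of some hidden layer

# Step 1: collect activations for examples WITH the concept (say, "male")
# and for random counterexamples, then fit a direction separating them.
# (TCAV trains a linear classifier; the difference of means is the
# simplest stand-in for its normal vector.)
acts_concept = rng.normal(loc=0.8, size=(100, d))
acts_random = rng.normal(loc=0.0, size=(100, d))
cav = acts_concept.mean(axis=0) - acts_random.mean(axis=0)
cav /= np.linalg.norm(cav)   # the Concept Activation Vector

# Step 2: a toy stand-in for the rest of the network, mapping
# activations to a score, e.g. the logit for the class "doctor".
W1 = rng.normal(size=(d, 4))
w2 = rng.normal(size=4)
def doctor_logit(a):
    return np.maximum(a @ W1, 0.0) @ w2   # tiny ReLU head

# Step 3: nudge activations along the concept direction and count how
# often the "doctor" score goes up. ~0.5 means the concept doesn't
# matter; far from 0.5 means the model leans on it.
acts_test = rng.normal(size=(500, d))
score = (doctor_logit(acts_test + 1e-3 * cav) > doctor_logit(acts_test)).mean()
print(f"TCAV-style sensitivity score: {score:.2f}")
```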
💪: Google managed to shrink its models by a factor of 20 and make them less power-hungry, so they now run on your phone - offline, and 10 times faster. This will allow more natural interactions with the Assistant and will eventually make its way onto other devices, so everything gets smarter.
💪: This will also enable offline speech recognition for any video, and help people with speaking and hearing difficulties have better conversations. Which will improve the models, which will improve the services we get in Google Workspace. Language as a layer, self-improving!
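A 20x shrink takes re-architected models, but one standard, generally available ingredient for on-device deployment is post-training quantization. Here's a minimal sketch with TensorFlow Lite and a toy stand-in model - it illustrates the mechanism, not Google's actual recipe:

```python
import tensorflow as tf

# Toy stand-in for a real model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Post-training quantization: store weights as 8-bit ints instead of
# 32-bit floats, trading a little accuracy for a ~4x smaller file.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"{len(tflite_bytes) / 1024:.0f} KiB on disk")
# The bigger wins - like Google's factor of 20 - come from new
# architectures, pruning and distillation on top of quantization.
```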
Thank you for reading!