With the ascent of GPU computing, the last decade has witnessed a surge in AI innovation, leading to truly astonishing discoveries in a variety of fields. This is driven by the massive computing power of GPUs, which makes it possible to train the huge neural networks at the core of most modern AI systems. Neural networks are mathematical models that can learn a variety of tasks, given a numerical formulation of the problem and enough examples of how to solve it.
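To make the "learning from examples" idea concrete, here is a minimal sketch: a single-parameter model learning the rule y = 2x purely from example pairs, via gradient descent. Real neural networks stack millions of such parameters with non-linearities in between, but the underlying principle is the same. (The numbers and names here are illustrative, not taken from any real system.)

```python
# Training examples: a numerical formulation of the problem (inputs x)
# together with the desired answers (outputs y = 2x).
examples = [(x, 2 * x) for x in range(1, 6)]

w = 0.0  # the single learnable weight of our toy "network"

for _ in range(100):                # repeat over the training data
    for x, y in examples:
        prediction = w * x          # the model's current guess
        error = prediction - y      # how far off it is
        w -= 0.01 * error * x       # nudge w to reduce the error

print(round(w, 2))  # converges to 2.0: the rule was learned from data
```

The same loop (predict, measure the error, nudge the parameters) is what GPUs run billions of times over when training a modern neural network.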
These models are at the core of several breakthroughs in problems that were previously deemed impossible. A well-known example is Tesla’s autonomous driving system, which navigates through traffic to reach a given destination while the driver supervises. The system improves continually by learning from data provided by the entire Tesla fleet, which comprises nearly one million vehicles. Tesla also relies on simulation data to teach the driving system how to behave in dangerous scenarios that would be unsafe to test in the real world.
Just last week, OpenAI released Codex, an AI that converts instructions given in natural language into code. It performs tasks ranging from the trivial, such as printing “Hello world!”, to more complex ones like creating and hosting a web server. Codex also provides plugins for office suites, allowing users to perform long and tedious tasks, such as removing all trailing spaces in a document, by typing a simple line of text like “delete all trailing spaces”. The business applications for this system are endless, and it has the potential to drastically increase the productivity of people with and without a technical background.
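To give a sense of what such a system generates under the hood, the “delete all trailing spaces” instruction corresponds to just a few lines of ordinary code. The snippet below is a hypothetical illustration of that task, not actual Codex output:

```python
# Hypothetical illustration of the code behind an instruction like
# "delete all trailing spaces" -- not actual Codex output.

def delete_trailing_spaces(text: str) -> str:
    """Strip trailing whitespace from every line of a document."""
    return "\n".join(line.rstrip() for line in text.splitlines())

doc = "hello world   \nsecond line\t\nlast line"
print(delete_trailing_spaces(doc))
```

Trivial for a programmer, but the point of Codex is that a non-programmer can get the same result by typing one English sentence.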
Finally, one of the most recent and perhaps most impactful breakthroughs is the solving of the protein folding problem by DeepMind’s AI, AlphaFold. The problem consists of predicting the three-dimensional shape of a protein from the sequence of amino acids that composes it. Scientists have been trying to solve it for decades, an effort that led to the creation of the CASP competition in 1994. Participants have to predict the shapes of a given set of proteins, and the team with the most accurate predictions wins. This allows research teams to objectively benchmark their work and push forward the state of the art in protein folding. Submissions are evaluated using the Global Distance Test (GDT) metric, where a GDT of over 90% is considered a solution to the problem. As shown in the graph below, results plateaued at a GDT of around 40%, until AlphaFold entered the competition in 2018. Since then, the AI system has pushed the state of the art to a GDT of 60% and finally, in 2020, to 90%, thereby solving the protein folding problem. This breakthrough is particularly important because it will help tackle many of the world’s greatest challenges, such as developing treatments for diseases or finding enzymes that break down industrial waste.
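For the curious, the GDT score can be sketched in a few lines. The standard GDT_TS variant averages, over distance cutoffs of 1, 2, 4 and 8 ångströms, the percentage of residues whose predicted position lies within that cutoff of the true position. The version below is a simplification: real GDT first searches for an optimal superposition of the two structures, which is omitted here.

```python
# Simplified GDT_TS sketch: assumes per-residue distances (in angstroms)
# between the predicted and true structures are already computed, skipping
# the structural superposition step used in the real metric.

def gdt_ts(residue_distances, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    n = len(residue_distances)
    # For each cutoff, the fraction of residues predicted within it.
    fractions = [sum(d <= c for d in residue_distances) / n for c in cutoffs]
    # GDT_TS is the average of those fractions, expressed as a percentage.
    return 100.0 * sum(fractions) / len(cutoffs)

# Four residues at 0.5, 1.5, 3.0 and 10.0 angstroms from the truth:
# fractions are 0.25, 0.50, 0.75, 0.75 -> GDT_TS = 56.25
print(gdt_ts([0.5, 1.5, 3.0, 10.0]))
```

A perfect prediction scores 100; scores above 90 are in the range of experimental error, which is why 90% is treated as the bar for “solved”.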
This article featured amazing applications that showcase the potential of modern AI systems. It is, however, worth mentioning that the major limitation of these models is explainability: when neural networks are at the core of an AI system, it is very difficult to understand why the system made a certain decision. This uncertainty means that human supervision is almost always required when these models are used for critical tasks. For example, this is one of the main reasons why you must always keep your hands on the wheel when Tesla’s Autopilot is engaged.
This article presented only a few of the latest advancements made in the field of AI.
If you want to learn more and stay up to date with this topic, I highly recommend the YouTube channel Two Minute Papers. The channel regularly posts about the latest advancements in AI and covers the topic in an entertaining and easy-to-understand way.
If you have any questions or would like to understand how Net Reply can help you with this or similar solutions, get in touch!