Without attempting to diminish the latest advances in artificial intelligence, the concept is not new, and the term has become a catch-all for things that are not even remotely related to AI, or for applications whose inventors never realized could have been built differently, and in a far simpler manner.

Although the cloud, IIoT, and big data have accelerated the need for large-scale AI applications, AI is a brute-force method best reserved for problems where we lack a deterministic understanding of the underlying principles. Strikingly, one of the big tech companies recently touted an AI application for a problem that could arguably have been solved far more easily by anyone who had mastered Fourier decomposition. Instead, they relied on massively training a neural network, which produced a subpar result.
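To make that point concrete, here is a minimal sketch in Python with NumPy, on a made-up noisy signal (the actual application is not public here), showing how Fourier decomposition recovers structure deterministically, with no training data and no model:

```python
import numpy as np

# Hypothetical example: recover the dominant frequencies of a noisy
# periodic signal deterministically, with no training whatsoever.
fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples
# A signal built from two known tones plus noise (stand-in for real data).
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.3 * np.random.randn(t.size))

spectrum = np.fft.rfft(signal)             # one-sided FFT of the signal
freqs = np.fft.rfftfreq(t.size, d=1 / fs)  # matching frequency bins
magnitude = np.abs(spectrum) / t.size      # normalized magnitudes

# The two largest peaks land at ~50 Hz and ~120 Hz: structure a network
# would need thousands of examples to approximate falls out of one
# closed-form transform.
top = freqs[np.argsort(magnitude)[-2:]]
print(sorted(top))
```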

AI should not make us dumber; instead, it should be the Swiss Army knife in our toolbox that solves problems we cannot otherwise conceive of or control, and that helps us understand them better.

That said, it is a huge step forward that we now have dedicated architectures available on cloud platforms that solve problems in ways that were not possible a few years ago.

Let’s see what Wired has to say about it.

“By now you must have heard the good news about our savior, artificial intelligence. It makes you look better in selfies, prevents blindness, and can even turn water into tastier beer. Tech giants and governments say we’re living in a golden age of AI. Roll out the self-driving cars!

Truth is, most times you hear the term artificial intelligence, the specific technology at work is called machine learning. Despite the name, it relies heavily on human teaching.

Back in the 20th century, computer programmers had to get their electronic charges to do things by tapping out lines of code specifying exactly what needed to be done. Machine learning shifts some of that work away from humans, forcing the computer to figure things out for itself.

Machine learning sounds modern, but it’s one of the oldest ideas in computer science. In 1959, a room-filling computer called the Perceptron set a milestone in artificial intelligence when it learned to distinguish shapes such as triangles and squares.

It was built on an approach to machine learning called artificial neural networks—which also power most of the AI projects grabbing headlines today. Neural networks in the cloud or even on our phones are behind virtual assistants and goofy photo filters.

The resurgence of neural networks has made machine learning part of everyday life. All big tech companies’ plans for the future hinge on it, whether Alphabet’s ambitions to predict kidney failure before it happens or Amazon’s concept of stores without checkouts.

All that is genuinely exciting. Computers are becoming more capable at interacting with and understanding the world—and us. But don’t get swept away by the hype: machine learning doesn’t make computers anything like people.”

Content retrieved from: https://www.wired.com/story/how-we-learn-machine-learning-human-teachers/
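
The article's contrast between tapping out explicit rules and letting the machine "figure things out for itself" can also be shown in a few lines of code. Below is a toy perceptron sketch in Python, in the spirit of the 1950s machine the article mentions; the task (learning a logical AND) and all data are made up for illustration:

```python
import numpy as np

# Instead of a programmer writing the decision rule, the weights are
# adjusted from labelled examples until the rule emerges on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1])        # targets: logical AND (toy task)

w = np.zeros(2)                   # weights, learned rather than coded
b = 0.0                           # bias term
lr = 0.1                          # learning rate

for _ in range(20):               # a few passes over the data suffice
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)
        err = target - pred       # classic perceptron update rule
        w += lr * err * xi
        b += lr * err

print([int(w @ xi + b > 0) for xi in X])  # [0, 0, 0, 1]: learned AND
```

The human teaching the article stresses is still plainly visible: someone has to choose the inputs, label every example, and decide what counts as success.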