Andrew Ng Foresees the Next Decade in Artificial Intelligence
Have you ever felt that you'd had enough of your current line of work and wanted to transition to a new role? If you have, then congratulations, you aren't alone. Besides joining the Great Resignation, however, there are a few more radical approaches, like that of the most prominent AI figure, Andrew Ng. Ng, the founder of DeepLearning.AI and Landing AI, co-chairman of Coursera, and adjunct professor at Stanford University, wants to shift his priorities from bits to things.
Andrew Ng: From Bits to Things
In 2017, Ng founded Landing AI, a startup working on facilitating AI adoption in manufacturing. His motivation, he says, is industry-based: manufacturing is one of the most significant sectors, with a massive effect on people's lives. He therefore wanted to take the technology that transformed internet ventures and leverage it to assist people working in the manufacturing industry.
While AI adoption in the sector is growing, going from bits to things has proven far more strenuous than Ng imagined. After working on several customer projects, Landing AI created a new toolkit and playbook for making AI work in manufacturing and industrial automation, leading to Landing Lens and the development of a data-centric approach to AI.
Data-centric AI and Foundation Models
Ng has noted that expecting companies to train their own personalized AI models from scratch is not realistic. Instead, the way forward is to create tools that let customers build their own models, engineer their data, and express their domain knowledge. With Landing Lens, Ng and Landing AI enable domain experts to communicate their expertise through data labeling.
Consistency throughout the data turns out to be crucial for getting an AI system to reach good performance quickly. Landing Lens helps customers find the examples that produce the most consistent labels, improving the quality of the images and labels fed into the learning algorithm.
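Label consistency can be quantified in a simple way. The sketch below (a hypothetical helper for illustration, not the Landing Lens API) scores each example by how many annotators agree with the majority label, then flags the least consistent examples as candidates for relabeling:

```python
from collections import Counter

def label_agreement(annotations):
    """Fraction of annotators agreeing with the majority label per example.

    `annotations` maps an example id to the list of labels assigned by
    different annotators (structure and names are illustrative only).
    """
    scores = {}
    for example_id, labels in annotations.items():
        majority_count = Counter(labels).most_common(1)[0][1]
        scores[example_id] = majority_count / len(labels)
    return scores

def least_consistent(annotations, threshold=1.0):
    """Example ids whose agreement falls below `threshold`, i.e. the
    examples most in need of clearer labeling instructions."""
    scores = label_agreement(annotations)
    return sorted(eid for eid, score in scores.items() if score < threshold)

# Toy inspection data: three annotators label defects on metal parts.
votes = {
    "part_001": ["scratch", "scratch", "scratch"],  # fully consistent
    "part_002": ["scratch", "dent", "scratch"],     # one disagreement
    "part_003": ["dent", "scratch", "none"],        # no majority agreement
}
```

Running `least_consistent(votes)` surfaces `part_002` and `part_003`, the kind of ambiguous examples whose labeling conventions a domain expert would want to standardize first.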
The Next Decade in AI
As the volume of available data has grown, the amount of domain knowledge fed into AI systems has decreased over time. For example, in the earlier phases of deep learning (DL), people would train a small DL model and combine it with more traditional domain-knowledge-based methods. As models grew larger and were fed more data, less domain knowledge was injected. According to Ng, people came to take a pure learning-algorithm view of massive data, and machine translation eventually demonstrated that end-to-end learning methods could work quite well.
Ng said that 10 years ago he misjudged the amount of work needed to scale up DL. He thinks many of us are still underestimating the innovation, creativity, and tooling needed to bring data-centric AI to its full potential. But as we make collective progress on these technologies over the coming decade, Ng hopes they will enable many more AI applications.