The Silicon Trend Tech Bulletin


DeepMind AGI Paper Adds Importance to Ethical Artificial Intelligence

Published Tue, Jun 29 2021 05:22 am
by The Silicon Trend


Firms are spending more on massive AI projects & new investment in AI start-ups is on pace for a record year. According to McKinsey, many scientists & academics maintain that there is at least a chance that human-level AI could be achieved in the next decade. A further push comes from the AI research lab DeepMind, which recently published the peer-reviewed paper 'Reward is Enough.'

Such a breakthrough would allow for quick estimation & perfect memory, leading to an AI that would outperform humans at nearly every cognitive task.

People Not Ready for Artificial General Intelligence (AGI)

A recently published Pew Research survey of tech developers, innovators, policy & business leaders, activists & researchers reveals skepticism that ethical AI principles will be widely implemented by 2030. This stems from a widespread belief that businesses will prioritize profits & governments will continue to surveil & control their populations.

If it is challenging to tackle transparency, eradicate bias & ensure the ethical use of today's narrow AI, then the potential for unintended consequences from AGI appears astronomical. The economic & political impacts of AI could yield a wide range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. So far, however, artificial intelligence has tended to concentrate power, with relatively few firms controlling the technology. That concentration of power sets the stage for the feudal dystopia.



Expanding Computational Power & Maturing Prototypes Pave Way to AGI

Reinforcement learning algorithms aim to emulate people by learning how to reach a goal through seeking out rewards. As AI models like Wu Dao 2.0 & the computational power behind them both grow significantly, reinforcement learning via trial & error could be a path to AGI. The military already uses reinforcement learning to create collaborative multi-agent systems, such as robot teams that could operate side by side with future soldiers.
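Stripped to its essentials, that trial-&-error loop can be sketched with tabular Q-learning, a textbook reinforcement-learning algorithm. The toy chain environment & all hyperparameters below are illustrative assumptions, not details from the DeepMind paper; the sketch only shows how goal-directed behaviour can emerge from nothing but a reward signal:

```python
import random

# Illustrative sketch: an agent on a 5-state chain learns, purely by
# seeking reward, to walk to the rewarding end. The environment &
# hyperparameters are made up for this example.

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)    # step left or right along the chain
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; only the final state pays any reward."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                       # episodes of trial & error
    s, done, steps = 0, False, 0
    while not done and steps < 1000:       # safety cap per episode
        steps += 1
        if random.random() < EPSILON:      # explore occasionally...
            a = random.choice(ACTIONS)
        else:                              # ...otherwise exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy heads right toward the reward from
# every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

The agent is never told where the reward is; repeated interaction with the environment alone shapes its behaviour, which is the intuition behind DeepMind's claim that reward maximisation could be enough to drive intelligent behaviour.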

Work that takes a team of human design engineers many months can be done by AI in under six hours. Thus, Google uses AI to design chips that can in turn be used to develop more sophisticated AI systems, compounding the potential performance gains in a virtuous innovation cycle.

You may read: DeepMind Uses Artificial Intelligence to Handle Neglected Deadly Diseases

Shorter Time, Innovative Work

The DeepMind paper explains how AGI could be achieved. Getting there is still some way off, depending on the estimate, although recent advances suggest the timeline will be at the shorter end of the spectrum & possibly even sooner. GPT-3 is capable of numerous tasks with no additional training: it can produce compelling narratives, auto-complete images, generate computer code, translate between languages & perform math calculations.

One step towards greater adaptability is multimodal AI, which combines GPT-3-style language processing with other capabilities such as visual processing. Conventional wisdom holds that achieving AGI is not necessarily a matter of increasing the computational power & the number of parameters of a deep-learning system. However, there is a view that complexity gives rise to intelligence.

Just weeks after the Wu Dao 2.0 launch, Google Brain unveiled a deep-learning computer vision model containing two billion parameters. While it's not a given that recent gains in these fields will continue apace, some models suggest computers could have as much power as the human brain by 2025.



