Technology Singularity: Singularity in technology is a hypothetical state or event that is widely debated among scientists, researchers and technology experts. In this state, it is believed that technological progress, after a prolonged period of acceleration, would become so rapid that machine intelligence would surpass human intelligence. It is thus argued that the future of technology would be self-sustaining, with machines and devices able to experiment with and improve upon their own designs.
This progression would be far quicker than anything humans could achieve, and the resulting machines would outperform human-built designs and prototypes. It also implies a future in which technological advancement happens on its own, without human intervention, and humans have little or no control over it because they can no longer understand the new technology.
Singularity in Artificial Intelligence (AI)
Similar to technological singularity, singularity in AI is the forecast state in which AI-based programs and machines surpass human intelligence, begin to function independently, and develop ever more sophisticated programs. This state is seen as having profound implications for the future of humanity, as machines could advance by leaps and bounds and innovate far faster than humans.
Will AI complement or compete with humans?
Although the prospect of singularity is commonly viewed with apprehension, the authors of the article "AI and The Singularity: A Fallacy or a Great Opportunity?" hold a different view. The article presents several often-ignored facts about singularity which, when taken into account, reduce the fear surrounding the hypothesis. The authors point out that AI does not capture the full dimensions of human intelligence. Human intelligence does not rest on logical operations and computation alone; it also involves aspects that only humans possess: judgement, experience, wisdom, morality, values, curiosity, imagination, emotion and even humour. Singularity, as a concept, tends to ignore all of these.
Computer scientist Daniel Tunkelang, in his article "10 Things Everyone Should Know About Machine Learning", raises several important points that debunk the myths associated with singularity and AI.
Tunkelang states categorically that machine learning (ML) is highly vulnerable to operator error, quipping, “Machine learning algorithms don’t kill people; people kill people.” ML systems, he says, seldom fail because of their internal algorithms; they fail because of the training data fed into them. Operating AI-based machines therefore requires a discipline similar to that of software engineering: whether a machine produces adverse results is in the hands of humans, so operators need to be very careful with the data.
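Tunkelang's point can be illustrated with a minimal sketch (an illustration of the general idea, not an example from his article): the same learning algorithm can succeed or fail depending purely on the data it is trained on. Here a simple 1-nearest-neighbour classifier is trained twice on a toy task, once with clean labels and once after an "operator error" mislabels part of the training set, and its accuracy on the same test set is compared.

```python
def nearest_neighbour_predict(train, x):
    """Return the label of the training point closest to x (1-NN)."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points the 1-NN classifier labels correctly."""
    hits = sum(1 for x, y in test if nearest_neighbour_predict(train, x) == y)
    return hits / len(test)

# Toy task: the true label is 1 when x >= 0, else 0.
clean_train = [(x, int(x >= 0)) for x in range(-10, 11)]

# Simulated operator error: half of the negative examples are mislabelled.
noisy_train = [(x, 1 - y) if x < 0 and x % 2 == 0 else (x, y)
               for x, y in clean_train]

test = [(x + 0.5, int(x + 0.5 >= 0)) for x in range(-10, 10)]

print(accuracy(clean_train, test))  # same algorithm, clean data
print(accuracy(noisy_train, test))  # same algorithm, corrupted data
```

The algorithm is identical in both runs; only the training data differs, yet accuracy drops once the labels are corrupted, which is exactly the sense in which failures trace back to the data handed over by human operators rather than to the learning method itself.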
In his final point, Tunkelang writes, “AI is not going to become self-aware, rise up, and destroy humanity.” He dismisses the misconceptions people have formed from science-fiction films, in which machines are depicted rising up to destroy humanity.