Since Deep Blue blazed the trail by beating a human world champion at chess in 1997, the age of complex computing algorithms and programs has been on the rise. Every year, programmers identified new complex puzzles and wrote algorithms that could work through scenarios and improvise on the fly.
The advent of Artificial Intelligence began with the evolution of neural networks and with growing maturity in procedures and code modularization, which better coders could keep enhancing. This approach soon hit its own ceiling: human limitations cap the quality of hand-written code, so self-learning subroutines became more and more prevalent over time.
With computational capacity multiplying year after year and hardware becoming cheaper, it was only a matter of time before Big Data became the obvious choice and the organic progression from Data Warehouses. From normalized tables in relational databases, to a simplified master schema in a data warehouse, to a direct dump of unstructured data into Hadoop storage that is processed in place for insights and assessment, we are moving ever faster toward natural language being the direct input to our intelligent systems.
These advancements fueled the Big Data movement, in which every individual and organization looks to consume huge data sets at a reasonable cost, while the older methods of data sampling, approximation, and extrapolation are becoming a thing of the past.
While enterprise solutions evolve and make it easier to build new applications, researchers keep finding ways to push the boundaries. From topical, specialized AI, researchers are stepping into the space of AGI (Artificial General Intelligence) and into the domain of complex decision-making systems that require more than one direction of thinking.
The biggest frontier in this space is gaming, because games require humans to think laterally and often to use non-linear approaches to win against computers or other human beings, depending on the nature of the game.
The new generation of games
In 2015, DeepMind Technologies, a subsidiary of Google (acquired for a reported $500 Mn in 2014), created the program AlphaGo, which leveraged machine learning algorithms to train itself on past games played by many players of the ancient Chinese board game Go. It took AlphaGo three months of processing past games and retraining itself against real human opponents to produce the iteration that defeated a human champion of Go.
As AlphaGo was a machine learning system, DeepMind started pitting two different versions of AlphaGo against each other, since human players were no longer posing enough of a challenge for the algorithm to learn from. By 2017 the new incarnation of AlphaGo had become so efficient that it beat the original version, the one that had defeated the human champion, by a margin of 100 to 0, with less processing power than its predecessor. The new version needed just three days of training to attain that level of play.
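The self-play loop described above can be sketched at a toy scale. The snippet below is a minimal illustration, not DeepMind's actual method: a "learner" agent repeatedly plays a frozen copy of itself at a trivial game (higher bid wins), reinforces the moves that win, and is then frozen as the next generation's opponent. The game, the `Agent` class, and the reward values are all invented for illustration.

```python
import random

class Agent:
    """Toy agent with a weighted policy over moves 0..9 (hypothetical action space)."""
    def __init__(self, policy=None):
        self.policy = policy if policy is not None else [1.0] * 10

    def act(self):
        # Sample a move in proportion to its learned weight.
        return random.choices(range(10), weights=self.policy)[0]

    def clone(self):
        return Agent(list(self.policy))

    def reinforce(self, move, reward):
        # Crude policy update: nudge the weight of the played move, floor at 0.01.
        self.policy[move] = max(0.01, self.policy[move] + reward)

def self_play_train(generations=20, games_per_generation=200, seed=0):
    """Each generation, the learner plays a frozen snapshot of itself;
    the snapshot is then refreshed, so later versions train against
    ever-stronger predecessors -- the self-play curriculum in miniature."""
    random.seed(seed)
    learner = Agent()
    opponent = learner.clone()
    for _ in range(generations):
        for _ in range(games_per_generation):
            move, opp_move = learner.act(), opponent.act()
            if move > opp_move:
                learner.reinforce(move, +0.5)   # winning move: strengthen
            elif move < opp_move:
                learner.reinforce(move, -0.2)   # losing move: weaken
        opponent = learner.clone()  # freeze the current learner as the next opponent
    return learner

trained = self_play_train()
```

After training, the policy concentrates on the dominant move (the highest bid), mirroring how self-play lets an agent keep improving once human opponents stop providing a useful signal.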
The same subsidiary, DeepMind, then started working on a different problem. In Dec 2016, DeepMind and Blizzard released a version of StarCraft II as an AI research environment. This was a huge step: games like Chess and Go have fixed rules and a finite set of possible moves, which makes them great cases for neural-network-based algorithms, but by entering the StarCraft universe they were taking on a real-time game with imperfect information and a vast, open-ended action space, which is far harder to master. The developer community was astonished by this development in 2016, and almost everyone was skeptical. It took the algorithm two years to scope out StarCraft's full open environment and learn the tactics needed to thwart the moves of opposing players. The program was named AlphaStar.
“When Google’s AI can beat a human at StarCraft, it’s time to be very afraid.”
— "theptip", a user on the Y Combinator news site Hacker News, Nov 2016
Objections were raised that an AI has an undue advantage over a human in I/O; i.e., human fingers are inherently slower than processes a computer runs internally. So, to create a level playing field, the AI was throttled to trigger only 15 actions per second. Even with that limitation, on 24 Jan 2019 AlphaStar recorded 5-0 victories over top StarCraft pros. Professional StarCraft commentators were astounded by AlphaStar's play, describing it as "phenomenal" and "superhuman."
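A cap like the one imposed on AlphaStar can be sketched as a simple fixed-window rate limiter. This is purely illustrative; the class name, the window logic, and the default of 15 actions per second are assumptions, not DeepMind's actual interface. Actions beyond the per-second budget are simply dropped.

```python
class ActionLimiter:
    """Fixed-window limiter: allows at most `max_per_second` actions in each
    one-second window; surplus actions are rejected (illustrative only)."""
    def __init__(self, max_per_second=15):
        self.max_per_second = max_per_second
        self.window_start = 0.0
        self.count = 0

    def allow(self, now):
        # Start a fresh window once a full second has elapsed.
        if now - self.window_start >= 1.0:
            self.window_start = now
            self.count = 0
        if self.count < self.max_per_second:
            self.count += 1
            return True
        return False

limiter = ActionLimiter(max_per_second=15)
# An agent attempting 100 actions within the first second gets only 15 through.
granted = sum(limiter.allow(t * 0.01) for t in range(100))
```

A production design would likely smooth bursts (e.g., with a token bucket), but the effect is the same: the agent's throughput is held near human scale.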
The team at DeepMind has done a tremendous job. However, it is important to recognize the differences between AI and human intelligence. AlphaStar cannot develop a human's mental model of StarCraft II, but it can certainly make up for that shortcoming through sheer speed and parallel processing. That is why it was able to play through hundreds of years' worth of games in a matter of two weeks and to merge the knowledge of several AI agents into a final model. Translated to everyday life, this means that enough such engines, combined, could gain deep insights into our human civilization in less than a decade.
So the proverbial clock for the Singularity is ticking!