AlphaCode, DeepMind’s artificial intelligence coding system, competes with human coders in simulated coding competitions

AlphaCode by DeepMind can compete with human coders

AlphaCode achieved an average ranking in the top 54.3% when evaluated on simulated versions of recent coding competitions on the Codeforces platform, under a limit of 10 generated solutions per problem. Of the problems AlphaCode solved, 66% were solved with its first submission.

This may not seem impressive, especially compared with models’ seemingly superior performance against humans in board games. However, the researchers point out that winning coding competitions poses a unique challenge: AlphaCode had to understand complex coding questions written in natural language and then “reason” about unfamiliar problems rather than memorize code snippets. AlphaCode solved problems it had never seen before, and the researchers say they found no evidence that the model copied core logic directly from the training data. Taken together, they say, these factors make AlphaCode’s performance “a big step forward.”