The World Go Championships
When DeepMind’s AlphaGo algorithm played the world champion, Lee Se-dol, at Go, an ancient Asian strategy board game, the world watched in shock. Go is a complex game with a large psychological element, a little like poker. A search-tree approach, the kind that plays chess very effectively, cannot handle the 10-to-the-power-of-170 possible Go board configurations and moves. The fact that the machine learnt to play the game at all would have been enough to get the algorithm into the record books. Something even more remarkable happened the day it went up against one of the brightest human minds, though.
The now-legendary ‘move 37’ that the machine made in the second game was initially interpreted as a mistake on the machine’s part, because it was so out of sequence with the way humans play the game. As play continued, the machine made another move, and another, until its opponent was in a hopeless position. It shocked the large television audience watching live across Asia, and it absolutely stunned the computer experts catching up later that day. Move 37 was an act of genuine, unmistakable creativity, and machines were not meant to be able to do that. You can watch the award-winning documentary about the match.
Machine creativity and the law
Since then, machines have been used to create a wide range of works, from art, music and poetry to more traditional design engineering – all outputs that can be monetized. Copyright law is struggling to catch up. While anything written by a human is, in theory, subject to copyright protection, the key factor is that the creator must be human. With AI currently augmenting human output, is an AI intervention human-driven or machine-driven?
This is an area where technical and legal concerns come into conflict. Any machine learning algorithm needs to be trained on a wide range of data, collated at the very least by a human hand, and much of that data set is under copyright. How should we interpret the training process in a legal context?
It could be treated the way we view human reading: a person is free to repeat or summarise the content of anything they’ve read in their own words, as long as they cite it. An alternative is to treat training like sampling music in a hip-hop track, where copyright rules must be respected and royalties paid.
A second question involves originality. Take, for example, the Next Rembrandt project. Very little artistic skill, judgement or labour was involved, but considerable technical expertise was. Is the resulting ‘Rembrandt’ the output of a creative, albeit non-human, mind, or is it an example of forgery?
Many countries have begun to address these issues. In fact, the first law to anticipate the advent of AI dates back to 1988, when protection for computer-generated works was introduced in the UK. It was intended to protect objects such as weather maps rather than musical tunes, and interpretation of copyright law has been variable ever since.
In general, the sticking point is the extent to which humans need to be involved. Strictly speaking, no machine can create anything without some human involvement, whether in selecting data and training the AI, or more directly in specifying the type of output, such as a genre of music, or in curating the best of the many poor options the machine spits out.
The nearest thing we have to an international standard is the AIPPI resolution on artificially generated works. It does not resolve every question, but its emphasis on human involvement and its strict definition of originality are helpful. With several nations reviewing their IP laws at the moment, further decisions are likely to follow.