
Artificial intelligence: Will computers become more creative than us?

By guest author Dr. Isabella Hermann, Ms. AI Ambassador for AI & Politics


In 2018, the painting "Portrait of Edmond Belamy", created by an algorithm, was auctioned at Christie's for $432,500. The news made headlines because artificial intelligence (AI) now seemed to be intruding into the ultimate creative domain of humans: the creation of art. But algorithms do not only paint pictures; they also compose music, translate between languages, write texts and defeat humans in complex strategy games.


"Portrait of Edmond Belamy": Wikipedia. AI-artwork signed with an algorithm by real human artists.

Creativity – generally understood as coming up with something new that has value or is useful – was previously considered a specifically human ability. Why do we worry when machines enter this domain? It is not only an insult to the human ego; we are afraid that machines will take over and that we will become superfluous. Not only in art, music and literature, but in all fields.


However, increasingly powerful computer systems do not simply fall from the sky; they are built by us, precisely because we are creative. The science fiction author Isaac Asimov called this paradox – the desire to create something intelligent combined with the simultaneous fear that the creation could get out of control – the "Frankenstein complex". Whether out of capitalist market logic or romantic inventiveness, AI systems are built and used to create value and utility according to human judgement: in most cases a gain in money and efficiency, savings in costs and time, and, not to be underestimated, an increase in scientific knowledge.


Machines with human characteristics

But precisely because we apply human standards, we come to fear that machines could replace us. We tend to anthropomorphize, i.e. humanize, AI systems: we transfer human concepts, characteristics and abilities such as intelligence, creativity or feelings onto machines. What matters to us is then no longer what an AI system actually can do and does, but what we believe it can do and does. Accordingly, we assume that if we perceive something as intelligent, creative or emotional, then it really is.


As a result, we do not see AI systems as tools, but as equal competitors, opponents or perhaps even saviours – images we have long known from science fiction films. The media take up such notions iconographically, portraying mere computer programs as humanoid robots performing human activities. One example is OpenAI's powerful GPT-3 language model, presented only last July, which some media depicted as a white child-like robot lying on its stomach reading a book.


Such anthropomorphizations are misleading because they suggest that AI systems understand what they are doing. They do not; they are just powerful computers that use data inputs to optimise for results according to human standards. GPT-3 has been fed with the largest text data set taken from the internet to date and can quickly produce texts in any style from just a few specifications, whether a business email, a lifestyle blog or a poem. The results are impressive, but GPT-3 remains a tool that calculates the probabilities of consecutive words. Tests show that the system only masters the syntax, i.e. the grammatical rule system of the language, but not the semantics, i.e. the meaning and content of what is written. The fact that GPT-3 does not understand anything makes the program no less useful, but also no less dangerous: people could generate masses of fake news and spam in seconds.
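
To make the point about "probabilities of consecutive words" concrete, here is a deliberately minimal sketch in Python (an illustrative toy, not OpenAI's actual implementation): a bigram model that simply counts which word follows which in a tiny corpus and turns those counts into next-word probabilities.

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3 was trained on hundreds of billions of words, not three sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the raw counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: round(c / total, 2) for w, c in counts.items()}

print(next_word_probabilities("the"))
# -> {'cat': 0.33, 'mat': 0.17, 'dog': 0.33, 'rug': 0.17}
```

GPT-3 works in the same spirit, except that a neural network conditions on thousands of preceding tokens instead of a single word; scaled up enormously, this purely statistical prediction is enough to produce fluent text without any understanding of its meaning.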


AI is a mirror of our world

AI systems are basically always a mirror of our society, because the algorithms and the data they are fed with reflect our human interests and world views. This can be a problem if, for example, GPT-3 picks up discrimination and misinformation from its training texts and spreads them without understanding what it is doing. But it can also be an opportunity if we look at such systems critically and learn from undesirable outcomes.


Moreover, AI does not stand on its own but, like all technology, is socially embedded. The auction of the "Portrait of Edmond Belamy" was not really about whether algorithms will replace man-made art – especially since behind the artwork there were actually people, the Parisian AI artist collective "Obvious". Rather, the whole process demonstrated par excellence that the AI investment hype has now spilled over into the already overheated art market.


And finally, the truly creative achievement in the context of AI is not what the programs calculate from given data, but how they are written. The AlphaGo program developed by Google DeepMind, which defeated Go champion Lee Sedol in 2016, is not intelligent; it simply has enormous computing capacity. The trick was to find a new approach that combined different AI components – deep neural networks and tree search – in such a way that the program suggested the right moves, something the team around lead researcher David Silver worked on day and night right up to the competition. After the victory, it was not the computer on stage that was applauded, but the team of programmers.


Only humans have world knowledge

AI can do amazing things and will make many professions obsolete, but it will also create many new ones. In primarily creative fields, artists, musicians and writers use AI systems as tools for their artistic work.


But AI systems always operate on the past, on a world of data chosen by people. We humans have the world knowledge about social complexities and thus the visions for the future. AI systems can support us on this path, but social and cultural progress through creative ideas can only come from us.

