Machines learning to create art. Humans learning to appreciate and enjoy their creations.

There are Google Magenta, Google Art Experiment, Artists and Machine Intelligence, DeepDream and Vincent, among others. All are important steps in using machine learning to push art's frontier.

 

Hold on, what do we understand by the word art?

After reading the Oxford dictionary, philosophy and academic sources, and finally landing on a site showing famous people's definitions of art, I conclude there is no single definition of art and will go with how Lisa Marder explains it:

“… there is general consensus that art is the conscious creation of something beautiful or meaningful using skill and imagination”. – Ways of Defining Arts, Lisa Marder

 

In the first place, why do we even love and appreciate art?

Now put ourselves in the artist’s world! Every piece of art is incomplete, leaving a magical missing piece for us to relate to. It is completed only when we look at, listen to, feel or interact with it, stirring the emotional power in us: eliciting feelings of lightness, heaviness, numbness, spaciousness, sadness, density, fear, shock, anger, hope and awe, through to profound realisations, inspirations and uncontrollable reactions.

As for me, I can only appreciate and relate to art I can connect with, drawing upon my intrinsic interpretation of skill and beauty.

Now, let me put on my logic hat.

From a scientific point of view, we are social beings and naturally draw connections with things around us. Did you know the mirror neurons in our brains help us understand other people’s actions, intentions and emotions by imitating them? When we receive an external stimulus like a painting, the mirror neurons create an inner simulation. Without having to physically experience it, we can relate to the emotions the painting is trying to evoke, and even to what the artist was experiencing.

Art philosopher Denis Dutton spoke of artistic beauty not being entirely cultural in his TED talk:

One fundamental trait of the ancestral personality persists in our aesthetic cravings: the beauty we find in skilled performances… We find beauty in something done well.

– TED Talk: A Darwinian Theory of Beauty, Denis Dutton

It is in our genes: we are drawn to things skilfully done!

 

Technology in art marks a new era.

For many decades, technology has played a big role in facilitating the creative arts. Interestingly, we are now beginning to see machines attempting creative work, long thought to be a uniquely human talent.

A short detour to quickly understand art through known history and science.

Sprinting through the long history: since the Bronze Age (~3,200 BC), art was used to honour ancestors or beliefs in something greater than ourselves. The Age of Idealism (~900 BC) was when art began to show individualism. From the Middle Ages (~500 AD) onwards, common trends around the world were to use art for promoting religion and status, and for swaying the masses towards an ideology.

A large part of how art progresses seems to closely follow the spirit of the era. Since the 18th century, art has slowly evolved towards discovering and expressing our own styles and experiences.

Being outright lazy, here is a paraphrase from a good (long) neuroscience article explaining art and evolution: most activities that are important for the survival of a species, such as eating and sex, are pleasurable; human brains evolved mechanisms to reward and encourage these behaviours, promoting the passing on of genes. But humans can learn to tap directly into these neural reward systems: we can eat foods that have no nutritive value and have sex without reproducing. As cognitive psychologist Steven Pinker puts it, the arts respond to “a biologically pointless challenge: figuring out how to get at the pleasure buttons of the brain and deliver little jolts of enjoyment without the inconvenience of wringing real fitness increments from the harsh world”.

 

Art on its own means skill and craft.

Creative art adds mind and intuition: bringing disparate things together and finding meaning in them with skill and craft.

These people are all using creative skill and craft: sculptors who carve, musicians who compose, artists who paint, scientists who discover through experimentation, businessmen who create whole new business landscapes, and digital artists who produce creative work like film, music, paintings and web design.

 

So can a machine truly be an artistic creator?

After all this information, I would say yes! But only in the case where humans are involved just in setting up and training the machine to create art. Once a good algorithm has been learnt, the machine can create new artworks without human input. In my opinion, this can count as creative art done by a machine.

The rest of this post will be about using machine learning methods to create art, specifically drawing and painting possibilities, since I enjoy drawing too!

A progressive flow of how a human-assisted machine becomes an independent creator:

  1. Categorising styles – just sorting them out
  2. Transferring styles – the machine changes images into a different style
  3. Suggesting/designing styles – human and machine collaboration
  4. Creating new styles – the machine’s own creation

 

1. Categorising style

Categorising the style of an artwork using machine learning. Take the example of simply sorting art pieces into painting, drawing, graffiti and sketching. Or more complex sorting, such as what was used to create a painting (oil pastel, watercolour or colour pencil), or an artist’s style, nuances and subtle characteristics.

A classification method will be used for this form of sorting. Usually, a deep learning (neural) network is used for better accuracy.
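To make this concrete, here is a minimal sketch (my own illustration, not from any of the projects mentioned) of how such a classifier could be set up in PyTorch, assuming the artworks are organised into folders named after hypothetical medium labels such as oil_pastel, watercolour and colour_pencil:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the pretrained backbone sees familiar inputs.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder infers class labels from directory names, e.g. artworks/train/watercolour/...
train_data = datasets.ImageFolder("artworks/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a network pretrained on ImageNet and retrain only the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:      # a single pass, just to show the training step
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```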

Categorising into various artists’ styles may get very debatable, as most people will likely refer to well-known artists. When it comes to art, you never really know what a unique style is until the artist’s work is recognised. To complicate things further, being recognised can mean recognition across the whole world, within a country, within a community, or even within an aspiring group.

Another way of categorising artworks is to use unsupervised learning to cluster them into similar styles. It is an efficient way to find out what styles exist when you have too much artwork data, sometimes with surprising results showing how the pieces are similar in ways you never thought of before.
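A rough sketch of that clustering idea, under the assumptions that the images sit in a hypothetical artworks/ folder and that 10 clusters is a reasonable guess: embed each piece with a pretrained network, then let k-means group similar styles together.

```python
import glob
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained backbone with the classifier removed, so it outputs a 512-d feature vector.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

features = []
for path in glob.glob("artworks/*.jpg"):            # hypothetical folder of artworks
    image = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features.append(model(image).squeeze(0).numpy())

# Group the artworks into 10 style clusters (the number is an arbitrary choice).
labels = KMeans(n_clusters=10).fit_predict(features)
```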

 

2. Transferring Styles

Starting with single style transfer.

Style transfer is one of the earliest methods using neural network models (or deep learning) to create artistic images. Even though a lot of progress has been made in the last couple of years, it is still a new area with lots of research opportunities.

The output image keeps its content but looks as though it was created in a different style. The results are really promising in an artistic sense, with the potential to recreate pictures in any style. An example below is from the Google Research Blog:

Left: Original photo by Zachi Evenor. Right: processed by Günther Noack, Software Engineer. (Source: Inceptionism: Going Deeper into Neural Networks)

And style transfer on a video:

Now, a simple explanation of how a deep learning (neural) network creates it.

A deep learning network usually has many layers. We will start with what happens in each layer. When a picture is processed by the network, the lower layers pick up patterns such as colours, edges and shapes. As the layers go higher (deeper), the network gradually learns more abstract, complex and fine details. The lower layers, when used to enhance an image, create the effect you see in the photo above, or more here.
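If you are curious what those lower and higher layers actually output, here is a small sketch (again my own illustration, not from the post) that pulls activations from an early block and a deep block of a pretrained ResNet-18:

```python
import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Grab the output of an early block and the deepest block.
extractor = create_feature_extractor(model, return_nodes=["layer1", "layer4"])

image = torch.rand(1, 3, 224, 224)        # stand-in for a real photo
with torch.no_grad():
    acts = extractor(image)

print(acts["layer1"].shape)   # [1, 64, 56, 56]  – large maps of edges, colours, textures
print(acts["layer4"].shape)   # [1, 512, 7, 7]   – small maps of abstract, object-like features
```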

What details the layers extract is well illustrated here:

Source: Understanding, generalisation, and transfer learning in deep neural networks

Want to give it a try? Use this site.

For those who know some neural network basics: instead of classifying the data, you are actually transforming the input image with the style you desire, by doing gradient descent (minimisation) on the output image’s content and style losses against the input image and the chosen style. More information to read or watch.
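For the curious, here is a compressed sketch of that optimisation in the spirit of the original Gatys et al. approach: start from the content photo, then run gradient descent on the image itself, balancing a content loss against a Gram-matrix style loss. The layer indices and loss weight below are illustrative assumptions, and real use would load and normalise photos rather than use random stand-ins.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen VGG-19 feature stack; the indices below correspond to the usual
# conv4_2 (content) and conv1_1/2_1/3_1/4_1 (style) layer choices.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER, STYLE_LAYERS = 21, [0, 5, 10, 19]

def get_features(x, layer_ids):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats[i] = x
    return feats

def gram(f):                      # style = correlations between feature maps
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = torch.rand(1, 3, 256, 256)   # stand-ins for real, preprocessed images
style_img = torch.rand(1, 3, 256, 256)

with torch.no_grad():
    content_target = get_features(content_img, [CONTENT_LAYER])[CONTENT_LAYER]
    style_targets = {i: gram(f) for i, f in get_features(style_img, STYLE_LAYERS).items()}

# Optimise the pixels of the output image, not the network weights.
output = content_img.clone().requires_grad_(True)
optimiser = torch.optim.Adam([output], lr=0.02)

for step in range(200):
    feats = get_features(output, STYLE_LAYERS + [CONTENT_LAYER])
    content_loss = F.mse_loss(feats[CONTENT_LAYER], content_target)
    style_loss = sum(F.mse_loss(gram(feats[i]), style_targets[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e4 * style_loss     # the weighting is an illustrative choice
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```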

 

On to mixing different styles:

Most of us probably have a few favourite styles. Here is the multi-style pastiche generator from Magenta TensorFlow, illustrating how a photo can be recreated with different styles:

Original photo:


After mixing styles:


And a real-time multi-style app:

The drawback of this style transfer is its inability to accurately recreate fine details. Sometimes you will want to retain the high resolution of faces and landscapes.

Different methods (Markov Random Fields (MRFs), Champandard’s approach and k-Nearest Neighbours) are already being explored to improve the resemblance to the original image’s fine details.

 

3. Suggesting / Designing style

A collaboration between humans and machines to create art, with the machine assisting the human by suggesting styles and designing together.

The closest thing to style suggestion is a nascent but promising one using assisted drawing, which seems to hold many future possibilities. As for designing style, “Vincent” is the latest development in this space, with DeepDream by Google having some aspects of co-designing too.

Suggesting Style:

Design your drawing on a blank canvas assisted by an AI bot. AutoDraw by Google does just that, in the form of clipart-style doodling.

A good thing about this is that it lets you design your drawing while a bot continuously suggests pictures for you to choose from, sometimes with absurd suggestions that may expand your imagination. Who knows?

 

Designing Style:

A few years ago, machines were already able to improvise on classical music. Now, Cambridge Consultants has come up with “Vincent”, which builds on your sketch input, creating art on the blank canvas with you.

I would say Vincent is a mix of suggesting and co-designing art with humans. Using what it has learnt from thousands of paintings, from the Renaissance to the current day, Vincent turns your sketch into a complete drawing, with the sketcher guiding and influencing its output.

I really like “Vincent”! I am calling it a sketcher’s transformer:

Vincent uses a relatively new neural network architecture called Generative Adversarial Networks (GANs) to improve what the networks are learning, an approach known for its ability to regenerate photorealistic pictures. More information here and here.
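Vincent’s actual system is not public, so the following is only a bare-bones sketch of the adversarial idea behind GANs: a generator learns to produce images that fool a discriminator, while the discriminator learns to tell real paintings from generated ones. The network sizes, image size and the random stand-in batch of “real paintings” are all assumptions for illustration.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64 * 3    # flattened 64x64 RGB images for simplicity

G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1       # stand-in for a batch of real paintings
    noise = torch.randn(32, latent_dim)
    fake = G(noise)

    # Discriminator: push real images towards 1 and generated images towards 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for its fakes.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```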

 

Next is creating art with DeepDream by Google.

This method sometimes generates unexpected images. That’s where all the black-box magic happens.

In the hands of creatives, there are always ways to create cool stuff. Give the neural network model a tweak! Go deeper and mess around with the higher network layers.

DeepDream is an interesting way to recreate a style likened to memory reconstruction. The output effects it creates have some aspects of co-design between human and machine.

We give the machine an image to design into something only it can relate to. A glimpse at what happens when the higher network layers of a deep learning model are applied to images (the outputs look totally different!):

Inceptionism: Going deeper into Neural Networks

And videos using DeepDream (first with the lower layers, second with higher layers):

There is a website, DreamDeeply, where you can try out DeepDream images!

Like me, you may feel that DeepDream images using the lower neural network layers (video: Deep Dreaming of Alice) seem similar to results from the style transfer method. In actual fact, DeepDream uses a very different method.

Instead of trying to classify a picture, as a neural network usually does, we maximise whatever the chosen trained layer finds in the input image. Remember that the lower and higher layers learn different types of details?

If you have some basic understanding of neural network backpropagation, this video gives a good explanation of how it is done; there is also a video explaining it using Google TensorFlow.
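As a rough illustration (not Google’s actual DeepDream code), the core trick is gradient ascent on the image itself: amplify whatever a chosen layer already responds to in the picture. The layer index and step size below are assumptions, and the real DeepDream also works over multiple image scales (“octaves”).

```python
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

LAYER = 20                      # deeper indices give more "psychedelic" output

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo

for step in range(50):
    x = image
    for i, layer in enumerate(model):
        x = layer(x)
        if i == LAYER:
            break
    loss = x.norm()             # how strongly the chosen layer responds
    loss.backward()
    with torch.no_grad():
        # Gradient ascent: nudge the pixels to increase the layer's activations.
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)
```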

DeepDream creations using the higher layers drift away from more predictable creations. How the higher layers interpret a picture can sometimes change the output image completely into something else. The psychedelic effect of the output is a style of its own. Even though it is still limited to what the trained layers have learned, you may get surprising results!

 

Progress is never-ending! An interesting work by Qifeng Chen at Stanford University uses a memory-reconstruction method to create dreamlike fake streets: create a scene by labelling the objects to be found in it, then leave the algorithm to reconstruct how it might look in photo style.

 

 

4. Creating new styles

Without human assistance, can a machine learn to create a drawing or painting style of its own?

My favourite, Shimon, can now not only improvise music but also create its own classical music.

Since I was unable to find what creating a new drawing or painting style means to me, let me explain my thoughts with a short story.

For the last 12 months, Don has been sitting in the middle of a big, bustling city park filled with beautiful perennial and seasonal flowers and sparse trees. Every day, Don takes in everything that happens in its 360-degree line of vision: the colourful, fun-filled park during the day and, during the quiet late night, couples strolling, the occasional mugging and vice activities.

Don not only records what it sees but sorts it all into information clusters through its algorithm. Its surroundings are sorted every 30 seconds into weather, colours, people, animals, insects, sound, spatial layout, ongoing activities and so on, up to 100 thousand different types of clusters, creating new clusters when necessary. In each cluster, every piece of sorted information has a very long list of features covering different emotional states, level of significance in a situation, design principles, general elements of design, colour and tone representation, and so on. Each feature is given a weight according to its correlation with what the information is about (e.g. a kid laughing will have a higher weight on the happiness feature).

Each morning from 7am to 8am, a crowd gathers around as Don’s algorithm starts processing all the information it has sorted.

Don then starts creating one art piece, reconstructing a random segment of the park. It decides on its own whether to use a computer-generated watercolour, oil pastel, sketch, photorealistic or mixed style. The daily artwork is influenced by what was recorded over the last day, week and month, and since the day Don was switched on in the park, similar to our short-, mid- and long-term memories.

With learning capabilities, Don can improve its skill through our feedback. After seeing Don’s artwork, you decide to push the buttons to rate what feelings it evokes in you. After rating 10 emotions, you catch a sign below a camera reading, “Love your feedback! Please note your ratings might be normalised if our camera detects significant inconsistencies between what you have rated and your facial expressions. This is to minimise trolling and incorrect data inputs for Don to learn from.”

Don’s first few weeks of artworks are a messy blend of activities happening in the day and night. Gradually, over time, some artworks become beautifully blended expressions of life in the park. When a mugging happened the night before, Don deems it significant and traumatic enough to use darker shades and violent figures in its artwork. Some days you see a detailed and fine artwork, some days one with a queer twist that you feel might indicate a lot of unexpected activity over a period.

People start calling it fake, spooky, nonsense, a scam, artistic, talented, sick, awesome, and so on. Don doesn’t care a bit! Only once it has recorded your reactions and words might the next artwork, to some extent, be inspired by you.

What will you think of Don the artist?

 

Will we appreciate fully machine-generated art?

Let’s first take a look at how we have adapted to modern art, which after over a century is now a multi-billion-dollar market. If you do a search, there are definitely many mixed feelings (more negative than positive) about modern art.

I would say art is very personal. A great piece of art might be perfect for many, but there will always be someone who feels nothing special about it. When a piece of art is meant for you, it becomes a bridge between your inner world and the senses it evokes.

Machine art will probably go through the same, or an even more challenging, passage to social acceptance. Maybe we should also wonder how the unknown generation after iGen will embrace art.

To end, here are some questions to ponder. If a machine creates its own art without human input, can we feel talent in its artwork? Can we, or acclaimed artists, really appreciate a piece of art or music created by a machine? Can we even call it a “masterpiece”?
