Raise your hand if you have never used your calculator to play Super Mario in class instead of solving differential equations. For this first article in a series on design and machine learning, I want to start by exploring what happens when things go purposefully, exquisitely wrong.
For a long time, the tools available to artists were designed specifically for them. It’s hard to do anything but paint with a tube of paint — even though the invention of paint tubes revolutionized visual art by allowing painters to take their easels outside the studio, where they had previously mixed fragile pigments themselves.
Photography may be one of the first technological tools to be “hijacked” by artists. The pioneers of photography were interested in an accurate depiction of the world, but visual artists were quick to use the tool to depict more than the view from their room. As the devices became cheaper and easier to use, amateurs and the general public adopted them too. Eventually, photography became ubiquitous, and today everyone carries a camera with them — in their smartphone.
That’s the life cycle of technological innovation: from experts to aficionados and finally to just about everyone. What took almost two hundred years for photography (from Niépce’s first heliographic print in 1826 to the first iPhone in 2007) now takes only a couple of decades (consider how the layer model of images introduced by Photoshop in the 1990s is reflected in Snapchat filters from the 2010s).
Can machine learning be creatively misused?
We might be at the beginning of this process for generative design. Even though many machine learning platforms such as TensorFlow are open-source, they are not accessible to non-developers, or at least to anyone unwilling to invest significant time in educating themselves and tinkering with models. But the technology has already crossed over to artists. And you guessed it: they are misusing it. Mario Klingemann works with machine learning models to create otherworldly portraits. His goal is not to fool you into believing you are looking at a “real” painting or photograph, as a typical app would try to. Instead, he generates images that question our grasp of “reality” and our relationship to machines.
As interfaces are developed to make data manipulation and computer-assisted creativity easier, I predict that users will create new kinds of images: a novel visual vocabulary that goes beyond face filters and the surge of deepfake video apps.
Creativity cannot emerge if everything goes according to plan. I would argue that closed, top-down interfaces such as today’s social media platforms actually curtail creativity: everything we post appears exactly as intended, with the design choices left to the company’s designers rather than to the user. In contrast, early web companies such as GeoCities or MySpace gave users more freedom to customize their pages. True, that sometimes led to choices that would make most professional designers cringe, but it introduced a generation to visual web design and CSS. In turn, it inspired artists such as Olia Lialina and Dragan Espenschied to document these visual riches with their project One Terabyte of Kilobyte Age.
How to design for creative misuse
When I published AR Copy Paste, the project that would become ClipDrop, I was agnostic about its use: the one user it addressed was… myself. Since the code is open-source, I invite users to use and misuse ClipDrop in creative and unexpected ways. This spring, someone made margaritas using some kind of robot cocktail maker and the ClipDrop prototype. I loved it, and I have to say it sets the bar pretty high.
You obviously can’t expect the unexpected. But you can make room for it by building tools that are modular, open-source, and easy to experiment with. Please keep the margaritas, and the ideas, flowing!