Creative Computer
Mind, Body, and Will

“Progress happens when all the factors that make for it are ready, and then it is inevitable.”

–Henry Ford

“All mankind is of one author, and is one volume”

–John Donne

I often find people using creativity as a thought-terminator in discussions of artificial intelligence: “Computers can’t do [whatever] because they can’t be original! They’re much too deterministic!” But is that really true? Does creativity require nondeterministic behaviour? Can computers really not accomplish it?

Although Dreyfus, Searle, and others showed that intelligence problems like creativity are hard, I predict creative machines, for two reasons: we have since solved other problems that Dreyfus identified as hard, and creativity is an easier problem than we might think.

some intelligence problems are hard

John Searle’s Chinese Room thought experiment1 drew a distinction between intelligence and simulated intelligence: intelligent behaviour is never proof of actual intelligence or “understanding”. He argued that a machine with real understanding (strong AI) is much harder to create than a machine that merely behaves intelligently (weak AI). However, Searle put no limits on the ability of weak AI to act intelligent; in fact, the experiment presupposes that a weak AI could pass a Turing test.

Similarly, Moravec demonstrated2 that, perhaps counterintuitively, things we think of as “hard” (logical-symbolic reasoning, say chess or math) are actually much simpler than things we think of as “easy” (like using hundreds of muscles in careful unison to stand still while also breathing). Standing still only seems easy because millions of years of evolution have turned us into high-performance standing machines. To use Heidegger’s terminology, our unconscious skills are ready-to-hand. They work so well that we’re generally unaware of them; that’s why they’re unconscious. That we think of something as difficult shows that we are conscious of it and can’t do it well.

In his critique of B. F. Skinner,3 Chomsky argued that language processing, even though it feels like logical-symbolic reasoning, is nevertheless one such hard problem. Language seems easy because our skills are ready-to-hand, but it may well be the most complicated thing any of us ever does.

Dreyfus’s argument4 takes cues from Moravec and Heidegger. He suggested a difference between “knowing how” and “knowing that”, based on Heidegger’s ready-to-hand and present-at-hand. A beginner plays chess by “knowing that”: they know that a pawn can move in a particular direction, and they know that the game ends when the board reaches a particular condition. They play by thinking through a series of rules. It’s the sort of symbolic calculus that computers are historically good at. An experienced positional chess player has enough experience not to think about individual rules or pieces. They are able to glance at the board and describe the position using words like “balance” and “space” and “pressure”. This sort of judgment depends not on tens or hundreds of rules, but on the lived experience of tens of thousands of positions, much as making sense of visual stimuli depends on the lived experience of tens of thousands of prior moments of seeing. This is what Dreyfus would call “knowing how”. I speak English by knowing how. I speak Spanish by knowing how to speak English, then knowing that a set of rules will allow me to translate.

  • J. R. Lucas made a related argument from Gödel’s incompleteness theorems: no consistent formal system can establish its own Gödel sentence, which he took to show that minds cannot be mere machines.
  • Dreyfus also stressed embodiment: “knowing how” depends on having a body situated in a world, something classical symbol-processing programs lack.

but we can solve some hard intelligence problems

In the thirty years since Hubert Dreyfus started writing about intelligence, artificial intelligence research has shifted towards systems for knowing how. Within the field, I’d relate that change to the debate between computationalism and connectionism. Computationalists held that intelligence was fundamentally based on the logical manipulation of symbols, using a sort of language of thought. Connectionists argued that intelligence is based on large networks of simple nodes, and that the ability to manipulate symbols and logic emerges from enough relationships between those nodes. The connectionists won. Research has shifted from expert systems to neural networks, which grew out of work on associative learning by the neuropsychologist Donald Hebb5: if you have enough nodes, and any signal passed between two nodes strengthens the path between those two nodes, the system can learn to do almost anything.
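Hebb’s rule is simple enough to sketch in a few lines of Python. This is only a toy illustration, not Hebb’s own formulation: the node count, learning rate, and activity pattern are invented, and realistic Hebbian models add details like weight decay and normalization.

```python
import numpy as np

# Toy Hebbian learning: nodes that are active together strengthen
# the connection between them; unused connections stay weak.

n_nodes = 4
weights = np.zeros((n_nodes, n_nodes))  # connection strengths, initially none
eta = 0.1                               # learning rate

# Present the same activity pattern repeatedly: nodes 0 and 1 fire together.
pattern = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(50):
    # Hebb's rule: dw_ij = eta * a_i * a_j (no self-connections)
    dw = eta * np.outer(pattern, pattern)
    np.fill_diagonal(dw, 0.0)
    weights += dw

# The path between the co-active nodes is now strong.
print(weights[0, 1])  # roughly 5.0: fifty updates of 0.1
print(weights[0, 2])  # 0.0: these nodes never fired together
```

After enough repetitions, the network’s weights encode the statistics of what it has experienced, which is the sense in which such a system “knows how” without ever being told “that”.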

Connectionist research has been hugely successful at tackling the sorts of ready-to-hand problems that Moravec showed were difficult. According to a widely cited paper called “A Neural Algorithm of Artistic Style”, in “key areas of visual perception such as object and face recognition near-human performance was recently demonstrated by … Deep Neural Networks.”6

That paper’s particular focus is the visual process of identifying the style of one image, like a Van Gogh painting, and applying it to another image, like a photo of a starry sky. The authors write:

“In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image. Thus far the algorithmic basis of this process is unknown and there exists no artificial system with similar capabilities.”7

They showed that even though the authors (or anybody, for that matter) do not know the symbolic basis of style transfer, their network was able to learn and apply it anyway. This is wonderful news for artificial intelligence: computers can learn how to do things even when we don’t know how to teach them. Indeed, computers can learn how to do things that we don’t know how we do ourselves. Apply that realization to language, and it follows that we could teach computers to learn a language before we understand how we learn languages. Networks can already perform many medical diagnosis and prediction tasks more effectively than humans can.
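The structure of the paper’s objective is easier to see in miniature. The sketch below is a drastic simplification of the Gatys et al. method: tiny random matrices stand in for images, raw values stand in for the deep network’s feature maps, and finite differences stand in for backpropagation. Only the shape of the loss, a content term plus a Gram-matrix style term, follows the paper.

```python
import numpy as np

def gram(features):
    # Style is captured by correlations between features, not their positions.
    return features @ features.T

def total_loss(x, content, style, alpha=1.0, beta=0.01):
    content_loss = np.sum((x - content) ** 2)          # match content directly
    style_loss = np.sum((gram(x) - gram(style)) ** 2)  # match style statistics
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(0)
content = rng.normal(size=(4, 4))  # stand-in for the photograph
style = rng.normal(size=(4, 4))    # stand-in for the painting
x = content.copy()                 # start the synthesis from the content image

# Crude gradient descent via finite differences (the paper uses backprop).
h, lr = 1e-5, 0.001
for _ in range(200):
    grad = np.zeros_like(x)
    for i in np.ndindex(x.shape):
        xp = x.copy()
        xp[i] += h
        grad[i] = (total_loss(xp, content, style) - total_loss(x, content, style)) / h
    x -= lr * grad

# x now trades off fidelity to the content against the style statistics.
```

The point of the toy is the same as the paper’s: nothing in the loss says *how* to blend content and style symbolically; the blend simply falls out of minimizing the two terms together.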

Another development I find promising is the decentralized autonomous organization. I’ll avoid getting too technical, but permanent cryptographic stores of value like Bitcoin have made it possible for computer programs to enter technically (rather than legally) binding contracts with humans, businesses, or other computer programs. A decentralized autonomous Facebook, for example, could take its ad revenue and pay its own operating costs to whichever hosting provider offers the best deal at any given moment, moving its own physical presence between servers with the tides of market rates.

More interestingly, that autonomous Facebook could buy its own upgrades, purchasing whichever software changes lead to the greatest tested increase in profit. The software could be sold by humans or companies, but it could also be sold by another decentralized autonomous organization.
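In miniature, the upgrade-buying loop might look something like this. Everything here is hypothetical: the sellers, prices, and “tested profit” figures are invented for illustration, and a real organization would execute this logic as on-chain contract code rather than a Python script.

```python
# Toy sketch of an autonomous organization choosing its own upgrade:
# buy whichever offer yields the greatest tested net gain in profit.

def pick_best(offers):
    """Choose the offer whose measured profit gain most exceeds its price."""
    return max(offers, key=lambda o: o["tested_profit_gain"] - o["price"])

upgrade_offers = [
    {"seller": "human contractor", "price": 500, "tested_profit_gain": 700},
    {"seller": "software vendor",  "price": 900, "tested_profit_gain": 1400},
    {"seller": "another DAO",      "price": 300, "tested_profit_gain": 250},
]

best = pick_best(upgrade_offers)
print(best["seller"])  # software vendor: net gain 500 beats 200 and -50
```

Nothing in the rule cares who (or what) wrote the software being purchased, which is precisely what makes the higher-order network interesting.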

This provides an important and powerful new model for “learning”, but it requires a new epistemology. A decentralized autonomous organization is a network of a higher order than, say, a neural network running on a particular machine. It exists independently of the nodes that comprise it: it’s the same organization whether the current version was written by a human or a machine, and whether it operates on one server in one country or another, or (more likely) some combination of the above.

The sociologist Bruno Latour provides an epistemology that supports these complex networks: his actor-network theory. Graham Harman describes Latour’s position in The Prince and the Wolf, a published, public conversation between Latour and Harman, as follows:

“We have actors, probably the most important concept of his philosophy. Actors are obviously different from traditional substances, the most famous version of objects in the history of philosophy. Actors come in all sizes. The London School of Economics can be an actor, and so can an atom or a piece of paper. Latour is not distinguishing between substance and aggregates the way that Leibniz did, where a circle of men holding hands cannot possibly be a substance because it is merely an aggregate of many individuals. For Latour every individual is already an aggregate to begin with.”

This view, wherein an object can be (indeed, must be) an aggregate of other objects or networks, allows for complicated structures like decentralized autonomous organizations.

and creativity is easier than we might think

In his 1964 book The Act of Creation8, Arthur Koestler posited that all creativity follows a pattern of bisociation: taking two things from different frames of reference and juxtaposing them. The juxtaposition can be a comparison, a metaphor, a categorisation, a composition, or what have you.

In The Way We Think9, Mark Turner and Gilles Fauconnier expanded Koestler’s ideas into what they called conceptual blending. Turner wrote in The Literary Mind10 that “conceptual blending is a fundamental instrument of the everyday mind, used in our basic construal of all our realities, from the social to the scientific.” Scott Berkun’s widely read book The Myths of Innovation puts it succinctly: “all ideas are made from other ideas”.11

Of course, this idea itself is made of other ideas. Our textbook quotes Hume:

“Nothing is more free than the imagination of man; and though it cannot exceed that original stock of ideas, furnished by the internal and external senses, it has unlimited power of mixing, compounding, separating, and dividing these ideas, in all the varieties of fiction and vision.”12

Hume believed that all ideas are the result of combining and comparing other ideas. Even Henry Ford independently arrived at that same idea. He said:

“I invented nothing new. I simply assembled the discoveries of other men behind whom were centuries of work. Had I worked fifty or ten or even five years before, I would have failed. So it is with every new thing. Progress happens when all the factors that make for it are ready, and then it is inevitable. To teach that a comparatively few men are responsible for the greatest forward steps of mankind is the worst sort of nonsense.”

If creativity does not depend on the creation of new ideas from whole cloth but rather the synthesis of existing ideas, then it’s quite reasonable to imagine that a network could perform creatively.

Hobbes thought similarly, but he took it further: not only is creativity the deterministic product of operations on prior input but, in fact, so is all human action. Our textbook quotes Tuck:

“In Hobbes’s deterministic view of human behavior, there was no place for free will. People may believe they are ‘choosing’ because at any given moment one may be confronted with a number of appetites and aversions and therefore there may be conflicting tendencies to act. For Hobbes, will was defined as the action tendency that prevails when a number of such tendencies exist simultaneously. What appears to be choice is nothing more than a verbal label we use to describe the attractions and aversions we experience while interacting with the environment. Once a prevalent behavioral tendency emerges, ‘freedom’ is simply ‘the condition of having no hindrance to the securing of what one wants’”13
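Hobbes’s definition is mechanical enough to render as a toy decision rule: the “will” is simply whichever of the simultaneous appetites and aversions prevails. The tendencies and their strengths below are invented purely to make that structure concrete.

```python
# Hobbes's account of will as a decision rule: given simultaneous
# appetites (positive strengths) and aversions (negative strengths),
# the "will" is just the tendency that prevails. No deliberating agent
# appears anywhere; the outcome is fully determined by the inputs.

tendencies = {
    "approach food": 0.6,   # an appetite
    "avoid danger": -0.9,   # an aversion, felt more strongly
    "rest": 0.3,            # a weaker appetite
}

# The prevailing tendency is the one felt most strongly, appetite or aversion.
will = max(tendencies, key=lambda t: abs(tendencies[t]))
print(will)  # avoid danger
```

On this picture, “choice” is just the label we attach afterwards to whichever tendency won.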

Contemporary neuroscience shares Hobbes’s view. As we learned from an in-class video presentation, decision itself may be a myth.

Here’s an exercise: be completely still. Do your best. You can still feel parts of your body moving and twitching. When I do it, I feel myself intending to twitch, but only right at the moment when it happens. And of course, I know I didn’t really intend to twitch, since I’m trying to remain still. We learned from the video that the first brain signals of an impending behavior can be detected as much as a second before the behavior’s execution, long before I perceive myself “intending” to twitch.


Dreyfus, Searle, and others have shown that solving artificial intelligence problems like creativity is very difficult. However, we have solved other problems that Dreyfus identified as hard, including some directly related to artistic creativity. Additionally, we have found that creativity is not about a “eureka!” moment where one produces a completely original idea from nothing. Rather, it’s a deterministic process based on operations on many billions of sensory inputs over a long period of time.

“Many billions” and “a long period of time” sound difficult, but we’ve found that this is the sort of problem we can solve without even fully understanding it. Especially with the additional power offered by decentralized autonomous organizations, I see several clear paths towards computer creativity.


Berkun, S. The Myths of Innovation, 2010.

Chomsky, Noam. “A Review of B. F. Skinner’s Verbal Behavior.” In Readings in the Psychology of Language, 142–43, 1967.

Dreyfus, H. L., and S. E. Dreyfus. Mind over Machine, 1986.

Fauconnier, Gilles, and Mark Turner. The Way We Think, 2002.

Gatys, Leon A., Alexander S. Ecker, and Matthias Bethge. “A Neural Algorithm of Artistic Style.” arXiv preprint, 2015.

Hebb, Donald O. The Organization of Behavior, 1949.

Hergenhahn, B R, and T B Henley. An Introduction to the History of Psychology, 2014.

Koestler, Arthur. The Act of Creation, 1964.

Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence, 1988.

Turner, Mark. The Literary Mind, 2011. doi:10.1093/acprof:oso/9780195126679.001.0001.

  1. Hergenhahn and Henley, An Introduction to the History of Psychology.

  2. Moravec, Mind Children: The Future of Robot and Human Intelligence.

  3. Chomsky, “A Review of B. F. Skinner’s Verbal Behavior.”

  4. Dreyfus and Dreyfus, Mind over Machine.

  5. Hebb, The Organization of Behavior.

  6. Gatys, Ecker, and Bethge, “A Neural Algorithm of Artistic Style.”

  7. Ibid.

  8. Koestler, “The Act Of Creation.”

  9. Fauconnier and Turner, The Way We Think.

  10. Turner, The Literary Mind.

  11. Berkun, The Myths of Innovation.

  12. Hergenhahn and Henley, An Introduction to the History of Psychology.

  13. Ibid.