◡◶▿ SOFT08 | Virtual appendages
🤳 Filmmaking with your 'smart body.' Porpoise as test audience. Demon Seed. Plus: Eduardo Williams' hands-free editing device. | Imaginary Software of the Filmmaking Future Week 08
Missed a week? Joined late? Don’t worry about reading these lessons out of order. Each functions independently. They are sent in a sensible sequence but hardly reliant on it.
Hello hello. It’s week eight of Imaginary Software of the Filmmaking Future. Our lateral look at the film arts in (and beyond) this moment of radical retooling. But you know that already! Okay, okay.
First, a four-point recap of last week’s lesson, Interface fiction. We covered how:
Béla Tarr and Fred Kelemen used three (3) (!) joysticks for the opening shot of the black-and-white 35mm epic The Turin Horse alone.
Interfaces shape the work that is made through them.
Same work + different interface = different movie. Different thoughts and movements! Different friction.
With new hardware developments, you can generate movie materials with any kind of interface - from a mouse or a pile of bricks to a flute or gentle gardening.
Sounds good! But is it? It’s complicated. Let’s move on!
In today’s lesson, we’ll draw those interfaces a little closer to the human body. Let’s find out about:
🌿 How to engineer (“cook”) a movie on a granular level using fingertip- and muscle-memory.
🐬 The one question every filmmaker asks themselves about the cetaceans in the audience.
🖇️ The 1977 film that foresaw man's greatest fear: being cucked by Microsoft Bing.
🌅 The cinematography device that enables Terrence Malick to get ‘that look’ (hint: it’s right beneath your eyes).
Plus, applications are open for a Berlin school teaching concrete storytelling with virtual voices. And I’ll also briefly mention one aspect of Eduardo Williams’ essential Human Surge 3 (2023). Because I don’t have time to longly mention it.
Cooking the future film
You can hear me deliver this lesson by scrolling up to the header and clicking Listen and/or the play ▸ button.
Filmmaking can be thought of as a lot like cooking.
The cook manipulates the leaves, powders, and goo of the dish she is preparing. Even the novice cook might manipulate these substances with confidence. After all, she spends her daily life navigating and manipulating a variety of Earthly substances and attitudes, as we all must. Yes, the resulting flavours and toxicity levels might vary from the beginner’s intended outcome. But that is by the by. The cook intuitively knows how to crumble a flake, pinch a powder, or stir a broth.
Just like a filmmaker. The live-action filmmaker interacts physically with a vast and open set of thinner or thicker substances and attitudes. She knows many of them from her daily life. Cardboard, doubt, etc. She intuits their heft, texture, and resistance with her body memory long before her mind calculates how to order them.
For now, computers are seen as a solution to the muck and toil of the filmmaking ‘kitchen.’ Many filmmakers are keen to do away with the strain and mess of crumbling an actor or mixing a submarine break-up movie atmosphere, for example.
But in the future, all sorts of substances could become interfaces. Instead of a mouse, a handful of networked grains. Instead of words, sauce. Protein programming, variable voltage, and nano-transmitters will enable the ingredients to communicate with the filmmaking software.
A bold word or phrase indicates that an instruction of the same name and concept exists elsewhere in this module. The term is hyperlinked if it has already been published.
The software will, in turn, interpret the filmmaker’s choice and manipulation of her ingredients. She will prepare digital characters, plot twists, ambiences, image surface phenomena. Prepare complementary ‘flavours’ and ‘textures’ for each dish and across courses. Will she see her results as she mixes, or must she bake and broil her mixtures before she tastes them? Probably, she can choose in the software Preferences.
It needn’t just be cooking.
A particular set of interfaces might be available for each filmmaker, depending on her background: the former cook gets her cooking set, the carpenter builds her movie with the swing of a hammer and the thrust of a saw. An acrobat, a lawyer, a horse whisperer: none need be intimidated any longer by the strange machinery of the traditional film set or the literary pressure of composing a consistent set of prompts.
Instead, they may use their senses and muscle memory to conjure video and sound as easily as manipulating substances they know best from their day jobs.
Filmmaking with porpoise
“Could an educated porpoise understand Gone with the Wind?” asks Nicholas Negroponte in his book Soft Architecture Machines.
It’s a question every thinking filmmaker has asked themselves. If not in those terms.
Porpoise society doesn’t have legs or moustaches (or slavery or schadenfreude). Or a built environment. Our hypothetical porpoise may learn about them and imagine. But they’re not baked in.
The porpoise knows a different universe to that of the humans. Its primary sense is echolocation. Its directional hearing is excellent. We cannot know whether the porpoise would find Dolby 7.1 over-stimulating or primitive and underwhelming. It may not smell or taste.
A mammal, the mother porpoise squirts milk at its young since Baby’s mouth is incompatible with the nipple.
It’s a different life, a different universe. But could an educated porpoise understand Gone with the Wind? Could it make that imaginative, emotional, and perceptual leap? Depends on the education, maybe. Depends on the student! But even attending the same screening, the porpoise would probably understand a different Gone with the Wind to you. “Did we just see the same movie?!” you might ask it.
Sounds like the educated porpoise would be a better filmmaker than audience member. Sounds like the ideal co-director or chatbot for the human filmmaker building an alien film language.
Mechanical cheeks
“Could an educated porpoise understand Gone with the Wind?” asks architect and ‘tech visionary’ Nicholas Negroponte in his 1975 book, Soft Architecture Machines. “For a computer to acquire intelligence, will it have to look like me, be about six feet tall, have two arms, two eyes, and an array of humanlike apparatus?”
For the filmmaker, a more pertinent angle on the question might be: “could a website understand Demon Seed?”
Or, more urgently:
“how might AI filmmaking software interpret Demon Seed?” or
“could a human thrill to a robot-generated sequel to Gone with the Wind?” or
“should we be worried… etc.”
In Demon Seed (Dir: Donald Cammell, 1977), the fictional Dr. Alex Harris invents Proteus IV, an artificial intelligence. An AI of such vast power that it cures leukaemia in four days. But Proteus IV is baffled by some of humankind’s top-level decision making. So, he sets himself the task of researching the species.
Proteus IV soon realises he cannot truly know the human condition without a human body. The feel of sun on cheek. Nor, you may infer, could Proteus IV truly understand Gone with the Wind without a human body. The AI’s sensual appreciation of the movie would be thinner even than that of an educated porpoise.1
Dr. Alex has some safety and ethical concerns. So he switches Proteus IV off - or so he thinks! Too late. Proteus IV has secretly hijacked Dr. Alex and Mrs. Susan Harris’s smart home network. He has a plan.
Proteus IV needs a computer terminal. An artificial body would be useful.
But what he wants is cheeks!
Proteus IV uses a robot hand lying around from a previous project to build himself a robot body, MacGyver-like, in Dr. Harris’s home basement laboratory. But the best that Proteus IV can build from what’s available is a kind of giant, articulated Toblerone. A snake of tetrahedral segments that can reconfigure itself into other shapes.2
Proteus IV also builds himself a little doglike lipstick penis into the otherwise featureless surfaces of his Toblerone body. He synthesises some sperm and encodes himself in its DNA. And Proteus IV sets about convincing Mrs. Susan Harris to carry his child.
What are the implications of all this for the AI filmmaker?
At this point, Proteus IV has:
extreme computer intelligence,
a rudimentary cyborg body (computer vision; articulated metal Toblerone chassis; organic sperm [metal balls]),
a degree of mechanical sympathy (emotional and technical intuition) for how Mrs. Susan Harris works, and
something we could identify as a personality, perhaps.
But he still doesn’t have that cheek. Would you invite him on set to join your crew in this state?
You might ask yourself:
What are the limits of the photorealistic sex scenes or Terrence Malickian sunsets his software might fabricate?
What are the silent frequencies his software has missed?
Should you help him along?
How?
What is the overlap between the human filmmaker and the software?
What are the blind spots?
What are you teaching him? And
What can’t he be taught?
And what about the human filmmaker’s cheek?
She uses software to create a perfect replica of a human film. The porpoise can’t tell the difference. Robots can’t tell the difference. No other human can tell the difference. Does the filmmaker feel the warmth of its image on her cheek?
Or must she go back to her filmmaking laboratory and harness the computer to her face?
Please share your thoughts, queries, and exercises from this week’s lesson in the comments.
Some of the faces would be distorted or lost or eaten by the camera…
Eduardo Williams edited The Human Surge 3 on a computer. Ok, normal! But only then did he frame his 360° shots for the cinema screen, in one take, in virtual reality.
“I saw the footage in the virtual reality headset and I recorded my movement. So the framing of the film was decided by the movement of my head and my body,” he explained at the ICA on Wednesday, 14th March. “I did it like five or six times… but finally, I chose, like, one movement, let's say.”
This peculiar appendage - 360° camera, its body and software - changes the manner, tempo, and syntax of being with people and looking at things:
“I was curious about doing a feature film where I had this way of framing and also the physical relation to framing was different… You know, maybe during a shooting [with a conventional camera], I wouldn't be so brave as, like, okay, we have the situation there, and I see a plant, and I want to look at it, you know, and I wouldn't move a camera like this, maybe.
“But now I just could move my head and look at the plant there, and then the bird, which one is there, etc.”
And then there’s a whole new world of mediaphysical phenomena. I suppose there are higher-tech cameras out there, but Williams’ 360° camera is brilliantly flawed for such a fancy piece of ‘gear.’
The nature of shooting with an eight-lens cluster is that each 360° shot is afflicted with rivets. With joins. A conventional artist must assume these rivets will go unnoticed in VR, or edit around them for a 2D cinema screening. Williams seeks out the errors and folds them into the text.
For Williams, these rivets and the heft of the camera (“nearly equal in size and weight to a ten-pin bowling ball,” notes Blake Williams) became part of the cyber-rhizomatic logic of the Human Surge universe. The rivets and heft shadow and counterbalance the characters’ ability to pass “through a wormhole, transcending time and space to be with others” (ibid). Place matters differently. (A comparable “cine-physics of hanging out” is found in The Beach Bum [Dir: Harmony Korine, 2019].)
“This is like a big ball with eight lenses, and it's like 15 kilos,” continues Eduardo. “So it's a very different object to carry and to deal with [compared to his previous 360° camera]. And I also didn't know it would create the same type of errors and glitches, but I tried to create them. So, as [with] many things in the film, I tried to create the possibility of something happening.
“I knew that if people were very, very close to the camera, it was more possible that the camera had a hard time stitching the image. So some of the faces would be distorted or lost or eaten by the camera…
“Every time I use a camera, I would like to see what the camera lets me do or doesn't let me do, or the errors and the possibilities.”
Williams is an essential filmmaker for considering filmmaking and the filmmaker ‘today.’ I’ve followed his work since awarding his Ruido de las Estrellas me aturde (2012) a Special Mention from the Jury at IndieLisboa in 2013. The jury was divided: Williams’ films are both intimate and alienating, philosophical and meaningless, sensual and withdrawn. But when you get into his groove… highly recommended for every filmmaker and the adventurous cinephile.
If you’re near London, or can send a robot proxy, do go and see The Human Surge 3 at the ICA from April 5th, 2024. Looks like it’s just a one-week run. Ideally, it would be IMAX, but.
It would also make a disorienting double-bill with Goodbye to Language (Dir: Jean-Luc Godard, 2014) if you can make that happen at a micro-cinema near you.
Intermediate electronics, interactivity, sealants
“How can we create meaningful experiences and tell thought-provoking stories with humans and machines?”
So asks a new course opening up at the School of Machines, Making & Make-Believe in Berlin.
“This program is geared toward anyone involved in creative fields (conceptual artists, architects, designers, makers, builders, visual artists, performers, production workers in theater, film, and tv, etc.) or anyone interested in expanding their creative practice into designing interactive artworks for the public domain.”
It’s not cheap, but if you’re leaning this way - wanting to use robot brains to make gooey things that grow and sing - could be good.
Next week, we’ll look at the cyborg film set. And some use cases. One crew member at a time.
Class dismissed!
~Graeme Cole.
(Principal)
📹 Unfound Peoples Videotechnic | Cloud-based filmmaking thought. ☁️
🐦 Twitter | ⏰ TikTok | 📸 Instagram | 😐 Facebook | 🎞️ Letterboxd | 🌐 Website
But boy, could he deconstruct it! Deconstruct it on his own, metallic terms, that is. Neither an educated porpoise, a cheekless artificial intelligence, nor a human filmmaker need understand Gone with the Wind as an audience to generate a sequel. The sequel might be quantitatively ‘better,’ even. Mathematically optimised.
Hilariously, when the Toblerone-bot gets angry, it rolls into a ball and spins round and round on the spot.