The biological faculties we are endowed with define the ways in which we perceive the world.
We experience its myriad colors and hues due to our sensitivity to waves in the visible portion of the light spectrum. We are attuned to rustling winds, chirping birds, barks and voices thanks to our eardrums, which convert pressure waves in the air into interpretable sounds. Over time, we’ve built on these natural faculties through the invention of language, producing repeatable sounds that help us communicate across distances, and the invention of writing, converting audible messages into visual ones that let us communicate across time.
However, all of these are just the tip of the iceberg.
So much of the modern world operates in media and at scales entirely invisible to us. Whether it’s the ubiquitous radio waves carrying Internet packets or the microscopic transistors interpreting them, humans are deaf and blind to the signals and scales of modern technology.
It needn’t always be this way.
New devices are constantly enabling us to extend our faculties of perception, communication and even manipulation.
From telescopes that let us see far out into the Milky Way, to smartphones and imagined uplinks that can carry our voices thousands of miles, to robotic arms that execute our hand movements far beyond where our bodies can reach, we are on a constant mission to perceive and interact with the world in new ways.
Indeed, as the nature of our interaction with the world changes, so too will our primary methods of sensing and communicating. Imagine, for instance, seeing the full range of the electromagnetic spectrum, being able to visually identify the flow of digital communications, radiation or heat. Imagine being able to peer into the very molecular structure of the objects around you through microscopic vision. Imagine being able to instantly understand the contents of someone’s thoughts without having to hear or read a single sentence.
While these might seem implausible, or like faculties fundamentally unfit for us, such sensory capabilities are all theoretically possible because of the nature of our brains.
For reasons we cannot yet fully explain, the human brain is a tremendously dynamic and adaptable organ.
Over the course of its lifetime, the brain can do everything from learning how to process data from new input sources to transferring brain functions from one region to another. This feature is called neuroplasticity, and it’s part of the reason that neuroscientist David Eagleman calls the human brain “a general purpose computing device.”
One major clue here is that the cortex appears to be composed of “cortical columns,” or repeating general-purpose learning circuits. Each one consists of the same six or so layers, with the same cell types, and the same pattern of how each layer connects to the others. This circuit is consistent across the cortex, with very few differences between the cortical columns in the visual cortex, motor cortex, auditory cortex, prefrontal cortex, and so on.
As long as you can find some new channel to feed the brain information, the natural circuitry of the brain will take care of the rest.
This phenomenon was first demonstrated in 1969 in a paper called “Vision Substitution by Tactile Image Projection.” In it, researchers used cameras to capture an image, then reproduced that image through a grid of tactile sensors on a patient’s back. With a bit of practice and time, patients could begin to distinguish what the camera was looking at by developing a sensitivity to the tactile patterns they felt. In other words, they could learn to see through touch.

A contemporary version of this experiment arrived in 2014 with the BrainPort device, which sought to restore vision to patients who had lost their ability to see after a stroke. Patients were fitted with a camera mounted on a pair of glasses, and the image feed was attached to a multi-electrode array placed on the patient’s tongue. Just as with the haptic grid, in time the brain began to translate the electrical stimulation on the tongue into legible images, enabling patients to see through their tongues.
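To make the mechanism concrete, here is a minimal Python sketch of the core transformation both experiments rely on: downsampling a camera frame into a coarse grid of stimulation intensities. The function name, grid size, and block-averaging scheme here are illustrative assumptions, not details from either device.

```python
import numpy as np

def frame_to_stimulus_grid(frame: np.ndarray, grid_size: int = 20) -> np.ndarray:
    """Downsample a grayscale frame (pixel values 0-255) into a
    grid_size x grid_size array of stimulation intensities in [0, 1]."""
    rows = np.array_split(np.arange(frame.shape[0]), grid_size)
    cols = np.array_split(np.arange(frame.shape[1]), grid_size)
    grid = np.empty((grid_size, grid_size))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # Each cell's intensity is the mean brightness of its pixel block.
            grid[i, j] = frame[np.ix_(r, c)].mean()
    return grid / 255.0

# A bright diagonal stripe in a 200x200 frame maps to a diagonal of
# strongly driven cells on the 20x20 grid -- a pattern a wearer can
# learn to read as a line slanting across the visual field.
frame = np.zeros((200, 200))
np.fill_diagonal(frame, 255)
print(frame_to_stimulus_grid(frame).round(2))
```

Everything downstream of this mapping, the part that turns coarse tactile patterns back into “seeing,” is handled by the brain itself.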
The inherent plasticity of the brain, alongside increasingly capable and miniaturized sensing and communication devices, allows for a near-infinite number of ways for the brain to perceive physical phenomena and interact with data.
After all, the true nature of the world is an enigma, interpretable to us only by the narrow sensory channels we’ve been endowed with.
This kind of technology has a very broad range of applications; it’s not hard to argue that writing, sign language, and data visualization are all much older technologies that work via similar principles, even if they are far more “low-tech.” In Ponds, the emphasis falls on a more social form of the technology, one focused on personal communication.
It could be that in the not-so-distant future, artists and observers won’t be limited to the instruments of paint or pianos, but could very well produce works by manipulating the interactions of radio waves, dabbling in the infrared spectrum, or making music through fluctuations in magnetic fields.
Perhaps, in a time when we have far more choice over exactly how our eyes see and our ears hear, we will come to truly understand the meaning of the phrase “beauty lies in the eye of the beholder.”
This companion piece was written by Anna-Sofia Lesiv, who writes for Contrary’s Foundations and Frontiers publication.
Her personal website can be found here, and she can be found on Twitter/X as well.
The art for Ponds was created by Michael Simmons.
This essay is a nonfiction companion piece to last week’s story, Ponds, by Orion Ruffin-Green. Ponds can be read here: