The grace of a running gazelle, the majesty of a migrating herd of wildebeests, the elegance of an outstretched wing in flight or the grand glissades of massive glaciers. These are nature’s lissome movements. There’s nothing else like them.
That hasn’t stopped us humans, however, from trying to replicate them with artificial intelligence (AI) and robotics. AI is a discipline that focuses on enabling machines to develop the same intellectual capabilities as humans (the software). Robotics, on the other hand, is the science of designing and building physical robots to improve automation and innovation (the hardware).
For decades, robotics engineers have labored, and invested many millions of dollars, in attempts to create a robot that can run or walk as well as an animal. And yet, many animals are capable of feats that would be impossible for the robots that exist today. But that may soon change. A new, four-legged robot has learned to avoid falls by spontaneously switching among “pronking”—an arch-backed, leaping gait used by animals such as springboks and gazelles—trotting and walking. That achievement marks a milestone for roboticists, as well as for biologists interested in animal locomotion.
Attaining superior sight—beyond that of human animals—is another area where robots are making strides. One new camera system allows ecologists and filmmakers to produce videos that accurately replicate the colors that different nonhuman animals see in natural settings. And, with the help of an underwater robot, a research team from New Zealand and the U.S. has gotten an unprecedented look inside a crevasse in the Kamb Ice Stream, which is revealing more than a century of geological processes beneath the Antarctic ice.
And in what could be the best possible use of AI and robotics, scientists are using these tools to filter vast amounts of online data from social media and websites that could support wildlife conservation.
Robot rambles
Wildebeests can migrate for thousands of miles over rough terrain; mountain goats can climb precipitous cliffs, finding footholds that don’t even seem to be there; and cockroaches can lose a leg and not slow down, say researchers at Simon Fraser University’s Department of Biomedical Physiology and Kinesiology in British Columbia, Canada. But no robot yet built exhibits anything like that endurance, agility and robustness.
To understand why robots lag behind animals, and to quantify the gap, an interdisciplinary team of engineers and scientists from leading research universities recently completed a detailed study of various aspects of running robots, comparing them with their animal equivalents, for a paper published in the journal Science Robotics in April 2024. They found that, by the metrics engineers use, biological components performed surprisingly poorly compared with fabricated parts. Where animals excel, though, is in their control and integration of those components.
The researchers each studied one of five different “subsystems” that combine to create a running robot: actuation, control, frame, power and sensing. They then compared them with their biological equivalents. Previously, it was commonly accepted that animals’ outperformance of robots must be due to the superiority of biological components. But the researchers found that, with only minor exceptions, the engineering subsystems outperform the biological equivalents—sometimes radically. What also became clear, though, is that when you compare animal movement to robot movement at the whole-system level, animals are amazing. And robots have yet to catch up.
More optimistically for the field of robotics, however, the researchers noted that if you compare the relatively short time robotics has had to develop its technology with the many millions of years over which animals have evolved, the progress has been remarkably quick. And they predict it will move faster still, because evolution is undirected while robot design can be deliberately corrected: a lesson learned on one robot can be quickly downloaded into every other robot. So, the engineering of robots can advance much more quickly than evolution—but evolution has had a massive head start.
Effective running robots would offer countless potential uses, such as carrying out searches in dangerous environments, handling hazardous materials or solving “last-mile” delivery challenges in a world designed for humans, one that wheeled robots often find difficult to navigate.
The researchers hope that their study will help direct future developments in robot technology, with an emphasis not on building a better piece of hardware but on understanding how to better control and integrate existing hardware. As engineers learn more about integration principles from biology, the researchers conclude, running robots will become as agile, efficient and robust as their biological counterparts.
Another study, published in the journal Nature Communications in April 2024, shows that robotic movement is advancing. With the help of a form of machine learning called “deep reinforcement learning” (DRL), a robot learned to transition from trotting to pronking to navigate challenging terrain with gaps ranging from five to nine inches. The study, led by researchers at the BioRobotics Laboratory in EPFL’s School of Engineering in Switzerland, offers new insights into why and how such gait transitions occur in animals.
Previous research postulated energy efficiency and musculoskeletal injury avoidance as the two main explanations for gait transitions. More recently, biologists have argued that stability on flat terrain could be more important. But animal and robotic experiments have shown that these hypotheses are not always valid, especially on uneven ground. So, scientists were interested in a new hypothesis for why gait transitions occur: viability (fall avoidance). To test their theory, they used DRL to train a quadruped robot to cross various terrains.
On flat terrain, they found that different gaits showed different levels of robustness against random pushes, and that the robot switched from a walk to a trot to maintain viability, just as quadruped animals do when they accelerate. And when confronted with successive gaps in the experimental surface, the robot spontaneously switched from trotting to pronking to avoid falls. Moreover, viability was the only factor that was improved by such gait transitions.
On most terrains, then, viability leads to the emergence of gait transitions, but energy efficiency is not necessarily improved. It seems that energy efficiency, previously thought to be a driver of such transitions, may instead be a consequence. When an animal is navigating challenging terrain, its first priority is likely not falling; energy efficiency comes second.
To model locomotion control in their robot, the researchers considered the three interacting elements that drive animal movement: the brain, the spinal cord and sensory feedback from the body. They used DRL to train a neural network to imitate the spinal cord’s transmission of brain signals to the body as the robot crossed an experimental terrain. Then, the team assigned different weights to three possible learning goals: energy efficiency, force reduction and viability. A series of computer simulations revealed that of these three goals, viability was the only one that prompted the robot to automatically—without instruction from the scientists—change its gait.
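To make that idea concrete, here is a minimal sketch, in Python, of how a single scalar reward might weight the three learning goals the team describes. It is an illustrative assumption, not the EPFL group’s actual code; all names and weight values are invented for the example.

```python
# A minimal, hypothetical sketch of weighting competing learning goals in a
# DRL reward; names and values are illustrative, not the EPFL team's code.

def step_reward(measurements, w_viability=1.0, w_energy=0.1, w_force=0.1):
    """Combine viability, energy efficiency and force reduction into one scalar.

    `measurements` is assumed to come from a physics simulator each time step:
      measurements["fallen"]      True if the robot's trunk hit the ground
      measurements["joint_power"] mechanical power drawn by the motors (watts)
      measurements["peak_force"]  largest foot-contact force this step (newtons)
    """
    # Viability: a large penalty for falling dominates the other terms, so the
    # learned policy switches gaits (e.g., trot to pronk) to stay upright.
    viability = -100.0 if measurements["fallen"] else 1.0

    # Energy efficiency: penalize actuator power draw.
    energy = -measurements["joint_power"]

    # Force reduction: penalize hard impacts that stress the structure.
    force = -measurements["peak_force"]

    return w_viability * viability + w_energy * energy + w_force * force


# Example step: the robot is upright, drawing 40 W, with a 90 N peak impact.
print(step_reward({"fallen": False, "joint_power": 40.0, "peak_force": 90.0}))
```

Because the fall penalty dwarfs the other terms, a policy trained on such a reward will sacrifice efficiency to stay upright, which matches the study’s finding that viability alone drove the spontaneous gait changes.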
The team emphasizes that this is the first learning-based locomotion framework in which gait transitions emerged spontaneously during the learning process, and that the robot achieved the most dynamic crossing of large, consecutive gaps yet demonstrated by a quadrupedal robot.
The researchers aim to expand on their work with additional experiments that place different types of robots in a wider variety of challenging environments. In addition to further elucidating animal locomotion, they hope that, ultimately, their work will enable the more widespread use of robots for biological research, reducing reliance on animal models and the associated ethics concerns.
Robot sight
Robot “sight” is also improving, giving us new views of the world—and of how other animals on Earth see it. While modern techniques in sensory ecology allow us to infer how static scenes might appear to animals, animals often make crucial decisions about moving targets (such as when detecting food items or evaluating a potential mate’s display). Now, a new camera system allows ecologists and filmmakers to produce videos that accurately replicate the colors that different animals see in natural settings, say scientists in a report published in the journal PLOS Biology on January 23, 2024.
Different animals perceive the world differently because of the capabilities of the photoreceptors in their eyes. For example, animals like honeybees and some birds can see UV light, which is outside the range of human perception. Reconstructing the colors that animals see can help scientists better understand how they communicate and navigate the world around them.
False color images give us a glimpse into this dynamic world, but traditional methods—such as spectrophotometry—cannot capture moving images, require specific lighting conditions and are time-consuming. To address these limitations, researchers developed a novel camera and software system that captures animal-view videos of moving objects under natural lighting conditions.
The camera simultaneously records video in four color channels: blue, green, red and UV. This data can be processed into “perceptual units” to produce an accurate video of how those colors are perceived by a given animal, based on existing knowledge of the photoreceptors in its eyes. After testing the system against a traditional method that uses spectrophotometry, the researchers found that the new system predicted perceived colors with an accuracy of more than 92%.
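The core computation can be sketched simply: treat each animal photoreceptor’s response as a weighted mix of the camera’s four channels, with weights derived from that receptor’s known spectral sensitivity. The Python below is a hypothetical illustration of that idea, not the published pipeline, and the matrix values are placeholders rather than real honeybee data.

```python
import numpy as np

# A simplified, hypothetical sketch: map the camera's four channels
# (UV, blue, green, red) into an animal's photoreceptor responses with a
# linear transform derived from known spectral sensitivities. The matrix
# values below are made-up placeholders, not real honeybee data.

# Rows = animal photoreceptor types (a honeybee's UV, blue and green);
# columns = camera channels (UV, B, G, R).
CAMERA_TO_BEE = np.array([
    [0.9, 0.1, 0.0, 0.0],  # UV receptor draws mostly on the UV channel
    [0.1, 0.8, 0.1, 0.0],  # blue receptor
    [0.0, 0.1, 0.7, 0.2],  # green receptor
])

def to_perceptual_units(frame_ubgr):
    """Convert a (height, width, 4) UV/B/G/R frame into a (height, width, 3)
    array of estimated photoreceptor catches ("perceptual units")."""
    return frame_ubgr @ CAMERA_TO_BEE.T

# Example: a tiny synthetic 2x2 frame with channel values in [0, 1].
frame = np.random.rand(2, 2, 4)
bee_view = to_perceptual_units(frame)
print(bee_view.shape)  # (2, 2, 3)
```

Applying such a transform frame by frame is what turns ordinary video into an animal-view video, and swapping in a different sensitivity matrix yields a different species’ view of the same scene.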
The system is built from commercially available cameras, housed in a modular, 3D-printed casing. The open-source software will allow other researchers to use and build on the technology in the future. This unique camera system will provide new avenues of research for scientists, and allow filmmakers to produce accurate, dynamic depictions of how animals see the world around them, say the report’s authors.
We’re also learning to understand the world better through enhanced sight in Antarctica. High in a narrow, seawater-filled crevasse in the base of Antarctica’s largest ice shelf, cameras on the remotely operated Icefin underwater robot relayed a sudden change in scenery. Walls of smooth, cloudy meteoric ice suddenly turned green and rougher in texture, transitioning to salty marine ice.
Nearly 1,900 feet above, near where the surface of the Ross Ice Shelf meets the Kamb Ice Stream, a U.S./New Zealand research team recognized the shift as evidence of “ice pumping”—a process never before directly observed in an ice shelf crevasse and important to its stability. They were looking at ice that had just melted less than 100 feet below, flowed up into the crevasse and then refrozen.
The Icefin robot’s unprecedented look inside this crevasse and its observations revealing more than a century of geological processes beneath the ice shelf are detailed in an article published March 2, 2023, in the journal Nature Geoscience.
Combined with recently published investigations of the fast-changing Thwaites Glacier—explored the same season by a second Icefin vehicle—this research is expected to improve models of sea-level rise by providing the first high-resolution views of ice, ocean and seafloor interactions at contrasting glacier systems on the West Antarctic Ice Sheet.
Icefin carries a full complement of oceanographic instruments on a modular frame more than 12 feet long and less than 10 inches in diameter. It was lowered on a tether through a borehole that the New Zealand team drilled through the ice shelf with hot water. During three dives spanning more than three miles near the grounding zone where the Kamb Ice Stream transitions to the floating Ross Ice Shelf, Icefin mapped five crevasses and the seafloor, while recording water conditions, including pressure, salinity and temperature. The team observed diverse ice features—such as golf ball-like dimples; ripples; vertical runnels; and “weirder” formations near the top of the crevasse, such as globs of ice and fingerlike protrusions—that provide valuable information about melt rates and water mixing.
Thwaites, which is exposed to warm ocean currents, is one of the continent’s most unstable glaciers. The Kamb Ice Stream, where the ocean is very cold, has been stagnant since the late 1800s. Kamb currently offsets some of the ice loss from West Antarctica, but if it reactivates, it could increase the region’s contribution to sea-level rise by 12%.
The ice pumping observed in the crevasse likely contributes to the relative stability of the Ross Ice Shelf—the size of France and the world’s largest by area—compared with Thwaites Glacier, the researchers said. It is, in effect, a way these big ice shelves protect and heal themselves: a lot of the melting that happens deep near the grounding line then refreezes and accretes onto the bottom of the ice.
On the seafloor, Icefin mapped parallel sets of ridges that the researchers believe are impressions left behind by ice shelf crevasses—a record of 150 years of activity since the Kamb Ice Stream stagnated. As its grounding line retreated, the ice shelf thinned, causing the crevasses to lift away; the ice’s slow movement over time then shifted the crevasses seaward of the ridges. Scientists can look at those seafloor features and directly connect them to what they saw on the ice base, in a way rewinding the process.
Antarctica is a complex of systems, and researchers are looking to understand both those already undergoing rapid change and quieter ones where future change poses a risk. Observing Kamb and Thwaites together helps them learn more.
NASA provided funding for Icefin’s development and the Kamb exploration as a precursor for journeys beyond Earth. Marine ice like that found in the Antarctica crevasse may be an analogue for conditions on Jupiter’s icy moon Europa, the target of NASA’s Europa Clipper orbital mission slated for launch in 2024. Later lander missions might one day search directly for microbial life in the ice.
Robot threat assessor
A new study published by England’s University of Sussex shows how researchers are using AI technology and social media to help identify global threats to wildlife.
Researchers at Sussex used AI to access online records from Bing, Facebook, Google and X/Twitter to map the global extent of threats to bats from hunting and trade. The new study demonstrates how social media and online content generated by news outlets and the public can help to increase our understanding of threats to wildlife across the world—and refocus conservation efforts.
The Sussex team developed an automated AI system that allowed them to conduct large-scale searches across multiple platforms, filtering tens of thousands of results to find relevant data. Any observations or anecdotes of bat exploitation were used to build a global database of “bat exploitation records.” Through this approach, the team identified 22 countries involved in bat exploitation that had not previously been flagged by traditional academic research, including Bahrain, New Zealand, Singapore, Spain and Sri Lanka.
To better understand threats to bats, the team compared academic records with online records, knowing that the data and information shared online are influenced by factors such as global events and where people have access to the Internet. AI provided a low-cost way to access data at scale and complete a global analysis, something that would not be possible with traditional field studies alone. Another benefit of combining online data with automated filtering is that more information can be obtained in real time, ensuring that scientists can keep on top of current threats.
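As an illustration only, a search-and-filter pipeline of the kind the study describes might look like the following Python sketch. The record structure, keyword list and filter are hypothetical assumptions; the Sussex team’s actual system used trained AI models and human verification at far larger scale.

```python
from dataclasses import dataclass

# A hedged, illustrative sketch of an automated search-and-filter pipeline of
# the kind the study describes. The record structure, keyword list and filter
# below are invented for the example.

@dataclass
class Record:
    platform: str  # e.g., "X/Twitter", "Facebook", "Google", "Bing"
    country: str
    text: str

# Hypothetical phrases suggesting hunting or trade rather than innocuous posts.
EXPLOITATION_TERMS = ("bat meat", "bat for sale", "bat soup", "bat medicine")

def is_exploitation_record(post):
    """Crude relevance filter: keep posts that mention bats in a hunting or
    trade context. A production system would use a trained text classifier."""
    text = post.text.lower()
    return any(term in text for term in EXPLOITATION_TERMS)

def build_database(posts):
    """Filter tens of thousands of raw search results down to candidate
    'bat exploitation records' for a global database."""
    return [p for p in posts if is_exploitation_record(p)]

# Example usage with two synthetic posts.
posts = [
    Record("Facebook", "Unknown", "Fresh bat meat available at the market"),
    Record("X/Twitter", "Unknown", "Saw a cute bat in the park today"),
]
print(len(build_database(posts)))  # 1
```

The payoff of such a pipeline is scale: once the filter exists, it can run continuously across platforms, surfacing new exploitation reports as they appear online.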
During the COVID-19 pandemic, the hunting and sale of bats for meat soared. But there is also a worrying trade in bats as curios or medicines, and species are often sold far afield from where they are found. It’s vital to know where bat exploitation is happening, which historically has been difficult because it often takes place in remote areas where illicit trade can be hidden. Such trade undermines bat conservation directly and poses a wider threat by increasing the risk of zoonosis. The scientists say this research shows that posts on the Internet and social media can provide vital evidence that can now be followed up on the ground.
Bats make up about a fifth of all mammal species globally and play a vital role in ecosystems as pest controllers, pollinators and seed dispersers. Over half of bat species are considered either “threatened with extinction” or “data deficient” by the International Union for Conservation of Nature (IUCN). Much less is known about the impact of hunting and trading bats than about such pressures on other mammals. Yet bats’ very low reproductive rate and long lives—usually 10 to 30 years—make them vulnerable on a scale more commonly associated with much larger mammals, such as bears, chimpanzees or lions.
Being able to expand knowledge of bat exploitation—and monitor how this wildlife trade operates—using crowd-sourced digital records can help identify bat populations most in need of conservation action and feed that information into global assessments, such as the IUCN Red List. Knowledge also offers a pathway for examining ways to disrupt that trade.
Robot heart
Whatever general opinion you have of robots, it’s clear that they are helping us understand—and learn lessons from—the natural world in a multitude of ways. Hopefully, side by side with the education they provide about the animals and landscapes that surround us, we are teaching them some compassion for the other more-than-humans out there.
Here’s to finding your true places and natural habitats,
Candy