This post was originally published by Lance Eliot, Contributor at Forbes (Innovation)
Why did the chicken cross the road?
To befuddle those newly emerging self-driving cars.
And, maybe, just maybe, in hopes of reaching the other side of the street.
This is a bit of a tongue-in-cheek twist on a legendary and familiar joke, but there are nonetheless some useful and quite serious insights to be gleaned from this otherwise seemingly lighthearted topic.
Speaking of chickens, you might have seen in the news recently that some rambunctious and altogether live chickens in New Jersey were running loose in a McDonald’s parking lot. Some drivers in the drive-thru line were annoyed when the chickens opted to peck at their car tires. Guess those drivers were hoping for sedate chicken sandwiches more so than rowdy chickens wreaking havoc (the police responded to a 911 call and nabbed the perps before they could fly the coop, or so it was reported).
Chickens can be quite serious business.
As we approach the advent of self-driving cars, various nuances and qualms still need to be worked out, including the seemingly innocuous matter of chickens crossing the road, along with anything or anyone that perchance crosses the road in front of an oncoming self-driving car.
Do not, though, get into an undue panic about chickens and other living creatures that might find themselves staring at the headlights of a self-driving car looming upon them.
The hope is that self-driving cars will be at least as safe as, and presumably safer than, human drivers. Logically, we can clear out the count of car crashes that occur due to drunk driving, simply because AI driving systems are not going to drink and drive. There is also the aspect that many of today’s car accidents stem from distracted driving by humans, which will thankfully be avoided since the AI driving systems will be devoted to the driving chore and not watching cat videos or sneaking glances at text messages while at the wheel.
All told, there are about 40,000 fatalities each year in the U.S. alone due to car crashes, and approximately 2.3 million injuries (see my collection of driving stats at this link here). Those numbers will inevitably be reduced once self-driving cars become available and prevalent. This adoption won’t happen overnight, and you can expect that the 250 million conventional cars in use today will take many decades to gradually be replaced by self-driving cars.
Also, for those that keep insisting we will achieve a goal of zero car crash-related deaths once there are self-driving cars, this is a decidedly false expectation and there will (sadly) still be car-related fatalities. The physics of cars is still valid, even for self-driving cars, and thus a pedestrian that suddenly darts in front of a self-driving car is going to get smacked, assuming that there were no viable means to stop in time or avoid the collision (for more of my discussion on this, see the link here).
This brings us back to those vaunted chickens.
The reason that I’m picking on chickens, well, actually providing some added fame to chickens, dives into how self-driving cars are being crafted to try and detect and avoid objects in the roadway. Human drivers know that there are lots of ways in which objects get into the path of their moving cars. You might be on a long stretch of highway and spy a tumbleweed that is rolling and dancing across the roadway. Or maybe a truck ahead of you has some pieces of discarded wood from a construction site that manages to flop off the back of the truck and onto the highway, scattering and bouncing around to create a dangerous obstacle course.
When confronting non-living objects, you don’t have to decide whether hitting the object involves sparing a life. You are certainly worried about your own life as a driver, trying to keep from hitting the object in a way that causes you to lose control of your car or otherwise swerve and start a violent, potentially deadly cascade of cars crashing into each other. The lives of those in your car and any nearby people will abundantly come to mind, but the non-living object is not in the living category and therefore can be struck if so needed.
I dare say, even if the object is the revered painting of the Mona Lisa, presumably striking that inanimate thing is at a lower priority than saving the lives of humans (for those that cherish famous paintings, please do not get upset at this example, and be relieved that the chances of having a famous artwork at severe risk in the middle of the road seem as remote as living on Pluto, one so assumes).
The matter of confronting a living object that perchance has come forth in front of your moving car is an altogether tougher proposition (I’ve explored this topic extensively, a vital AI Ethics aspect, including for example the maligned but useful Trolley Problem, see the link here).
What do you do when that dreaded moment arrives?
If the living creature is a beloved type, such as a dog or cat, presumably you will take more desperate measures to avoid it. Suppose the living beast was a rat, I’d guess that most drivers would not give much thought to striking the interloper, assuming that doing so was the safest way to proceed and there were no other means to either stop in time or swerve to avoid striking the rodent (admittedly, there are some that might intentionally seek to strike it, depending upon prior biases and already formed opinions).
The use case of a human in the roadway requires added attention, which is not to suggest that animals of all kinds ought not to be given proper due consideration. Generally, it seems legally prudent to assert that avoiding a human while driving a car has got to be at the top of the driving tasks assigned via our laws and customs. You need to take whatever plausible avoidance tactics can be taken and try mightily to avoid such a collision.
Let’s though get back to those chickens since they are waiting around for this discussion to turn in their direction.
Here is an interesting question: Will chickens be able to successfully cross the road in an era of self-driving cars?
Seems like a bit of an off-the-wall question (a reader kindly brought this weighty topic to my attention), and perhaps by delving closely into the answer we can find some notable points about the emergence and use of self-driving cars.
Time to unpack the matter and see.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we must not be misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Chickens Crossing
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Imagine the following scenario.
A self-driving car is making its way along a rural road. There are farms nearby. A chicken has wandered afar, circumventing whatever perimeter might have existed to keep it fenced in. The chicken opts to meander out onto the roadway.
Maybe it spotted something enticing on the other side of the road and decided it made sense to skedaddle over to it. Perhaps the chicken liked the roadway surface and figured it would be fun to do a bit of a chicken dance on it. There is a plethora of reasons for that chicken to end up in the roadway. We don’t know for sure what is taking place inside the mind of the chicken, so we’ll need to settle on the plain fact that there is a chicken in the road, for whatever reason or basis it has chosen to be there.
Consider two varying scenarios.
In one scenario, the chicken has been standing on that road for a while, and thus from a sizable distance, you could potentially spot the chicken long before you reach its position. We’ll refer to this as the dillydally chicken.
A contrasting scenario is the instance of a chicken that was well-hidden off the side of the road, perhaps by some clump of bushes or whatever else might be there, and it suddenly and dramatically struts out into the roadway with essentially no due notice and just as a car happens to be zipping along. We’ll refer to this as the spontaneous chicken (some might say it is the dead-duck chicken, though that seems like a foul pun).
What would a self-driving car do in these two scenarios?
We need to first realize that the chicken needs to be detected.
The AI driving system makes use of a suite of specialized sensors, typically consisting of a mix of video cameras, radar, LIDAR, ultrasonic units, thermal image devices, and so on. These are continually scanning the driving environment and the collected data is analyzed by the AI driving system. For example, video images are examined to visually identify recognizable shapes and objects. Most AI driving systems make use of Machine Learning and Deep Learning (ML/DL), a kind of computational pattern matching that tries to ascertain whether objects being sensed are akin to training sets that have included those shapes and patterns.
If the ML/DL training set did not encompass chickens, this implies that the AI driving system will not recognize what the object in the roadway is, though it nonetheless might still ascertain that there is an object there. Some argue that it doesn’t matter if an AI driving system can connect the presence of an object to the nature of the object. Simply detecting an object is sufficient, they argue, and there is no particular need to go beyond the barebones fact that an object is there in the road.
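To make the detection-versus-identification point concrete, here is a minimal sketch of how a perception stack might label a detection. The class names, confidence scores, and threshold are purely illustrative assumptions on my part, not any vendor's actual system.

```python
# Hypothetical sketch: labeling a detected object from per-class
# classifier scores. Classes, scores, and the threshold are
# illustrative assumptions, not a real perception API.

CONFIDENCE_THRESHOLD = 0.6

def label_detection(class_scores: dict) -> str:
    """Return the best class label, or 'unknown_object' when the
    classifier is not confident enough -- e.g., when chickens were
    absent from the training set."""
    best_class, best_score = max(class_scores.items(), key=lambda kv: kv[1])
    if best_score < CONFIDENCE_THRESHOLD:
        return "unknown_object"  # detected, but not identified
    return best_class

# A chicken the model never trained on: every class score stays low.
print(label_detection({"dog": 0.31, "pedestrian": 0.12, "tumbleweed": 0.22}))
# A well-represented class:
print(label_detection({"dog": 0.91, "pedestrian": 0.05}))
```

Note that the system still knows *something* is there even in the low-confidence case; the question the next paragraphs take up is whether that barebones fact is good enough.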
I think we can undercut that abysmal logic.
Suppose the object is a tumbleweed. Most of the time, you are willing to simply strike a tumbleweed, rather than making any oddball swerving actions that could endanger you and anyone else nearby. On the other hand, if the object is a dog or cat, as mentioned earlier you are likely going to want to take evasive action if at all feasible. We also have the special case of humans in the roadway.
The point is that if the AI driving system is only set up to find objects, without any semblance of trying to associate those objects with what they are, you are summarily treating all objects as pretty much a blob. Per the comments already made, not all blobs are the same. A blob that is a dog or cat is different from a blob that is a tumbleweed.
In short, we ought to expect and perhaps demand that any viable and bona fide AI driving system should be doing what it can to not simply detect objects, but also try to figure out what those objects represent.
There is another reason to identify an object definitively. Suppose the object is a piece of wood and is residing on the roadway, planted right there in the middle of your lane. It is reasonable to generally expect that a piece of wood is not going to move on its own. You can predict it is going to be stationary, all else being equal.
Consider a chicken that is standing in the roadway. Is it like a piece of wood and therefore we can anticipate it won’t move? Unless the chicken is pushing up daisies (and nailed to a perch, for those of you that are fans of Monty Python), we can abundantly anticipate that the chicken is going to go into motion.
In fact, the problem with a chicken is its unpredictability. A dog or a cat is probably going to be astute enough to try and get out of the roadway, and much of the time attempt to dart directly to the sides of the roadway. Unless you come upon a specially trained chicken, it does not seem as assured that the chicken will try to avoid the oncoming car, and even if it does do so, it will maybe circle around and not necessarily try to get out of the way by going directly to the sides of the road.
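One way to picture why identification matters to the rest of the driving stack is a lookup from identified class to planning assumptions, along the lines below. The class names, motion tags, and priority numbers are hypothetical, invented just to illustrate the wood-versus-chicken-versus-dog distinction drawn above.

```python
# Illustrative mapping from an identified object class to planning
# assumptions. All classes, motion tags, and priorities here are
# hypothetical examples, not a real planner's schema.

OBJECT_PROFILES = {
    "wood_debris": {"living": False, "motion": "stationary",    "avoid_priority": 1},
    "tumbleweed":  {"living": False, "motion": "drifting",      "avoid_priority": 0},
    "dog":         {"living": True,  "motion": "darts_to_side", "avoid_priority": 2},
    "chicken":     {"living": True,  "motion": "erratic",       "avoid_priority": 2},
    "pedestrian":  {"living": True,  "motion": "erratic",       "avoid_priority": 3},
}

def planning_profile(label: str) -> dict:
    """Unidentified blobs get a cautious middle-of-the-road default."""
    return OBJECT_PROFILES.get(
        label, {"living": None, "motion": "unknown", "avoid_priority": 2}
    )

print(planning_profile("wood_debris")["motion"])  # stationary
print(planning_profile("chicken")["motion"])      # erratic
```

The design choice worth noting is the default: an unidentified blob is treated cautiously rather than ignored, since the planner cannot rule out that it is alive.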
I apologize if that seems unfair to chickens as an overall characterization and please do know that some chickens would do just as well as a dog or a cat in such a circumstance. Hooray for chickens.
Anyway, we can dispense with the argument that there is no need for an AI driving system to do both detection and identification. The identification is also crucial and should not be discounted. By identifying the object, there is a greater chance that the rest of the AI driving system will do a better job of figuring out what driving actions to take.
Returning to the dicey scenario of the doomed chicken (earlier referred to as the spontaneous chicken, which hopefully has nine lives), this circumstance presents a quite difficult problem for the AI driving system. Assuming that the chicken gets detected, and assuming it is identified as a chicken, the time available for taking any evasive action is shortened and doesn’t allow for many choices.
Some form of calculus comes into play. Let’s assume there is a passenger inside the self-driving car and that any radical driving action by the AI is going to shake up the passenger. Furthermore, suppose that a last-minute braking effort is not going to save the chicken because the stopping distance available is insufficient.
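The stopping-distance part of that calculus is plain kinematics: the car covers ground during system reaction time and then brakes at some deceleration, giving d = v·t + v²/(2a). The speed, deceleration, and latency figures below are illustrative assumptions, not measured values for any actual vehicle.

```python
# Back-of-the-envelope stopping distance using the standard
# kinematic formula d = v*t + v^2 / (2a). The default deceleration
# and latency are illustrative assumptions.

def stopping_distance_m(speed_mps: float,
                        decel_mps2: float = 7.0,  # hard braking, dry asphalt (assumed)
                        latency_s: float = 0.5) -> float:
    """Distance covered during system latency plus braking to a stop."""
    return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

# At roughly highway speed (25 m/s, about 55 mph):
print(f"{stopping_distance_m(25.0):.1f} m")  # about 57 m
```

If the spontaneous chicken struts out well inside that distance, no braking profile saves it, which is exactly the bind described above.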
Should the AI driving system try to stop or not?
Should it just proceed and likely ram into the chicken?
If you insist that the self-driving car should come to an immediate halt, even though we’ve agreed that this won’t save the chicken, you are then willing to potentially harm the passenger, either via a whiplash effect or other possible injury.
But that poor and innocent chicken, you exhort!
Well, not wanting to seem callous, you ought to know that there are an estimated eight billion chickens consumed in the United States each year. In that context, what risk are you willing to assign to harming a human passenger versus causing that chicken to go to chicken heaven?
The first scenario of the dillydally chicken is presumably a lot easier to solve. The AI driving system hopefully detects the chicken at a lengthy distance, identifies the chicken as a chicken, and opts to gradually slow down the self-driving car. Upon reaching the chicken, the self-driving car tries to see if there is a means to maneuver slowly around the chicken, doing so without ruffling any of its feathers. This can be hopefully done without also ruffling the human passenger.
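The two scenarios can be boiled down to a toy decision rule: with lots of headroom relative to stopping distance, slow gradually and plan a gentle maneuver (the dillydally chicken); with little or none, the options collapse (the spontaneous chicken). The thresholds and action names below are hypothetical, purely for illustration.

```python
# Simplified decision sketch for the two chicken scenarios.
# The 3x margin and the action labels are hypothetical choices
# made for illustration, not any real planner's policy.

def choose_action(obstacle_distance_m: float,
                  stopping_distance_m: float) -> str:
    if obstacle_distance_m > 3 * stopping_distance_m:
        # Dillydally chicken: plenty of room to ease off and maneuver.
        return "slow_gradually_and_plan_maneuver"
    if obstacle_distance_m > stopping_distance_m:
        return "brake_firmly"
    # Spontaneous chicken: cannot stop in time; few choices remain.
    return "minimal_swerve_or_proceed"

print(choose_action(300.0, 57.0))  # dillydally case
print(choose_action(40.0, 57.0))   # spontaneous case
```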
Now that we’ve gotten those aspects onto our plate, let’s add some twists and turns.
The human passenger happens to see the chicken in the road. The AI driving system does not seem to be slowing down. Assume this is the dillydally chicken instance.
What should or can the human passenger do?
You might assume that the passenger can merely yell out to the AI driving system to slow down, darn it, before that chicken gets whacked. This raises a slew of other considerations.
First, is there a means for the passenger to warn the AI driving system about this predicament? Some automakers and self-driving tech firms are focusing solely on having passengers indicate a destination of where they want to be driven, and there is not much other interaction involved. The added interaction is something on the so-called edge or corner cases list, meaning that it will someday get attention but not now.
Second, even if there is AI interaction provided, should the AI driving system do what the passenger says? You might insist that the human is always right and always to be obeyed. Sorry, there are lots of possibilities of a human passenger being absolutely wrong, such as suppose they are drunk or suppose they are mistaken about what they see, etc. Blindly abiding by the wishes of a human passenger is not a sensible way to program the AI.
Third, most self-driving cars will undoubtedly be outfitted with an OnStar-like remote human agent access capability. The idea is that a passenger can invoke this capacity and discuss whatever is happening, and the remote agent might be able to somehow instruct the self-driving car accordingly. The problem with this approach is that you never know for sure that a communication line is going to be working; in a rural area such as this one, there might be little or spotty connectivity.
There is also the timing aspect that once again rears its ugly head.
Even the dillydally chicken would need to be spotted a heck of a long distance away, at least a country mile as they say, in order to allow time for the passenger to invoke the remote communication, get connected, explain the situation to the remote agent, and have the remote agent take some kind of action.
Regrettably, if the AI driving system didn’t detect the chicken, and you are betting that the passenger will be able to persuade a remote human agent to do something, the odds are that this wandering chicken is going to be on someone’s food platter later that day.
The life of a chicken is not an easy one.
There are quite a lot of additional permutations and combinations that can be divined about this chicken trying to cross the road.
For example, suppose the AI driving system misclassified the chicken, perhaps assigning it the label of a dog (if you are interested in AI self-driving cars misclassifying objects, see my piece about snowmen misclassified as pedestrians, in my January 2, 2021 column). If the AI driving system has been built to try and take riskier saving actions for a dog rather than a chicken, this suggests that the self-driving car will possibly go through some extreme maneuvers. The passenger gets heavily jostled.
When the passenger realizes that the AI driving system took this crazed evasive action for a chicken, there is likely going to be heck to be paid. Perhaps a lawsuit is filed against the automaker or self-driving tech firm. During the discovery process, it comes out that the AI misclassified the chicken as a dog. Oops. This could be an oopsie that wins the day for the passenger and gets the self-driving car maker into the doghouse.
I’ll leave things to your imagination to come up with the myriad of scenarios and how they can aid in revealing the nuances and societal aspects underlying the advent of self-driving cars.
One last point.
Why did the horse decide to cross the road?
Because the chicken needed a day off.
A round of applause for all of the chickens on our planet and their ongoing sacrifices for humanity, including when crossing the road, albeit coming under the (sometimes) watchful eye of human drivers and AI driving systems.