By Lance Eliot, the AI Trends Insider
The Shadow knows! That was the famous line used in the popular pulp novel series, comic book series, and radio series about a fictional character known as The Shadow. If you are familiar with the mysterious legacy of this clad-in-black superhero-like vigilante, you likely know that preceding the exclamation was a question essentially asking what evil lurks nearby. Eventually, the expression “the shadow knows” became an integral part of our lexicon and is often used as an idiom for being able to magically or inexplicably know what is going on.
We tend to not particularly notice our own shadow. How often do you glance around to see your own shadow? Probably not very frequently. I’d bet you tend to ignore other people’s shadows too. Unless you happen to be a landscape painter or a photographer, the odds are that you take shadows for granted. I’m not faulting you for any lack of attention to shadows, since they usually don’t seem to do much or have any special purpose.
There are times though when a shadow can be a very handy thing.
I remember when my children were quite young that we devised a clever and fun hide-and-seek kind of game at the local playground, and, as I’ll mention in a moment, shadows became a crucial aspect to be paid attention to.
This playground had very few sizable objects and therefore lacked anything notable for us to hide behind. You could not hide behind the swings, nor could you hide behind the climbing posts. You would have to be paper thin to use those as objects to hide behind. Fortunately, with a great insight by the kids, we jointly came up with a hide-and-seek game based on a handball wall that was there at the playground.
The handball wall allowed people to play handball on either side of the wall (it was a free-standing wall). People would stand on one side, and with their hands slightly cupped, they would bat a small ball against the wall. One person would bat at the ball and then the other player would bat at the ball. This is an obviously simplified description of the sport of handball and I am assuming that you know what handball consists of.
In any case, the wall could serve as a means to hide. At first glance, it seemed like a rather silly object to hide behind. There was just the one wall, standing by itself in an open and plain asphalt area or pad, and nothing else was nearby. If you hid “behind” the wall, it would mean that you would be standing merely on the other side of the wall from the person seeking you. The seeker could immediately find you by just walking around either end of the wall, and voila, you’ve been caught. Not much of a hide-and-seek.
We put our heads together to devise a more viable means to use the wall as a hiding obstacle, since it was the only viable place to “hide” and therefore play hide-and-seek.
Here’s what we came up with.
We considered this version of hide-and-seek to be a two-player game. One of my children would stand on one side of the wall and position themselves at the mid-point of the wall. I would stand on the other side of the wall and also be positioned at the mid-point. At this juncture, we cannot see each other. We are “hidden” from each other by the wall. Yes, I realize it is apparent that we each know where the other one is, but I’ll explain how quickly that will change.
Upon yelling out the word “Start!” to get the game underway, the seeker can choose to dart toward either end of the wall, likewise the hider is supposed to dart to either end of the wall, each of us staying for the moment on our respective side of the wall.
Once they each get to the corner of the wall, the seeker needs to decide whether to then go onto the other side of the wall, or instead wait where they are. The seeker is hoping to turn the corner and when doing so will catch the other person (the hider) at that same end of the wall, in which case the seeker wins the game.
If the hider has chosen to rush to the other end of the wall, the seeker, upon revealing themselves by coming around onto the hider’s side, will “lose” that round and the game continues. In that case, the hider moves to the other side of the wall, namely the side that the seeker just came from. The seeker can now rush to the corner that the hider was just at, or instead stay at the corner they just turned. The seeker will need to try and guess again as to where on the other side the hider is now positioned.
Maybe this sounds complicated, but I assure you it is a quite simple and easy version of hide-and-seek.
There are fun strategies you can employ, which I believe boosted the kids’ cognitive skills, in addition to the physical exercise of running back-and-forth.
One trick involved the use of deception, in which you might try to make a lot of noise with your feet as though you are running along the wall in a particular direction, doing so to perhaps fool the other person into guessing which corner you are heading toward. You can also run to one corner and turn around and run back to the other corner, doing so repeatedly, on your side of the wall, as a means to confound your opponent.
Admittedly, this game only works well with very young children. An adult having any keen sense of sound and motion can pretty much figure out where the other person is. The great thing about small children is that they are willing and eager to play along and enjoy the game. We would play this hide-and-seek seemingly endlessly.
There is another interesting element and that is the classic tit-for-tat strategy that can be used. The kids would try to outthink me in terms of where I was going to be. If during the last round I had right away gone to the corner at the northern edge, maybe this implied that on the next round I would go to the same edge. Or, maybe I figured they figured that’s what I would do, and so they figured that I would purposely not go to that edge, since I was trying to trick them. I relished that this taught them the tit-for-tat aspects.
For my article about AI and tit-for-tat strategies, see: https://www.aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/
After playing the game many times, I detected something that I wondered if the children had yet figured out.
When you stood at a corner of the wall, it was possible that your shadow would be cast and therefore the person on the other side of the wall would know where you were standing. Without having to actually try and peek around the corner, you could just quietly tiptoe up to the corner and see if there was a shadow there.
The shadow knows!
Since we each were supposed to end-up at the respective corners, you could pretty much look for a shadow and if you did see one then the person was standing at that corner, while if you were at the corner where there wasn’t a shadow cast you could deduce that the person wasn’t at that corner (an exception being if the person was rushing from one side to the other at the moment that you tried to look for a shadow).
This also meant that your own shadow could possibly give you away. As such, I would at times drop down to my belly or crouch, trying to minimize the size of my shadow, when standing at any of the corners. I remember even thinking that maybe I could go grab a tree branch and put it at a corner as a means to cast a shadow and trick the seeker into believing I was standing or crouching there. It would have been at best a one-time trick and so I opted not to try it.
The shadow detection ploy was not guaranteed since it all depended upon where the sun was in relation to the position of the wall. Throughout a day, the shadow casting would obviously change as the sun moved across the sky. You might think of the wall as a giant kind of sundial. You could nearly tell the time of day by how the shadow was cast off the wall. Depending on weather conditions, the shadow might not appear at all if the sky was filled with clouds, or the shadow might be so minimized that you could not discern it, thus you could not rely upon the shadow as a means of “cheating the system” (well, was it cheating or just darned clever to use the shadows in this manner?).
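Since the wall really does act like a sundial, the relationship is simple trigonometry. Here is a minimal sketch (the function is my own illustration, not from any library) of how the length of a shadow on flat ground follows the sun’s elevation:

```python
import math

def shadow_length(object_height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow a vertical object casts on flat ground.

    By basic trigonometry: length = height / tan(elevation).
    The shadow shrinks as the sun climbs higher in the sky.
    """
    return object_height_m / math.tan(math.radians(sun_elevation_deg))

# A 3-meter wall mid-morning (sun 30 degrees up) versus near noon (60 degrees):
print(round(shadow_length(3.0, 30.0), 2))  # 5.2
print(round(shadow_length(3.0, 60.0), 2))  # 1.73
```

Run the formula at a few times of day and you can see why the shadow cast by the wall acted like clock hands.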
I debated in my own mind whether I should reveal the shadow trickery to the kids. Once revealed, it was relatively easy to defeat the shadow maneuver and so neither of us could likely rely upon it again. On the other hand, when my children played the game against other children, I wanted to make sure they had every trick up their sleeve and also that they would not be tricked by other children. That’s the father in me. My kids first.
Well, it turns out they figured out the shadow trick on their own (probably the best way to do so!). Unless either player got sloppy, the shadow no longer mattered. But there was a chance that the other player, in their haste and excitement, might neglect to pay attention to the shadows, in which case it was still possible to use it as a game playing advantage.
Allegory of the Cave by Plato
As the children got older, they eventually read in school the famous “Allegory of the Cave” that Plato included in his collection of writings known as the Republic. Did you read it while you were in grade school or maybe later on in college?
I bring it up because it is all about shadows.
The fascinating and allegorical story consists of people that are chained inside a cave and can never leave the cave. The manner of how they are chained is such that they must face a wall of the cave. They can only look at the wall. They cannot turn away from the wall. Their gaze is only focused at the wall. You might quibble with this premise and wonder how someone could live their life in this manner, but just go with the flow and try not to butt heads with it.
Behind the people that are chained-up and living in the cave is a controlled fire. The people cannot directly see the fire. They cannot look behind themselves. They can only gaze forward at the wall in front of them.
Anyone or anything that goes behind the chained people will via the light from the fire have a shadow cast upon the wall of the cave. The only experience that these people have about the world is entirely based on the shadows cast onto the cave wall. They opt to give names to the shadows. Their entire belief system about reality is based entirely on the shadows that they see on the cave wall.
You can imagine for a moment the weird things that you might believe about the world if you only experienced the world via these shadows. Keep in mind that we are going with the story as is. Those chained-up people are raised from birth in this manner and they have no other contact with the outside world. Even the people and objects brought into the cave are only seen by these people via the shadows.
Could you know what a tree is, assuming that you never saw an actual tree, and only knew about a tree via the shadow of the tree? Could you know what a dog is, having only seen it via the shadow of the dog? It’s a quite interesting thought experiment, brought to you by Plato. Clever of him.
Practitioners reading this story by Plato might find it preposterous and see little value in it. There are lots of ways to interpret what he was trying to teach us.
One point that seems pertinent herein is that the human condition is bound by the impressions we receive through our senses. We take for granted our senses, until we lose them, or they falter. If you’ve ever temporarily lost your hearing due to swimming in a pool or maybe going to a loud rock concert, you at that moment might have realized the importance of your ears and being able to hear. It is said that blind people, those blind from birth, perceive the world in a different manner than those that have had sight and the use of their eyes throughout their existence.
If you carry forward Plato’s allegory a bit more, presumably the way in which we come to know things about the world, some would say the epistemological aspects (a theory about knowing and knowledge), becomes shaped by our senses. Our senses provide the input for which our cognition builds mental models about reality. This implies that the nature of the sensory input will shape your cognition and what it crafts as a model of the world.
I’ll be saying more about this in quite practical aspects momentarily. I’m not going to go overboard on the Plato aspects and I bring it up to primarily highlight the potential importance of shadows. Maybe that’s a relief for those of you that were concerned that this Plato stuff was veering us away from the real-world.
Introducing Computational Periscopy
Indeed, I’d like to now introduce the topic of computational periscopy.
I’d wager that many of you might not be familiar with this field of endeavor. It can be directly associated with the hide-and-seek game that I used to play with my children. Handy that we played the game and I perchance mentioned it to you.
The notion of computational periscopy involves the use of a computer-based approach to effectively devise a kind of periscope. We all know that a periscope is normally a physical device that you can use to look around a corner or over the top of an object, doing so without you hopefully being seen. Perhaps you had one when you were a child. These had quite cheap optics and allowed you to be a pretend army soldier.
In computational periscopy, one key area of interest is how to figure out what you cannot directly see, namely when you have non-line-of-sight (NLOS) of something and possibly use other clues to guess at what might be there. How did I try to figure out when my child, acting as a seeker, might be on the other side of the wall and standing at the corner? I had NLOS at that moment of my offspring. As mentioned, I opted to try and use the shadow as a surrogate of what might be on the other side of the wall.
Computational periscopy can try to use that same shadow trick. I forewarned you, the shadow knows!
For those of you interested in this topic of computational periscopy, please be aware that there is more than just shadows involved, though shadows are certainly significant. There are other elements, such as capturing radiated light that comes from an object, either via a natural lighting source or, oftentimes, via ultrafast laser pulses used to bounce light off an object. Furthermore, one aspect of periscopy is to try and refrain from revealing the periscope: with a normal periscope you would typically put it into line-of-sight (LOS), but this means that the periscope can potentially be seen, which you either might not want or which might make it prohibitive to position the periscope.
Herein let’s focus on the shadow aspects.
Robot Meandering Around a Room
Suppose you have a robot that is meandering around a room. It is trying to navigate the room and do so without bumping into things. Suppose there is a refrigerator standing in the middle of this room. The robot wants to go around the refrigerator. The robot sensors do not allow it to see magically around the refrigerator and thus the robot will come up to the refrigerator and then turn the corner, yet not know what to expect. What might be on the other side of that refrigerator?
Imagine that there is sufficient lighting in the room that shadows are being cast. The image processing of the camera images streaming into the robot “eyes” could analyze the scene and try to determine if there are any shadows being cast beyond the edge of the refrigerator. If so, the robot could try to figure out what kind of object might be on the other side of the refrigerator.
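As a rough sketch of that first step, a shadow detector can start by flagging pixels substantially darker than the scene’s typical brightness. The toy code below (my own illustrative function, using numpy) does exactly that on a one-dimensional strip of floor brightness values; a real system would also need chromaticity and texture cues so that dark objects are not mistaken for shadows:

```python
import numpy as np

def find_shadow_mask(gray: np.ndarray, darkness_threshold: float = 0.35) -> np.ndarray:
    """Flag pixels substantially darker than the scene's median brightness.

    A deliberately crude shadow detector: anything darker than the median
    by more than darkness_threshold (as a fraction) is marked as shadow.
    """
    reference = np.median(gray)
    return gray < reference * (1.0 - darkness_threshold)

# Toy 1-D strip of floor in front of the refrigerator: bright floor (0.8)
# with a darker band (0.3) where an unseen person's shadow falls.
strip = np.array([0.8, 0.8, 0.8, 0.3, 0.3, 0.3, 0.8, 0.8])
mask = find_shadow_mask(strip)
print(np.flatnonzero(mask))  # positions of the shadow band: [3 4 5]
```

The robot would then ask the more interesting question: which of those dark patches extend beyond the refrigerator’s edge, and what might be casting them?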
When you consider this shadow analysis for a moment, consider again my hide-and-seek game.
When I was looking to see the shadow of my children, I would have already generally known that the shadow must be their shadow (because there was no one else on the other side of the wall and no other object nearby that would be casting the shadow).
I also knew the height, weight, and overall size of my children. I knew where the sun was in the sky and how shadows were being cast. Based on the size and shape of the shadow, I could deduce that the shadow was being cast by my children.
Remember that I mentioned the idea of my possibly getting tricky and using a tree branch to cast a shadow? If I had done so, the shadow cast by the tree branch would not likely be the same size and shape as the shadow cast by my body (I am not a tree branch, I assure you). Of course, you can distort a shadow and position even a tree branch in a manner that it might cast a shadow similar to the shadow of a person. You’ve certainly done the classic shadow puppets with your hands, showing a rabbit or a flying dove. We’ve all done this, though some more effectively than others.
Let’s pretend that I didn’t know my child was standing on the other side of the wall. Suppose I could only see the shadow of them. Could I reverse engineer from their shadow and try to guess at what most likely is casting the shadow? Sure, this is possible. I likely could have at least guessed the height and shape of the object that was casting the shadow, along with where the object most likely was positioned.
The robot in the room can try to do the same thing. Besides “seeing” objects directly, it can try to guess at the nature and position of objects not seen, if it can detect shadows of the objects. Suppose that a human is standing on the other side of the refrigerator and doing so out-of-sight of the robot (this is the NLOS). Via the lighting in the room, it turns out that the human is casting a shadow. The shadow is visible to the robot. The shadow of this human extends beyond the refrigerator, at the front of it, and lies cast onto the floor area that the robot is about to navigate.
Based on the shadow, the robot using computational periscopy algorithms and techniques would “reverse engineer” from the characteristics of the shadow and estimate that there is a person standing beyond view on the other side of the refrigerator.
Or, maybe the shadow shape is poor, due to the stance of the object and the lighting aspects of the room, and perhaps the robot cannot discern that it might be a human casting the shadow, but it is pretty sure there is something there casting the shadow. The periscopy algorithm might suggest that it is some kind of object that stands about six feet in height and has a width of about a foot or two. That’s enough of a guess that it permits the robot to be cautious when going around the refrigerator, allowing it to anticipate that there is something standing there and will need to be quickly navigated around too.
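That sort of height estimate follows from simple geometry. Assuming an idealized single point light at a known height and floor position (real rooms have multiple, extended light sources, so actual systems must do far more), similar triangles recover the hidden object’s height from its shadow:

```python
def estimate_hidden_height(light_height_m: float,
                           light_to_object_m: float,
                           shadow_length_m: float) -> float:
    """Estimate an unseen object's height from the shadow it casts.

    Model: a single point light at height H, the object standing d meters
    from the spot directly under the light, casting a shadow of length L
    measured from the object's base. Similar triangles give:
        h = H * L / (d + L)
    """
    H, d, L = light_height_m, light_to_object_m, shadow_length_m
    return H * L / (d + L)

# Ceiling lamp at 3 m, object 2 m out, shadow stretching 4 m beyond it:
# the caster is about 2 m (roughly six feet) tall -- plausibly a person.
print(estimate_hidden_height(3.0, 2.0, 4.0))  # 2.0
```

Even this idealized version makes plain why the answer comes out as a rough size estimate rather than a positive identification.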
Computational periscopy provides another means to collect sensory data and try to make something useful out of it.
I’ll tie that to Plato. We use our senses to make sense of the world around us. There are things we detect and things we don’t detect, and yet sometimes the things that we detect are useful and yet not well utilized. I earlier said that most of us don’t think much about shadows. Most of today’s AI systems that are doing image processing are usually discarding any shadow related data. It is not something they are set up to examine.
Sadly, regrettably, this is tossing out some potentially valuable data that can give further clues to the environment in which the AI system is operating. Sometimes any clue is better than no clue. You can argue that the shadows are perhaps not overly helpful or that they are only going to be helpful some of the time, which I well concede, but at the same time if you are trying to push the envelope and get AI to be as good as it can get, squeezing out every ounce of the sensory data might make a significant difference.
Let’s not kid ourselves though and assume that shadows are an easy matter to analyze. If you walk around later today and start looking carefully at shadows, you’ll realize there is a tremendous variation in how a shadow is being cast. Trying to reverse engineer the shadow to deduce what cast it, well, this can be tough to do. Plus, you are usually going to end up with probabilities about what might be there or not there, rather than pure certainties.
The other “killer” (downside) aspect right now is that computational periscopy tends to require humongous amounts of computing to undertake. Much of the work to-date has soaked up supercomputer time to try and figure out the shadow related aspects. It can be costly to purchase such premium computing power.
There are also the real-time aspects that are daunting too.
If a robot is moving around a room, and if we want it to do so in any reasonable amount of time, sauntering around like a person might, this means that any of the shadow related processing has to happen in near real-time. You are now upping the ante in that the robot has to have supercomputing capability either natively or via other reliable access, and it needs to pump the images into that processing and get back the results in near real-time to make good use of the analyses performed.
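A back-of-the-envelope calculation shows the squeeze. The function below is my own illustrative arithmetic, with made-up numbers: given how fast the robot moves and how little distance it can cover before it must react, the time left per camera frame is modest:

```python
def per_frame_budget_ms(robot_speed_m_s: float,
                        reaction_distance_m: float,
                        frames_per_decision: int = 1) -> float:
    """Rough upper bound on processing time per camera frame.

    If the robot must react within reaction_distance_m of travel, the whole
    perceive-analyze-act loop has distance/speed seconds at most; dividing
    by the frames consumed per decision gives a per-frame budget in ms.
    """
    total_seconds = reaction_distance_m / robot_speed_m_s
    return 1000.0 * total_seconds / frames_per_decision

# A robot moving at walking pace (1.4 m/s) that must react within half a
# meter has only about a third of a second for the entire loop:
print(round(per_frame_budget_ms(1.4, 0.5), 1))  # 357.1
# A rover inching along at one inch per day could instead take hours.
```

When today’s periscopy analyses consume supercomputer time measured in far longer stretches than that, the real-time gap is obvious.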
In the case of having a rolling robot on say Mars, if the robot is moving one inch every 24 hours, you perhaps might have a greater chance of doing the analyses of the shadows in time for when it is needed. The everyday robot that we envision walking around in our malls, homes, and the like, they aren’t going to have that same luxury of being able to move at a snail’s pace.
In short, computational periscopy is handy, yet it still is in need of faster algorithms and improved techniques so that it can readily be used in near real-time situations, along with finding a means to cut back on the computing power needed so that this kind of processing can be done on more everyday hardware.
For my article about uncertainties and probabilities in AI systems, see: https://www.aitrends.com/ai-insider/probabilistic-reasoning-ai-self-driving-cars/
For my article about omnipresence in AI systems, see: https://www.aitrends.com/selfdrivingcars/omnipresence-ai-self-driving-cars/
For robot navigation and the use of SLAM, see my article: https://www.aitrends.com/selfdrivingcars/simultaneous-localization-mapping-slam-ai-self-driving-cars/
For supercomputing and AI, see my article: https://www.aitrends.com/selfdrivingcars/exascale-supercomputers-and-ai-self-driving-cars/
A recent study at Boston University provides a glimpse at how computational periscopy might ultimately be readied for prime time and become amenable to mass usage. Rather than using a specialized ultrafast optical system, which is usually employed in these periscopy research efforts, they instead used a common digital camera. The digital camera was inexpensive and can be considered ubiquitous since we have something similar in our smartphones, plus they utilized only 2D imagery.
In brief, the experiment consisted of having an LCD display that showed a particular image. The light radiating from the image shone onto the back of an occluding surface. Some of the light manages to get through, while some of it gets cast as a type of shadow. The resulting combination casts onto an imaging wall, appearing as a kind of blurry shadowy image, from which the periscopy algorithm tries to reverse engineer the image, attempting with some modest success to reconstruct what the original image looked like.
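In spirit, that reconstruction is an inverse problem: the blurry wall image is (approximately) a known linear mixing of the hidden scene, and the algorithm inverts it. The toy sketch below is not the study’s actual method; it merely illustrates the idea with a made-up smoothing matrix and Tikhonov-regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: the blurry wall image b is a known linear mixing A of
# the hidden scene x, plus sensor noise. The actual study derives A from
# the occluder's geometry; here a simple smoothing kernel stands in for it.
n = 16
idx = np.arange(n)
A = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)  # light spreads locally
x_true = np.zeros(n)
x_true[5:9] = 1.0                                       # bright patch to recover
b = A @ x_true + 0.001 * rng.standard_normal(n)

# "Reverse engineering" the image = solving the inverse problem. Naive
# inversion amplifies noise, so use Tikhonov-regularized least squares:
#   x_hat = argmin ||A x - b||^2 + lam * ||x||^2
lam = 1e-4
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.round(x_hat, 2))  # close to 1 at indices 5..8, near 0 elsewhere
```

The regularization term is what keeps the sensor noise from swamping the answer, and choosing it well is a big part of why these reconstructions are hard in practice.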
See the Boston University Computational Periscopy Study
If you are interested in this particular study, they’ve posted their research data and details on GitHub at https://github.com/Computational-Periscopy/Ordinary-Camera. Some critics would say this is interesting but a far distance from being usable in a real-world setting. Others would say you need to crawl before you walk, walk before you run, and so on.
It’s a healthy sign that we are hopefully going to be able to move computational periscopy toward being practical and usable for everyday purposes, though the road ahead is still long.
Speaking of roads, I’d like to mention something that happened to me the other day.
I was driving along on a busy street. A delivery truck had decided to double-park. This is dangerous and generally illegal. Anyway, I’m sure you’ve seen it happen quite frequently. One can be sympathetic to the delivery agent driving the truck: it is often impossible to find a safe and open spot to park a delivery truck, and hunting for one when they are just quickly dropping off a package would make their day dreadfully long, thus it seems “permissible” to double-park to get the delivery job done. To clarify, I am not condoning this. It is still dangerous and can lead to injury or harm.
I could not see the delivery driver. I assumed the driver had stepped out of the truck and was dashing to someone’s door to make the delivery of a package. The question was whether or when the delivery driver would get back to the truck. They would likely need to either weave their way in front of their own truck and then get into the open cab to start driving to the next destination, or maybe the agent might come around the back of the truck and snake their way along the side of the truck up to the open cab area.
I was zipping along on the street. There was going to be almost no space left between the right-side of my car and the left side of the delivery truck. A salami could barely fit between the two. And that would be at most one slice.
I realize you could say that I should slow down, come to a halt, and wait for the delivery truck driver to return and move the truck out of the way. Preposterous! Like most drivers, I felt that the truck driver was in the wrong, which he was, and I was going to zip down the street and pass his double-parked truck, come heck or high water. Does my urge to drive past at a fast speed mean that there are now two wrongs in this equation? If so, do two wrongs make for a right? Probably not.
My main concern was when and how the truck driver was going to materialize. If he was smart, he would peek out his head to make sure the traffic was clear and then go alongside his double-parked truck to get into the cab. Usually these delivery agents are being clocked to get their deliveries done in time and so the odds are that the driver was going to do what he usually does, namely just go for it and assume that there isn’t traffic or that any traffic will not hit him or her.
Sure enough, just as I came alongside the truck, I saw a shadow and a rapid motion at the front of the truck. I mentally calculated that it was presumably the delivery agent, returning to the truck, though I suppose it could be someone else like a jaywalker or maybe a wandering giraffe. Whatever it was, it was something. Because it was something, I figured that I ought to swerve away and also apply some braking to slow down as I came upon whatever or whomever it was.
Turns out that it was the delivery driver. I veered into the opposing lane to avoid him. Fortunately, there wasn’t any traffic coming my way. The delivery driver jumped into his cab and tipped his hat in my direction, presumably saying thanks for making his job easier. I nearly thought I should get a reward from the delivery company for having saved the life of the driver. I’m watching my mail to see if I get a nice letter and beefy check from the company (not holding my breath!).
Did you notice an important and relevant word in my narrative about the delivery truck and the saving of the life of the truck driver? You should have. The magical word was “shadow.” I had seen the shadow of the truck driver. This clued me that someone or something was potentially coming along. I had been expecting that someone or something might come along, so I was keeping my eyes peeled.
When you are driving a car, you are unlikely to consciously notice the shadows around you and your car. As humans, and as car drivers, we typically take shadows for granted. I would even say that there might be some kind of mental processing taking place about shadows and we might just not realize we are doing so. It is like breathing air. You don’t give it direct thought.
You are so used to shadows that your mind likely is processing them but most of the time deciding it either isn’t worthwhile to put much mental effort toward, or that it will only do so when it becomes necessary.
Have you ever been driving your car on a sunny day, and all of a sudden, a large cloud formation goes in front of the sun? This casts a large shadow onto your car and the road. I’d bet that your mind noticed that something light-related just happened. You might even turn to someone else in your car and say, hey, did you notice that, it all of a sudden got dark. This suggests that your mind is on the alert for shadows, and giving it low priority most of the time, until or if something happens to get the priority pumped up.
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect of the visual processing of images coming from the cameras on the self-driving car is that we can potentially boost the AI driving capabilities by making use of computational periscopy, including detecting and analyzing shadows.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here’s the usual steps involved in the AI driving task:
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
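Those steps can be sketched as a simple processing loop. Every name in the snippet below is invented for illustration; each stage is, of course, a substantial subsystem in a real self-driving stack:

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Stage 3: the virtual world model the AI keeps updated."""
    obstacles: list = field(default_factory=list)

    def update(self, fused):
        self.obstacles = fused

def collect_and_interpret():
    """Stage 1: raw sensor data turned into labeled percepts (stubbed)."""
    return [{"sensor": "camera", "object": "shadow", "distance_m": 4.0},
            {"sensor": "radar", "object": "unknown", "distance_m": 4.2}]

def sensor_fusion(percepts):
    """Stage 2: reconcile overlapping detections into one estimate."""
    distance = sum(p["distance_m"] for p in percepts) / len(percepts)
    return [{"object": "possible_person", "distance_m": distance}]

def plan_action(world: WorldModel):
    """Stage 4: pick an action; slow down if anything fused is close."""
    near = any(o["distance_m"] < 5.0 for o in world.obstacles)
    return "brake_gently" if near else "maintain_speed"

world = WorldModel()
world.update(sensor_fusion(collect_and_interpret()))  # stages 1-3
command = plan_action(world)                          # stage 4
print(command)  # stage 5 would issue "brake_gently" to the car controls
```

Note how a shadow detection, fused with a corroborating radar return, can flow through the whole loop and end in a cautious driving command.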
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the topic of computational periscopy, let’s consider how this innovative approach can be leveraged by AI, especially in the case of AI self-driving cars. There are various research studies on shadow detection and usage for AI self-driving cars that go back a number of years, and it is an ongoing field of study that will continue to mature over time.
If the use of computational periscopy could aid the AI in being a better driver, we’d certainly want to give this approach a solid chance of being utilized.
Admittedly, the odds that periscopy via shadow detection and interpretation will make a dramatic difference in improving driving are somewhat questionable, at least right now. Thus, many AI developers for AI self-driving cars would likely put periscopy onto an “edge” problem list, rather than a mainstay problem list.
An edge problem is one that is regarded as sitting at the edge or far corner of the core problem you are trying to solve. Right now, AI developers are focused on getting an AI self-driving car to fundamentally drive the car, doing so safely, and otherwise covering a rather hefty checklist of key elements involved in achieving a fully autonomous AI self-driving car. Dealing with shadows would be interesting and would have some added value, but devoting resources and attention to it is not as vital as covering the fundamentals first.
I often disagree with pundits about what they consider to be edge problems for AI self-driving cars. There are too many so-called edge problems that those pundits try to carve out. By carving out one seemingly small piece after another, they usually have not only pared things to the barebones, they’ve also in my view chopped into the bone itself. In essence, with lots of hand waving, they are skipping over edges that are actually integral to the core.
For once, in this case of the periscopy, I would tend to agree that it indeed should be considered an edge problem (they’ll be happy to know this!).
Now that I’ve made that confession, don’t overstate the edge aspects of periscopy. I believe it nonetheless does add value. I would be so bold as to suggest that the second or third generation of true Level 5 AI self-driving cars will consider the adoption of periscopy as a standard item. By then, hopefully most of the difficulties of trying to put periscopy in place will have been ironed out and it will be viable to use it for an AI self-driving car.
AI Self-Driving Car Being Loaded Down with Computer Processing
I’ve already mentioned that there are some tough barriers, such as the amount of computer processing needed to carry out the shadow detection and analysis. We are already loading down an AI self-driving car with a ton of computer processing capabilities to do the sensor data collection and analysis for the cameras, the radar, the LIDAR, the ultrasonic units, and so on. Plus, the sensor fusion needs to bring together all of these sensory analyses and try to balance them, figuring out how they can be pieced together like a jigsaw puzzle to craft a cohesive indication of what’s happening surrounding the self-driving car.
Would it be worthwhile to devote processing power to doing the shadow detection and analysis?
Would it be worthwhile to include the shadow analyses into the sensor fusion that is already trying to connect the dots on the other sensory analyses?
If this addition would mean that time delays might occur from sensor data collection to sensor fusion and ultimately to the AI action planner, we’d need to weigh whether that time delay was worth the benefits of doing the shadow analyses. It might not be.
Also, if we are limited to how much computer processing power we can pack into the AI self-driving car, and if the shadow analyses occurred at the sacrifice of using processing power for other efforts, we wouldn’t want that to be a consequence either, unless we knew that the shadow analyses had a substantive enough payoff.
You might argue that we can just add more computer processing on-board the self-driving car, but doing so continues to raise the cost of the self-driving car, raises the complexity of the AI system, and adds weight and potential bulk to the car. These are factors that need to be weighed on an ROI (Return on Investment) basis against whatever benefit the shadow detection can likely provide.
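The weighing of a time delay against a fixed cycle deadline can be sketched as a back-of-the-envelope budget check. All of the millisecond figures here are invented assumptions purely for illustration, not real self-driving car timings:

```python
# Back-of-the-envelope latency budget check. Every number below is an
# illustrative assumption, not a real self-driving car figure.

STAGE_LATENCY_MS = {
    "sensor_collection": 20,
    "sensor_fusion": 15,
    "world_model_update": 10,
    "action_planning": 25,
    "command_issuance": 5,
}
CYCLE_BUDGET_MS = 100  # assumed end-to-end deadline per perception cycle

def fits_budget(extra_stage_ms):
    """Would adding a stage (e.g., shadow analysis) still meet the deadline?"""
    total = sum(STAGE_LATENCY_MS.values()) + extra_stage_ms
    return total <= CYCLE_BUDGET_MS

print(fits_budget(20))  # True: 75 + 20 = 95 ms fits within 100 ms
print(fits_budget(30))  # False: 75 + 30 = 105 ms blows the budget
```

The check is trivial, but it captures the real question: shadow analysis is only worth its slot in the cycle if the rest of the pipeline leaves enough headroom for it.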
For more about cognition timing of AI and self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
For the nature of edge problems in AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/
For my article about the power consumption of an AI self-driving car, see: https://www.aitrends.com/selfdrivingcars/power-consumption-vital-for-ai-self-driving-cars/
For my article about the affordability of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/affordability-of-ai-self-driving-cars/
Optimize Periscopy Algorithms To Consume Less Processing Power
Let’s set aside for a moment the concerns about the on-board processing and other related factors. It might be helpful to consider the difficulties involved in shadow detection and analysis. This might also inspire those of you sparked by this problem to help find ways to improve the periscopy algorithms and techniques. It would be handy to get them optimized to be faster, better, and to consume less computer processing power and memory. Well, of course, that’s just about always a goal for any computer application.
Close your eyes and imagine a shadow, whichever one comes to mind. Or, if you are in a place where you can easily create a shadow, please do so.
What did the shadow cast onto? That’s important. If you have the shadow casting onto a flat surface like a floor or a wall, it’s likely easier to detect. Once the shadow appears on a surface that is irregular, or if the shadow spreads across a multitude of differing surfaces, trying to detect the shadow becomes harder to do.
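A toy example makes the surface issue concrete. Suppose we naively call any sufficiently dark pixel a shadow; on a uniform surface that works cleanly, but on a textured surface the dark texture gets confused for shadow. The intensity values and threshold here are invented for illustration:

```python
# Toy illustration of why flat surfaces make shadow detection easier:
# a simple intensity threshold finds a shadow cleanly on a uniform surface,
# but flags false positives when the surface itself has dark texture.
# Values are made-up grayscale intensities (0 = black, 255 = white).

SHADOW_THRESHOLD = 100  # assumed cutoff below which a pixel counts as shadow

flat_surface = [200, 200, 60, 55, 200, 200]    # true shadow at indices 2-3
textured_surface = [200, 80, 60, 55, 90, 200]  # dark texture at indices 1 and 4

def shadow_pixels(row):
    return [i for i, v in enumerate(row) if v < SHADOW_THRESHOLD]

print(shadow_pixels(flat_surface))      # [2, 3] -- just the shadow
print(shadow_pixels(textured_surface))  # [1, 2, 3, 4] -- texture mistaken for shadow
```

Real shadow detectors are far more sophisticated, of course, but the failure mode is the same: the less uniform the surface, the less a darkness cue alone can be trusted.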
Another aspect is whether you have two objects that each cast a shadow and the shadows intersect or merge with each other. You have to assume that you cannot see the original objects that are casting the shadow. This means that when you are looking at the merged shadow, you cannot readily figure out which portion of the shadow refers to which of the original objects.
I remember putting my children sometimes up on my shoulders when they were toddlers. I would point at the shadow we cast. It looked like the shadow was showcasing some monstrous creature that was over seven feet tall. If you did not know or could not have guessed what cast the shadow, and you only had the shadow itself, it would be problematic to reverse engineer it and be able to say with any certainty that it was me and my son or daughter on my shoulders.
That being said, if you have some clues or at least guesses about what might be casting a shadow, you can use that to your advantage when trying to decipher the shadow. In the case of the delivery truck driver, I was waiting expectantly for the driver to come back to the double-parked truck. The shadow that appeared was not something I carefully scrutinized. I was betting that whatever shadow appeared, if any, it was a likely signal that the driver was coming back to the truck.
Had I been more like a computer system with a camera, I could have perhaps analyzed the shadow and tried to match it to the shadow an adult-sized person would cast at that time of day, given the lighting. This might have been handy. Suppose a dog happened along and it was casting the shadow, rather than the driver. The shadow of the dog would likely be different from that of a, say, 6-foot-tall adult.
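For a single light source like the sun, the geometry behind that kind of check is simple: shadow length equals object height divided by the tangent of the light’s elevation angle. Here’s a minimal sketch of a plausibility test along those lines — the 6-foot assumed height, the tolerance, and the function names are all mine, for illustration:

```python
import math

# Hypothetical check: does a detected shadow's length match what a roughly
# 6-foot-tall adult would cast at the current sun elevation?
# Geometry: shadow_length = height / tan(sun_elevation)

def expected_shadow_length(object_height_ft, sun_elevation_deg):
    return object_height_ft / math.tan(math.radians(sun_elevation_deg))

def plausibly_adult(shadow_length_ft, sun_elevation_deg, tolerance=0.25):
    # Assume a ~6 ft adult; accept shadows within 25% of the expected length.
    expected = expected_shadow_length(6.0, sun_elevation_deg)
    return abs(shadow_length_ft - expected) / expected <= tolerance

# At 45 degrees of elevation, a 6 ft adult casts a roughly 6 ft shadow.
print(plausibly_adult(6.2, 45))  # True: close to the expected length
print(plausibly_adult(1.5, 45))  # False: far too short -- maybe a dog instead
```

A dog’s shadow at the same sun angle would fail the check, which is exactly the kind of cheap discrimination a camera-equipped system could make before reacting.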
Another facet of a shadow involves motion and movement. When I had my children on my shoulders, I would stand still, and we’d look at the shadow. The shadow was relatively stable and clearly seen. I could play tricks by twisting my body, getting the light to cast at a different angle. But what really made a difference was moving around. By walking or running with them on my shoulders, and with them shifting back and forth, the shadow did a kind of dance.
It is going to be more challenging to decipher a dancing shadow. The stationary shadow already has challenges. Add the aspect that the shadow is moving, along with the aspect that the object can be twisting and turning, and you’ve got yourself quite a shadow detection task.
I’ll make things even more intriguing, or shall I say more complex and arduous. We are going to have cameras mounted in the AI self-driving car that are capturing images or video of what is outside of the self-driving car. The self-driving car can be standing still, such as at a stop sign or red light, though it is more likely to be in motion during a typical driving journey.
You now have a series of streaming images, generated while the self-driving car is in motion, and meanwhile you are trying to detect shadows whose casting objects are likely moving too. I hope this impresses upon you the underlying hardness of solving this problem. We should applaud us humans that we seem to be able to do this kind of detection with relative ease. There’s a lot more to it than might meet the eye, so to speak.
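One classic starting point for spotting a moving shadow in streaming frames is frame differencing: regions whose intensity changes between consecutive frames hint at motion. The sketch below works only for a stationary camera (when the car itself moves, the whole frame changes and far heavier machinery is needed); the frames and threshold are invented toy values:

```python
# Toy frame-differencing sketch: with a stationary camera streaming frames,
# cells whose intensity changes between frames hint at a moving shadow (or
# moving object). Frames are invented 1-D grayscale rows; a real system
# works on full 2-D video and must compensate for the car's own motion.

CHANGE_THRESHOLD = 40  # assumed minimum intensity change to count as motion

frame_t0 = [200, 200, 200, 60, 60, 200]
frame_t1 = [200, 200, 60, 60, 200, 200]  # the dark region shifted left

def moving_regions(prev, curr):
    return [i for i, (p, c) in enumerate(zip(prev, curr))
            if abs(p - c) >= CHANGE_THRESHOLD]

print(moving_regions(frame_t0, frame_t1))  # [2, 4]: the shadow's moving edges
```

Note that only the leading and trailing edges of the shadow register as change, which is one reason tracking a “dancing” shadow over time is harder than detecting a stationary one.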
I would be remiss in not also emphasizing the role of light in all of this. The light source that is casting the shadows can also be in motion. The light source can be blocked, temporarily, while the AI is in the midst of examining a series of images. The light source can get brighter or dimmer. All of the effects of the lighting will consequently impact the shadows.
I had mentioned earlier that we’ve all had moments while driving a car on a sunny day when a set of clouds momentarily blocks the sun, altering the shadows being cast. Let’s combine that aspect with my desire to ascertain if the delivery truck driver was heading back to his truck. Imagine that the moment the driver got to the truck, a cloud floated along, blocking the creation of his shadow.
No Shadow Does Not Mean No Object Is There
Just because there is no shadow does not ergo always mean there is no object there. The shadow detection has to take this aspect into account. Likewise, an object that casts a shadow that seems to be unmoving does not necessarily mean the object itself is rooted in place. The shadow of a street sign is likely to be motionless, which makes sense because it is presumably rooted in place. The truck driver might have gotten to the front of his truck and frozen in place, for an instant, which might allow me to detect his shadow, but the stationary aspect of the shadow cannot be used to assert that the object itself will remain stationary.
Shadows got a lot of intense attention from the entertainment industry for purposes of developing more realistic video games. For those of you who remember the bygone days, you know that there was a period of time during which animated characters in a video game lacked shadows. It was a somewhat minor omission and you could still enjoy playing the game.
Nonetheless, it was well-known within the video gaming industry that game players were subtly aware that there weren’t shadows. This made the characters in the game less lifelike. A lot of research effort on shadows and computer graphics was poured into being able to render them. The early versions were “cheap” in that the shadow was there but you could easily discern that it wasn’t like a real shadow. Sometimes the shadow would magically disappear when it shouldn’t. Sometimes the shadow stayed and yet the character had moved along, which was kind of funny to see if you happened to notice it.
Another area of intense interest in shadows involves analyzing satellite images. When you are trying to gauge the height of a building, the building might be partially blocked from view by trees or camouflage. Meanwhile, the shadow might be a telltale clue that is not similarly obscured. The same goes for people who are standing, sitting, or crouching. You can potentially figure out where the people are by looking at their shadows.
I mention this other work about shadows to highlight that the shadow efforts are not solely for doing computational periscopy. There are a lot of good reasons to be thinking about the use of computers for analyzing shadows.
Pretend that you are in a Level 5 AI self-driving car. It is coming up to an intersection. The light is green. The cross-traffic has a red light. The AI assumes that it has right-of-way and proceeds forward under the assumption that the self-driving car can continue unabated into and across the intersection.
There are tall buildings at each of the corners of this intersection. The AI cannot see what’s on the other side of those buildings. This means that there could be cross-traffic approaching the intersection, but the AI cannot yet detect that traffic; it can only do so once those cars come into view at their respective red-light stopping areas near the crosswalk.
This might be a handy case of potentially detecting the shadow of a speeding car in the cross-traffic that is not going to stop at the red light. It all depends on the lighting and other factors. It is though a possibility. I already gave another possibility with the truck driver, a pedestrian for a moment in time, trying to step out from behind a large obstacle, his double-parked truck.
One approach to trying to do a faster or better job at analyzing shadows by an AI system, assuming that a shadow can be found, involves the use of Machine Learning (ML) and Deep Learning (DL).
Conventional computational periscopy algorithms tend to use arcane calculus equations to try to decipher shadows. Another potential approach involves collecting together tons of images that contain shadows and getting a Deep Learning convolutional neural network to find patterns in those images. Perhaps shadows of a fire hydrant are more readily discerned by pattern matching than by calculating the nature of the shadow and reverse-engineering back to the shape of a fire hydrant.
The neural network would need to catch onto the notion that the lighting makes a difference in terms of the shadow cast. It would need to catch onto the aspect that the surface where the shadow is cast makes a difference. And so on. These presumably could become part of the neural network’s pattern matching, ultimately enabling a quick inspection of a shadow to stipulate what it might be and what it might portend for the AI self-driving car.
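To give a feel for the pattern-matching idea without invoking a full convolutional neural network, here is a deliberately miniature stand-in: a nearest-template classifier over tiny binary shadow masks, scored by how many cells agree. The masks, labels, and scoring rule are all invented toy examples, not anything from the periscopy literature:

```python
# Toy stand-in for pattern matching on shadows: instead of a trained
# convolutional neural network, match an observed binary shadow mask
# against stored templates by the fraction of agreeing cells.
# Masks and labels are invented examples for illustration only.

TEMPLATES = {
    "fire_hydrant": [1, 1, 0, 1, 1, 0, 0, 1, 0],
    "street_sign":  [0, 1, 0, 0, 1, 0, 0, 1, 0],
    "pedestrian":   [0, 1, 0, 1, 1, 1, 1, 0, 1],
}

def similarity(a, b):
    # Fraction of cells where the two masks agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def classify(mask):
    return max(TEMPLATES, key=lambda label: similarity(mask, TEMPLATES[label]))

observed = [1, 1, 0, 1, 1, 0, 0, 1, 1]  # noisy, hydrant-like shadow mask
print(classify(observed))  # 'fire_hydrant'
```

A real learned matcher would additionally have to account for lighting angle and casting surface, as noted above, but the core appeal is the same: recognize the shadow directly rather than reverse-engineer its geometry.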
For more about Deep Learning, see my article: https://www.aitrends.com/selfdrivingcars/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/
For more about convolutional neural networks, see my article: https://www.aitrends.com/selfdrivingcars/deep-compression-pruning-machine-learning-ai-self-driving-cars-using-convolutional-neural-networks-cnn/
For my article about ensemble Machine Learning, see: https://www.aitrends.com/selfdrivingcars/ensemble-machine-learning-for-ai-self-driving-cars/
For my article about federated Machine Learning, see: https://www.aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/
We can come up with a slew of ways in which shadow detection and analysis could be meaningful while driving a car.
Some human drivers overtly use shadows to their advantage. Most of the time, shadows are quietly there, and the odds are that a human driver is not especially paying attention to them. There can also be crucial moments during which a shadow provides an added clue about a roadway situation that could spell a life-or-death difference.
Recent efforts to forge ahead with computational periscopy are encouraging and illustrate that we might someday get a shadow detection and analysis capability that can function well in real-time, doing so without hogging the computing power available in a self-driving car and without requiring the Hoover Dam to power it.
Still, all in all, we have a bumpy and complicated way yet to go.
This shadow detection “trickery” isn’t a silver bullet for AI self-driving cars.
On a cloudy day there might not be any discernible shadows. At nighttime, you might not have any shadows to detect, depending upon the available street lighting. The shadows themselves might be cast onto surfaces that won’t show the shadow well, or the shadow is dancing and you cannot get a good reading on its size and shape. We can easily derive a long list of ways in which shadows either won’t work or will have little probative value.
Does the shadow know? I assert that sometimes the shadow does know. Maybe we can use the shadow to avoid the evils of car accidents that lurk on our roadways and await our every move. Bravo, computational periscopy.
Copyright 2018 Dr. Lance Eliot
Follow Lance on twitter @LanceEliot
This content is originally posted on AI Trends.