By Lance Eliot, the AI Trends Insider
Do you allow your mind to sometimes wander afield of a task at hand?
I’m sure you’ve done so, particularly if you are stopped or otherwise waiting for something or someone to put your temporarily suspended or on-hold task back into gear. We often find ourselves faced with idle moments that can be put to additional use, even if there’s not much else we are directly supposed to be doing during those moments.
Perhaps you might turn that idle moment into a grand opportunity to discover some new flash of insight about the world, maybe even becoming famous for having thought of the next new equation that solves intractable mathematical problems or you might via an unexpected flash of genius realize how to solve world hunger.
It could happen.
Or, you could refrain from utilizing idle moments and remain, well, idle. You don’t necessarily have to always be on-the-go and your mind might actually relish the idle time as purely being idle, a sense of non-thinking and being just there.
When developing real-time systems, especially AI-based ones, there are likely going to be times during which the system overall is going to come to a temporarily “halt” or waiting period, finding itself going into an idle mode or idle moment. The easiest thing to do when programming or developing such a system is to simply have the entire system on-hold, doing nothing in particular, merely awaiting its next activation.
On the other hand, let’s assume that there are precious unused hardware computer cycles that could be used for something, even if the software is nonetheless forced into an “idling” as it awaits a prompt or other input to continue processing.
In those idle moments, it might be useful to have the AI system undertaking some kind of specialized efforts, ones that are intentionally designed to occur during idle moments.
It should be something presumably useful, and not just figuring out Fibonacci numbers for no reason (or, say, cryptocurrency blockchain mining, which though admittedly might be enriching could become distracting from the core focus of the system). It also should be an effort that can handle being priority interrupted when the mainstay task gets underway again. This implies that the idle moment processing needs to be fluid and flexible, capable of making progress in short bursts, and not requiring one extensive, uninterrupted stretch of time to complete.
You would also want to likely refrain from having the idle moment effort be undertaking anything crucial to the overall operation of the AI system, since the notion is that the idle-time processing is going to be hit-and-miss. It might occur, it might not if there isn’t any idle time that perchance arises.
Or, it might occur, but for only scant split seconds and so the idle moment processing won’t especially guarantee that the processing taking place will be able to complete during any single burst.
Of course, if this idle moment processing could have unintended adverse consequences, such as freezing up and hanging the rest of the AI system, you’d be shooting yourself in the foot by trying to leverage those idle moments. The idle moments are considered bonus time, and it would be untoward to turn that bonus into a failure mode rather than an advantage to the system. If you can’t be sure that the idle moment processing will be relatively safe and sound, you’d probably be wiser to forgo using it, rather than mucking up the AI with a non-required effort.
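To make those requirements concrete, here is a minimal sketch, in Python, of an idle-moment worker: it runs only small, bounded chunks of optional work, it runs them only while the core task signals that it is idle, and it pauses between chunks the instant the core task resumes. The class and method names are my own illustration, not any production design.

```python
import threading
import queue

class IdleWorker:
    """Runs low-priority chunks only while the core task signals idle.

    Hypothetical sketch: the work is submitted as small callables so
    it can be preempted between chunks, never holding up the core task.
    """
    def __init__(self):
        self._idle = threading.Event()      # set => core task is idle
        self._chunks = queue.Queue()        # small units of optional work
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, chunk):
        """chunk: a callable that runs for a short, bounded time."""
        self._chunks.put(chunk)

    def enter_idle(self):
        self._idle.set()                    # an idle moment has begun

    def resume_core(self):
        # Core task is active again; worker pauses after its current chunk.
        self._idle.clear()

    def _run(self):
        while True:
            self._idle.wait()               # block until an idle moment
            try:
                chunk = self._chunks.get(timeout=0.05)
            except queue.Empty:
                continue
            if self._idle.is_set():         # re-check just before working
                chunk()
            else:
                self._chunks.put(chunk)     # defer; core task resumed
```

The key property is that handing resources back to the core task is just clearing a flag; the worker yields after its current small chunk rather than holding anything up.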
Let’s consider this notion of idle moments in the context of driving a car.
Driving and Experiencing Idle Moments
Have you ever been sitting at a red light and let your mind go idle, perhaps daydreaming about that vacation to Hawaii you’d like to take, even while you are there in your car, heading to work once again?
I’m sure all of us have “zoned out” from time-to-time while driving our cars. It’s obviously not an advisable thing to do. In theory, your mind should always be on alert while you are sitting in the driver’s seat.
Some of my colleagues that drive for hours each day due to their jobs are apt to say that you cannot really be on full-alert for every moment of your driving. They insist that a bit of a mental escape from the driving task is perfectly fine, assuming that you do so when the car is otherwise not doing something active. Sitting at a red light is usually a rather idle task for the car and the car driver. You need to keep your foot on the brake pedal and be ready to switch over to the gas pedal once the light goes green. Seems like that’s about it.
In such a case, as long as you are steadfast in keeping your foot on the brake pedal while at the red light, presumably your mind can wander to other matters. You might be thinking about what you are going to have for dinner that night. You might be calculating how much you owe on your mortgage and trying to ascertain when you’ll have it entirely paid off. You might be thinking about that movie you saw last week and how the plot and the actors were really good. In essence, your mind might be on just about anything – and it is likely anything other than the car and driving of the car at that moment in time.
Some of you might claim that even if your mind has drifted from the driving task, it’s never really that far away. You earnestly believe that in a split second you could be mentally, utterly engaged in the driving task, if there was a need to do so. My colleagues say that when driving and in motion they are devoting maybe 90% of their minds to the driving task (the other 10% is used for daydreaming or other non-driving mental pondering). Meanwhile, when at a red light, they are devoting maybe 10% of their mind to the driving task and the rest can be used for more idle thoughts. To them, the 10% is sufficient. They are sure that they can ramp up from the sitting-still 10% to the active-driving 90% quickly enough to handle whatever might arise.
We can likely all agree that while at a red light there is still a chance of something going amiss. Yes, most of the time you just sit still, and your car is not moving. The other cars directly around you might also be in a similar posture. You might have cars to your left, to your right, ahead of you, and behind you, all of them sitting still and waiting for the red light to turn green. You are so boxed in that even if you wanted to take some kind of action with your car, you don’t have much room to move. You are landlocked.
Those that do not allow their thoughts to go toward more idle mental chitchat are perhaps either obsessive drivers or maybe don’t have much else they want to be thinking about. There is the category of drivers that find themselves mentally taxed by the driving task overall. For example, teenage novice drivers are often completely consumed by the arduous nature of the driving task. Even if they wanted to think about their baseball practice or that homework that’s due tonight, they are often so new to the driver’s seat and so worried about getting into an accident that they put every inch of their being towards driving the car. Especially when it’s their parent’s car.
By-and-large, I’d be willing to bet that most of the seasoned drivers out there are prone to mental doodling whenever the car comes to a situation permitting it. Sitting at a red light is one of the most obvious examples. Another would be waiting in a long line of cars, such as trying to get into a crowded parking lot and all of the cars are stopped momentarily, waiting for some other driver to park their car and thus allow traffic to flow again. We have lots of car driving moments wherein the car is at a standstill and there’s not much to do but wait for the car ahead of you to get moving.
Variety of Idle Moments Can Arise
Many drivers also stretch these idling-car moments of mental freedom to include circumstances whereby the car is crawling forward, albeit at a low speed. Presumably, you should not be thinking about anything other than the driving task, and though there might be a carve-out for when the car is completely motionless, it’s another matter altogether to be doing idle thinking when the car is actually in motion. I was inching my way up an on-ramp onto the freeway this morning, and all of the cars were going at turtle speeds while dealing with the excessive number of cars that were all trying to use the same on-ramp. We definitely were not motionless. It was a very slow crawl.
I noticed that the car ahead of me seemed to not be flowing at the same inching-along speed as the rest of us. The car would come almost to a complete halt, and then, with a few inches now between it and the car ahead, it would jerk forward to cover the ground. It happened repeatedly. This was not a very smooth way to inch along. My guess was that the driver was distracted by something else, maybe listening to a radio station or computing Fibonacci numbers mentally, and so was taking a staggered approach to the on-ramp traffic situation.
At least twice, the driver nearly bumped into the car that was ahead of it. This happened because the driver was doing this seemingly idiotic stop-and-go approach, rather than doing what the rest of us were doing, namely an even and gradual crawling forward motion. The car ahead of that driver seemed to realize too that they were almost getting bumped from behind. Several times, the car ahead put on their brake lights, as though trying to warn the other driver to watch out and not hit them. In theory, nobody had to be touching their brakes, since we all could have been crawling at the same speed and kept our respective distances from each other.
I hope you would agree that if the driver was indeed mentally distracted, it was happening in a dicey situation. Once cars are in motion, the odds of something going astray tend to increase. In fact, you might even say that the odds increase exponentially. The car that’s motionless, assuming it’s in a situation where being motionless is expected, likely allows more latitude for that mentally distracted driver.
Notice that I mentioned the motionless car in the context of motionlessness being expected. If you are driving down a busy street and suddenly jam on the brakes and come to a halt, in spite of your now being motionless, it would seem that your danger factor is going to be quite high. Sure, your car is motionless, but it happened in a time and place that was unexpected to other drivers. As such, those other drivers are bound to ram into your car. Imagine someone that just mentally discovered the secret to those finger licking good herbs and spices, and they were so taken by their own thoughts that they arbitrarily hit the brakes of their car and out-of-the-blue came to a stop on the freeway. Not a good idea.
So far, we’ve covered the aspect that when your car is motionless in a situation where motionlessness is expected, you are apt to let your mind wander and turn toward idle thoughts, doing so while the car itself is presumably idling. We’ll acknowledge that something untoward can still happen, and there’s a need to remain involved in the driving task. Some people perhaps reduce their mental attention to driving lower than we might all want, and there’s a danger that the person is not at all ready for a sudden and unexpected disruption to the motionlessness.
Idle Moments and AI Autonomous Cars
What does all of this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. As part of that effort, we also are considering how to best utilize the AI and the processors on-board a self-driving car during so-called idle moments.
Allow me to elaborate.
Your AI self-driving car comes up to a red light. It stops. It is boxed in by other surrounding cars that have also come to a normal stop at the red light. This is similar to a human driver car and the situation that I was using earlier as an example of idle moments. Nothing unusual about this. You might not even realize that the car next to you is an AI self-driving car. It is just patiently motionless, like the other nearby cars, and presumably waiting for that green light to appear.
Here’s a good question for you – what should the AI be doing at that moment in time?
I think we can all agree that at least the AI should be observing what’s going on around the self-driving car and be anticipating the green light. Sure, that would be standard operating procedure (SOP). A human would (should) be doing the same. Got it.
Suppose though that this effort of looking around and anticipating the green light can be done without fully using the computational resources available to the AI system on-board the self-driving car. You might liken this to a human driver who believes they are using only a fraction of their mental capacity for driving purposes when sitting at a red light. The human assumes they can use the remainder of their underutilized mental prowess during these idle moments.
Many of the auto makers and tech firms are not currently seeking to leverage these idle moments for other purposes. To them, this is considered an “edge” problem. An edge problem in computer science is one that sits at the periphery or edge of what you are otherwise trying to solve. The auto makers and tech firms are focused right now on the core: having the AI drive a car down a road, stop at a red light, and proceed when the light turns green.
For my article about the AI self-driving car as a moonshot, see: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For aspects about edge problems, see my article: https://aitrends.com/selfdrivingcars/edge-problems-core-true-self-driving-cars-achieving-last-mile/
For my overall framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
If there are untapped or “wasted” computational cycles that could have been used during an “idle” moment, so be it. No harm, no foul, with respect to the core aspects of the driving task. Might it be “nice” to leverage those computational resources when they are available? Sure, but it isn’t considered a necessity. Some would argue that you don’t need to be going full-blast computationally all of the time and why push things anyway.
When I’ve brought up this notion of potentially unused capacity, I’ve had some AI developers make a loud sigh and say that they already have enough on their plates about getting an AI self-driving car to properly drive a car. Forget about doing anything during idle moments other than what’s absolutely needed to be done.
For my article about burnout among AI developers, see: https://aitrends.com/selfdrivingcars/developer-burnout-and-ai-self-driving-cars/
For my article about the importance of defensive driving by AI self-driving cars, see: https://aitrends.com/selfdrivingcars/art-defensive-driving-key-self-driving-car-success/
For the foibles of human drivers, see my article: https://aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/
They also often toss up reasons not to try to use this available time. The easiest retort is that it might distract the AI from the core task of driving the car. To this, we say that it’s pretty foolish of anyone to use the excess computational resources if doing so puts the core driving task at a disadvantage.
Allow me therefore to immediately and loudly point out that yes, of course, the use of any excess capacity during idle moments is to be done only in a manner subservient to the core driving task. Whatever else the AI is going to do, it must be something that can be immediately stopped or interrupted. Furthermore, it cannot be anything that somehow slows down, stops, or interrupts the core aspects of the driving task.
This is what we would expect of a human driver, certainly. A human driver that uses idle moments to think about their desired Hawaiian vacation, would be wrong in doing so if it also meant they were ill-prepared to handle the driving task. I realize that many humans that think they can handle multi-tasking are actually unable to do so, and thus we are all in danger whenever we get on the road. Those drivers that become distracted by other thoughts that are non-driving ones are putting us all at a higher risk of a driving incident. I’d assert that my example of the driver ahead of me on the on-ramp was one such example.
In short, the use of any of the excess available computational resources of an AI self-driving car, during an idle moment, must be only undertaken when it is clear cut that there is such available excess and that it also must not in any manner usurp the core driving task that the AI is expected to undertake.
Difficulties Of Leveraging Idle Driving Moments
This can admittedly be trickier than it might seem.
How does the system “know” that the AI effort — while during an idle moment — does not need the “excess” computational resources?
This is something that, per my overall AI self-driving car framework, is an important part of the “self-awareness” of the AI system for a self-driving car. This self-awareness capability is not currently being given much due by the auto makers and tech firms developing AI systems for self-driving cars, and correspondingly it is one reason that trying to use the “excess” is not easy for their self-driving cars (lacking such self-awareness, the AI cannot even know when excess might exist).
For my article about self-awareness of AI, see: https://aitrends.com/selfdrivingcars/self-awareness-self-driving-cars-know-thyself/
For the cognition timing aspects of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/
The AI of a self-driving car is a real-time system that must continually be aware of time. If the self-driving car is going 75 miles per hour and there’s a car up ahead that seems to have lost a tire, how much time does the AI have to figure out what to do? Perhaps there’s a component of the AI that can figure out what action to take in this time-critical situation, but suppose the time required for the AI to work out a solution is longer than the time available to avoid hitting that car up ahead? There needs to be a self-awareness component of the AI system that helps keep track of the time it takes to do things and the time available to get things done.
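That kind of time-budget bookkeeping can be sketched as choosing the most thorough analysis routine whose estimated runtime fits within the time available before reaching the hazard. The numbers and function names here are hypothetical, purely for illustration:

```python
def choose_response(distance_m, closing_speed_mps, planners):
    """Pick the most thorough planner whose estimated runtime fits
    the time available before reaching the hazard.

    planners: list of (estimated_seconds, plan_fn), most thorough first.
    Hypothetical sketch of deadline-aware self-awareness.
    """
    if closing_speed_mps <= 0:
        time_available = float("inf")   # not closing on the hazard
    else:
        time_available = distance_m / closing_speed_mps
    safety_margin = 0.5                 # seconds reserved for actuation
    for est_runtime, plan_fn in planners:
        if est_runtime + safety_margin <= time_available:
            return plan_fn              # thorough enough, and in time
    return planners[-1][1]              # fall back to the cheapest reflex plan
```

For instance, a hazard 100 meters ahead with a closing speed of 20 meters per second leaves about 5 seconds, enough for a deliberate planner; at 10 meters away, only the cheapest reflex plan remains viable.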
I’m also focusing my remarks herein on what are considered true AI self-driving cars, which are ones at Level 5. A self-driving car at Level 5 is one that the AI can drive without any human-driver intervention needed or expected. The AI must be able to drive the car entirely without human assistance. Indeed, most of the Level 5 self-driving cars are omitting the brake pedal, gas pedal, and steering wheel, since those contraptions are for a human driver.
Self-driving cars at less than Level 5 are considered to co-share the driving task between the human and the AI. In essence, there must be a human driver on-board a self-driving car at less than Level 5. I’ve commented many times that this notion of co-sharing the driving task is rife with issues, many of which can lead to confusion and to the self-driving car getting into untoward situations. It’s not going to be pretty when we have increasingly foul car incidents, and despite the belief that you can simply say the human driver was responsible, I think that stance will wear thin.
For responsibility about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/
For my article about the dangers of co-sharing the driving task, see: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
For the ethical aspects of AI for self-driving cars, see my article: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For the levels of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
Go along with my indication that there will be a type of firewall between the computational resources used for the core driving task and those that might be “excess” and available, and that at any time, without hesitation, the core driving task can grab those excess resources.
Some of you that are real-time developers will say that there’s overhead for the AI to try to grab the excess resources, and thus it would introduce some kind of delay, even if only a minuscule one. And any delay, even a minuscule one, could make the difference in the core making a life-or-death driving decision in time.
The counter-argument is that if those excess resources were otherwise sitting entirely idle, it would nonetheless also require overhead to activate those resources. As such, a well-optimized system should not particularly introduce any added delay between the effort to provide unused resources to the core versus resources that were momentarily and temporarily being used. That’s a key design aspect.
The next objection from some AI developers is a cynical remark about how the excess resources might be used. Are you going to use them to calculate pi to the nth digit? Are you going to use them to figure out whether aliens from Mars are beaming messages to us?
This attempt to ridicule the utilization of the excess resources is a bit hollow.
In theory, sure, the excess could be used to calculate pi, and it could be used to detect Martians, since presumably neither has an adverse impact on the core driving task. It is similar to the human that’s thinking about their Hawaiian vacation, which is presumably acceptable as long as it doesn’t undermine their driving. Again, I agree that such distraction can undermine a human’s driving, and the same kind of danger could hamper the AI system, but we’re saying that by design the AI system avoids this, whereas with humans it is essentially unavoidable unless you can redesign the human mind.
How then might we productively use the excess resources when the AI and the core driving task is otherwise at an idle juncture?
Let’s consider some salient ways.
Useful AI Processing During Idle Moments
One aspect would be to do a double-check of the core driving task. Let’s say that the AI is doing a usual sweep of the surroundings, doing so while sitting amongst a bunch of cars that are bunched up at a red light. It’s doing this, over and over, wanting to detect anything out of the ordinary. It could be that the core task already operates at a pre-determined depth of analysis.
It’s like playing a game of chess and trying to decide how many ply or levels to think ahead. You might normally be fine with thinking at four ply and don’t have the time or inclination to go much deeper. During the idle moment at a red light, the excess resources might do a kind of double-check and be willing to go to say six ply deep.
The core driving task wasn’t expecting the deeper analysis, nor did it need it per se. On the other hand, a little extra icing on the cake is potentially helpful. Perhaps the pedestrians standing at the corner appear to be standing still and pose no particular “threat” to the AI self-driving car. A deeper analysis might reveal that two of the pedestrians appear poised to move into the street and might do so once the green light occurs. This added analysis could be helpful to the core driving task.
If the excess computational cycles are used for such a purpose and there isn’t enough time to find anything notable, nothing is lost when the partial analysis is discarded and those resources return to the core driving task. On the other hand, if perchance something was found in time, it could be added to the core task’s awareness and be of potential value.
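This ply-deepening double-check is a natural fit for a so-called anytime algorithm: start from the depth the core task already uses, deepen one ply at a time, and keep the best result so far, so that an interruption discards nothing. A minimal sketch, with evaluate_at_depth standing in hypothetically for the actual analysis:

```python
def anytime_deepen(evaluate_at_depth, base_depth, max_depth, should_stop):
    """Iterative deepening that always holds a usable result.

    evaluate_at_depth(d) -> analysis result at ply depth d (hypothetical).
    Starts from the core task's normal depth and deepens one ply at a
    time; an interrupt between plies loses nothing already computed.
    """
    best = evaluate_at_depth(base_depth)   # the core task's normal analysis
    for depth in range(base_depth + 1, max_depth + 1):
        if should_stop():                  # light goes green: stop now
            break
        best = evaluate_at_depth(depth)    # refine with one more ply
    return best
```

If the light turns green after the five-ply pass completes, the six-ply pass is simply skipped and the five-ply result is handed back to the core task.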
Another potential use of the excess resource might be to do further planning of the self-driving car journey that is underway.
Perhaps the self-driving car has done an overall path planning for how to get to the destination designated by a human occupant. Suppose the human had said to the self-driving car, get me to Carnegie Hall. At the start of the journey, the AI might have done some preliminary analysis to figure out how to get to the location. This also might be updated during the journey such as if traffic conditions are changing and the AI system becomes informed thereof.
During an otherwise idle moment, there could be more computational effort put toward examining the journey path. This might also involve personalization. Suppose that the human occupant goes this way quite frequently. Perhaps the human has from time to time asked the AI to vary the path, maybe due to wanting to stop at a Starbucks on the way, or maybe due to wanting to see a particular art statue that’s on a particular corner along the way. The excess resources might be used to ascertain whether the journey might be taken along a different path.
This also brings up another aspect about the idle moments. If you were in a cab or similar and came to a red light, invariably the human driver is likely to engage you in conversation. How about that football team of ours? Can you believe it’s raining again? This is the usual kind of idle conversation. Presumably, the AI could undertake a similar kind of idle conversation with the human occupants of the self-driving car.
Doing this kind of conversation could be fruitful in that it might reveal something else too that the AI self-driving car can assist with. If the human occupant were to say that they are hungering for some coffee, the AI could suggest that the route go in the path that includes a Starbucks. Or, the human occupant might say that they will be returning home that night at 6:00 p.m., and for which the AI might ask then whether the human occupant wants the AI self-driving car to come and pick them up around that time.
For conversational aspects of AI self-driving cars, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/
For further aspects about natural language processing and AI, see my article: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/
There are some that wonder whether the excess resources might be used for other “internal” purposes that might benefit the AI overall of the self-driving car. This could include doing memory-based garbage collection, possibly freeing up space that would otherwise be unavailable during the driving journey (this kind of memory clean-up typically happens after a journey is completed, rather than during a journey). This is a possibility, but it also begins to increase the difficulty of being able to stop it or interrupt it as needed, when so needed.
Likewise, another thought expressed has been to do the OTA (Over-The-Air) updates during these idle moments. The OTA capability allows the AI self-driving car to transmit data up to a cloud established by the auto maker or tech firm, and allows the cloud to push updates and such down into the AI self-driving car. The OTA is usually done when the self-driving car is fully motionless, parked, and otherwise not involved in the driving task.
We have to keep in mind that the AI self-driving car during the idle moments being considered herein is still actively on the roadway. It is driving the car. Given today’s OTA capabilities, it is likely ill-advised to try and carry out the OTA during such idle moments. This might well change though in the future, depending upon improvements in electronic communications such as 5G, and the advent of edge computing.
For my article about OTA and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/
For my article about edge computing and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/edge-computing-ai-self-driving-cars/
Another possibility of the use of the excess resources might be to do some additional Machine Learning during those idle moments. Machine Learning is an essential element of AI self-driving cars and involves the AI system itself being improved over time via a “learning” type of process. For many of the existing AI self-driving cars, the Machine Learning is often relegated to efforts in the cloud by the auto maker or tech firm, and then downloaded into the AI of the self-driving car. This then avoids utilizing the scarce resources of the on-board systems and can leverage the much vaster resources that presumably can be had in the cloud.
If the excess resources during idle moments were used for Machine Learning, it once again increases the dicey nature of using those moments. Can you cut off the Machine Learning if needed? What aspects of Machine Learning would best be suited to the excess resources? These and a slew of other questions arise. It’s not that it isn’t feasible; it’s just that you’d need to be more mindful about whether it makes sense to undertake.
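One way to keep such on-board learning cut-off-able, if it were attempted at all, is to commit it in small, self-contained mini-batch updates, so that an interruption loses at most the batch in flight. A hypothetical sketch (the function names are mine, and this glosses over the real complexities of on-board Machine Learning):

```python
def idle_learning_step(model_update, batches, have_idle_time):
    """Run mini-batch learning updates only while idle time remains.

    model_update(batch): applies one small, self-contained update
    (hypothetical). Because each batch commits independently, an
    interrupt loses at most the single batch in flight.
    """
    completed = 0
    for batch in batches:
        if not have_idle_time():
            break                 # core task needs the resources back
        model_update(batch)       # commit one bounded unit of learning
        completed += 1
    return completed
```

The have_idle_time check before each batch is what makes the learning interruptible at a fine grain, rather than requiring one long uninterrupted training run.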
As a few final comments on this topic for now, it is assumed herein that there are excess computational resources during idle moments. This is not necessarily always the case, and indeed on a particular journey it might never be the case. It is quite possible that the AI core driving task will consume all of the resources, regardless of whether at idle or not. As such, if there isn’t any excess to be had, there’s no need to try and figure out how to make use of it.
On the other hand, the computational effort for an AI self-driving car usually goes up as the driving situation gets more and more complicated. Consider the AI “cognitive” workload for a self-driving car in the middle of a busy downtown city street: dozens of pedestrians, a smattering of bicycle riders, human-driven cars swooping here and there, the self-driving car navigating all of this at a fast clip, perhaps on a road it has never traversed before, and so on. It’s quite a chore to keep track of all of that.
The AI self-driving car was presumably outfitted with sufficient computational resources to handle the upper peak loads (it had better be!). At less than peak loads, and at the lightest workload times, there are usually computational resources going unused. It’s those “available” resources that we’re saying could be used. As stated earlier, it’s not a must-have. At the same time, as the case made herein suggests, it certainly could be some pretty handy icing on the cake.
The next time that you find yourself sitting at a red light and thinking about the weekend BBQ coming up, please make sure to keep a sufficient amount of mental resources aimed at the driving task. I don’t want to be the person that gets bumped into by you, simply because you had grilled burgers and hot dogs floating in your mind.
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.