Zero Knowledge Proofs (ZKP) and AI Systems: The Case of AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

Anyone that knows me, knows that I am a fan of spy novels and spy movies. Maybe that’s why I am also fascinated by Zero Knowledge Proofs (ZKP, or sometimes shortened to ZK), a type of cryptographic method with intriguing possibilities.

I recently had an opportunity to attend ZK Day 2019 at the Simons Institute for the Theory of Computing at the University of California Berkeley, an all-day symposium entitled “Blockchains, Micropayments and Zero Knowledge.” Luminaries of the field were there, and the stellar event was moderated by Dr. Shafi Goldwasser, Director of the Simons Institute and a pioneer and co-founder of the field.

Those of you in the AI field might not be particularly versed in ZKP; perhaps you have heard of it distantly, faintly, yet are unsure of what it consists of. I’ll go ahead and provide you with a layman’s explanation and include some handy references to salient works on the topic. The mathematics of ZKP can be daunting, offering a richness of a high order if you relish “pure” computer science, while if you are less mathematically focused and more software-engineering inclined, there are ways to make use of ZKP by reviewing the open source code and ZKP programs that are available on GitHub.

You might be already puzzled that I’ve said that my fascination with ZKP ties into my savoring spy stories. How do the two connect, you might be wondering?

Let’s start at the beginning. Suppose you want to prove to someone that you know something, a particular something. You could just tell them the something, but in so doing, you have completely revealed it. There might be situations whereby you don’t want to have to reveal the something, and yet nonetheless be able to somehow validate or “prove” that you really do know it.

Seems like a conundrum. How might I convince you that I know something, and yet not reveal the actual something in the process of doing so?


This reminds me of a kind of real-world use case from when I was in college getting my degree in computer science. I was working in the campus computer center to cover my tuition and admittedly would have worked for free due to my love for computer science (what, they’ll pay me to develop software and help others with their programs, sign me up!). There were a bunch of us computer science types that inhabited the computer center, often toying with each other to see who could come up with the cleverest showcasing of our budding knowledge and skills.

We had a manager that worked full-time in a regular administrative job in the computer center, and she kept a tight rein on our potential antics. She had an office that was enclosed within the computer center and had windows that viewed into the area where the computer stations were provided for students doing labs. She would open her blinds when she got to work, watching us like a hawk, making sure that those sometimes lazy or inattentive student-workers manning the lab were keeping on their toes (I don’t blame her, a number of my co-workers would bury themselves at a workstation and not help anyone else, true to an anti-social nature).

One day, a Thursday, she indicated to me that she was going on vacation the next day and would lock up her office, but she also realized that she kept spare equipment in her office, which might be needed while she was away camping. She told me the PIN to the numeric lock on her office door. It was quite an honor, since she seemed to distrust nearly all of us, and she was giving me the key to the kingdom, as it were.

But she also didn’t want others to know that she had given me the PIN. She worried that word would spread about her doing so, and I might get picked-on for being a favored son, as they say, and she also figured that it would get me into a pickle of having to say yes or no whenever spare equipment requests might arise. All in all, she was giving me the PIN as a last resort option and for a contingency use only, which she hoped and assumed would not need to be invoked while she was gone for just one measly day.

Friday morning, I came into the computer center, and was immediately confronted by a pal of mine. He had heard a rumor that the manager had given me the PIN. He was flabbergasted. Why didn’t he get the PIN? The idea of my having the PIN was so outlandish that he dismissed the rumor entirely. It made no sense to him that I would have the vaunted PIN.

I realize that I probably should have remained mum, but, hey, I felt that my honor was at stake, namely why couldn’t I have been chosen to have the PIN? So, I whispered to him quietly, and swore him to utter secrecy, yes, I did indeed possess the PIN. It was locked away in my mind, like a steel vault, never to be opened, never to be revealed.

Of course, he did not believe me. I was saying that I had the PIN to merely fuel the rumors and to make myself look important, at least that’s what he retorted. We are now at the crucial point of my story.

Are you ready?

Prove it, he said.

How was I to prove that I had the PIN? I could have told him the PIN straight out, but I had been sworn to keep it private and only I was to have it. Telling him the PIN was not the way to go.

I could have walked him over to her door, which was on the other side of the lab area and not readily visible, and used the PIN to open the door. He might, though, get a glance at the PIN, and I didn’t want that to happen. Also, he would be able to later on tattletale to the manager and say that I had actually shown him that I could enter her office. Something that might have gotten me fired from my dream job.

Here’s what I did. I told him to wait right where he was. He could see the windows of her office and the blinds were closed, since she was not present in the office. I instructed him to carefully watch the windows of her office.

I then walked around to where her office door was, let myself into her office, quietly, unseen, and turned on the lights. I left them on for about ten seconds, and then turned them off. I snuck out of her office, making sure that I had not been observed.

When I got back to my pal, I asked him what he had seen. He reported that the lights in her office had come on and then gone off. It was me, I did it, I explained. This was my proof that I had the PIN.

He pondered this. Well, it was the case that the manager wasn’t at work, and it was the case that I had seemingly disappeared for a few moments, and it was the case that the lights mysteriously came on and went off, which logically should not have occurred, therefore my claim was plausible.

Notice that I had not told him the PIN. He could have asked me about the PIN, such as how many digits long it is, which might have been a means for me to try to prove that I knew it, but if I told him the number of digits, it would be tantamount to giving a clue about the PIN. I did not want to tell him the PIN, nor provide any clues whatsoever about it.
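To see why even the digit count is a real clue, consider a rough back-of-the-envelope calculation (my own illustrative sketch, not part of any ZKP formalism): if an eavesdropper knows only that a PIN is somewhere from four to eight digits long, learning the exact length collapses the search space dramatically.

```python
import math

# Candidate PINs if all the eavesdropper knows is "4 to 8 digits long"
total = sum(10**n for n in range(4, 9))

# Candidate PINs once the digit count (say, 6) has been leaked
known_len = 10**6

print(total)                         # candidate PINs before the clue
print(known_len)                     # candidate PINs after the clue
print(math.log2(total / known_len))  # roughly 6.8 bits of information leaked
```

Every clue, even one as innocuous-seeming as a length, measurably shrinks an adversary’s work, which is why ZKP insists on revealing none at all.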

I had wanted to provide zero-knowledge about the PIN, not even a scrap of knowledge about it.

I had wanted to provide proof-of-knowledge that I actually did know the PIN.

Thusly, I had performed a zero-knowledge proof-of-knowledge, doing so about my claimed in-hand PIN.

Since saying “zero-knowledge proof-of-knowledge” is a somewhat lengthy statement, a mouthful, we abbreviate it to simply Zero Knowledge Proof. I mention this because it can be confusing to make sense of just the three words, Zero, Knowledge, Proof, since you aren’t sure what the word zero applies to, nor the word knowledge, nor the word proof. I hope you can see the sensibility of it via the long form: it is zero-knowledge revealed, in the act of providing a proof-of-knowledge.

Wait a second, you might be saying, did my pal really believe that I had the PIN, due to the lights on and off evidence?

No, he did not. Being a suspicious person, he pointed out that I might have had an external means to turn her office lights on and off, maybe a remote button of some kind. Or, maybe it was a fluke that the lights perchance went on and off at that particular time. Or, maybe I knew beforehand that the lights were set to run on a timer and would go on and off at just that precise moment.

I shook my head. Really? I would go to that kind of trouble to falsely try to prove that I had the PIN? Well, anyway, I decided that perhaps I could deal with his doubting ways. I told him to once again wait and watch the office window. Away I went, and this time I entered her office, using the PIN again, and twisted the blinds so they flipped one way and then the other, doing so very fast, fast enough that no one could see into the office.

I came back to my waiting and formerly doubting pal. He admitted that it sure seemed like I must have the PIN. He had witnessed two events, one after the other, and the first one gave him a kind of probability or certainty level about my having the PIN, which got further reinforced upon my second alleged visit that flipped the blinds.

You could say that I was the Prover, having to prove something that I claimed to know. You could say that my pal was the Verifier, trying to verify the veracity of my claim that I know something. When you read the literature and research on ZKP, you’ll find that it is customary to label the Prover as a thing or person labeled as “A” and the Verifier as a thing or person labeled “B” – allowing you to then portray the situation in a mathematical way of using A and B.

The letters A and B are somewhat unassuming and so there are those that like to refer to A as “Alice” and B as “Bob,” providing a bit more catchiness to the matter. Since it might be hard to remember which is which, whether Alice is the prover or the verifier, or whether Bob is the prover or the verifier, some like to use the name “Peggy” as representing the Prover and the name “Victor” as representing the Verifier, an easier alignment of the letters that start the respective words.

You now know the rudiments of ZKP. There is a Prover that claims to have some kind of knowledge, which the Prover does not want to reveal per se, including not even offering clues that might reveal it, and there is a Verifier that is trying to ascertain the veracity of the Prover’s claim; the Prover uses some kind of proof to try to convince the Verifier of that veracity.

I think you can directly discern the privacy value of such an approach.


Take the advent of cryptocurrency as an example.

When using blockchain and a cryptocurrency such as bitcoin, the traditional approach involves making openly available the history of a given bitcoin, allowing you to trace it from its origins and to each of the times that it was spent on something. When you ponder this, you realize that the cash you carry in your wallet or purse is not that way. Pull out a five-dollar bill. Can you tell me where it has been, prior to arriving in your hands? Not likely.

The cash you use every day is considered fungible. That five-dollar bill is the same as any other and there is no ready way to trace where any of them have come and gone. Sure, there is a serial number printed on the bill that you can use to identify its origins, but the path thereafter is pretty much a mystery. You likely know where you got the five-dollar bill, and when you spend it you’ll know what you did with it, but anything else in-between is unknown to you and pretty much unknown to everyone else too.

In the case of cryptocurrencies, the ease of traceability comes with being digital. You might like the traceability, and it might give you comfort to know where a particular cryptocurrency item has been. On the other hand, it opens the door to allowing others to know who had it, what they used it for, when they used it, and so on.

I’m guessing you’ve had a moment when you were about to use one of your normal credit cards, realized that by using it you could be traced, and perhaps switched over to cash, doing so to keep away prying eyes (if that’s never occurred to you, perhaps I’ve just triggered you, sorry). Generally, any use of a “traditional” cryptocurrency is going to be similar to the notion of using a credit card that can be traced, though in some ways even less private, since your credit card info might be known mainly by your issuing bank, while the blockchain you are using might be available to anyone on planet Earth.

One of the presenters at the ZK Day event was Dr. Alessandro Chiesa, a Senior Advisor at the Simons Institute and a faculty member at UC Berkeley. He is a co-inventor of the Zerocash protocol, which endeavors to use ZKP for blockchain and cryptocurrencies. This adds a capability of privacy that otherwise is not inherent in traditional cryptocurrencies. For a handy foundational paper from 2014, which also covers zk-SNARKs (zero-knowledge Succinct Non-interactive ARguments of Knowledge), see:

In terms of classic reading on ZKP, you might want to look at this groundbreaking paper that essentially got ZKP rolling: The paper, “The Knowledge Complexity of Interactive Proof Systems” (1985), was co-authored by Shafi Goldwasser, Silvio Micali, and Charles Rackoff. Dr. Silvio Micali was also a presenter at the ZK Day event, providing insights about a version of blockchain called Algorand:

I’ll soon herein be discussing the ZKP as it applies to AI self-driving cars, and meanwhile some devoted readers will remember this piece on blockchain for AI self-driving cars that I wrote for my column:


You might be anxious that so far I have not seemingly said anything about spy novels and spy movies, yet I promised earlier that I would connect the dots between ZKP and spy stories.

For those of you that like classic spy movies, consider the famous Alfred Hitchcock-directed film Torn Curtain.

The story takes place during the Cold War era. Actor Paul Newman plays a character named Michael Armstrong, who is an American rocket scientist, and before I say anything else, PLOT SPOILER, he opts to defect to East Germany on the other side of the Iron Curtain, but it is a ruse to get him access to Professor Lindt, played by actor Ludwig Donath, in hopes of America learning a rocketry formula that Lindt has perfected. Sidenote: there is a love interest twist to the story, involving Julie Andrews playing the role of Sarah Sherman, so it’s got both spying and romance, for those of you that like that cup of tea.

In any case, there is a pivotal moment in the film when finally, the American rocket scientist has a moment alone with the East German (Soviet Union, essentially) rocket scientist, occurring after all kinds of shenanigans and plot turns. They are standing at the chalkboard that contains Lindt’s secret formulas, and there is an empty space for the crucial missing link that the Americans have not been able to solve.

We are now ready for the connection to ZKP.

Lindt asks the American rocket scientist, Michael Armstrong, to fill-in the missing piece. He does so. Say what, have the Americans just given the missing secret away? Well, Lindt immediately tells Michael Armstrong that the missing piece shown will not work. Michael Armstrong insists that it does. This gets Lindt into a dander, and he erases the formula portion written by the American, and proudly shows him the right missing piece, the one that Lindt had discovered. Oops, bad news for Lindt, he has just revealed the secret to the American. The rest of the movie involves the American rocket scientist trying to get away with the new knowledge embedded in his mind and bring it back to the USA.

Lindt was fooled into giving away his secret.

I mention this because there is an added twist to the ZKP aspects.

You might have a Prover that is genuine (let me use the letter G to mean being genuine), which I’ll call P-G for ease of reference. You might also have a Verifier that is genuine, which we’ll call as V-G. But we might have spies, adversaries are what they are usually called in ZKP research, which I’ll label with the letter “A” for adversarial and thus we could have a P-A (a Prover that’s an adversary) and we could have a V-A that’s a Verifier that’s an adversary.

To be clear, we have four instances, two types of a Prover and two types of a Verifier:

  •         P-G: Prover Genuine
  •         V-G: Verifier Genuine
  •         P-A: Prover Adversary (spy)
  •         V-A: Verifier Adversary (spy)

Let’s make the safest assumption about things, namely that a genuine Prover does not necessarily know that the Verifier is genuine, and nor does the genuine Verifier know that the Prover is genuine. A spy, or adversary, might be standing in place and trying to act like they are genuine. One never knows.

Thus, we have these four circumstances when an event occurs:

  •         P-G parlays with V-G (this is the usual expected parlay)
  •         P-G parlays with V-A (the Verifier is an adversary, a spy in our midst)
  •         P-A parlays with V-G (the Prover is an adversary, trying to trick a genuine Verifier)
  •         P-A parlays with V-A (it’s classic spy-versus-spy, though neither might realize it!)

These circumstances are important because the Prover should be cautious about revealing anything about the secret when trying to prove the veracity of the claim about the Prover knowing the secret, since otherwise they might be revealing something to an adversary (a spy!), the no-good evil V-A. Don’t want to do that.  Lindt got so riled-up about the American scientist’s lousy formula that Lindt revealed entirely the missing piece, the one thing that the American scientist had been seeking all along and had endured his false defection to obtain.

Generally, with ZKP, you want to avoid having to pre-establish trust between the Prover and the Verifier (this is considered a trustless setup, meaning they don’t need to particularly trust each other per se, which reduces other potential burdens). If feasible, the Prover and Verifier should be able to come upon each other, transact their business, and yet feel relatively safe that the Prover did not give away something, even though they were in essence complete strangers when they came together. If they have to be trusted buddies beforehand to make things work, it’s going to create potential holes in the robustness and make the scheme more arduous to employ (you can do it that way, I’m just saying it is perhaps less parsimonious).

I’ll make things even harder.

Let’s assume that you have an observer, O, which might be one or more others that are watching the Prover and the Verifier. The observers might be prying adversaries, spies, bent on finding out the secret that the Prover possesses. Once again, the Prover is not supposed to reveal the secret, and nor any clues about it, such that then any adversarial observers are still in-the-dark about the secret itself.

Consider too the Verifier. The Verifier might know the secret already, and they are merely trying to find out whether the Prover knows it too; for example, if my pal at the computer center had already known the PIN, he might have wanted to discover whether I really knew it too. In fact, it just so happened that he did not know the PIN, and it was only me that knew it. The point is that the Verifier can be in one of two conditions: they know the secret, or they do not know the secret.

This is worthy of consideration since sometimes people get confused about ZKP and assume that both parties, the Prover and the Verifier, both know the secret. This does not need to be the case. My pal did not know the PIN. He was solely interested in knowing whether I knew the PIN. I’m sure he would have gladly wished that I might have divulged the PIN, either accidentally or intentionally, but that would have gotten me into great trouble.


This brings me to my second movie that is a favorite spy story and ties to ZKP.

Sean Connery, best known as James Bond, the famous “spy” (a good-guy spy), appeared in the movie The Russia House, playing the character Barley Scott-Blair, a has-been British book publisher and a normal person that has no particular special skills or powers. He routinely visits Moscow for business purposes. He meets a beautiful and low-key Soviet woman, Katya Orlova, played by actress Michelle Pfeiffer, and SPOILER ALERT, they fall for each other (spying and romance, once again!), somewhat at her doing, since she is trying to connect Barley with a character known as “Dante” (superbly played by actor Klaus Maria Brandauer), which is not his real name, since he is hiding his identity, though Katya knows who he is.

It turns out that Dante is a top Soviet physicist and he has a manuscript that details nuclear arms secrets, which he wants to reveal to the British and the Americans in hopes of staving off an all-out nuclear war. But the CIA and MI6 are dubious that Dante is really a top Soviet physicist. So, the CIA and MI6 concoct a list of questions for this Dante and ask Barley (Sean Connery) to meet privately with Dante and have Dante answer the questions.

Sounds good. A private meeting between Dante, the alleged Soviet physicist that wants to do “the right thing” and share Soviet nuclear secrets with the Americans and the British, via the British book publisher. Keep in mind that Barley, the British book publisher, knows nothing at all about nuclear arms. He will merely collect the answers from Dante and bring them back to the CIA and MI6, so that nuclear experts there can ascertain whether Dante knows what he claims to know.

Are you ready for the connection to ZKP?

Once the CIA and MI6 give the questions to Barley, and Barley goes to meet with Dante, it suddenly dawns on the CIA and MI6 that maybe they’ve just shot themselves in the foot. How? They realize that the list of questions is a kind of reveal. The questions themselves showcase aspects about nuclear arms in a manner that inadvertently suggests what the Americans and British know about nuclear arms. Ouch!

The mere act of asking questions can be a tell.

Suppose you have planned a surprise birthday party for a friend of yours. Nobody leaks the secret. The day before the event, you see your friend, and you ask them pointedly what they are doing tomorrow at noon. Your friend, not having previously suspected that a surprise birthday party might be planned, gets highly suspicious that you would ask such a question. You have revealed a clue about a secret.

When you have a Prover that is trying to prove the veracity of a claim of a secret, I’ve emphasized that the Prover is not to reveal the secret nor any clues about it. You might have a Verifier that, rather than merely receiving your proof, might instead be asking a question or series of questions, doing so to essentially manufacture the proof from the answers that you provide. If so, the Verifier has to be mindful of not asking questions that could reveal either the secret or any clues about the secret, assuming that the Verifier knows what the secret is (as learned from The Russia House movie!).

In ZKP, there are interactive proof-of-knowledge processes and there are non-interactive proof-of-knowledge processes. As you can likely see, the interactive approach has the potential for either the Prover leaking the something, or the Verifier leaking the something. They both need to be quite careful in what they say or do. You don’t want a happenstance of loose lips to sink the ship.
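For a flavor of how the non-interactive variety sidesteps the loose-lips problem, one widely used idea is the Fiat-Shamir heuristic: the Prover derives the challenge from a hash of her own commitment, so the Verifier never has to ask a question at all, and thus cannot leak anything by asking. The sketch below is my own toy illustration of a Schnorr-style proof of knowing a discrete logarithm, with made-up tiny parameters; it is not drawn from any of the papers mentioned here.

```python
import hashlib
import secrets

# Toy Schnorr-style proof of knowledge of a discrete log, made
# non-interactive via the Fiat-Shamir heuristic: the challenge is a
# hash of the Prover's commitment rather than a Verifier message.
# Tiny illustrative parameters -- NOT cryptographically secure.
p, q, g = 1019, 509, 4           # g generates a subgroup of order q mod p

def prove(x):
    """Peggy proves she knows x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)                 # fresh secret nonce
    t = pow(g, r, p)                         # commitment
    c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q                      # response: x stays hidden behind r
    return y, t, s

def verify(y, t, s):
    """Victor recomputes the hashed challenge and checks g^s = t * y^c mod p."""
    c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=42)
print(verify(y, t, s))   # True
```

Because the challenge comes from a hash rather than from Victor, there is no back-and-forth exchange in which either side’s questions could become a tell.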

This brings up another variant that sometimes is considered. Should you consider using potential falsehoods or extraneous elements in your proof-of-knowledge to possibly throw-off an adversary that is playing the role of a Prover or Verifier, or to throw-off any observers (spies) that are watching the matter?

Putting on your spy hat, think of the clue aspects of ZKP in terms of a set of clues, which I’ll call C. So far, I’ve emphasized that the set C is supposed to be the null set, namely there aren’t any clues provided. That’s the conventional wisdom, and likely the more facile way to go.

There might though be a tradeoff involving reducing computational expense or time-effort if you are willing to reveal some clues in the course of displaying the proof (or, as I’ll mention in a moment, maybe raise the bar on your adversaries).

I might be willing to showcase some genuine clues, refer to them as C-G, depending upon the payoff involved in doing so. Meanwhile, I might also be willing to offer some false clues, which I’ll refer to as adversarial clues, C-A, doing so to distract or confound any adversaries that might be immersed in the matter.

I could have a set C that has only C-A members, meaning it is entirely fake clues, or I might have a set C that has only C-G members, meaning all bona fide clues, or I might have a mixture of both C-G and C-A, doing so to potentially hide the genuine ones, the C-G’s, among the fake ones, C-A’s. This set will be ascertained for any particular instance of a Prover and Verifier encounter.

Notice that the use of clues is going to hopefully make life harder for any adversaries, since they are now needing to contend with clues, for which they aren’t sure which are genuine, and which are not. That’s handy. The downside potentially is that the clues might undermine the efforts of a P-G parlaying with a V-G, making it more arduous for them to achieve their goals. Again, it’s a balance of the need to make life hard for adversaries, and still get to the desired proof-of-knowledge.

Some would say this is heresy since the core principle is “zero knowledge” revealed, and the use of clues would potentially undercut that notion, though if the clues are all C-A, you could claim that it is still a zero knowledge reveal in terms of not having revealed any genuine facets.

I presumably would only reveal any minimal C-G’s, if any at all, and so you might say this is a “zero-min knowledge” threshold (that’s my own verbiage, not a standard), meaning that I might provide zero knowledge or some minimal amount of knowledge, doing so as a variant of trying to confound adversaries and also if it might provide some other worthy payoff such as reductions in computational expense or time.


Let’s ponder the potential of falsehoods or extraneous and potentially misleading elements.

My manager at the computer center had gotten her degree at another university, a college considered a rival of the university she now worked at and that I was attending. In her office, she had banners of the rival college, but none of the one that was now providing her a weekly paycheck. Strange perhaps, but true.

Suppose my pal, while trying to get me to verify that I had the PIN to the manager’s office, asked me to go back into the office and wave a banner of our college. If he asked me to do so, I would be highly suspicious of him, since I knew that there wasn’t such a thing in her office. Meanwhile, if I said yes to his request, it would make him suspicious of me, since my agreeing would imply that I apparently thought there was such a banner in there, revealing that I had never actually been inside her office.

The banner has nothing to do with the PIN in terms of showcasing the PIN or providing any clue about the PIN. It has to do with me, the Prover, and him, the Verifier, trying to sound out each other and ascertaining whether one of us, or maybe both, are really spies and don’t know the secret PIN at all.

This can though lead us down a bit of a rabbit hole. Did the Verifier ask a question that was by-design trying to determine my veracity, or did they accidentally ask a mistaken question, or were they the spy and didn’t realize that the question was a potential reveal?

This brings up a related matter. If an observer were watching me and my pal, and they watched as I disappeared, and they saw the light go on and off, and the blinds getting flipped, what would they now know? They still do not know the PIN, and nor have any clues about the PIN.

You might say though that the observers do know that I apparently have the PIN, which in of itself might be a bad thing for them to be able to ascertain. Imagine if other student-workers in the computer center were watching me and my pal, they could now potentially confront me and claim that they believe I have the PIN, maybe asking me to grab some spare keyboards and printer cartridges from the manager’s office.

There is a chance, though, that my pal and I had actually colluded all along, neither of us having the PIN. Maybe beforehand we prearranged to have a timer or switch on the lights, and we had a contraption that would flip the blinds. The whole demonstration might be a ruse. We could simply tell the other student-workers that challenged us that we did the whole thing as an April Fool’s prank, and they would not have any means per se to disprove the claim.


Recall that I had earlier stated that upon turning the light on and off, my pal was somewhat convinced that I had the PIN, but he was not entirely convinced. I offered to make a second visit, which then added to his level of confidence that I likely did have the PIN. I might have done a similar act a third time, a fourth time, and so on, and in so doing the odds that I really did have the PIN would seemingly continue to increase, assuming that I was each time able to do something that showcased the likelihood that I did have the PIN.

Typically, with ZKP, the Prover is demonstrating that they likely know the secret, and yet it is not necessarily to a one-hundred percent degree of certainty that the Prover does know the secret. The Verifier has to decide what degree of certainty or probability they are willing to accept as sufficient for the proof-of-knowledge. For my pal, the apparent fact that I had seemingly turned the lights on and off could, on its own, have been sufficient proof that I had the PIN.

The Verifier, though, might want to be more assured about the chances of the Prover really possessing the secret. If you can set up a series of challenges, doing so in a savvy statistical way, it allows the certainty to be so high, assuming the Prover meets the challenges, that you can be relatively assured the Prover does have the knowledge claimed. You might still have a bit of nagging doubt, but it is hopefully so small that you are willing to absorb it and not feel you are taking an excessive risk in believing the claim.

In that sense, Zero Knowledge Proofs are probabilistic verification.
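The arithmetic behind that probabilistic assurance is simple. Assuming (my illustrative assumption, matching the lights-on/lights-off and similar stories) that each round of a challenge can be bluffed by a no-knowledge Prover with a 50/50 chance, the odds of bluffing every round shrink geometrically:

```python
# Probability that a Prover with NO knowledge of the secret passes
# n rounds purely by luck, assuming each round can be bluffed with
# probability 1/2.
def bluff_probability(rounds):
    return 0.5 ** rounds

for n in (1, 10, 20, 30):
    print(n, bluff_probability(n))
# After 30 rounds, a bluffer slips through less than once in a billion.
```

The Verifier simply keeps adding rounds until the residual doubt is smaller than whatever risk they are willing to absorb.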

There are some that say they don’t want to use something that provides any chance of risk or uncertainty. Not even if the risk is infinitesimal. For them, a one-in-a-zillion chance that you are perhaps falsely accepting the proof-of-knowledge is too much. The usual retort is that they likely already absorb risks that they don’t even realize they are taking, blind to the uncertainties inherent in things they use every day.

Remember, the only “certain” things in life are death and taxes.


There are several instructive stories or tales that are often used to explain the nature of ZKP.

One of the classic ones involves a fictional variant of the folk story of Ali Baba, used in a paper by Jean-Jacques Quisquater and Louis Guillou, presented at CRYPTO ’89 and entitled “How to Explain Zero-Knowledge Protocols to Your Children” (see the paper at ).

The Ali Baba version for ZKP has evolved over time. I’ll give you a quick recap as illustrative further about the nature of ZKP.

Imagine a cave that is a circular tunnel, consisting of one entrance, and from that entrance you can take a path to the right or a path to the left, all of which leads you back to the entrance. In the middle of the circular tunnel, we’ll put a door. It’s an impressive door. There is no means to open the door, unless you know the secret phrase. Okay, you twisted my arm, it’s the phrase “Open Sesame” (SPOILER ALERT!).

Anyway, we have Peggy (remember, she’s our Prover), and we have Victor (he’s our Verifier), and Peggy knows the secret phrase, but Victor does not. Please do not reveal the secret phrase since I’ll get in trouble with Peggy, thanks. Victor wants to have Peggy prove or provide proof-of-knowledge about the secret phrase that opens the door, meanwhile Peggy does not want to reveal what the phrase is (she’s a zero-knowledge non-revealer). This should seem familiar as a setup by now.

Here’s what they do. Peggy goes to the entrance, steps inside, and unseen by anyone else, she chooses either the path to the right or the path to the left. Victor then comes up to the entrance. He stands just inside and yells out the word “right” or the word “left” which means he wants Peggy to come to the entrance from that respective side (as based on facing the tunnel entrance).

Peggy indeed complies and let’s say Victor yelled out “left” and she shows up from that side of the tunnel. Did she do so because she happened to have gone to the left, but got stuck at the door, and waited until she heard Victor yell, or did she perhaps go to the right and had to use the secret phrase to open the door and come along to the entrance via the path on the left?

You don’t know for sure which way she did it; even if she did come out on the left, there is still a 50/50 chance that she simply guessed. Victor steps out of the entrance. Peggy once again picks either right or left, whichever suits her fancy. Victor steps in. He yells out, say, “right” again. Peggy comes out via the right side. If she can keep succeeding, you eventually are going to be statistically pretty certain that she must be using the door and must therefore know the secret phrase (assuming there aren’t ways to cheat, like having a superpower of walking through magical locked doors).
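The Peggy-and-Victor back-and-forth can be captured in a tiny simulation (a sketch of my own; the names, structure, and round count are illustrative, not any standard library):

```python
import random

def cave_round(knows_phrase):
    """One challenge: Peggy picks a path unseen, then Victor calls out a side."""
    peggy_side = random.choice(["left", "right"])
    victor_call = random.choice(["left", "right"])
    if knows_phrase:
        return True                      # the door lets her exit on either side
    return peggy_side == victor_call     # otherwise she succeeds only by luck

def convinced(knows_phrase, rounds=20):
    """Victor accepts only if Peggy survives every single round."""
    return all(cave_round(knows_phrase) for _ in range(rounds))

print(convinced(True))   # → True: an honest Peggy always passes
```

A bluffing Peggy survives twenty rounds with probability about one in a million, which is the statistical certainty the story is driving at.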

That’s a handy example of ZKP.

For those of you into complexity theory and computer science, you likely know about the famous 3-color graphs problem. Whether you happen to know the 3-color problem or not, you might enjoy the online interactive demonstrator that MIT has set up, allowing you to learn about ZKP by playing a game as a Verifier and using a 3-color graph (it’s easy to play and, most importantly, there isn’t any homework, nor grades assigned, nor any mandatory final exam), here’s the link:

A handy paper on ZKP that provides mathematical formulations that you might find of interest is this one: The author, Boaz Barak, uses a Coke versus Pepsi challenge, now a popular ZKP example, as illustration.

Essentially, you claim as the Prover that you can tell the difference between Coke and Pepsi. As the Verifier, I have Coke in a paper cup in one hand and Pepsi in a paper cup in the other. I put them behind my back, perhaps switch the cups from one hand to the other, and present you with a cup; you take a sip and tell me whether you have just drunk Coke or Pepsi. I know which one I’ve given you to taste. We do this enough times until, ultimately, I either believe you or not (again, the statistical element comes into play, akin to Ali Baba).

Rather than using the Coke versus Pepsi challenge, some like to use an example of presenting you with two balls, one red and one blue, with nothing else to distinguish one ball from the other. Maybe I’m color blind and both balls look the same to me. You inform me that you are not color blind and can discern which ball is which. I doubt you. I show you the balls, put them behind my back, maybe switch them or not, and then present the two balls, asking you to say which is red and which is blue. We do this enough times until I believe you are in fact not color blind.

Back to the Coke versus Pepsi challenge: Barak spells out two essential criteria for ZKP that I’ve somewhat implied but not yet formally defined, one about soundness and the other about completeness. He says (see page 2 of his paper):

“A proof system is sound if you can never derive false statements using it. Soundness is a minimal condition, in the sense that unsound proof systems are not very interesting.”

“A proof system is complete if you can prove all true statements using it. Similarly, we say it is complete for a family L of true statements, if you can prove all statements in L using it.”

One aspect to keep in mind is that there is not just one way to do Zero Knowledge Proofs. This means that somebody might approach you and say they have a means of achieving ZKP, but you should be cautious in assuming they really do. You would want to know how they came upon their approach, and at a minimum you would ask:

  •         Is it sound or does it exhibit soundness to a sufficient degree?
  •         Is it complete or does it exhibit completeness to a sufficient degree?
  •         Is it truly zero-knowledge in that it reveals nothing about the core knowledge at hand and nor any clues, whether overt, subtle, or by happenstance, about the core knowledge?

One of my favorite watchable videos on ZKP is Michael Rabin’s talk for the Ada Lovelace Lecture on Computability (the lecture series is in honor of Augusta Ada Lovelace):  Rabin is a well-known mathematician and computer scientist, and received the Turing Award with Dana Scott for their paper entitled “Finite Automata and Their Decision Problems,” which laid the foundation for nondeterministic machines. It’s a classic paper:

In the video, Rabin uses the Where’s Waldo example, which has become popular as an example of ZKP.

Essentially, as you likely know, Where’s Waldo typically consists of trying to find the character Waldo on a page that has lots and lots of other characters and images, thus, he is there but “hidden” by obscurity. The ZKP version is that you take a sheet of paper, cut out a hole, just large enough to showcase Waldo, you then place the paper over the book page, making sure that the hole you’ve made reveals Waldo.

Assuming you do this well enough, you have not revealed the location of Waldo, which is the secret, and yet “proved” that you have the knowledge of where Waldo is.

Another popular example uses Sudoku, the numbers game consisting of nine 3×3 subgrids that form a 9×9 overarching grid, a mathematical puzzle requiring you to fill in the cells using only the digits 1 to 9 so that each row, column, and subgrid contains each digit exactly once. Suppose that you are able to solve the puzzle and want to convince someone else that you did indeed find a true solution, but you don’t want to show them the solution per se. Once again, you can use ZKP to essentially prove that you likely do have a solution in hand.
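One common construction for the Sudoku case (a simplified sketch of my own, not the full published protocol) goes like this: the Prover relabels the digits with a fresh random permutation, commits to every cell, and the Verifier asks to open just one randomly chosen constraint, say a row, checking only that the opened row holds each digit exactly once. Because the relabeling is fresh each round, the opened row reveals nothing about the real solution:

```python
import hashlib
import random
import secrets

def commit(v):
    """Hash-based commitment: publish the digest, keep the nonce for opening."""
    nonce = secrets.token_hex(8)
    return hashlib.sha256(f"{nonce}:{v}".encode()).hexdigest(), nonce

def one_round(solution):
    """One Prover/Verifier round over a claimed 9x9 Sudoku solution."""
    relabel = list(range(1, 10))
    random.shuffle(relabel)                      # fresh digit permutation each round
    masked = [[relabel[v - 1] for v in row] for row in solution]
    commits = [[commit(v) for v in row] for row in masked]
    r = random.randrange(9)                      # Verifier picks one row to open
    opened = [(masked[r][c], commits[r][c][1]) for c in range(9)]
    # Verifier: openings must match the commitments, and the row must be 1..9
    ok = all(
        hashlib.sha256(f"{n}:{v}".encode()).hexdigest() == commits[r][c][0]
        for c, (v, n) in enumerate(opened)
    )
    return ok and sorted(v for v, _ in opened) == list(range(1, 10))

# A valid filled grid built from a standard shifting pattern
grid = [[(3 * (i % 3) + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
assert all(one_round(grid) for _ in range(30))
```

The full protocol also challenges columns and subgrids across many rounds, so a bogus solution gets caught with probability approaching certainty.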

You’ll be happy to know that there are lots of examples involving a myriad of mathematical puzzles, such as using Hamiltonian cycles, using discrete logs, and so on. You can take a look at the papers I’ve already mentioned and there is a plethora of such in-depth mathematical examples.

A practical consideration about Zero Knowledge Proofs is the amount of effort or time required to carry out the proof. It’s handy to avoid revealing your secret, but suppose it takes a lot of back-and-forth to convince the Verifier, consuming time, along with potentially heavy computation by the Prover, the Verifier, or both.

For a real-time system making use of ZKP, you’d want to be relatively confident that the time crunch does not undermine what the system needs to accomplish. I’m not saying that ZKP is a computational or time-consuming beast, only pointing out that it is an important aspect to tune when implementing ZKP, as with any cryptographic approach. Likewise, you’d want to ensure there are error-detection and mitigation features included, in case any disruption or difficulty is encountered during the course of the proof process.

One thought is that perhaps a ZKP can be done in a one-step or one-shot (non-interactive) manner, which would obviously cut down on the back-and-forth of a multi-step approach. This is a keen idea, though difficult to achieve under certain situations and desired thresholds. I’ve mentioned in my writings that one-shot Machine Learning or Deep Learning is a hoped-for aspiration, yet continues to be outside our grasp to-date in any generalizable way:

Here’s a classic paper that discusses one-step ZKP:

Speaking of practical aspects, I’ve already noted that the Zero Knowledge Proof approach can be used for blockchain and cryptocurrencies, and I’d bet the value it provides there will gradually become more widely realized. To me, this is a great applied example. Blockchain and cryptocurrencies are still in an infancy stage of maturation, and widespread acceptance and use of ZKP within them will likely take a while, but inexorably it makes sense.

Another practical example that I like to cite involves the use of Zero Knowledge Proofs for nuclear warhead verification. In a study done for the U.S. Department of Energy, one of the difficulties about verifying whether a nuclear warhead has been made inert or not, or the magnitude of the warhead, involves having to potentially reveal secrets about the warheads themselves.

Given the distrust between the counting parties, generally the US and Russia, there is naturally resistance to do verifications if you also need to show your secrets. Here’s the study:

You’ll be glad to know that it does not involve putting red and blue colored balls behind your back. Instead, it involves irradiating two or more putative warheads with high-energy neutrons and capturing a kind of radiation-result fingerprint, consisting of the neutron transmission and emission counts. Doing so in a ZKP manner appears to keep the design and composition of the nuclear warhead hidden, and therefore avoids the touchy sticking point about such verifications. The approach could be used for deployed nuclear arms and for non-deployed ones.

Throughout these examples, the manifestation of the proof is typically:

  •         The Prover takes some action that showcases they likely know the secret (such as my turning the lights on and off in the office manager’s office, or Peggy showing up at the correct path in Ali Baba when called by Victor, or showing Waldo), yet does not reveal the secret itself (I didn’t reveal the PIN, Peggy didn’t reveal “Open Sesame,” and Waldo’s location is not revealed).
  •         The Prover calculates something and reveals the calculated result, suggesting they likely know the secret (as done for mathematical puzzles and the like), yet the calculated result reveals neither the secret nor any clues about it. This is the cornerstone of computer-based ZKP.
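The second bullet typically rests on a commitment scheme: the Prover publishes a binding, hiding fingerprint of a value now, and can open it later without having leaked the value in the meantime. A minimal hash-based sketch (function names are my own; production protocols use carefully analyzed commitment schemes):

```python
import hashlib
import secrets

def commit(value: str):
    """Return (commitment, nonce): publish the commitment, keep the nonce secret."""
    nonce = secrets.token_hex(16)   # fresh randomness makes the commitment hiding
    digest = hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest()
    return digest, nonce

def open_commitment(digest: str, nonce: str, value: str) -> bool:
    """Verifier checks the later-revealed value matches the earlier commitment."""
    return hashlib.sha256(f"{nonce}:{value}".encode()).hexdigest() == digest

digest, nonce = commit("Open Sesame")
assert open_commitment(digest, nonce, "Open Sesame")       # honest opening succeeds
assert not open_commitment(digest, nonce, "Open Barley")   # any other value fails
```

The commitment binds the Prover to a single value (they can't change their answer after seeing the challenge) while revealing nothing about it until, and unless, it is opened.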

Some in the ZKP arena have explored whether you might relax some of these characteristics, and if so, what implications arise from the relaxation. For example, suppose you sternly cling to not revealing the secret itself, but are willing to provide clues about the secret (as per my earlier discussion herein), perhaps doing so in a manner that makes the clues quite difficult for others to discern or exploit. These variants are worth exploring as to the strengths versus weaknesses of a desired approach to ZKP.


Now that you are up-to-speed about ZKP, let’s consider how this applies to AI systems.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are some interesting potential applications of ZKP in the AI autonomous vehicles space, touching upon various AI elements overall, which we are currently exploring.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task:

  •         Sensor data collection and interpretation
  •         Sensor fusion
  •         Virtual world model updating
  •         AI action planning
  •         Car controls command issuance

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. Some pundits of AI self-driving cars continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

Returning to the topic of Zero Knowledge Proofs, let’s explore some areas that ZKP can be applicable to AI self-driving cars.


As a rule-of-thumb, when exploring opportunities for the application of ZKP, you should consider as part of your analysis that you are most likely seeking a situation involving two parties (or possibly more) that want to or need to confer with each other, doing so without revealing a something of interest to both parties, yet allowing the parties to be comfortable that the something is known to (at least) one of them and believed (or proven) by the other to be so known.

In a straightforward manner of speaking, if you have a potential Prover and a potential Verifier, and you are concerned about what is communicated between the two, namely you don’t want any secret traffic that might get broken into, you should be thinking about ZKP. It is not axiomatic that you would always choose ZKP in such a situation, but it certainly is a tool you should have in your toolkit. If it’s the only tool you have, that’s probably not good, since as they say, when you only have a hammer, the whole world looks like a nail.

A savvy computer scientist has an entire bag full of tricks, or tools, upon which the appropriate tool should be considered for the appropriate circumstance. If you don’t yet have ZKP in your toolkit, you ought to bone-up and make sure it is included.

If you are developing an AI system involving situations of two (or more) parties carrying on a transaction of some kind, and you are at first tempted to have them exchange a password or some means of establishing trust, you might want to rethink the matter and ponder whether ZKP might be a handy option.

I’ll also mention that there is not anything mutually exclusive about using ZKP alongside a more conventional secret-exchange cryptographic method, since sometimes the more the merrier. A cybersecurity mantra is to have multiple layers of security, similar to how your house might have locks on the doors (one layer), be in a gated community (another layer), and have a guard dog that barks at strangers (in my case, a scary-sounding, all-powerful dachshund in the house).


In the case of AI self-driving cars, I’ll outline some key circumstances that fit the notion of having a Prover and a Verifier, in which the parties might want to preserve privacy about some aspect of the matter at-hand that causes them to communicate with each other.

They are:

  •         V2V (vehicle-to-vehicle) electronic communication, say car to car
  •         V2I (vehicle-to-infrastructure) electronic communication, say car to roadway infrastructure
  •         V2P (vehicle-to-pedestrian) electronic communication, say car to pedestrian
  •         OTA (over-the-air) electronic communications, say car to auto maker cloud
  •         GPS (global positioning system) electronic communications, say car to GPS cloud
  •         In-Car NLP (natural language processing), human to car interaction

In each of these above use cases, there is a potential Prover and a potential Verifier, and they want to have some kind of discourse with each other.

For example, there is a notion that AI self-driving cars will possibly communicate electronically with pedestrians, V2P. This could include the AI sending out a message warning a pedestrian that the self-driving car is about to take the corner ahead and that they should perhaps step back from the curb, or the pedestrian sending a message to the AI self-driving car stating that they urgently need to cross the street and want to jaywalk because of an emergency.

One ongoing debate involves whether or not the AI self-driving car should be emitting some kind of identification when it undertakes these electronic communications. It could be the VIN (Vehicle Identification Number) or some other kind of presumably uniquely assigned number. You might say that it makes sense for the AI self-driving car to identify itself, rather than being anonymous.

Privacy though is a serious concern and AI self-driving cars have a doozy of a privacy matter. If your AI self-driving car is continually transmitting messages with other cars (V2V), and with the roadway such as traffic signals and bridges (V2I), and with pedestrians (V2P), or any variant thereof (referred to as V2X), you are effectively leaving a trail of where your AI self-driving car has been, assuming that each such message is accompanied by an identifier of the AI self-driving car.

Would you want to go riding in your AI self-driving car and find out later on that someone could reconstruct your trip? Spooky. As such, each of these AI self-driving car messaging matters could potentially make use of ZKP.

You might be thinking that you can just strip out any identifier and send such messages, and thus there appears at first glance to be little need for ZKP. The problem, though, is that someone untoward could send out V2V, V2I, V2P, etc., insincerely, playing tricks on other cars, on the roadway infrastructure, and on pedestrians. You want the parties to demonstrate they have a genuine stake in their messaging, requiring a type of verification, and prefer to do so without also conveying private info such as an identifier.

That’s where ZKP can come to the rescue.
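For that verification-without-identifiers scenario, a classic interactive building block is Schnorr’s proof of knowledge of a discrete logarithm: a sender proves it knows the secret key behind a registered public value without ever transmitting the key. A toy sketch with deliberately tiny parameters (the group sizes, names, and framing are mine, purely for illustration; real deployments use groups of roughly 256-bit order):

```python
import secrets

# Demo group: p = 2q + 1 with q prime; g = 2 generates the order-q subgroup.
P, Q, G = 2039, 1019, 2

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # Prover's secret key
    y = pow(G, x, P)                   # public key: y = g^x mod p
    return x, y

def prover_commit():
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)                   # commitment sent to the Verifier first
    return r, t

def prover_respond(x, r, c):
    return (r + c * x) % Q             # response; reveals nothing about x by itself

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), i.e. s is consistent with knowing x
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One round: commit, receive a random challenge, respond.
x, y = keygen()
r, t = prover_commit()
c = secrets.randbelow(Q)               # Verifier's random challenge
s = prover_respond(x, r, c)
assert verify(y, t, c, s)
```

The check works because g^s = g^(r + cx) = t · y^c mod p; repeated rounds (or a hash-derived challenge, to make it non-interactive) drive a cheater’s success probability toward zero, all without the secret key ever leaving the car.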

In a somewhat similar manner, consider the situation of your AI self-driving car using a GPS system to navigate around town. Assume that there is a GPS system in the cloud that your self-driving car is using. When your self-driving car asks the GPS system for the path to your destination, that request reveals to the GPS system where you are going. Once again, this is a potential privacy invasion.

I realize you might suggest that the self-driving car should merely download the GPS info, whole hog, and therefore there is no need to tattle about where you are and where you are going. This though presents other problems, including the need for sufficient memory on-board the car, plus it suggests you might not get needed real-time updates about road conditions, etc.

There is quite interesting work going on about GPS and preserving your privacy.

One such notable paper is entitled “Privacy-Preserving Shortest Path Computation” by David Wu, Joe Zimmerman, Jeremy Planul, and John Mitchell. The focus of their research is mainly on using compression techniques for next-hop routing matrices, but the potential for using ZKP is also a consideration. Touching more directly on ZKP in this domain, the papers “Privacy and Integrity Considerations in Hyperconnected Autonomous Vehicles” and “Plug-in Privacy for Smart Metering Billing” are of keen relevance. For more overall work on Location-Based Services (LBS) and privacy, including Mutually Obfuscating Paths (MOP) and ZKP, see the paper entitled “Preserving Location Privacy of Connected Vehicles With Highly Accurate Location Updates.”

Some have proposed that blockchain ought to be used for many of these communications between AI self-driving cars and other systems, and indeed we are working on that aspect too. Similar to the earlier point about using ZKP for blockchain privacy in the cryptocurrency use case, the use of ZKP in the AI self-driving car use case of blockchain has equal applicability.

For my article about privacy aspects of AI self-driving cars, see:

For the potential of back-door hacks of AI self-driving cars, see my article:

For my article about V2X and omnipresence, see my article:

For my article about NLP and AI self-driving cars, see:

For aspects of AI socio-behavioral interactions, see my article:


I’ll be providing additional indications about V2X, OTA, NLP, and other such matters in future columns and will also provide a more detailed glimpse at how ZKP can be applied in that milieu.

For now, I hope that I’ve whetted your appetite for Zero Knowledge Proofs as a useful tool and something you should aim to consider. Many software developers are somewhat mystified or confounded by what ZKP is, and I hope that I’ve turned the corner on that by offering you an explanation herein, with examples, with some references for further study, and with some pointers toward the use of ZKP in the AI self-driving cars realm.

You might say that I’ve offered you a proof of sorts, a proof about the viability of ZKP, and there’s actually no need for me to keep secret that it has viability; I’m willing to openly divulge that. But I won’t tell you the PIN to the door of the office manager I once worked for, some years ago. I’ll never tell that.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.