We explore the synergy between cryptographic protocol theory and financial auditing. Auditing principles and practice have undergone many centuries of development and provide valuable lessons for commercial protocol design. We develop an abstract model of data relationships between parties which reflects significant portions of this tradition. In turn, modern cryptographic protocols can enhance auditing controls with high security and introduce heretofore inconceivable capabilities. We focus on a significant enhancement, unforgeable transaction logs, and introduce a substantial new capability, confidential auditing. We briefly discuss the use of data commitments and secure timestamping protocols to detect post-commitment forgery, and introduce an abstract model of data relationships and reconciliation to detect preforging. We show how algorithmically verifiable relationships between data, in particular the arithmetic relationships specified in financial spreadsheets, can be verified with greatly reduced exposure to loss of data confidentiality by the parties involved. We examine arithmetic relationship verification in two cases: between two parties with confidential input from one party (implemented with zero-knowledge proofs), and between multiple parties with confidential input from any of the parties (multiparty computation). As these protocols are costly, we examine their efficiency and suggest applications where integrity and confidentiality are at a premium. Besides efficiency, we briefly discuss some other practical barriers to the implementation of unforgeable logs and confidential auditing, such as legacy systems and the canonicalization problem. Finally, we show how our selective monitoring architecture addresses problems which arise in a wide variety of business relationships.
Outside of the financial cryptography community, and long predating it, there is a deep tradition of protocols used in the course of performing contracts. These protocols consist of a flow of forms ("data flow", canonically displayed in data flow diagrams), along with checks and procedures called "controls". Controls serve many of the same functions as cryptographic protocols: integrity, authorization, and so on. This paper uses "control protocols" or simply "controls" to refer to this combination of data flow and controls.
Control protocols, and the professions of auditing and accounting [BH95] based on them, play a critical but ill-analyzed role in our economy. Economists lump them, along with other costs of negotiating and ensuring the performance of contracts, under their catch-all rubric of "transaction costs". But without controls, large corporations and the economies of scale they create would not be possible. Controls allow a quarrelsome species ill-suited to organizations larger than small tribes to work together on vast projects like manufacturing jumbo jets and running hospitals. These control protocols are the result of many centuries of business experience and have a long future ahead of them, but the digital revolution will soon cause these paper-era techniques to be dramatically augmented, and eventually transformed. We show two ways in which cryptographic protocols can play a leading role in this transformation.
Controls enable the auditing of the performance of agents and counterparties by principals and third parties, allowing more precise inference of the behavior of an agent. Controls are typically based around amounts of money and quantities of goods. A canonical control is double entry bookkeeping, where two books are kept, and there must be arithmetic reconciliation between the books. To conceal an irregularity, it is necessary to omit it from both books, or to record entries offsetting the irregularity.
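The reconciliation check behind double entry bookkeeping can be sketched as a simple comparison. A minimal sketch, with ledgers modeled as dictionaries mapping transaction ids to amounts (the ids and amounts are illustrative):

```python
def reconcile(ledger_a, ledger_b):
    """Compare two independently kept books entry by entry.

    Any id present in only one book, or recorded with different
    amounts, is a discrepancy indicating error or fraud
    (reconciliation alone cannot tell which).
    """
    return [(txid, ledger_a.get(txid), ledger_b.get(txid))
            for txid in sorted(set(ledger_a) | set(ledger_b))
            if ledger_a.get(txid) != ledger_b.get(txid)]

# An irregularity must be concealed consistently in *both* books to pass:
assert reconcile({"t1": 100, "t2": 50}, {"t1": 100, "t2": 50}) == []
assert reconcile({"t1": 100, "t2": 50}, {"t1": 100, "t2": 45}) == [("t2", 50, 45)]
```

To escape detection, a fraudster must either omit the entry from both books or record consistent offsetting entries, which is exactly why segregating the two books between parties raises the cost of fraud to the cost of collusion.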
Notice that there is a problem distinguishing error from fraud. This problem crops up in many areas in both auditing and cryptographic protocol design. Indeed, the distinction between a security attack and a mistake is a second-order concern in both areas: the first-order purpose of protocols and controls is to detect or prevent faults which may either be accidental or fraudulent in nature.
To illustrate, here are two common control techniques:
Imprest: this is a family of controls involving the receipt or disbursement of bearer certificates (usually notes and coins). A familiar example is the protocol used at most movie theaters. Entry is segregated from payment by introducing tickets and establishing two employee roles, the ticket seller in a booth, and the ticket stub salesman at the entrance. Periodically, a bookkeeper reconciles the number of tickets with the total paid. Discrepancy again indicates fraud or error.
Customer audit: Techniques to get the customer to generate initial documentation of a transaction. For example, pricing goods at $.99 forces the employee to open the cash register to make change, generating a receipt.
A complete control protocol typically features the generation of initial documentation, segregation of duties, and arithmetic reconciliation of quantities of goods, standard service events, and money.
It has long been recognized that an intermediary is more trustworthy when it is distributed. In a large business, transactions are divided up so that no single agent can commit fraud. For example, the functions of warehouse/delivery, sales, and receipt of payments are each performed by different parties, with a policy that each party reports every transaction to a fourth function, accounting. Any singular reported activity (e.g., delivery without receipt of payment) indicates potential fraud (e.g., a delivery was made to a customer and the payment pocketed instead of being put into the corporate treasury). Segregation of duties is the auditor's favorite tool. Where it is absent the auditor cries "foul", just as a good engineer would react to a single point of failure. Many cryptographic systems have rightfully gone down to commercial failure because they ground down to trust in a single entity rather than segregating functions so as to require collusion.
Controls, while highly developed and well integrated into the business world, are showing signs of their age as we enter the computerized era. Controls are based on static forms passed as messages and processed in tables and spreadsheets. Controls focus on money and counts of standardized goods and service events, easily recorded by numbers and manipulated by arithmetic, while mostly ignoring other kinds or aspects of contractual performance. Checksums on numbers, the basis of reconciliation, are crude and forgeable compared to cryptographic hashes. Electronic Data Interchange (EDI) keeps these static forms and maintains reliance on controls. It uses cryptographic hashes for nothing more sophisticated than integrity checks on individual messages. This paper focuses on the application to auditing of modern cryptographic protocols, in particular data commitment, timestamping, zero-knowledge proofs, and multiparty computations.
Multiparty computation protocols can be used to verify assertions without revealing the underlying data to the auditor. Also, commitments can be made that are only verifiable by certain designated auditors. This allows relationships with trustworthy auditors without being exposed to extortion, a great improvement over paper based auditing.
Forgery relative to the time of commitment falls into three cases:
1. pre-commitment
2. within committed data
3. post-commitment
The idea of a post-unforgeable transaction log is to freeze a copy of the transaction, like a fly in amber -- available thereafter for auditing. The log should not be alterable by any party without this forgery being detectable by the auditor. A simplified protocol for post unforgeability involves, first, publishing signed bit commitments (or more efficient variations on bit commitments for most kinds of log data). When the transaction log is later published it can be compared against the commitments; any discrepancies indicate that the log has been altered. Secondly, hashes of these commitments are chained together using a secure timestamping scheme.
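A minimal sketch of this simplified protocol. A plain hash-with-pad stands in for signed bit commitments, and a linear hash chain stands in for a full secure timestamping scheme; the record fields and pad values are illustrative:

```python
import hashlib
import json

def commit(record, pad):
    """Hash commitment to one log entry: hash(record, pad). The random
    pad keeps low-entropy records from being confirmed by guessing."""
    blob = json.dumps(record, sort_keys=True) + pad
    return hashlib.sha256(blob.encode()).hexdigest()

def chain_head(commitments):
    """Chain the commitments so that no entry can be altered, inserted,
    or deleted without changing the published head."""
    link = ""
    for c in commitments:
        link = hashlib.sha256((link + c).encode()).hexdigest()
    return link

log = [{"amount": 1500, "code": "A7"}, {"amount": 80, "code": "B2"}]
pads = ["r0", "r1"]  # in practice, fresh random values
head = chain_head([commit(r, p) for r, p in zip(log, pads)])

# At audit time the log and pads are opened; any alteration is detected:
forged = [{"amount": 1500, "code": "A7"}, {"amount": 8000, "code": "B2"}]
assert chain_head([commit(r, p) for r, p in zip(forged, pads)]) != head
```

The linear chain makes verification cost proportional to the log length; the binary linking scheme of [BLLV98] discussed below reduces this.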
Data Commitment Schemes

Should these be commitments ready for use in ZK, or just a general signature over hash(msg, pad)?
Look at the efficiency of committing our ideal blocks (commit, e.g., all revenues and expenditures >$1,000 plus "petty cash" accounts which shouldn't exceed $10,000). How many transactions per day/month/year w/this granularity?

Fields: Amount, Product Code, Timestamp, Hash, Signature
Timestamping Schemes

A good overview of secure timestamping schemes, as well as a more efficient binary linking scheme suitable for our purposes, can be found in [BLLV98]. Here we review these authors' efficiency results and look at the resource demands of transaction log commitment. (Same granularity as the commitments -- if this is too expensive, are there useful subsets of the log we can commit?)
Describe timestamp verification & its efficiency. Later we'll redo this analysis w/confidential checking.

Canonicalization: Hal has some stuff on SHA-1 and reformatting.

Interparty Integrity Constraints
Preforging seems to be meaningful only for data which is supposed to have some relationship with some other data, but fails to. This may be a functional dependency of either datum on the other, or there may be a dependency on a mutual source. An example of dependency on a mutual source is a view of the same object from two different camera angles. We call these required relationships "interparty integrity constraints". The failure of these predicates indicates error or fraud. These assertions represent the roles parties have agreed to play in a transaction.
We can combine post-unforgeable transaction logs with interparty integrity constraints. For example, an auditor might check the following assertions between two parties to a transaction: that the transaction took place at the same time according to secure timestamps, involved payment or receipt of the same amount of money, and involved shipping or receiving of the same good as indicated by product code. Post-unforgeable logs can be implemented with secure timestamping schemes, which catch an unforgeable copy of the transaction, like a fly caught in amber. From that time forward the transaction is securely committed so that nobody can forge its contents without being detected by an audit. Forging the transaction prior to commitment requires collusion between parties to falsify records consistently on all sides so that they pass the interparty integrity tests. Segregating duties between several parties, and embedding them in a web of assertions, minimizes the possibility of such collusion.
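Once both logs are opened to the auditor, these cross-party assertions can be checked mechanically. A sketch with illustrative field names and a hypothetical clock-skew tolerance:

```python
def check_interparty(seller_entry, buyer_entry, max_skew=60):
    """Test the three interparty integrity constraints named above:
    matching timestamps (within a tolerance), matching money amounts,
    and matching product codes. Returns the list of failed constraints."""
    faults = []
    if abs(seller_entry["timestamp"] - buyer_entry["timestamp"]) > max_skew:
        faults.append("timestamp")
    if seller_entry["amount_received"] != buyer_entry["amount_paid"]:
        faults.append("amount")
    if seller_entry["product_code"] != buyer_entry["product_code"]:
        faults.append("product_code")
    return faults

seller = {"timestamp": 1000, "amount_received": 250, "product_code": "X9"}
buyer = {"timestamp": 1030, "amount_paid": 250, "product_code": "X9"}
assert check_interparty(seller, buyer) == []          # constraints hold
buyer["amount_paid"] = 200                            # payment pocketed?
assert check_interparty(seller, buyer) == ["amount"]  # fault: error or fraud
```

As with reconciliation generally, a reported fault indicates error or fraud by either party; further reconciliations against related records narrow down which.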
These are also called "invariants" in the database/transaction world. In transaction processing, consistency is required pre and post transaction. Consistency can be proven by protocol design or checked via interparty integrity constraints. Within a transaction we are allowed to be inconsistent.
Protocols Constrained By Verifiable Promises
-- promises define the proper _role_ of the party
-- (promises as integrity constraints defining consistency
<--> transaction as protocol between consistent states)
-- protocols frozen in amber for later inspection
-- deviating from this role can at some point be
detected by confidential auditing
-- web of constraints gives minimal collusion
groups much tougher to achieve than corrupting
a good coalition in a quorum system.
-- applications incl.
+ multiparty computation? are there any
constraints on general computations?
particular classes of functions? (pre & post conditions?)
++ much more efficient to just use quorum systems/MPC for random sampling
of assertions than to use it for the protocol itself
+ Byzantine agreement?
Reconciliation is the comparison of two pieces of data which should be related in some way. A discrepancy can, in general, indicate either error or fraud by either party. The particular context and further reconciliations between other related pieces of data can often indicate which party is at fault, and the likelihood of error rather than fraud.
We invoke the following basic setup to crack the preforging problem:
(1) Segregation of duties among several parties: internal employees or groups, and external entities such as customers, suppliers, and intermediaries.
(2) A web of relationships between these parties. Violation of a specified relationship constitutes a fault, which could indicate either error or fraud in the transaction. By embedding transactions in a web of relationships, we've greatly reduced the set of collusive partners. (Can I prove there's little or no choice?)
Data Fields:
-- Product/Service Quantity
-- Product/Service Code
-- Price
-- Total $ amount (redundant)
-- Timestamps: Simultaneous; Synchronous
-- Signatures/Receipts
What happens if one party bows out of a transaction early? How is this recorded?
-- doesn't ship
-- doesn't pay
-- doesn't give receipt
Some algorithmically verifiable controls (foreseeable & specifiable error conditions):
-- Purchase price vs. market range at that time (how to compare a hidden value vs. a public value?)
-- Sale price vs. normal price range
-- Amount out of range for normal usage or production quantities
-- Excessive penalties or interest due to late payment
-- Asset values according to market price & proper depreciation schedules
-- Reconcile inventory w/earlier purchasing department records
-- Reconcile shipping records w/sales orders
-- Reconcile receiving records w/purchasing orders
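Several of these controls reduce to range checks and arithmetic reconciliation once the records are machine-readable. A sketch over an illustrative transaction schema (field names and thresholds are assumptions, not from any standard):

```python
def run_controls(txn, market_range, normal_max_qty):
    """Apply a few of the foreseeable, specifiable controls listed
    above; each failure flags a fault for the auditor to investigate."""
    flags = []
    lo, hi = market_range[txn["product_code"]]
    if not lo <= txn["unit_price"] <= hi:
        flags.append("price outside market range")
    if txn["quantity"] > normal_max_qty[txn["product_code"]]:
        flags.append("quantity beyond normal usage")
    if txn["unit_price"] * txn["quantity"] != txn["total"]:
        flags.append("redundant total fails arithmetic reconciliation")
    return flags

txn = {"product_code": "W1", "unit_price": 12, "quantity": 3, "total": 36}
assert run_controls(txn, {"W1": (10, 15)}, {"W1": 100}) == []
txn["total"] = 40   # books no longer reconcile
assert run_controls(txn, {"W1": (10, 15)}, {"W1": 100}) == [
    "redundant total fails arithmetic reconciliation"]
```

Checks of this kind run in the clear here; the later sections ask how the same predicates can be evaluated over hidden values.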
-- Detailed diagram, showing three-party segregation of
duties on _both_ sides, along with optional financial
and shipping intermediaries
-- smaller diagram for each, or table: Revenue / Production / Finance
Guessing:
-- Amounts (easy)
-- Product Code (easy)
-- Timestamps: Simultaneous (hard); Synchronous (easy)
-- Signatures/Receipts (very hard)

Relationship Reconfiguration

Colluding

Spoofing
A-B-C-D-E-E-E-E.....
If any of A, B, C, or D does not collude, then E cannot preforge any of his commitments. For this special case we get a strong result: t=1 out of N.
Internal Segregation of Agent Duties Each department holds its own log before commitment. How about during audit?
Counterparty Audit

This is a generalization of the idea of a customer audit. Example: distribute client software that automatically engages in a protocol which audits a server. While some colluding counterparties may hack the client software to turn off the embedded auditing protocol, usually this will be too expensive and counterparties will lack the incentive to do so.

#### stuff from Telecommuting Cashiers goes here
#### reconciliation requires three parties: auditor, party, and counterparty
Standard (Nonconfidential) Reconciliation
For example, we could run a spreadsheet across the Internet on this virtual computer. We would agree on a set of formulas and set up the virtual computer with these formulas. Each participant would have their own input cells, which remain blank on the other participants' computers. The participants share output cell(s). Each participant inputs their own private data into their input cells. Alice could learn only as much about the other participants' input cells as she could infer from her own inputs and the output cells.
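The simplest instance of such a virtual computer, an output cell that just sums the input cells, can be sketched with additive secret sharing. This toy handles addition only; general spreadsheet formulas require full multiparty circuit evaluation, and the prime and participant names are illustrative:

```python
import random

Q = 2**61 - 1  # all arithmetic is mod a public prime

def deal_shares(secret, n):
    """Split one private input cell into n additive shares mod Q;
    any n-1 of the shares are uniformly random and reveal nothing."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

inputs = {"Alice": 120, "Bob": 300, "Carol": 45}    # private input cells
n = len(inputs)
dealt = [deal_shares(v, n) for v in inputs.values()]  # each deals to all
held = [sum(column) % Q for column in zip(*dealt)]    # each sums received shares
output_cell = sum(held) % Q                           # only the total is revealed
assert output_cell == sum(inputs.values())
```

Each participant learns only their own shares and the output cell, matching the inference bound described above: Alice learns nothing beyond what her inputs and the total imply.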
There are three major complications. The first is that this virtual computer is very slow: in some cases, one machine instruction per network message. The second is that some parties learn the results before others. Several papers have discussed the fraction of parties one must trust in order to be assured of learning the correct output. The mechanism must be constructed so that a sufficient number of parties have an incentive to pass on the correct result, or reputation, side contracts, etc. must be used to the same effect. The third is that, far from being omniscient or omnipotent, the protocol will accomplish only what is specified in the algorithm and the inputs. It won't be able to replace human trusted third parties where those parties provide insight or knowledge that cannot be provided by a computer.
Performance phase analysis with multiparty secure computer theory would seem to apply only to those contracts which can be performed inside the virtual computer. But the use of post-unforgeable auditing logs, combined with running auditing protocols inside the shared virtual computer, allows a wide variety of performances outside the virtual computer to at least be observed and verified by selected arbitrators, albeit not proactively self-enforced.
The participants in this mutually confidential auditing protocol can verify that the books match the details of transactions stored in a previously committed transaction log, and that the numbers add up correctly. The participants can compute summary statistics on their confidentially shared transaction logs, including cross-checking of the logs against counterparties to a transaction, without revealing those logs. They learn only what can be inferred from the statistics; they can't see the details of the transactions. Another intriguing possibility is that the virtual computer can keep state over long periods of time, allowing sophisticated forms of private and self-enforcing secured credit.
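One way to check that "the numbers add up" without opening the entries is an additively homomorphic commitment. A toy Pedersen-style sketch with tiny, insecure parameters (real use needs large groups, independently generated g and h, and a zero-knowledge layer as in [CD98]):

```python
# Toy parameters: p = 2q + 1 with q = 11; g and h each generate the
# order-q subgroup mod p. Insecure: here log_g(h) is easily computed.
p, q = 23, 11
g, h = 4, 9

def commit(m, r):
    """Pedersen commitment g^m * h^r mod p; hiding comes from the blinding r."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

entries = [3, 5, 2]    # confidential book entries (values mod q in this toy)
blinds = [7, 1, 4]     # one secret blinding value per entry
cs = [commit(m, r) for m, r in zip(entries, blinds)]

# Published: the commitments, the claimed total, and the aggregate blinding.
total, blind_total = sum(entries) % q, sum(blinds) % q

# Auditor: the product of commitments must be a commitment to the total.
product = 1
for c in cs:
    product = product * c % p
assert product == commit(total, blind_total)  # books add up; entries stay hidden
```

The auditor learns the total and nothing else about the individual entries, which is exactly the "learn only the statistics" property described above.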
With mutually confidential auditing we will be able to gain high confidence in the factuality of counterparties' claims and reports without revealing identifying and other detailed information from the transactions underlying those reports. These provide the basis for solid reputation systems, and other trusted third party systems, that maintain integrity across time, communications, summarization, and preserve confidentiality for transaction participants. Knowing that mutually confidential auditing can be accomplished in principle may lead us to practical solutions. Eric Hughes' "encrypted open books" was one such attempt.
The protocol designer should first specify what coalitions are to be allowed and forbidden. If we relax arbitrariness to self-interest, only coalitions whose members have an incentive not to collude, whether to compute an incorrect result or to violate the privacy of any player, should be allowed the potential to disrupt the protocol. Invoking "incentive" implies an economic or game-theoretic analysis of coalitions, which will be considered in a future article.
For now, we just observe that the protocol designer needs to draw from C a set of allowed ("good") coalitions GC. Any set of parties in GC is a coalition sufficient to successfully complete the protocol. Also draw from C a set, disjoint from GC, of disallowed ("bad") coalitions BC which cannot be allowed the opportunity to disrupt the protocol. If GC union BC = C then we say that the partition is unambiguous. To produce a secure protocol, these coalitions need to meet certain criteria. Particularly interesting is the _quorum system_, a set of good coalitions, every pair of which intersects in at least one party. Each quorum can act on behalf of the system to complete the protocol. Intersection facilitates consistency across quorums. For example, if an operation occurs in two quorums, then at least one party observes both. Indeed, a fault-tolerant or secure quorum system intersects in a set containing sufficiently many parties to guarantee correctness. See, for example, the masking and dissemination quorum systems used by Malkhi & Reiter [MR97] to design a replicated database secure against Byzantine faults by bad coalitions.
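The quorum property is just pairwise intersection, and is easy to check mechanically. A small sketch (party names illustrative):

```python
from itertools import combinations

def is_quorum_system(coalitions):
    """True iff every pair of good coalitions shares at least one
    party, so an operation in one quorum is observed in every other."""
    return all(set(a) & set(b) for a, b in combinations(coalitions, 2))

# Any majority threshold is a quorum system (two >N/2 subsets must meet):
majorities = [set(c) for c in combinations("ABCDE", 3)]
assert is_quorum_system(majorities)

# Quorum systems need not be thresholds: small coalitions that all
# contain one distinguished party also pairwise intersect.
star = [{"P", "V"}, {"P", "M1"}, {"P", "M2"}]
assert is_quorum_system(star)

# Disjoint coalitions break consistency:
assert not is_quorum_system([{"A", "B"}, {"C", "D"}])
```

The star example anticipates the confidential auditing constructions later in the paper, where one prover sits in every allowed coalition.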
For multiparty computations, quorum systems are needed to assure that the protocol can be completed if a bad coalition at some point refuses to participate. Since there is no fair exchange of outputs, a bad coalition (such as a single party) will learn of results before good coalitions. If the results are unfavorable, the bad coalition can halt the protocol before transmitting the result to others. To combat this threat, the good coalitions have shared amongst themselves enough information to play the roles of any bad coalition. All the information the quitters would have supplied is already in the possession of each good coalition. To partition into good and bad coalitions with these properties turns out to be equivalent to partitioning into a quorum system.
Beaver & Wool [BW98], following the trail blazed by, among others, [HM97] and [NW96], have shown that if any single party can be trusted with correctness, then a quorum system is necessary and sufficient for the privacy of inputs to a multiparty computation [S97] against resource-unbounded adversaries. Classical analysis of multiparty computation concluded that a threshold of more than half of the parties is necessary and sufficient for private computation. This is just a special case of the BW98 result: any threshold system whose threshold exceeds N/2 is also a quorum system -- there are only N parties, so two coalitions of size >N/2 must contain at least one party in common, thus forming a quorum system.
The classical analysis also concluded that a majority threshold is necessary and sufficient for _correct_ multiparty computation, i.e. secure against active Byzantine faults in minorities with polynomial amounts of resources. A two-thirds majority is necessary against resource unbounded adversaries. I am aware of no correctness result as yet for quorum systems in multiparty computation.
The study of coalitions, and in particular quorum systems, provides a seemingly comprehensive theory of multiparty protocols which goes well beyond the confining linear world of thresholds.
Early quitting is harmless when any of the following hold:
(1) no possible output would be considered unfavorable by any party, or such an output has negligible probability;
(2) it's a defined feature of the protocol to restart if the output is unfavorable to any party;
(3) prior to starting the protocol, the output is already known to those who might unfairly benefit from quitting early;
(4) for each party there is only one unfavorable result, and innocent quitting is implausible, so that quitting implies the result unfavorable to the quitter;
(5) the protocol can be restarted with different inputs, and the output of the previous run does not give advantage in the restarted run.

Denial of service is also a concern in some applications, but not in many others. In the case of confidential auditing, (3) and/or (4) often hold. We assume that the audited parties are identifiable or reputable, i.e., our protocols need not tolerate anonymous, disreputable, or first-time pseudonymous parties. Hostile attacks such as denial of service simply brand the attacker as disreputable. (### There is also a major issue of colluding pseudonyms -- we may need to assume all parties are unique individuals, via True Name. Do Chaumian credentials allow us to distinguish individuals?)
Fully private computation bears similarity to the Gong/Syverson[1] idea of a "fail-stop" protocol: "any attack interfering with a message sent in one step will cause causally-after messages in the next step or later not to be sent".
Gong and Syverson refer to attacks by third parties, and assume that cryptographic integrity checks detect such attacks. It follows that any active attack cannot cause the release of a secret to third parties. Analysis of security can then be restricted to the much simpler case of passive attacks. Our current thinking might generalize this to counterparty attacks (besides unfair exchange of outputs and denial of service), using commitments and zero-knowledge proofs to prove secrecy while detecting false computations of shared functions.
These authors argue that, by eliminating the need to analyze active attacks, fail-stop protocols are much easier to prove correct. They give the following procedure for proving a protocol correct:
(1) verify that the protocol is fail-stop
(2) validate the secrecy assumption against passive attacks
(3) apply BAN-like logics
If this methodology generalizes, fail-stop multiparty computation also makes proving (2) easy, and the whole procedure would prove security against counterparties as well as against third parties. So I suggest that the value of fully private computation may lie not only in the added privacy, but also in the simplicity and provability of the overall security when composing protocols.
[1] Li Gong and Paul Syverson, "Fail-Stop Protocols: An Approach to Designing Secure Protocols", 5th Intl. Working Conference on Dependable Computing, 1995. also under http://www.csl.sri.com/gong/public_html/pubs95.html
Another related idea, used for Byzantine agreement (e.g., agreeing on what function to compute in the first place), is that of the "unreliable failure detector"[2]. Failure detectors provide the local process with a list of other processes suspected of being faulty. Failure detectors are defined according to their completeness (ability to detect actual failures) and their accuracy (success in avoiding false failure detection).
One big problem is that a Byzantine faulty process may visibly behave like a correct one. Malkhi and Reiter punt the solution of this to an unspecified higher level of the protocol. Assuming eventual visible difference in behavior, the problem can be simplified to just requiring the detection of those processes from which no messages will arrive. Failure detector class eventual-S(bz) requires that every process from which no more broadcast messages are received is eventually detected as faulty (strong completeness), and there exists a time after which some correct process is never suspected by any of the correct processes (eventual weak accuracy).
The authors show that eventual-S(bz) and a broadcast primitive suffice for agreement as reliable as that of the broadcast primitive (robust for failure of up to 1/3 of the processes). I conjecture that eventual-S(bz), self-reliability, and a fully reliable broadcast primitive would provide eventual fully reliable agreement. Here's one such protocol: Alice just ignores any process that doesn't agree with her!
I gather self-reliability is not normally assumed in Byzantine agreement, but it can be a reasonable security assumption, as with crypto protocols. Such "agreement" would not be between all parties, but simply between those parties which chose to agree. This gives us an interesting theoretical model, different from Byzantine agreement, and likely solvable in a way much more tolerant to faults (except in self): a model of agreement suitable
[2] Dahlia Malkhi and Michael Reiter, "Unreliable Intrusion Detection in Distributed Computations", 10th Computer Security Foundations Workshop (CSFW97), 1997. also under http://www.research.att.com/~dalia/
Reporting to stakeholders
Inventory coordination
Discovering tax-saving opportunities
"Black budget" auditing
Medical records auditing
Confidential auditing uses two-party/one-party-input circuits to reconcile any kind of relationship between entries within a party, i.e., trusted book entries. (These entries may be trusted because they were previously reconciled between parties.)
The two-party confidential auditing protocol requires two-party computations with unconditional privacy for the input of one party. We have the following (o = empty set, X' = set of sets X):
Obviously G' is a quorum system. P is a member of all coalitions in G', so she can keep her input private. V is not, so he can't. But since V has no input, all the inputs are private. This is equivalent to the zero-knowledge argument protocols for arithmetic circuits given in [CD98].
Of Verifying Timestamping Chains
Above we described timestamp chain verification & its efficiency. Now we redo this analysis w/confidential checking. Do we care if the chain is public?
-- chain known only to auditor
-- chain & time not known to auditor (except knowing in ZK that two chains are equal -- how to compare two whole chains in ZK?)
Of Verifying Arithmetic Relationships
Potential Avenues for Efficiency Improvement
(Coordinated ZK with two parties?) Trust the auditor; distrust the audited parties not to collude during the audit re: correctness. Constrain collusion to pre-commitment -- don't allow it during the audit. Can we unbundle correctness trust from privacy trust? We trust the auditor with correctness (because auditees can demand a second opinion) but not privacy (once lost, it can't be recovered).
To reconcile binary relationships between two parties.
Now let's take my favorite case: 2 parties P1 and P2 with private inputs, and a third party V. I can't find a 3-party quorum system preventing either prover from colluding with the verifier to violate the privacy of the other prover. And I wasn't able to improve this by giving the provers two blinding addends apiece, turning them virtually into 2 parties each in a 5 party protocol. Instead let's add two moderators, giving us:
There is no pair of sets in B' such that B1 union B2 = U. Therefore its complement G' forms a quorum system, and thereby gives unconditional private computation up to collusion by a coalition in G'. Privacy violation of one prover requires collusion of the other prover with at least _two_ of {V,M1,M2}. This is an improvement over the threshold result since no number of verifiers or moderators can collude without the cooperation of the other prover.
Thus if P1 finds her privacy violated her counterparty P2 is surely to blame. Furthermore, if all of {V,M1,M2} are professional verifiers or moderators she can ding _all_ their reputations. At least two are surely to blame, and the other was stupid enough to team up with snoopers. (However, setting up a verifiable sting may be a challenge).
Now let's go to four moderators (*=any other element):
Privacy violation of one prover now requires collusion of the other prover with at least three of {V,M1,M2,M3,M4}. The general pattern seems to be that for two provers, we can construct a protocol requiring collusion of the other prover with >=ceiling((N-2)/2) V or Ms where N-3 is the number of moderators. 2 out of 3 is probably best.
(Briefly mention Ran Canetti paper on MPC w/deniable commitments)
-- ownership transfer (ex-ante)
-- insurance (ex-ante & ex-post)
-- agency (ex-ante & ex-post):
   * unsecured credit
   * stockholder/management
   * management/employee
-- rental (ex-ante & ex-post)
-- franchise (ex-ante & ex-post)
As a result, in a wide variety of business relationships there is great demand for monitoring of behavior both ex-ante and ex-post. On the other hand, such monitoring may violate privacy. Specifically it can be used by third parties to impose costs on principals (including counterparties to previous relationships). Formally, we have the set C of information by which principals improve the efficiency of contracts, and the set E of information by which third parties decrease the efficiency of contracts. One possible goal is to achieve exclusion of E-C. Another is to achieve only C-E. The intermediate and practical, but imprecise goal is to balance the costs of including some E and excluding some C.
trusted agent:
  panopticon --> trusted principal
  panopticon --> TTP --> principal
  panopticon --> ZK --> principal
  panopticon --> tamperproof hardware --> principal

untrusted but noncolluding agents w/integrity constraints:
  panopticon(A1,...,AN) --> trusted principal
  panopticon(A1,...,AN) --> TTP --> principal
  panopticon(A1,...,AN) --> MPC --> principal
  panopticon(A1,...,AN) --> tamperproof hardware --> principal

Cryptography allows us to extend the privacy of auditing in two important ways: confidentiality from the principal and auditor (via private computations) and uncoerceability by denying use of commitments to untrusted third parties (Canetti et al.). Confidential auditing allows us to maximize C while minimizing E, including eliminating gratuitous information E-C, which the principal or trusted third party could use to decrease the efficiency of other agent contracts. Uncoerceable auditing makes any information (and hence any information in E) unavailable to all save a few chosen trusted third parties.
Briefly describe a simple online account-based, 2-blinded cash protocol. Ref: Chaum.
-- 2-blinding even more important with public lists -- any payor or payee can collude with self to ID counterparty!
-- no blinding => need confidential auditing
-- public issued & spent coin lists
-- withdrawer audit of issued list: ZK sufficient for confidential audit
-- depositor audit of spent list: ZK sufficient for confidential audit
-- Center, which is neither a mint nor a client, but serves only to distribute software and audit mints
-- Center distributes client separately from mints
-- ability to terminate fraudulent mints via updating good-mint/bad-mint list (periodically downloaded by clients from Center)
-- cf. franchise contracts
BH95 G. Bodnar and W. Hopwood, _Accounting Information Systems_, 6th edition, 1995.
BLLV98 A. Buldas, P. Laud, H. Lipmaa, J. Villemson, "Time-Stamping with Binary Linking Schemes", Crypto '98.
CD98 R. Cramer & I. Damgard, "Zero-Knowledge Proofs for Finite Field Arithmetic, or: Can Zero-Knowledge Be for Free?", Crypto '98.
B91 D. Beaver, "Efficient Multiparty Protocols Using Circuit Randomization", ACM STOC '91.
RB89 T. Rabin & M. Ben-Or, "Verifiable Secret Sharing and Multiparty Protocols with Honest Majority", ACM STOC '89.
GRR98 R. Gennaro, T. Rabin, & M. Rabin, "Simplified VSS and Fast-Track Multiparty Computations", PODC '98.
BW98 D. Beaver and A. Wool, "Quorum-based Secure Multi-Party Computation", Eurocrypt '98, also at http://www.transarc.com/~beaver/research/publications/biblio.html#mpp
HM97 M. Hirt and U. Maurer, "Complete characterization of adversaries tolerable in secure multi-party computation", 16th ACM PODC.
MR97 D. Malkhi & M. Reiter, "Byzantine Quorum Systems", 29th ACM STOC, also at http://www.research.att.com/~dalia/ For an important application of Byzantine-tolerant replication, see http://szabo.best.vwh.net/securetitle.html
NW96 M. Naor and A. Wool, "Access control and signatures via quorum secret sharing", 3rd ACM Conf. on Computer and Communications Security.
S97 For a gentle introduction to multiparty computation and its potential applications, see http://szabo.best.vwh.net/msc.html
*** need a hidden actions/knowledge & contract monitoring reference ***