Introducing Project Kratos

Warwick Powell, Chairman

Supply chain sub-optimality arises in conditions of information asymmetry and, associated with it, the risk of capricious conduct or malfeasance by one or more colluding actors. (For a detailed discussion of these issues, see Working Paper No. 1.)

In these conditions, where there is either hidden information or hidden action, less-informed supply chain actors are exposed to a range of risks – and even uncertainties – that increase costs for all concerned. There are also distributional implications: the information-rich benefit at the expense of the information-poor.

Blockchain technologies can contribute to supply chain optimisation. To do so, they need to address head-on the issues of information asymmetry, so as to militate against the risks of capricious conduct and the deleterious effects of collusion.

All supply chain participants rely upon a basis of common or mutual knowledge that they can all depend upon in going about their business. This common knowledge base can be secured through common action, whereby the infrastructure and protocols by which the information is secured are treated as a network “utility” rather than a source of arbitrage.

Project Kratos

BeefLedger has been developing its Project Kratos over the past 12 months. Kratos is the Greek word for power. We chose this name for our consensus community governance project to focus attention on questions of authority vis-a-vis the production and maintenance of mutual knowledge.

Design Principles

Project Kratos addresses this set of interrelated common or mutual knowledge challenges by embedding the following design principles into the common data ecosystem, underpinned by decentralised consensus protocols:

  1. Data requirement priorities are user-determined. This is because value from data can only be determined either by those paying for it, or by regulators who demand it to provide effective “externality” policing.
  2. Consumers have an inalienable interest in the integrity of supply chains, and supply chain data. Any data ecology that does not meaningfully engage the consumer is incomplete at best. The consumer is not only a user of data, but is also a producer of value-creating data from which they should benefit.
  3. All data proposals require a multi-party governance structure and protocol to be valid. No-one can act alone.
  4. The data commons governance – as enabled by the underlying blockchain architecture – should tend towards empowering self-governance and organic adaptation, with decision-making capacity vested in network members.

Multi-Sig + Schelling Points

At the heart of the BeefLedger data community is a two-part set of processes by which data is proposed, validated and published to the blockchain:

  1. A multi-sig procedure so that any data proposal requires a number of actors to share responsibility for proposing and witnessing the information. We apply an organic philosophy here so that self-governing “thresholds of trust” can be determined over time by the community at large. In other words, no one size fits all. Rather, the precise composition of multi-sig protocols is something that can be determined over time by network members wherein variable protocols can conceivably be applied depending on the class of data proposal under consideration; and
  2. A whole-of-community attestation procedure underpinned by the economics of “Schelling” points (see diagram below, which shows the basic game theoretic parameters of this mechanism). Data validation is a common utility resource, and as a service, is something proposers and the community at large need to pay for. By rendering explicit the value of data validation, we create novel, transparent mechanisms that incentivise “truthful convergence”. Incidentally, these mechanisms also effectively valorise reputations, which over time build rich trust for actors that demonstrate informational virtue within the ecosystem.
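The two processes can be sketched in code. The following is a minimal illustration only: the threshold values, stakes and payout rule are assumptions for exposition, not the production BeefLedger protocol. The multi-sig check passes a proposal once enough distinct authorised signatories approve; the Schelling-point settlement rewards attestors whose vote matches the majority answer, funding their payout from a reward pool plus the stakes forfeited by the minority.

```python
from collections import Counter

# Illustrative sketch only. Thresholds, stakes and payout rules are
# assumptions for exposition, not the production BeefLedger protocol.

def multisig_approved(signatures: set, authorised: set, threshold: int) -> bool:
    """A proposal passes when enough distinct authorised parties sign.

    The threshold is a community-set parameter and, as described above,
    may vary by class of data proposal.
    """
    valid = signatures & authorised  # ignore unauthorised signatures
    return len(valid) >= threshold

def settle_attestation(votes: dict, stake: float, reward_pool: float):
    """Schelling-point settlement: voters who match the majority answer
    split the reward pool plus the stakes forfeited by the minority."""
    tally = Counter(votes.values())
    majority_answer, _ = tally.most_common(1)[0]
    winners = [v for v, a in votes.items() if a == majority_answer]
    losers = [v for v in votes if v not in winners]
    payout = (reward_pool + stake * len(losers)) / len(winners)
    return majority_answer, {v: payout for v in winners}

# Example: three of five authorised signatories approve, threshold 3.
passed = multisig_approved({"a", "b", "c"}, {"a", "b", "c", "d", "e"}, 3)

# Example: five community attestors vote on a claim; the majority
# converges on "valid", and the dissenter's stake is redistributed.
answer, payouts = settle_attestation(
    {"v1": "valid", "v2": "valid", "v3": "valid", "v4": "invalid", "v5": "valid"},
    stake=1.0, reward_pool=10.0,
)
```

Because each attestor's best strategy is to vote as they expect others to vote, and the natural focal point is the truth, honest reporting becomes the equilibrium – which is the "truthful convergence" property described above.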

At present, these two processes are offered as choices to data proposers. The first is live and in use; the second (community attestation) is in the testing phase and will soon be incorporated.

Avoiding Data Cartels

Cartels have long been recognised as having deleterious impacts on consumers, an insight formalised in Stigler’s 1964 classic “A Theory of Oligopoly”. This harm has typically taken the form of collusion to control production volumes and manipulate pricing. Cartels may also, through collective action, restrict new product introductions or hold back product improvements, so as to maximise profits for participating members at consumers’ expense.

In The Wealth of Nations, Adam Smith observed:

“People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”

Little has changed in the intervening centuries.

We have identified a specific category of cartel risk in our work on supply chain optimisation in an era of digitalisation: namely data cartels. Here, we extend the foundational thinking around collusive activities associated with pricing and production of goods and services to fundamentally address the issues of the production, storage and dissemination of data (the data function) itself.

Mitigating risks of data cartels or collusion amongst some subset of actors in relation to the data function, at the expense of others in the supply chain, is central to improved supply chain performance and long-run outcomes. Detecting cartels or collusion has long been a challenge for regulatory authorities, because cartels are typically covert or implicit in nature. Similar challenges apply to the cartelisation of the data function, particularly when it is provided with the metaphorical fig leaf of blockchain technology.

To mitigate these risks, enlisting the community at large (the “buy-side”, so to speak) is fundamental to whole-of-ecosystem integrity. Otherwise, there is no transparent means by which those most affected by egregious supply chain conduct associated with misleading data (e.g. counterfeiting, product misrepresentation) can have confidence in the claims of upstream actors.

Under conditions where consumers are precluded from the mechanisms of data integrity validation, there is an in-built risk that upstream actors would collude at the expense of downstream buyers. Private data networks, created in the name of blockchains, which do not open membership to those most at risk of supply chain malfeasance, cannot – by the very nature of their design – overcome information asymmetry downsides that are endemic to traditional supply chains.

The production of data is something that should not fall into the hands of cartels, which can act against the interests of those least capable of doing anything about it. Data ecosystems need to be designed from the ground up to “bake in” consumer-driven integrity. After all, the entire bankable value of a supply chain is ultimately determined by the extent to which consumers are willing to pay for the goods or services.

Data integrity is a public good that can be embedded into data systems through the right design approaches. Unless consumers can have a more-or-less permissionless* mechanism to participate in data validation protocols, in which the barriers to participation are extremely low, the credence claims of other parties will always be open to doubt.

Yet, the remedy is straightforward: redesign the data ecosystem – like we have at BeefLedger – so as to ensure all impacted parties have a common interest and active responsibility in data integrity.

That, fundamentally, is what Project Kratos has been and continues to be about.

Future and Ongoing Research

An area of further research we are embarking on will focus on the relative benefit/cost trade-offs between these two processes. The design approach has been to make these options available as choices for data proposers, rather than explicitly mandate a preferred approach.

A multi-sig process is likely to be comparatively quicker, and to involve a lesser risk of the approval threshold not being met. The multi-sig groups will typically involve signatories who know each other, and who are also likely to be directly involved in the events to which the data applies. The risk for the community-at-large is that small multi-sig groups are a vector for collusion. To some extent, transparency of signatories, as provided for on the POA, will go towards mitigating this risk (but certainly not entirely).

A community-wide attestation via a voting mechanism will in most cases take longer than a multi-sig approval, and also opens up a range of risks (for proposers) associated with failure to achieve approval. The community-at-large, via the voting mechanism, may “say no”. On the flip side, however, a whole-of-community attestation is arguably more robust by comparison as it provides the data proposer with access to the safety that comes with numbers.

Our working instinct is that in the long-run the whole-of-network benefits of the community wide attestation procedure are generally superior to a situation in which data proposals and validations are principally achieved via multi-sig procedures, especially when there are comparatively few members of the multi-sig group. If that is the case, then there may well be a case for the costs to data proposers of preferring the community wide attestation procedure to be lower than the cost of using a multi-sig procedure.

In other words, economically speaking, data proposers should be incentivised to prefer the more open democratic mechanism. (Mitigating factors could, however, include the legal requirement that certain data be signed off by a limited range of authorities or oracles.)
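The incentive argument above can be made concrete with a simple expected-cost comparison. The fee levels and failure probabilities below are illustrative assumptions only; the point is that if the network wants proposers to prefer community-wide attestation, its fee can be set low enough that its expected cost undercuts the multi-sig route even though a community vote carries a higher chance of a “no”.

```python
# Illustrative expected-cost comparison between the two approval routes.
# All fee levels and probabilities are assumptions for exposition only.

def expected_cost(fee: float, p_fail: float, retry_cost: float) -> float:
    """Expected cost to a proposer: the up-front fee plus the expected
    cost of a failed approval (resubmission effort, delay, etc.)."""
    return fee + p_fail * retry_cost

# Multi-sig: higher fee, but approval rarely fails among known signatories.
multisig = expected_cost(fee=5.0, p_fail=0.02, retry_cost=20.0)

# Community attestation: cheaper fee, but a greater chance of a "no" vote.
community = expected_cost(fee=2.0, p_fail=0.10, retry_cost=20.0)
```

Under these illustrative numbers the community route is cheaper in expectation, so a rational proposer would prefer the more open democratic mechanism – unless, as noted, legal requirements mandate sign-off by a limited range of authorities or oracles.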


*We say more-or-less permissionless because we also propose the need to ensure appropriate identity verification of network members. In financial services terms, this is described as Know Your Client (KYC for short). The BeefLedger community also requires new members to join via invitation and acceptance by existing members, via the multi-sig and vote procedures briefly described above.