Project Empower – Part 2: don’t let perfection be the enemy of the good

This post continues our updates on the development of Project Empower, which we introduced in Part 1. Here, we review our design-centric approach to problem solving, which saw us work closely with different users across the supply chain and incorporate their perspectives into project priorities and design solutions.

Connecting to the ‘Real’ world – Project Revelation

Establishing the right kinds of connections between the analogue and digital worlds is, without a doubt, one of the key challenges for supply chain projects. BeefLedger is no different, and our Project Revelation is all about this issue.

Our starting point was the interface between the end-consumer and the product. On this score, the project has explored a number of technologies with the potential to contribute to the overall aim of minimising the risk of counterfeiting. Over the last 24 months, we’ve deployed a range of interface solutions from various third parties, including:

  • QR Codes, enabling consumers to access product details. A number of different configurations of QR Code interfaces were implemented;
  • Near Field Communication (NFC) chips, which activated when scanned and which enabled the time and location of the scanning event to be registered to a blockchain. The user app generated a map showing the location of the scanning event; and
  • Optical scanning interfaces, also enabling users to access product data.

A WeChat mini app enables consumers to access rich product information via the Laava-powered BeefLedger Smart Fingerprint.

We won’t ‘name names’, as all the different approaches deliver their respective pros and cons. We thank all our collaborators and look forward to working with colleagues in this ‘consumer facing’ area to strengthen integration opportunities.


The common challenge with focusing on the end-user interface alone was enabling bulk data ingestion in a cost-effective manner. After all, we were potentially dealing with a world in which a consignment of beef from the processor involved over 520 cartons, each holding between 15 and 25 kg of product. Taken to a consumer-ready portion (e.g., 200 g individually packed), we were talking about 39,000-65,000 individual units.
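The consignment arithmetic above can be sanity-checked in a few lines (the carton count and weights are taken from the text; working in grams keeps the division exact):

```python
CARTONS = 520           # cartons per consignment (from the text)
CARTON_MIN_G = 15_000   # lightest carton: 15 kg
CARTON_MAX_G = 25_000   # heaviest carton: 25 kg
PORTION_G = 200         # consumer-ready portion: 200 g

units_min = CARTONS * CARTON_MIN_G // PORTION_G
units_max = CARTONS * CARTON_MAX_G // PORTION_G
print(units_min, units_max)  # 39000 65000
```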

A time-and-motion study we conducted showed that a solution based on NFC technology alone, with each item individually scanned with a smartphone, was simply cost prohibitive. We were looking to assist Australian producers, not hinder them!

The BeefLedger Ethereum RFID Scanning App.


To deliver industry-grade interaction between the ‘real world’ and the blockchain, we focused on a more traditional and very mature inventory management technology – UHF RFID. The BeefLedger team proceeded to develop an Ethereum-native App embedded into a standard RFID scanning gun, which uniquely signs the data to the blockchain from an Ethereum address.

Merging the ‘old and the new’ was getting interesting.

This solution supports individual or multi-tag scanning, enabling cost-effective, industrial-scale deployment of the blockchain as an asset tracking infrastructure.
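As a rough illustration of how a multi-tag scan event might be bundled into one submission, here is a minimal Python sketch. It is not BeefLedger’s actual app: the field names and the SGTIN-style tag identifiers are invented for the example, and the stdlib’s SHA3-256 stands in for the keccak-256 hash and secp256k1 signature that an Ethereum client would actually apply.

```python
import hashlib
import json
import time

def build_scan_payload(tag_ids, scanner_address):
    """Bundle all tags read in one trigger pull into a single payload.

    In the real app this payload would be signed with the scanner's
    Ethereum key; here we only compute a digest as a stand-in.
    """
    payload = {
        "scanner": scanner_address,   # hypothetical Ethereum address
        "tags": sorted(tag_ids),      # every EPC captured in one scan
        "timestamp": int(time.time()),
    }
    message = json.dumps(payload, sort_keys=True).encode()
    # Ethereum actually uses keccak-256; hashlib's sha3_256 is a stand-in.
    payload["digest"] = hashlib.sha3_256(message).hexdigest()
    return payload

event = build_scan_payload(
    ["urn:epc:id:sgtin:9312345.012345.1002",
     "urn:epc:id:sgtin:9312345.012345.1001"],
    "0x0000000000000000000000000000000000000000",
)
```

The point of the sketch is the batching: one scan event carries many tags, so one signature covers a whole pallet rather than one NFC tap per item.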

In essence, the team began from the end-consumer’s point of view, but rapidly realised that the more fundamental issue lay in cost-effective, industrial-scale interfaces that could rapidly ingest data from more than one item at a time. The NFC-based technology simply couldn’t do this.

Now, we are pleased to say, any operation using RFID chips can be smoothly integrated with our asset tracking infrastructure. We also recognised that not all users would comfortably submit data directly from the scanning device, as their operating environments and existing business processes saw administrative functions fulfilled ‘back at the office’, often ‘at the end of the day’. As such, it’s possible for users to simply upload pre-configured .csv files and propose them to the blockchain network via the multi-sig protocol.
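The ‘end of the day’ upload path can be sketched as a straightforward parse of the .csv into a batch of proposals. The column layout below is an assumption for illustration; the post does not specify BeefLedger’s actual template.

```python
import csv
import io

# Hypothetical column layout -- not BeefLedger's actual .csv template.
SAMPLE = """asset_id,event,location,timestamp
CARTON-0001,dispatched,Toowoomba QLD,2021-03-01T08:00:00
CARTON-0002,dispatched,Toowoomba QLD,2021-03-01T08:00:00
"""

def csv_to_proposals(text):
    """Turn a pre-configured .csv into a batch of data state update
    proposals, ready to be put to the relevant multi-sig group."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

proposals = csv_to_proposals(SAMPLE)
print(len(proposals))  # 2
```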

Humans and Machines

As much as people talk about automation, we designed an information system that first and foremost enabled agents to actively engage in the processes associated with each of the stages in the supply chain of data. That’s the ‘5 Commons’ described in Part 1 of this post series.

Dynamics of agent identification and responsibility in data validation.

Design Foundations

The conceptual underpinnings of our approach to agent engagement and responsibility in the processes of data proposal and validation are shown in the diagram above. As can be seen, in every instance of a data state update, we focus on addressing:

  • Messenger Identities;
  • Message content, e.g., Asset Identification; and
  • Message version, e.g., timeliness.
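The three checks above can be sketched as a single validation function. Every name here is illustrative, not BeefLedger’s actual API; it simply makes the identity/content/version ordering concrete.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str       # messenger identity (e.g. an Ethereum address)
    asset_id: str     # message content: which asset is being updated
    version: int      # message version: must advance the current state

def validate(msg, known_identities, current_versions):
    """Apply the three checks in order: identity, content, version."""
    if msg.sender not in known_identities:
        return False  # unknown messenger
    if not msg.asset_id:
        return False  # no asset identified in the message content
    # A stale or replayed version must not overwrite newer state.
    return msg.version > current_versions.get(msg.asset_id, 0)
```

For example, a message from a known sender that advances an asset from version 1 to version 2 passes, while a replay of version 1 is rejected.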

Project Kratos

Project Kratos is all about opening up the supply chain of data to the community at large. The first part of this is the multi-sig decision-making method.


Our multi-sig protocol is designed to be organic. This means members of any multi-sig group can propose changes to the composition of the group as well as the ‘signing rule’ applicable for that group. Note that there are no limits on the number of groups within the network, nor any limits on how group members may choose to propose changes to the applicable ‘signing rule’.

Multi-sig schematic – an illustrative example.

Data state updates proposed via the multi-sig can only be made by valid identities. Different identities have access to different capabilities within the network. If the proposer is valid, the proposal is then determined through the applicable multi-sig protocol, which takes the form of x signatures of y Group Members. With sufficient signatories, the proposal is validated and approved to the blockchain.
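The x-of-y ‘signing rule’ reduces to a simple check, sketched below with invented member names. The ‘organic’ aspect, where a group votes to change its own membership or threshold, is just this same check applied to a proposal whose content is the new rule.

```python
def is_approved(signatures, group_members, threshold):
    """x-of-y signing rule: a proposal passes once at least `threshold`
    distinct group members have signed it (illustrative sketch)."""
    valid = set(signatures) & set(group_members)  # outsiders don't count
    return len(valid) >= threshold

group = {"alice", "bob", "carol", "dave"}            # y = 4
print(is_approved({"alice", "bob"}, group, 2))       # 2-of-4: True
print(is_approved({"alice", "mallory"}, group, 2))   # False: non-member
```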

All members of the network can view the signatories that approved any data proposal. Reputational staking encourages good behaviour.

Community Attestation

To extend the opportunity for the community at large to be involved in the supply chain of data, we have also developed the Community Attestation protocol. Community Attestation is an application of game theory and mechanism design. For the present version, we have applied a simple Schelling-inspired approach to incentivising convergence on a ‘common truth’ among a group of strangers.

The mechanism is straightforward:

  1. Data state update proposers submit the data update to a Community Attestation, and pay a Proposal Fee. Note that the proposal can include what we generally call ‘evidence files’ (more on this below);
  2. Any member of the network can participate in the Community Attestation. To do so, they pay an Attestation Fee when they submit their vote. The vote options are binary: approve / reject.
  3. The Attestation process is open for a fixed duration of time.
  4. At the conclusion of the Attestation period, the Attestation Pool (the Proposal Fee plus all Attestation Fees) is distributed as follows:
     • Attesters in the majority receive their own Attestation Fee back, plus a pro rata share of 75% of the Proposal Fee and 100% of the Attestation Fees of those who voted with the minority; and
     • BeefLedger receives 25% of the Proposal Fee.

All network fees and payments are made in BEEF tokens.
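The payout rules above can be sketched as a settlement function. This is a simplification: it splits the majority’s share per head rather than by stake, and it does not handle ties or empty rounds, since the post does not specify those cases.

```python
def settle_attestation(proposal_fee, votes):
    """Distribute the Attestation Pool (amounts in BEEF tokens).

    `votes` maps attester -> (choice, attestation_fee_paid), with
    choice being "approve" or "reject". Assumes at least one vote
    and no tie; tie-breaking is unspecified in the source.
    """
    approve = {a: f for a, (c, f) in votes.items() if c == "approve"}
    reject = {a: f for a, (c, f) in votes.items() if c == "reject"}
    majority, minority = (
        (approve, reject) if len(approve) > len(reject) else (reject, approve)
    )
    # 75% of the Proposal Fee plus all minority fees, shared pro rata.
    shared = 0.75 * proposal_fee + sum(minority.values())
    payouts = {a: f + shared / len(majority) for a, f in majority.items()}
    payouts["BeefLedger"] = 0.25 * proposal_fee
    return payouts
```

For a 100 BEEF Proposal Fee and three attesters paying 10 BEEF each (two approving, one rejecting), each majority attester recovers their 10 BEEF fee plus half of (75 + 10) BEEF, the minority attester gets nothing back, and BeefLedger retains 25 BEEF.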

BEEF token is a general purpose ERC20 token that has many uses within the beef and related supply chains.

Votes are secret. Network members cannot see who voted which way.

Validation is a public good: all network participants require a common basis of valid data upon which to go about their business in the supply chain and with others.

Enabling proposers to submit ‘evidence files’ is an important part of the process. Here, ‘evidence files’ would typically be application documents or photographs / videos of events. We encourage proposers to submit ‘evidence files’ that are human-centric, meaning that the evidence is readily meaningful to other members of the network.

Rejected proposals can be re-proposed via either a multi-sig or another Community Attestation round.

This approach to data state validation opens up many areas for further research and development. For example, we will continue to explore the relative benefits / costs trade-off between these two processes. Additionally, we are mindful of the importance of additive learnings or experiences emerging from different agents utilising various processes, and how these may be rendered as ‘value’ over time. In particular, we are cognisant of the role of earned reputations, not to mention ‘referred’ reputations, in social information systems.

These are areas requiring further work.

Continual Improvement

We called this part ‘don’t let perfection be the enemy of the good’. We are the first to acknowledge the need for more work, more development and more refinement. We warmly welcome user feedback and are actively running trials to ensure we meet users’ expectations. Contact us if you’re interested in getting involved in further trials.

In terms of the interactions between the analogue and digital worlds, we continue to work with a host of IoT innovators and vendors on the one hand, and consumer-interface technologies on the other.

As for enriching the processes by which the community at large can participate in, and contribute to, data validity, we are confident that we have made great strides in the right direction. But there’s a lot more work to be done. We’d welcome thoughts in this arena especially.

Part 3 will explore another dimension of our approach to involving humans and communities – Project Demos and its signature BeefLegends initiative.