Piotr Cichocki April 2026
Secro Inc. – Maritime & Trade Solutioning, Managing Director
"I love standards. There are so many of them."
Why commodity trade’s digitalization keeps arriving at the wrong destination – and what to do about it.
Three platforms. Four standards bodies. Five working groups. And your operations team is still emailing PDFs.
The quote in the title – attributed variously to engineers, bureaucrats, and exhausted IT managers – has become the unofficial motto of trade digitalization. It gets a knowing laugh. Then everyone goes back to their desks and opens Outlook.
This paper is not about the next standard. It is about why the sequence matters more than the solution – and why the industry keeps getting that sequence wrong.
We keep confusing the map for the territory
Ask ten people in commodity trade what “digitalization” means. You will get eleven answers. This is not a communication problem. It is a symptom of an industry that has been attempting revolution while skipping the evolutionary steps that would make it work.
Digitalization is not the same as digitization. Digitization is converting a paper document into a file – a scanned PDF, a data entry form, an electronic record. It is useful. It is also largely finished. The industry has been digitizing documents for decades.
Digitalization is something else entirely. It means redesigning processes, governance structures, and data flows to take advantage of what digital environments make possible: real-time collaboration, conditional logic, automated compliance checks, and transferable electronic records with legal standing. Digitalization requires asking not “how do we move this document into a system?” but “what was this document actually trying to accomplish, and is there a better way to accomplish it?”
The distinction matters because most platforms currently sold as "digitalization solutions" are in practice digitization tools dressed in workflow clothing. They mirror the paper process rather than challenging it. The result is analogue operations at digital speed – which is not transformation but transcription. The terminology confusion is not innocent: it shapes procurement decisions, implementation failures, and misaligned expectations on all sides of a transaction.
The shipper’s name problem
Consider something as fundamental as a shipper’s name and address.
In one system, it is a single free-text field. In another, three fields: legal name, address, contact details. In a third, twenty-three structured fields with partial normalization. In a fourth, fifty-three fields – LEI code, registered address, operational address, authorised signatory, contact hierarchy – validated against an external registry.
All four representations are “correct.” None of them are talking to each other. And when a bill of lading, a letter of credit, a cargo manifest, and a sanctions screening system each expect a different version of that same shipper’s identity, the result is not a data quality problem. It is a governance failure dressed as a formatting question.
The shipper’s name example reveals four distinct conversations that the industry routinely collapses into one:
Data element definition – what is being captured and what does it mean?
Normalization – is it expressed in a common format and language?
Validation – is it checked against an authoritative source?
Source of truth governance – who owns it, under what conditions can it be amended, and by whom?
Most standards initiatives address one or two of these. Almost none addresses all four. Source of truth governance – the hardest and most consequential – is almost never addressed.
What needs standardizing – and in what order
The standards conversation in commodity trade almost always begins at the document layer. How should a bill of lading be structured? What fields does a certificate of origin require? This is understandable: documents are visible, tangible, legally recognized, and the obvious pain point for anyone who has reconciled a discrepant presentation under a letter of credit.
But the document layer is downstream of everything that determines whether digitalization works. The proper sequence runs deeper:
Data elements – atomic units: what fields exist, what they mean, how they are expressed.
Documents – structured subsets of data elements, presented in a defined format for a defined purpose.
Actions – discrete events that trigger or modify document status: approval, endorsement, amendment, surrender, release, transfer.
Processes – sequences of actions, involving multiple parties, governed by rules.
Procedures – the codified institutional and legal frameworks within which processes operate.
Standards initiatives that address documents without first agreeing on data elements are building on unstable ground. Initiatives that encode processes before optimizing them are digitizing inefficiency at scale.
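The layering argument can be sketched as nested types – a deliberately minimal illustration, with hypothetical names and a toy status model, of how each layer composes the one below it:

```python
from dataclasses import dataclass
from enum import Enum

# Layer 1: data elements – atomic units with an agreed meaning.
@dataclass(frozen=True)
class DataElement:
    name: str        # e.g. "shipper.legal_name"
    value: str

# Layer 2: documents – structured subsets of data elements,
# presented for a defined purpose.
@dataclass
class Document:
    doc_type: str                  # e.g. "bill_of_lading"
    elements: list[DataElement]
    status: str = "draft"

# Layer 3: actions – discrete events that modify document status.
class Action(Enum):
    APPROVE = "approved"
    ENDORSE = "endorsed"
    SURRENDER = "surrendered"

# Layer 4: processes – sequences of actions governed by rules.
# Here the rule is simply a fixed ordering of permitted transitions.
ALLOWED = {
    ("draft", Action.APPROVE),
    ("approved", Action.ENDORSE),
    ("endorsed", Action.SURRENDER),
}

def run_process(doc: Document, actions: list[Action]) -> Document:
    for action in actions:
        if (doc.status, action) not in ALLOWED:
            raise ValueError(f"{action.name} not permitted from {doc.status!r}")
        doc.status = action.value
    return doc
```

A standard that fixes the Document layer while leaving DataElement names unagreed, or that hard-codes the ALLOWED transitions before the underlying process has been optimized, inherits exactly the sequencing problem described above.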
Where do the current major initiatives sit against this hierarchy?
The UN/CEFACT Reference Data Model (RDM) provides the semantic foundation on which much of the industry’s digitalization architecture is ultimately built. It is syntax-neutral – implementable in XML, JSON, REST APIs, or blockchain – and structured around specific domain models: the Buy-Ship-Pay RDM for end-to-end trade, the Multimodal Transport RDM (MMT-RDM) for logistics documentation, the Supply Chain RDM for commercial procurement, and the Cross-Border Management RDM connecting trade data to customs requirements. Its role is less “standard to implement” and more “semantic master from which implementable standards are derived.” The gap between its theoretical reach and its direct operational adoption is real – but characterizing it as orphaned misreads its function. It is the foundation others build on, not the building itself.
The ICC DSI Key Trade Documents and Data Elements (KTDDE) initiative, released in April 2024, occupies a more pragmatic position. It analyses 36 critical trade documents – bills of lading, commercial invoices, certificates of origin, packing lists, and others – decomposing each into its constituent data elements and mapping them against the UN/CEFACT Buy-Ship-Pay Reference Data Model. Of the 36 documents, 21 already have standardized electronic versions; 15 require further alignment, and the KTDDE provides the roadmap. Backed by over 50 organizations including the WTO, WCO, and UN/CEFACT, it represents a genuine cross-sector coalition working at the data element layer before attempting document or process standardization. Its interactive Key Trade Data Glossary, housed within the Cross-Border Paperless Trade Database, is a practical tool rather than an aspirational framework.
The accompanying ICC DSI Digital Trade Readiness Assessment evaluates organizational capability across management systems, data management, and cybersecurity – helping organizations understand where they stand before committing to a digitalization roadmap. Both initiatives reflect the same honest starting position: you cannot standardize what you have not yet defined, and you cannot define what you have not yet mapped.
FIATA’s electronic Forwarder’s Bill of Lading (eFBL) takes a further step: an open-source data model, maintained on GitHub, explicitly mapped to the UN/CEFACT MMT-RDM, covering all necessary shipment details across sea, air, road, and rail. It is designed for direct implementation by TMS providers and is interoperable with DCSA and BIMCO standards through the FIT Alliance. This is what principled standards development looks like – semantic grounding, open access, interoperability by design.
DCSA – the Digital Container Shipping Association – is the most operationally advanced initiative in maritime trade. Driven by the nine largest container lines, it has built a process-driven data model and open API specifications for booking, bill of lading, and – shortly – arrival notice that are actively being implemented. Critically, DCSA maps its Information Model against the UN/CEFACT MMT-RDM, ensuring that container shipping data can interface with broader customs and multimodal frameworks. It is industry-funded, practitioner-anchored, and produces genuine traction.
BIMCO’s eBL standard for bulk shipping takes a different approach: twenty predefined data fields, common to bulk bills of lading, aligned with UN/CEFACT and DCSA through the FIT Alliance, and free for technology providers to adopt. Twenty fields versus DCSA’s far more granular model is not a criticism of BIMCO – it is an accurate measure of where bulk shipping currently sits on the digitalization journey, and of the genuine complexity of building consensus across a more fragmented ownership structure than container lines present.
ISO 5909:2026 represents a different order of institutional ambition. Developed jointly by ISO/TC 154 and the UN/CEFACT Transport and Logistics Domain, it defines business processes and data requirements for electronic bills of lading and was published in 2026 simultaneously as an ISO standard and as a UN/CEFACT Business Requirements Specification (BRS) approved by the UN/CEFACT Bureau – a convergence that elevates it from a draft standard to the aspirational international baseline for eBL platform design. Its data structure is organised into nine document segments – covering parties, transportation, cargo, freight, and terms – mapped in full against the UN/CEFACT MMT-RDM and UNTDED/ISO 7372, with explicit MLETR compliance framing. Notably, the standard acknowledges in its own text what practitioners in bulk and tank shipping have long observed: that the maritime industry’s segmentation into container, bulk, and tank modes creates “barriers to semantic consistency across different sea transportation modes” – and it explicitly scopes itself to accommodate all ocean transport types, not containers alone.
Two aspects of the standard warrant careful reading by eBL practitioners. First, its technical scope centers on distributed ledger technology platforms, but the BRS structure treats DLT-specific data elements as optional extensions, preserving applicability across technology implementations. MLETR is explicitly agnostic as to the technical mechanism used to satisfy possession, singularity, and control; existing eBL solution providers deploying non-DLT architectures will need to assess the standard’s scope claims against their own compliance positioning. Second, the joint publication with UN/CEFACT is significant as a convergence signal: the fragmentation described in this paper is being recognized and addressed at the highest level of international standardization. Audit and certification infrastructure does not yet exist, and adoption in live operations remains ahead. But the direction of travel is now institutionally anchored, and it points towards alignment. The question, as always, is sequence and timing.
The WCO Data Model deserves a place in this landscape that it rarely receives in commodity trade conversations. With approximately 727 data elements, XML and JSON support, and business process models for customs procedures, it is the global standard for cross-border regulatory data exchange. It is already being implemented by customs administrations across the EU, UK, US, Japan, Canada, Australia, and a growing list of others. For any platform handling international commodity trade, the WCO Data Model is not a future consideration – it is a current operational reality that a foundational data architecture must accommodate.
The joke is now explained: these initiatives are not competing versions of the same answer. They are answers to different questions, at different layers, for different constituencies. The reason there are so many standards is not because the industry is disorganized. It is because the industry has not yet agreed on what it is standardizing for – and has largely avoided the governance layer that would force that agreement.
The container solution is not your solution
DCSA’s achievement deserves genuine respect. Building an industry-wide data model that deliberately maps to UN/CEFACT MMT-RDM, while simultaneously producing API specifications that major shipping lines are implementing in live operations, is a serious accomplishment. Container shipping is ahead of the rest of maritime trade on digitalization – materially and structurally.
Which is precisely why borrowing the container solution for bulk commodity trade is a category error.
Dry and liquid bulk cargo does not behave like a container. It can be commingled in a single hold or tank, loaded without physical separation between parcels belonging to different owners, mixed or split at destination, and subjected to quantity and quality testing that determines final quantity and price after loading. A bill of lading for a bulk cargo can be negotiable, non-negotiable, or bearer type. It is frequently issued under a voyage charter party – CONGENBILL being a common example – that governs the entire legal relationship between shipper and carrier in ways that a liner BL under standard carrier terms does not.
None of this is accommodated in the DCSA data model – not by oversight, but by design. The model was built for the case it serves. BIMCO’s twenty-field eBL standard for bulk is the honest acknowledgment of where industry consensus actually exists in this sector: sufficient for the most common bulk BL data points, insufficient for the full complexity of a negotiable bulk instrument in a back-to-back commodity sale structure.
The practical consequence: any platform seeking to integrate upstream with container booking systems, with voyage management systems operated by bulk shipowners, or with transport management systems used by commodity traders faces a data model problem that cannot be solved by selecting a single standard. It requires an agnostic foundational model capable of conforming simultaneously to DCSA’s structure, VMS data outputs from bulk carriers, and TMS data inputs from commodity trading houses – while maintaining source of truth governance across all three and supporting upstream pre-population of transport document data at rates between fifty and ninety-five percent depending on the integration depth.
Downstream, the same foundational model must speak to WCO-aligned customs systems – already live across the EU, UK, US, Japan, Canada, Australia, and an expanding list of jurisdictions. For carriers, this means pre-arrival notifications structured to national customs requirements. For beneficial cargo owners, it means export and import declarations built from the same data that originated the transport document, without re-keying, without mapping exercises, without the reconciliation errors that manual re-entry produces. The value of a properly structured foundational data model is not abstract interoperability. It is the elimination of redundant data entry across the full transaction lifecycle – from booking confirmation to customs clearance.
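The value of a single canonical record feeding both upstream and downstream systems can be illustrated with a toy example. The field names, the booking-API shape, and the customs-declaration shape below are all invented for the sketch, not drawn from any published DCSA or WCO specification – the point is only that every counterparty format becomes a projection of one source record:

```python
# One canonical record; every counterparty format is a projection of it.
CANONICAL = {
    "shipper.legal_name": "EXAMPLE TRADING GMBH",
    "cargo.description": "WHEAT IN BULK",
    "cargo.gross_weight_kg": 55_000_000,
    "voyage.load_port": "BRSSZ",       # UN/LOCODE
    "voyage.discharge_port": "EGALY",  # UN/LOCODE
}

def to_booking_api(record: dict) -> dict:
    """Project the canonical record into a hypothetical carrier-API shape."""
    return {
        "shipper": {"name": record["shipper.legal_name"]},
        "commodity": record["cargo.description"],
        "placeOfReceipt": record["voyage.load_port"],
        "placeOfDelivery": record["voyage.discharge_port"],
    }

def to_customs_declaration(record: dict) -> dict:
    """Project the same record into a hypothetical customs shape –
    no re-keying, hence no reconciliation errors."""
    return {
        "ExporterName": record["shipper.legal_name"],
        "GoodsDescription": record["cargo.description"],
        "GrossMassKg": record["cargo.gross_weight_kg"],
        "ExitPort": record["voyage.load_port"],
    }

def prepopulation_rate(projection: dict) -> float:
    """Share of projection fields filled directly from the canonical record."""
    filled = sum(1 for v in projection.values() if v not in (None, "", {}))
    return filled / len(projection)
```

Because both projections read from the same record, a correction to the cargo description propagates to the booking and the declaration alike – the re-entry step, and the errors it produces, simply disappear.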
This is not a technology problem. It is an architecture problem. And architecture precedes software, data, and document management.
You cannot wait for the perfect standard
There is a version of the standards conversation that leads to paralysis. If data element definitions are incomplete, if process standards do not yet exist, if the source of truth governance framework is still being designed – does that mean nothing can be built?
No. But it does mean that what gets built needs to be honest about what it is – and designed to absorb the standards that are coming rather than resist or challenge them.
Readiness assessment work conducted across international commodity trade consistently reveals that many organizations are uncertain about their own digital capabilities, unclear on which standards apply to their specific workflows, and effectively waiting for someone else to resolve the interoperability question before committing. This is not irrational. It reflects genuine uncertainty about which investments will retain value when standards eventually consolidate. But collective waiting does not produce standards. It produces more working groups.
Software development resolved an analogous problem decades ago through iteration: start with a minimum viable implementation of the most constrained, lowest-risk use case. Build from evidence. Design for extensibility rather than completeness from day one. The solution that is “good enough” and actually in production generates more useful knowledge – and more pressure for standards consolidation – than the comprehensive solution perpetually in design.
Applied to commodity trade, this translates cleanly: where law and counterparties are aligned, issue an electronic bill of lading. Where legal recognition is incomplete or stakeholder readiness is uneven, normalize the data and digitalize the collaborative workflow on a future-proof platform – and continue producing paper documents today, from that same structured data, without sacrificing the architecture that will make the transition to electronic instruments straightforward when the conditions are met.
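That dual-mode posture – electronic where conditions permit, paper from the same structured data where they do not – amounts to a single issuance path with a branch at the very end. The sketch below is illustrative only: the readiness set, the function signature, and the output shapes are assumptions, not any real platform’s API (the jurisdiction codes stand in for a proper lookup of MLETR-aligned legal regimes):

```python
from dataclasses import dataclass

@dataclass
class BLData:
    shipper: str
    cargo: str
    discharge_port: str

# Jurisdictions assumed, for this sketch only, to recognise electronic
# transferable records – in practice a maintained MLETR-adoption lookup.
ETR_READY = {"SG", "GB"}

def issue(bl: BLData, governing_law: str, counterparties_ready: bool) -> dict:
    """The same structured data feeds both output channels."""
    structured = {
        "shipper": bl.shipper,
        "cargo": bl.cargo,
        "dischargePort": bl.discharge_port,
    }
    if governing_law in ETR_READY and counterparties_ready:
        return {"channel": "electronic", "record": structured}
    # Paper today, from the same data – no parallel data entry, and a
    # straightforward switch to electronic when conditions are met.
    return {"channel": "paper_printout", "record": structured}
```

The architectural point is in the last line: the paper branch emits the same structured record, so nothing is thrown away when the jurisdiction or the counterparties catch up.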
The ultimate low-hanging fruit is the most constrained case where the complete value chain – from data entry to document issuance to customs clearance – can be demonstrated end to end. That demonstration is what builds the organizational confidence and the evidentiary base that the next iteration requires.
What you are actually buying
There is a persistent misconception about what digital platforms for commodity trade are or should be.
Software-as-a-Service implies off-the-shelf: configure a few settings, pay the license fee, go live. This model works when standards exist at sufficient depth so that a platform can be pre-built to accommodate them. In retail supply chain and financial services, the underlying data standards, process standards, and identity management frameworks are mature enough that SaaS works as advertised.
In commodity trade, they are not. Not yet. Not at the depth required for electronic transferable records that must satisfy possession, singularity, and control requirements under MLETR and eUCP-aligned frameworks, while simultaneously conforming to the data model requirements of counterparties operating on different systems across different jurisdictions using different, industry-specific processes.
What the market actually requires is better described as Managed-as-a-Service (MaaS) – a configured, operator-specific deployment built on an agnostic foundational data model. The distinction from SaaS is not cosmetic. SaaS delivers a pre-built product that works within its own assumptions. MaaS delivers a configurable architecture that accommodates the operator’s specific cargo types, document structures, workflow requirements, and counterparty data model constraints – while remaining legally reliable for electronic transferable records, technically extensible as standards mature, and structurally capable of mapping to upstream and downstream integration requirements without rebuilding the data architecture each time a new counterparty or jurisdiction is added.
The workflows must be configurable. The data model must be agnostic. The legal architecture must be load-bearing from day one. The gap between SaaS expectation and MaaS reality is a direct consequence of the standards vacuum described above. When operators understand that distinction, and when vendors name it honestly, procurement decisions and implementation timelines become significantly more realistic.
The sequence is the strategy
The commodity trade industry does not need another standard. It needs to decide what it is standardizing for – and then work backwards through the hierarchy: governance first, data elements second, documents third, processes when the evidence from the first three is sufficient to encode them reliably.
Two propositions that appear contradictory are in fact both true:
Standards matter. Without agreed data element definitions, normalization, and source of truth governance, electronic documents cannot achieve the legal and operational reliability that paper documents currently provide – for all their inefficiencies.
You cannot wait for them. The comprehensive standard is always three years away. The evidence base that will eventually drive consolidation comes from systems in production, not from systems in design.
Evolution over revolution. Each layer is built on the one before. The sequence is not a constraint on ambition. It is the only path to the destination.