Article
Blockworks: Offchain Labs, Espresso Systems link up on transaction ordering tech

Offchain Labs and Espresso Systems will integrate both Timeboost and decentralized sequencer technology with the Arbitrum technology stack

Ethereum scaling solution Offchain Labs is partnering with blockchain infrastructure company Espresso Systems to bring Timeboost — a transaction ordering technology — to life.

The teams will also work on integrating both Timeboost and the Espresso Sequencer with the Arbitrum technology stack.

The Espresso Sequencer is a decentralized sequencing layer that layer-2s can choose to opt into, Ben Fisch, co-founder and CEO of Espresso Systems, told Blockworks.

“Having Offchain Labs’ support of this vision is a strong signal to us and to the Ethereum community that even teams with strong technology affiliations of their own will continue to prioritize permissionless approaches to coordination and technology,” Fisch said.

Article
Offchain Labs & Espresso Systems: Transaction Ordering Technology to Ethereum Rollups

TLDR: We’re partnering with Espresso Systems to bring decentralized and open shared sequencing technology to Ethereum rollups — improving safety, security, and the user experience across networks. Our team is contributing key research and resources towards our previously proposed transaction-ordering policy, Timeboost, and opening the doors for any network — including any Arbitrum chain — to adopt Timeboost and integrate directly with the Espresso Sequencer.

Overview

Today, we’re excited to announce that we’ve partnered with Espresso Systems to bring decentralized and open shared sequencing technology to Ethereum rollups. Our teams will undertake joint research and development of Timeboost — a transaction-ordering design we proposed earlier this year — and will also support technical integrations between the Arbitrum technology stack, Timeboost ordering, and the Espresso Sequencer.

Our teams have a shared vision for a decentralized and user-aligned future of shared transaction sequencing on Ethereum rollups. To achieve this vision, we’re supporting Espresso Systems in building a production-ready, open-sourced, and distributed implementation of Timeboost that can be integrated into the Espresso Sequencer. Support for the Arbitrum tech stack will enable any Arbitrum chains to integrate with the Espresso Sequencer and further the implementation of a neutral and open protocol that is compatible with all of Ethereum’s rollups.

Article
Cortado Testnet Integrates with OP Stack

Espresso Systems releases testnet 3 (Cortado), giving OP Stack developers access to the Espresso decentralized shared sequencing network.

Today we are releasing our third testnet of the Espresso Sequencer, Cortado, which includes an integration with the OP Stack. We are glad to support the Optimism ecosystem by offering OP Stack developers a means to not only decentralize transaction sequencing but also share sequencing with other rollups for enhanced interoperability. The Espresso Sequencer is a network that will be shared across many rollups in multiple ecosystems to enhance interoperability by making bridging and atomic transactions more efficient and more secure for users. Under the hood, it is a consensus protocol with fast finality, high throughput, and the ability to scale to thousands of nodes.

Any rollup can leverage the Espresso Sequencer for transaction ordering and (optionally) data availability, replacing dependence on centralized sequencers. The Espresso Sequencer is designed to offer rollups a means of achieving more credible neutrality, more secure and efficient interoperability, mitigation of harmful MEV, and long-term economic incentive alignment with L1 validators.

The OP Stack is the standardized, shared, and open-source development stack that is maintained by the Optimism Collective. It underpins OP Chains including Base, Zora, and PGN, with new OP Chains committing to the vision regularly. These early chains, along with OP Mainnet, are already scaling on-chain activity. In the last month, the four OP Chains combined to use 4.6M gas per second, about 3.7x the gas used on L1 Ethereum in the same period.

The code we have developed to integrate with the OP Stack is open source and available here, and documentation, including architecture, can be found here. You can spin up and experiment with a local version of the Cortado testnet and the OP Stack integration by following the instructions here.

Optimism Foundation Mission: Leader Election Proof of Concept

Last June, we undertook work on an Optimism Foundation Mission (RFP) calling for development of a Leader Election Proof of Concept. This work is designed as an open, public contribution to support the Optimism Collective’s progress toward technical decentralization. We at Espresso Systems are glad to have the opportunity to work closely with engineering leaders in the Optimism ecosystem and are proud to be able to contribute to this mission. You can find the code specifically related to this work here, documentation here, and our past updates to the community here.

“The integration of Cortado is an exciting milestone for the OP Stack. Not only does it lay the groundwork for additional sequencing protocols for OP Chains, it also underscores the Espresso team’s commitment to open-source values and community contribution,” said Ben Jones of the Optimism Foundation. “This milestone represents a step towards a standard that benefits developers, projects, and the Optimism Collective on the whole.”

Public Deployment with Caldera

In the coming weeks, we will be continuing our work with Caldera to deploy a publicly hosted testnet, including an OP Stack rollup. On this test rollup, users and developers will be able to deploy test contracts, submit transactions, and interact with applications in a real-time testing environment where all rollup transactions are sequenced by the Espresso Sequencer. Through familiar interfaces like MetaMask, users will be able to submit transactions and experience the rapid pre-confirmations provided by the Espresso Sequencer.

Caldera is a leading rollup-as-a-service company that enables developers to launch customized rollups with one click, leaving it up to the developer to choose what data availability, sequencer, and other layers underpin the system. Caldera supports rollups including Manta, Loot Chain, Syndr, and others.

Matt Katz, CEO of Caldera, said: “We’re incredibly excited to work with Espresso to bring decentralized sequencing to the OP Stack. Rollups today are centrally sequenced—meaning they do not yet fully live up to the decentralization ideals that crypto users will come to expect. We’re looking forward to collaborating further with Espresso, and offering decentralized sequencing to our users.”

Join The Espresso Ecosystem

With this release, we are also glad to welcome rollups and applications including Airchains, Kinto, Opside, Cartesi, Omni, and Vistara to the Espresso ecosystem. We are working with all of these teams to support research and integration of the Espresso Sequencer. There are currently a dozen different rollup teams working on their own bespoke integrations. If you are building a rollup or developing an application on a rollup and think you could benefit from the Espresso Sequencer, please reach out here.

Our work on Cortado follows on from our Doppio (Testnet 2) release in July. Doppio achieved competitive throughput benchmarks, with 1000 nodes (and a committee size of 10) achieving 29.41 MB/s, which is approximately 100–200k ERC-20 transfers per second. Doppio also enabled users to experience fast pre-confirmations as the Espresso Sequencer sequenced their transactions. For that release, we featured an integration with the Polygon zkEVM stack. Now, with the OP Stack integration, the Espresso Sequencer is taking its first steps toward shared sequencing, with the platform being shared by multiple rollups — and multiple rollup stacks.
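
As a quick sanity check on how 29.41 MB/s maps to the quoted transfer rate, the back-of-the-envelope Python below divides throughput by an assumed per-transfer size. The 150–300 byte range for a raw ERC-20 transfer is our own illustrative assumption, not a figure from the benchmarks:

```python
# Back-of-the-envelope check: bytes/s divided by an assumed transaction size
# gives transfers per second. The per-transfer sizes are illustrative
# assumptions, not published benchmark parameters.
throughput_bytes_per_sec = 29.41e6  # 29.41 MB/s, from the Doppio benchmarks

for tx_size_bytes in (150, 300):  # assumed size of one raw ERC-20 transfer
    tps = throughput_bytes_per_sec / tx_size_bytes
    print(f"{tx_size_bytes} B/tx -> {tps:,.0f} transfers/s")
# 150 B/tx -> 196,067 transfers/s
# 300 B/tx -> 98,033 transfers/s
```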

Be sure to follow along on Twitter and at our website for further updates on Cortado, including our upcoming public network release.

Article
Blockworks: Blockchain scaler to offer third testnet integration

Shared sequencers are an essential part of decentralization that may eventually lead to mass adoption

Blockchain scaling and privacy infrastructure company Espresso Systems will release its third testnet of the Espresso Sequencer for OP Stack builders.

The testnet, named Cortado, will include work for an Optimism Request for Proposal (OP RFP) that aims to decentralize sequencers.

Rollup solutions today run their own sequencers that have their own execution environments. These sequencers are responsible for ordering transaction information that is then sent to a virtual machine.

To ensure there is no single authority that orders these transactions, projects such as Espresso Systems are exploring ways to diversify these sequencers.

The initial release will be a locally hosted demo version of an OP Stack rollup running on Espresso that enterprises can test.

Over the coming weeks, a publicly hosted canonical OP Stack testnet will be released. This testnet will be similar to the testnet Doppio, which went live in July, according to Jill Gunter, chief strategy officer at Espresso Systems.

Podcast
Sovereign Radio: Espresso Systems and Shared Sequencing

In a conversation with Chjango at Modular Summit, Jill Gunter (Chief Strategy Officer of Espresso Systems) unpacks why we need decentralized sequencing, how shared sequencing supports cross-rollup interoperability, and the important role that builders and proposer-builder separation might play in the ecosystem. They attempt (though sometimes fail) to steer clear of buzzwords, making this a more accessible conversation for those starting their journey down the rollup infrastructure rabbit hole.

Podcast
Modular Summit: Dumb Blockchains Need Smart Solutions - Talk by Ben Fisch

The Modular Summit was a two-day event to learn from the visionary builders at the forefront of the modular blockchain revolution. In this talk, Espresso Systems CEO Ben Fisch reviews the current landscape of sequencing approaches, talks through various approaches to proof of stake and consensus for sequencing, and the role that builders and proposer-builder separation can and will play in the rollup ecosystem.

Podcast
Espresso Systems x Injective Labs: Conversation with Albert Chon

While at Modular Summit, Espresso Systems' COO, Charles Lu, sat down with Injective Labs' CTO Albert Chon. The two discussed Injective's genesis, the shared sequencing design space on Injective, and why Injective is prioritizing rollup interoperability and sequencer decentralization.

Article
Opening the Doppio Testnet to the Public

A few weeks ago, we announced the release of our second testnet for the Espresso Sequencer, Doppio. In that post, we shared a number of improvements made to the HotShot consensus protocol, announced numerous partnerships and collaborations, and released a demonstration of the Espresso Sequencer’s end-to-end integration with the Polygon zkEVM stack.

Today, we’re thrilled to share that Doppio is the first decentralized, shared sequencer testnet made available to the public. As part of this public release, we’re publishing our benchmarks for the testnet and documentation on how users can submit transactions to a Polygon zkEVM fork running over the Espresso Sequencer, and sharing our next steps.

Doppio overview

Doppio is the second testnet release for the Espresso Sequencer. In the announcement, we covered a number of the improvements made to the HotShot consensus protocol, and also published the HotShot paper.

HotShot is a fast finality consensus protocol that offers a property called optimistic responsiveness, meaning it can confirm transactions as fast as the network will allow. We leverage this property by using a CDN-like architecture to boost performance when network conditions are favorable. This means the Espresso Sequencer can achieve Web2 performance, while maintaining the security of a decentralized consensus protocol.
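
A rough way to see how the fast path and safety decouple is the sketch below. It is a toy model of our own devising, not HotShot's actual code: the node tries a high-bandwidth, CDN-style broadcast first and falls back to gossip on failure, but in either case a block only finalizes once a quorum of votes is collected:

```python
# Toy model (illustrative only) of an optimistically responsive fast path
# with a gossip fallback. Which broadcast path is used affects speed, never
# safety: finality always requires a quorum of votes.
import random

NUM_NODES = 10
QUORUM = 7  # illustrative quorum size

def broadcast_via_cdn(block) -> bool:
    """Fast path: high-bandwidth relay; may fail under bad network conditions."""
    return random.random() > 0.1  # assume a 90% success rate for the sketch

def broadcast_via_gossip(block) -> None:
    """Fallback path: slower, but eventually reaches all honest nodes."""

def collect_votes(block) -> int:
    """Stand-in for gathering signed votes from validators."""
    return random.randint(5, NUM_NODES)

def finalize(block) -> bool:
    if not broadcast_via_cdn(block):   # optimistic path failed...
        broadcast_via_gossip(block)    # ...fall back for liveness
    return collect_votes(block) >= QUORUM  # safety: quorum check either way

print(finalize({"height": 1, "txs": []}))
```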

We also covered our three-tiered data availability solution that provides data sharding with guaranteed availability. This design, which we call Tiramisu, was further explained in a separate blog post.

In the Doppio testnet, we have implemented the first two layers of Tiramisu, which are responsible for scalability. In future testnets, we will implement Savoiardi, which will improve the security of our data availability layer.

Two other major improvements to HotShot, featured in the Doppio release, include a new view synchronization subprotocol (based on Naor-Keidar) and signature aggregation.

Publishing Doppio’s Benchmarks

As a part of launching the Doppio testnet, we are publishing benchmarks related to performance of the Espresso Sequencer. The benchmarks show that a network of 1000 nodes attains very high throughput comparable to that of 10 nodes.

These benchmarks illustrate that the HotShot consensus protocol can achieve Web2 performance due to the design of our data availability layer, Tiramisu. As mentioned, the Doppio testnet includes the first two layers of Tiramisu, and future testnets will incorporate a third layer known as Savoiardi.

Benchmarks for the Doppio testnet, along with further details on the experimental setup, are published in the Espresso Sequencer documentation.

User demo with the Polygon zkEVM stack now public

We recently published a video demonstration highlighting the end-user experience for the Espresso Sequencer’s integration with a fork of Polygon zkEVM. The demonstration showcases a user submitting a transaction through MetaMask, which is then propagated through Espresso Sequencer nodes. The transaction is then included in a block sequenced by HotShot. After the transaction is ordered and included in a rollup block, it is sent to Polygon zkEVM nodes and provers.

Users will now be able to try this out for themselves. All they need to do is download MetaMask, join our Discord to get testnet tokens, and follow the steps outlined on our documentation site.
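
For developers who prefer a script to MetaMask, the same flow can be driven with web3.py (v6 shown). Every concrete value below, the RPC URL, key, and recipient, is a placeholder of our own; the real endpoint and faucet details are in the documentation:

```python
# Hypothetical example of submitting a transfer to the testnet rollup's RPC
# endpoint with web3.py v6. URL, key, and recipient are placeholders;
# substitute the real values from the Espresso docs and Discord faucet.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-espresso-zkevm-rpc.invalid"))  # placeholder URL
acct = w3.eth.account.from_key("0x" + "11" * 32)  # placeholder key; fund via the faucet

tx = {
    "to": acct.address,                 # placeholder recipient (self-transfer)
    "value": w3.to_wei(0.01, "ether"),
    "gas": 21_000,
    "gasPrice": w3.eth.gas_price,
    "nonce": w3.eth.get_transaction_count(acct.address),
    "chainId": w3.eth.chain_id,
}
signed = acct.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # .raw_transaction in web3.py v7
print("submitted:", tx_hash.hex())
```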

You can learn more about our integration with the Polygon zkEVM stack here.

Next steps

In our Doppio announcement, we shared that we’re beginning to onboard a number of rollups and rollup-as-a-service companies to the Espresso Sequencer. One of those integrations involves collaborating with Caldera to develop and deploy an OP Stack rollup integrated with the Espresso Sequencer.

We’re also excited to further contribute to the OP Stack by beginning our work on Optimism’s leader election proof of concept. We’ll be hosting a number of Twitter spaces with partners and posting updates on our work over the coming weeks on social media. For updates, be sure to follow us here.

We’re excited to make this release of the Espresso Sequencer available to the public. If you’re developing your own rollup or rollup-as-a-service platform, or building on the OP Stack, please reach out to us for early access to integrate with the Espresso Sequencer.

Article
Releasing the Espresso Sequencer Testnet II: Doppio

Rollups are horizontally scaling the application layer of Ethereum. But as computation is sharded across different rollups, the interoperability of applications running on these rollups becomes fragmented, significantly impacting user utility. Moreover, today's rollups are operated by centralized servers that decide which transactions to include and in what order (aka sequencing). Fragmented interoperability and sequencing centralization are some of the biggest challenges facing rollups today, as they undermine the core benefits of running applications on Ethereum in the first place.

The Espresso Sequencer is a decentralized shared sequencing layer designed to solve these challenges for Ethereum rollups. As we advocated in a previous post, a shared decentralized sequencing layer has the potential to improve the overall decentralization and interoperability of Ethereum’s rollup-centric future.

Back in November, we released the Americano testnet, our first demonstration of HotShot, the consensus protocol underpinning the Espresso Sequencer. Today we published our paper on HotShot and are announcing our second major milestone and testnet: Doppio.

HotShot Recap

HotShot is a fast-finality consensus protocol designed to reach the same scale as Ethereum’s validator set — not just in theory, but in practice, as restaking will enable Ethereum validators to use their staked ETH to participate in operating the protocol. HotShot offers a property called optimistic responsiveness, which is the ability to confirm transactions as fast as the network will allow. We take advantage of this property by using a CDN-like architecture to boost performance when network conditions are favorable, achieving “Web2 performance with Web3 security”.

As a consensus protocol, HotShot is optimized for the use-case of lazy sequencing, where the nodes participating in the consensus protocol only need to agree on an ordering of available transactions — they do not need to execute these transactions or run a virtual machine. Moreover, given this relaxation, while the nodes must collectively ensure the availability of transaction data, there is no strict requirement that all nodes must receive all the data. This allows for data sharding that can help massively scale the communication efficiency and throughput of consensus without compromising on availability.
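
The "lazy" relaxation is easy to make concrete. The sketch below is our own illustration, not Espresso's code: the sequencer agrees on an ordering over opaque payload commitments and never executes anything; rollups consume the ordered stream and execute on their own:

```python
# Illustrative sketch of lazy sequencing: the sequencer orders opaque
# payloads without executing them or running a VM. Rollups later fetch the
# data behind each commitment and execute independently.
import hashlib
from dataclasses import dataclass, field

@dataclass
class SequencedBlock:
    height: int
    tx_commitments: list[bytes]  # an ordering over hashes, not executed state

@dataclass
class LazySequencer:
    height: int = 0
    mempool: list[bytes] = field(default_factory=list)

    def submit(self, raw_tx: bytes) -> None:
        self.mempool.append(raw_tx)  # the payload is opaque to the sequencer

    def seal_block(self) -> SequencedBlock:
        # Agree only on an order of commitments; no execution, no VM.
        commitments = [hashlib.sha256(tx).digest() for tx in self.mempool]
        self.mempool.clear()
        self.height += 1
        return SequencedBlock(self.height, commitments)

seq = LazySequencer()
seq.submit(b"rollup-A: transfer")
seq.submit(b"rollup-B: swap")
block = seq.seal_block()
print(block.height, [c.hex()[:8] for c in block.tx_commitments])
```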

Article
Blockworks: First shared sequencer tech to go live on Polygon zkEVM testnet

Blockchain infrastructure company Espresso Systems has released a testnet version of the Espresso Sequencer on a forked version of Polygon zkEVM.

The testnet, named Doppio, has been operating within the company and will be open to external nodes over the next few months.

Sequencers are responsible for ordering transactions from the mempool and then sending the information back to a virtual machine. Similar to validators on a layer-1 network, they play a critical role in operating layer-2 blockchain networks.

Today, sequencers remain relatively centralized. Rollups run their own siloed sequencers with their own execution environments — and in the case of zero-knowledge (ZK) rollups, they also have their own provers.

In an interview with Blockworks, co-founder and strategy lead at Espresso Jill Gunter explained that existing sequencers operate in a relatively monolithic way.

“In existing rollup solutions, the sequencer is just a component bundled with the rest of the rollup software,” Gunter said. “Nothing is programmable, upgradable or swappable in a very easy way.”

As a result, many rollups today have experienced some type of downtime, something Gunter believes is downplayed in today’s rollup environment.

“It’s not a total disaster because you can always force the transaction back to the [layer-1],” she said. “But that might be prohibitively expensive, and cause what I call soft censorship issues where the transactions are not being prioritized.”

Article
Designing the Espresso Sequencer: Combining HotShot Consensus with Tiramisu DA

We introduced the motivation, design principles, and high-level requirements of the Espresso Shared Decentralized Sequencer in an earlier note. In this series of notes, we will dive deeper into the design of the two key components of our sequencer network: HotShot Consensus and the Tiramisu Data Availability layer. We will cover this in three parts:

Part I: Understanding the constraints of a sequencer network, and how it is different from a state machine replication (SMR) system
Part II: HotShot Consensus
Part III: Tiramisu Data Availability

A Byzantine fault tolerant state machine replication (BFT SMR) system requires a set of nodes to agree on a sequence of transactions and then execute these transactions such that the system functions even if some of the nodes are Byzantine faulty. This typically decomposes into three separate requirements:

  • Consensus: ensuring agreement on the ordering of these transactions among all non-faulty nodes
  • Data broadcast: ensuring that the transaction data is broadcast to every non-faulty node
  • Execution: ensuring that every non-faulty node executes the transactions and updates the state machine

Each of these is a separate requirement, and from a performance standpoint they are not necessarily at odds with one another. The overall performance of an SMR system is thus dictated by the slowest of the three.
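
A one-line model makes the last point concrete, and shows why dropping execution from the critical path (as lazy sequencing does) raises the bound. The stage rates are invented for illustration:

```python
# Toy model: an SMR system's end-to-end throughput is bounded by its slowest
# stage. The per-stage rates are invented for illustration.
stage_tps = {"consensus": 50_000, "data_broadcast": 8_000, "execution": 2_000}
print("SMR throughput:", min(stage_tps.values()), "tx/s")  # 2000, set by execution

# A lazy sequencer only orders and broadcasts; execution happens elsewhere.
del stage_tps["execution"]
print("ordering-only throughput:", min(stage_tps.values()), "tx/s")  # 8000
```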

Article
Espresso Systems and EigenLayer Announce Ecosystem Partnership

The organizations will collaborate to bring restaking to the Espresso Sequencer network.

We are excited to announce that we will collaborate with EigenLayer to leverage restaking on the Espresso Sequencer network. The partnership signals both organizations’ intention to implement restaking on future Espresso Sequencer testnets, and ultimately bring restaking to mainnet.

In designing the Espresso Sequencer, we want to ensure that our architecture correctly balances decentralization, security, and incentive alignment. Enabling restaking on the Espresso Sequencer will align the network closely with Ethereum and more quickly bootstrap strong economic security through EigenLayer’s pooled security model.

EigenLayer is a protocol that offers restaking, a technique that enables users to stake their Ether across multiple protocols, extending economic security beyond the beacon chain. Through EigenLayer, the Espresso Sequencer will gain access to Ethereum’s staked capital base and decentralized validator set, optimizing node usage and enhancing capital efficiency.

Launching a decentralized proof-of-stake consensus protocol is an intensive process. Achieving a meaningful level of economic security requires a high amount of capital, and acquiring a sufficient number of network participants to run a consensus protocol can also be a challenge. Restaking mitigates these problems by allowing the Espresso Sequencer nodes to restake Ether, which further backs the protocol with Ethereum’s high level of security and decentralization. Restaking is also a perfect match for HotShot, the Espresso Sequencer’s underlying consensus protocol, which scales to thousands of nodes with optimistic responsiveness.

As we’ve written before, we view restaking as an important method for aligning incentives between L1 validators and the L2 ecosystems they are increasingly underpinning. In a centralized sequencer, nearly all of the rollup value (e.g., fees, MEV) is likely to be captured by the sequencer. If none (or relatively little) of the value generated by a rollup is captured by the L1 validators, then a noteworthy concern is that this will destabilize the security of the rollup. This is for the simple reason that the L1 validators can be bribed to fork the rollup smart contract state, and in doing so profit more than they would by managing the rollup contract honestly. Decentralizing the sequencer, and empowering L1 validators to participate in operating it, mitigates this concern.
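
The bribery argument reduces to a simple inequality: forking is rational for validators whenever the bribe on offer exceeds what they earn by operating honestly. The figures below are invented purely to illustrate the comparison:

```python
# Invented, illustrative numbers for the incentive argument above: when L1
# validators capture little rollup value, a bribe worth a modest fraction of
# the rollup's activity can exceed their honest earnings.
rollup_value_per_year = 100_000_000  # fees + MEV generated by the rollup (invented)
validator_share_honest = 0.01        # share of that value reaching L1 validators
max_bribe_fraction = 0.10            # attacker offers up to 10% of rollup value

honest_earnings = validator_share_honest * rollup_value_per_year  # 1,000,000
bribe = max_bribe_fraction * rollup_value_per_year                # 10,000,000

print("fork is rational for validators:", bribe > honest_earnings)  # True
# Raising validator_share_honest (e.g., via restaking and sequencing rewards)
# flips the inequality and restores the security argument.
```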

We’ll work through different design choices with EigenLayer with the goal of bringing restaking to the Espresso Sequencer network. We will be regularly sharing updates around this work and recommend that you follow us on our social channels to stay up-to-date on our progress.

About EigenLayer: EigenLayer is a protocol that introduces restaking, revolutionizing the way stakers can secure and participate in multiple protocols within the Ethereum ecosystem and beyond. EigenLayer, led by EigenLabs, has garnered significant support, raising approximately $65 million in funding. Backed by notable investors including a16z, Blockchain Capital, Polychain Capital, and Coinbase Ventures, EigenLayer is at the forefront of leveraging Ethereum’s staked capital base to empower developers, validators, and stakers. Through EigenLayer, participants can optimize capital efficiency, enhance network security, and unlock groundbreaking possibilities in diverse blockchain ecosystems.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization, scaling, and interoperability of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
Espresso Systems and Injective Collaborate on Decentralized Rollups

Espresso Systems and Injective are Working Together to Bring Decentralization to Rollups built on the Interoperability-Focused L1

Espresso Systems is glad to support Injective as they integrate the Espresso Sequencer to bring decentralization to their rollups. The collaboration will begin with Cascade, the first interchain Solana SVM rollup for the IBC ecosystem, and showcases Injective’s commitment to long-term decentralization and scalability.

Injective is a leading interoperable layer 1 network, forging connections with blockchains such as Ethereum, Cosmos, and Solana while also being IBC-enabled. Injective offers fast transaction speeds, while also providing plug-and-play modules (e.g. on-chain orderbook features) which enable developers to rapidly launch new dApps. Cascade, which is currently in public testnet, allows developers to deploy Solana contracts for the first time on Injective and the broader IBC ecosystem.

The Espresso Sequencer is designed to enable rollup decentralization and improved interoperability without sacrificing the fast user experience granted by today’s centralized sequencers. Rollups can also benefit from Espresso Systems’ highly scalable data availability (DA) layer, which offers a cost-effective and decentralized solution with robust guarantees.

A first milestone in the collaboration will be a testnet integration with Cascade in late 2023, to be deployed to mainnet in early 2024. This integration will mark a key step towards enabling Injective rollups to smoothly transition into a fully decentralized sequencer framework in a generalized manner, without sacrificing their lightning-fast speeds. Future users of rollups on Injective will enjoy low latency while trusting that their transactions are being processed in a decentralized, credibly neutral manner.

“The lack of decentralized sequencers is a salient problem for nearly all rollups and layer 2 solutions today. Injective as always is continuing to break boundaries in the industry and the work with Espresso Systems will help to further bring true decentralization to the space,” said Injective Labs co-founder and CEO Eric Chen.

Of Injective’s choice to prioritize sequencer decentralization, Espresso Systems COO Charles Lu said, “Injective is one of the most innovative Web3 projects and has consistently pushed the envelope by using state-of-the-art technologies to achieve core blockchain ideals such as decentralization and credible neutrality. We are excited to support these goals.”

If you are interested in learning more about Injective and their Cascade rollup, you can read more on their website.

If you are developing rollups that you think could benefit from the Espresso Sequencer, we’d love to chat about supporting you with early access to integrate with us. Head to our website at www.espressosys.com and click “Participate” to get in touch.

About Injective: Injective is an interoperable layer 1 blockchain optimized for building Web3 financial applications. Injective provides developers with powerful plug-and-play modules for creating decentralized applications. Injective is incubated by Binance and backed by investors such as Jump Crypto, Pantera and Mark Cuban.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
AltLayer and Espresso Systems bring the Espresso Sequencer to the AltLayer Stack

We’re excited to welcome AltLayer to the Espresso Systems ecosystem, offering developers more options to accelerate decentralization via AltLayer’s Decentralized Verification and the Espresso Sequencer. The two companies will explore integrations between rollups built with the AltLayer platform and the Espresso Sequencer. AltLayer is a decentralized and elastic rollup-as-a-service (RaaS) platform for launching highly scalable layer 2s (L2s) with multi-VM support (EVM and WASM) and proofs.

Espresso Systems is supporting AltLayer in incorporating the Espresso Sequencer as an option for their developers and users over the coming months. The teams will be undertaking the following:

  • Iterating on the Espresso Sequencer developer experience. AltLayer will be providing early feedback on the Espresso Sequencer integration experience to make this as seamless as possible for AltLayer’s users.
  • Collaborating on the design space for Decentralized Verification and the Espresso Sequencer. AltLayer and Espresso Systems will explore various ways that our Sequencer designs can complement each other, providing greater optionality and support for developers in deploying decentralized rollups from launch.
  • Adding the Espresso Sequencer as a feature for AltLayer developers. The AltLayer rollup launchpad is a no-code solution that lets developers launch their rollup in as little as two minutes. Rollups integrated with future AltLayer testnets will be able to have their transactions sequenced by the Espresso Sequencer.

Developers can decide if they want to launch their rollup with AltLayer’s Decentralized Verification solution and/or the Espresso Sequencer when deploying on the AltLayer stack. The partnership will give application developers an easy way to launch scalable and customized L2s, while ensuring that future rollup users benefit from the properties that sequencer decentralization provides.

If you are interested in developing on AltLayer, you can learn more by heading to their website at www.altlayer.io.

If you are developing a rollup and want to get early access to integrate with the Espresso Sequencer, you can reach out to us on our website www.espressosys.com by getting in touch via the “Participate” link. We look forward to hearing from you.

Article
Spire Announces Plans to Integrate with Espresso Sequencer

Spire, a Layer 3 Rollup-As-A-Service offering, comes out of stealth and announces plans to use Espresso Sequencer for sequencing & data availability

We are excited to welcome a new addition to the Espresso Sequencer ecosystem: Spire, a Layer 3 (L3) Rollup-as-a-Service. This announcement sees Spire come out of stealth and signal their intentions to integrate with the Espresso Sequencer.

Spire provides developers with infrastructure that enables them to easily spin up their own L3 app chain on top of a zkEVM Layer 2 (L2). Through Spire’s developer-friendly infrastructure, application developers will be able to launch L3 app chains that are credibly neutral and sufficiently decentralized from day one. By using the Espresso Sequencer, the Spire ecosystem will have a clear path towards cross-rollup interoperability as more L3s deploy on the protocol.

Over the coming months, the Spire team will be working, with the support of Espresso Systems, to integrate their L3 framework with the Espresso Sequencer and Espresso Data Availability (DA) for sequencing and storage. Spire will deploy the Espresso Sequencer on an upcoming testnet to bring decentralized sequencing to mainnet in the future. Spire’s app chains will benefit from the decentralization that the Espresso Sequencer brings without compromising on the performance that developers and users expect. The Espresso Sequencer leverages an optimistically responsive consensus protocol called HotShot that enables high throughput and fast finality in normal environments, with a robust fallback in adversarial conditions.

Spire’s execution environment uses RiscZero and will support smart contracts in a number of programming languages. It will also support the zkEVM within its virtual machine. Spire plans on releasing a public testnet in 2024, where application developers will be able to spin up L3 rollups and test transactions that will be sequenced by the Espresso Sequencer. For more information on their architecture, design and integration with Espresso Systems, please head to Spire’s website.

If you are developing a rollup and are interested in getting early access to integrate with the Espresso Sequencer too, we want to hear from you. Head to our website at www.espressosys.com and get in touch via the “Participate” link.

About Spire: Spire is an L3 Rollup-as-a-Service (RaaS) protocol that provides customized functionality and scalability for Ethereum app chains. Spire focuses on L3s on top of zkEVMs, and through its recursive zk-rollup architecture, Spire L3s can scale vertically and also leverage applications on L2s.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
Espresso Systems and Catalyst Collaborate to Improve Interoperability

Today, we are glad to welcome Catalyst as the first application to commit to the Espresso Sequencer ecosystem. Catalyst is a cross-chain AMM platform focused on connecting modular blockchains. In committing to leverage the Espresso Sequencer, Catalyst lays the foundation for secure and seamless cross-rollup interoperability. Over the coming months, Catalyst plans to prioritize deploying on rollups that use the Espresso Sequencer. Catalyst’s aim is to make it easy for developers and users alike to safely access liquidity, and more, from multiple rollups using the Espresso Sequencer.

As we’ve written about in the past, one of the key motivations for developing the Espresso Sequencer is to help defragment the quickly growing modular blockchain ecosystem. By enabling a diverse set of blockchains to share an ordering layer, rollups can gain improved atomicity with each other. For example, a user can specify that a transaction on one rollup should only be included if a transaction on another rollup is simultaneously included. Through proposer-builder separation and support of builders, this functionality can be extended even further, granting users atomic execution guarantees.

Catalyst is developing a leading cross-chain AMM, which can currently be tried out on testnet. They are building a novel solution for cross-chain swaps utilizing their unit-of-liquidity model, a universal mechanism for redeeming liquidity from any chain. As a result, any chain that integrates Catalyst can easily move value to/from any other Catalyst-enabled chain. By removing the friction of cross-rollup swaps, Catalyst will lead to increased economic activity for rollups on the Espresso Sequencer.

Catalyst can further generalize its functionality using logic-dependent swaps. This lets users define specific outcomes for their transactions, for example: ‘only swap my 1,000 USDC on chain A if I get at least 1 ETH on chain B in return’. The Espresso Sequencer complements this feature by enabling atomic transaction inclusion. This allows block builders to give atomic execution guarantees to end users and makes cross-chain arbitrage in a single block possible.
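
From a builder's point of view, such a conditional, atomic bundle might look like the sketch below. This is our own illustration; the types and fields are invented and are not Catalyst's or Espresso's actual interfaces:

```python
# Illustrative sketch of an atomic cross-rollup bundle: a builder includes
# both legs in the same shared-sequencer block, or neither, and only if the
# user's outcome condition holds. All names and fields are invented.
from dataclasses import dataclass

@dataclass
class Leg:
    rollup: str
    tx: str

@dataclass
class ConditionalBundle:
    send_leg: Leg          # e.g., 1,000 USDC out on chain A
    receive_leg: Leg       # e.g., ETH in on chain B
    min_amount_out: float  # user-defined outcome: "at least 1 ETH"

def builder_include(bundle: ConditionalBundle, quoted_out: float) -> list[Leg]:
    """Return both legs for inclusion in one block, or nothing at all."""
    if quoted_out >= bundle.min_amount_out:
        return [bundle.send_leg, bundle.receive_leg]  # atomic: both or neither
    return []

bundle = ConditionalBundle(Leg("chain-A", "swap 1000 USDC"),
                           Leg("chain-B", "receive ETH"),
                           min_amount_out=1.0)
print(builder_include(bundle, quoted_out=1.02))  # both legs included
print(builder_include(bundle, quoted_out=0.95))  # [] -> neither included
```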

The Espresso Sequencer also improves the user experience for cross-chain applications like Catalyst in other ways. Because blockchains using the Espresso Sequencer share the same consensus, it is impossible for one chain to reorg independently of another. This means that attempting a double spend by reorging one of the two chains involved in a swap is impossible. Another benefit of sharing a consensus set is that cross-chain swaps can have very low latency and, as mentioned earlier, can even execute atomically in the same block with the help of a block builder guarantee.

We are glad to bring these benefits to Catalyst, the rollups they deploy on, and their end users.

If you are developing an application that you think would benefit from the Espresso Sequencer, too, then we’d like to hear from you. Head to our website at www.espressosys.com and get in touch via the “Participate” link!

About Catalyst: Catalyst is a leading provider of permissionless liquidity for modular blockchain ecosystems. Its mission is to enable scalable and secure solutions for users to access any application on any chain. Its liquidity layer connects any new modular chain to hubs like Ethereum and Cosmos.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization, scaling, and interoperability of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
Espresso Sequencer Unveils Testnet Integration with Polygon zkEVM Stack

Espresso Systems shares the Doppio testnet, which integrates with a fork of the Polygon zkEVM codebase, showcasing the possibilities of decentralized sequencing and fast finality for zk rollup chains

Today, Espresso Systems has released a testnet integration between the Espresso Sequencer and Polygon zkEVM client. This marks the first rollup to run transactions on a modular sequencing testnet. Espresso prioritized the Polygon zkEVM stack for its first integration: the stack is a leader in the space, offering a fully audited, open source, and feature-complete ZK rollup client, including fast exits and verifiable zero-knowledge proofs.

The Espresso Sequencer is designed to solve core issues facing rollups on Ethereum: achieving decentralization, maintaining fast finality of transactions, and offering developers and users a more seamless and secure path to interoperability among rollups. The testnet allows users to submit transactions to a fork of the Polygon zkEVM, which are then routed to and sequenced by nodes running Espresso’s HotShot protocol.

The testnet, Doppio, is currently run internally by the company, with the intent to open participation to external nodes in the coming months. The nodes running the Espresso HotShot protocol can provide almost immediate pre-confirmations to users and also provide data availability guarantees. Espresso Systems has open-sourced the code underpinning the network. Of the work with Polygon zkEVM, Espresso Systems CEO Ben Fisch said, “As our first rollup integration, this represents a milestone for the Espresso Sequencer project. We are glad to offer the Polygon ecosystem, their developers, and their users a path toward decentralization without compromising on quick confirmations.”

Polygon zkEVM is a leading EVM-equivalent rollup that leverages zero-knowledge proofs to lower transaction costs. Launched in March, Polygon zkEVM already has more than 200,000 active wallets and over $50 million in total value locked. Polygon co-founder Sandeep Nailwal shared, “We are glad that Espresso Systems prioritized the Polygon zkEVM as their first integration, recognizing the lead Polygon has amongst other zk rollups. It is hands down the most mature zk rollup project. Polygon is committed to decentralization as a core principle and welcomes innovative contributions like the Espresso Sequencer to the ecosystem.”

In the current testnet stage, the Polygon-Espresso integration is able to provide users with pre-confirmations, with finality guaranteed by a quorum of stake decentralized over many nodes.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

About Polygon zkEVM: Polygon is a premier platform for Ethereum scaling and infrastructure development using zero knowledge technology. The Polygon ecosystem has seen the development of 50,000+ applications and has processed over 2.8bn transactions. Polygon zkEVM is a leading zero knowledge rollup that prioritizes EVM-equivalence, security, low fees, and decentralization.

Article
Espresso Systems and Caldera Bring Decentralized Sequencing to OP Stack

The organizations will work together to integrate the Espresso Sequencer with the OP Stack, utilizing Caldera’s platform

We are excited to announce our upcoming contributions to the OP ecosystem with Caldera, a rollup-as-a-service (RaaS) company that enables developers to deploy custom-built rollups for on-chain applications. We’ll collaborate to develop an integration between the Espresso Sequencer and the OP Stack. This integration also creates a path towards Caldera layer 2s (L2s) taking advantage of the Espresso Sequencer’s support for decentralization, scale, and interoperability.

Over the coming months, Espresso Systems and Caldera will deploy an optimistic rollup that uses the Espresso Sequencer for ordering and fast confirmations and Espresso Data Availability (DA) for storage. Caldera will provide the OP rollup interface, site hosting, block explorer, and indexers. The Espresso Sequencer will integrate with the OP Stack to support developers in building decentralized rollups from launch, accelerating the Superchain vision. An upcoming testnet will make it easy for anyone to test this new OP chain and see how transactions are sequenced by the Espresso Sequencer. Future developers building on Caldera will be able to easily opt into using the Espresso Sequencer and Espresso DA as a plug-in component of the modular rollup stack.

The Espresso Sequencer and Espresso DA both leverage HotShot, a consensus protocol specifically designed to enable robust decentralization without compromising on the fast, low-fee user experience that users have come to expect from rollups. HotShot achieves this through optimistic responsiveness. You can read more in our blog post or our paper on our approach to consensus here and here.

Caldera enables developers to easily build dedicated blockchains for their applications. Through its ecosystem, Caldera chains are interoperable while also being EVM-compatible. Caldera simplifies the process of launching a blockchain by providing everything needed to run a rollup: an interface, site hosting, block explorers, indexers, and oracles. Caldera has also curated a select ecosystem of recommended bridges and more, which lessens the lift to launch an L2.

We are building the Espresso Sequencer so that it can support any rollup framework or architecture. On the heels of unveiling our first zk-rollup integration, we are thrilled to share that our next testnet project, alongside Caldera, is to prioritize optimistic rollup frameworks — starting with the OP Stack. By combining the Espresso Sequencer and Espresso DA with Caldera’s infrastructure, developers have an opportunity to build dedicated blockchains for their applications, without trading off decentralization for performance.

If you are building on the OP Stack and interested in getting early access to integrate with the Espresso Sequencer, we would love to hear from you. Head to our website at www.espressosys.com and get in touch via the “Participate” link.

And, in case you missed it, our proposal to build the OP Stack’s Leader Election proof-of-concept was accepted. We’re thrilled to be contributing to the OP Stack and the Superchain, and are excited to work through these processes with Caldera.

About Caldera

Caldera specializes in building high-performance, customizable, and application-specific layer-two blockchains. These custom-built blockchains (Caldera Chains) offer high throughput, low latency, and customizable features for optimizing the performance and user experience of decentralized applications. Caldera is backed by investors like Dragonfly, Sequoia, Ethereal Ventures, and Zonff Partners.

About Espresso Systems

Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
Shared Sequencing: Defragmenting the L2 Rollup Ecosystem

As Ethereum has grown, its scalability worries have gone from theoretical to practical. Especially during periods of high network activity, many users have been priced out of sending transactions over the Ethereum blockchain. A solution to this problem is moving the execution of transactions off-chain through rollups. At a high level, rollups outsource an L1 blockchain’s computation to a single party, which in turn is tasked with proving to the L1 blockchain that the computation was performed correctly.

There are at present two main methods by which this proof is performed. In optimistic rollups it is done through fault/fraud proofs, and in zk-rollups through validity proofs. The popularity of rollups as a scaling solution can’t be overstated, with a proliferation of rollups being developed, ranging from ZK to optimistic rollups, and from the EVM to app-specific VMs. This trend is likely to continue, supported by a host of new rollup-as-a-service startups and Ethereum rolling out its rollup-centric roadmap.

While rollups are an excellent solution for inheriting some form of economic security at scale, they introduce two key new problems:

  1. By relying on a single party for transaction ordering and inclusion in a rollup, they are prone to monopoly pricing and censorship.
  2. The proliferation of multiple rollup solutions breaks composability within the Ethereum ecosystem. Liquidity will fragment between rollups, and assets and other data will have a hard time moving across domains.

Article
The Espresso Sequencer: Motivations & Principles
  • Layer 2 rollups are delivering on their promise to scale Ethereum and make it useful for a wider range of users and applications, but currently rely on centralized sequencers.
  • Espresso Systems is developing the Espresso Sequencer to support rollups in decentralizing, without compromising on scale.
  • The Espresso Sequencer is designed to offer rollups a means of achieving credible neutrality, enhanced interoperability, mitigation of negative effects of MEV, and long-term economic incentive alignment with L1 validators.

In November, we shared our work on the Espresso Sequencer: a platform to support rollups in decentralizing.

Rollup projects across the ecosystem are solving the problem of scaling Ethereum every day. Where we come in is in helping them decentralize their sequencer component to deliver on credible neutrality, security, reliability, and interoperability without sacrificing performance.

Now, we are sharing more about our goals, design principles, and how the Espresso Sequencer fits into the landscape of L2 components and architectures.

Podcast
Shared Sequencers in the Modular Stack

Join us on 0xResearch as Josh Bowen (Astria) and Ben Fisch (Espresso Systems) discuss the role of shared sequencers in the modular future. Josh and Ben explain how shared sequencers remove censorship, liveness, and regulatory risks for rollups while creating a layer for atomic composability. Will L2s choose to leverage a shared sequencer? What is the value proposition? Where will value and MEV capture occur? We discuss all of this and more!


And as usual, we start the episode with our analyst bullpen to discuss Voltz v2 cross-margining, crypto's mobile-first UX improvements, and an update on Arbitrum.


Timestamps:

(00:00) Introduction

(00:59) Hot Seat/Cool Throne

(21:50) Interview Start: The Modular Stack

(24:23) The Role of Centralized Sequencers

(28:29) Lightspeed Promo

(29:10) Decentralized Sequencer Design

(36:28) Why Will L2s Use a Shared Sequencer?

(45:00) Cross-Rollup Atomic Composability

(53:12) Rollup MEV Capture

(1:01:00) A Winner-Takes-Most Market?

(1:04:20) Restaking: Aligning L1 and L2 Economic Value

(1:10:23) Who are the Ideal Users?

(1:14:33) Supported Virtual Machines

Podcast
Empire Podcast: How Rollups Will Decentralize Their Sequencer

Today we are joined by Josh Bowen (Astria) and Ben Fisch (Espresso Systems) to discuss the role of shared sequencers in the modular future. Josh and Ben explain the risks of centralized sequencers, why rollups will use a shared sequencer, and the core tradeoffs. What is the value proposition of a shared sequencer? How will MEV be allocated between rollups and sequencers? Why haven't rollups already decentralized their sequencers? We discuss these questions and more in this epic conversation!

Timestamps:

(00:00) Intro

(04:27) The Role of a Sequencer

(11:24) The Risks of Centralized Sequencers

(20:13) Using the L1 as the Sequencer

(23:12) Why Use a Shared Sequencer?

(34:56) Quicknode

(36:43) Kwenta

(37:40) The Tradeoffs

(40:35) Who will be the Sequencers?

(45:23) The AWS Analogy

(50:27) Economic Capture and Allocation Challenges

(58:28) How Block Times Impact Shared Sequencing

(01:01:12) Competition

(01:05:04) Restaking Risk and Timelines

Article
Releasing Espresso Testnet 1: Americano

The Espresso Sequencer is a system designed to decentralize layer-2 (L2) scaling solutions. Today, we are releasing our first milestone in our development of the Espresso Sequencer: Espresso Testnet 1 — Americano.

Americano is our first demonstration of HotShot, an optimistically responsive consensus protocol underpinning the Espresso Sequencer that delivers Web2 performance with Web3 security. Our benchmarking and profiling of Americano establish a baseline of performance and precisely pinpoint performance bottlenecks, which we plan to address in future releases.

Though this release does not yet integrate the more advanced techniques on our roadmap to scale throughput, it already achieves a high baseline of performance in “good” network conditions. In our benchmarks, we measured throughput in network configurations of 10, 100, and 1,000 nodes with fixed stake distributions. Nodes were ECS instances with 4 GB memory and 2 CPU cores. The leader instance and centralized broadcast server were m6a.xlarge EC2 instances with 4 CPUs and 16 GB memory.

A throughput of 2,598 KB/s maps to 10,000–20,000 ERC-20 transfers or 700+ CAPE transactions per second.
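
Inverting that arithmetic shows the per-transaction sizes the mapping implies. This is our own check and assumes KB here means 1,000 bytes:

```python
# Implied bytes per transaction at 2,598 KB/s (assuming KB = 1,000 bytes).
throughput = 2_598_000  # bytes/s

for label, tps in [("ERC-20 (upper)", 10_000), ("ERC-20 (lower)", 20_000), ("CAPE", 700)]:
    print(f"{label}: {throughput / tps:,.0f} bytes/tx")
# ERC-20 (upper): 260 bytes/tx
# ERC-20 (lower): 130 bytes/tx
# CAPE: 3,711 bytes/tx
```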

As a note, we also experimented with randomized committee sampling: on 1,000 nodes, we limited the voting nodes to 100 and saw a small but insubstantial improvement, to 284 KB/s. Even without committee sampling, we believe our initial HotShot benchmarks are a strong starting point.

Explore Americano

We’re excited to share this key milestone with our community, and we invite enthusiasts and developers to get involved by checking out the code and sharing their thoughts.

This first testnet is an internal testnet, meaning it is not currently persistently deployed for public participation. If you are interested in becoming a validator for future Espresso Sequencer testnets and our mainnet release, you can stay tuned for updates by joining our Discord, following us on Twitter, and reaching out to us here.

Along with this release, we’ve made the HotShot and espresso repos publicly available along with several additional utility repos. Together, these form the entirety of the Americano testnet, integrating the Espresso ledger with HotShot. We look forward to engaging with you on Discord and on Twitter!

What’s Next

Americano is just our first public release of the Espresso Sequencer.

Americano leverages a centralized network architecture to exploit the benefits of optimistic responsiveness. As a reminder, this property means a centralized architecture can increase throughput in optimistic conditions without impacting safety. With our next testnet, Doppio, we plan to further optimize the optimistic path (e.g., with load-balancing, parallelism, and even higher-bandwidth centralized servers) while integrating it with a robust gossip-based fallback path, which retains liveness in pessimistic conditions. We will also integrate a data-availability mechanism to move data (i.e., full-block) dissemination off the critical path of consensus.

The Doppio testnet will also take the first steps toward supporting live rollups. If you are working on a zk-VM or optimistic rollup and interested in leveraging the Espresso Sequencer to decentralize your layer-2 scaling solution, please reach out to us here.

There is still much to come. Over the upcoming weeks, we will be sharing more details on our approaches to key issues like data availability, rollup decentralization, and acceleration of proving. We will also be releasing a formal paper on HotShot consensus. Stay tuned!

Article
Espresso HotShot: Consensus Designed for Rollups

The Espresso Sequencer is a system that decentralizes transaction sequencing for layer-2 scaling solutions on Ethereum without compromising on their scale and speed.

At the core of the Espresso Sequencer is a consensus protocol that prioritizes high throughput and fast finality. We have designed this protocol to complement the trade-offs inherent to Ethereum’s consensus, which prioritizes liveness under pessimistic conditions rather than responsiveness under optimistic conditions. Rollups on the Espresso Sequencer can offer users a more performant experience than rollups that rely directly on Ethereum for sequencing. We believe in empowering users to access the combination of speed, scale, and security they require. Because it builds on the HotStuff protocol, we call the Espresso Sequencer’s consensus protocol HotShot.

The Espresso Sequencer is designed around a single decentralized proof-of-stake security model that underpins both a consensus protocol for ordering transactions and a data availability mechanism which allows for further performance benefits. It also encompasses a system of rollup contracts that (a) register committed blocks of sequenced transactions, verifying their consistency with the consensus protocol and availability certificates, (b) register updated state commitments for each zk-VM deployed to the Espresso Sequencer, and (c) receive and validate proofs for the state updates.

Our first public milestone in this effort, the Americano testnet, implements the first version of HotShot.

Optimistic Responsiveness

HotShot prioritizes high throughput and fast finality, complementing the dynamic availability of Ethereum’s consensus (Gasper). Fast finality, or more formally optimistic responsiveness, is the ability of the protocol to confirm transactions as fast as the network will allow. Confirmation can be nearly instantaneous under optimistic network conditions. This stands in contrast to protocols in which the confirmation delay is tuned to worst-case network conditions, or where transactions are only probabilistically final. Dynamic availability, the hallmark achievement of Nakamoto’s longest-chain consensus protocol, is the ability of a protocol to remain live under sporadic participation, even if most nodes at any given time are offline. Consensus protocols must choose between optimistic responsiveness and dynamic availability; the two properties are incompatible. Most practical BFT protocols to date, including Tendermint and Casper, achieve neither property.

HotShot extends HotStuff to the decentralized “proof-of-stake” setting with large-scale dynamic participation, while retaining optimistic responsiveness.

Web2 Performance with Web3 Security

Scalability in consensus systems is measured by throughput and latency. Throughput is best described by the bytes of data that can be finalized by the system per unit of time (e.g. per second). This is more precise than TPS as it accounts for the variability of size and complexity across transactions. Meanwhile, latency can be defined as the average time it takes for a transaction to be finalized after it’s submitted. The primary scalability challenge of consensus protocols is to achieve the highest possible throughput while maintaining decentralization and a reasonably low latency.
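
To make the distinction between raw throughput and TPS concrete, here is a minimal sketch in Rust. The 2,598 KB/s figure is the Americano benchmark quoted earlier in this document; the ~130-byte average ERC-20 transfer size is our own illustrative assumption, and real sizes vary by workload.

    // Toy conversion from raw throughput (bytes per second) to TPS.
    // The average transaction size is an assumption for illustration.
    fn tps(throughput_bytes_per_sec: f64, avg_tx_size_bytes: f64) -> f64 {
        throughput_bytes_per_sec / avg_tx_size_bytes
    }

    fn main() {
        let throughput = 2_598.0 * 1_000.0; // 2,598 KB/s in bytes per second
        println!("~{:.0} TPS at 130 B/tx", tps(throughput, 130.0)); // ~19,985
    }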

Consensus, or state-machine replication, is not only a protocol for all participating nodes to agree on an ordering of transactions, but also to replicate the state (or at least a transaction log that can be replayed). While in theory these two functionalities can be separated, and the quantity and/or identity of the nodes participating in each could be distinct, both sets are large and diverse in a decentralized system. Thus, at the heart of any decentralized blockchain is a mechanism for propagating information in a resilient way among all nodes participating in the protocol.

Resilient communication protocols (e.g., peer-to-peer gossip) are one reason why decentralized blockchains achieve much lower throughput than traditional “Web2” transactional systems, particularly when there is extreme heterogeneity among nodes participating in the network. The typical “Web2” architecture utilizes a star network configuration, whereby all traffic is routed through one or more designated high-bandwidth servers. This optimizes the communication (particularly the broadcast rate) in a network where most participating nodes have much lower bandwidth than these central servers, but it is less resilient to byzantine corruption.

A primary advantage of optimistically responsive consensus protocols (e.g., HotStuff) is the ability to perform better when network conditions are favorable. Such protocols can even leverage a typical “Web2” architecture to optimistically achieve extremely high throughput, and in the worst case fall back to a high-resilience gossip-based path with lower-throughput. In this sense, optimistically responsive protocols have the potential to achieve the best of both worlds: Web2 performance with Web3 security.
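
For intuition, the fast-path/fallback idea can be sketched as follows. This is a deliberately simplified illustration, not HotShot’s actual implementation; the Broadcast trait and both implementations of it are hypothetical stand-ins.

    use std::time::{Duration, Instant};

    // Simplified sketch: try a high-bandwidth broadcast server first,
    // and fall back to resilient gossip if the optimistic path stalls.
    trait Broadcast {
        fn send(&self, block: &[u8]) -> Result<(), ()>;
    }

    struct FastServer; // star topology: one high-bandwidth relay
    struct GossipNet;  // peer-to-peer gossip: slower, but no single point of failure

    impl Broadcast for FastServer {
        fn send(&self, _block: &[u8]) -> Result<(), ()> { Ok(()) }
    }
    impl Broadcast for GossipNet {
        fn send(&self, _block: &[u8]) -> Result<(), ()> { Ok(()) }
    }

    fn disseminate(block: &[u8], fast: &dyn Broadcast, gossip: &dyn Broadcast, timeout: Duration) {
        let start = Instant::now();
        // Optimistic path: as fast as the network allows.
        if fast.send(block).is_ok() && start.elapsed() < timeout {
            return;
        }
        // Pessimistic path: liveness no longer depends on any one server.
        let _ = gossip.send(block);
    }

    fn main() {
        disseminate(b"block", &FastServer, &GossipNet, Duration::from_millis(500));
    }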

Even in our initial testnet implementation, HotShot already demonstrates the scalability benefits of optimistic responsiveness. As we extend HotShot, leveraging SNARKs, verifiable information dispersal (VID), and other techniques, it will be able to sustain high throughput even under pessimistic conditions when the only available communication channel is a lower-throughput gossip protocol. Read more about our first testnet, Americano, and our future plans here.

Article
Espresso Systems and EigenLayer Announce Ecosystem Partnership

The organizations will collaborate to bring restaking to the Espresso Sequencer network.

We are excited to announce that we will collaborate with EigenLayer to leverage restaking on the Espresso Sequencer network. The partnership signals both organizations’ intention to implement restaking on future Espresso Sequencer testnets, and ultimately bring restaking to mainnet.

In designing the Espresso Sequencer, we want to ensure that our architecture correctly balances decentralization, security, and incentive alignment. Enabling restaking on the Espresso Sequencer will align the network closely with Ethereum and more quickly bootstrap strong economic security through EigenLayer’s pooled security model.

EigenLayer is a protocol that offers restaking, a technique that enables users to stake their Ether across multiple protocols, extending economic security beyond the beacon chain. Through EigenLayer, the Espresso Sequencer will gain access to Ethereum’s staked capital base and decentralized validator set, optimizing node usage and enhancing capital efficiency.

Launching a decentralized proof-of-stake consensus protocol is an intensive process. Achieving a meaningful level of economic security requires a high amount of capital, and acquiring a sufficient number of network participants to run a consensus protocol can also be a challenge. Restaking mitigates these problems by allowing the Espresso Sequencer nodes to restake Ether, which further backs the protocol with Ethereum’s high level of security and decentralization. Restaking is also a perfect match for HotShot, the Espresso Sequencer’s underlying consensus protocol, which scales to thousands of nodes with optimistic responsiveness.

As we’ve written before, we view restaking as an important method for aligning incentives between L1 validators and the L2 ecosystems they increasingly underpin. With a centralized sequencer, nearly all of the rollup value (e.g., fees, MEV) is likely to be captured by the sequencer. If none (or relatively little) of the value generated by a rollup is captured by the L1 validators, then a noteworthy concern is that this will destabilize the security of the rollup, for the simple reason that the L1 validators can be bribed to fork the rollup smart contract state and, in doing so, profit more than they would by managing the rollup contract honestly. Decentralizing the sequencer, and empowering L1 validators to participate in its operation, mitigates this concern.
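
One way to make this intuition precise is with a stylized model of our own (illustrative, not a measurement of any chain’s parameters). Suppose the rollup generates value V per epoch, L1 validators capture a fraction f of it, and validators discount future income by a factor \delta < 1 per epoch. Bribing the validators to fork the rollup state is then irrational only when any feasible one-off bribe B is smaller than the present value of honest participation:

    B \;<\; \sum_{t \ge 1} \delta^{t} f V \;=\; \frac{\delta}{1-\delta} f V

If f is near zero, the right-hand side vanishes and even a small bribe becomes profitable; letting L1 validators share in sequencing revenue raises f, and with it the cost of attacking the rollup.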

We’ll work through different design choices with EigenLayer with the goal of bringing restaking to the Espresso Sequencer network. We will be regularly sharing updates around this work and recommend that you follow us on our social channels to stay up-to-date on our progress.

About EigenLayer: EigenLayer is a protocol that introduces restaking, revolutionizing the way stakers can secure and participate in multiple protocols within the Ethereum ecosystem and beyond. EigenLayer, led by EigenLabs, has garnered significant support, raising approximately $65 million in funding. Backed by notable investors including a16z, Blockchain Capital, Polychain Capital, and Coinbase Ventures, EigenLayer is at the forefront of leveraging Ethereum’s staked capital base to empower developers, validators, and stakers. Through EigenLayer, participants can optimize capital efficiency, enhance network security, and unlock groundbreaking possibilities in diverse blockchain ecosystems.

About Espresso Systems: Espresso Systems are the developers of the Espresso Sequencer, supporting the decentralization, scaling, and interoperability of rollups in the Ethereum ecosystem and beyond. Espresso Systems has raised over $30mm from backers like Electric Capital, Greylock Partners, Sequoia Capital, and Polychain Capital.

Article
Sequencer Decentralization and Liveness

Sometimes blockchains (and by extension L2s) “go down.” It’s not great, but nobody’s perfect. When it happens, it’s worth reminding ourselves what we are talking about, why we care, and what we can do about it.

Decentralized blockchains have two key properties: “safety” and “liveness.” Safety ensures that everyone agrees on the state of the blockchain, e.g. which transactions have been confirmed. Liveness, on the other hand, ensures that the system actually processes transactions, i.e. is online. Loss of liveness is not only a nuisance but can also pose security issues. Financial transactions, such as DeFi transactions, depend on being processed in a timely manner, especially for trades and transactions that involve multiple networks.

The liveness of any system hangs on its weakest link. For a rollup system to be live, the rollup prover, the smart contract, and the sequencer all have to be live and functioning. If even one of them goes offline, the entire system goes offline. If we use a single rollup prover and a centralized sequencer, then either can go offline due to a simple hardware failure or a software bug.

Decentralization can mitigate many of the conditions that lead to liveness loss, but it’s worth noting that no system, decentralized or otherwise, can guarantee 100% uptime. The risk of a centralized system (or single server) is that it is a single point of failure: if the single server (or datacenter) goes down, the whole system stops. A decentralized system involving many diverse nodes (in terms of hardware, software, location, etc.) mitigates this specific risk vector: indeed, assuming uncorrelated failures, it is significantly less likely that one-third of an entire network of nodes goes down simultaneously than that any individual node does. On the other hand, decentralized systems introduce new complexities that are not present in centralized systems, such as dependence on network communication. These complex dependencies create new, incomparable risk vectors that can also lead to temporary loss of liveness.
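
To put rough numbers on both effects, here is an illustrative sketch assuming independent failures; the availability figures are hypothetical.

    // Serial system: every component must be up (the weakest-link effect).
    fn serial_availability(components: &[f64]) -> f64 {
        components.iter().product()
    }

    // Probability that at least `k` of `n` nodes are down at once, with each
    // node independently down with probability `p` (a binomial tail).
    fn prob_at_least_k_down(n: u64, k: u64, p: f64) -> f64 {
        (k..=n)
            .map(|i| binom(n, i) * p.powi(i as i32) * (1.0 - p).powi((n - i) as i32))
            .sum()
    }

    fn binom(n: u64, k: u64) -> f64 {
        (1..=k).map(|i| (n - k + i) as f64 / i as f64).product()
    }

    fn main() {
        // Three 99.9%-available components yield only ~99.7% for the system.
        println!("{:.4}", serial_availability(&[0.999, 0.999, 0.999]));
        // With 100 nodes each down 1% of the time, the chance that a third
        // (34+) are down simultaneously is astronomically small.
        println!("{:e}", prob_at_least_k_down(100, 34, 0.01));
    }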

There are several types of consensus protocols, each relying on different network assumptions for liveness. We will not go through an exhaustive list of all the options, but will highlight a few examples.

Dynamically-available (or longest-chain) consensus protocols like Bitcoin or Ethereum can remain live even if 90% of nodes are offline, as long as more than half of the online nodes are correct. However, to preserve both safety and liveness, these protocols rely on synchrony: all messages between online nodes in the network need to be delivered within a known amount of time that is hardcoded into the protocol. The dynamic availability guarantee also comes at a performance cost, as these protocols have minutes-long latency even if all nodes are online (you need to wait for multiple blocks to confirm a transaction).

In contrast to dynamically-available protocols are optimistically responsive consensus protocols. All optimistically responsive protocols will go down if one-third or more of the nodes fail simultaneously, as compared to the higher threshold tolerated by dynamically-available protocols. On the other hand, there are responsive protocols that make weaker assumptions about the network (remaining safe under partial synchrony or even asynchrony). Nonetheless, these protocols may still lose liveness when the network behaves asynchronously. Some protocols are only guaranteed to regain liveness after a global stabilization time when all messages are delivered within a known (hardcoded) delay parameter, while others regain liveness eventually, once messages are delivered within some finite time bound. Most importantly, optimistically responsive protocols do not incur minutes-long delays in the best case when nodes are online and well-connected: they can operate as fast as the network will allow.
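
The standard bounds at play can be sketched quickly (illustrative only; the scenario where just 10% of nodes are online is a hypothetical):

    // For n = 3f + 1 nodes, a responsive BFT protocol tolerates at most
    // f simultaneous failures; a dynamically-available protocol instead
    // needs only a majority of the currently online nodes to be correct.
    fn max_faults_responsive(n: u64) -> u64 {
        (n - 1) / 3
    }

    fn min_honest_online_dynamic(online: u64) -> u64 {
        online / 2 + 1
    }

    fn main() {
        for n in [4u64, 100, 1_000] {
            let online = n / 10; // suppose only 10% of nodes are online
            println!(
                "n = {}: responsive tolerates {} down; dynamic needs {} honest of {} online",
                n,
                max_faults_responsive(n),
                min_honest_online_dynamic(online),
                online
            );
        }
    }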

As such, there is a spectrum of options available to developers depending on what they care about. For greater robustness to nodes going down, but the slowest experience, a dynamically-available protocol is perhaps the wisest choice (assuming synchronous network communication). For a balance of speed and security, optimistic responsiveness may be a better choice. The degree to which safety and liveness depend on network synchrony is a third dimension: dynamically-available protocols are generally unsafe in asynchronous networks (though mitigations through “finality gadget” hybrid solutions like Gasper exist), while there exist responsive protocols that are safe under weaker synchrony assumptions. Finally, if you only care about speed (or speed to market!), then a centralized system might be the best option.

When it comes to decentralizing sequencers, it is worth considering how these options will interact with the underlying L1 protocol. For this reason, the Espresso Sequencer prioritizes responsiveness under optimistic conditions, complementing Ethereum’s Gasper protocol.

There’s no silver bullet to achieve 100% uptime for a blockchain system. And the decisions that L1 and L2 protocols make in decentralizing are complicated by the range of usability tradeoffs they take on. If you are working on a zkEVM, optimistic rollup, or rollup-as-a-service platform and you are interested in leveraging the Espresso Sequencer to decentralize your layer-2 scaling solution, please reach out to us here.

You can learn more about the Espresso Sequencer by visiting our documentation here, which we will be continuing to update in the upcoming weeks.

There is a lot left for us to research and build and we welcome conversation, feedback, and collaboration from the community. You’ll find us discussing all this and more in the replies of Twitter threads and in the comments of forums like ETHResearch. Please join us in our Discord and on Twitter to chat with us about the proposals we have laid out here and the systems we have begun to build.

Article
Espresso Systems: tools & infrastructure for Ethereum & beyond

We started out two years ago with the mission of making blockchains useful for the mainstream. To us, that meant providing better options for builders and users when it comes to (1) their on-chain privacy and (2) the scale and performance they can expect, without sacrificing the credible neutrality of the infrastructure.

Espresso Systems began with the idea of building a highly scalable, privacy-focused, decentralized L1 protocol. As we have built, we have become convinced that the world does not need us to build another L1. Rather, as of today, we are best positioned to achieve our mission by building for and within the Ethereum ecosystem. This insight has led us to take on two separate efforts, tackling privacy and performance respectively.

In service of providing better privacy options, we have developed the CAPE application, our smart contract system that enables custom configurations of transaction-level privacy. CAPE can be deployed on any EVM chain. We are in the process of bringing it to mainnet.

When it comes to performance, rollup projects across the Ethereum ecosystem are now solving the scaling problem every day. Where we come in is helping them avoid trade-offs. We support rollups in decentralizing their sequencer components to deliver on credible neutrality, security, reliability, and interoperability without sacrificing performance.

As a team, we are also glad to build, maintain and contribute to open source developer tools through our Jellyfish cryptography library, as well as share research breakthroughs on privacy and scaling for the benefit of the industry, as we have done with VERI-ZEXE and Hyperplonk.

Espresso Systems is building the tools and infrastructure to provide more safe, open, and performant options for interacting on-chain, starting with issues of privacy and performance in the Ethereum ecosystem.

Look out for updates on CAPE and our privacy efforts soon.

In the meantime, you can learn more today about our efforts on decentralizing rollups. In November, we shared our work on the Espresso Sequencer: a decentralized platform to support rollups in decentralizing. Now, we are sharing more about our goals, design principles, and how the Espresso Sequencer fits into the landscape of L2 components and architectures.

Read more in our post here: https://hackmd.io/@EspressoSystems/EspressoSequencer

And in our docs here: https://docs.espressosys.com/

There is a lot left for us to research and build and we welcome conversation, feedback, and collaboration from the community. You’ll find us discussing all this and more in the replies of Twitter threads and in the comments of forums like ETHResearch. Please join us in our Discord and on Twitter to chat with us about the proposals we have laid out here and the systems we have begun to build.

Article
Open Sourcing HyperPlonk

Tue, January 31, 2023

Today we are happy to announce that we are open-sourcing our implementation of HyperPlonk under the MIT license. We are collaborating with the Ethereum Foundation on integrating HyperPlonk with the popular Halo2 library and frontend. We hope that this will benefit the community and that HyperPlonk can become a vital component for zkEVMs and other rollup solutions.

We are also pleased to share that the HyperPlonk paper has been accepted by the EUROCRYPT conference.

Our work on HyperPlonk complements our efforts in building a decentralized sequencer (read more about it here, here, and here). The Espresso Sequencer can sequence transactions for arbitrary rollups, but we are optimistic that HyperPlonk-based rollups will be among the fastest and will take full advantage of the Espresso Sequencer’s speed.

HyperPlonk is a zero-knowledge proof system that is designed for large circuits as it removes the requirement for FFTs (fast Fourier transforms) and is more parallel than other proof systems. It also enables high-degree custom gates, which are important for designing efficient proofs for complex circuits.

HyperPlonk is specifically designed for super-large circuits, but even starting from about 16,000 gates, HyperPlonk outperforms our state-of-the-art Jellyfish Plonk implementation in both single- and multi-threaded modes.

Implementation details and polynomial commitments

Our implementation is written in Rust and uses the arkworks library as the backend. This makes the implementation very flexible, as it can be used with any elliptic curve. Currently, the implementation uses the multilinear version of the KZG polynomial commitment (first described here). Our implementation is very modular and can support other polynomial commitment schemes (such as the Bulletproofs IPA or FRI-based ones). In fact, in the paper, we introduce Orion+, a super-fast polynomial commitment scheme that is optimized for very large circuits. It is based on Orion, which is the PCS with the fastest-known prover time, but unlike Orion, Orion+ has very small proofs (about 6 KB).

We also measured the different components of our system and can see that the sumcheck MLE operations currently dominate. There are further optimizations regarding this component that we haven’t implemented yet.
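
For readers unfamiliar with the workload, the core object is a multilinear extension (MLE): a polynomial determined by its values on the Boolean hypercube. The toy sketch below shows the kind of evaluation the sumcheck prover performs repeatedly; it uses f64 arithmetic purely for illustration, whereas a real prover works over a finite field and folds one variable per round.

    // Evaluate a multilinear polynomial, given its values on {0,1}^n
    // (`evals`, of length 2^n), at an arbitrary point.
    fn mle_eval(evals: &[f64], point: &[f64]) -> f64 {
        let mut table = evals.to_vec();
        for &x in point {
            // Fold one variable: interpolate between the two entries of
            // each pair (the core operation a sumcheck prover repeats).
            table = table
                .chunks(2)
                .map(|pair| pair[0] * (1.0 - x) + pair[1] * x)
                .collect();
        }
        table[0]
    }

    // On the hypercube itself, the MLE agrees with its evaluation table,
    // so summing over {0,1}^n is just summing the table.
    fn sum_over_hypercube(evals: &[f64]) -> f64 {
        evals.iter().sum()
    }

    fn main() {
        let evals = [1.0, 2.0, 3.0, 4.0]; // values on {0,1}^2
        assert_eq!(sum_over_hypercube(&evals), 10.0);
        assert_eq!(mle_eval(&evals, &[0.5, 0.5]), 2.5); // the average
        println!("ok");
    }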

Supporting frontends and future work

Currently, our HyperPlonk implementation requires that you directly encode the gates using selector and wiring polynomials. There are multiple frontends, such as our Jellyfish frontend, the Halo2 frontend, or Circom, that could make this process significantly easier. We are glad that open-sourcing this work will support the Ethereum Foundation’s Privacy and Scaling Explorations group in their effort to integrate Halo2 and HyperPlonk.

Additionally, there are significant optimizations, such as further batching and faster sumcheck implementations, that we haven’t implemented yet. These will make HyperPlonk even faster. We also recently wrote a post on how to run a variant of sumcheck that is particularly optimized for custom hardware and alleviates the concerns about hardware-friendliness raised in a different blog post.

Article
Decentralizing Rollups: Announcing the Espresso Sequencer

Mon, November 28, 2022

At Espresso Systems, we are building the tools and infrastructure to bring Web3 applications mainstream, taking on challenges from privacy to performance. Over the last year, we have been excited to witness and contribute to the development of layer-2 (L2) rollups that promise to bring higher throughput and lower fees to Ethereum. Today we are introducing our plans to further Ethereum scaling efforts and we are unveiling our first milestone in this direction. For the last several months, we have been developing the Espresso Sequencer, a system designed to decentralize rollups without compromising the scale and speed users require. Web2 performance with Web3 security.

Rollups consist of several distinct system components: a virtual machine (VM), a sequencer, a proving system (for zk-VMs), and a rollup contract on the L1 (e.g., Ethereum). The sequencer component is responsible for ordering submitted transactions (i.e., instructions) to the VM, while the proving system executes these transactions and generates a proof of the resulting VM state transition. The rollup contract ultimately registers the state transition and verifies the proof.
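
As a mental model, the division of labor among these components can be sketched as a few interfaces. The names and types below are our own illustration, not any rollup’s actual API.

    #![allow(dead_code)]

    struct Tx(Vec<u8>);
    struct Proof(Vec<u8>);
    type StateCommitment = [u8; 32];

    trait Sequencer {
        // Orders submitted transactions; never executes them.
        fn order(&mut self, pending: Vec<Tx>) -> Vec<Tx>;
    }

    trait Prover {
        // Executes the ordered transactions on the VM and proves the
        // resulting state transition (for zk-VMs).
        fn execute_and_prove(&mut self, ordered: &[Tx]) -> (StateCommitment, Proof);
    }

    trait RollupContract {
        // On the L1: accepts a new state commitment iff the proof
        // verifies against the previously registered commitment.
        fn register(&mut self, new_state: StateCommitment, proof: Proof) -> bool;
    }

    fn main() {}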

An external sequencer is not always necessary. Instead, the contract itself could also be utilized for ordering transactions. The benefit here is that users only need to trust the L1 for liveness. However, in this case, the rollup system would only alleviate computational bottlenecks of the L1. Its throughput would still be limited by the data sequencing rate of the L1. Furthermore, users would experience the same transaction confirmation delays as on the L1.

Introducing an external sequencer promises higher throughput and faster confirmation of transactions. In this scenario, users can choose to either trust the sequencer for finality or wait longer for ultimate confirmation from the L1, perhaps depending on their risk tolerance for a given transaction (e.g., selling a $1 coffee versus a $1 million home).

Separately, the amount of data the L1 contract processes and stores can also be reduced by registering only a cryptographic commitment to the transaction log and state. The rollup proof attests to the correctness of this commitment, while an additional rollup system component is relied upon for the availability of the committed data.
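
For intuition, a commitment can be as simple as a hash chain over the transaction log. The toy sketch below uses Rust’s non-cryptographic DefaultHasher as a stand-in for a real collision-resistant hash; production systems use Merkle trees or similar structures so that individual entries can be opened efficiently.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Toy commitment to a transaction log: a running hash chain.
    fn commit(log: &[&[u8]]) -> u64 {
        let mut acc: u64 = 0;
        for tx in log {
            let mut h = DefaultHasher::new();
            acc.hash(&mut h); // chain in the commitment so far
            tx.hash(&mut h);  // then absorb the next transaction
            acc = h.finish();
        }
        acc
    }

    fn main() {
        let log: Vec<&[u8]> = vec![&b"tx1"[..], &b"tx2"[..], &b"tx3"[..]];
        // The L1 contract stores only this short value; a separate
        // component is relied upon for availability of the full log.
        println!("commitment = {:x}", commit(&log));
    }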

While beneficial for performance, the introduction of external sequencer and data availability components is precisely where rollups lose their decentralization. The challenge lies in designing these components to provide fast finality and high throughput while maintaining decentralization.

The Espresso Sequencer supports the decentralization of L2s. It handles the decentralized sequencing and data availability of rollup transactions, functioning as middleware between rollups and their underlying layer-1 (L1) platforms. The Espresso Sequencer is designed as a platform upon which any zk-VM or optimistic VM can be deployed. Ultimately, Espresso may also serve as an interoperability layer by replicating zk-VMs and optimistic VMs to multiple L1s simultaneously.

We share more about the current designs and implementations of the Espresso Sequencer in our posts about HotShot, the Espresso consensus protocol, and about Americano, the first Espresso testnet. Read on, check out the code we’ve shared, and find us for more on Twitter and Discord.

Article
Configurable Privacy Case Study: Partitioned Privacy Pools

Sun, September 11, 2022

By: Ben Fisch

In the wake of the US Treasury department sanctioning Tornado Cash (TC), people (including myself) have resurfaced the idea that users of privacy protocols like TC can use zero-knowledge proofs to demonstrate that their withdrawals are not transactions from sanctioned entities. The goal is to provide an alternative way to implement sanctions without entirely banning privacy protocols like TC or requiring an institution to have full transparency. This idea has been floating around for some time in the blockchain community, and has recently reentered the spotlight (e.g., here and here at minute 35). In fact, a similar application of zero-knowledge proofs was discussed as early as 2013 in the context of Tor. Here’s my take on this idea, some practical challenges, and potential solutions. This post does not express any opinion on sanctions; rather, it is a proposal for how Treasury could achieve the same stated goal in a less invasive way. While applicable to risk management more broadly, this post focuses on the SDN List example for the sake of concreteness.

What is the SDN list? The Specially Designated Nationals List of the US Treasury’s Office of Foreign Assets Control (OFAC) contains sanctioned entities, their operating companies, bank accounts, or cryptocurrency addresses, with which anyone subject to US law is prohibited from transacting.

How dynamic is the SDN list? Couldn’t an illicit actor evade sanctions by simply moving funds to a new freshly generated cryptocurrency address? Entities and persons with sufficient ties to the U.S. jurisdiction are expected to take reasonable measures to avoid transacting with, or providing goods or services to, sanctioned entities, even if it is a related bank account, entity, property, or cryptocurrency address that does not explicitly appear on the SDN List. Since blockchains are public ledgers, it is easy to see the movement of funds from an SDN address to a new one in real time and thus compliant institutions (e.g., exchanges) can easily block these as well. It is more difficult to trace the origin of funds after several hops, but several blockchain analytics companies offer this more complex risk analysis via software-as-a-service to institutions.

How does TC work (approximately, at a high level)? TC can be used to move funds from one public Ethereum address to another while hiding the link. Specifically, it hides the links between deposits into and withdrawals from the TC pool. Depositing a certain amount into the TC pool creates an on-chain digital receipt, and the depositor retains a secret key needed to use this receipt later. A user withdraws a certain amount from the pool by presenting a zero-knowledge proof that it knows the secret key of an unused receipt for this exact amount, and a keyed hash of the receipt called a “nullifier”, which still hides the receipt but prevents it from being used twice.
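
In code-level terms, the deposit/withdraw flow looks roughly like the following. This is a heavy simplification of our own: the real protocol uses zk-friendly cryptographic hashes and a Merkle tree of receipts, and the hash here is a non-cryptographic stand-in.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Non-cryptographic stand-in for the real zk-friendly hash.
    fn h(parts: &[&[u8]]) -> u64 {
        let mut hasher = DefaultHasher::new();
        for p in parts {
            p.hash(&mut hasher);
        }
        hasher.finish()
    }

    fn main() {
        let secret: &[u8] = b"depositor secret";
        let nullifier_key: &[u8] = b"nullifier key";

        // Deposit: publish a receipt (commitment) on-chain; keep the keys.
        let receipt = h(&[secret, nullifier_key]);

        // Withdraw: publish the nullifier plus a zero-knowledge proof of
        // knowing (secret, nullifier_key) for SOME unused receipt in the
        // pool, without revealing which one. The contract records the
        // nullifier so the same receipt cannot be spent twice.
        let nullifier = h(&[nullifier_key]);
        println!("receipt = {receipt:x}, nullifier = {nullifier:x}");
    }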

Why might TC impact SDN compliance? According to Treasury’s press release, TC “facilitates anonymous transactions by obfuscating their origin, destination, and counterparties, with no attempt to determine their origin.” An institution can see that an Ethereum address received funds withdrawn from TC, but it could not easily tell whether those funds were originally deposited into TC from a sanctioned address, or from a high-risk address that traces to a sanctioned address upstream. As a result, in practice, many institutions began labeling everything coming from TC as higher risk, even before OFAC officially designated the TC pool address. At the risk of stating the obvious, while TC may be seen by compliant institutions as creating provenance risk, it provides little incremental benefit to risky actors who cash out at non-compliant exchanges that are not screening in the first place.

How can selective disclosure help? Just as withdrawals from TC require zero knowledge proofs about the deposit receipt to which they are uniquely linked, a diligent institution can require a prospective customer with funds withdrawn from TC to disclose more information about this deposit receipt, such as non-membership on the SDN list or even a risk score upper bound of the origin address. This can be done via a zero-knowledge proof if there is a published list of risk scores for all addresses that ever deposited into TC. Chainalysis and TRM Labs both already provide on-chain oracles for the wallet addresses that have been published on the SDN List. For maximum efficacy, the present proposal requires the more complex risk analysis of related (e.g., co-spending) addresses to be published on-chain as well.
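
Informally, the statement proven in zero knowledge could take the shape of the following relation (our own notation, a sketch rather than any deployed circuit). Here root is a public commitment to the published per-address risk scores, k is the disclosed risk bound, and the deposit receipt d, its membership path \pi, the origin address a, and its score s all remain hidden:

    \mathcal{R} \;=\; \bigl\{\, (\mathsf{root},\, k\,;\; d,\, \pi,\, a,\, s) \;:\; \mathsf{MerkleVerify}\bigl(\mathsf{root},\, (a, s),\, \pi\bigr) \,\wedge\, \mathsf{origin}(d) = a \,\wedge\, s \le k \,\bigr\}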

Should this disclosure be on-chain? In the case that a customer holds funds labeled as high-risk because they trace to an upstream TC withdrawal that the customer is not responsible for, they may be unable to prove an upper bound on the risk score. Automating the risk score disclosure of TC withdrawals through a designated smart contract that verifies the proofs helps avoid this situation. Users have an incentive to automatically disclose that their withdrawn assets have low risk as it positively impacts the liquidity of these withdrawn assets.

How will this disclosure impact privacy? Put simply, it partitions the anonymity set of withdrawals into high-risk and low-risk association with the SDN. There is some dependency of privacy on behavioral conventions. Users who prove their withdrawals are low-risk are clearly anonymous among the set of other users who do the same, but even if there is only one user who opts into revealing their risk level they are not necessarily revealing their identity (unless it is already known who this one person is). This becomes problematic for privacy if, for example, there are multiple risk scoring conventions or even multiple SDN lists that are each correlated with a particular group of users (e.g., a jurisdiction). In this case, a user’s decision of which SDN list to reference in their proof reveals their association with a particular user group. This is perhaps the greatest pragmatic challenge of this proposal.

Would this corner the illicit actors into a smaller anonymity set? Not necessarily, but I argue that it does not matter for SDN compliance. If there is a universal SDN list and risk scoring convention, and all but a small fraction of users in the low-risk category with respect to this list opt into disclosing their risk level, then indeed the users in the high-risk category will be squeezed into one anonymity set. However, consider a scenario in which there are two SDN lists A and B and half the users elect to prove non-association with A while the other half use B. Any actor who is only included on list A can still prove non-membership on list B and thus belongs to an anonymity set comprising half of all users. However, they would still be unable to pass compliance screening at an exchange that requires non-association with list A. What matters in practice is not the theoretical size of the anonymity set but the fact that institutions can set their own screening requirements, and this in turn will drive what users elect to selectively disclose.

Could illicit actors move funds quickly to evade this? Someone might quickly move funds from an SDN-listed address A to a fresh address B, deposit into a TC pool, and immediately withdraw to address C before the oracle has time to assign high risk to B, enabling them to prove a low-risk score for C. There are two potential solutions. The first is to simply regard all fresh addresses as high-risk until otherwise assigned. Put another way, if a deposit source address has not yet received a risk score from the on-chain oracle then the user withdrawing will be unable to produce a zero-knowledge proof of any score, and thus the withdrawal would be de facto labeled as high-risk. Alternatively, the zero-knowledge proof of a low-risk score verified by the smart contract could also prove that the referenced deposit receipt is at least some sufficiently large number of blocks old. This has minimal impact on anonymity and usability. For maximum privacy, due to the deanonymizing capabilities of pattern analysis, a user should anyhow wait a randomly distributed amount of time before withdrawing from the pool.

In summary, this approach would enable users withdrawing from TC to preserve liquidity by facilitating downstream compliance screening, while otherwise remaining anonymous within the pool of other transactions that have low-risk association with the SDN List. The zero-knowledge scoring also maintains the ability of end points to conduct risk management. It is just as effective in stopping sanctioned wallet addresses, but without the same collateral impact on licit users as a blanket sanction on everything that engages the TC smart contracts.

Espresso Systems, which launched a product called CAPE (Configurable Asset Privacy for Ethereum) earlier this year, is creating an ecosystem for blockchain applications with flexible privacy. Whether you are a developer, VASP, regulatory agency, analytics service, or general user of blockchain privacy tools, please get in touch if you are interested in learning more about how we can meet the needs of both privacy and risk management or exploring collaboration. Follow us on Twitter or join us on Discord for more.

Article
User Education and Consent in Decentralized Applications

Sun, August 14, 2022

By: Jill Gunter

Last week, several key players in the cryptocurrency and Web3 ecosystem halted assets or stopped providing services to certain wallets in an effort to ensure compliance with new sanctions against the privacy-preserving smart contract application, Tornado Cash. Circle (the creator of major stablecoin USDC) froze over $75,000 of the asset linked to sanctioned addresses associated with the mixer, Tornado Cash. DeFi applications including Aave and reportedly others like Uniswap and Balancer blocked addresses that had recently received funds from the Tornado Cash smart contract.

Based on the backlash that ensued across social media, this was the first time many crypto users and enthusiasts realized that some assets can be frozen and addresses could be blocked. On one hand, it is understandable how all the crypto-talk of decentralization and censorship-resistance might leave people confused by this move. On the other hand, this was not the first time these types of actions were taken by the companies around these products. Circle has been transparent in both its marketing and its code about its freezing functionality and has even previously used its ability to freeze its USDC product. Tether does the same. Most decentralized finance protocols have long leveraged blockchain analytics firms like TRM to block sanctioned addresses.

The scale and widespread implications of these actions, however, do mark an important moment to reflect on the issues of user consent and education. Crypto as an industry touts values like decentralization, censorship-resistance, privacy, and security that are not uniformly delivered on across products.

This is an issue we have thought a lot about in the development of our first product, CAPE (Configurable Asset Privacy for Ethereum).

CAPE empowers asset creators like stablecoin providers or NFT artists to design policies (or custom rulesets) for the assets they offer users. Some of these policies can include:

- Making the asset private to the public, while keeping transaction details visible to the asset creator

- Creating NFTs with hidden properties that can be selectively revealed

- Keeping transaction details private if the amount transacted is below a certain threshold (for example, below 3,000 USD-equivalent, in accordance with travel rule guidance)

- Reserving the right to freeze assets they created

- Delegating other parties to be able to view transaction details or freeze assets

We have designed the system with flexibility and pragmatism in mind for average users with average needs around privacy. The product is not designed to be suitable for those in extreme situations with extreme privacy needs. For this reason, we also expect the product will be rejected by those with strong ideologies. We are happy to co-exist with the people and products who are building for stronger needs around privacy. Instead, we are seeking to enable asset creators to innovate in new ways and to empower users with more choices for how they use, custody, and transact crypto assets.

If we are to empower users with more choices, though, those users need to be educated about what those choices are. This education can happen through content via blog posts (like this one!) or social media engagement, through documentation (here), and through choices made in the design of the user interface. Today, the UI for CAPE that we have developed surfaces information to users about who can see and do things related to the asset.

This is okay, but we know there is a lot of room for improvement in making it clear and obvious to users what they are opting into. Some ideas include: translating view keys and freezing keys to be human readable wherever possible so users know what institutions or individuals are behind them; adding “verified” tick marks to assets that known institutions have created; surfacing information about the privacy pool to users; showing curated lists of the most popular CAPE assets; and more.

There are also inherent challenges in all of this. Many of these designations are subjective (who decides who is verified?) and therefore problematic for any one party to work on. Additionally, CAPE has been designed such that anyone will be able to spin up their own front-end. This makes it impossible to control how much of this information will get surfaced to users through alternative interfaces.

We think the conversation around user education and consent is an important one and we’d love to have it with all of you.

What should the terms and conditions look like for Web3 applications? How can we better educate users about what they are opting into and out of? What do you want to know in the CAPE interfaces? Write to us on Twitter or in Discord to start the conversation and let us know.

Article
VERI-ZEXE: Decentralized Private Computation with Universal Setup

Wed, July 06, 2022

By: Alex Xiong

TL;DR: As part of our effort in exploring private smart contract solutions, we developed a decentralized private computation (DPC) system, VERI-ZEXE, that supports universal setup. VERI-ZEXE improves the state-of-the-art [1] by ~9.0x on transaction generation and ~2.6x on memory usage, and will be used in future versions of CAPE to enable arbitrary user-defined asset policies while maintaining configurable asset privacy.

See ePrint paper here and GitHub here.

ZEXE: background and limitations

Smart contract systems such as Ethereum and Solana are great breeding grounds for Web3 innovations. However, they suffer from a lack of privacy or scalability, and sometimes both. To ensure public verifiability, all program states and state transitions are public and transparent, sacrificing privacy, and all transactions or computations are re-executed by all (full) nodes, limiting scalability.

In 2019, Bowe et al. proposed a scheme called decentralized private computation (DPC) that allows users to execute arbitrary computation off-chain and submit a transaction attesting to the correctness of this computation using zero-knowledge proofs [2]. They implemented a system named ZEXE (zk-execution) that instantiates the DPC scheme to tackle both pain points above. Roughly, ZEXE is a “programmable Zcash”, generalizing from a single-application system to a smart contract system while preserving the privacy guarantees.

Some noteworthy features of ZEXE are:

  • Data & function privacy: ZEXE hides all program states as well as inputs/outputs to function calls (data privacy) and, importantly, which functions/programs are invoked in each transaction (function privacy). Even with numerous follow-up alternative approaches to private smart contracts, ZEXE remains the only concrete construction that achieves function privacy (see related work section in the paper).
  • Programmability: With ZEXE, users can attach arbitrary policies/predicates to a record (similar to the idea of Bitcoin Script) which specify the transition rules for relevant program states. Bowe et al. have shown how to program user-defined assets, DEXes, and regulation-compliant stablecoins under the ZEXE model.
  • Succinct verification: On-chain validators don’t need to re-execute the computation. Instead, they verify the short transaction validity proofs, which takes constant time regardless of how expensive the off-chain computation is.

The existing implementations (SnarkVM testnet 1 and SnarkVM testnet 2) deliver on the features described above but have drawbacks that limit their practicality and leave much room for improvement [3]:

  • Circuit-specific setup: ZEXE (SnarkVM testnet1) uses non-universal SNARKs like GM17 and Groth16 for certifying the correctness of smart contract executions, leading to a trusted setup for each application/program, which is highly impractical.
  • Performance: ZEXE with universal SNARK (SnarkVM testnet2) suffers a significant performance deterioration due to the higher complexity of the universal SNARK verification logic and the fact that ZEXE requires producing a SNARK proof for a statement that encodes the SNARK verification logic.

VERI-ZEXE: universal setup without performance loss

VERI-ZEXE addresses the two foregoing challenges, while preserving all features and properties of the original ZEXE, by introducing many optimizations on reducing the circuit complexity of the universal SNARK verifier gadget. We refer readers to the VERI-ZEXE paper for detailed descriptions of these techniques; in this post we only showcase and interpret our benchmark results.

Firstly, we compare ourselves against the state-of-the-art ZEXE implementations; the most important metrics include transaction generation time (i.e., the Execute algorithm), memory usage, and transaction validity proof size. As shown below, our performance is on par with the original non-universal ZEXE. Compared to the best universal ZEXE implementation, we attain a ~9.0x improvement on transaction generation and a ~2.6x improvement on memory usage.

Comparison of three DPC implementations for 2-input-2-output transaction, run on AMD EPYC 7R13 at 2.65 GHz with 64 cores and 128 GB of RAM.

Secondly, we provide a more elaborate comparison against SnarkVM testnet2. As shown below, our outer circuit (used during depth-2 proof composition) is much smaller, and thus prover time is an order of magnitude faster, which leads to much faster proof generation across all transaction dimensions.

Finally, to illustrate our innovation in TurboPlonk and UltraPlonk constraint system design, and our optimization techniques, we provide a breakdown of constraint cost for cryptographic building blocks used in VERI-ZEXE. We shared these highlights on gadget complexity when we introduced our Jellyfish library earlier this year.

Number of PLONK constraints for major cryptographic building blocks and algebraic operations. These numbers are specific to the TurboPlonk design.

VERI-ZEXE + CAPE: customizable asset policy

While ZEXE is not a silver bullet as a private smart contract design (due to the additional problems mentioned in [3]), it is a great solution for applications with minimal shared state among users, which enjoy high parallelism and low transaction-dependency under the UTXO model. The CAPE product is a great example of how this might look to users.

Currently our CAPE protocol only supports a predefined set of asset policies (e.g., enabling minting, anonymous transfers, freezing keys, designated viewing policies etc.), but not dynamic, user-defined asset policies. Imagine you want to create a new “concert token” that has a total supply of 1000 units and can only be minted if a specific asset was paid to a designated address. This new minting policy associated with our new concert token goes beyond what the current CAPE product can enforce. It’s a common requirement and desirable feature for asset issuers to be able to design customized, more complicated policies for their future assets. VERI-ZEXE will make such programmability possible.

“VERI-ZEXE can serve as a substrate for private smart contracts. Compared to prior work, it supports a universal setup and a ~9x speed up on transaction generation as well as ~2.6x saving on memory usage, pushing applications like private stablecoins with arbitrary, customizable asset policies within the realm of practicality,” says Alex Xiong, first author.

Binyi Chen, Chief Cryptographer at Espresso Systems, added: “The techniques underlying VERI-ZEXE are not limited to enabling privacy, but also help with scalability. For example, the same techniques can be used to optimize the performance of so-called zk-zk-rollups, where rollup is applied to zero-knowledge (private) transaction formats such as CAP or Zcash.”

We will release more about our designs for advancing CAPE in future posts, and are extremely excited about what these groundbreaking innovations will bring to Espresso. Stay tuned!

Footnotes

[1]: SnarkVM by Aleo has many iterations: their testnet1 version is a reasonably faithful realization of the ZEXE described in the S&P19 paper, which requires application-specific trusted setups; an early version of testnet2 swaps the proof system used to generate proofs for birth/death predicates from GM17 to Marlin, making the system “universal”; later versions of testnet2 and future testnet versions shift away from the original DPC model, simplifying it but restricting its programmability (e.g., disallowing inter-program calls). The new, simplified yet restricted DPC model as of testnet3 is outside the scope of this blog, and our code is measured against testnet1 and the earlier version of testnet2. We made extra modifications to their code to enable a fair comparison during benchmarking; see more details in Section 4.2 of the paper.

[2]: The general flow of computing off-chain and verifying a computation integrity proof on-chain may sound similar to a zk-rollup. Users in ZEXE generate the transaction validity proof and send transactions directly on-chain, whereas users in a zk-rollup send transactions to rollup validators, who then generate the zk proof to be sent on-chain. Technically, ZEXE does not care how the off-chain computation is done, whether by a single user, a multi-user MPC protocol, or in a rollup-server fashion; we therefore deem the two approaches complementary.

[3]: There are many other challenges with ZEXE, including concurrency issues (when multiple users try to transition a shared state to a new state simultaneously), atomic composability (achieving a Flashloan-like application in ZEXE would require a technique we call “in-bundle accumulation”, which slightly modifies ZEXE’s record nano-kernel (RNK)), and the difficulty of predicate programming in the UTXO model (for which existing solutions such as Chialisp exist). We will try to introduce our approaches to some of these problems in future blog posts.

Article
Start Using CAPE on Goerli Testnet

Wed, June 15, 2022

Today we are excited to release the wallet and graphical user interface for Configurable Asset Privacy for Ethereum (CAPE). The product is currently live on Ethereum’s Goerli testnet for users to start building and experimenting with.

You can get started here.

Configurable Asset Privacy offers asset creators the ability to configure fine-grained rules around their assets. For instance, an asset creator can define a viewing policy which specifies exactly who can see transaction data for the assets they issue. Configurable Asset Privacy can be rolled out on any EVM-compatible chain, and will be a native feature-set of the Espresso layer 1 blockchain where users will also be able to benefit from a higher-throughput, lower-fee environment. While the Espresso L1 is in development, we are thrilled to offer Configurable Asset Privacy for Ethereum.

CAPE runs as a smart contract application on Ethereum and can be used to enhance existing Ethereum assets with greater privacy guarantees, allowing users to create wrappers with custom privacy and transparency policies into which any individual can wrap their ERC-20s or (soon) their ERC-721s. CAPE can also be used as an easy interface to develop brand new Ethereum assets with previously-impossible privacy features.

What can I do with CAPE?

CAPE solves a long-standing problem in Web3: the vast majority of transactions are fully exposed to anyone who cares to look. In Web3, if you wanted to, you could find out the net worth of your colleagues (also making them a target for criminals), how much your friend is making as a freelancer, what DeFi strategies different funds are deploying, what your partner’s taste in NFTs was 5 years ago, and the amount of money I have lost trading. This involuntary exposure removes even the most basic assumptions of privacy, and is therefore unsuitable for the vast majority of real use cases.

These dynamics have prevented Web3 products from being adopted by enormous cohorts of potential users. As an example, consider how limited the utility of something like a stablecoin is for a major business if it entails revealing real-time data about their operations, sales, and profitability to competitors. For the next set of applications and for the next groups of users of crypto, it will be necessary to empower developers with the ability to determine the levels of privacy and transparency of transactions. Espresso Systems is building for that future and CAPE is a preview of what that looks like.

Using CAPE, a stablecoin provider can enable users to transact privately from the public, even while the issuer maintains real-time insights into transactions with the asset. A DAO can move their treasury into CAPE to limit the transparency of its balance sheet, trades, transfers, and distributions to only the admins of the DAO. A freelancer who earns in ETH can receive funds through CAPE so her income is not broadcast to the world. An artist can use CAPE to create NFTs that users can buy, hold, sell — revealing their actions only to the artist who is receiving royalties from the trades and not to the general public.

With CAPE, we are opening the design space of Web3 to include the possibilities of data privacy. You can interact with the full functionality of CAPE on testnet through the command-line interface or you can engage with the core product functionality through the front-end GUI that we have created.

We can’t wait to hear from you about what kind of assets you create and wrap using CAPE, what kind of privacy policies you care about, and what other features and functionality you would like to see supported.

What is CAPE?

CAPE is an implementation of Espresso Systems’ Configurable Asset Privacy Protocol on Ethereum. Configurable Asset Privacy is a protocol enabling asset creators to issue private digital assets while designating parties that can see specified data regarding ownership and transactions. Using CAPE, a digital asset creator like a stablecoin provider or NFT artist can define viewing policies for their assets concerning any of the following data:

  • Sender and receiver addresses
  • The amount of an asset sent, received, and held
  • The type of asset being sent, received, and held

CAPE also supports more advanced policies making use of private verifiable credentials, freezing keys, or threshold schemes. An asset creator can leverage credentials by requiring users of their product to have, for example, KYC or accreditation status or membership in a given DAO. Freezing policies may be used by organizations like stablecoin providers that need to retain control over their assets in order to address fraud, theft, and dispute resolution. A threshold scheme would mandate that multiple key holders come together to view or freeze assets, which might be applicable to a group of DAO admins. CAPE aims to provide sufficient flexibility to cover a wide range of possible applications.
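To make these options concrete, here is a minimal sketch of how an asset creator's policy configuration might be modeled. Everything below is a hypothetical illustration; the type and field names are our assumptions, not the actual CAPE API:

```rust
// Illustrative sketch only; these types are hypothetical, not the actual CAPE API.

/// Which transaction fields a designated viewer can decrypt.
#[allow(dead_code)]
struct ViewingPolicy {
    reveal_addresses: bool,  // sender and receiver addresses
    reveal_amounts: bool,    // amounts sent, received, and held
    reveal_asset_type: bool, // the type of asset involved
}

/// An illustrative bundle of the policy options described above.
#[allow(dead_code)]
struct AssetPolicy {
    viewing: Option<ViewingPolicy>,      // None = fully private to everyone
    freezing_enabled: bool,              // e.g., an issuer retaining a freezing key
    required_credential: Option<String>, // e.g., KYC, accreditation, DAO membership
    key_threshold: u8,                   // key holders needed to view or freeze
}

fn main() {
    // A stablecoin-style configuration: the issuer sees all transaction data,
    // can freeze assets, requires KYC, and viewing/freezing is 2-of-n.
    let _stablecoin_policy = AssetPolicy {
        viewing: Some(ViewingPolicy {
            reveal_addresses: true,
            reveal_amounts: true,
            reveal_asset_type: true,
        }),
        freezing_enabled: true,
        required_credential: Some("KYC".to_string()),
        key_threshold: 2,
    };
}
```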

There are two main actions that an asset creator, like a stablecoin provider or an NFT artist, can undertake with CAPE: (1) origination of a brand new asset on CAPE and (2) the configuration of a CAPE-wrapped version of an existing ERC-20 token.

To originate a new asset on CAPE, the creator simply configures the asset as a new record with a name, symbol, icon, and associated policies. Each asset is defined by a unique identifier code and a set of rules that are enforced whenever the asset is transferred, including who has viewing and freezing capabilities.

To configure a CAPE-wrapped version of an existing asset, the asset creator enters the contract address of the relevant Ethereum asset and defines a name, symbol, icon, and policies for the CAPE-wrapped version of it. Holders of the existing Ethereum asset can then freely deposit their assets into the CAPE wrapper to make use of the privacy properties that the creator has set up.
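As a rough sketch of these two creator flows, continuing the hypothetical modeling from the policy sketch above (again, every name here is an illustrative assumption, not the CAPE interface):

```rust
// Hypothetical sketch of the two creator flows; none of this is the real CAPE API.

struct AssetPolicy; // stands in for the policy sketch shown earlier

#[allow(dead_code)]
struct AssetDefinition {
    name: String,
    symbol: String,
    wrapped_erc20: Option<String>, // Some(address) for wrapped assets, None for new ones
    policy: AssetPolicy,
}

/// Flow 1: originate a brand-new CAPE asset with its policies attached.
fn originate(name: &str, symbol: &str, policy: AssetPolicy) -> AssetDefinition {
    AssetDefinition {
        name: name.into(),
        symbol: symbol.into(),
        wrapped_erc20: None,
        policy,
    }
}

/// Flow 2: define a CAPE-wrapped version of an existing ERC-20; holders can
/// then deposit into the wrapper to gain the configured privacy properties.
fn wrap(erc20_address: &str, name: &str, symbol: &str, policy: AssetPolicy) -> AssetDefinition {
    AssetDefinition {
        name: name.into(),
        symbol: symbol.into(),
        wrapped_erc20: Some(erc20_address.into()),
        policy,
    }
}

fn main() {
    let _new_asset = originate("Example Coin", "EXC", AssetPolicy);
    let _wrapper = wrap("0x...", "Wrapped Example", "wEXC", AssetPolicy);
}
```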

To get started using CAPE, go to the CAPE User Guide found in Espresso’s docs here. We would love to hear your thoughts, feedback, and how you’d like to use CAPE. You can find us on Twitter and Discord.

What is next?

CAPE is the first Espresso Systems product to be released to users and is a showcase of some of the privacy functionality that will be featured natively on the Espresso layer 1 blockchain. CAPE today runs on Ethereum's Goerli testnet so that users can start experimenting with its functionality, provide feedback, and begin designing new types of assets and applications. We are excited to contribute a new data privacy solution to the Ethereum ecosystem, and we look forward to sharing more of what we are building as we work to solve for privacy, scalability, and everything else required to make Web3 usable for everyone. Keep an eye out: later this summer, we will release the first Espresso testnet, which will feature Configurable Asset Privacy as one type of transaction possible on the high-throughput, low-fee platform.

Go get started with CAPE here!

We want to hear from you! Drop us a message in Discord or on Twitter letting us know what you think of CAPE and what you want to use it for.

We are excited to stay in touch with you, our earliest users. To hear about new product releases, events, and more, sign up here.

Article
Releasing the Jellyfish cryptography library

Sun, March 06, 2022

In conjunction with announcing Espresso, we're excited to release the Jellyfish cryptography library. Jellyfish is a toolkit of cryptographic primitives, ranging from hash functions to accumulators to zero-knowledge proof systems. It is implemented entirely in Rust, allowing for both efficiency and correctness. One particular highlight of Jellyfish is our implementation of the PLONK zero-knowledge proof system, which our benchmarks indicate is currently the most feature-complete and fastest open-source implementation of PLONK.

While all of the tools in Jellyfish are being used to develop Espresso's scalable and privacy-enabled blockchain infrastructure as well as our release of Configurable Asset Privacy for Ethereum (CAPE), these tools are fundamental cryptographic building blocks for all Web3 and blockchain systems. Therefore, we have open-sourced Jellyfish under the MIT license, a highly permissive free software license. We're excited to play our part in bolstering the rapidly growing blockchain ecosystem and in supporting the rapid adoption and development of zero-knowledge proofs. We are already seeing significant interest in Jellyfish from projects in the community, and we couldn't be more thrilled to contribute to standardization efforts for zero-knowledge proofs.

Components of the Jellyfish library

Jellyfish implements a number of key cryptographic primitives:

  • PLONK, a zk-SNARK with universal setup
  • Hashes, pseudorandom functions (PRFs), and commitments based on Rescue, an arithmetization-friendly cipher that is ideal for zero-knowledge proofs
  • Merkle tree accumulators (see the sketch after this list)
  • Schnorr signatures
  • ElGamal public-key encryption
  • AEAD symmetric encryption
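
To give a flavor of what an accumulator does, here is a minimal, self-contained Merkle tree sketch. It uses Rust's standard-library hasher purely as a stand-in for Rescue, so it is illustrative only and not Jellyfish code:

```rust
// Toy Merkle accumulator. std's DefaultHasher stands in for Rescue;
// this illustrates the accumulator idea and is not Jellyfish code.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn h(data: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

/// Compute the Merkle root of a power-of-two list of leaves.
fn merkle_root(mut layer: Vec<u64>) -> u64 {
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| h(&[pair[0], pair[1]]))
            .collect();
    }
    layer[0]
}

fn main() {
    let leaves: Vec<u64> = (0u64..8).map(|i| h(&[i])).collect();
    // The single root "accumulates" all eight leaves: membership can later be
    // proven with a logarithmic-size path instead of revealing the whole set.
    let root = merkle_root(leaves);
    println!("root = {root:x}");
}
```

In Jellyfish, the hash in each tree node would be Rescue, whose arithmetization-friendliness keeps the in-circuit cost of every node low.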

Jellyfish’s PLONK implementation

The past several years have seen a snowballing of improvements to general-purpose zero-knowledge proof systems. Even for full-time researchers in the space, it has been difficult to keep track of all the new constructions, like Bulletproofs, Marlin, PLONK, and Supersonic, and their respective benefits and tradeoffs.

Central to Jellyfish is our implementation of PLONK, introduced in 2019 by Ariel Gabizon, Zachary J. Williamson, and Oana Ciobotaru. PLONK has gained a lot of interest in the last couple of years because its tradeoffs make it suitable for many privacy-related use cases in blockchains. Though PLONK still requires a trusted setup, it uses a universal and updatable setup procedure. It is “universal” in that a single trusted setup is enough for any program, rather than being limited to one specific program. PLONK's setup is also “updatable,” meaning the trusted setup can be continually refreshed by new parties; as long as a single participant is honest, the proof system is secure.
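
The practical upshot is that one setup ceremony can serve many different circuits. The sketch below shows the shape of that workflow; every type and function name is an illustrative assumption standing in for a PLONK-style API, not Jellyfish's actual interface:

```rust
// Shape of the universal-setup workflow; all names here are hypothetical
// stand-ins for a PLONK-style API, not Jellyfish's actual interface.

struct UniversalSrs;          // structured reference string from the one-time ceremony
struct Circuit(&'static str); // placeholder for an arithmetized program
#[allow(dead_code)]
struct ProvingKey(&'static str);
#[allow(dead_code)]
struct VerifyingKey(&'static str);

/// Run (or load the output of) the one-time universal setup, sized for the
/// largest circuit we ever expect to prove.
fn universal_setup(_max_degree: usize) -> UniversalSrs {
    UniversalSrs
}

/// Specialize the same SRS to one particular circuit; no new ceremony needed.
fn preprocess(_srs: &UniversalSrs, c: &Circuit) -> (ProvingKey, VerifyingKey) {
    (ProvingKey(c.0), VerifyingKey(c.0))
}

fn main() {
    let srs = universal_setup(1 << 17); // one trusted setup...
    let transfer = Circuit("private-transfer");
    let mint = Circuit("mint");
    // ...reused to preprocess any number of different programs.
    let (_pk_t, _vk_t) = preprocess(&srs, &transfer);
    let (_pk_m, _vk_m) = preprocess(&srs, &mint);
}
```

This is the property that makes universal setups attractive for evolving systems: new applications reuse the existing ceremony instead of requiring a new one.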

Our customized PLONK constraint system contains many key optimizations (a quick back-of-the-envelope use of these numbers follows the list):

  • An efficient Rescue hash circuit: a single 3-to-1 hash gadget (over the BLS12-381 scalar field) takes 148 PLONK constraints
  • Plookup integration, enabling efficient lookup gates and range-check gates
  • Circuits for PLONK verifiers
      • A PLONK verifier gadget (which verifies a single PLONK proof) takes only ~22,000 PLONK constraints
  • Efficient elliptic curve operation circuits
      • e.g., an elliptic curve addition/doubling circuit takes only 2 PLONK constraints
      • e.g., a variable-base multi-scalar multiplication gadget with 128 base points and 256-bit scalars uses only ~36,000 PLONK constraints, 8 times smaller than a naive circuit
  • Efficient modular arithmetic circuits, i.e., constraining a + b ≡ c (mod N) or a · b ≡ c (mod N) for a modulus N different from the size of the circuit field
      • e.g., for a 384-bit circuit field, a modular multiplication gadget with a 256-bit modulus takes only ~20 PLONK constraints
  • Circuits for various cryptographic primitives, e.g., Merkle trees, Schnorr signatures, ElGamal encryption, PRFs, commitments, etc.
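
To get a feel for these gadget sizes, here is a small, self-contained calculation using only the constraint counts quoted above (the depth-24 Merkle tree is a hypothetical example, not a CAPE or Espresso parameter):

```rust
// Back-of-the-envelope constraint budgeting from the figures quoted above.

const RESCUE_3_TO_1: u64 = 148;     // one 3-to-1 Rescue hash gadget
const PLONK_VERIFIER: u64 = 22_000; // one in-circuit PLONK proof verification (approx.)

fn main() {
    // A Merkle membership check costs roughly one hash gadget per tree level,
    // so a hypothetical depth-24 tree needs about 24 * 148 constraints.
    let depth: u64 = 24;
    println!("Merkle path check: ~{} constraints", depth * RESCUE_3_TO_1);

    // Verifying two PLONK proofs inside another circuit (e.g., for
    // aggregation) costs roughly twice the verifier gadget size.
    println!("two in-circuit verifiers: ~{} constraints", 2 * PLONK_VERIFIER);
}
```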

Meanwhile, our implementation of the PLONK proof system:

  • Integrates Plookup arguments
  • Enables batch proving, i.e., generating a single (larger) PLONK proof for multiple SNARK instances
  • Enables fast batch proof verification
  • Supports merging of PLONK proving keys, verification keys, and circuits (see the sketch after this list)
      • One can merge the proving key/verification key/circuit for SNARK instance A with those for SNARK instance B and obtain a proving key/verification key/circuit for the instance AB (i.e., the conjunction of A and B)
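
As a rough sketch of what key merging accomplishes (the type and the merge_keys function below are hypothetical stand-ins, not Jellyfish's interface):

```rust
// Hypothetical sketch of key merging; these names are not the Jellyfish API.

struct ProvingKey(Vec<&'static str>); // tracks which instances a key covers

/// Merge keys for instances A and B into one key for the conjunction AB.
fn merge_keys(a: &ProvingKey, b: &ProvingKey) -> ProvingKey {
    let mut covered = a.0.clone();
    covered.extend_from_slice(&b.0);
    ProvingKey(covered)
}

fn main() {
    let pk_a = ProvingKey(vec!["instance-A"]);
    let pk_b = ProvingKey(vec!["instance-B"]);
    // The merged key proves the conjunction AB with a single (larger) proof.
    let pk_ab = merge_keys(&pk_a, &pk_b);
    println!("merged key covers: {:?}", pk_ab.0);
}
```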

Finally, the Jellyfish PLONK implementation is generic in that it supports various prime fields and elliptic curves.

Getting Started with Jellyfish

We’re excited for the community to start exploring, integrating, and contributing to the development of Jellyfish. In the coming weeks, we’ll be publishing additional technical explainers and code samples using Jellyfish. Be sure to join our Discord community, where we have dedicated channels to discuss #cryptography and #development.

Join Us