CoinFund is excited to announce our investment in 3Box, a framework for managing self-sovereign user data.

How the diffusion of responsibility and efficiency of blockchain tech drive the market toward more decentralization

As it turns out, there are strong, natural economic incentives driving systems and products toward decentralization today. In this short piece, I’ll cover just three: regulatory arbitrage, liability protection, and public governance.

Regulatory arbitrage: the SEC and broker-dealer custody

Regulation, broadly, applies laws and rules to centralized actors in an effort to protect customers, create transparency, and make markets safer and fairer. But when regulation that is geared toward centralized actors meets decentralization-oriented products, a curious effect occurs: those products tend toward more decentralization.

A great example that demonstrates this is today’s SEC pronouncement regarding “broker-dealer custody of digital asset securities”. Not only does this guidance suggest that compliance may be much cheaper for “noncustodial” (read: “decentralized”) systems, it goes much further:

[…] a broker-dealer may face challenges in determining that it, or its third-party custodian, maintains custody of digital asset securities. […T]he fact that a broker-dealer (or its third party custodian) maintains the private key [to a digital asset] may not be sufficient evidence by itself that the broker-dealer has exclusive control of the digital asset security […]

In other words, if exclusive custody is a regulatory requirement for broker-dealers, it may be impossible to satisfy the existing rules with digital asset systems. Such circumstances of slow, inflexible frameworks coupled with the exorbitant costs of regulatory uncertainty create strong economic selection for noncustodial systems and, therefore, decentralization.

Liability protection: 3Box and the decentralized cloud

Between the hegemony of social media and the proliferation of Internet-powered devices, we are generating more sensitive user data than ever before. Traditionally, private user data is stored and secured on private company servers. But with the rise of malevolent data hackers on one side, and privacy-minded regulators on the other, storing sensitive user data has become an ever more dangerous liability for companies.

As it turns out, a good solution comes from decentralized storage networks like IPFS and StorJ. Not only can users now increase and control the security of their data, they can own and monetize it without requiring the private hardware (and centralization risk) of a monolithic third-party provider.

Decentralization of user data is also why I am extremely excited about CoinFund’s investment in 3Box, and I expect that more and more companies will be removing data custody and security from their list of costs and liabilities in the coming year. Once again, decentralization has created a mechanism for lowering costs and giving the responsibility for data back to the user.

Public governance: the plight of Facebook and other public utilities

When I look at Facebook over the last fifteen years, I am more convinced than ever that we are seeing the dramatic trajectory of a private company’s struggle to manage a gargantuan public good. Digital communication technology has become so ubiquitous that it is, for all practical purposes, a public utility on par with electricity and water. Yet, any private company single-handedly making decisions about mass content distribution that impacts half the planet is going to run into impossible tradeoffs: you can please the Democrats, but then the Republicans rebel.

Other companies, such as YouTube, are happy to take sides in the tradeoff to the detriment of their users and content creators, using a set of inconsistent methodologies which can only be described as, well, fascist. YouTube regularly makes unilateral decisions about which content and creators can operate on its platform.

But Facebook’s “Supreme Court” is perhaps the strongest validation for the decentralized governance space that is today embodied in blockchain projects like Aragon Network, MolochDAO, DAOstack, and dxDAO. It’s an inevitable realization (or admission) that the 20th Century’s centralized model of private governance is coming to a close — and doing so for strongly-economic reasons.

This announcement, underappreciated in crypto, is about tech finally moving on what we have intuitively understood in DAO space - that it is impossible for a private company to govern a public good without massive, undesirable, unmitigable liability. https://t.co/jh7gDel0jE

 — @jbrukh

As it turns out, the governance of public goods probably belongs in the hands of the users of those goods, not in the hardware and software platforms upon which they are implemented. Up until now, our best technology for public goods has been government. But decentralization technologies such as DAOs, voting systems, dispute resolution systems, on-chain cooperatives, and much more stand as a promising avenue toward effective governance of our modern public utilities.

As private companies increasingly shed the liabilities of governance, they will be looking toward decentralization technologies to make those solutions effective and robust. Even the great monolith that is Facebook sees the economic value in that.

Disclaimer: I touch upon some blockchain regulation matters above but, as always, this post should not be considered legal advice. For that, consult a legal professional.

Three modern decentralization drivers was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


At CoinFund/Grassfed Network we pay close attention to “protocol arbitrage” as part of our generalized mining program. These are…


Since the inception of bitcoin in 2008, blockchain technology has been described as anything from “snake oil” to “the solution to humanity’s biggest problems”. This post attempts to summarize and condense my answer to the question of “why blockchain?”, and explain it in a way that is clear, practical, and accurate. In order to do so, we must clarify what we mean by “blockchain”. We then need to recognize that we are talking about a universe of technologies that is both broader and more general than blockchain proper.

So what is blockchain? Wikipedia defines blockchain as “a growing list of records, called blocks, which are linked using cryptography”. Here, blockchain is seen as a ledger or a log (“growing”), and is a specific implementation, one in which individual transactions are packaged into blocks. The purpose of this arrangement comes in the next sentence: “By design, a blockchain is resistant to modification of the data.”

This, truly, tells us nothing of value. First, the space of crypto, blockchain, and decentralization is formed by many different technologies, not all of which are based on the idea of a chain of blocks. Secondly, and most importantly, why is resistance to modification the key feature? And why is it necessary in the first place? What does this have to do with cryptocurrencies? Why is enterprise so interested in using this tech? What about smart contracts and where do they come in?

Let me propose an alternative framework that is clear as to the purpose (purposes!) that this type of technology serves. Once this is accomplished, I will draw a connection between this framework and the limited view of blockchain as a “chain of blocks that makes data hard to modify”.

The framework I am proposing is based on the notion of Single Source of Truth Systems (SSoTSs). I define SSoTS as a cohesive digital infrastructure that can be relied upon, with clear guarantees of availability, to maintain a consistent and truthful view of some commercial, real world, or social state, collectively termed “critical data”.

Let’s unfold this definition a bit.

Here are some examples of the relevant commercial, real world, or social state: a state of ownership of something, like a stock or a currency (how much of an asset do I own? how much money is in my bank account?); a state of a contract or invoice (has it been paid? when is it due?); provenance (where did an item come from?); reputation (what has this person done in the past? can I rely on him/her?); social choice (which candidate got the highest number of votes?). The reason why SSoTS are necessary for these and other similar use cases is that, in all of the listed situations, the state represented inside a digital system has significant bearing on some real world outcomes. In other words, it affects someone’s livelihood and wellbeing. Consequently, the desire to circumvent or compromise systems that hold such data is strong and gets stronger as more critical data is stored within a digital infrastructure.

Any data that has the power to affect someone’s life is termed critical data. The key point here is that the amount of critical data, and our use of and reliance on it, have grown hyper-exponentially in recent years. This is because our ability to manage critical data digitally is the root cause of huge societal and economic improvements: anything from speedy travel to global commerce, global supply chain and shipping, food production, taxation, and legal process now relies on digital management of critical data.

Whereas, historically, critical data has been managed by people on paper, the introduction of computer networks has both increased the speed and efficiency of key economic and commercial processes, and has introduced a variety of new processes that could not have been reliably managed on paper. Because of this, the demand for digital management of critical data is only accelerating.

Simultaneously, it is becoming obvious that the database architectures that have been the predominant method of digitally managing critical data since the 1960s can no longer serve this purpose well. The increase in both the scope of digitally-managed critical data and the level of criticality of such data (think citizenship records, taxation information, and criminal histories) has shown that even a small number of errors, and even a small frequency of cases where such data is compromised, can have an outsized effect on someone’s life and wellbeing.

This, then, leads to the demand for SSoTS where the number of errors and inconsistencies, and the probability of malicious actions, are significantly reduced. I claim that this is the key component of my definition of SSoTS: it establishes the requirement of reliability.

My definition of SSoTS includes the requirement for there to be a clear guarantee of availability. The reason is that in many systems that manage critical data, one subtle but important attack vector that would prevent the system from serving its stated purpose is to somehow restrict access to the system. Someone who has the power to choose who can and cannot make use of such critical digital infrastructure has an outsized amount of influence that can be easily abused for malicious purposes. So, in many systems that host critical data, the requirement of unfettered accessibility is essential to their proper functioning. This is because such systems are becoming as essential to human life as having access to food or water. In fact, such systems often do determine whether individuals have access to food and water. Consequently, availability (otherwise termed censorship resistance) must often be a feature of the technology, rather than a mere social convention. It must be guaranteed by the design of the system.

It is important to note, however, that the requirement of availability is only relevant to some but not all SSoTS. In many cases, when the user of a system is a business, and when the system is used to maintain business, but not individual, critical data, sufficient availability may be achieved through means other than technology. For example, banks that perform settlement with each other can rely on social and legal guarantees of availability, rather than the technological ones.

Finally, the definition requires that the infrastructure be cohesive. I am referring here to the “single” aspect of “single source of truth”. The infrastructure is cohesive if it can serve as the one system from which a true state can be gleaned with complete assurance. Singularity is crucial if management of critical data is to be economically viable, and if we are to manage an ever-increasing number of critical data repositories. Having critical data reliably stored in a single system removes the need for expensive additional components and processes that combine multiple data sources to create a view of the true state. Without the requirement for cohesiveness, the process around critical data becomes too expensive to be viable in many important cases.

Why do traditional data management approaches fail to meet the needs of critical data management? The answer is that in traditional, centralized database management systems (DBMS) there is a persistent notion of a superuser, and it is because of this that DBMS-based infrastructure fails to become a true SSoTS. The superuser is the one who may at any point in time alter or modify information stored in the database, perform maintenance tasks, or upgrade the infrastructure in some way, even changing some of the features that it provides its clients. The need for a superuser in any such system seems legitimate: operators are required to have this level of access in order to correct mistakes, improve the systems, or simply keep them up and running. At the same time, the superuser privilege is the attack vector through which systems are routinely compromised and fail to deliver the reliability that critical data management requires. Essentially, the superuser privilege, created to improve reliability, is now the bottleneck preventing reliability in such systems from being improved further.

The malicious actions that are carried out through the superuser vector fall into four categories:

1. By a conscious choice of the superuser-operator, an action is carried out that disadvantages some of the users. Such is the choice social media companies often make when they sell data to advertisers, or silence legitimate free speech.

2. The superuser-operator is forced to carry out an action that is malicious towards some of the users of their system because of the pressure exerted on the operator by political or regulatory power structures. This often takes very subtle forms, for example, freezing bank accounts of certain businesses or individuals not because their activity is illegal, but because it is merely unpopular with government agencies.

3. Insider attacks, where the operator, or someone working for them, carries out an illegal or unethical action simply because they can. Such are insider attacks in banking, which may even result in significant losses.

4. Last, but not least, is the category of attacks perpetrated by external entities (hackers) who, through some clever means, are able to take control of the superuser access. The reason why these attacks are hard to guard against is that doing so would require multi-layered and expensive security measures inside the operator’s organization, something only the largest organizations can afford. For all the rest, the trust we put in the superuser is often broken not through the operator’s malice, but through its inability to fully block this attack vector.

It is also important to note that the way systems are architected on top of traditional DBMS infrastructure permits a broad range of errors not through malice or unethical behavior, but through mere operator or programmer mistakes. Architecturally, the entirety of the database that holds critical data is accessed with minimal restrictions by the software that governs the way the data is written or modified. Essentially, the data is correct not because the database enforces correctness, but because the system that is writing the data most often chooses not to generate incorrect data. But the code that implements the business logic is often a source of errors or data inconsistencies.

Typically, systems that host critical data are broadly replicated, which is necessary to ensure that errors can be found and corrected. For example, systems that hold settlement and accounting information between multiple financial organizations typically exist inside each such organization. The information is replicated between them, and the process of ensuring correctness involves a painstaking reconciliation between multiple systems, and a number of additional levels of control and auditing.

Replication of critical data among several systems is not a problem preventing existing commercial process from being carried out, but rather an impediment to new process being put in place and a limitation on how efficient this process can become. For example, in finance, code that implements the logic associated with a financial instrument typically exists within each organization that holds or trades the instrument. Consequently, introducing a new or novel financial instrument is an incredibly expensive process, one that requires multiple implementations of its business logic to be created among the organizations that hold and trade it. The “single” requirement of the “single source of truth” is not gratuitous; it simply answers the need for efficiency and agility. In an SSoTS, the process of implementing or modifying business logic associated with a process or a financial instrument is carried out only once, making it possible to create new instruments or change existing ones quickly and efficiently.

With this in mind, we return to the notion of SSoTS and formulate the most essential requirements that such systems must implement.

1. Because an SSoTS is the single system that holds relevant data, it must guarantee that the correctness of data is assured to the greatest extent possible within the system, not outside of it. The strict and specific rules for how the data is modified must be implemented by the same layer that holds the data. Such is the bitcoin blockchain — it does not allow any user to alter the numbers that represent someone’s ownership of bitcoin in an inconsistent way. There are only two legal ways that bitcoin ownership changes: (1) as a result of a payment action, or (2) as a reward for mining. No other ways exist. (A minimal sketch of such in-system rules follows this list.)

2. The superuser access vector cannot exist within SSoTS. It is simply contrary to the basis of the notion of SSoTS. No entity, including the operator(s) may alter the data in any way other than that expressly permitted within the system. Such is, once again, the bitcoin blockchain — its operators (miners) are not able to alter the data in any way other than that specified by the protocol.

3. In the cases where SSoTS manages critical data that determines livelihoods of people, it must be available to everyone, or censorship resistant. Censorship resistance, or more precisely, the inability of operators to prevent access to the network is, again, a property of bitcoin.

4. Much of the critical data managed by SSoTS must also adhere to strict requirements for privacy. In the same way that SSoTS are guarded against unauthorized modification of data, they must be also guarded against unauthorized access to it. This is one area in which modern public blockchain has not provided good generic solutions, although many ideas are in the works.
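
As a toy illustration of requirements 1 and 2 (rules enforced inside the system, and no superuser path around them), here is a minimal, hypothetical Python sketch. It is not Bitcoin’s actual data structure; it only shows a ledger whose balances can change through exactly two permitted transitions and nothing else.

```python
class ToyLedger:
    """A toy single-source-of-truth ledger: balances change only through the
    two rules below; there is deliberately no superuser 'set_balance' path.
    (Illustrative only -- not Bitcoin's actual data structures.)"""

    BLOCK_REWARD = 50

    def __init__(self):
        self.balances: dict[str, int] = {}

    def pay(self, sender: str, recipient: str, amount: int) -> None:
        """Rule 1: a payment, valid only if the sender has the funds."""
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("invalid payment")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

    def mine(self, miner: str) -> None:
        """Rule 2: a mining reward, the only way new units enter the system."""
        self.balances[miner] = self.balances.get(miner, 0) + self.BLOCK_REWARD

ledger = ToyLedger()
ledger.mine("alice")
ledger.pay("alice", "bob", 20)
assert ledger.balances == {"alice": 30, "bob": 20}
```

The point of the sketch is the absence of any method that writes balances directly; in a real SSoTS that absence is enforced by the protocol and its consensus rules, not by programmer discipline.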

From the discussion above, the ways in which blockchain answers the requirements for a narrow subset of SSoTS are now apparent. Essentially, conventional blockchain technology is a digital ledger in which transactions are strictly verified to adhere to certain rules that ensure the ledger can be used as a single source of truth in the financial accounting of an abstract digital asset, the native token of the blockchain. Because blockchain protocols are cleverly designed to ensure that no party can unilaterally break these rules, blockchain can serve as a single source of truth system. Bitcoin is the first system that ensured the requirements on an SSoTS are met, and met in technology, not by some convention or social/legal agreement. The subsequent explosion of crypto as an asset class attests to the deep implications of this advance.

The subsequent introduction of the notion of smart contracts by the Ethereum blockchain attests to the usefulness of SSoTS, specifically its singularity. Ethereum has demonstrated that financial instruments implemented within a SSoTS can interoperate in a way that makes them more liquid, accessible, and easier to create by many orders of magnitude.

In summary, the answer to the question of “Why blockchain?” is simple: because blockchain is an SSoTS, and as such, it provides a cheap and reliable way to manage critical data, namely, financial records. But we can only truly capture the breadth and power of SSoTS if we abstract ourselves away from blockchain as a narrow solution and instead focus on SSoTS more generically, based on the framework presented here. The space of SSoTS includes most blockchain systems, but is not limited to blockchain only. This reformulation of purpose allows investors and analysts to evaluate systems that come to market purporting to make use of blockchain on their merits and on the applicability of the SSoTS paradigm. Furthermore, it allows us to recognize viable and disruptive SSoTS approaches that are not blockchain-based.

Why (TF) Blockchain? Exploring the Nature and Purpose of Single Source of Truth Systems was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

P2P Networks and Blockchains: A Perfect Match, Decades in the Making

Introduction

P2P networks and blockchains are tales of many converging roads. While blockchain was invented in 2009, it combined multiple technologies from prior decades, such as Merkle trees (‘80), fault tolerance (‘89), linked timestamping (‘90), and proof-of-work (‘92). Similarly, the underlying components of P2P networks also assembled themselves over the decades (e.g. Ethernet (‘73), TCP/IP (‘83), and distributed hash tables (‘01)).
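
Of those building blocks, linked timestamping is the easiest to see in miniature. Below is a hypothetical Python sketch (not any production implementation) of a hash-linked log: each entry commits to the hash of its predecessor, so tampering with an old entry invalidates every later link.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Hash the entry's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log: list, data: str) -> None:
    """Append an entry that commits to the previous entry's hash."""
    prev = entry_hash(log[-1]) if log else "0" * 64
    log.append({"prev_hash": prev, "data": data})

def verify(log: list) -> bool:
    """The log is intact only if every stored prev_hash still matches."""
    return all(log[i]["prev_hash"] == entry_hash(log[i - 1]) for i in range(1, len(log)))

log: list = []
append(log, "alice pays bob 5")
append(log, "bob pays carol 2")
assert verify(log)

log[0]["data"] = "alice pays bob 500"   # tamper with history
assert not verify(log)                   # every later link now breaks
```

Proof-of-work and fault-tolerant consensus then make it expensive for anyone to rewrite the chain of hashes, which is what turns this simple structure into a shared ledger.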

It turns out that these are complementary technologies. P2P networks are a key component of cryptonetworks like Bitcoin, which describes itself as a “peer-to-peer electronic cash system”, since nodes need to broadcast transactions and blocks throughout the network.

While there were many attempts at other P2P networks prior to 2009, most of them failed. They did, however, leave behind a treasure trove of data and research about what worked, what didn’t, and what the missing pieces were. In this post I will discuss the reasons why they failed, potential solutions, and arguments for why blockchains are an effective implementation of those solutions.

P2P 101

Formally, a distributed network architecture may be called a peer-to-peer network if the participants share a part of their own hardware resources (e.g. processing power, storage capacity, etc.). These shared resources are necessary to provide the service and content offered by the network and they are accessible by other peers directly without passing intermediary entities. The participants of such a network are both resource providers and resource requestors.

P2P networks have two flavors. A network is “pure peer-to-peer” if any entity can be removed from it without the network suffering any loss of service. These nodes are referred to as “servents” (SERVer+cliENT). The benefit of this architecture is that it’s fault-tolerant and autonomous. That said, it has slow information discovery and no guarantees about the quality of service. Examples are Gnutella and Freenet.

“Hybrid peer-to-peer” has a central entity that’s necessary to provide parts of the offered network services. One example of such a service is indexing — while there exists a central server that maintains directories of information about registered users to the network, the end-to-end interaction remains between two peer clients.

There are also two flavors of indexing — centralized and decentralized. With centralized indexing, a central server maintains an index of the data or files that are currently being shared by active peers. Each peer maintains a connection to the central server, through which the queries are sent. This architecture is simple, operates efficiently for information discovery, and provides comprehensive searches. The downside is that it’s vulnerable to censorship and other types of attacks, and has a single point of failure. The most famous example is Napster, which had centralized indexing and distributed file exchange.

With decentralized indexing, the central server registers users and facilitates peer discovery, while “supernodes” maintain the central indexes and proxy search requests on behalf of peers. Queries are therefore sent to supernodes, not to other peers. Peers are automatically elected to become supernodes if they have sufficient bandwidth and processing power. Relative to purely decentralized systems, this architecture reduces the discovery time and message traffic between nodes. Relative to centralized indexing, this architecture reduces the workload on the central server but has slower information discovery. KaZaA is the most famous example here.
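
As a rough illustration of the two indexing styles, the hypothetical Python sketch below shows the core of an index node: peers register the files they share, and queries return which peers currently hold a file. In a Napster-style design there is one such server; in a KaZaA-style design there are many supernodes, elected from among well-provisioned peers.

```python
class IndexNode:
    """A minimal index node: peers register the files they share, and queries
    return which peers currently hold a file. (Toy model of Napster-style or
    supernode indexing, not real protocol code.)"""

    def __init__(self):
        self.index: dict[str, set[str]] = {}   # file name -> peers sharing it

    def register(self, peer: str, files: list[str]) -> None:
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def query(self, name: str) -> set[str]:
        return self.index.get(name, set())

# Centralized indexing: one such server. Decentralized indexing: many such
# supernodes, each serving the ordinary peers connected to it.
supernode = IndexNode()
supernode.register("peer-a", ["song.mp3", "paper.pdf"])
supernode.register("peer-b", ["song.mp3"])
assert supernode.query("song.mp3") == {"peer-a", "peer-b"}
# The actual file transfer then happens directly between the peers.
```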

A visual summary of architectures

A Survey of Pioneers

While Napster and BitTorrent are the most famous examples of P2P networks, there were actually many attempts around other types of networks, mainly around compute and storage. Here’s a brief overview of the lesser-known but equally-interesting projects:

“All kinds of computations”, they said.

POPCORN, 1998: This project provided a market-based mechanism for trade in CPU time to motivate owners of processors to provide their CPU cycles for other people’s computations. It had a notion of a “market”, a trusted intermediary that was responsible for matching buyers and sellers, transmitting computation requests and results between them, and handling payments and accounts. Interestingly, the project also defined an abstract currency called a “popcoin”. All trade in CPU time was to be done in terms of popcoins, which were initially implemented as entries in a database managed by the market. Each user of the POPCORN system had a popcoin account, paying out of it for CPU time consumed or depositing into it when selling CPU time. An online Popcorn market was live from 1997 to 1998 but is no longer being maintained.

OceanStore, 2000: The goal was to provide a consistent, highly-available, and durable storage utility atop an infrastructure comprised of untrusted servers. Any computer can join the infrastructure, contributing storage in exchange for economic compensation. The project envisioned a cooperative utility model in which consumers paid a monthly fee in exchange for access to persistent storage. Pond, a prototype implementation of OceanStore, was published in 2003. It wasn’t commercialized, however, and the source code is available here.

I’d pay someone to NOT see that video.

MojoNation, 2002: This was an open-source P2P system for sharing digital content. It used a swarm distribution mechanism that allowed file sharing from parallel and multiple peers, similar to BitTorrent (in fact, Bram Cohen worked on MojoNation before going on to create BitTorrent!). Interestingly, MojoNation created an economy of incentives, using micropayments called “Mojo” to reward users for distributing and uploading files to the network. To earn Mojo, a user could act as a server, allow their bandwidth or hard drive space to be used, or sell services, with those buying and selling determining the prices. Users also had a reputation system of sorts, since the quality and reliability of service providers were reviewed and tracked by agents. MojoNation ceased operation as a commercial enterprise in 2002, when it was replaced by the noncommercial Mnet project, which was open sourced but not actively maintained.

Tahoe-LAFS, 2008: This is a system for secure, distributed storage that uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It’s open sourced and still actively maintained, and is commercialized by LeastAuthority, which charges $25/month for storage. Notably, Zooko Wilcox of ZCash is one of the project founders.

“Sounds like most of these didn’t take off. Did anything succeed?”

Yes — BitTorrent. According to a 2018 report from Sandvine, file-sharing accounts for 3% of global downstream and 22% of upstream internet traffic, of which 97% comes from BitTorrent.

Even though it lost much of its market share over the past decade, today BitTorrent comfortably sits in fifth place with 4.10% of all Internet traffic. Here’s the full picture according to Sandvine:

Why did many P2P Networks fail?

There are many reasons, including legal issues, the convenience of alternatives, supply & demand dynamics, and overall PC & internet penetration. In this post, however, I will focus on the technical and economic reasons why P2P networks couldn’t scale and sustain themselves.

The Free Rider Problem

In 1954, Paul Samuelson highlighted the market failure of free-riding:

“It is in the selfish interest of each person to give false signals, to pretend to have less interest in a given collective consumption activity than he really has”.

This issue occurs with public goods, which have the properties of being non-excludable (everyone could use them) and non-rivalrous (one use does not reduce availability). Because the services provided over P2P networks have some characteristics of public goods, they often suffer from the under-provision of content. This is likely exacerbated by the voluntary nature of contributions. For example, an analysis of the Gnutella network found that more than 70% of its users contribute nothing to the system.

Large populations

Free-riding worsens with group size. For P2P networks, this limits scalability because it damages their “auto-replication” characteristic, which ensures that content on the network is available in proportion to its popularity because the consumption of resources through downloads is balanced by the provision of resources through sharing.

Natural disincentives

Cooperation consumes a user’s own resources and may degrade their performance; as a result, each user’s attempt to maximize her own utility effectively lowers the overall utility of the system.

The payoff matrix for an application like P2P file sharing

Imbalance of interests

Alice wants service from Bob, Bob wants service from Carol, and Carol wants service from Alice.

A visual representation

Zero-cost identities

While this is desirable for network growth as it encourages newcomers to join the system, it also allows misbehaving users to escape the consequences of their actions by switching to new identities.

How do we solve these issues?

Fortunately, there has been a large body of work in the ‘00s aimed at addressing many of these issues. Below are the five most relevant proposals based on published academic research:

  • Micropayment systems (Golle et al. 2001)
  • Shared History (Feldman et al. 2004)
  • Subjective Reciprocity (Feldman et al. 2004)
  • Network pricing (Cole et al. 2003)
  • Admission control systems (Kung and Wu 2003)

This is where cryptonetworks come in. In the next section, I will briefly go through each potential solution and provide a sense of whether cryptonetworks could be the missing piece of the puzzle.

Micropayment Systems

In short, micropayment systems lead to a Nash equilibrium where the optimal strategy is to share resources.

Applicability: HIGH — In the ‘00s, there was no effective solution to make the negotiation between resource consumers and providers automatic, and it was too difficult to deploy the necessary infrastructure, such as secure e-cash and service auditing. Cryptocurrencies and smart contracts effectively solve both of these issues.
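
A toy payoff comparison makes the incentive shift concrete. The numbers below are purely illustrative (they are not taken from Golle et al. 2001): without micropayments a peer pays the cost of uploading and earns nothing, so free-riding dominates; with a payment per upload, sharing dominates.

```python
# Toy payoff comparison (illustrative numbers only).
COST_TO_SERVE = 1.0      # bandwidth/CPU cost of uploading one file
PAYMENT = 1.5            # hypothetical micropayment earned per upload

def payoff(shares: bool, micropayments: bool) -> float:
    if not shares:
        return 0.0                       # free rider: no cost, no revenue
    revenue = PAYMENT if micropayments else 0.0
    return revenue - COST_TO_SERVE

# Without micropayments, sharing is strictly worse than free-riding...
assert payoff(shares=True, micropayments=False) < payoff(shares=False, micropayments=False)
# ...with micropayments, sharing becomes the dominant strategy.
assert payoff(shares=True, micropayments=True) > payoff(shares=False, micropayments=True)
```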

Shared History

Every peer keeps records about all of the interactions that occur in the system, regardless of whether he was directly involved in them or not. This allows users to leverage the experiences of others, resulting in higher levels of cooperation than with private history and scaling better to large populations and high turnovers.

Applicability: HIGH — Creating a shared history requires a distributed infrastructure to store the history; a *perfect* application for blockchains.

Subjective Reciprocity

While shared history is scalable, it is vulnerable to collusion, an issue magnified in systems with zero-cost identities since users can create fake identities that report false statements (i.e. Sybil attacks). To deal with collusion, entities can compute reputation subjectively, where player A weighs player B’s opinions based on how much player A trusts player B.

Applicability: MEDIUM — While this will help any P2P network, such algorithms do not benefit from smart contracts. That said, constructing a network graph will also require information to be stored using distributed infrastructure.
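
For intuition, here is a hypothetical sketch of the weighting idea (it is not the algorithm from Feldman et al. 2004): peer A scores a target by weighting each peer’s reported opinion by how much A trusts that peer, so opinions from untrusted Sybil identities carry little weight.

```python
# Hypothetical sketch of subjective reputation: A scores a target by weighting
# each peer B's reported opinion by how much A trusts B, so colluding Sybils
# that A does not trust contribute little.

def subjective_reputation(trust_from_a: dict[str, float],
                          opinions: dict[str, dict[str, float]],
                          target: str) -> float:
    weighted, total_trust = 0.0, 0.0
    for peer, trust in trust_from_a.items():
        if target in opinions.get(peer, {}):
            weighted += trust * opinions[peer][target]
            total_trust += trust
    return weighted / total_trust if total_trust else 0.0

trust_from_a = {"bob": 0.9, "sybil1": 0.01, "sybil2": 0.01}
opinions = {
    "bob":    {"carol": 0.2},   # trusted peer reports poor behaviour
    "sybil1": {"carol": 1.0},   # colluders report glowing (false) scores
    "sybil2": {"carol": 1.0},
}
print(subjective_reputation(trust_from_a, opinions, "carol"))  # ~0.22, close to bob's view
```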

Network Pricing

Network users are assumed to selfishly route traffic on minimum-latency paths, but the outcome does not minimize the total latency of the network. One strategy, discussed informally as early as 1920, for improving the selfish solution is “marginal cost pricing”, where each network user on the edge pays a tax offsetting the congestion effects caused by its presence. This leads to a Nash equilibrium with the minimum-possible total latency.

Applicability: MEDIUM — Smart contracts provide the ability to create a contract which contains an algorithm computing the optimal price without the need for a trusted third party.
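
To make the taxing scheme concrete, here is a small sketch with an assumed linear latency function l(x) = a * x on a single link; the standard marginal-cost tax charged per unit of traffic is x * l'(x). The functional form and numbers are illustrative, not taken from Cole et al. 2003.

```python
# Sketch of marginal-cost pricing on one congested link with latency l(x) = a * x,
# where x is the traffic on the link. (Illustrative functional form only.)

def latency(x: float, a: float = 1.0) -> float:
    return a * x

def marginal_cost_tax(x: float, a: float = 1.0) -> float:
    # derivative of a * x is a, so the congestion tax per unit of traffic is x * a
    return x * a

flow = 0.5   # hypothetical share of users routed over this link
print(latency(flow), marginal_cost_tax(flow))
# A smart contract could publish such a price so that no trusted operator
# is needed to set and collect the congestion tax.
```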

Admission control systems

A node desiring to receive content from other nodes will need to contribute to the P2P network at a certain level. A freeloader, which has a low service reputation and a high usage reputation, will be denied service. The node will need to either provide an increased level of service or reduce its content usage in order to be readmitted again.

Applicability: MEDIUM — While we haven’t yet solved decentralized reputation, blockchains provide a path forward for reputation without relying on a trusted third party.
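
A minimal sketch of such a check, with hypothetical thresholds: a peer whose usage reputation is far larger than its service reputation is denied service until the ratio improves.

```python
# Toy admission-control check in the spirit described above (thresholds are
# hypothetical): deny service to peers whose usage far outstrips contribution.

def admit(service_reputation: float, usage_reputation: float,
          max_ratio: float = 2.0) -> bool:
    """Admit a peer only if its usage is at most `max_ratio` times its contribution."""
    contribution = max(service_reputation, 1e-9)   # avoid division by zero
    return usage_reputation / contribution <= max_ratio

assert admit(service_reputation=10.0, usage_reputation=15.0)       # healthy peer
assert not admit(service_reputation=0.5, usage_reputation=20.0)    # freeloader
```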

Looking forward

Today, we’re seeing many cryptonetwork equivalents of last decade’s P2P networks (e.g. Golem, Livepeer, Filecoin). It’s unclear whether these networks have implemented all of the learnings above, but they did effectively implement at least two — micropayments and shared history.

In addition, the context within which these networks operate has changed. Global PC and internet penetration has dramatically increased, leading to a vastly larger pool of storage, compute, and bandwidth resources to be shared. Furthermore, the demand for resources has expanded beyond government and academia to organizations and individuals that require additional resources for novel use-cases, such as machine learning and graphics rendering. And UX has come a long way toward making these networks more accessible to non-technical users.

Lastly, the elephant in the room is the question of demand. In a 2001 article, Robert Metcalfe made an interesting remark regarding distributed computation as a commercial venture:

“The costs of computation keep going down, so why bother trying to recycle the waste of this renewable resource?”

Indeed, if people or companies don’t really need this or are fine with centralized alternatives that are cheap enough with a good enough quality of service, then this might be a solution looking for a problem.

As with so much in crypto, I think we’ll get preliminary answers over the next few years. If it doesn’t work out, at least we’ll walk away with more data to inform the next generation of resource sharing networks.

Sources:

A Survey of Peer-to-Peer Networks

An Empirical Analysis of Network Externalities in Peer-to-Peer Music-Sharing Networks

Robust Incentive Techniques for Peer-to-Peer Networks

BOINC: A Platform for Volunteer Computing

Experiences Deploying a Large-Scale Emergent Network

OceanStore: An Architecture for Global-Scale Persistent Storage

Tahoe — The Least-Authority Filesystem

The POPCORN Market — an Online Market for Computational Resources

P2P Networks and Cryptoassets: A Perfect Match, Decades in the Making was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

How end-to-end device-level security will transform the world of software and computing

TL;DR
  1. Secure hardware protects data and software from tampering by the owner or operator of the device where it resides. This includes CPUs, GPUs, embedded chips (such as in cameras, keyboards, and computer screens), and IoT devices.
  2. In the future, much of the IT infrastructure will run on secure hardware. Humans will never come into direct contact with some of the data protected by secure hardware.
  3. It is not possible to manufacture secure hardware within the traditional corporate frameworks of secrecy and intellectual property protection. This presents a huge opportunity for forward-looking organizations to compete with large incumbents such as Intel and AMD.

Let me tell you three stories about what the future will look like.

First Story: A Medical Emergency

Jim, a hiker, is injured on a mountain pass in an area with no cellphone reception. His brother Bob leaves him in a tent, and hikes down to the nearest town to call 911. During the call, Bob transmits to the operator a packet of secure information that contains Jim’s critical medical records, such as his blood type. Bob has downloaded the information from Jim’s phone via Bluetooth. The packet carries a number of important restrictions. It can only be decrypted once, by a medical professional registered with a computer network that manages access to medical records. Once decrypted, it cannot be transmitted further by either Bob or the paramedics. The only way to decrypt the packet is to use secure computing chips installed on every relevant device. The chips talk to each other directly, without revealing cryptographic keys to people. After the operator transmits the packet to the rescue team, the packet is decrypted, and the paramedics are now fully prepared to assist Jim, who is given an onsite blood transfusion and is taken by helicopter to the nearest hospital. After spending a week there, Jim safely returns to his family.

Second Story: A Fake Deep Fake

A reporter films a politician taking a bribe from a local drug boss. Upon the release of the video by the media, the politician hires a team of AI engineers who produce a deep fake video of President Nixon taking a bribe from the same drug boss. The politician then asserts that the reporter’s video is fake. In response, the reporter produces a cryptographic attestation downloaded from her camera, which proves, beyond doubt, that the image was, in fact, taken by this camera on the date and time specified. This is enabled by a tamper-proof secure processor embedded into the camera and certified by the manufacturer. The politician is therefore discredited and has to resign from his position.

Third Story: Access Granted

Jennifer, a customer of GigaPay, an internet bank, is on a trip to Malindi, Kenya, where one day, her purse is stolen. Not only does she lose all her money and credit cards, but she is now without a cellphone, stranded in a foreign country. She walks to an internet cafe and finds a computer of the latest model, one equipped with secure devices. When she opens the web page for GigaPay, the bank offers to let her log in via the secure devices available on the desktop she is using. As part of the login process, the bank’s server establishes a secure connection to three different devices: Jennifer’s keyboard, Jennifer’s monitor, and, separately, the fingerprint reader installed on the keyboard. On the next screen, the bank’s systems make a direct connection to Jennifer’s monitor and display a secret piece of information that assures Jennifer that she is, in fact, connected to her bank, not to a team of hackers. Jennifer enters her password and allows her fingerprint to be scanned. Both her password and her fingerprint are transmitted directly between the endpoints, ensuring that nobody can either fake or intercept them. As a result, Jennifer can now access her bank account, block her stolen cards, and request urgent delivery of a replacement bank card to Malindi.

What Is Secure Hardware?

For those not weak of heart and mind, I recommend reading the detailed account of the Intel SGX architecture written by an amazing team at MIT, available here. For the other 99.999% of readers: briefly, Intel SGX is an architectural addition to the latest wave of Intel CPUs that allows users to run software in a way that is fully protected from tampering or being compromised by the people or organizations that operate the data center where the software is running. In principle, Intel SGX and similar systems can enable cloud services to provide sufficient security guarantees (of privacy and integrity) to run highly sensitive code or store highly sensitive data, such as medical records and cryptographic keys.

While security-sensitive cloud computing is often cited as the main use case for secure enclave architectures (another word for the class of CPU architectures similar to Intel SGX), in my view, these types of applications barely scratch the surface of what becomes possible with secure hardware. The future architectures of decentralized networks, economically enabled IoT networks, and innovative approaches to authentication and operational security in centralized contexts offer a glimpse of other, broader and more impactful applications.

The key requirement for the specific class of secure hardware that I am focusing on is that the architecture of such systems strongly protects software and data from being tampered with or compromised by its owner-operator. When this requirement is fulfilled by a majority of the devices on the network, the architecture of the whole system achieves end-to-end device-level security, standing in stark contrast to the prevalent approach used today, which merely entails surface-level security and is entirely unfit for a broad range of important use cases.

General computation hardware that limits the agency of the operator already exists in the form of the aforementioned Intel SGX, as well as a number of similar designs, both proposed and manufactured (for example, Sanctum). There are many specialized systems (cryptocurrency wallets, point-of-sale systems) that have embedded secure chips. Unfortunately, embedded secure chips inside end-user peripherals, such as the ones described in two of our stories, are virtually non-existent today.

This post argues for a broad adoption of secure hardware for both general and embedded computing. We argue that in order to provide requisite safety to consumers of computer networks, secure hardware must be developed using the open source approach, and that manufacturers must provide unrestricted software access to such devices.

Why Secure Hardware?

Currently, organizations do their best to protect their internet-exposed surface, usually composed of web servers that connect client-side devices with the company’s internal data and software resources. The organizations are the ones to decide upon the internal security procedures they choose to employ. While well-known best practices exist for encryption, for storage of passwords and other sensitive information, and for protecting the organization’s attack surface, companies routinely make costly mistakes and compromise the safety of their customers at an alarming scale and with increasing frequency.

In a world where customers can’t trust the operator of the software on which they must rely, we need an approach that enables one to independently verify the security of connection, data, and computation. Unfortunately, traditional security models cannot offer anything of the sort. This is because the agency of the operator in traditional computing models is unbounded. Organizations then cannot fully protect themselves from breaches, because once the attackers are inside (either because they were insiders, or because they have compromised an insider’s system), their ability to cause damage is unlimited.

The picture is quite different for end-to-end device-level security. With secure hardware, end-users can directly verify the integrity and privacy of their data and connections, if they are willing to accept the fundamental assumption that secure hardware is sufficiently tamper-proof. Rather than relying on operators to follow good practices (something we should increasingly be unwilling to do), this approach provides the necessary guarantees by reducing the operators’ control over data and software.

In other words, with secure hardware, it is not enough for an attacker to penetrate the organizational boundary. They must penetrate the secure layer of a hardware component. While the design of secure chips is difficult, what sets them apart is the ability of their creators to ensure security at the systemic level once and for all. This not only changes the safety, but also the economics of secure networks, by removing the redundant need of every operator to implement their own protection.

AI and Security

We are quickly building a world in which artificial intelligence software systems deeply penetrate social interactions and may soon literally be controlling critical societal functions, such as determining central bank interest rates. I am going to be highly contrarian and will probably be ridiculed by many well-intentioned science fiction writers when I say that reducing human agency over such software is a good thing.

Consider the case of a social networking company. A good social network will display content that is highly relevant to the user, based on the user’s behavioral history. In order to work well, the software underlying the relevancy engine has to be trained on the user’s historical data. Within the traditional IT architecture employed by Facebook and other similar organizations, the user’s data is collected by the company that provides the social network as a service. It is then almost inevitably misused by the company, or compromised through a social attack or a security breach, harming both the individual users and the larger society. But what if I wanted to create a company that provably guarantees privacy to users, while at the same time employing a well-built AI model that uses data in ways that users actually want?

One way to do this is by using secure hardware. Instead of connecting to the company’s web server, beyond which the user has no visibility, the user’s device would talk directly to the database server running on secure hardware. The code running on this server could be verified by the community of users independently from its operator, by using a combination of code reviews and cryptographic attestations ensuring that only provably safe software is allowed access to the user’s private data. While the operator company may both provide the computing resources and develop all the relevant software, the user community has sufficient tools to directly ensure that the privacy of the users is not being violated. The key to this is, again, the way secure hardware diminishes the agency of the owner-operator.
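
A highly simplified sketch of that flow is below. All names are hypothetical and this is not the real Intel SGX SDK; the point is only that the client releases private data when the server’s attested code measurement matches a hash of the build the user community has audited.

```python
import hashlib

# Highly simplified sketch (all names hypothetical, not the real Intel SGX SDK):
# the client releases private data only to server code whose attested
# measurement matches a community-audited build hash.

AUDITED_MEASUREMENTS = {
    # hash of the open-source relevancy-engine build the user community reviewed
    hashlib.sha256(b"relevancy-engine v1.4, audited build").hexdigest(),
}

def verify_attestation(reported_measurement: str) -> bool:
    """In reality this check is backed by the chip's signed attestation,
    not a bare string comparison."""
    return reported_measurement in AUDITED_MEASUREMENTS

def send_private_data(reported_measurement: str, data: bytes) -> None:
    if not verify_attestation(reported_measurement):
        raise PermissionError("server is not running audited code; refusing to send data")
    # ... establish an encrypted channel that terminates inside the secure hardware ...

good = hashlib.sha256(b"relevancy-engine v1.4, audited build").hexdigest()
send_private_data(good, b"user history")          # accepted
# send_private_data("something-else", b"...")     # would raise PermissionError
```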

Let’s consider another example. Imagine a conglomerate of supply chain companies that want to build an AI system to control and optimize logistics. Each company is in possession of an important dataset that needs to go into training such a system. There is one problem: companies don’t want to give away their data to their competitors. They also want to avoid a situation in which the resulting model becomes available to some but not all members of the consortium. There are a number of solutions available today, ranging from hiring a third party to train the model for them (which is unreliable and prone to security risks) to employing a complicated scheme for secure multi-party computation, a technique that has not matured to the point where it can handle a computation as complex as the one underlying an artificial neural network.

If appropriate secure hardware were available, it would be the cheapest way to carry out this process. A network of secure servers could easily perform a joint computation of this sort while providing each party with the necessary assurances of both privacy and availability. The servers would remotely verify each other’s security guarantees, a feature only available with secure hardware.

The Case for Highly Embedded Chips

Our third story described a banking system that was directly connecting to the user’s keyboard, ensuring that the owner of the desktop computer was unable to compromise the way in which the user is entering her password or fingerprint. Granted, this is a futuristic architecture, but it’s not too far-fetched. In fact, such an architecture would eliminate many issues not just in traditional banking, but also in innovative cryptocurrency systems.

The way in which today’s hardware cryptocurrency wallets are compromised is not by extracting the private keys they contain, which is reliably hard, but by breaking into the way users interact with the wallet’s chip. The attacker would send a malicious transaction to the wallet’s secure chip for signing, while keeping the user convinced that they are sending the correct one.

In order to reliably prevent this, end-to-end device-level security is required. The hardware wallet in possession of the private key must not rely on insecure components to actually display the relevant information to the user, but must instead ensure that the information the user sees on her screen provably corresponds to the information transmitted to the wallet. The system must not rely on the security of the host computer. Instead, it must have a tamper-proof security chip literally embedded into the computer screen. Such a chip can occasionally take control of a screen area to display secure information to the user. The cryptocurrency wallet must talk directly to this chip over an encrypted channel completely opaque to the host computer and the operating system. With this mechanism, both cryptocurrency wallets and traditional banking and financial websites can achieve a level of security that far exceeds today’s state of the art.
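
Schematically, the interaction could look like the sketch below. Every class and method name here is hypothetical; a real implementation would rest on an authenticated, encrypted channel between the wallet and the screen chip that the host computer can neither read nor alter.

```python
# Schematic sketch of the wallet/screen-chip interaction described above.
# All names are hypothetical placeholders, not a real device API.

class ScreenChip:
    """Stands in for a tamper-proof chip embedded in the display: it shows data
    received over the secure channel and returns the user's confirmation."""
    def show_and_confirm(self, tx_summary: str) -> bool:
        print(f"[secure overlay] confirm: {tx_summary}")
        return True   # stands in for the user's physical confirmation

class HardwareWallet:
    def __init__(self, screen: ScreenChip):
        self.screen = screen

    def sign(self, transaction: dict) -> str:
        summary = f"pay {transaction['amount']} to {transaction['to']}"
        # The wallet signs only what the secure display has shown to the user,
        # so a compromised host cannot swap in a different transaction.
        if not self.screen.show_and_confirm(summary):
            raise RuntimeError("user rejected transaction")
        return f"signature-over({summary})"

wallet = HardwareWallet(ScreenChip())
print(wallet.sign({"amount": 1.0, "to": "bc1q..."}))
```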

In fact, secure hardware across the board is the answer to the long-standing problem of user experience that arises in the context of cryptocurrencies and blockchain. A decentralized network based on secure hardware does not have to employ cryptographic keys as a mechanism to identify individuals in possession of cryptocurrency wallets. It can operate on usernames and passwords, and can implement multi-factor authentication on user-side devices where the appropriate secure hardware is installed.

A Bit About How

I have so far tried to convince the reader of the importance of secure hardware that enables end-to-end hardware-level security for many of the use cases that are already important today. I am convinced that without an architecture that enables all parties to independently ascertain the security of the overall system (which is only possible with secure hardware), some important use cases will never come to fruition, because they could never be trusted to have the right level of security guarantees.

But while secure hardware removes the need to trust the system’s operators, it introduces the opposite issue: having to trust the hardware manufacturer. Many analyses of Intel SGX criticize the way in which Intel keeps its consumers in the dark as to the critical details of the architecture, and artificially introduces hoops through which software developers must jump in order to get Intel to “approve” their software for running under SGX.

The sane approach to building out the global layer of secure hardware is to adopt the open-source model, in which the chip is designed and verified by the open community, and then produced by a number of competing manufacturers, for whom it would be fatal from a business standpoint to introduce backdoors into the end product. In other words, a healthy ecosystem of secure hardware components cannot grow in a world of siloed monopolies.

The problem then is to come up with an incentive model that would enable the designers of secure hardware to be well-compensated for their work, while at the same time not keeping the design secret, and enabling multiple manufacturers to produce chips based on the same design.

On the surface, this problem may seem unsolvable. After all, we have been struggling with this exact issue in the space of open-source software for quite some time. I believe, however, that it is possible to make use of the fact that open-source hardware must still be manufactured and made into a physical object — a chip. Given that a chip is a physical device, and that it must be trusted by the customers, it may be possible to create the right incentives using the notion of fractionalized supply.

I first proposed the notion of fractionalized supply in the context of the pharmaceutical industry. The key insight is that, just like drugs, chips are expensive to design and cheap to manufacture at scale. Consequently, just like with drugs, computer chip design can be financed by pre-selling rights to a portion of the chip’s manufactured supply. The owner of, say, a one-percent fraction of the supply may produce, or license someone else to produce, up to one percent of the total number of chips manufactured in a given period of time. The owner is then allowed to use or sell these chips for profit.
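
The arithmetic of fractionalized supply is simple; here is a worked example with hypothetical numbers.

```python
# Worked example of fractionalized supply (hypothetical numbers): an investor
# who pre-purchased a 1% supply fraction may produce, or license others to
# produce, up to 1% of the chips manufactured in a given period.

def production_cap(total_chips_this_period: int, fraction_owned: float) -> int:
    return int(total_chips_this_period * fraction_owned)

total_chips = 10_000_000          # chips the ecosystem manufactures this period
print(production_cap(total_chips, 0.01))   # owner of a 1% fraction: 100,000 chips
```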

The key benefit of this solution is that it enables funding of the costly initial development effort while preventing the formation of monopolistic structures down the road, which are highly undesirable for both drugs and secure chips.

In Closing

I suspect that the road that would take the secure hardware market to fruition is decades long. It may be another 20 or 30 years before people realize how important end-to-end device-level security is for the wellbeing of society. However, I have no doubt in my mind that the pervasive presence of computer software and hardware in our lives is going to drive a global cognitive transition to a different architectural framework. This transition will be characterized by adapting computer systems whose overall architecture has not changed since the 1960s to a world where security is the foremost requirement across the board. In this world, a new generation of highly reliable secure hardware is going to be paramount.

Any takers for the new trillion dollar market?

Skynet 2.0: The Case for Secure Hardware was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

The merging of the roles of investor and miner has created an investment approach unique to decentralized networks

Summary
  • Decentralized networks incentivize third parties to deploy services on their behalf in exchange for protocol-defined compensation. Examples of these services include security, transaction processing, lending, and the provision of computational resources such as storage, computation, and transcoding.
  • Investors that have acquired a stake of tokens early on in the life of a decentralized network are well equipped to provide core network services during its bootstrapping phase.
  • By actively participating in networks, an investor can open or increase a position in a network at a potentially competitive cost basis, and increase the likelihood of network success.
  • In some cases, active network participation is the only way to gain financial exposure to network growth.
Introduction

CoinFund recently announced the launch of Grassfed Network. Grassfed is an initiative that supports investment strategies involving active network participation. This announcement is the culmination of months of research and test deployments that CoinFund has performed across a number of networks, and we are extremely excited to collaborate with more teams on this work in the future.

However, after the announcement came out, I received many questions from friends and family who live outside the world of #cryptotwitter. Why should a fund actively participate in a network that it supports? How is this an investment strategy? Why should I care?

These questions have inspired me to take a step back and publish an overview of active network participation as an investment strategy. First, I will walk through the lifecycle of a decentralized network. Then, I will discuss how this approach to investing fits in.

Decentralized network lifecycle

On its journey towards decentralization, a decentralized network will go through three high-level phases: the fundraising period, the bootstrapping period, and the live period. In reality, these periods may overlap considerably, but this breakdown will be useful for our purposes.

The fundraising period

When a founder and development team are just starting out, they need to raise money from a small set of early stage investors to fund themselves. Depending upon the design of the decentralized network and planned future business model, the team might issue equity, or the rights to future tokens via a SAFT. In this stage, we see angel and venture capital investors enter decentralized networks.

The bootstrapping period

A lot happens in the bootstrapping period, my catch-all for the transition of a more or less centralized early stage project to a functioning decentralized network. The development team is hard at work completing their minimum viable protocol, and going through rounds of testing and audits prior to a formal public launch.

If the network has a token, it will be distributed publicly during this phase. The public might receive the token via airdrop or Merkle Mine, whereas early stage investors’ tokens may begin to vest.

The network must also begin to bootstrap value through the incentivization of third-party service providers. By virtue of their economic design and the culture of open-source protocols, no single entity, including the initial development team, should have all of the resources required to bootstrap and manage a decentralized network. Bitcoin is a useful example here. The Bitcoin network runs on mining hardware all over the world, contributing hashpower that is used to validate transactions. “Miners” are incentivized to contribute their time and resources because they are promised a Bitcoin-denominated reward for every block they mine. It would be prohibitively expensive for any one miner to buy enough hardware to provide 51% or more of the network’s hashpower (especially since Bitcoin would likely lose value as a result of such centralization).
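As a rough, back-of-the-envelope illustration of that cost (all figures below are hypothetical and chosen only for readability), the hardware bill alone for a would-be 51% attacker scales with the size of the existing network:

```python
# Back-of-the-envelope sketch (all figures hypothetical) of why acquiring a
# majority of hashpower is prohibitively expensive for a single miner.

network_hashrate_th_s = 50_000_000      # total network hashrate, TH/s (assumed)
miner_hashrate_th_s = 14                # hashrate of one mining rig, TH/s (assumed)
rig_cost_usd = 2_000                    # hardware cost per rig (assumed)

target_share = 0.51
# To control 51% of the resulting network, the attacker must add hashpower h
# such that h / (network + h) = 0.51  =>  h = network * 0.51 / 0.49
required_th_s = network_hashrate_th_s * target_share / (1 - target_share)
rigs_needed = required_th_s / miner_hashrate_th_s
hardware_cost = rigs_needed * rig_cost_usd

print(f"~{rigs_needed:,.0f} rigs, ~${hardware_cost:,.0f} in hardware alone")
# Ignores electricity, facilities, and the likely collapse in coin value
# if the network were visibly centralized.
```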

Over time, decentralized networks have evolved beyond the Bitcoin approach. They now offer many kinds of services, including infrastructure for micropayments, prediction markets, video transcoding, distributed file systems, and lending. The third-party providers of these services are compensated through a mix of inflation rewards, arbitrage opportunities, and fees paid by users of the network. Although the development team may subsidize a large portion of service provision early on, over time the majority of it will transition to a large network of service providers.

The live period

At this point, the network has formally launched, and the majority of the value being provided on the network comes from third-party service providers. The founding team may continue to work on the protocol directly or on an application running on top of it, but decision-making and development are increasingly managed by tokenholders through a formalized governance process, maybe even a DAO. By now, the token has been fully distributed and is trading on exchanges.

Takeaways from the decentralized network lifecycle

There are two takeaways I want to note from the decentralized network lifecycle. First, power over the network purposefully shifts from the founding team to the larger group of network participants and tokenholders over time.

Second, there are two sets of stakeholders involved in growing the protocol outside of the founding team: investors and third-party participants. Investors taking a more venture-style approach might enter a network earlier in the lifecycle, during the initial fundraising.

Participants, or “miners”, on the network respond to the financial incentives built into protocols and provide the software, hardware, and services required for the network to function. In the early days of a network, an investor who has established a large token stake and developed a deep knowledge of the protocol is likely best suited to kick off decentralization during the key bootstrapping period, and thus to become a participant on the platform as well. Especially early on, a cryptofund might even make the decision to loss-lead for the sake of network growth.

The merging of the investor and network participant

There is a large set of opportunities for network participation that will expand over time as the number and type of decentralized networks continue to grow. A cryptofund might consider any one of the following opportunities.

Note that many strategies here do not require a pre-established token holding, and some networks do not have a token at all.

Transaction processing

Connext has built infrastructure to allow for faster micropayments on the Ethereum network. A service provider to the network can run a hub to help process transactions, and collect fees in exchange for that service. Notably, the Connext network does not have a token. In order to gain direct access to the growth of the network, an investor’s only option is to run a hub.

Security provision (mining and staking)

The classic example here is Proof-of-Work (PoW) mining on the Bitcoin network. In return for the work provided, miners are rewarded in Bitcoin. Variations on this model, such as Proof-of-Stake, allocate more validation power and rewards to participants who have staked their network tokens.

Market-making and lending

Service provision on Decentralized Finance (aka, #DeFi) networks includes market-making and lending. A participant here is rewarded in interest or transaction fees.

One timely example is Uniswap, a decentralized exchange that has seen exponential growth over the past few weeks. There is no investable token. In order to gain exposure to the growth of the exchange, a cryptoinvestor would have to add liquidity and earn fees.
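As a rough illustration of how a liquidity provider earns in such a system, the sketch below models a constant-product pool that retains a small per-trade fee for liquidity providers. The 0.30% fee rate is in line with Uniswap’s early design; the reserve and trade sizes are hypothetical, and this is a deliberate simplification of how the exchange actually works.

```python
# Simplified constant-product AMM sketch showing fee accrual to liquidity
# providers. Reserves and trade sizes are hypothetical.

FEE = 0.003  # 0.30% per trade, retained in the pool for liquidity providers

def swap_eth_for_token(eth_in: float, eth_reserve: float, token_reserve: float):
    """Return (tokens_out, new_eth_reserve, new_token_reserve) under x*y=k."""
    eth_in_after_fee = eth_in * (1 - FEE)
    tokens_out = token_reserve * eth_in_after_fee / (eth_reserve + eth_in_after_fee)
    return tokens_out, eth_reserve + eth_in, token_reserve - tokens_out

eth_reserve, token_reserve = 1_000.0, 200_000.0   # hypothetical pool
k_before = eth_reserve * token_reserve

out, eth_reserve, token_reserve = swap_eth_for_token(10.0, eth_reserve, token_reserve)
k_after = eth_reserve * token_reserve

print(f"trader received {out:,.2f} tokens")
print(f"pool invariant grew from {k_before:,.0f} to {k_after:,.0f}")
# The growth in the invariant is the fee income that accrues pro rata to LPs.
```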

Resource provision

This includes the provision of storage and computing power by decentralized service providers. A few examples are Filecoin, Storj, Golem, and Truebit. A service provider earns compensation in the form of pre-programmed inflation rewards and/or fees in return for its work. Typically, the service provider must hold some tokens and lock them up for a set period of time in order to provide the work.

Governance

Some tokens grant governance rights to their holders. This might include the ability to vote on platform proposals, core protocol variables (like a defined fee rate), or the allocation of network foundation funds.

A few examples of networks with tokens that have governance features are Aragon, MakerDAO, and Decred. Stakeholders are incentivized to govern the network in a way that accrues value to the token over the long term.

Active network participation as an investment strategy

There are several reasons why a cryptofund might want to participate in a network in addition to investing in it.

First and foremost, network participation can provide a differentiated source of returns. Through bull and bear markets, a participant is guaranteed to receive compensation in exchange for its services. Taking into account the operational costs of participating in a network, it is quite possible that tokens can be acquired at a much cheaper rate in exchange for network participation than through purchase via SAFT or on an exchange.
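To illustrate the cost-basis argument with made-up numbers, consider a participant whose only cost of acquiring tokens is the operational overhead of running the service:

```python
# Sketch (hypothetical numbers) comparing the effective cost basis of tokens
# earned through network participation with tokens bought outright.

tokens_earned_per_month = 12_000        # protocol rewards earned by the operator
monthly_operating_cost_usd = 6_000      # servers, staff time, monitoring (assumed)
market_price_usd = 1.25                 # current exchange price of the token

cost_basis_participation = monthly_operating_cost_usd / tokens_earned_per_month
cost_basis_purchase = market_price_usd

print(f"cost basis via participation: ${cost_basis_participation:.2f} / token")
print(f"cost basis via purchase:      ${cost_basis_purchase:.2f} / token")
# Here participation acquires tokens at $0.50 each versus $1.25 on the market;
# the gap is the return on the operational work, before token price risk.
```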

In some situations, an investor’s potential returns and impact on the growth of a decentralized network are actually reduced by an inability to participate. For instance, when a protocol offers inflationary rewards as compensation to its participants, a buy-and-hold investor that does not actively stake or work on the network will see its stake erode in value over time. Some of the more popular networks offer relatively high yields: Livepeer clocks in at 150%, while Cosmos, Decred, and Loom all offer rates around 10–11%. (Read more about network staking yields here.)
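A small sketch makes the dilution point concrete; the supply and inflation figures below are illustrative rather than quotes for any particular network:

```python
# Sketch of how inflation rewards dilute a passive holder.

initial_supply = 100_000_000
holder_tokens = 1_000_000            # 1% of supply at the start
annual_inflation = 0.20              # 20% of supply minted per year as rewards
years = 3

supply = initial_supply
for _ in range(years):
    supply *= (1 + annual_inflation)

passive_share = holder_tokens / supply
staker_share = holder_tokens * (1 + annual_inflation) ** years / supply

print(f"passive holder's share after {years} years: {passive_share:.3%}")
print(f"staker's share (compounding rewards):       {staker_share:.3%}")
# The passive holder's 1.000% stake shrinks to ~0.579%, while a participant
# who stakes and compounds the full reward rate preserves their share.
```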

As noted earlier, in the absence of a token, financial exposure to a decentralized network might come in the form of participating on the network (see: Connext and Uniswap). For some networks, strategic development decisions are made through a governance process that requires staking in order to vote. The inability to stake tokens would exclude investors with large token stakes from this process.

For a venture-style investor, active network participation also represents a crypto-specific offering in its set of post-investment services. Providing value-added services after an investment is made is nothing new in the realm of venture capital, where investors pride themselves on their ability to provide portfolio companies with mentorship, key introductions, marketing expertise, pitch guidance, and hiring assistance. In some cases, these services have looked very similar to active network participation. When BlackRock made investments in the peer-to-peer lending startup Prosper a few years ago, it became a lender on the platform. Today, in order to gain financial exposure to the growth of lending platforms like Dharma and Compound, a cryptofund might follow a similar strategy.

Conclusions

The emergence of decentralized networks has blurred the line between the role of the investor and that of the network participant. Once the fundraising period has ended and the bootstrapping phase has begun, the cryptoinvestor may have the largest token stake in a network and know the team and protocol mechanics best. To help their investments succeed, cryptoinvestors are incentivized to become a part of these platforms and help the networks serve customers. Off-chain, investors bootstrap the community of developers and other participants that will contribute to network growth in the future. Actively participating in networks in this way represents the ultimate alignment of incentives between an investor and his or her portfolio network.

The trend of utilizing cryptoassets productively has already begun to take off. More cryptofunds are beginning to explore developing network participation strategies in-house, working with Staking-as-a-Service (StaaS) companies like Staked or Figment Networks, or aligning themselves with the participatory strategies of other funds through delegation. The recent article in the Wall Street Journal about Coinbase’s move to stake on behalf of its clients for certain assets should further educate the masses on the benefits of using assets to participate in decentralized networks.

One of the most exciting aspects of this new investment approach is its potential to democratize investing in decentralized networks. The ability to contribute work in return for compensation will allow smaller, less well-known investors to gain exposure to projects that would otherwise be available only through a traditional venture-style fundraise. This trend could also lead to the support and success of more diverse founders, teams, and investment funds.

As a team, we have found that active network participation has helped us to better understand decentralized networks, support teams, and develop a vision for the development trajectory of protocols. We are excited to see where this work takes us.

Disclaimer: The content provided on this site is for informational and discussion purposes only and should not be relied upon in connection with a particular investment decision or be construed as an offer, recommendation or solicitation regarding any investment. The author is not endorsing any company, project, or token discussed in this article. All information is presented here “as is,” without warranty of any kind, whether express or implied, and any forward-looking statements may turn out to be wrong. CoinFund Management LLC and its affiliates may have long or short positions in the tokens or projects discussed in this article.

Active Network Participation as an Investment Strategy was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


I am proud to announce that CoinFund has led a seed round of GEO Protocol — an open source overlay protocol that enables creation and interoperability of value transfer networks in a lightweight, cost-efficient, and scalable way.

GEO was first conceived in 2013 and the early development efforts were bootstrapped by its founder, Max Demyan, from 2015 until the fall of 2018, when several local partners joined forces to bring the project to the global market. I met Max in the fall of 2017 and was immediately taken by the strength and precision of his vision for democratized and accessible global payments. I was impressed by his understanding of how decentralized technology can help people around the world achieve their dreams.

GEO started as a distributed-trust payment network long before the concept was recognized by the larger decentralization movement. As Layer 2 networks gained prominence in 2018, it became clear that GEO could serve as a uniquely lightweight overlay network connecting trustless payments via Layer 2 off-chain processing with fiat-based payments using a variety of distributed-trust topologies. In our view, GEO is the best technical approach to enabling interoperability between different currency and payment infrastructures.

The strength of GEO Protocol is its simplicity combined with versatility. The project emphasizes routing and cycle-clearing features and eschews the notion of specialized payment connectors, which significantly simplifies the network topology. With GEO Protocol, even your cell phone can process payments on behalf of others. GEO offers multi-path transactions out of the box, significantly increasing the liquidity of the network compared to many other proposed Layer 2 approaches, such as Bitcoin’s Lightning Network and the Ethereum-based Connext. GEO Protocol provides a uniquely fast and scalable payment infrastructure that is potentially less expensive on a per-unit basis than blockchain payments, both on-chain and off-chain/Layer 2.
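To give a flavor of what multi-path payments buy you (this is an illustrative sketch, not GEO Protocol’s actual routing algorithm), a payment too large for any single route can still settle if it is split across several routes, each bounded by its smallest capacity:

```python
# Illustrative sketch of the idea behind multi-path payments: a single payment
# is split across several routes, each limited by the smallest trustline or
# channel capacity along it. Not GEO Protocol's actual algorithm.

def split_payment(amount: float, path_capacities: list[float]) -> list[float]:
    """Greedily allocate the payment across paths; returns per-path amounts."""
    allocation = []
    remaining = amount
    for capacity in sorted(path_capacities, reverse=True):
        send = min(capacity, remaining)
        allocation.append(send)
        remaining -= send
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient aggregate liquidity across paths")
    return allocation

# Hypothetical example: no single route can carry 120 units, but three can.
print(split_payment(120, [60, 45, 30]))   # -> [60, 45, 15]
```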

On the protocol’s roadmap are transactions that involve one or more currency exchanges as part of the same payment. This will make it possible to send cross-border payments worldwide using cryptocurrency as the carrier of inter-jurisdictional value, exchanging it into fiat at the endpoint, all as part of the same atomic transaction. This payment topology is likely to give rise to significant adoption of GEO Protocol in and between markets where the traditional payment infrastructure is underdeveloped or overly restrictive.

Fees within the protocol are subscription-based rather than charged as a percentage of notional value. Overall, the GEO payment network delivers a product that is easier to access and cheaper than existing solutions, including blockchain systems. In our view, this makes GEO highly disruptive to the current global payments marketplace.
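A quick comparison with hypothetical numbers shows why a flat subscription can beat a percentage-of-notional fee once payment volume is non-trivial:

```python
# Hypothetical comparison of a flat subscription fee versus a fee charged as
# a percentage of notional value, as in most payment networks.

monthly_subscription_usd = 10.0
percentage_fee = 0.015                     # 1.5% of notional (assumed)

def monthly_cost_percentage(notional_usd: float) -> float:
    return notional_usd * percentage_fee

breakeven = monthly_subscription_usd / percentage_fee
print(f"breakeven monthly volume: ${breakeven:,.0f}")   # ~ $667

for volume in (200, 1_000, 10_000):
    print(f"${volume:>6,} volume: percentage fee ${monthly_cost_percentage(volume):>7,.2f} "
          f"vs subscription ${monthly_subscription_usd:.2f}")
# Above the breakeven volume the subscription is cheaper, and its advantage
# grows linearly with volume, which is the per-unit-cost point being made.
```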

According to a McKinsey study, the revenue of the global payments market is expected to reach two trillion dollars by the year 2020. Yet global payments is just one of many markets that GEO Protocol’s technology has the potential to disrupt. Other potential use cases for GEO include freely tradable reward points and game currencies, inter-entity financial settlement, and automated IoT-based micropayments. Together, these use cases show a truly staggering growth potential for this technology.

To serve the customers of the payment network, the GEO founders launched a separate entity — GeoPay — whose mandate is to build out end-user B2C and C2C payment functionality on top of GEO Protocol. While the GeoPay mobile application is rudimentary today, the partnerships that the team is creating with financial institutions and cryptocurrency trading platforms in Eastern and Western Europe will lead to rapid scaling of the payment network. Eventually, it will be as easy to use as Venmo and PayPal, but with unfettered accessibility. We are confident that in partnership with GeoPay and other last-mile partners, GEO Protocol can grow to become a premier network for value exchange worldwide.

GEO’s cryptoeconomics are tailored to attracting service providers — network observers, currency exchangers, and others — which aligns the project closely with CoinFund’s generalized mining strategy. The team has worked for many months to understand the generalized mining landscape and to design its token economics in a way that will attract the necessary resources to the network, including servers, capital, and technology expertise.

We are supporting GEO because we are confident in the team’s ability to deliver a valuable technology platform that will democratize and simplify value exchange for everyone, disrupt the global payments market, and ultimately empower financial inclusion for many people worldwide. At CoinFund we look for projects that innovate deeply both in technology and in society. GEO Protocol is a perfect example of such a project, and we are excited to be the team’s lead partner.

Disclaimer: The content provided in this post is for informational and discussion purposes only and should not be relied upon in connection with a particular investment decision or be construed as an offer, recommendation or solicitation regarding any investment. The author is not endorsing any company, project, or token discussed in this article. All information is presented here “as is,” without warranty of any kind, whether express or implied, and any forward-looking statements may turn out to be wrong. CoinFund and its affiliates may have long or short positions in the tokens or projects discussed in this article.

CoinFund invests in GEO Protocol was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thanks to Alex Felix and Oleg Golubov for challenging me in the process of writing this post and providing comments both useful and difficult.

Picture this: you are having a romantic evening with your lover, but there is one extra person present, and he controls whether you can interact with your date at all. What’s even worse, you can’t have a date at all unless such a person is present, and you even have to pay him for the “service”. Am I describing some crazy dystopian world that has nothing to do with how we live today? On the contrary! This is the reality of business and financial relationships today, because most of them are currently mediated by paid intermediaries that do exactly this: they enable, and hence can prevent, complicate, or simply diminish the quality of, your business relationships.

When Bitcoin was first launched, it was designed to remove such intermediaries from one’s financial relationships. Soon after, it became obvious that the ideas and technologies pioneered by Bitcoin made disintermediation possible globally and in a variety of areas, not just payments. This takes me to the topic of this post: mutualized business relationship models, those that, akin to our lovers, are structured so that the participants may remove the mad chaperone and finally be alone.

Simply put, a mutualized business relationship is one where the ultimate participants are appropriately in charge of their interaction. This may have to do with payments, asset transfer, betting, accounting, borrowing, insurance, social networking, and even development of new pharmaceutical drugs. One can imagine a sharing economy system not run by a single huge company such as AirBnB or Uber, but constructed and operated by a large network of participants, customers and providers of services, for their ultimate benefit.

Two factors are driving the inevitable emergence of mutualized business. First, organizations that mediate the relationships of others have incentives that are misaligned with those of their customers. This prevents the increases in service quality and decreases in service price that would otherwise be the natural progression in today’s fast-paced and rapidly evolving world.

Second, there are functions and services that intermediaries simply cannot provide using traditional business models. For example, digital assets emerge in the decentralized/disintermediated world of crypto precisely because the counterparty risk of a single issuer is not acceptable in many cases. Moreover, the legal and regulatory risks that fall on an intermediary are often more than a single organization can agree to bear.

Current efforts at decentralization follow a similar philosophy, but take it to the extreme. Decentralized systems reject human authority completely, which leads to architectures that are largely impractical. Moreover, radically decentralized architectures often achieve the opposite effect. Rejecting human authority in areas such as recourse and identity leads to systems that cannot be adopted in a compliant way by risk-averse and regulation-minded enterprise customers. To solve this problem, projects are forced to adopt architectures in which centralized hooks must be installed (for example, to reflect the state of a centrally run identity database on the blockchain), effectively reverting the entire system to a centralized operating model.

So, to the extent that mutualization may be contrasted with decentralization as it stands today, the term refers to a balanced point on the spectrum between centralized and decentralized systems, where some human authority is intentionally built into the framework (recourse, KYC, and reputation are notable examples) and is not decentralized away completely, but rather is limited in a way that improves the system’s social guarantees while precluding the possibility of unilateral damage by a single party.

This post covers two specific trajectories for creating balanced mutualized systems. The first is one in which a centralized system moves towards greater decentralization to adapt to regulatory pressures, to align incentives between various types of stakeholders and customers, to reduce counterparty risks, or to gain access to digital ecosystems not possible in a fully centralized world. The second is one in which a radically decentralized system acquires more centralized features for reasons of compliance, practicality, and user experience.

First, let’s look at some notable examples of situations that are ripe for mutualization along the first trajectory (centralized → mutualized).

Example 1. DTCC

The Depository Trust & Clearing Corporation (DTCC) was created to simplify the process of inter-company settlement of securities. Note that DTCC is already somewhat mutualized by virtue of being a “user-owned and directed” organization. Nevertheless, it is clear that DTCC, which settled $1.7 quadrillion of value in 2011, has too much power and introduces too many frictions into the process despite being governed by its (largest) customers. The main reason for this is the centralization of operational responsibility. Any technological change to a complicated system that hosts such a tremendous amount of economic value places an unbearable amount of responsibility on its operators. Any individual responsible for such a system will be inclined to wait and be overly cautious with respect to any improvements, especially invasive ones. In other words, a guarantee of an insidious 0.1% inefficiency will be preferable to an improvement that risks a significant one-time loss. This is a situation where a well-governed blockchain infrastructure will prove to be more, not less, agile than incumbent centralized systems.

Example 2. Sharing Economy

Technology-enabled sharing economy networks like Uber and AirBnB provided a model for a new type of ultra-high-efficiency business. Compared with how taxis and hotels operated before, these organizations were able to provide a better service for a lower price by

(1) sharply increasing the size of the service-provider side of the market;

(2) enabling greater utilization of underutilized resources; and,

(3) inducing service providers to compete with each other on an open market.

These models are the beginning of mutualization and clearly demonstrate its benefits. We should expect this trend to continue. An early indication of the direction towards even greater mutualization comes in the form of sharing economy companies’ attempts to convince regulators that their customers should be able to own shares in the companies.

The goal is to eliminate the incentive misalignment between shareholders and customers on service pricing. Sharing economy companies, once large enough, create a strong network effect that makes them into effective monopolies or near-monopolies. At that point, the pricing will suffer greatly because the companies are governed and operated to the benefit of their shareholders.

In order to fully mutualize a large sharing economy network, one must carefully consider the function it serves. To date, all attempts to create unmediated two-sided markets have resulted in failure everywhere except the Dark Internet and shadow economy, where there are no other choices. This is because within such systems there must be individuals providing specialized mediation services (conflict resolution, recourse, guidance), which cannot be eliminated or replaced by purely technological solutions.

Would a tour guide in Barcelona own tokens of some decentralized apartment sharing solution?

In such cases, mutualization would mean the creation of competitive and appropriately incentivized marketplaces for such auxiliary services. For example, in each city where a Mutualized AirBnB operates, there must be a competitive marketplace for insurance, repairs, and assistance that provides a good experience for travelers and hosts in distress. This leads one to conclude that mutualized sharing economy solutions are not two-sided, but N-sided markets, where auxiliary services form a small but critical part of the process in a fair and productive way.

A depiction of an N-sided market for a sharing economy network (Image courtesy of Michael Zargham)

In the case of traditional sharing economy mediators such as Uber and AirBnB, the company itself fulfills the need for such auxiliary services. Obviously, its incentives are such that it will provide the minimum service necessary for the network to operate (and to avoid liability, which is why these companies often offer very good insurance coverage), but it will not improve on such services, because they do not, by themselves, generate profit, especially when the company dominates its market. In contrast, one can and should create incentives within a fully mutualized network that enable small local businesses to provide auxiliary services in a self-sustaining way.

We now turn to the other trajectory (decentralized → mutualized).

Example 3. Mutualized Finance

In crypto over the past two years, we have seen the rapid emergence of DeFi — decentralized finance. The term covers a broad range of financial services automated on a blockchain, so it warrants some specifics. Two projects that first entered the DeFi market and largely defined it (long before the word “DeFi” was coined) were MakerDAO and the Augur prediction market. Both drew on the blockchain’s ability to (1) cheaply issue fungible and programmable digital assets integral to their use case, and (2) establish a secondary market ecosystem that enables price discovery and trading of these assets with minimal involvement from the asset creators.

DeFi, as it currently stands, is an extremely compelling proof-of-concept. The market currently includes a broad range of nascent offerings, such as insurance, derivatives, lending products, and exchanges. Two aspects of blockchain enable this market: operational decentralization, which leads to a decrease in counterparty and regulatory risk, and programmable assets, which enable the creation of financial contracts that don’t require major technological efforts on the part of every entity that interacts with them.

The reason DeFi is no more than a proof-of-concept is that it is currently built on top of technology frameworks that are radically decentralized and therefore suffer from risks endemic to this environment. These include cyberattacks, lost private keys, and compromised wallets, as well as regulatory risks related to KYC/AML requirements that regulators are increasingly applying to decentralized systems. In order to become broadly adopted, decentralized finance must account for these requirements and, while preserving the essence of decentralization that enables these assets in the first place, build platforms that deliver highly practical, enterprise-grade solutions. Such improvements would fall along the decentralized → mutualized trajectory.

Decentralization vs. mutualization

In order to be useful, systems must include a variety of human-provided services. In mutualized systems, it is expected that some services will be provided by people endowed with a sufficient level of control to fulfill their role. In fully decentralized systems, no human has any more power over the system than any other. In other words, while decentralized systems see decentralization as their very purpose, mutualized systems prioritize the quality of the service and compromise in favor of centralization in cases where this improves that quality.

Nevertheless, mutualized systems mitigate centralization risks by marketizing the respective services, as in the example of local tourist offices that provide support to travelers in distress. When a human-based service is provided by a market in which multiple entities compete on cost and quality, the outcome is better, cheaper service for customers.

Software Development as a Mutualized Service

Importantly, one must ensure continuous and uninterrupted funding and management of the teams responsible for the core technology in such networks. Our position as investors in the space of decentralized technology affords us great visibility into the protracted struggles that development teams face with respect to both funding and decision-making. This is a large topic that warrants a whole other blog post.

In summary, there are significant ongoing concerns about the soundness of the business models such teams put in place. In some cases, teams run out of the funds received from token sales and cease operating; in other cases, teams issue equity in addition to tokens, which causes significant incentive misalignments between different types of investors. While this is a well-known issue in the advancement of any open-source technology, all of it puts the future of any single decentralization experiment in grave peril.

A mutualized approach should rectify this issue, because it permits greater complexity of multi-sided relationships than fully decentralized systems do. It would treat developers as just another player in the N-sided service market, with development being one of the N sides, properly incentivized and jointly governed by all participants. We see some early attempts at creating markets for development work in projects such as Gitcoin, and I am certain that this type of service will continue to improve and will eventually garner huge demand.

The Anatomy of Mutualization

Let’s now turn to a discussion of what sort of components, organizations, technologies, and efforts will enable the new wave of mutualization, as distinct from decentralization. The core insight is that mutualization requires a hybrid centralized-decentralized functional structure. While all functions that can be provided by software may, and should, gravitate towards decentralization, the functions that are essentially human in nature can and should remain in the charge of actual people. Where there exists an intrinsic need for centralized, human-provided services (software development being a notable and uncontentious example), mutualized organizations should build markets for such services, enabled by transparency and well-thought-through reputation systems. When incentivized by a market rather than a central mediator, such services arise in ways that are both resilient and fair to all participants.

The dichotomy of human-mediated and technology-mediated relationships.

From our experience with decentralized finance and other types of decentralized services, we have learned that they critically require an existing ecosystem of services, components, infrastructure, and organizations. The components in such an ecosystem once again naturally fall onto either the centralized → mutualized trajectory or the converse decentralized → mutualized trajectory. The former are:

(1) Issuers and exchanges, which enable trading of dedicated tokens of value. This allows token creators to establish the game-theoretical incentives for participants to provide the required services. These tokens must establish a free economy that enables both trading and price discovery.

(2) Auxiliary components, such as wallets, developer tools, webkits, and other software, which enable the developers to focus on the business model of the network, rather than a host of labor-intensive peripheral tasks.

(3) Entities willing to almost immediately run nodes (generalized miners) and other elements of the network’s infrastructure. These entities are incentivized by the network’s ability to issue tokens of value and are willing to take long-term risks and engage in viability analysis for all networks they consider participating in.

On the decentralized→mutualized trajectory lie all the components that bridge the gap between radically decentralized systems and traditional enterprise, allowing decentralized networks to deliver enterprise-grade services. These are:

(1) Legal frameworks, which enable real-world recourse for digitally-mediated relationships, especially if real-world assets are to be represented and traded digitally (see Mattereum).

(2) Identity systems, which ensure reliable identity verification of network participants and implement appropriate privacy guarantees while enabling counterparties, regulators, and law enforcement to find bad actors when required.

(3) Mutualization-aware recourse and arbitration companies, which are empowered via a clear governance procedure to rectify errors and malicious behavior in such networks.

(4) A broad range of support companies, which guide new or inexperienced users when needed, or simply assist people that encounter problems.

(5) Markets for developers, designers, experienced team managers, and other roles, which are critical for software-enabled networks to adjust to the changing conditions, fix problems, grow, and evolve.

(6) And, finally, working approaches to governance and decision-making, which enable a large group of people to jointly make decisions and coordinate around them.

Components that make possible cheap and fast creation of mutualized business models

Most importantly, the emergence of mutualized systems is predicated on technology platforms that answer their broad yet unique needs. Here we must recall that an effort to adopt blockchain technology to improve the efficiency of DTCC was scrapped, citing a lack of benefits relative to the complexity of the effort. To advance mutualization past the proof-of-concept stage, the tools we use to build such technology must advance significantly.

Most of the existing blockchain and peer-to-peer technology was built under a rather strict decentralization premise, and consequently created operational models too clunky for broad adoption. The most glaring consequence of the radical decentralization philosophy so far has been the lack of recourse by design, leading to stringent security requirements and untenable key management processes for all users. Even if you can avoid being hacked, losing your private key is fatal in most existing decentralized networks, a situation that deters all but the most ardent adopters.

Mutualized systems do not require radical decentralization. A network operated by ten well-known parties may be sufficient for the vast majority of the needs of a mutualized organization. It establishes a social contract much like that of a fully decentralized blockchain (resilience, some censorship-resistance, some dispersion of responsibility), and where such social promises are relaxed, the resulting improvement in user experience is significant enough to warrant the compromise.

In fact, even centralized systems can now be engineered in a way that provides social guarantees similar to those of blockchain. A centrally run server that executes on trusted hardware, such as an Intel SGX enclave, can reliably ensure non-interference with the system’s functioning by its operator and preserve the privacy of participants.

The Drivers

I’d like to propose several ways in which mutualized systems will arise and take hold. These potential trends inform our view as to what we might expect in the future and what products we should support as investors.

Driver 1. Asset Tokenization and Programmable Finance

Observing the progress of DeFi to date, we can’t help but wonder what would drive the adoption of programmable finance by large enterprises. After all, programmable digital assets possess unique properties and offer a significant increase in operational efficiency compared to traditional financial contracts. The improvements come from the basic principle that blockchain-based digital assets are write-once, use-everywhere software products. Creating a new financial contract using such automation tools is many orders of magnitude cheaper, because it frees all trading participants from developing the same or similar software in order to use the contract. Additionally, the game-theoretical structure of DeFi products and markets enables models that simply can’t exist in traditional finance, a prime example being MakerDAO, which demonstrates that decentralized systems are able to provide loans at significantly lower costs than traditional borrowing.
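For readers unfamiliar with how such a loan works, the sketch below captures the core invariant of a MakerDAO-style collateralized position: debt can only be drawn, and remains safe, while the collateral value stays above a minimum ratio. The 150% figure matches the one used by early single-collateral Dai; the prices and position sizes are hypothetical, and the real system involves oracles, stability fees, and liquidation auctions omitted here.

```python
# Minimal sketch of the collateralized-loan logic behind a MakerDAO-style
# position. Prices and position sizes are hypothetical.

MIN_COLLATERAL_RATIO = 1.5   # 150%, as in early single-collateral Dai

def max_debt(collateral_eth: float, eth_price_usd: float) -> float:
    """Largest stablecoin debt this collateral can support."""
    return collateral_eth * eth_price_usd / MIN_COLLATERAL_RATIO

def is_safe(collateral_eth: float, eth_price_usd: float, debt_usd: float) -> bool:
    """A position below the minimum ratio becomes eligible for liquidation."""
    return collateral_eth * eth_price_usd >= debt_usd * MIN_COLLATERAL_RATIO

collateral, price, debt = 10.0, 150.0, 800.0   # hypothetical position
print(max_debt(collateral, price))             # -> 1000.0
print(is_safe(collateral, price, debt))        # True at $150/ETH
print(is_safe(collateral, price * 0.7, debt))  # False after a 30% price drop
```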

So if the incentives are all there, what is the main barrier to adoption? Simply put, until decentralized systems provide usability and recourse features appropriate for broad adoption by risk-averse enterprise customers, DeFi will remain firmly within the purview of the few daring experimenters playing with it today. The required features include recourse, enterprise-level identity and KYC solutions, and organization-oriented features such as role-based delegation of responsibility.

Driver 2. Supply chain finance and accounting

Significant inefficiency in global trade stems from the haphazard informational permeability of organizational boundaries. In some cases, organizational boundaries are used to hide bad behavior; in others, they prevent, complicate, or add expense to financial audits, which by definition require accurate data from all counterparties in a given set of transactions. Sometimes the difficulty of collecting the required information arises from the sheer variety and number of in-house systems involved in the process. Sometimes information is intentionally withheld, because organizations are unwilling to expose private data to each other.

In this area, mutualized systems enable a best-of-both-worlds approach in which data is shared on standard informational rails while privacy is preserved where necessary. Additionally, the ability of mutualized systems to create useful economic incentives may be used to reward organizations for sharing, in a way that makes entire markets significantly more efficient.

Driver 3. Capital Markets

The early ICO experiments failed, and for good reason, but the lesson remains front and center: given access to a large enough pool of investors, capital becomes cheaper. Attempts to give a large global pool of investors access to the equity of startup companies have been made before, but the expense of entering a global market is large for a centralized issuer or trading platform. Conversely, decentralized systems made a splash in just this area because they are by definition global. Both the demand for capital and the demand for access to investment are strong drivers that will cause this area to develop further, as evidenced by the emergence of nascent security token platforms.

Driver 4. Customers are Tired

Yes, customers are tired. Customers of Facebook are tired of their data being sold to advertisers; patients are tired of being bankrupted by drug manufacturers; homeowners are tired of being screwed by their banks, who, in turn, are tired of the inefficiency of DTCC; courier companies are tired of Amazon; farmers are tired of paying exorbitant insurance premiums, most of which go to the insurance companies, not towards..

Upcoming CoinFund Rabbithole Talks
https://www.eventbrite.com/e/coinfund-rabbithole-talks-with-lane-rettig-ewasm-and-preston-van-loon-prysmatic-labs-tickets-55778404710

Rabbithole Talks is a monthly CoinFund meetup, focused on doing deep dives into innovative token economics and technical design decisions for interesting blockchain-based projects. Past guests have included Doug Petkanics of Livepeer, Hunter Hillman of Connext and Luke Duncan of Aragon. We’re excited to continue the 2019 series with a roster of incredible speakers.

February 28, 2019 with Lane Rettig (eWASM) and Preston Van Loon (Prysmatic Labs)
  • Presenting: Ethereum 1.x and 2.0 roadmap, eWASM vs EVM, Casper economics, sharding vs composability tradeoff, followed by a Q&A.
  • You can RSVP here.
March 26, 2019 with Kevin Owocki (Gitcoin)
  • Open Source powers billions of dollars of economic value for the world. Why, then, is Open Source often built on the backs of volunteers? Why does Open Source lack a business model? Blockchain technology has the potential to solve the age-old problem of Open Source sustainability. In this talk, Gitcoin’s founder, Kevin Owocki, will explore the emergent blockchain Open Source ecosystem and share insights from the millions of dollars of OSS transactional value that the Gitcoin network has facilitated. Read more on the Gitcoin blog here.
April 16, 2019 with Thibauld Favre (Continuous Organizations)
  • Aligning stakeholders’ interests in an organization is hard. The current fundraising models (ICO or private fundraising) impose significant limitations on the mechanisms available to align stakeholders’ interests. A Continuous Organization (CO) is a new model designed to make organizations more fluid and more robust by overcoming those limitations. Using the Continuous Organization model, organizations can set themselves in continuous fundraising mode while benefiting from solid and flexible mechanisms to align stakeholders’ interests in their financial success. Read more here.
April 30, 2019 with Daniel Heyman (PegaSys)

Stay updated on CoinFund events: follow us on Twitter, and sign up to our mailing list for additional event announcements.

CoinFund 2019Q1 | Calendar of Events was originally published in The CoinFund Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
