


Bitcoin is said to bank the unbanked. Why? In contrast to the traditional banking system, where new clients must provide personal information to open an account, the only requirement to join the Bitcoin financial system is technical: a device to run the software that communicates with the Bitcoin network. No entity decides who is allowed to participate in the network, an interesting feature that is also useful for a censorship-free press application.

Blockchains are already used to store non-financial data for diverse purposes, e.g. to prove authorship of ideas or the existence of a document. One of the largest files successfully stored in the Bitcoin blockchain is an image of Nelson Mandela: a user managed to insert this photo of about 14 KB. In truly decentralized blockchains, a valid transaction (one following the standard format defined by the blockchain) almost always makes it onto the ledger. Hence, not only is there no entity that grants permission to join the network, there is also no one who might filter out content.

The mechanism that “chains” blocks of transactions together makes it practically impossible to tamper with data once they enter the blockchain. And, crucially for avoiding a single point of failure, the blockchain is replicated, i.e. nodes run a local copy of it. Therefore, blockchains are not prone to downtime, or, in the words of a Chinese student, “there is no 404 on the blockchain” (source). All this makes blockchain a promising candidate for journalists seeking censorship-free media.

Solutions in Bitcoin Overview

Bitcoin transactions must reference coins that were previously sent to a public key address. To authorize a new transaction, the owner of the coin must prove that the coin belongs to her public key address by using her private key to generate a cryptographic signature. The information that lets the remaining network nodes verify the correctness of the signature is written in the input scripts of Bitcoin transactions. Furthermore, transactions specify the receivers of the transfer, again identified by public key addresses, which are written into the output scripts of the transactions.

Non-financial transactions must appear valid to the Bitcoin miners so that they do not discard them. The conditions for a valid transaction are:

  • Minimum Transfer Amount: The minimum output value of a transaction is currently about 546 satoshis, so as not to be considered dust (as of June 7, 2019).
  • Minimum Fees: The fees are determined by the size of the output and input scripts. The current average fee per byte is about 39 satoshis (cf. bitcoinfees.info, as of June 7, 2019).
  • Maximum Data Size: The upper size limit of a standard Bitcoin transaction is 100 KB. Input and output scripts may carry specific data of limited size.
  • Unspent Coin: The input script must reference an unspent output script.

Transactions that deviate from these rules are considered non-standard and will not be picked up by most miners. As we will see below, transactions containing non-financial data may look different from standard transactions. Non-standard transactions may make it into the blockchain anyway, though some with lower probability, and they are more vulnerable to future protocol changes.
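As a rough illustration, the cost floor these rules impose on a data-carrying transaction can be sketched as follows. The helper below is ours (not from any Bitcoin library), using the June 2019 figures cited above: the 546-satoshi dust limit and the ~39 satoshi/byte average fee.

```python
# Hypothetical cost estimate for a data-carrying Bitcoin transaction,
# using the standardness figures quoted in the article (June 2019).
DUST_LIMIT_SATOSHI = 546          # minimum non-dust output value
FEE_PER_BYTE_SATOSHI = 39         # average fee rate
MAX_STANDARD_TX_BYTES = 100_000   # 100 KB standard transaction limit

def estimate_min_cost(tx_size_bytes: int, n_data_outputs: int) -> int:
    """Minimum satoshis spent: the size-based fee plus one dust-level
    amount burned per output that carries data instead of a real address."""
    if tx_size_bytes > MAX_STANDARD_TX_BYTES:
        raise ValueError("exceeds 100 KB standard transaction limit")
    fee = tx_size_bytes * FEE_PER_BYTE_SATOSHI
    burned = n_data_outputs * DUST_LIMIT_SATOSHI
    return fee + burned

# A 400-byte transaction hiding data in 3 fake P2PKH outputs (60 bytes of payload):
print(estimate_min_cost(400, 3))  # 400*39 + 3*546 = 17238 satoshis
```

The point of the sketch is that the fee scales with transaction size while the burned dust scales with the number of data-carrying outputs, so both figures matter when choosing an insertion method.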

Data Insertion Methods

Output script:

In [1] and [2], the authors identify five template-compliant output script types that can carry data without involving the input script. Since miners cannot distinguish between legitimate public key address hashes and arbitrary binary data, output scripts can easily be used to insert data that is indistinguishable to the miners. A disadvantage of using output scripts is that users must burn bitcoins, as they replace valid receiver addresses with arbitrary data. The following output scripts can be used to insert arbitrary data:

Pay-to-Public-Key (P2PK): Data are stored in place of the 33-byte compressed or 65-byte uncompressed public key, together with a non-dust amount of bitcoin to burn.

Pay-to-Public-Key-Hash (P2PKH): Data are stored in place of the output public key hash, together with a non-dust amount of bitcoin to burn. This allows storing 20 bytes per output.

OP_RETURN: This output type can store up to 80 bytes per transaction. It is provably unspendable, so miners do not need to track it in the UTXO set.
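To make the OP_RETURN option concrete, here is a minimal sketch (ours, not from a Bitcoin library) of how such an output script is assembled at the byte level: the OP_RETURN opcode followed by a standard data push.

```python
# Sketch: building a raw OP_RETURN output script carrying arbitrary data.
# 0x6a is OP_RETURN; payloads up to 75 bytes use a direct length-prefix push,
# while larger payloads (up to the 80-byte limit) need OP_PUSHDATA1 (0x4c).
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c
MAX_OP_RETURN_PAYLOAD = 80

def op_return_script(data: bytes) -> bytes:
    if len(data) > MAX_OP_RETURN_PAYLOAD:
        raise ValueError("OP_RETURN payload limited to 80 bytes")
    if len(data) <= 75:                    # direct push: <len><data>
        push = bytes([len(data)])
    else:                                  # OP_PUSHDATA1 <len><data>
        push = bytes([OP_PUSHDATA1, len(data)])
    return bytes([OP_RETURN]) + push + data

script = op_return_script(b"hello blockchain")
print(script.hex())  # 6a1068656c6c6f20626c6f636b636861696e
```

Because the script starts with OP_RETURN, nodes know the output can never be spent and do not add it to the UTXO set, which is exactly why this method creates no bloat.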

Multi-Signature: E.g. in a 1-of-3 multi-signature script, data can be stored in place of the three public keys, or with one real and two fake keys, in which case the transaction stays spendable (more details in [1]).

Coinbase transaction: Arbitrary data of up to 100 bytes can be stored in one transaction per block, but this option is only available to miners.

Input script:

This requires a more sophisticated technique. Input scripts allow larger data to be inserted, but must maintain valid semantics. To achieve this, the input script must refer to a valid output script, e.g. by using a dead branch inserted previously. These transactions are not stored in the unspent transaction output (UTXO) set. As described in [1] (see loc. cit. for more details), there are two special ways to do so:

Pay-to-Script-Hash (P2SH): The input script refers to the unspent coin. Data can be stored in the redeem script (520-byte limit) and/or in the part of the input script preceding the redeem script (bounded by the input script’s total limit of 1,650 bytes). More advanced methods to store data are mentioned in [1]:

  • Data Drop Method: Data are stored in the redeem script.
  • Data Hash Transaction: Data are stored in the part of the input script preceding the redeem script.

Data Reconstruction:

Data larger than 20 bytes stored via P2PKH must be split across multiple output scripts, either within one transaction or, if larger than the maximum transaction size (see the table below), across multiple transactions. The pieces need to be linked together, either onchain or offchain, to allow reconstruction of the datasets stored in the blockchain. One may use an output script to store metadata, e.g. a reference to the transaction ID of the next chunk of data stored in another transaction.
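The splitting step can be sketched as below. This is our simplification: real tools also embed linking metadata (such as next-transaction references) and checksums, which we leave out.

```python
# Sketch: splitting a payload into the 20-byte chunks that fit where a
# P2PKH public-key hash would normally go, then reassembling them.
CHUNK = 20

def split_for_p2pkh(data: bytes) -> list[bytes]:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    chunks[-1] = chunks[-1].ljust(CHUNK, b"\x00")  # zero-pad the last chunk
    return chunks

def reassemble(chunks: list[bytes], original_len: int) -> bytes:
    # The original length must be stored somewhere (e.g. in metadata)
    # so the zero padding can be stripped on reconstruction.
    return b"".join(chunks)[:original_len]

payload = b"censorship-resistant publishing on Bitcoin"  # 42 bytes
chunks = split_for_p2pkh(payload)
print(len(chunks))  # 3 fake "address" chunks
assert reassemble(chunks, len(payload)) == payload
```

Note that the decoder needs the payload length (or some end marker) out of band, which is one reason services like SatoshiDisk prepend a length field.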

Input scripts may store data using the Data Drop and/or Data Hash methods. As shown in [1], within a single transaction the maximum file size is 96,060 bytes. Larger files again require indexing of the split data.

Selected Content Insertion Services:

  1. SatoshiDisk: The service uses a single transaction with multiple P2PK or P2PKH outputs. The inserted data is stored together with a length field and a CRC32 checksum to ease decoding of the content.
  2. Apertus: This service allows fragmenting content over multiple transactions using an arbitrary number of P2PKH output scripts. Among other features, Apertus also works with Litecoin, Dogecoin, and other chains.
  3. CryptoGraffiti: A web-based service that allows reading and writing messages and files from and to the Bitcoin blockchain.

The authors of [1] found that P2PKH is the most widespread method, although it creates the most unspendable UTXO bloat, requires the largest overhead, and is the most expensive. They argue that its popularity can be explained by its simplicity of implementation.


[1] A. Sward, I. Vecna, and F. Stonedahl. Data Insertion in Bitcoin’s Blockchain. Ledger Journal, 2018.

[2] R. Matzutt, J. Hiller, M. Henze, J. H. Ziegeldorf, D. Müllmann, O. Hohlfeld, and K. Wehrle. A Quantitative Analysis of the Impact of Arbitrary Blockchain Content on Bitcoin. In Proceedings of the 22nd International Conference on Financial Cryptography and Data Security (FC). Springer, 2018.

For our complete research on Censorship-free publishing on the blockchain click here.

The post Publishing Text and Images in Bitcoin appeared first on CoinFabrik Blog.

CoinFabrik Blog by Mariana Soffer - 1w ago

After our articles Smart Contract Auditing: Human vs. Machine and Auditing Solidity code with Slither we decided to test another static analysis tool from ChainSecurity called Securify.


It is offered through a nice and simple web interface that allows inserting code by pasting it, uploading a zip file, or cloning it from a git repository.

Once you press the “Scan Now” button, the analyzer runs all its checks and eventually prints the issues it found:

It will also highlight the related lines of code, for easier inspection:

The “Request Audit” button will lead you to a form you can use to request an audit from ChainSecurity.


We tested this tool using the same contracts we used in the other articles. Sadly, some of the contracts we wanted to test made the analyzer time out. This made the zip and git options useless, so we ended up pasting the code individually for each contract.

These are the results:

“Fails” refers to cases where it failed to analyze the contracts altogether, not showing any results. Overall, these results are within the range of what is expected from other tools.


These results show once more that auditing tools still need to improve in order to become more consistent and dependable. It is unlikely they will replace human auditors anytime soon. Even so, they are still useful for auditors, as they highlight possible errors which can lead to the detection of more complex attack vectors.

The post Auditing with Securify appeared first on CoinFabrik Blog.


If you’ve been following crypto news over the past few months, you have probably heard the words Polkadot and Substrate. However, you might also be curious about what they are exactly, why they’ve been in the news so much and how relevant they are to your business. This article aims to give you a clearer picture and introduce you to these groundbreaking technologies so you can put them to work on your future projects!

What are Polkadot and Substrate?

Polkadot, developed by Web3 and Parity Technologies, seeks to solve the scalability problem through interoperability by building a ‘blockchain of blockchains’. Its architecture is influenced by sharding in that it splits the transaction-processing workload across a large number of different interoperable parallelised chains, or parachains, and then collates all this information on a single relay chain that cryptographically secures it, all via NPoS (Nominated Proof of Stake). The name Polkadot refers to the network as well as the protocol it operates with, which aims to be as general as possible: parachains can be either public or private, and, thanks to Polkadot’s reliance on WebAssembly, they can have any sort of logic functionality as long as they comply with a few simple requirements that make them ‘pluggable’ into the relay chain.

This is where Substrate comes into play. In order to facilitate mass adoption, Web3 and Parity have also developed a framework for building parachains as easily and as flexibly as desired, as well as bridging existing blockchains to Polkadot. However, Substrate is independent from Polkadot and can also be used for creating stand-alone blockchain projects without having to do cumbersome things like implementing consensus algorithms or P2P networking from scratch. It is also possible to build a Polkadot-compliant blockchain in any language, without using Substrate – this framework is just a tool to make everything easier. Furthermore, since Substrate allows for dynamic WebAssembly-encoded state-transition functions, Substrate-based chains can be upgraded at any time without the need for a fork. This article gives a more detailed overview of the framework’s benefits, inner workings and capabilities. Substrate is built with Rust (there is, however, a JavaScript client implementation) and right now you’ll need some proficiency in it to do anything important, but the development team aim for this to change in the future.

Right now, Polkadot is still in development, but several proof-of-concept testnets have been released and the mainnet genesis block is scheduled to come out by the end of 2019.

A small technical summary

As mentioned earlier, Polkadot uses NPoS and PoA for consensus, so running a node should not take a massive toll on your resources. Right now, it’s on its fourth proof-of-concept, running on a testnet called Alexander with the GRANDPA consensus algorithm (GHOST-based Recursive Ancestor Deriving Prefix Agreement). You can refer to the link above or the formal specification for a more thorough explanation of GRANDPA and PoS in general, but basically, instead of being mined, blocks are sealed by certain participants called validators who are chosen based on their stake. Stake is measured in tokens called DOTs, which are not intended to be valuable as a cryptocurrency but rather for internal use. Testnet 4 in particular has places for 20 validators, which you can see here.
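As a greatly simplified illustration of the "validators chosen based on their stake" idea, the toy function below (names and numbers invented) fills a fixed number of validator slots with the highest-staked candidates. Real NPoS uses a far more elaborate election that also accounts for nominators, so treat this only as an intuition aid.

```python
# Toy stake-based validator selection (NOT Polkadot's actual algorithm,
# which uses a Phragmén-style election over validators and nominators).
def select_validators(candidates: dict[str, int], slots: int) -> list[str]:
    """candidates maps account -> staked DOTs; returns the slot winners."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [account for account, _stake in ranked[:slots]]

stakes = {"alice": 5_000, "bob": 12_000, "carol": 8_000, "dave": 1_000}
print(select_validators(stakes, 2))  # ['bob', 'carol']
```

With 20 slots on Testnet 4, the analogous selection simply runs over all candidates bonding DOTs on the Alexander testnet.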


According to the official FAQ, a considerable amount of earnings from Web3’s initial DOT crowdsale are destined to fund the development of Polkadot. Although the multi-sig wallet destined for Polkadot contributions was targeted by the notorious 2017 hack, the team have stated that since it did not contain all of Web3 Foundation’s funds, the plan to build Polkadot has not been affected. Polkadot’s development has also been funded by various parties, amongst them the British government (see page 2 of the whitepaper).

What can I do with Polkadot and Substrate? Polkadot

All of this sounds exciting from an R&D standpoint, but you might be wondering how your business could benefit from these technologies. One important perk of using Polkadot is pooled security: the fact that all parachain transactions are collated into relay chain blocks ensures that instead of competing with other chains for processing resources, your chain’s transactions are secured by all validators in the network. If you’ve ever heard of merge mining, it’s somewhat similar to that, but for every single parachain. Thus, aside from not needing to implement consensus from scratch, you will actually get stronger consensus without the need for a massive network of your own, which might take a while to build.

Another important benefit is interoperability. The Polkadot protocol allows arbitrary fee-less message passing through the relay chain between all blockchains in the ecosystem, be they parachains or bridged chains. No need to build additional bridges!

The Polkadot protocol is open source, so you can build a parachain or bridge an existing chain without having to pay anyone. This does not, however, impose any restrictions on the kind of things you can do with your chain, since it is in the Polkadot team’s plans to allow private businesses on top of it (note: the linked article was published before the Alexander testnet’s launch).


Substrate is currently on its 1.2 beta release. So far, it has a WebAssembly runtime engine, basic runtime modules (building blocks for all sorts of functionality), PBFT consensus and Libp2p networking, which means you can already have a chain prototype up and running. Depending on what trade-off you want to make between working high-level for ease of development and low-level for customisability, you can choose to either launch your chain almost instantly by running a binary, implement its logic with parts from the Substrate Runtime Module Library, or build it from scratch with just some core components; you can have a look at the official repository for more information. Any of these options will save your developers time and effort so your business can focus on what your product does rather than reinventing the wheel. However you decide to create your blockchain, Substrate ensures high performance through light block headers and use of WebAssembly; your chain compiles from Rust down to a WASM executable! Right now, PBFT is the only available consensus, but support for other algorithms is planned.

Tools and ecosystem

In addition to these inherent features of Polkadot and Substrate, the Polkadot ecosystem already has several tools developed by the Web3 Foundation and various third parties so that you can build even more powerful projects! Here are just a few of them:

Some tools and projects in development:
  • Speckle OS, a comprehensive framework for a friendly Polkadot user experience. Planned functionalities include a wallet, a dApp discovery service and a browser.
  • Polkasource, an ITSM for running your own node in a reliable manner.

See here for more useful tools and tutorials.

Can I connect my existing blockchain to Polkadot?

The Polkadot FAQ lists the following two criteria as necessary for connecting a previously existing blockchain:

  • ‘…the ability to form compact and fast light-client proofs over the finality and validity of its blocks and state change information, [like] new UTXOs in a Bitcoin-like chain or logs in an Ethereum-like chain’
  • ‘…a means by which a large set of independent authorities […] can authorise a transaction, [for example] recognition of threshold signatures, such as the Schnorr scheme, or a smart contract able to structure logic against a multi-signature condition’

Ethereum-like and private PoA chains are especially suited for this (the FAQ lists a few nuances), whereas Bitcoin and Bitcoin-like chains are not (again, according to the FAQ). However, these blockchains can be bridged to Polkadot and maintain their own consensus, though without pooled security. Web3 and Parity have not yet released any specifications or documentation pertaining to bridges, so you’ll have to wait a while before actually being able to bridge your chain.

How can I start developing with Polkadot and Substrate?

Now that you have a clearer picture of what Polkadot and Substrate are, it’s time to start building with them! Here are some good first steps to try out – you don’t need to do all of them, though the first one is a prerequisite for pretty much all the others:

Join Substrate Technical chat for more info.

Projects with similar goals

It’s worth noting that Polkadot isn’t the only initiative to build an interoperable, secure and scalable network of blockchains – this is currently one of the most important issues in distributed systems, so it’s only to be expected that several different projects would try to tackle it.

Out of these, Cosmos is the one most similar to Polkadot: it’s also PoS-based with a PBFT-based consensus algorithm (Tendermint) and its purpose is essentially the same. It basically consists of many interoperable zones (analogous to parachains) connected to a main hub (analogous to the relay chain). It even has its own framework, Cosmos SDK, which is written in Go. Unlike in Polkadot, however, each zone has its own security. Although inter-blockchain communication is yet to be implemented, Cosmos launched its mainnet in March 2019. This article delves much further into Cosmos.

AION Network, explained in far more depth here, is another initiative with blockchain interoperability as one of its goals; its proposed solution is called the AION Transwarp Conduit. Similar projects include Wanchain, Blocknet and Ark.


If Polkadot achieves its goal of being a ‘blockchain of blockchains’, it will be a massive step for decentralised/distributed computing as a whole, so it’s probably a good idea to get your business on it. We hope that this article has given you a better idea of what Polkadot and Substrate are, why they can be useful for your future blockchain projects and how to get started with them.

The post Polkadot and Substrate: a Promising yet Challenging Blockchain Technology appeared first on CoinFabrik Blog.


Micropayments take place in pay-as-you-go software service models, micro-donations, and the Internet of Things (IoT). In these contexts, payments for values that are usually below the smallest unit of fiat money (e.g. $0.001) are needed. Prepaid cards can solve this problem. Another approach uses cryptocurrencies on the blockchain, which has the advantage of allowing exact real-time billing. Our focus in this post is on cryptocurrencies. We refer the interested reader to this survey for a general panorama of micropayment schemes.

The difficulty with low-value payments is that processing and transaction fees might be higher than the actual transaction amount. You wouldn’t send a payment of US$ 0.01 for a fee of US$ 0.50. The same applies to the blockchain world, where transactions usually require a fee. In Ethereum, the current minimum fee for a transaction is about US$ 0.00851, which is almost the same amount as a US$ 0.01 payment. Micropayment solutions help lower the fees and make blockchain a viable payment medium for the services mentioned above.

In this blog post we review existing micropayment solutions. We will continue the series with follow-up posts, including in-depth studies of selected projects and a high-level micropayment comparison using metrics such as actual transaction fees, functionalities, etc. This should serve as a guide to answer the question: which is the adequate micropayment system for my use case? There are many micropayment projects which are becoming more and more mature these days, so it is important to stay up to date.

Micropayment Solutions

Most micropayment systems share the idea of transacting many payments between two peers outside the blockchain while guaranteeing a secure way to claim the latest account balances onchain. This is why offchain payments allow real-time payments with low values: the possibility to claim final settlement gives real value to the cost-free transactions outside the blockchain. There are several ways to achieve this, and each approach has its own pros and cons. Another approach consists in condensing many small payments between many users into a single batched payment (detailed in the second part of the post).

We will discuss the main micropayment concepts looking through the following simplified model:


  • Sender: User who transfers money – the source of a transaction.
  • Receiver: User who is getting paid – the destination of a transaction.


  • Setup: Everything which is necessary before sending a transaction.
  • Transaction: The process to realize a transaction between sender and receiver.
  • Withdrawal: The cash-out of the micropayment account balance to the user’s address.

We present the concepts from a high-level Ethereum-blockchain perspective, though the logic can be realized in different ways on different blockchain technologies.

Payment Channels

Payment channels are one of the most popular micropayment solutions. The Lightning Network for Bitcoin, or its equivalent Raiden for the Ethereum network, are certainly well-known projects. These networks allow peers to send payments through offchain communication channels where the blockchain is only needed to deposit funds and withdraw account balances when the payment cycle is closed or disputes between sender and receiver must be solved.

The Payment Channel Model:

Setup: This model requires an Ethereum smart contract where the sender (and optionally also the receiver) deposits her funds. The addresses of both sender and receiver are stored in the contract, and if there is a penalty escrow, the receiver also needs to deposit funds. The smart contract regulates the rules to close a channel.

Transaction: Through off-chain communication channels, sender and receiver mutually sign updates of their account balances (labeled by a nonce) with their digital signatures.

Withdrawal: To unlock the funds, either party is able to close the channel by sending the latest mutually signed update to the smart contract which distributes the funds accordingly. In a short time window, the withdrawal can be challenged by sending a more recent update.
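The settlement logic of the three steps above can be sketched as a toy model. This is our simplification: a "signed update" is reduced to a (nonce, balances) pair both parties are assumed to have signed, and the contract simply honors the highest-nonce update it has seen when the challenge window closes.

```python
# Toy payment-channel contract: highest-nonce mutually "signed" update wins.
# Real channels verify actual digital signatures; we omit cryptography here.
class ChannelContract:
    def __init__(self, deposit_sender: int, deposit_receiver: int = 0):
        self.total = deposit_sender + deposit_receiver
        # Before any update, the deposit itself is the state (nonce 0).
        self.best = (0, {"sender": deposit_sender, "receiver": deposit_receiver})

    def submit_update(self, nonce: int, balances: dict[str, int]) -> None:
        """Close or challenge: accept only newer updates that conserve funds."""
        if sum(balances.values()) != self.total:
            raise ValueError("balances must sum to the deposited total")
        if nonce > self.best[0]:
            self.best = (nonce, balances)

    def settle(self) -> dict[str, int]:
        return self.best[1]

ch = ChannelContract(deposit_sender=100)
ch.submit_update(1, {"sender": 90, "receiver": 10})  # off-chain payment of 10
ch.submit_update(2, {"sender": 75, "receiver": 25})  # a later update wins
ch.submit_update(1, {"sender": 90, "receiver": 10})  # stale challenge ignored
print(ch.settle())  # {'sender': 75, 'receiver': 25}
```

The nonce ordering is what makes the challenge period work: submitting an outdated state can always be overridden by the counterparty’s more recent one.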

There are two lines of research in the field of payment channels: what can be done or executed within a channel, and how its nodes can be interconnected into a network. At the lower end of this range are simple payment channels between two peers, while at the other end of the spectrum we “dream” of contract creation and execution within channels composed into networks via intermediaries (Generalized State Channels). There are many teams working on this topic, e.g. Blockstream, Celer, Connext, Magmo, Perun, Raiden, Sprites, and more. Counterfactual just released its first MVP. There is also research making use of Trusted Execution Environments (TEEs) to replace the need to be online during challenge periods in the payment channel approach. While the payment channel is open, the TEEs maintain channel states internally, free from tampering thanks to the guarantees of trusted execution. Prominent examples are Teechain and Teechan.

Probabilistic Payment Channels

The origin of probabilistic payment schemes goes back to Rivest and the Peppercoin project (Rivest & Micali). Since that system still had to assume trusted third parties to run the protocol, blockchain is a promising candidate for overcoming this trust assumption. Payments are implemented on the basis of the expected value of probabilistic payments: micropayments are paid out by running an appropriately biased lottery for a larger payment. I.e., instead of paying US$ 0.01 each second, a payment of US$ 1.20 goes through every minute with probability ½, so that in the long run the expected amount is paid.
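The lottery example above can be checked numerically: paying US$ 1.20 once per minute with probability ½ has the same expected value as US$ 0.01 every second.

```python
# Expected value of a biased payment lottery: ticket value times win probability.
def expected_per_minute(ticket_value: float, win_probability: float) -> float:
    return ticket_value * win_probability

per_second = 0.01
assert abs(expected_per_minute(1.20, 0.5) - per_second * 60) < 1e-9
print(expected_per_minute(1.20, 0.5))  # 0.6 dollars per minute, on average
```

The catch, discussed further below, is variance: over short time spans the receiver may be over- or under-paid, which is why the scheme suits continuous, granular services.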

Probabilistic Payment Model:

Setup: This approach requires an Ethereum smart contract where the sender deposits tokens. This contract has an associated “puzzle”, and anyone having a solution to this puzzle can withdraw the deposit.  

Transaction: The sender periodically sends “lottery tickets” to the receiver, generated using verifiable random functions which cannot be tampered with.

Withdrawal: The recipient can send the valid ticket to the Ethereum smart contract in order to retrieve the corresponding amount.

Early work on this approach is due to R. Pass and A. Shelat (see this article, which also gives a good introduction to the topic). Probabilistic Lightning and the Orchid Protocol are other proposals in this field. Compared with payment channels, probabilistic payments can go even below the unit “Gwei” and are more efficient in the sense that no mutual signing is required. On the other hand, to be practical, the service provided must be continuous and granular enough for the probabilistic variance to become negligible.

Sidechain & Plasma Chain

In the sidechain approach, the parent blockchain is pegged to a secondary blockchain, the sidechain (see this article). A two-way peg enables interchangeability of assets at a predetermined fixed rate. Sidechain technology primarily helps to increase performance by outsourcing, but not necessarily to lower fees. Blockstream Liquid and RSK are examples of sidechain technology in use. The Plasma chain is a special kind of sidechain technique that allows micropayments and also scalable smart contracting. A smart contract anchored to the Ethereum parent blockchain takes care of settling the off-chain transactions occurring in the Plasma child chain.

Plasma Chain Model (Layer-2):

Setup: This model requires creating the Plasma smart contract on the parent chain and running a sidechain, the Plasma chain. Operators or gateway oracles are necessary to redeem tokens on the Plasma chain, and vice versa. Checkpoints containing minimal information about the Plasma chain (Merkle roots) have to be sent to the parent chain, where they are used to challenge exit requests.

Transaction: After depositing tokens on the parent chain, senders receive the corresponding amount of tokens on the Plasma chain. The sender sends a transaction to the receiver on the Plasma chain and waits for the confirmation of the consensus in the Plasma chain.

Withdrawal: The receiver makes an exit call, and the gateway prepares the redemption of tokens on the parent chain. If the request survives the challenge period, the amount is transferred to the user’s account.
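The Merkle-root checkpoints mentioned in the setup step can be sketched as follows: the Plasma chain’s transactions are compressed into a single 32-byte commitment that is sent to the parent chain. This is a generic construction, not a specific Plasma implementation.

```python
# Sketch: committing a batch of Plasma-chain transactions as one Merkle root.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3"]
print(merkle_root(txs).hex())  # 32-byte commitment sent to the parent chain
```

Any attempt by the operator to alter a checkpointed transaction changes the root, which is what lets users challenge fraudulent exit requests against the committed state.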

The Plasma chain approach was proposed in this article. Based on it, many teams have dedicated their time to turning these ideas into easy-to-use and scalable solutions for applications. A prominent example is the Loom Network, with DelegateCall as a working application. There are also OmiseGo, Plasma Cash, Minimal Viable Plasma, Plasma Debit, Plasma Snapp, Plasma Nano, Mono Plasma, Bankex and SKALE, just to name a few. See also State-of-the-Art of Ethereum Layer-2.

Batched Payments / Pooled Payments

Batched payments are useful in situations such as dividend or reward payments, which are in reality one-to-many payments. Receivers might accumulate their funds and withdraw them when they need to, or when it makes sense to withdraw the amount.

Setup: The sender and receiver IDs need to be registered in an Ethereum smart contract. Depending on the system, there might be operators and monitors processing and observing the protocol.

Transaction: The sender deposits tokens in the smart contract, including at the same time the list of receiver IDs with their corresponding amounts.

Withdrawal: The receivers accumulate many payments and withdraw by providing a proof of payment, or just by calling an operator to process the withdrawal. In the latter case, a challenge period is required to allow starting the verification game.
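The steps above can be sketched as a toy in-memory model (the class and method names are invented, not any project’s actual API): one batched deposit credits many receivers at once, and each receiver withdraws its accumulated balance whenever the gas cost is worth it.

```python
# Toy pooled-payments ledger: one deposit call distributes to many receivers;
# receivers accumulate balances and cash out on demand.
class PooledPayments:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def deposit_batch(self, payouts: dict[str, int]) -> None:
        """Single on-chain call carrying a whole list of small payments."""
        for receiver, amount in payouts.items():
            self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def withdraw(self, receiver: str) -> int:
        """Cash out the accumulated balance; returns 0 if nothing is owed."""
        return self.balances.pop(receiver, 0)

pool = PooledPayments()
pool.deposit_batch({"alice": 3, "bob": 5})
pool.deposit_batch({"alice": 2})
print(pool.withdraw("alice"))  # 5
```

The economics come from amortization: the sender pays gas for one transaction per batch instead of one per receiver, and each receiver pays withdrawal gas only once for many accumulated payments.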

The advantage of batched payments is that setup costs are low, so the protocol can be profitable even for non-recurrent payments. Proposals are mentioned in Pooled Payments, Scalable Payment Pools, Merkle Air Drops and How to send ether to 11,440 people, but most are still missing a secure implementation, at least as far as we understand. Another interesting example is Lumino from RSK, which uses delta compression.

Offchain Payments Using Relayers and Crypto Proofs

This category is similar to batched payments, but makes use of cryptographic tools that allow verifying the correctness of a computation. The only assumption in this context is the availability of operators or relayers who process the transactions; cryptography guarantees that no wrong information can enter the blockchain.

Setup: This approach might require a ceremony to set up the zk-SNARK cryptographic tool, or make use of other protocols like zk-STARKs or any other convenient Verifiable Computation (VC) protocol. Senders and receivers register their addresses and account balances (plus a nonce for updates) in an Ethereum smart contract. Relayers are needed to batch transactions and send updates to the blockchain. The sender has to send a transaction to the smart contract in order to have a positive account balance.

Transaction: A sender broadcasts a transaction, and a relayer gathers many of them and produces a zk-SNARK proof that the transactions are valid (e.g. the transaction amount is smaller than the account balance, etc.), together with a proof of a correct update of the account balances.

Withdrawal: The receiver can call the Ethereum smart contract to withdraw from its account, providing a proof of its account balance (a Merkle proof).
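The Merkle proof in the withdrawal step can be sketched generically: the receiver presents its leaf (account and balance) plus the sibling hashes along the path, and the contract recomputes the path up to the root it stores. This is a textbook construction, not any particular rollup’s encoding.

```python
# Sketch: verifying a Merkle branch from an account-balance leaf to the root.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_branch(leaf: bytes, branch: list[tuple[bytes, str]], root: bytes) -> bool:
    """branch is a list of (sibling_hash, side) pairs, side in {'L', 'R'}
    saying whether the sibling sits to the left or right of the current node."""
    node = h(leaf)
    for sibling, side in branch:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root

# Tiny two-leaf tree: root = H(H(a) + H(b))
a, b = b"alice:100", b"bob:40"
root = h(h(a) + h(b))
print(verify_branch(a, [(h(b), "R")], root))  # True
```

Since only the root lives on-chain, a proof is logarithmic in the number of accounts, which keeps withdrawal gas costs low even for very large account sets.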

Vitalik’s zk-SNARK approach uses two Merkle trees to store account balances and account IDs in a contract, and users receive the Merkle branch to deposit to and withdraw from their accounts. A very recent proposal in this line of thought is reported in this post, using zk-SNARKs and rollups. Liquidity Network is working on an implementation of the NOCUST protocol, a commit-chain using zk-SNARKs for regular checkpoints. Beyond that, there are ideas for a SNARK-based sidechain for ERC20 tokens and PlasmaSnark, which replace the Merkle proof with SNARKs.

Thinking outside of the Bitcoin and Ethereum worlds, blockchains like EOS, which run a different consensus mechanism, can offer lower fees. Scalable blockchains are solutions too, but there are security assumptions to be considered.

The post A Short Guide Through the Universe of Blockchain Based Micropayment Systems appeared first on CoinFabrik Blog.


Coinfabrik has been hired to audit the EasyPool smart contracts.
We start this report with a summary of the smart contracts provided by the client and a list of the analysis methods used to audit them. Next, we summarize the files we analyzed and the public-facing functions provided by the ProPool contract. We then detail our findings, ordering the issues by severity, followed by all the observations we considered important to add. Finally, we close this audit report with a conclusion explaining how the auditors assess the reviewed code and which corrections are most important for it to work flawlessly and securely.


The contracts audited are from the EasyPool repository at https://github.com/gitigs/easypool. The audit is based on the commit 17a1e1ae336a92e3d4d7686aa1cb26aaea3f1f82.

The audited contracts are:

  • ProPool.sol: Proxy contract for the investor pool manipulation functions
  • common/Affiliate.sol: Affiliate management functions
  • common/CMService.sol: Contract that provides functions for deploying a new pool contract and setting the fee service, the pool factory and the registry.
  • common/FeeService.sol: Contract that manages the fee in a given pool
  • common/PoolFactory.sol: Pool deployment function
  • common/PoolRegistry.sol: Pool registering implemented through the emission of events
  • common/Restricted.sol: Operator management functions for restricting use to given operators.
  • library/ProPoolLib.sol: Code for pool initialization and lifetime functions, managing deposits, whitelist and withdrawal.
  • library/QuotaLib.sol: Share claiming functions for manipulating the quota given by the pool

The ProPool contract provides a public interface for managing the pool. The implementation is in the ProPoolLib contract. This latter contract uses another library, defined in QuotaLib, to define an internal data structure within the contract’s representation of an investment pool. There’s also a dependency on a pair of functions from the FeeService contract.

As for CMService, it depends on the interfaces of PoolRegistry and PoolFactory, which inherit from the Restricted contract, adding operator management functions. PoolFactory also depends on ProPool, since it deploys new instances of the ProPool contract. CMService allows setting the fee service, changing the pool factory contract, and similar administrative tasks. PoolRegistry provides a function to register new pools by emitting an event.

Fortunately, our analysis of these contracts found only a few minor issues, mostly related to maintainability. Below we detail the checks we made, the improvements that could help development, and our conclusion.

As for the audit, the following analyses were performed:

  • Misuse of the different call methods: call.value(), send() and transfer().
  • Integer rounding errors, overflow, underflow and related usage of SafeMath functions.
  • Old compiler version pragmas.
  • Race conditions such as reentrancy attacks or front running.
  • Misuse of block timestamps, assuming anything other than them being strictly increasing.
  • Contract softlocking attacks (DoS).
  • Potential gas cost of functions being over the gas limit.
  • Missing function qualifiers and their misuse.
  • Fallback functions with a higher gas cost than the one that a transfer or send call allows.
  • Fraudulent or erroneous code.
  • Code and contract interaction complexity.
  • Wrong or missing error handling.
  • Overuse of transfers in a single transaction instead of using withdrawal patterns.
  • Insufficient analysis of the function input requirements.
File Summary

ProPool.sol

No vulnerabilities were found in this contract, but some compiler warnings were noted. This is a contract that provides public-facing presale pool management functions, which are implemented in the ProPoolLib library.


Affiliate.sol

A lack of use of SafeMath was noted, but no issues were found. This contract has functions for controlling affiliates within the contract.


CMService.sol

No vulnerabilities were found, as this contract was simple and straightforward. This contract can be called to set the addresses of the fee service, the pool factory and the pool registry, and to deploy the ProPool contract through a call to poolFactory. This way the implementation can be changed without redeploying.


FeeService.sol

This contract does not use the SafeMath library consistently, but no major security concerns were found. The objective of this contract is to provide fee settings, which are then attached to the pool through the functions provided in CMService.sol.


PoolFactory.sol

No vulnerabilities were found in this contract. Since it only provides a simple function for deploying a new ProPool contract, there is little room for vulnerabilities.


PoolRegistry.sol

No vulnerabilities were found in this contract. It’s a simple contract providing a function that emits an event when called, recording the pool address and details.


Restricted.sol

No vulnerabilities were found here. This is an implementation providing storage and functions that manage operators, which are used to restrict function execution in PoolFactory.sol, PoolRegistry.sol and Affiliate.sol.


ProPoolLib.sol

The inconsistent use of SafeMath was noted, but no major issues were found. This contract provides the implementation that ProPool.sol uses.


QuotaLib.sol

Inconsistent SafeMath usage was noted, along with a lack of documentation. While later commits added some, it still wasn’t enough. Other than that, the contract was straightforward, without much complexity. This contract is used to manipulate the refund and token quotas in the ProPoolLib.sol contract.

Function summary

The ProPool contract has many public functions. We will describe how they fit together, giving a brief description of what each does.


Calls init(), initializing the pool fields. It doesn’t initialize the pool state, leaving it at the default of Open. It creates the first group of eight by calling setGroupSettingsCore.

Once created, the pool accepts deposits. If it is cancelled, the contributions can be withdrawn together with the remaining balance. If it proceeds, once the presale address is paid, the confirmTokenAddress function can be called, changing the state of the pool to Distribution and sending the creator and service fees. Otherwise, if the refund sender address is set, the pool goes into the FullRefund state, and investors can claim their refund share and tokens.
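The lifecycle described above can be summarized as a small state machine. The following Python sketch uses the state names from the description; the transition table is a simplification of the contract logic, for illustration only:

```python
from enum import Enum, auto

class PoolState(Enum):
    OPEN = auto()             # default after init()
    CANCELLED = auto()
    PAID_TO_PRESALE = auto()
    DISTRIBUTION = auto()
    FULL_REFUND = auto()

# Allowed transitions, as described in the lifecycle above (simplified).
TRANSITIONS = {
    PoolState.OPEN: {PoolState.CANCELLED, PoolState.PAID_TO_PRESALE},
    PoolState.PAID_TO_PRESALE: {PoolState.DISTRIBUTION, PoolState.FULL_REFUND},
}

def can_transition(src: PoolState, dst: PoolState) -> bool:
    return dst in TRANSITIONS.get(src, set())

assert can_transition(PoolState.OPEN, PoolState.PAID_TO_PRESALE)
assert not can_transition(PoolState.CANCELLED, PoolState.OPEN)  # terminal state
```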


Calls setGroupSettingsCore, and then calls groupRebalance, which balances contributions made to the group.


Sets the state of the pool to Cancelled.


Contributes to a certain group. Internally, it calculates the contribution, marks the group as existing and updates the group contribution.


Enables the whitelist for a certain group, including the participants who should get in and excluding those who shouldn’t, as passed by parameter. Afterwards, it rebalances the group.


If presaleAddress isn’t set yet, it is set at this point. The pool is then put into PaidToPresale mode, and fee-to-token mode is set if needed: the commission is sent to the presale address. Then the funds are sent to the presale address by calling addressCall, which calls the address with the needed value and data, and emits an event.


Sets presaleAddress and sets the lockPresale boolean, which should be used for verification. This boolean is part of a locking mechanism that should be used, but it is never used again in the code.


Changes the state to Distribution and saves the tokenAddress parameter in the tokenAddresses array if the balance > 0.


Sets the refund sender for the pool (for use in the fallback function)

Fallback function

Verifies sender is the specified refund address.


Withdraws from a group: all of the participant’s remaining balance and some of the participant’s contribution. It then transfers this amount to the sender.


What this function does varies according to what state the pool is in:

In the FullRefund or Distribution states it calls withdrawRefundAndTokens, which calls withdrawAllRemaining, taking the group’s remaining balance and transferring it to the sender. It then calls withdrawRefundShare, which is the part that withdraws the refund shares. Then withdrawRefundAndTokens checks whether the pool is in fee-to-token mode (where the fee for the creator is sent directly to the presale), and if there was an effective contribution, the token balances are transferred.

In the Cancelled or Open states it calls withdrawAllContribution, which takes all of the sender’s contribution and remaining balance from each group and sends the sender that amount.

In the PaidToPresale state it only withdraws the participant’s remaining balance and sends it to the sender.


Emits an event for registering. This function comes from ERC223, a proposal that did not gain much traction.


Getter for pool structure.


Returns nothing if the pool is not in the Distribution or FullRefund states.

Otherwise, it calculates the balance it will refund and returns the token addresses and the balance of each.


Gets the contribution, the remaining balance and the whitelist array.


Calculates the refund shares and the shares (total minus claimed) of each tokenAddress in the pool for a given address.


Simple getter for group details, returning the respective fields.


Gets the library version.

Detailed findings

Critical severity

None found.

Medium severity

None found.

Minor severity

Lack of usage of “lockPresale” boolean

The pool has two fields: presaleAddress and lockPresale. These control where the balance will be sent when calling payToPresale and whether the address has already been set. In the init and payToPresale functions the presaleAddress field can be set via a parameter; however, the lockPresale boolean is not set, defaulting to false, which leaves the internal state inconsistent. Furthermore, the checks for whether the presale address was set are based on whether it is zero, and the boolean, when set (which only happens when calling lockPresaleAddress), is never actually used to check whether the address was locked.

The state of the pool is not explicitly initialized

When calling the init function, the pool’s state is never explicitly initialized. It therefore defaults to the Open state, but we still advise changing this, as it can have an impact when the code and its versions change.

Warnings on compilation

The contracts emit warnings when compiled. Removing them would improve the auditability of the contracts. While they may not cause any problem directly, they can obscure other, more important warnings. We recommend fixing these before deployment, mostly as a way of ensuring that everything is in order.

Inconsistent usage of SafeMath

SafeMath is not used on some lines of QuotaLib.sol, even though it is declared as being used in the following statement:

library QuotaLib {
    using SafeMath for uint;

We recommend either removing the statement or applying SafeMath consistently, since the inconsistency could lead to errors, and future changes could introduce bugs. We also recommend using SafeMath in the other contracts; it was evidently considered but decided against, as some commented-out parts of the code declare:

// We could use SafeMath to catch errors in the logic,

// but if there are no such errors we can skip it.

// using SafeMath for uint;    

This was found in ProPoolLib.sol, Affiliate.sol and FeeService.sol. While it is true that SafeMath serves no purpose if the code is correct, it acts as a safeguard when that is not the case. We advise the team to reconsider this design decision, since it would guard arithmetic operations against overflow.
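The trade-off is easy to demonstrate. The following Python sketch simulates uint256 arithmetic to show the difference between unchecked (wrapping) addition, as in Solidity before 0.8, and a SafeMath-style checked addition:

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a: int, b: int) -> int:
    """Solidity (< 0.8) semantics without SafeMath: wraps around silently."""
    return (a + b) & UINT256_MAX

def safe_add(a: int, b: int) -> int:
    """SafeMath-style semantics: abort (revert) on overflow instead of wrapping."""
    c = a + b
    if c > UINT256_MAX:
        raise OverflowError("addition overflow")
    return c

assert unchecked_add(UINT256_MAX, 1) == 0   # a balance update could silently wrap to zero
assert safe_add(1, 2) == 3
try:
    safe_add(UINT256_MAX, 1)
    assert False, "should have reverted"
except OverflowError:
    pass
```

A logic error feeding bad values into unchecked arithmetic corrupts state silently; with the checked version the transaction aborts instead, which is why we suggest keeping SafeMath even when the logic is believed correct.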

Enhancements

Not enough documentation

In the QuotaLib.sol contract there is not enough documentation. It would be advisable for the development team to fix this, as documentation helps the auditability and legibility of the contracts. There are many comments of the form:


* @dev …


Changing these into meaningful commentary can prevent errors when handling the contract, so we suggest following through.

Non declarative function naming

There are functions named sequentially (getPoolDetails1, getPoolDetails2, …, withdrawAllRemaining1, withdrawAllRemaining2, …). These are poor names that don’t reflect what the functions actually do, and they can lead to trouble when modifying the code in the future. Clearer names would reflect the differences between them.

Old compiler pragmas

The contracts have the following pragma directive:

pragma solidity ^0.4.24;

We remind the development team that the Solidity compiler has been updated. While this isn’t critical in these particular contracts, newer versions include bugfixes whose absence could have caused problems. As usual, we recommend staying on top of new versions, which helps avoid problems with things like ABI encoding, parser issues, et cetera.

Missing deployment scripts

The repository does not include deployment scripts. We inferred the deployment order of the contracts from what they do, since they follow a factory pattern, and we gathered an inheritance graph for the contracts (taking the libraries into account).

HasNoEther was removed from OpenZeppelin

The contracts CMService, PoolFactory and PoolRegistry inherit from HasNoEther, but it was removed from OpenZeppelin because it could be misleading: there is no actual way to guarantee that a contract won’t hold ether. See the following discussion: https://github.com/OpenZeppelin/openzeppelin-solidity/pull/1254


We annotate below some of the verifications performed on the contracts, to indicate that they were carried out even though no critical issue was found during the audit.

  • Misuse of the different call methods: call.value(), send() and transfer().
    We found no incorrect use of call(), send() or transfer(). Calls to these methods were made on checked input, with correct coverage of requirements.
  • Integer rounding errors, overflow, underflow and related usage of SafeMath functions.
  • Race conditions such as reentrancy attacks or front-running.  
    • The only call to an external contract is in the addressCall function, which is private and called only from payToPresale with the presaleAddress that has been set. Since payToPresale can only be called by the pool administrator, and the pool’s state change is done before the call, the pool keeps a consistent internal state, so reentrancy cannot cause damage.
    • There are many setters for addresses that, if insufficient checks were done on input, could cause transfers to unwanted addresses. However, these functions have the correct modifiers, so only the administrator of the pool can change them.
  • Misuse of block timestamps, assuming things other than them being strictly increasing.  
    • Timestamps aren’t used in the contract at all, so attacking the contract from that angle is impossible.
  • Contract softlocking attacks (DoS) / unbounded gas usage.
    No function in the contracts has a loop that can be abused to cause a soft lock or unbounded gas usage.

No critical issues were found, though we noted some possible legibility issues. Overall, the contracts had decent documentation, except at a few points, but in general they were simple enough to follow that this wasn’t critical. There is some code we recommend reviewing, but in general the state of the codebase is good; the only trouble is some inconsistent snippets that reduce legibility and could also lead to bugs if additions are made.

Disclaimer: This audit report is not a security warranty, investment advice, nor an approval of the EasyPool project since Coinfabrik has not reviewed its platform. Moreover, it does not provide a smart contract code faultlessness guarantee.

The post EasyPool Smart Contract Security Audit v2 appeared first on CoinFabrik Blog.



As a member of CoinFabrik, during the last year I dedicated myself to teaching about Ethereum at the University of Palermo, and given what I saw in my classes, I needed a set of tools that would let me observe and quickly analyze different situations related to Ethereum.

This ranged from analyzing the problems of a node and its connection to the rest of the network, through studying a particular blockchain, to seeing how the different transactions of a smart contract behaved.

In view of this, and due to the material conditions of the work, I needed an environment where students could do all of that without being connected to the internet. That is why I ended up configuring my personal computer with those tools. However, my computer’s configuration was not portable enough for my needs. This motivated me to create a rapidly configurable environment using Docker and Docker Compose as the main tools. This article describes how to use those tools.

It should be clarified that the Docker configuration was developed together with PeGa! (our sysadmin), who helped get the project into optimal shape.

The software described below is intended as a tool for both educational and development purposes.

The purpose of this configuration is to generate a Docker environment with two interconnected nodes (a.k.a. Node1 and Node2) running Geth, monitored with Ethstats (at localhost:3000). We can then use MetaMask as well as Remix to connect to Node1 (localhost:8545) and send simple transactions, like sending ether, or complex transactions like creating or calling a smart contract.

Set up

To start our environment we just need to clone the DockerEthereum repository:

git clone https://github.com/CoinFabrik/DockerEthereum

Go to the respective folder with

cd DockerEthereum

And just start our environment with

sudo ./start.sh

This last command generates the images required for the ecosystem and runs each container. We will have one container per service; in this case, three containers: one for each node and one for the monitoring service.
Attach or connect to the containers as follows:

sudo docker attach <container_name>


sudo docker exec -it <container_name> /bin/bash

To detach use “Ctrl + P”, “Ctrl + Q”.

The Geth services run under pm2, so if you want to see a node’s logs, you must first attach to the node and then inspect the log:


sudo docker attach Node1

pm2 log node1

Sending Ethers

The command:

sudo ./scripts/Faucet.sh <<Amount>> <<Address>>

will send the amount of ether given in the first parameter to the address specified in the second.

In order for the transaction to actually take place, we must start the mining process.

Start and stop mining

sudo ./scripts/MinerStart.sh

sudo ./scripts/MinerStop.sh

These commands send the start and stop commands to the Node1 container but, because the two nodes are connected and synchronized, we can see in the Ethstats panel (localhost:3000) that the two nodes hold the same blockchain.

Monitor the Nodes

Just visit localhost:3000.

Configure MetaMask and Remix

To connect with MetaMask, we select the “localhost 8545” option, whereas the connection from Remix to MetaMask is made by selecting “Injected Web3” in the Environment option.

The post Dockerized Ethereum Private Testing Environment Compatible with MetaMask and Remix appeared first on CoinFabrik Blog.

CoinFabrik Blog by Mariana Soffer - 2M ago


CoinFabrik was asked to audit the contracts for the ArcadierX project. First, we provide a summary of our discoveries; then, we present the details of our findings.


The contracts audited are from the ArcadierX repository at https://github.com/arcadierx/arcadierx. The audit is based on the commit 985f836527544ea3878f21d2ee68c47e079ccdd4.

The audited contracts are:

  • IReceiver: Defines an interface for contracts that are callable after receiving tokens.
  • LSafeMath: Overflow checked arithmetic operations.
  • Ownable: Common privilege function modifier.
  • ERC20Basic: ERC20 Token interface with only transfer and balance.
  • BasicToken: Implements ERC20Basic interface using IReceiver to call receiving contracts.
  • ERC20: ERC20 Token full interface.
  • StandardToken: Fully implements the ERC20 token interface using IReceiver to call receiving contracts.
  • ARCXToken: The final implementation details of the token and entry points are specified in this contract. This is the one that gets deployed in the blockchain.

No issues of critical, high or medium severity were found.

The following analyses were performed:

  • Misuse of the different call methods: call.value(), send() and transfer().
  • Integer rounding errors, overflow, underflow and related usage of SafeMath functions.
  • Old compiler version pragmas.
  • Race conditions such as reentrancy attacks or front running.
  • Misuse of block timestamps, assuming anything other than them being strictly increasing.
  • Contract softlocking attacks (DoS).
  • Potential gas cost of functions being over the gas limit.
  • Missing function qualifiers and their misuse.
  • Fallback functions with a higher gas cost than the one that a transfer or send call allows.
  • Fraudulent or erroneous code.
  • Code and contract interaction complexity.
  • Wrong or missing error handling.
  • Overuse of transfers in a single transaction instead of using withdrawal patterns.
  • Insufficient analysis of the function input requirements.
Detailed findings

Minor severity

No Solidity pragma

The contract code provided does not have a Solidity version pragma, which is typically expected of contract code. Adding the pragma makes it clear which compiler version the project uses. The code itself doesn’t compile on version 0.5.0 or later, which further reinforces this issue. We recommend adding a Solidity pragma to address it.

Solidity errors on newer compiler version

The Solidity compiler reports errors because of the usage of the obsolete qualifier constant. The contracts also use the old constructor declaration and do not explicitly define the data location of array-typed parameters, which is required as of version 0.5.0. Using old Solidity versions is not advised, since newer versions address problems and bugs that older versions may have, especially considering the immaturity of the compiler at the time of writing this document. We recommend migrating to a newer version of the Solidity compiler.

Inline assembly should be encapsulated in a function

There is inline assembly in both transfer functions to check whether an address is a contract:

assembly {
    // Retrieve the size of the code on the target address; this needs assembly.
    codeLength := extcodesize(_to)
}
Code like this should be encapsulated in a function to prevent errors. Assembly is much more dangerous when called in the middle of a function, as it has access to the entire surrounding context. We recommend moving this code into a dedicated function.

Insufficient documentation

The functions in the ARCXToken contract are not documented. Documenting them would help clarify the functionality expected of them. We recommend documenting these functions using the NatSpec format found in the other, documented contracts.

There is also no documentation regarding the reentrancy into the contracts allowed by the IReceiver (ERC223) interface. We recommend documenting this behavior as well, since it may allow vulnerabilities if the code is modified in the future.

Commented code

There is commented code in the transferFrom function:

//code changed to comply with ERC20 standard
balances[_from] = balances[_from].sub(_value);
balances[_to] = balances[_to].add(_value);
//balances[_from] = balances[_from].sub(_value); // this was removed
allowed[_from][msg.sender] = _allowance.sub(_value);

Commented-out code is a problem, as it clutters the code and obscures its main intention. We recommend removing it.

Typo in variable name

There is a typo in the variable ingnoreLocks at ARCXToken. It should be called ignoreLocks. We recommend fixing typos as they weaken the readability of the contracts.

Repeated code in transfer and transferFrom

The transfer and transferFrom implementations contain repeated code that could be shared. It is common to have a private function implement the shared logic and have these two functions call it instead. Repeated code increases the surface area for errors, so eliminating it should be considered where possible.
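The suggested refactoring can be sketched in Python terms as follows. The names and the simplified balance/allowance model are illustrative, not the contract's actual code:

```python
class Token:
    """Toy balance/allowance model; not the audited contract's code."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.allowed = {}   # (owner, spender) -> remaining allowance

    def _move(self, src, dst, value):
        """Single shared core used by both entry points."""
        if self.balances.get(src, 0) < value:
            raise ValueError("insufficient balance")
        self.balances[src] -= value
        self.balances[dst] = self.balances.get(dst, 0) + value

    def transfer(self, sender, to, value):
        self._move(sender, to, value)

    def transfer_from(self, spender, src, to, value):
        allowance = self.allowed.get((src, spender), 0)
        if allowance < value:
            raise ValueError("insufficient allowance")
        self.allowed[(src, spender)] = allowance - value
        self._move(src, to, value)

t = Token({"alice": 10})
t.transfer("alice", "bob", 4)
t.allowed[("alice", "carol")] = 5
t.transfer_from("carol", "alice", "bob", 3)
assert t.balances == {"alice": 3, "bob": 7}
```

With the movement logic in one place, a future fix (for example, an extra input check) only needs to be made once, instead of being duplicated in both entry points.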


The token allows reentrancy, since it calls the receiving contracts when transferring tokens. However, the calls are made at the end of each function, after all storage has been modified. This avoids exposing an unexpected contract invariant in any of the calls, making them safe: the contract is not in an invalid state if reentrancy occurs.
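This is the classic checks-effects-interactions argument: because storage is updated before the external call, a re-entrant call observes a consistent state. A minimal Python sketch of the difference, where the callback stands in for the receiving contract:

```python
class Vault:
    """Toy model of a contract that notifies a receiver via an external call."""

    def __init__(self):
        self.balances = {"attacker": 10}

    def withdraw_unsafe(self, who, notify):
        amount = self.balances[who]
        notify(amount)              # external call BEFORE the state update
        self.balances[who] = 0

    def withdraw_safe(self, who, notify):
        amount = self.balances[who]
        self.balances[who] = 0      # state update BEFORE the external call
        notify(amount)

observed = []
v = Vault()
v.withdraw_unsafe("attacker", lambda _: observed.append(v.balances["attacker"]))
assert observed == [10]             # re-entrant call saw a stale, positive balance

observed2 = []
v2 = Vault()
v2.withdraw_safe("attacker", lambda _: observed2.append(v2.balances["attacker"]))
assert observed2 == [0]             # re-entrant call sees a consistent state
```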


This is a simple token. It does not have many features, so it offers little surface for errors. The contracts were readable and overall easy to follow. No issues of critical, high or medium severity were found. There are only a couple of minor issues: not using (or making explicit) a recent compiler, and the use of unencapsulated inline assembly. Both can be fixed with small updates to the contracts. These issues mostly impact future development, and thus the decision to fix them is left to the developers. This kind of issue does not warrant a redeployment of the contracts.

Finally, we examined the contract bytecode deployed at the Ethereum mainnet address 0x1A506CBc15689F2902b22A8baF0e5Cc1eD8203eE. We found it to match with the bytecode produced by the Solidity compiler v0.4.25 when compiling the contracts.

Disclaimer: This audit report is not a security warranty, investment advice, or an approval of the ArcadierX project since CoinFabrik has not reviewed its platform. Moreover, it does not provide a smart contract code faultlessness guarantee.

The post ArcadierX Security Audit appeared first on CoinFabrik Blog.


After testing the performance of Ethereum using PoA, we tested the usability of the CardContact SmartCard-HSM USB token on an Ethereum Proof of Authority network. The HSM allows storing and using multiple encryption keys, both RSA and elliptic-curve (including secp256k1), for applications such as issuing certificates as a CA, and with any application that can interface with the PKCS#11 standard. In this guide we explain how to install it on a Linux machine and get started with a fork of go-ethereum that implements an extension of web3 to interface with the above-mentioned standard.

Token Installation

Before we are able to use our token, we need to install a few dependencies to interface with and administer it:

  1. Install OpenSC. It is packaged in most of the usual Linux distributions; you can find the appropriate method for your distribution here.
  2. Download the Starterkit from CardContact’s site. Unzip it and run the .sh file found in the ‘linux’ folder inside it. Once it declares that it found the libccid configuration, we can go on. If it fails, there is a problem with the OpenSC installation; make sure it was done properly.
  3. Once that is done, install the XCA key management application. You can download it and find more detailed installation instructions on their site.
  4. Now, you can set up the HSM. For most of this part you can follow the guide found as a PDF inside the Starterkit you downloaded earlier.
    1. Open XCA and create a new database as explained in the PDF. After that, follow along and select your PKCS#11 module. In my case (Ubuntu 18.04 LTS), it was found in /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so, but it could vary.
    2. Now, instead of following the PDF guide, you have to enter the following command with the device plugged in to initialize it, choosing your own PIN (which must be 6-16 characters long):

sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 123456

Project setup Preparation

Now that you’ve installed and initialized the token, you need to download and configure the modified version of geth and create a new key to use.

  1. First, you must install the fork of geth 1.8.13 created by GitHub user gemalto, which you can download from this repo.
  2. After downloading and unzipping the files, open a terminal inside the folder and run make.
  3. Around this point you should also create the key you want to use to sign.
    1. Open XCA, open the previously created database and, with the HSM inserted, select “New Key”.
    2. A dialog box opens and you are prompted to select a key name (which you can choose as you see fit), a key type, which must be “UserPin […] (EC Key of 192 – 320 bits)” rather than a regular EC key, and a curve name, where you need to choose “secp256k1”, since that is the curve Ethereum uses to seal blocks and sign transactions.

After the key is created, you must prepare the private network.

Network setup

To be able to set up the PoA network with the key you just created, you need to know at least one address associated with it. For that, we chose to start a Proof of Work network using the same repo, which also allowed us to test that its PKCS#11 capabilities were working. First, we created a new genesis file using puppeth (but you can do it however you prefer), and then we followed these steps:

  1. Inside the geth repo folder we ran ./build/bin/geth init pow.json --datadir database/pow with pow.json being our genesis file
  2. Then ./build/bin/geth --datadir database/pow --networkid 38969 dumpconfig > pow.toml to get the configuration file, with the correct network id from your genesis file
  3. Then, we edited the file as explained in the repository’s README, adding the following lines to the [Node] block (remember, if your OpenSC installed the library elsewhere, you have to change that):
    1. PKCS11Lib = "/usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so"
    2. NoPKCS11BIP32 = true
  4. Then, we ran the geth instance with the new config file: ./build/bin/geth --datadir database/pow --networkid 38969 --config pow.toml console
  5. Now, inside geth console, you can derive the address:
    1. First, list your HSM wallets with personal.listWallets
    2. Then unlock the wallet with personal.openWallet("hsm://UserPIN (SmartCard-HSM)"). The parameter must match the wallet URL that appeared in the previous step. The password requested is the PIN created when you initialized the device.
    3. Finally, derive an address from the key using personal.newHsmAccount("hsm://UserPIN (SmartCard-HSM)"), again with the correct HSM URL. Save that wallet, since it is the one you will use to configure your PoA network.

Now we are ready to create our private PoA network. The steps are fairly similar; the key differences are that, when creating the genesis block, you should input the address we just derived from the key, and that you should use a different network id, datadir, and genesis and config files. When you reach the point where you open the geth console, now with the PoA network, you are ready to go.

Usage and conclusions

While testing, we were able to check that the device works as intended. If we don’t open the wallet, the node is unable to start mining; if we close the wallet with personal.closeWallet("hsm://UserPIN (SmartCard-HSM)"), the mining process immediately stops, and the same happens if we remove the device from the PC at any point. This proved to be a secure and portable way to run a PoA network, since the USB token can be moved physically from computer to computer while the signing key remains secured inside. We can only hope that this technology is adopted by the main geth and web3 projects to allow updated capabilities and continued support.

The post Using the CardContact SmartCard USB HSM in an Ethereum PoA Chain appeared first on CoinFabrik Blog.


Lately, there has been a lot of talk about permissioned blockchains, in which only certain entities have the authority to validate transactions and generate blocks. Companies are increasingly interested in these technologies, since they allow them to store data in a decentralized way while keeping all their operations fully transparent. They also enable crowdfunding platforms on which users can issue shares of certain assets and raise funds from small investors all over the world (for example, crowdfunding platforms for real estate, automobiles, artistic works and others). In this article, we analyze the performance of one of the most widely used implementations: Clique, the PoA consensus algorithm of the Geth client for Ethereum.

Problem Description

The analysis is performed on a private blockchain based on the Ethereum platform, using the Go-Ethereum (Geth) client implementation. Geth supports two consensus algorithms: Proof of Work (PoW) and Proof of Authority (PoA). Since most companies need to keep validation authority over data transmitted through the network, the blockchain uses the PoA consensus algorithm. Geth's PoA implementation is called Clique.

Multiple parameters can affect network performance. The main parameters CoinFabrik has identified are the number of sealers, the block time and the gas limit.

Sealers Number

In a PoA network, consensus is achieved by majority agreement among the sealer nodes. A sealer node is a special client which is allowed to include blocks on the blockchain. Sealers are set in a whitelist in the blockchain's genesis block; once the blockchain is running, new sealers can be added by majority vote. For a block to be considered valid, it must be validated by a majority of the sealers. Increasing the number of sealers can also increase network latency, which can cause synchronization problems during block generation. It is therefore necessary to study how the number of sealers affects network performance.
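The majority threshold can be sketched as follows (a minimal illustration, assuming the usual strict-majority rule of floor(N/2) + 1 signatures):

```python
# Minimal sketch: the smallest number of sealers that forms a strict
# majority of N sealers (assumption: Clique-style floor(N/2) + 1 rule).
def required_votes(n_sealers: int) -> int:
    """Return the minimum number of sealer signatures for a valid block."""
    return n_sealers // 2 + 1

# For the sealer counts tested in this article:
for n in (5, 10, 15, 20):
    print(n, required_votes(n))  # prints 5 3, 10 6, 15 8, 20 11
```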

Block Time

The Clique consensus algorithm divides time into epochs. At the beginning of each epoch, a sealer is selected using a round-robin algorithm as the leader to propose a new block. During the epoch, the leader validates transactions and includes them in the new block; once the epoch is finished, it broadcasts the block to the other sealers. If the majority of the sealers accept it, the block is considered valid. In case the leader delays in submitting the block, back-up sealers can take its place and propose another block instead. The time between two consecutive blocks is called the block time. Although in PoA networks the theoretical block time is fixed by configuration, the real block time can fluctuate due to synchronization and network delays. That is why it is interesting to measure real block times while varying other blockchain parameters, e.g. the number of sealers and the gas limit (which determines the block size). The block time configuration parameter can be used to set the maximum network throughput, as evaluated in the Gas Limit section.
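The round-robin leader selection can be sketched as below. This is a simplified illustration (Clique's in-turn rule effectively picks the sealer whose index equals the block number modulo the number of sealers; the addresses are hypothetical):

```python
# Simplified sketch of round-robin leader selection among sealers.
def in_turn_leader(block_number: int, sealers: list) -> str:
    """Pick the sealer expected to propose the block at this height."""
    return sealers[block_number % len(sealers)]

sealers = ["0xAAA", "0xBBB", "0xCCC"]  # hypothetical sealer addresses
print(in_turn_leader(7, sealers))  # 7 % 3 == 1, so "0xBBB" is in turn
```

If the in-turn sealer is late, any of the other (out-of-turn) sealers may seal the block instead, which is what produces the lost blocks measured later in the article.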

Gas Limit

The Ethereum platform prevents transaction spamming and rewards block miners by charging a gas fee on transactions. Each block contains a maximum amount of gas that can be collected from transactions, which defines a maximum block size. That gas limit can be set as a configuration parameter. In the long term, the block gas limit approaches a target gas limit, also set as a configuration parameter (it can be changed at runtime if needed). The theoretical maximum transactions per second (TPS) can be calculated using the following equation:

TPS = GasLimit / (TxGas × BlockTime)

where GasLimit is the block gas limit, TxGas is the gas needed to compute the simplest transaction and BlockTime is the blockchain block time.
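A quick numeric sketch of this formula, taking 21000 gas (the cost of a simple ETH transfer) as TxGas:

```python
TX_GAS = 21000  # gas needed for the simplest transaction (plain transfer)

def max_tps(gas_limit: int, block_time_s: float) -> float:
    """Theoretical maximum transactions per second for a given config."""
    return gas_limit / (TX_GAS * block_time_s)

# e.g. an 8,000,000 gas limit with a 5-second block time:
print(round(max_tps(8_000_000, 5), 2))  # ≈ 76.19 TPS
```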

Experimental Setup

The experimental setup consists of a set of virtual machines running in the cloud, on servers located all around the world, to add latency to the network and test the worst-case scenario it could face in the future.

CoinFabrik developed a software tool to:

  • loop over network configurations, considering different numbers of sealers, block times and gas limits
  • set up sealer nodes using a blockchain client implementation based on Geth version 1.8.18
  • run a private blockchain
  • run spammer bots that submit dummy transactions
  • measure network statistics such as real block time, throughput, block propagation time and others
  • shut down the blockchain, remove its files and restart with the next configuration point

The results are useful to characterize the blockchain functionality using different configuration points.
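The tool's outer loop can be sketched as follows. This is a hypothetical skeleton (run_experiment is a placeholder for the setup/run/measure/teardown cycle, and the parameter values are the ones listed in the Test Results section, assuming the block-time list is 2, 5, 10, 15, 20 and 30 seconds):

```python
import itertools

def run_experiment(n_sealers: int, block_time_s: int, gas_limit: int) -> dict:
    # Placeholder: set up sealer nodes, run the chain for one hour,
    # collect statistics, then shut down and clean up.
    return {"sealers": n_sealers, "block_time": block_time_s, "gas_limit": gas_limit}

SEALERS = [5, 10, 15, 20]
BLOCK_TIMES = [2, 5, 10, 15, 20, 30]          # seconds (assumed list)
GAS_LIMITS = [4_000_000, 8_000_000, 16_000_000]

results = [run_experiment(n, bt, gl)
           for n, bt, gl in itertools.product(SEALERS, BLOCK_TIMES, GAS_LIMITS)]
print(len(results))  # 4 * 6 * 3 = 72 configuration points
```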

Test Results

The blockchain is tested using combinations of the following configuration points:

  • sealers number: 5, 10, 15, 20
  • block time [sec]: 2, 5, 10, 15, 20, 30
  • gas limit [gas]: 4000000, 8000000, 16000000

Every configuration point is tested keeping the blockchain running for one hour.

Block propagation time is affected mainly by the number of sealers as well as by the block time. Figures 1, 2 and 3 report the block propagation time in different network configurations, measured as the time from when the block is proposed by the leader until it reaches all the other sealers.

Propagation Time vs Sealers

Figure 1: Block propagation time as a function of sealers number and block time, using gas limit=4000000.

Propagation Time vs Sealers

Figure 2: Block propagation time as a function of sealers number and block time, using gas limit=8000000.

Propagation Time vs Sealers

Figure 3: Block propagation time as a function of sealers number and block time, using gas limit=16000000.

It can be observed that the propagation time strongly depends on the number of sealers. Higher block times are also related to higher delays. This can be explained by the fact that higher block times produced bigger blocks in our tests, so propagating the information took more time due to the amount of data transmitted. The gas limit does not affect the propagation time in our tests, since the blocks were never filled up to the gas limit.

Tables 1 and 2 report the measured mean block times and their standard deviations, respectively, against the number of sealers and the theoretical block time of the network configuration, running the blockchain for one hour. Times are reported in milliseconds.

Block time [ms] \ Sealers: 5 | 10 | 15 | 20

Table 1: Mean measured block times against theoretical block time and number of sealers supporting the blockchain.

Block time [ms] \ Sealers: 5 | 10 | 15 | 20

Table 2: Standard deviation measured block times against theoretical block time and number of sealers supporting the blockchain.
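To illustrate how the values in these tables are obtained, the sketch below computes mean and standard deviation of block times from a series of block timestamps. The timestamps here are hypothetical, not measured data:

```python
from statistics import mean, pstdev

# Hypothetical block timestamps in seconds for a 5-second block time config.
timestamps = [0.0, 5.1, 10.0, 15.3, 20.1]

# Real block times are the intervals between consecutive blocks.
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]

mean_ms = mean(intervals) * 1000   # mean block time in milliseconds
std_ms = pstdev(intervals) * 1000  # standard deviation in milliseconds
print(round(mean_ms), round(std_ms))  # prints 5025 192
```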

Propagation delays affect not only the real block time but also consensus. When a leader delays in broadcasting its signed block, sealers accept blocks proposed by back-up nodes, which are a set of sealers selected in each epoch. So there are epochs in which the blocks proposed by the leader are discarded. Figures 4 to 6 report the number of blocks lost in different network configurations, each run lasting one hour.

Lost Blocks vs Sealers

Figure 4: Lost blocks as a function of sealers number and block time, using gas limit=4000000.

Lost Blocks vs Sealers

Figure 5: Lost blocks as a function of sealers number and block time, using gas limit=8000000.

Lost Blocks vs Sealers

Figure 6: Lost blocks as a function of sealers number and block time, using gas limit=16000000.

The number of lost blocks strongly depends on the block time, with a big jump between a 2-second and a 5-second block time: the ratio of lost blocks to total blocks is higher in networks with lower block times. Lost blocks seem to be only weakly related to the number of sealers: with fewer sealers the number of lost blocks increases, but at 20 sealers it decreases. This behavior remains unexplained and could be tested again in future research.

Conclusions

Main Results

The tests allowed us to characterize the blockchain's behavior at different configuration points, considering different numbers of sealers, block times and gas limits. Lower block times were found to be strongly related to a higher ratio of lost blocks to valid blocks, which degrades blockchain performance. Ideally, when a sealer is selected as leader, it should have a high chance of generating a valid block; for this reason we recommend against using low block times. On the other hand, with 30-second block times some delay problems were also reported. Based on this analysis, we propose using block times between 10 and 15 seconds to keep a high TPS. Block times between 15 and 20 seconds are also expected to lead to good performance, but were not tested in this analysis.

Although the number of sealers is positively correlated with block propagation time, it was found not to degrade network performance in the analyzed configurations. The analysis supports good network performance when the blockchain is backed by up to 20 sealers. Analyses with more than 20 nodes were not performed.

Future Steps

The research could be strongly improved considering the following tests:

  • Run spammers on the network to submit as many transactions as needed to fill the blocks, thereby testing the network at its full capacity.
  • Run tests with more granularity in the configuration points, in particular with numbers of sealers between 15 and 20, to retest the unexplained lost-blocks behavior.
  • Run tests with more than 20 sealers, to test blockchain performance in bigger networks.
  • Run tests with malicious nodes in the network. This will require some Geth client modifications to produce the malicious behavior.

Disclaimer: CoinFabrik does not provide financial advice. This material has been prepared for educational purposes only, and is not intended to provide, and should not be relied on for, financial advice. Consult your own financial advisors before engaging in any investment.

The post On Ethereum Performance Evaluation Using PoA appeared first on CoinFabrik Blog.

