
Working with different programming languages allows you to compare features between them. A convergence between languages (and frameworks) becomes noticeable: new language and framework features often seem to be “borrowed” from other languages or frameworks, or are the result of “taking the best of both worlds”. Multi-value return statements are one such feature.

This post focuses on this concept, comparing how it works in Golang, C# and TypeScript.

What are multi-value return statements anyway?

Let’s start with Golang. Golang is a (very) strict language that forces you to write clean code. It also relies heavily on multi-value return functions, for example for explicit error handling.

package main

import (
    "errors"
    "log"
)

func Hello(input string) (output string, err error) {
    if len(input) == 0 {
        return "", errors.New("Blanks not accepted")
    }
    return input + "!", nil
}

func main() {
    out, err := Hello("Josh")
    if err != nil {
        log.Fatal(err)
    }
    // continue with the 'out' variable
    log.Println(out)
}

Golang forces you to use all variables you define, otherwise the code won’t compile. (Did I mention Golang was strict?) This in turn forces you to really think about error handling up front, instead of as an afterthought. (“Oh right, exception handling…”) If you don’t need one of the return values, you can use an underscore to signal to Golang that you are not interested in that output variable:

_, err := Hello("Josh") // Just checking the error...

So why is this useful?

Explicit error handling is just one good example, but it is specific to Golang. Another, more generic, benefit is that you don’t need an extra ‘wrapper’ object or data structure whose sole purpose is to bundle multiple output values into a single return value. For example, using the explicit Tuple data structure in C# is a thing of the past.
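
For comparison, here is a minimal sketch of how this used to look with the classic Tuple class (the Validate method and Model type mirror the C# example further below; the code is purely illustrative):

// Pre-C# 7: the return values are bundled in a Tuple<bool, string>,
// and the caller is stuck with the non-descriptive Item1/Item2 accessors.
public Tuple<bool, string> Validate(Model model)
{
    if (model.BuildingYear == null)
    {
        return Tuple.Create(true, "Warning - No building year supplied.");
    }
    return Tuple.Create(false, "Error - BuildingYear is not in the correct format YYYY.");
}

var result = Validate(myModel);
// result.Item1 is the validity flag, result.Item2 is the message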

Multi-value return statements weren’t part of C# for a long time. But since C# 7.0, we can write something like this:

public (bool Valid, string Message) Validate(Model model)
{
    if (model.BuildingYear == null)
    {
        return (true, "Warning - No building year supplied.");
    }
    if (model.BuildingYear >= 1000 && model.BuildingYear < 10000)
    {
        return (true, null);
    }

    return (false, $"Error - BuildingYear {model.BuildingYear} is not in the correct format YYYY.");
}

Calling the method looks like this:

var (isValid, message) = Validate(myModel); 
// now you can use 'isValid' and 'message' independently.

Although C# code will still compile if you have unused variables, IDEs like JetBrains Rider will advise you to rename those unused variables to underscores, just like in Golang’s syntax.

// Rider IDE will advise you to rename 'message'
// to '_' if this parameter is not used.
var (isValid, _) = Validate(myModel);

What about TypeScript?

With TypeScript (and JavaScript ES6), you cannot return multiple values from a single method the way Go and C# do. Here, you mostly still use (anonymous) JavaScript objects as ‘wrappers’ around the multiple values you want to return. However, since ES6, the same end result can be achieved with destructuring. This gives you syntactic sugar to easily access multiple values from a return statement in a single call. Here is a TypeScript example with the same logic as the C# example:

private validate(model: Model): { valid: boolean, message: string | null } {
    if (model.buildingYear === undefined) {
        return { valid: true, message: 'Warning - No building year supplied.' }
    }
    if (model.buildingYear >= 1000 && model.buildingYear < 10000) {
        return { valid: true, message: null }
    }

    return { valid: false, message: `Error - BuildingYear ${model.buildingYear} is not in the correct format YYYY.` }
}

Although we are returning a ‘wrapper’ object instead of multiple values natively, destructuring gives us direct access to the individual properties inside the object.

const { valid, message } = this.validate({ buildingYear: 1600 })
// You can now use 'valid' and 'message' independently.

Compare the statement above to calling the method in the C# example and you’ll see the syntactic resemblance immediately.

Conclusion

Programming languages keep evolving, and in doing so, the best concepts sometimes get ‘borrowed’ from, or are influenced by, other languages, which can only be a good thing!

Thanks for reading.

About multi-value return statements was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Problem

Our current software development lifecycle at work is straightforward: we have a development, a staging, and a production environment. We use feature branches and pull requests, where developers review each other’s PRs before they get merged (and auto-deployed) into development. On development, the feature gets tested by the test team. Once approved, it gets pull-requested and accepted into the staging environment, where business can test it as well before it goes to production.

All is fine, except that if the test team rejects a certain feature, development ends up in a kind of blocked state, containing features that have passed testing together with features that were rejected. We cannot decide that “feature A and B can go to staging now, but feature C cannot”, since all three features are already on a single branch (the dev branch).

Test team cannot decide that “feature A and B can go to staging now, but feature C cannot”

We could try something like git cherry-pick, but we would rather not start messing with git branches. Besides, the underlying problem is that the test team should be able to test these features independently of each other. A better solution would be to have separate deployment environments for feature testing. And so the following idea emerged:

The Objective

For provisioning environments to deploy PRs to, different options exist. Whatever option is chosen, it is important to follow the concept of cattle, not pets: these environments should be easy to set up, and just as easy to break down or replace. We chose Kubernetes for this situation (although Terraform would also be a good fit).

Since we are already using Azure DevOps (formerly known as Visual Studio Team Services, or VSTS), this platform will connect the dots and give us centralised control over the processes. The plan can be summarised as follows:

Dockerize it

The first step is to dockerize your application components so they can easily be deployed on a Kubernetes cluster. Let’s take this straightforward tech stack as an example: an Angular front-end, a .NET Core back-end, and SQL Server as the database. Since PR environments should be cattle, even the database is dockerized. This results in completely independent environments, where the database can be thrown away after testing is done.

Dockerize the back-end component

Probably the easiest of the three components. We have a .NET Core back-end. For this, we use a multi-stage Dockerfile, so that the resulting image only contains the binaries necessary to run.

# First build step
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
# config here...
RUN dotnet publish -c Release -o deploy -r linux-x64
# Second build step
FROM microsoft/dotnet:2.0-runtime-jessie AS runtime
WORKDIR /app
COPY --from=build <path/to/deploy/folder> ./
ENTRYPOINT ["dotnet", "Api.dll"]
Dockerize the front-end component (SPA)

A little bit more difficult, since Single Page Applications like Angular are mostly hosted as static content. This means some variables have to be defined at build time, such as the API host. See this link for more information on how to configure this with Docker builds. Having to know these variables in advance imposes some challenges, as we will see below.
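
As an illustration, here is a minimal sketch of what such a multi-stage Dockerfile could look like (the API_HOST build argument, image tags and output path are assumptions, not our exact setup):

# First build step: compile the Angular app; the API host is passed in at build time.
# How it ends up in the bundle (e.g. via an environment file) is project-specific.
FROM node:10 AS build
WORKDIR /app
ARG API_HOST
ENV API_HOST=${API_HOST}
COPY . .
RUN npm ci && npm run build -- --prod
# Second build step: serve the static output with nginx
# (the dist output path may differ per Angular version)
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html

The image can then be built per environment with, for example: docker build --build-arg API_HOST=https://myapp-api-pre-dev-stage-1.example.com .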

Dockerize Sql Server

SQL Server already has official Docker images that you can use. However, what we would like to do is make sure that every time a new environment is set up, the database is pre-populated with a dataset of our choice. This allows for more efficient testing. To achieve this, we can extend the existing SQL Server Docker image with backups and package the result as a new Docker image! More details on how to achieve this can be found in this gist. Your Dockerfile will look something like this:

FROM microsoft/mssql-server-linux 
COPY . /usr/src/app
ENTRYPOINT [ "/bin/bash", "/usr/src/app/docker-entrypoint.sh" ]
CMD [ "/opt/mssql/bin/sqlservr", "--accept-eula" ]

If you don’t want any data pre-populated, you can use the official microsoft/mssql-server-linux image straight from DockerHub instead.

To make sure all docker containers play nicely together, you could use a docker-compose file to wire them all up and see if everything works locally, before trying things out in the cloud.
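
Such a docker-compose file could look roughly like this (image names, ports and the SA password are purely illustrative):

version: "3"
services:
  db:
    image: myapp/mssql-prepopulated   # the extended SQL Server image from above
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Your_password123"
    ports:
      - "1433:1433"
  api:
    image: myapp/api                  # the .NET Core back-end image
    depends_on:
      - db
    ports:
      - "2626:2626"
  ui:
    image: myapp/ui                   # the Angular front-end image
    depends_on:
      - api
    ports:
      - "80:80"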

Create VSTS Build Pipelines

Once we have our Docker images, we’ll want to push them to a container registry, like Elastic Container Registry (ECR) for example. Of course, we don’t want to push locally built Docker images to ECR directly; we want an automated build tool to do this work for us instead! Lots of build tools exist today. Here, we’ll show how to do things with Azure DevOps / VSTS.

In VSTS, you can implement your build processes in Build Pipelines and Release Pipelines. It’s perfectly possible to put everything in a Build Pipeline without using the Release Pipeline, but this split-up will give you some benefits that we’ll see later.

For step 1 (building Docker images) and step 2 (pushing images to ECR), we’ll use a Build Pipeline. Below is an example of the setup for the UI Docker image build pipeline. In VSTS, you can choose between a ‘click-and-drag’ kind of build process setup, or a YAML-based (infrastructure-as-code) setup.
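
For reference, a minimal sketch of what the YAML-based variant could look like for the UI image (the ECR_REGISTRY variable, image name and region are assumptions; Docker and the AWS CLI v1 are assumed to be available on the agent):

pool:
  vmImage: 'ubuntu-16.04'   # hosted agent image of that era
steps:
  # Build the UI docker image, tagged with the build id
  - script: docker build -t $(ECR_REGISTRY)/myapp-ui:$(Build.BuildId) .
    displayName: 'Build UI image'
  # Log in to ECR (AWS CLI v1 syntax) and push the image
  - script: |
      eval `aws ecr get-login --no-include-email --region eu-west-1`
      docker push $(ECR_REGISTRY)/myapp-ui:$(Build.BuildId)
    displayName: 'Push UI image to ECR'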

For each type of Docker image, we’ll create a separate Build Pipeline, so we can run builds in parallel when necessary.

Click-and-drag kind of build setup

Great! After the build pipelines for all three components are configured, we can start configuring triggers for when these builds are run.

Configure triggers for Build Pipelines

Azure DevOps allows you to program very specific triggers, actions and gates. For triggering these Build Pipelines automatically, we can set up Branch Policies on a specific branch; in our case, on the development branch.

Example of configuring build triggers on specific branches in specific circumstances.

Packaging kubernetes yaml configuration files

The output of Build pipelines can be turned into artifacts in Azure DevOps. These artifacts can then be used as input for Release pipelines as a next phase.

Because we’ll need the kubernetes yaml configuration files during the Release phase, we’ll need another Build pipeline that packages these files as an artifact. This Build pipeline will look something like this:

Build pipeline config for packaging kubernetes yaml files

Create a VSTS Release Pipeline

Release Pipelines are the next phase. They use the output produced by our Build Pipelines as input. Of course, the output of our Docker build pipelines lives on ECR, not on Azure DevOps; the kubernetes yaml files are the only input used by the Release phase. The kubernetes cluster itself will pull the images straight from ECR when needed. (This is easier said than done: EKS, AWS’s managed kubernetes solution, uses its own authorization mechanism, which does not play nicely with kubernetes’ own auth mechanism. The solution consists of deploying a cronjob that pulls fresh registry credentials once in a while, allowing your cluster to successfully authenticate with ECR. This blogpost describes the solution in more detail.)

Overview of a Release Pipeline

In a Release Pipeline, you can set up your release strategy with components called ‘Stages’. Inside these stages, you can define a number of jobs and tasks, just like in a Build Pipeline.

Take note of the names of the ‘Stages’ in this Release pipeline: pre-dev-stage-1 and pre-dev-stage-2. These names can be retrieved dynamically in the tasks through variables. The stage name, for example, can be retrieved by using #{Release.EnvironmentName}# in expressions. We’ll use these values in two situations:

  1. As namespaces within our kubernetes cluster
  2. As part of a dynamic domain name
Apply kubernetes yaml file for specific namespace

It was this blogpost that helped me set up everything in VSTS with kubernetes. By using the Release.EnvironmentName variable as the namespace, you’re able to deploy completely new environments for each Stage you define; in our case, for pre-dev-stage-1 and pre-dev-stage-2.
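
Conceptually, the deploy step in each stage then boils down to something like this (a sketch, shown for the pre-dev-stage-1 stage; k8s/ refers to the packaged yaml artifact from the Build pipeline):

# Token-replace the #{Release.EnvironmentName}# placeholders in the yaml files first
# (e.g. with a token-replacement task), then apply everything into a namespace
# named after the stage.
$ kubectl create namespace pre-dev-stage-1
$ kubectl apply -f k8s/ --namespace=pre-dev-stage-1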

In this scenario, we’ll expose our three services via LoadBalancers. (Exposing the database is not strictly necessary, but it is helpful if we want to connect a local client directly to the database for testing purposes.)

$ kubectl get svc --namespace=pre-dev-stage-1
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP
sql-server-01   LoadBalancer   10.100.23.456   xxx.elb.amazonaws.com
api             LoadBalancer   10.100.56.789   yyy.elb.amazonaws.com
ui              LoadBalancer   10.100.09.123   zzz.elb.amazonaws.com

Let’s look at what we have here: each of these services has its own external IP address, which is great. However, remember from before that the UI is built as static sources, which are hosted from within a container. We have no way of knowing upfront what the external IP of the API service will be, even though we actually need it during the Docker build (because AWS gives these load balancers random names).

One way of solving this problem is to use predefined domain names, so the UI can be built against such a predefined domain name. However, this gives us a new problem: every time the external IP changes, we need to modify DNS again to point the predefined domain at the load balancer’s new external IP. Luckily, this problem can be solved thanks to ExternalDNS.

ExternalDNS and Cloudflare to the rescue

ExternalDNS is a tool that can be deployed within your Kubernetes cluster. You can configure this service so it has direct access to your own DNS provider. In my case, I used Cloudflare, but this can be any DNS provider that supports ExternalDNS (see the list of supported DNS providers on GitHub).

At regular intervals, it scans your cluster for specific annotations that tell the ExternalDNS service to update the DNS provider with the hostname provided in the annotation. For example, take a look at the following yaml configuration.

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: myapp-api
  name: myapp-api
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp-api-#{Release.EnvironmentName}#.example.com
    external-dns.alpha.kubernetes.io/ttl: "300" # optional
spec:
  type: LoadBalancer
  ports:
    - name: "80"
      port: 80
      targetPort: 2626
  selector:
    app: myapp-api
status:
  loadBalancer: {}
---

By adding these extra annotations to my existing service, my ExternalDNS service is triggered to update my DNS (in this case Cloudflare) to match the correct load balancer. Great! Fully automated! And yes, it will also clean up your DNS entries afterwards if these services are removed from the cluster again.

DNS entries automatically populated by ExternalDNS

Important note: DNS updates can be quite slow, so depending on a range of factors, this could take a while to propagate… or not.

Conclusion

With this setup, we can deploy test environments manually or semi-automatically from within Azure DevOps!

Choose in which environment you want to deploy certain builds. Add new stages when desired!

Thanks for reading. Cheers.

Deploying test environments with Azure DevOps, EKS and ExternalDNS was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.

A collaboration for clinical trials on the blockchain between Boehringer Ingelheim and TheLedger

The process from discovering a potential new drug until it becomes available to patients takes 15 to 30 years and, on average, costs 1.3 billion euros. Boehringer wants to speed up and optimise one part of this chain: clinical studies. Because 80% of clinical trials are delayed.

By working on digitisation and focusing on the patient, Boehringer is convinced that this process can be made more efficient and therefore faster. The patient owns his own data, enabling him to submit and confirm additions and changes himself. This will increase transparency: the patient knows at any time which data is available and which data can be viewed by the parties involved.

Digitisation will increase efficiency because information flows to all parties more smoothly and reliably. In case of new information important to the safety of the patient, it can be received, confirmed and signed directly via smartphone or tablet, for example. Reliability and data integrity are reinforced by blockchain technology. This results in faster and better recruitment.

Find the right hospitals for the right study in an instant and thus recruit the right patients and get the medicine to the patient more quickly.

Not only does this have enormous advantages for the patient, but hospitals can also get on board with digital transformation. By keeping hospital and anonymised patient profiles on the blockchain, it is easier to find the right hospitals for the right study in an instant, and thus to recruit the right patients and get the medicine to the patient more quickly.

Current process: Paper

Before trials can be executed on humans, they need to run through a complex and intensive process. This process will not be discussed. We’ll start when the trial is verified and accepted for execution by all involved parties.

Boehringer sends out the requirements to conduct the study to several hospitals, after a CDA (Confidential Disclosure Agreement) is signed by those hospitals: what equipment they need, how many doctors, how many patients, and so on.

When a hospital fulfils all the requirements, it can start patient recruitment. After the recruitment, patients fit to participate in the study are selected. These patients need to sign a paper Informed Consent (IC). It explains the whole process, risks, schedules, etc. It needs to be signed in the presence of a doctor, so patients can ask any questions they have. This Signed Informed Consent (SIC), which is paper, is kept at the hospital.

Now it gets interesting. Every time something changes about the study, the patient needs to go to the hospital and sign the new informed consent again. Some studies run over a few years, with visits every X months. If the patient forgot to go to the hospital to sign the new informed consent and a visit is near, e.g. to take some blood, the sample cannot be taken. And when the study doctor is not present, the new informed consent cannot be signed, so the visit needs to be rescheduled, delaying the trial.

Our proof-of-concept solution: Hybrid

Patients are anonymised on the blockchain. Only the hospital can link an anonymous patient on the blockchain to a real patient, based on the paper signed informed consent that is stored in their own private database.

A hybrid solution for the PoC

Visibility

There are three parties who can view the patients’ data: Boehringer as the pharma company, the hospitals, and the patient. Boehringer can see every anonymous patient’s data and actions on the blockchain (but not the signed informed consent). The hospital and the patient can see both the anonymised data from the blockchain and the signed informed consent.

How does it work?

When a patient is ready to register anonymously on the blockchain, he gets a number. This number is written on the paper informed consent that needs to be signed. This way the anonymous patient’s ID is linked with the identity written down on the informed consent. When registering, the document is uploaded and saved on the private off-chain database of the hospital.

Data integrity of off-chain documents

When the patient is registered, a hash of the document is calculated. This hash, together with the path and version of the document, is saved with the anonymous patient on the blockchain. At first login, the patient needs to sign this uploaded document digitally, so that future digital signatures can be compared with this initial one and the patient can digitally check whether the uploaded document is the one he signed on paper.

When the patient requests his data, the smart contract goes to the path, gets the document, calculates the hash and compares it with the hash saved on the blockchain. If the hashes don’t match, the user gets an error, the system knows the uploaded document has been tampered with, and measures can be taken.
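
A minimal sketch of that integrity check (TypeScript with Node’s crypto module; the function and field names are illustrative, not the actual chain code):

import { createHash } from 'crypto';
import { readFileSync } from 'fs';

interface ConsentRecord {
  documentPath: string;  // where the off-chain document is stored
  documentHash: string;  // SHA-256 hash saved on the blockchain at registration
  version: number;
}

// Recompute the hash of the stored document and compare it with the on-chain hash.
function verifyConsentDocument(record: ConsentRecord): void {
  const document = readFileSync(record.documentPath);
  const currentHash = createHash('sha256').update(document).digest('hex');
  if (currentHash !== record.documentHash) {
    throw new Error('Document hash mismatch: the uploaded informed consent may have been tampered with.');
  }
}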

Updating through digital signatures

As mentioned before, the initial document is signed on paper, uploaded and then signed digitally. When there is an update to the study and a new informed consent needs to be signed, the patient is notified. He can read this new document from home and sign it digitally in an instant. If he has questions, he can call the doctor.

A new informed consent needs to be signed by a minimum of two parties: the patient and the doctor. The patient signs the new document digitally first, and the doctor then signs it when the patient visits. The doctor can only sign after the patient has signed the document, due to rules in the smart contract.

Architecture of Proof-of-Concept

Note that the backend service consists of three different services: an API gateway, a chain service to communicate with the blockchain, and a document service for the documents uploaded and stored in an Amazon S3 bucket.

I hope you enjoyed this article. If you got excited about blockchain and want to know how this technology can transform and add value to your business, just contact us @ TheLedger

PharmaChain: Proof-of-Concept was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.

Connecting IoT, AI, and Blockchain through a collaboration between Innovation Unplugged, Craftworkz and TheLedger

That’s right, we participated in a hackathon again! First of all, a shout-out to Junction and all involved parties and sponsors for organizing this amazing hackathon. Also a shout-out to Innovation Unplugged and Craftworkz for collaborating with us, developing this amazing solution and taking on this wonderful adventure with us! The #chainporthack was a splendid edition, with 220 participants in Antwerp and, at the same time, 220 participants in LA, tackling the same challenges listed below in the image.

challenges

We mainly focused on the ‘Interport’ and ‘Safety & security’ challenges, but implemented some features of gamification, sustainability, and process & document flow as well. As with other hackathons, we started from scratch, and after 48 hours our solution was hosted on AWS. Live and ready to use! The only thing that was made beforehand was a 3D-printed container. Yeah, I just said 3D-printed, pretty cool, isn’t it? So let’s get on with it and showcase what we’ve built.

Interport: ETA(I)

The main issue in the Interport challenge was the Estimated Time of Arrival (ETA). Late shipments cost companies and ports a lot of money. If they could predict the arrival time more accurately, they could use resources better and more efficiently. We started thinking: “How can we make it smart and correct?”, looking at the current situation and the currently available solutions. And of course, we instantly thought of Artificial Intelligence (AI). Leveraging its power to predict the ETA seemed a genius idea. That’s why we called it ETAI!

How does ETAI work?

Every container has a route to follow. Let’s say from LA to Montreal to Barcelona to Antwerp by boat, and from Antwerp to Hamburg by truck. Knowing its predefined route and the arrival time at each port, a first estimation can be made of whether the shipment will arrive at the destination on time. No AI is needed for this. But then it gets interesting: data has shown that arriving late at one of the previous stops is not really a good indicator for predicting the final ETA.

Board and props for demonstration used at the hackathon

Here is where we implemented the AI. The algorithm checks the nautical conditions to predict the new ETA, because wind speed, wind direction and sea state (high waves, etc.) have a huge impact on a boat sailing those kinds of distances. For example, a captain will decrease speed when there are high waves at sea. Taking these parameters into account between all the stops, the AI predicts whether the container will still arrive on time, even when it is late at one or more of the stopovers.

Safety & security

Actually, this is the coolest part we did, in my opinion. Inside the 3D-printed container there were some IoT devices, measuring tilting, temperature, humidity, water and the eSeal. So when the container got tilted, a red tilting alert was shown instantly on the dashboard and the event was stored on the blockchain.

Real-time container information

OK, cool. Tilting, getting a red alert on the dashboard, nothing fancy really. But now comes the most interesting part: when the eSeal is broken and the doors of the container are opened, a picture of the thief is taken, shown on the dashboard, and an alarm inside the container is triggered! An extension would be to have this face interpreted by AI and matched against databases from Interpol, port security and other databases of known criminals.

Real-time eSeal opening

Sustainability

For the IoT device(s) plugged into the container, we need a power source. So we started thinking again (we did a lot of thinking that weekend). Looking around, the watch of Jonas Snellinckx came to mind. He has an automatic (self-winding) watch.

An automatic or self-winding watch is a mechanical watch in which the natural motion of the wearer provides energy to run the watch, making manual winding unnecessary.

A container is always in motion, due to the motion of the ocean and even when it is being driven by a truck. So what if we covered the floor of the container with special tiles that convert the kinetic energy of the ‘bouncing’ into electrical energy? The excess energy is stored in a battery for when there is no motion. The most amazing part of this solution is that we are talking about 100% self-generated energy, and that such tiles already exist!

Gamification with self-sovereign identity

We wanted to incentivise dockworkers to handle documents quickly. When completing the tasks that needed to be done on the container, they would receive points. Having collected enough points for doing a good and fast job, they could exchange them for extra vacation or other benefits.

But we turned it 180 degrees and added self-sovereign identity through uPort. A captain can claim a badge after completing a perfect shipment. With a lot of these badges, captains can unambiguously claim and prove that they have done a lot of perfect deliveries.

Document & process flow

We can predict a more accurate ETA and check the conditions of the container throughout the whole route. I can already hear you thinking: “Aren’t there any penalties related to these conditions?” And the answer is YES. These conditions and actions are written in smart contracts and stored on the (Hyperledger Fabric) blockchain. For example: when a certain temperature exceeds the maximum limit, the smart contract is triggered and penalties are added. At the end of the trip, all the penalties are summed up and can be viewed transaction by transaction through the history that is saved on the blockchain.
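
As an illustration, a minimal sketch of the kind of rule such a smart contract could enforce (plain TypeScript, not the actual Hyperledger Fabric chaincode; names and thresholds are made up):

interface SensorReading {
  timestamp: number;
  temperature: number;          // °C, reported by the IoT device in the container
}

interface ContractTerms {
  maxTemperature: number;       // agreed maximum temperature from the paper contract
  penaltyPerViolation: number;  // penalty amount per violating reading
}

// Sum up the penalties for every reading that exceeded the agreed temperature limit.
// On the blockchain, each violation would also be recorded as a transaction.
function calculatePenalties(readings: SensorReading[], terms: ContractTerms): number {
  return readings.filter(r => r.temperature > terms.maxTemperature).length
    * terms.penaltyPerViolation;
}

// Example: two of three readings exceed an 8°C limit at 50 per violation => 100
const penalty = calculatePenalties(
  [
    { timestamp: 1, temperature: 7.5 },
    { timestamp: 2, temperature: 9.2 },
    { timestamp: 3, temperature: 10.1 },
  ],
  { maxTemperature: 8, penaltyPerViolation: 50 }
);
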
What does this have to do with document & process flow? The rules of these smart contracts are defined in a paper contract, which is linked to the container and can be viewed in the same application as well. This way there can be no dispute about the penalties that were added.

Simon says raise your leg as high as you can!

But at every port, a container has to go through a process, some checks you might say. We implemented that as well. At every port, three new actions (“Simon says”) were generated. Here again, when not all actions were completed, penalties were added. A real example of a “Simon says” could be: “Move container to place X”.

real-time data dashboard

Reach out to us @ TheLedger. If you are interested in AI, reach out to Craftworkz. For all things IoT, please contact Innovation Unplugged.

Real-time connected container tracking at the chainport hackathon was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.


Within Flanders, there is a multitude of road managers and dozens of other parties who rely for their daily operations on accurate information about roads, road amenities, and electro-mechanical and telematic installations (road assets, in short). There is a need for information about the ownership and management of these assets, but also about their specific properties. However, the multitude of parties, the necessary exchanges and the absence of one overall responsible party make the situation very complex. In this blog post, we try to explain how blockchain technology could help alleviate these problems. This initial investigation was done by TheLedger in close collaboration with Agentschap Wegen en Verkeer (AWV), and on assignment of Agentschap Informatie Vlaanderen.

Current road asset management

There are a number of road managers within Flanders, each responsible for a set of roads and their associated assets. These road managers include the road and traffic agency (AWV), several public-private partnerships, and the different cities and municipalities. Other parties, such as utility companies, public transport companies and waterway managers, are impacted by the status of the road assets as well. They typically also have their own assets, which might be impacted in case of changes to the roads or the road assets.

Many parties are involved in the management of roads and road assets or are impacted by them.

For the various road assets, it is important to know who the owner, the manager, and possible contractor are, because maintenance is required for these assets in order to ensure smooth traffic and to guarantee safety. Additionally, the correct party must take responsibility if a problem arises or in case an accident occurs due to poor maintenance. Due to the multitude of parties owning road assets, however, there often is a lack of clarity about who is the manager or owner. Each party has its own overview of assets and thus its own view on the truth. There is thus a need for a single version of the truth regarding the ownership of the road assets.

Furthermore, a lot of information about the assets is exchanged between the various road managers, but also with contractors and utility companies that carry out road works and maintain their assets. However, information threatens to get lost along the way. For example, it happens that contractors have a lot of information about the assets, but that this information does not reach the managers or owners. Inefficient data exchange can lead to extra costs due to product failure, wrong deliveries or the human intervention needed to obtain the information. There is a need for an easy way to share asset information. To achieve this, several measures are already being taken. First of all, the use of BIM standards will facilitate the exchange of information, since this way everybody speaks the same “language”. The AWV also focuses on the controlled delivery of data via their application DAVIE (Data Acceptance, Validation and Information Extraction), through which contractors submit data to the agency. An additional aspect that could help is to actually share the road asset data, instead of throwing it over the wall, as is the case in some situations.

Blockchain-based road asset management

Blockchain is a technology that offers transparency and trust, and that allows data and logic — for example as an interpretation of agreements made — to be shared directly between different parties. These aspects of blockchain could help road managers to exchange information about road assets across different stakeholders and to determine who is the owner or manager of an asset. As such, this “road asset blockchain” would form a shared platform between the various road authorities, which provides an unambiguous version of the truth and ensures smooth and transparent information exchange. For this solution, we assume that a standard for roadside asset information has already been agreed upon.

Within this “road asset blockchain”, different road authorities together form the blockchain consortium, and each consortium partner shares its asset information on the network. This way, we can easily obtain a single version of the truth regarding the ownership of road assets. Road authorities thus each register their assets, and with the help of geospatial queries it is possible to detect and remove assets that are registered twice. With such a system it is also easier to exchange other asset information: all information is already in the network, and ownership is simply transferred. Furthermore, logic could enforce, for example, that every asset must have an owner and a manager, that a certain agreed-upon asset standard is followed, or that maintenance must be done at a specific frequency. The transfer of ownership can also be handled using the shared logic. Additionally, logic could determine who may see which data. Keep in mind, however, that each party participating in the blockchain network has a copy of the data.
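
To make this concrete, here is a minimal sketch of what such shared logic could look like (plain TypeScript pseudo-chaincode; the field names and rules are illustrative, not the actual network logic):

interface RoadAsset {
  id: string;
  type: string;                           // e.g. 'traffic light' or 'road sign'
  location: { lat: number; lon: number };
  owner: string;                          // consortium member that owns the asset
  manager: string;                        // consortium member responsible for maintenance
}

// Shared rule: every registered asset must have both an owner and a manager.
function validateAsset(asset: RoadAsset): void {
  if (!asset.owner || !asset.manager) {
    throw new Error(`Asset ${asset.id} must have both an owner and a manager.`);
  }
}

// Shared rule: ownership transfers are handled by the network logic itself,
// so every consortium partner immediately sees the same single version of the truth.
function transferOwnership(asset: RoadAsset, newOwner: string): RoadAsset {
  return { ...asset, owner: newOwner };
}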

Blockchain as a piece of the puzzle

The consortium members own and manage the “road asset blockchain” together, and together they decide on the data and the rules in the network. The blockchain should, however, be seen as only a piece of the puzzle: it forms the common layer where road asset data, logic and functionalities are shared. This common layer can be addressed by each of the consortium parties from their own systems, whereby an interface or front-end is provided for the users. This interface can be a new application or an existing application that is extended with the new functionalities. The users can be internal employees as well as external parties, such as contractors with whom the road manager cooperates. It is possible that the data is first enriched with additional information from the party’s own systems, or that additional rules are applied within those systems. The consortium parties are also free to make certain functionalities available to users or not, and they can do this in different ways: either via an API or via an application with screens.

In principle, the consortium can also build an application jointly — as a kind of central party — that can, for example, be used by citizens to report defects of road assets. A notification can then automatically be sent to the owner or manager. Note that in such a system, we would not have to waste time looking for the one responsible.

Advantages of a blockchain solution

A “road asset blockchain” has certain advantages. It forms a single version of the truth, without the need for a third party or central administrator. After all, there is currently no central authority that could take up this role, nor is there currently a distributed system which the different parties could join. The proposed system enables the various parties to share the data and logic as equals and also stimulates the reuse of data. The distributed nature of blockchain also ensures that the blockchain consortium can continue to exist in the long term and could survive institutional changes.

Blockchain offers transparency and traceability. This could strengthen confidence in contractors. In addition, the timestamp of the information becomes increasingly important.

Finally, a blockchain network also allows automation between different companies by means of its smart contracts and imposes rules on the network.

A typical alternative to a blockchain solution is to create a central database where all data (in this case concerning all road assets) is gathered and managed by a central authority. The big question here is who should fulfill this role, and to what extent this party is trusted with the data. As stated above, the decentralized and transparent character of blockchain also has certain advantages.

How blockchain could support road asset management was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.


Student & labor mobility are well established within Europe. For example, in 2015 about 1.6 million students were undertaking tertiary-level studies in a foreign country across the EU-28 [1]. When doing this, students need to present their diploma so that the admission requirements can be checked. Likewise, employees need to show their diplomas to prove their qualifications when applying for a job.

No European authentic diploma database

In this era of increased student & labor mobility, we should be able to easily obtain a cross-country overview of one’s diplomas and to check the authenticity of these diplomas. Keep in mind also that diploma fraud is still a current issue, and in many cases it even goes unnoticed. However, a European central authentic diploma database does not exist. Even within Belgium, there is no single central solution. In Flanders, the LED is used as the authentic source, but there is no equivalent in Wallonia. In the end, the schools are contacted to confirm the authenticity of a diploma, which means a lot of administrative work. Furthermore, when going abroad, your diploma also has to be recognized, which can be a cumbersome process. Now, imagine for a second that you are a refugee, you have lost your paper diploma and your school doesn’t exist anymore. Imagine the administrative processes you face at that point [2]. And let’s admit it: even if you never go abroad for work or studies, managing your degree is an annoying process. You have to keep the actual, inconveniently sized paper diploma, scan it and show it when needed (if you still know where it is at that moment). Let it be clear that there is a need for an easy way to authenticate diplomas and to exchange them across borders. Regional ad hoc solutions exist, but these are difficult to scale, and each has its limitations.

Certified for life solution

On assignment of Informatie Vlaanderen, AHOVOKS and la Fédération Wallonie-Bruxelles, TheLedger worked out a blockchain solution for this. We prototyped a decentralized system which different governments and schools can join, and which enables their students to obtain their complete and authentic diploma profile and share it internationally.

Blockchain

“Blockchain solution”, you said? A blockchain is an append-only distributed database with shared business logic on top of it. This technology enables different parties to share data immediately, without the need for a central administrator. With permissioned or enterprise blockchains, the data is only shared with certain parties, so that not just anybody can see and access the data. As such, a blockchain forms a decentralized system, maintained by a consortium of partners. The consortium decides on the data and rules in the system and decides which new partners can join. Of course, such a solution will only work if the consortium has a shared incentive. When the data is shared among all relevant stakeholders, this system could form a single source of the overall truth. All transactions are immutably logged within a blockchain, thereby offering built-in auditability and traceability. These aspects together ensure data integrity. Furthermore, a blockchain allows sharing not only data, but also business rules. This means that we could enforce certain rules over the network, and also enable inter-company automation.

Blockchain consortium

The idea for the certified for life project is that the different governments together form a blockchain consortium. Together, they will be able to provide the civilians with a cross-country and authentic diploma profile. In countries where the government does not interfere with education, or in the case of private schools, the schools themselves could become part of the blockchain consortium. This idea is also visualized in Figure 1.

Figure 1: A blockchain consortium is a group of parties with a common incentive to share data and business logic.

Blockchain functionality

So different governments can together maintain a diploma blockchain, but which functionalities did we include? First, governments and schools should be able to add diplomas to the blockchain. Governments will likely use batch processes to upload the data they already have in their regionally centralized system (if such a system exists). Schools can then add missing diploma data manually (on demand), for example in the case of older diplomas. Civilians should be able to consult their diplomas and manage the visibility of individual diplomas. For sharing their diploma profile, the civilians can use access keys or access links. Different access links with different time frames can be created to distribute to different people. A third-party user can then consult this profile using the access link. When accessing the data, this anonymous user should provide a reason for accessing it, since it concerns personal data. The fact that the access link was used will be reported to the individual concerned, together with the reason provided by the anonymous user, as shown in Figure 2. This way, we bring ownership back to the individual.

Figure 2: User interface for a civilian. Here, the civilian can see a timeline with all actions that happened to his diploma profile. E.g. diplomas which were added or access links which were created, but also the fact that an access link was actually used.

In the built prototype, schools have an overview of their students (i.e. of the diplomas obtained at that school), but they can also add possible future students and consult their profiles, if this student gave them permission (access key).

Furthermore, because different governments might use a different diploma standard, we followed the “bring-your own-standard” principle.

Of course, in a later stage, other functionalities could be added. Here are some examples:

  • Include diploma supplements & transcripts of records. These documents provide additional valuable information which is for example also needed for the admission checks at schools.
  • Include mappings (or even groupings) between diploma standards, which could mean a great deal for the automatization of diploma recognition and equivalence.
  • Link the diploma profiles to the processes of study program registration and admission, allowing for further automation and acceleration of administration.
  • Consider mergers and discontinuations of schools for reasons of governance.
  • Include other learning institutions and certificates, in order to build complete resumes for individuals.
  • Include accreditors and sworn translators.
Identification of civilians

There is an aspect we have not discussed yet, but which is of the utmost importance for the diploma case. Governments and schools assign diplomas to individuals, and therefore they have to uniquely identify those individuals. However, each government knows its civilians by a unique identifier which is only known within that country. There is no such thing as a European national number. You could have a national number in more than one country (e.g. if you study abroad or if you have a dual nationality), but no records are kept of their linking. Unfortunately, even the combination of your name, date of birth and place of birth is not unique. How do we create a cross-country overview of one’s diplomas if we do not have a unique way to identify people across countries? A solution is to let the civilians link their different national identifiers in the current system, whereby they have to sign this transaction digitally with both identifiers. We followed this idea for the prototype. A prerequisite for this is that civilians can digitally prove that a certain national identifier is theirs. This is in fact not that straightforward, since, for example, foreign students don’t usually get a national ID card in that foreign country. This problem could be alleviated with the eIDAS project, which will allow foreigners to identify themselves on governmental sites using their own national ID, which would then be translated to an identification number in that country.
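
A minimal sketch of what verifying such a linking transaction could look like (TypeScript with Node’s crypto module; the payload shape and key handling are illustrative, not the prototype’s actual implementation):

import { createVerify } from 'crypto';

// The civilian links two national identifiers by signing the same payload
// with the key material associated with each identity (e.g. two eID cards).
interface IdentityLink {
  identifierA: string;   // e.g. a Belgian national number
  identifierB: string;   // e.g. a foreign national identifier
  signatureA: string;    // base64 signature made with identity A's private key
  signatureB: string;    // base64 signature made with identity B's private key
}

function verifyIdentityLink(link: IdentityLink, publicKeyA: string, publicKeyB: string): boolean {
  const payload = `${link.identifierA}:${link.identifierB}`;
  const okA = createVerify('SHA256').update(payload).verify(publicKeyA, link.signatureA, 'base64');
  const okB = createVerify('SHA256').update(payload).verify(publicKeyB, link.signatureB, 'base64');
  // Only accept the link when both national identities signed the same payload.
  return okA && okB;
}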

Blockchain architecture & set-up

So different governments can share diploma data and functionality using the certified for life blockchain solution. However, what is often forgotten when thinking about blockchain, is that blockchain is a backend component: it forms a shared database and shared functionality but for example does not contain a front-end. Furthermore, it is up to each consortium partner to integrate its existing IT systems with the blockchain. As such, blockchain is only a piece of the puzzle.

Already existing regionally centralized diploma databases — such as the LED — can be synchronized with the blockchain. The consortium partners can also create user interfaces for other stakeholders: e.g. a front-end for civilians, and an API for schools. Figure 3 visualizes this architectural set-up. Of course, we did not implement the complete picture for the prototype, but enough to make the concepts and ideas clear. The set-up for our prototype is shown in Figure 4. The development was done on Hyperledger Fabric (HLF).

Figure 3: TO-BE architecture of the enterprise blockchain solution

Figure 4: Architectural set-up for the prototype

Advantages of the proposed solution

The proposed solution provides a cross-country diploma overview for civilians and creates a platform where the authenticity of diploma data can be checked. As explained above, ownership and ease of use for the civilian are central aspects of the solution (see also Figure 5): he can decide who can see his profile by creating access links or access keys, and he gets insight into all actions done on his profile.

Figure 5: The certified for life solution brings ownership and ease of use for civilians.

The decentralized nature of the system implies that no single central administrator needs to be appointed, and that the system could outlive the lifespan of individual institutions. It is up to the consortium to decide whether new partners (i.e. schools or governments) can join this decentralized system. This cross-country system allows sharing not only data, but also business rules: the way the data is handled is enforced and controlled by the network. By design, all transactions — i.e. diploma additions or even modifications — are immutable, traceable and auditable. Therefore, the integrity and authenticity of the diplomas are ensured. Furthermore, the solution can be integrated with governments’ and schools’ current IT systems, allowing for easy ways to update and receive student information. The certified for life blockchain solution can be seen as a piece of the puzzle that helps smooth and simplify the administrative processes of governments and schools (see also Figure 6).

Figure 6: The certified for life solution can be easily integrated with schools’ and governments’ IT systems and will smooth and simplify their administrative processes.

Lessons learned & alternative architectures

In the set-up for the prototype, we exchange all diploma information via the blockchain, meaning that the diploma data, the person’s national number and the permissions (access keys) are all stored on the blockchain. With this set-up, we obtain the most advantages and the most ease of use for the civilians, and we can leverage the shared business rules on the blockchain the most. However, concerns are raised in terms of the “right to be forgotten”, the increased risk of data breaches (due to duplication of personal data across the different consortium partners) and the potential misuse of the data by individual consortium partners. Indeed, even if the data is encrypted: if the shared logic must be able to decrypt the information, then the consortium partners themselves are in principle able to decrypt and read the data.

Solutions exist, but they limit the advantages as well. For example, we could work out a solution where only the proof of the diploma is kept on the blockchain in the form of a hash, and the permissions and the encrypted diploma data are kept in the “side databases” of Hyperledger Fabric version 1.2. Here, the diploma data could be symmetrically encrypted with a key, which is itself asymmetrically encrypted with the public key of each individual user that gets access. In these side databases of HLF v1.2, private transactions can be kept, and data can be deleted. In this solution, the private key of the user that got permission to view the diploma data is needed to decrypt the symmetric key and thus the diploma. Therefore, malicious consortium partners are not able to decrypt the diploma data. However, we would not be able to work with access links as proposed in the prototype, and we would not be able to leverage the shared logic as much (such as including mappings between diplomas, as discussed above).
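
To make the encryption scheme concrete, here is a minimal sketch of that envelope-encryption idea (TypeScript with Node’s crypto module; purely illustrative, not the HLF side-database implementation itself):

import { randomBytes, createCipheriv, publicEncrypt } from 'crypto';

// Encrypt the diploma once with a random symmetric key (AES-256-GCM),
// then wrap that key with the public key of every user who gets access.
function encryptDiploma(diplomaJson: string, readerPublicKeys: string[]) {
  const dataKey = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', dataKey, iv);
  const encryptedDiploma = Buffer.concat([cipher.update(diplomaJson, 'utf8'), cipher.final()]);
  const authTag = cipher.getAuthTag();

  // One wrapped copy of the data key per authorized reader; only the matching
  // private key can unwrap it and thus decrypt the diploma.
  const wrappedKeys = readerPublicKeys.map(pem => publicEncrypt(pem, dataKey));

  return { encryptedDiploma, iv, authTag, wrappedKeys };
}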

A second option is the idea of “self-sovereign” identity. Here, the proof of the diploma — again in the form of a hash — would be kept on the blockchain, and a digital version of the diploma would be given to the civilians. This way, the individual is truly the owner of his own personal data and only proofs are kept on-chain. However, be aware that then the student would e.g. get the digital diploma at graduation, and he would have to keep the data himself on his devices or on his cloud storage.

To conclude, I would like to add that there is also a non-technical mitigation for the risk of malicious consortium partners: putting the necessary legal agreements in place with these consortium partners. Keep in mind as well that it is the existing consortium that decides on new partners, so the best option might be to only involve parties in which you have relative trust.

Next steps

With this prototype, we came to a solution for the authentication and cross-country exchange of diploma data using blockchain technology. The prototype shows that the immutability and access control ensure data integrity and proof of authenticity, and that the built-in traceability increases insight, trust & control for the individual.

The next steps are to involve the necessary stakeholders at the European level and start creating a consortium. The consortium should come to an agreement on which set-up to follow, and can then take this set-up to production together.

Special thanks

Besides our clients — Fédération Wallonie-Bruxelles, Informatie Vlaanderen and AHOVOKS — we would like to send a special thanks to our stakeholders at the different schools and educational institutions that participated in this project. We received a lot of valuable input from them during the project, and their enthusiastic collaboration and constructive feedback ensured a workable prototype and important steps towards making this a successful European project.

Certified for Life — International exchange & authentication of diplomas via blockchain was originally published in wearetheledger on Medium, where people are continuing the conversation by highlighting and responding to this story.
