X-Road and Containers (part 2)

This is a series of blog posts about X-Road® and containers. The first part provides an introduction to containers and container technologies in general. The second part concentrates on the challenges in containerizing the Security Server. The Security Server Sidecar – a containerized version of the Security Server – is discussed in the third part.

Container support for X-Road – and for the Security Server especially – has been requested for some years already, but production-level support is not available yet. However, both Central Server (xroad-central-server) and Security Server (xroad-security-server, xroad-security-server-standalone) Docker images are already available for testing purposes on NIIS’s Docker Hub account. This means that different X-Road components can already be run inside containers, so why is production use not supported yet? Let’s consider the question from the Security Server’s point of view. What needs to be taken into account when running the Security Server in a container?

One process per container

According to container best practices, each container should have only one concern and run only a single process. The Security Server consists of multiple processes, including a PostgreSQL database, and the currently available Docker image runs them all in a single container. Decoupling all the Security Server processes into multiple containers would require a significant effort while providing minimal benefit in exchange, since the current architecture has not been designed to run and scale different application processes separately. Supporting that kind of approach would require significant changes to the Security Server architecture.

However, rules and best practices are made to be broken. After all, it is quite common to run multiple processes inside a container. A good approach for the Security Server is to deploy the Security Server application and the PostgreSQL database separately. In that way, the Security Server is split into two parts, while the Security Server application processes remain in the same container. In this case, no software-level changes are required since the Security Server already supports using a remote database, which can be a separate container, a managed database service in the cloud, etc.
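
For example, a deployment along these lines could run the database and the Security Server application as two containers on the same Docker network. The environment variable names, volume names, and published ports below are illustrative only – they are not the exact configuration of the NIIS images:

    # Run PostgreSQL in its own container
    docker network create xroad
    docker run -d --name ss-db --network xroad \
      -e POSTGRES_PASSWORD=<password> \
      -v ss-db-data:/var/lib/postgresql/data \
      postgres:12

    # Run the Security Server application container and point it at the
    # external database (the variable names are hypothetical)
    docker run -d --name ss --network xroad \
      -e XROAD_DB_HOST=ss-db \
      -e XROAD_DB_PORT=5432 \
      -e XROAD_DB_PWD=<password> \
      -p 4000:4000 -p 5500:5500 -p 5577:5577 \
      niis/xroad-security-server-standalone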

Running multiple processes in a container requires that process management is appropriately implemented. When the Security Server is run on a Linux platform, the Security Server processes are managed by systemd, the system and service manager used by the Linux distributions supported by the Security Server, so the use of systemd is built into the Security Server packaging. However, it is not recommended to run systemd inside a container since systemd does things that are typically controlled by the container runtime. Besides, some things systemd does, e.g., changing host-level parameters, are prevented inside containers by default. Therefore, the Security Server processes need to be managed using some other, more lightweight process manager, such as supervisord.
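
A simplified supervisord configuration could look roughly like the sketch below. The program names, commands, and paths are illustrative only and do not match the actual X-Road packaging one-to-one:

    ; supervisord runs in the foreground as the container's main process
    [supervisord]
    nodaemon=true

    ; each Security Server process is declared as a supervised program
    [program:xroad-signer]
    command=/usr/share/xroad/bin/xroad-signer
    autorestart=true

    [program:xroad-proxy]
    command=/usr/share/xroad/bin/xroad-proxy
    autorestart=true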

Persistent storage

The Security Server is a stateful application. Therefore, the configuration in the database and on the filesystem must be persisted beyond the lifecycle of a single container. The data includes local overrides to the default configuration, keys and certificates, registered clients and their configuration, logs, backups, etc. Without persisting the configuration, the Security Server would have to be initialized, configured, registered, etc., every time an existing container is recreated.

When an external database is used, the data in the database is already stored outside the container. However, the configuration data, backups, and message log archives stored on the filesystem must be persisted too. It can be done using persistent storage that is mounted to the Security Server container. Persistent storage keeps the data on the host system and not in the container. Besides, X-Road application logs must be persisted as well, either using the same persistent storage or by redirecting logging to the console so that the container management system can collect and store the logs.
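
In practice, this could mean mounting named volumes over the standard X-Road directories. The volume names and image tag below are examples only:

    # /etc/xroad      configuration, keys and certificates
    # /var/lib/xroad  backups and message log archives
    # /var/log/xroad  application logs (unless redirected to the console)
    docker run -d --name ss \
      -v ss-config:/etc/xroad \
      -v ss-archive:/var/lib/xroad \
      -v ss-logs:/var/log/xroad \
      niis/xroad-security-server-standalone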

Version upgrades

Security Server version upgrades sometimes require running database migrations and updating the contents of the configuration files. Since version upgrades are handled differently with containers than with traditional Linux package management systems, special attention must be paid to Security Server version upgrades. In practice, it means that the upgrade mechanism has to be built into the container image. The mechanism must detect that the application version used by the container differs from the version of the persistent configuration, and perform the steps required by the upgrade. In this way, it is possible to change from an older image to a newer one and keep the existing configuration and data.
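
A container entrypoint could implement the detection roughly as sketched below. The version file location and the run_upgrade_steps helper are hypothetical, not part of the actual images:

    #!/bin/bash
    # Compare the application version baked into the image with the version
    # recorded in the persisted configuration (paths are illustrative).
    IMAGE_VERSION=$(dpkg-query -W -f='${Version}' xroad-proxy)
    CONFIG_VERSION=$(cat /etc/xroad/VERSION 2>/dev/null || echo "none")

    if [ "$CONFIG_VERSION" != "none" ] && [ "$CONFIG_VERSION" != "$IMAGE_VERSION" ]; then
        # The persisted configuration was created by an older image:
        # run database migrations and update configuration files here.
        run_upgrade_steps "$CONFIG_VERSION" "$IMAGE_VERSION"
    fi
    echo "$IMAGE_VERSION" > /etc/xroad/VERSION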

First run

Similarly to version upgrades, there must be a mechanism that detects when a container is started for the first time and there’s no existing, persisted configuration available yet. For security reasons, each container must have unique internal and admin UI TLS keys and certificates, and a unique database password. The secrets are typically generated during the installation process, which in the container context means when the image is created. In practice, it means that all the containers created from the same source image would share the same secrets. In the case of a public Security Server container image, anyone could access the secrets, which would expose all containers created from the image to different kinds of attacks. Therefore, the secrets must be recreated on the first run so that each container has its own unique set of secrets that are not shared with any other container.
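
A first-run check in the entrypoint could look roughly like the sketch below. The file paths are illustrative, and the generated secrets would still need to be wired into the actual Security Server configuration:

    #!/bin/bash
    # If no persisted configuration exists yet, generate container-specific
    # secrets instead of reusing the ones created at image build time.
    if [ ! -f /etc/xroad/ssl/internal.crt ]; then
        openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
            -subj "/CN=$(hostname)" \
            -keyout /etc/xroad/ssl/internal.key \
            -out /etc/xroad/ssl/internal.crt
        head -c 24 /dev/urandom | base64 > /etc/xroad/db-password
    fi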

Hardware security modules (HSMs)

One additional challenge that has not been discussed yet is related to hardware security modules (HSM). For extra security, the sign keys and certificates of the Security Server clients may be stored on an HSM instead of the software token that’s used by default. Different cloud platforms provide cloud HSM services that can be accessed over a network, but when a physical HSM device is required, how can it be connected to containers? Finding an answer to that question is out of the scope of this blog post.

Towards containerization

X-Road version 6 was initially designed to be deployed on Linux hosts (physical or virtual), and therefore, some additional effort is required to enable its production use in containers. However, the challenges related to containerizing the Security Server can be overcome without changing the application itself.

In the long run, the Security Server architecture should be refactored to fully utilize the benefits that containers can offer. At the same time, it’s important to remember that the currently supported Linux platforms must be supported in the future too. Fortunately, the two alternatives are not mutually exclusive. Containers are not going to replace virtual machines, but they will provide an alternative way to run the Security Server.

From Virtual Machines to Containers (part 1)

This is a series of blog posts about X-Road® and containers. The first part provides an introduction to containers and container technologies in general. The second part concentrates on the challenges in containerizing the Security Server. The Security Server Sidecar – a containerized version of the Security Server – is discussed in the third part.

Nowadays, it’s hard to avoid hearing about Docker and containers if you work in the field of IT. This applies to X-Road, too, since questions regarding X-Road and support for containers have been coming up regularly during recent years. But what are containers, and how do they differ from virtual machines?

What are containers?

Containers package an application and all its dependencies, libraries, configuration files, etc., into a single package that contains the entire runtime environment needed to run the application. The package can then be deployed to different computing environments without having to worry about the differences between operating system distributions, versions of available libraries, etc. The differences are abstracted away by the containerization.

The difference between virtual machines and containers is that a virtual machine includes an entire operating system in addition to the application. In contrast, a container only contains the application and its runtime environment. Therefore, containers are more lightweight and use fewer resources than virtual machines. The size of a container may be only tens of megabytes, and it can be started in seconds. A virtual machine with an entire operating system, by contrast, may be several gigabytes in size, and booting it up may take several minutes.

Image 1. A physical server that runs multiple containers compared to a physical server that runs multiple virtual machines.

A physical server that runs multiple virtual machines has a separate guest operating system running on top of it for each virtual machine. In contrast, a server running multiple containers runs only a single operating system whose resources are shared between the containers. However, each container runs in a separate, isolated process that has its own namespace and filesystem. The number of containers that a single server can host is far higher than the number of virtual machines it can host.

Container technologies

Docker is commonly considered a synonym for containers, even if it’s not the only container technology out there. Docker is not the first container technology either, since several other technologies existed already before its launch in 2013. However, Docker was the first container technology to become hugely popular among the masses, which is why the name Docker is often mistakenly used when referring to container technologies in general.

Nowadays, there are multiple container technologies available, and the fundamental building blocks of the technology have been standardized. The Open Container Initiative (OCI) is a project facilitated by the Linux Foundation, which creates open industry standards around container formats and runtime for all platforms. The standardization enables portability between infrastructures, cloud providers, etc., and prevents locking into a specific technology vendor. All the leading players in the container industry follow the specifications.

Images and containers

Images and containers are the two main concepts of container technologies. Therefore, understanding their difference, at least on a high level, is essential.

A container image can be compared to a virtual machine image – except that it’s smaller and does not contain a whole operating system. A container image is an immutable, read-only file that contains the executable code, libraries, dependencies, tools, etc., that are needed for an application to run. An image represents an application and its virtual environment at a specific point in time, and it can be considered a template of an application. An image is composed of layers built on top of a parent or base image, which enables image reuse.

Containers are running instances of images. When a new container is started, the container is created from a source image. In other words, the container is an instance of the source image, just like a process is an instance of an executable. Unlike images, containers are not immutable, and therefore, they can be modified. However, the image from which the container was created remains unchanged. Consequently, it’s possible to create multiple containers from the same source image, and all the created containers have the same initial setup that can be altered during their lifecycle.
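
The relationship is easy to see with a couple of Docker commands. Any public image works here – nginx is used purely as an example:

    # Two containers started from the same image begin from the same initial
    # state, but changes made inside one are not visible in the other.
    docker pull nginx:1.19
    docker run -d --name web-1 nginx:1.19
    docker run -d --name web-2 nginx:1.19
    docker exec web-1 touch /tmp/only-in-web-1
    docker exec web-2 ls /tmp/only-in-web-1   # fails - the file exists only in web-1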

Images can exist independently without containers, but a container always requires an image to exist. Images are published and shared in image registries that may be public or private. The best-known image registry is probably Docker Hub. Images are published and maintained by software vendors as well as individual developers.

Stateful and stateless containers

Containers can be stateful or stateless. The main difference is that stateless containers don’t store data across operations, while stateful containers store data from one run to the next. In general, a new container always starts from the state defined by the source image. It means that the data generated by one container is not available to other containers by default. If the data processed by a container must be persisted beyond the lifecycle of the container, it needs to be stored in persistent storage, e.g., an external volume stored on the host where the container is running. The persistent storage can then be attached to another container regardless of the source image of the other container. In other words, persistent storage can be used to share data between containers.
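
For example, with Docker a named volume outlives the containers that use it and can be mounted into containers created from completely different images:

    docker volume create shared-data
    docker run --rm -v shared-data:/data alpine:3 sh -c 'echo hello > /data/greeting'
    docker run --rm -v shared-data:/data ubuntu:20.04 cat /data/greeting   # prints "hello"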

Handling upgrades

Upgrading an application running in a container also differs from the way applications running on a virtual machine are traditionally upgraded. Applications running on a virtual machine are usually upgraded by installing a new version of the application on the existing virtual machine. Applications running in a container, in turn, are upgraded by creating a new image containing the latest version of the application and then recreating all the containers using the new image. In other words, instead of upgrading the application running in the existing containers, the existing containers are replaced with new containers that run the latest version of the application. However, the approach is not container-specific, since handling upgrades on virtual machines in cloud environments often follows the same process nowadays.
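
With Docker, the upgrade boils down to something like the following. The image name and volume are made up for the sake of the example, and persistent data is assumed to live in the volume rather than in the container itself:

    docker pull example/app:2.0
    docker stop app && docker rm app
    docker run -d --name app -v app-data:/var/lib/app example/app:2.0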

Container management systems

Running a single container or an application consisting of a couple of containers on a local machine for testing or development purposes is a simple task. Running a complex application consisting of tens of containers in a production environment, however, is far from simple. Container management systems are tools that provide capabilities to manage complex setups composed of multiple containers across many servers. In general, container management systems automate the creation, deployment, destruction, and scaling of containers. Available features vary between different solutions and may include, for example, monitoring, orchestration, load balancing, security, and storage. However, running a container management system is not a simple task, and it brings additional complexity to management and operations.

Kubernetes is the best-known open-source container management system. It originated at Google, but nowadays it is widely used in the industry and by different service providers. For example, all the major cloud service providers offer Kubernetes services. When it comes to commercial alternatives, Docker Enterprise Edition is probably the best-known commercial solution, but there are many other solutions available too.

Pros and cons

The benefits of containerization vary between different applications. And sometimes containerization may not provide any benefits. Therefore, instead of containerizing everything by default, only applications that benefit from containers should be containerized.

Containers provide a streamlined way to distribute and deploy applications. Containers are highly portable, and they can be easily deployed to different operating systems and platforms. They also have less overhead compared to virtual machines, which enables more efficient utilization of computing resources. Besides, containers support agile development and DevOps, enabling faster application development cycles and more consistent operations. All in all, containers provide many benefits, but they’re not perfect – they have disadvantages too.

In general, managing containers in a production setup requires a container management system. The system automates many aspects of container management, but implementing and managing the system itself is often complicated and requires special skills. Managing persistent data storage brings additional complexity as well, and incorrect configuration may lead to data loss. Besides, persistent storage configurations may not be fully compatible between different environments and platforms, which means that they may need to be changed when containers are moved between environments. For example, both Docker and Kubernetes have the concept of volume, but they’re not identical and, therefore, behave differently.

All in all, containers offer many benefits, and they provide an excellent alternative to other virtualisation options. However, containers cannot fully replace the other options, and therefore, different solutions will be used side-by-side in the future too.

New Security Server UI and management REST API are here

X-Road version 6 was released in 2015, and it has been continuously developed further throughout the years. So far, the most significant change has been adding support for REST services in 2019. However, the system hasn’t changed much visually since its release in 2015. That’s about to change soon, since X-Road version 6.24.0 will introduce the biggest changes X-Road 6 has experienced yet.

The beta version of X-Road 6.24.0 is already out, and the official release version will be published on the 31st of August 2020.

It’s got the look

The most significant change in X-Road version 6.24.0 is the fully renewed Security Server user interface (UI). The new UI aims to improve the usability and user experience of the Security Server. The new intuitive UI makes regular administrative tasks easier and supports streamlining the on-boarding process of new X-Road members.

Image 1. Add client wizard.

For example, the new UI uses wizards to implement tasks that require completing multiple steps in a specific order, such as adding a new client with a new signature key and certificate. Before, the user needed to know what steps are required and their correct order, but from now on the UI provides the information to the user and guides the user through the process.

Image 2. The new UI provides additional information on different configuration options.

Another essential improvement is providing more additional information regarding different Security Server features in the UI. For example, the Security Server has multiple keys and certificates, and it may not always be clear what different keys and certificates are used for. Therefore, the new UI provides information about different keys, such as authentication and signature keys.

Management REST API

Another significant change in X-Road version 6.24.0 is the brand-new management REST API. The API provides all the same functionality as the UI, and it can be used to automate common maintenance and management tasks. It means that maintaining and operating multiple Security Servers can be done more efficiently as configuration and maintenance tasks require less manual work. By the way, the new UI uses the same API under the hood too.

The Security Server User Guide provides more information about the API, and there’s also the API’s OpenAPI 3 description available on GitHub. Access to the API is controlled using API keys that can be managed through the Security Server UI or through the API itself. In addition, access to the API can be restricted using IP filtering.
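
For instance, listing the clients registered on a Security Server could look roughly like the request below. The host name, port, and API key are placeholders, and the exact endpoints and authentication details should be checked from the Security Server User Guide and the OpenAPI description:

    # Illustrative request against the management REST API
    curl --cacert admin-ui-cert.pem \
      -H "Authorization: X-Road-ApiKey token=<api key>" \
      "https://my-security-server:4000/api/v1/clients"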

Changes in the architecture

The new UI and management REST API have also caused changes in the Security Server architecture and packaging. The previously existing Nginx (xroad-nginx) and Jetty (xroad-jetty) components have been replaced with the new UI and API (xroad-proxy-ui-api) component. These changes affect the Security Server’s log files, directories, software packages, and services. It’s strongly recommended that Security Server administrators study the details of these changes from the release notes before upgrading to version 6.24.0.

Image 3. Changes in the Security Server architecture - before version 6.24.0 (left) and starting from version 6.24.0 (right).

Wait, there’s more!

Even though the new UI and management REST API are the most significant and most visible changes in version 6.24.0, the new version contains many other new features, improvements, and fixes. Here’s a short overview of other changes included in the latest version.

  • Support for running Security Server on Red Hat Enterprise Linux 8 (RHEL8).

  • Updates on operational monitoring protocols that enable monitoring of SOAP and REST services in a more consistent manner. N.B.! The updates cause breaking changes in the operational monitoring protocols.

  • Better support for using external database services on different platforms (e.g. Amazon Web Services, Microsoft Azure, Google Cloud Platform) for both Central Server and Security Server.

  • Changes in allowed characters in X-Road system identifiers and improved validation of the identifiers.

  • Technology updates and decreased technical debt. 

The full list of changes with more detailed descriptions is available in the release notes.

It’s all about users

Another significant change in X-Road over the years is how X-Road is being developed. Nowadays, X-Road users play an essential role in the design and development as a source of input and as validators of the development results. It applies to the new UI, too, since X-Road users have participated in its design and development by providing input, feedback, and comments in different phases of the process. The involvement of the users in the design and development is here to stay, and also the new UI will be further developed and improved based on the feedback received from the field.

Towards the Unicorn

One major change has just been completed, but the next ones are already waiting around the corner. The very first flight of the Unicorn – the release of the beta version of X-Road 7 – is expected to happen by the end of this year, and the first release version should see the daylight in 2021. More information about X-Road 7 and the changes it will introduce will be provided at a later date. Meanwhile, please try out the new X-Road 6.24.0 and tell us your opinion about it!

X-Road Implementation Models

X-Road® has become known as the open-source data exchange layer that is the backbone of the Estonian X-tee and the Finnish Suomi.fi Data Exchange Layer ecosystems. Both ecosystems are nationwide, and they’re open for all kinds of organizations – both public and private sectors. Also, Iceland is currently setting up its national X-Road ecosystem called Straumurinn. Besides, X-Road has been implemented all around the world in many different shapes and sizes.

In general, an X-Road ecosystem is a community of organizations using the same instance of the X-Road software for producing and consuming services. The owner of the ecosystem, the X-Road operator, controls who is allowed to join the community, and the owner defines the regulations and practices that the ecosystem must follow.

Image 1. Roles and responsibilities of an X-Road ecosystem.

Technically, the X-Road software does not set any limitations to the size of the ecosystem or the member organizations. The ecosystem may be nationwide, or it may be limited to organizations meeting specific criteria, e.g., clients of a commercial service provider. Thanks to its scalable architecture and organizational model, X-Road is exceptionally flexible, and it supports various kinds of setups. Even if a nationwide implementation of X-Road is probably the best known implementation model, X-Road can be used in many other ways too. Let’s find out more about the different alternatives.

National data exchange layer

National implementation is probably the most typical way to implement X-Road. In a national implementation, X-Road is implemented nationwide within a country, and the aim is to use it in data exchange between organizations across administration sectors and business domains. Typically, the ecosystem is open for all kinds of organizations – both public and private sector organizations. However, it is also possible to restrict the implementation to cover only the public sector, specific administration sector, business domain, or a combination of these.

Besides, X-Road can be used to implement cross-border data exchange with other countries that have a national X-Road implementation. In practice, the ecosystems of different countries are connected using federation – an X-Road feature that enables connecting two X-Road environments. Federation enables member organizations of different ecosystems to exchange data as if they were members of the same ecosystem.

Image 2. X-Road federation - connecting two X-Road ecosystems.

In a national implementation, a government agency is usually the owner of the ecosystem. The owner takes the role of the X-Road operator, who is responsible for all the aspects of the operations. The responsibilities include defining regulations and practices, accepting new members, providing support for members, and operating the central components of the X-Road software. Technical activities can be outsourced to a third party, but administrative and supervising responsibilities are carried out by the operator.

There are multiple implementations around the world where X-Road is used as a national data exchange layer. The best known national X-Road ecosystems are in Iceland, Finland, and Estonia.

Data exchange solution for regions

Regional implementation means implementing X-Road within a region or an autonomous community, such as a province or a state. In a regional implementation, X-Road is used within the region, and the scope is usually very similar to the national implementation – data exchange between organisations across administration sectors and business domains. However, the scope may be more restricted as well. Besides, X-Road may be used to exchange data with the central government and/or other regions.

In a regional implementation, a regional agency or authority is usually the owner of the ecosystem. The owner takes the role of the X-Road operator, who is responsible for all the aspects of the operations. Some of the technical activities may be outsourced, just like in the national implementation.

As an alternative approach, the national implementation described earlier may consist of multiple regional implementations too. Every region or some of the regions within a country can have their own X-Road ecosystems that are connected using federation. However, compared to a single national implementation, this approach generates more overhead since every region must manage and operate its own X-Road ecosystem. Therefore, when targeting a national implementation, a single national ecosystem is recommended over multiple regional ecosystems connected using federation.

One example of a regional implementation can be found in Argentina. The province of Neuquén in Argentina is using X-Road as a regional data exchange platform. Also, some regions in other countries are currently considering the use of X-Road on a local level.

Data exchange within a business domain or sector

In national and regional implementations, X-Road is implemented within a geographic area, such as a country or a region. However, there is no reason why an X-Road ecosystem could not span multiple states and/or regions as long as there’s an organisation that takes the role and responsibilities of the X-Road operator. A practical example of this kind of approach is implementing X-Road within a business domain or sector in which members are located in different countries around the world. However, X-Road could be implemented within a business domain or sector on the national level too.

The critical factor is that all members commit to following the rules and policies of the ecosystem set by the X-Road operator. In this case, the use of X-Road is based on a mutual agreement between the members of the ecosystem. In national and regional implementations, the use of X-Road is often based on a law or a regulation issued by a governmental or regional authority.

In case different business domains have their own X-Road ecosystems, they can be connected using federation, which enables data exchange between member organisations of different business domains. Technically, a business domain-specific implementation can be connected to a national or regional X-Road ecosystem too.

X-Road based business domain-specific solutions have been implemented in several countries. For example, in Germany X-Road is being used to exchange healthcare data, and in Estonia, the X-Road based Estfeed platform is utilised in energy sector data exchange. Besides, Estfeed is also applied by the Data Bridge Alliance to exchange energy data on a cross-border level.

A platform for data exchange within an organisation

The primary use case for X-Road is data exchange between organisations, but there is no reason why X-Road could not be used to exchange data within an organisation too. For example, a large international organisation that has branches and departments in different countries and continents may have information systems that communicate over the public Internet. X-Road provides a solution to connect those systems in a standardised and secure manner, guaranteeing the confidentiality, integrity, and interoperability of the data exchange.

When it comes to the organisational model of X-Road, one of the departments takes the role of the X-Road operator, and other branches and departments are members of the ecosystem. In addition to connecting information systems communicating over the Internet, X-Road could be used inside a private network of an organisation too.

One example of corporate use of X-Road can be found in Japan. A major Japanese gas company uses an X-Road based solution to exchange data between its different organisation units. Another interesting approach to corporate use is building a commercial product on top of X-Road. Since X-Road is open source and licensed under the permissive MIT license, it can be utilised in commercial closed source products too. For example, Planetway, a Japanese-Estonian company, has built its PlanetCross platform using X-Road.

For clarity, X-Road is not a service mesh platform for microservices, such as Istio. X-Road is meant for data exchange between information systems over the public Internet, whereas service mesh platforms are used as a communication layer between different microservices in a microservices architecture. The high-level capabilities that X-Road and many service mesh solutions provide may seem very similar. Still, the way they have been implemented is optimised for very different use cases. Therefore, X-Road is not to be confused with service mesh solutions.

How would you use X-Road?

As we have learned, X-Road can be implemented in many different ways. The right way always depends on the use case, requirements, and operating environment. Thanks to its distributed architecture, X-Road is highly scalable and is, therefore, a good fit for implementations of all sizes. It also enables different approaches when it comes to the speed and scale of the implementation – starting small with a few member organisations and services, or going live with a big bang with a bunch of members and connected systems.

If you’re interested in the upcoming changes in the X-Road core, please visit the X-Road backlog. Anyone can access the backlog, and leave comments and submit enhancement requests through the X-Road Service Desk portal. Accessing the backlog and service desk requires creating an account that can be done in a few seconds using the signup form.

X-Road development going full steam in 2020

The year 2020 has started like the previous one ended, with the X-Road development going on at full steam. The first X-Road release of the new decade saw the daylight in February, which means that X-Road releases have now been published in three different decades. The first production-level X-Road version was released in 2001 – almost 20 years ago. It does not mean that X-Road is cooling down – on the contrary, the near future brings a bunch of changes to X-Road that take it to a whole new level. However, getting there does not happen overnight.

The changes are implemented using an iterative approach, which means that every new X-Road release brings something new to the table. The changes start from version 6.24.0, but the most significant milestone will be the release of X-Road 7 in 2021. We have published a high-level X-Road development roadmap for 2020 so that everyone can see what kind of new features are coming out and when. The roadmap is available on the X-Road website.

The first release of the year, version 6.23.0, was published in February. The release was all about the Central Server, and it introduced changes in the Central Server high-availability support. More information about the changes can be found in my previous blog post and the official release notes.

New Security Server admin UI and API

As you probably know, we have been working on the new Security Server UI and administrative REST API for some time already. The work is not fully completed yet, but at this point, it can be said that the new UI and API will be included in version 6.24.0. The release of the new UI and API is probably the most significant change in X-Road core since the first release of X-Road version 6 in 2015 – even more significant than the long-awaited REST support in 2019. Technically, the new UI and API are built on top of the existing X-Road core. However, the implementation technologies have been updated in the process.

The new UI provides improved user experience (UX) for Security Server administrators. The new UI has a new look and feel, and it makes taking care of administrative tasks easier and supports streamlining the onboarding process of new X-Road members. The administrative REST API will enable automation of Security Server maintenance tasks since all the features that are available through the UI are available through the API too. Maintaining and operating multiple Security Servers can be done more efficiently as configuration and maintenance tasks require less manual work.

Supported platforms

Currently, the Security Server officially supports the Ubuntu 18.04 LTS and Red Hat Enterprise Linux 7 (RHEL7) platforms, while the Central Server and Configuration Proxy officially support only Ubuntu 18.04 LTS.

In 2020 official support for Ubuntu 20.04 LTS will be added to the Central Server, Configuration Proxy, and Security Server. Also, official support for RHEL8 will be added to the Security Server.

Version 6.21 is the last X-Road version that supports Ubuntu 14.04 LTS. It is good to keep in mind that once version 6.24.0 is released, version 6.21 drops out of the supported X-Road versions list. X-Road components still running on an Ubuntu 14.04 LTS host cannot be upgraded to a newer X-Road version anymore without first upgrading the underlying host operating system.

X-Road 7

The development of the core components of X-Road version 6 continues actively throughout the year 2020. It has been decided that X-Road 7 will be built on top of version 6, which means all the enhancements implemented for version 6 will benefit the development of version 7 too. Making the current codebase more modular and reducing technical debt are also important goals for this year. Enabling the smooth implementation of new features planned for version 7 requires implementing certain changes to the current codebase upfront. However, the aim is to implement all the changes in a backwards-compatible manner. It means that the version upgrade from version 6 to 7 will be no different from a version upgrade between the minor versions of version 6.

X-Road 7 will be implemented iteratively using agile software development methods. It means that changes and new features will be implemented in small pieces, with every new version building on top of the previous one. In practice, this means that the first release of X-Road 7 will not include all the new features planned for version 7, but only a minimal subset of them. In the following versions, new features will then be added piece by piece, and existing features will be further developed based on user feedback.

In parallel with the technical track, we’re also actively working on the design of X-Road 7. Multiple activities will be carried out throughout the year, and X-Road users and stakeholders will have an active role in the process. Feature-wise, the target areas for this year are messaging patterns, message logging, and the onboarding process.

X-Road extensions

In addition to the X-Road core, the maintenance and further development of two X-Road extensions will be handed over to NIIS by the Estonian Information System Authority (RIA). The extensions are X-Road 6 Monitor Project and Mini Information System Portal 2 (MISP2). The handover will take place during the first half of 2020.

X-Road and eDelivery

X-Road and eDelivery are both data exchange solutions that have been successfully used in multiple implementations in several countries and/or projects. They both provide a standardised and secure way to exchange data over the Internet. eDelivery is a building block of the Connecting Europe Facility (CEF).

NIIS is currently implementing a gateway between eDelivery and X-Road that will enable data exchange between eDelivery and X-Road ecosystems. A technical proof-of-concept level implementation has already been completed, and more detailed design is being drafted in collaboration with the European Commission’s Directorate-General for Informatics (DIGIT). The actual implementation of the gateway will begin later this year.

NIIS is looking for organisations that are interested in piloting the gateway. In case your organisation is an X-Road or eDelivery user and would like to exchange data with an organisation that is using the other platform, please contact NIIS for more detailed information.

Want to know more?

If you’re interested in more detailed information about the upcoming changes, please visit the X-Road backlog. Anyone can access the backlog, and leave comments and submit enhancement requests through the X-Road Service Desk portal. Accessing the backlog and service desk requires creating an account that can be done in a few seconds using the signup form.

When X-Road is developed, and new features are added, the X-Road technology stack changes too. X-Road Tech Radar provides up-to-date information on different technologies used in X-Road.

Changes in the X-Road Central Server High Availability Support

The Central Server is one of the key components of an X-Road ecosystem. It contains a registry of X-Road member organisations and their Security Servers. In addition, the Central Server contains the security policy of the X-Road instance, which includes a list of trusted certification authorities, a list of trusted time-stamping authorities, and configuration parameters. Both the member registry and the security policy are made available to the Security Servers over the HTTP protocol. This distributed set of data forms the global configuration that the Security Servers use for mediating messages sent via X-Road. An X-Road operator is responsible for operating the Central Server.

Image 1. X-Road architecture and roles.

To be able to mediate messages, a Security Server must have a valid copy of the global configuration available at all times. The Security Server downloads the global configuration from the Central Server regularly and uses a local copy while processing messages. The Security Server remains operational as long as it has a valid copy of the global configuration available locally. This means that the Central Server may be unavailable for a limited time period without causing any downtime to the ecosystem. However, registering new members or subsystems is not possible without the Central Server. Both the download interval and the global configuration validity period can be configured according to the requirements of the X-Road ecosystem.

Design for Failure

An X-Road ecosystem is very fault tolerant against Central Server failures even with only one Central Server node. However, critical information systems should always be designed for failure so that they remain operational despite a failure of individual components.

The Central Server supports high availability through clustering, which provides additional fault tolerance and scalability from a performance point of view. A Central Server cluster consists of two or more Central Server nodes. The cluster is based on an active-active model, which means all the nodes can be used for both read and write operations. In case one of the nodes fails, Security Servers are able to fail over to other available nodes.

Why Are Changes Needed?

Until X-Road version 6.22, the clustering implementation was based on asynchronous, active-active database replication between the nodes. Unfortunately, the technology that was used in the implementation reached its end-of-life in December 2019, and newer versions of the same technology are not available under an open source license. Therefore, there was no other choice than to give up the BDR plugin for PostgreSQL by 2ndQuadrant and update the high availability support implementation of the Central Server. Continuing with a newer version of the BDR plugin for PostgreSQL would have meant that every X-Road operator using clustering would have been required to buy a commercial license for the plugin.

Image 2. Central Server high availability implementation until version 6.22.

What Will Change?

Starting from version 6.23, the Central Server high availability implementation is based on a shared, optionally highly available database. Before version 6.23, every Central Server node in a cluster had its own database, and changes were synchronized using multi-master database replication between the nodes. X-Road provided tools to set up the cluster and the replication between the nodes. Starting from version 6.23, all the Central Server nodes share the same database, which can be a standalone database, a database cluster, a fully managed database service in the cloud, etc. X-Road provides instructions on how to configure the Central Server nodes in the cluster, but implementing high availability of the database is out of X-Road’s scope. However, the documentation provides instructions for setting up a replicated PostgreSQL database, although it does not cover automatic failover.

Image 3. Central Server high availability implementation starting from version 6.23.

Compared to the previous implementation, the new implementation is more flexible because it gives the X-Road operator the freedom to choose how high availability is implemented on the database level, whereas the previous implementation was tied to the BDR plugin for PostgreSQL. At the same time, more flexibility also brings more responsibility, as implementing the high availability of the database is now the X-Road operator’s responsibility.

Available Resources

The official X-Road documentation provides an updated Central Server High Availability Installation Guide. In addition, the X-Road Knowledge Base provides an article about migrating Central Server clusters from version 6.22 to version 6.23. It is highly recommended for all the X-Road operators to read these documents before updating clustered Central Servers to version 6.23.

Try It Out!

X-Road 6.23.0-beta is now available for testing, and the production version will be released by the end of February 2020. We would like to receive feedback about the new version and/or any possible challenges regarding migration to the new version.

Interoperability Puzzle

In today’s digital world, information is stored across multiple information systems owned and maintained by different organisations. In addition to information spreading across multiple organisations, every organisation internally has numerous information systems that store information. Most digital services and processes require accessing multiple information systems and combining data from different sources – both inside an organisation and across multiple organisations. Without connections between different information systems, building digital services would be extremely challenging, if not impossible.

The ability of information systems to exchange and utilize information is known as interoperability. Despite what the term may first sound like, interoperability is not only about technology and technical connectivity. On the contrary, interoperability consists of different layers, of which technology is only one. The European Interoperability Framework (EIF) defines four layers of interoperability:

  • legal – aligned legislation

  • organisational – coordinated processes

  • semantic – precise meaning of exchanged information

  • technical – connecting information systems and services.

Image 1. EIF conceptual model. (source)

All four layers are equally important when building digital services and processes. In addition, challenges on one layer are often reflected in other layers too. Therefore, it is important to be aware of all the layers and not to neglect any of them. That being said, in this blog post I’m going to concentrate on the technical layer and its dimensions, because covering all the layers at once would be too big a bite to chew.

Data Exchange Scenarios

When it comes to a public sector organisation exchanging information, three top level data exchange scenarios can be recognized:

  • Internal – data exchange within an organisation

  • National – data exchange on national level

  • Cross-border – international data exchange.

The same rules, laws, and regulations don’t apply to national and cross-border data exchange, which is why they are two separate scenarios instead of a single “external” scenario. Cross-border data exchange between authorities usually requires both state-level agreements and data exchange agreements between the data exchange parties. The two scenarios could probably be combined into a single scenario, making the total number of different scenarios two: internal and external.

The common factor between the scenarios is that all three require certain basic technical elements including, but not limited to, connectivity, secure communication protocols, interfaces, and integration services. The more standardized these elements are, the less work is required to build new connections between information systems and services. For example, if there’s no commonly agreed solution for securely connecting information systems to each other and for managing the connections, the result is probably a jungle of point-to-point connections, which means agreeing on the connection details and then building the connections every time a new connection is needed – repeated again, again and again.

However, even if the technical basic elements in all the scenarios are the same, they are usually implemented using different technical solutions and technologies. Implementing a standardized connectivity layer within an organisation is usually based on different technology than a standardized connectivity layer with external parties. Let’s take a look at an example of an organisation that has a microservice-based information system with REST APIs published to external consumers.

Image 2. A microservice-based information system with REST APIs published to external consumers.

Internal Communications

Internally the information system uses a service mesh to facilitate service-to-service communications between microservices. A service mesh is a dedicated infrastructure layer that provides features such as standardized and secure connections, service discovery, and centralized logging and monitoring capabilities. Microservices communicate with each other through a service mesh proxy that is usually responsible for microservice level authentication, message routing, service discovery, automatic retries, timeouts, logging etc. As these features are provided by the proxy, they do not need to be implemented in the application code of each microservice separately. In addition, a service mesh usually has a centralized control plane that can be used to configure the proxies, and access logging and monitoring information etc.

Requests originating outside of the mesh typically enter the mesh through a service mesh gateway component. Available capabilities vary between different solutions, but in general, a service mesh is designed to manage traffic internal to the mesh. In this case, the example is very simple, but in real life a service mesh could serve multiple information systems and span multiple networks and data centers.

Exposing Services Externally

When it comes to accepting traffic from outside of an organisation, an API gateway comes into the picture. An API gateway exposes backend services as managed APIs and distributes traffic internally – in and out of the service mesh. An API gateway provides a single entry point to all clients, and hides the details of individual microservices. An API gateway also typically provides capabilities such as logging, monitoring, metrics, access control, request limiting, message transformations, orchestration etc. In addition, an API gateway is usually well connected to other components of the API management ecosystem, e.g. API marketplace and API publishing portal.

Even though API gateways and service meshes are complementary solutions, they have many overlapping functionalities and features. They are often deployed together, but they can be deployed separately as well. In addition, an API gateway can be used for internal purposes too – not only for publishing services to external clients. Similarly, a service mesh could be used to publish services to external clients.

What Does X-Road Bring to the Puzzle?

So far, I have been writing about internal and external data exchange, but I haven’t written a word about X-Road yet. At this point, you may be wondering what X-Road is needed for if internal and external data exchange can be implemented using other technologies.

First of all, X-Road is best suited for external data exchange over the public Internet. The most common use case is data exchange between two organisations, but a single organisation may have information systems that are hosted in different locations and communicate with each other over the Internet too. In this case X-Road is a good fit for internal data exchange as well.

At first sight, X-Road may seem like a service mesh, as the architecture and feature sets have many similarities – both provide secure and standardized connections, service-to-service authentication, logging, reporting, etc. In addition, both are based on an architecture model that implements service-level communication through a proxy component. However, X-Road is not a service mesh, as a service mesh is the connection layer between different services in a microservices architecture. In other words, a service mesh is used as an internal connection layer within an application or between multiple applications of a single organisation, whereas X-Road is used as a connection layer between different organisations and information systems.

How about X-Road and an API gateway then – are they mutually exclusive, or can they be used side by side? X-Road and an API gateway are both used to publish services to external clients. Their architecture and feature sets are different even though they have features in common too, e.g., publishing APIs to external clients, service-to-service authentication, authorization, logging, and metrics. The major difference between X-Road and an API gateway is that X-Road requires that a Security Server is used on both the service consumer and service provider side, whereas an API gateway accepts client connections directly without any additional components on the client side.

Image 3. Point-to-point connections, an API gateway and X-Road in comparison.

Overall, an API gateway provides more flexibility and API management related features compared to X-Road, but when the same client communicates with multiple API gateways, the client must adapt to the different requirements and configurations of multiple service providers. X-Road, in turn, provides a single communication channel between multiple service providers and services that all share the same configuration, which is automatically distributed and applied by X-Road. In addition, X-Road guarantees that both the service consumer and the service provider meet the same security requirements, and it guarantees non-repudiation of all the processed messages by signing, time-stamping, and logging every processed message on the consumer and provider side. The logs can be used in a court proceeding as evidence. These features make X-Road an ideal solution for secure, reliable, and auditable data exchange.

One Happy Family

X-Road, an API gateway and a service mesh all have their place in the interoperability puzzle, and they can be used together side-by-side. They all have their own strengths and they can be used to complement each other.

X-Road is an ideal solution for secure data exchange that requires strong authentication of data exchange parties and non-repudiation with recorded eIDAS compliant evidence. X-Road can connect to backend services directly or through an API gateway. X-Road does not support message transformations, orchestration, rate limiting, quotas etc. which can be implemented in the API gateway layer if they are required.

Some APIs may not require strict security controls or they should be accessible without an additional access point on the client side, e.g. APIs providing open data. There’s no reason why an API could not be published through multiple channels, for example an API providing open data can be published through both X-Road and an API gateway. The benefit of this approach is that organisations that are not using X-Road can access it directly through an API gateway and organisations using X-Road can access it using the same channel they use to access other services and APIs too.

Image 4. The example application with X-Road.

Let’s go back to the different data exchange scenarios mentioned earlier – internal, national and cross-border. X-Road is a good fit for national and cross-border data exchange, and it can be used for certain internal data exchange use cases too. An API gateway can basically be used for all the scenarios, but depending on the use cases and their requirements X-Road might be a better choice for external data exchange and a service mesh for internal data exchange. Last but not least, a service mesh is best suited for the internal scenario for microservice-based applications.

Disclaimer

Finally, it must be said that there’s one major difference between X-Road, an API gateway and a service mesh that has not been brought up yet. An API gateway and a service mesh are architecture patterns with multiple implementations that all have their own sets of features and functionalities. In this blog post I have compared the API gateway and service mesh patterns to X-Road on a general level, without referring to any specific implementation, solution or product. X-Road, in contrast, is a product with a specific set of features and functionalities. This means that conceptually X-Road, an API gateway and a service mesh are not the same thing.

Get Some More REST

Over the last year and a half I’ve written multiple blog posts about X-Road and REST. They have covered the whole REST support journey from design to implementation and release, including the implementation plans, technical design details and the release of the first X-Road version with REST support – version 6.21.0, released in April 2019. Currently we’re putting the finishing touches on the second stage of the REST support implementation, which provides even more built-in REST-related features. The results are included in version 6.22.0, which will be released in October 2019.

And for the readers interested in the technical details at the source code level, the code implementing the REST support is available in the develop branch of the X-Road master repository on GitHub.

For clarity, adding support for REST does not mean dropping support for SOAP. No changes are required to information systems consuming and producing SOAP services via X-Road. Instead, the two architectural styles can co-exist side by side, which means that all the current SOAP services will be supported in the future too.

Basic Support for REST

Version 6.21.0 already provided basic support for consuming and producing REST services:

  • Basic REST functionality

    • Message exchange with signing and time-stamping

    • Message logging with archiving

    • Downloading and verification of log records

  • Adding a REST service using a URL

  • Operational monitoring of REST services

  • Service-level authorization

  • Certificate-based authentication (clients + services)

Version 6.22.0 will provide all the REST-related features included in the previous version plus a set of completely new features. Let’s find out what they are!

Metaservices for REST

Metaservices are built-in Security Server services that X-Road member organisations can use to discover which services other members provide and to download the descriptions of those services. Until now, the metaservices have been available over SOAP only, but starting from version 6.22.0 they are available over REST too. The responses of the REST metaservices are always returned in JSON, as the Security Server does not currently support other content types in the responses.

In version 6.21.0 the SOAP versions of the metaservices return information about both available SOAP and REST services. This is somewhat confusing, as a SOAP client is unlikely to be interested in information about REST services, and vice versa. Therefore, in version 6.22.0 the functionality has been changed so that the SOAP versions return information about the available SOAP services only and, similarly, the REST versions return information about the available REST services only. This means that if information about all the available SOAP and REST services needs to be collected, both the SOAP and the REST versions of the metaservices must be invoked.
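
As a concrete illustration, the REST metaservices can be invoked with any plain HTTP client. The minimal Python sketch below uses a hypothetical Security Server address and made-up member identifiers; the URL pattern and the X-Road-Client header follow the Service Metadata Protocol for REST.

```python
import requests

# Hypothetical values - replace with a real Security Server address and X-Road identifiers.
SECURITY_SERVER = "https://security-server.example.org"
CLIENT_ID = "EXAMPLE/GOV/1234567-8/ClientSubsystem"      # consumer subsystem
PROVIDER_ID = "EXAMPLE/GOV/8765432-1/ProviderSubsystem"  # provider subsystem

# listMethods returns the REST services of the provider; the consumer identifies
# itself with the X-Road-Client header.
response = requests.get(
    f"{SECURITY_SERVER}/r1/{PROVIDER_ID}/listMethods",
    headers={"X-Road-Client": CLIENT_ID},
)
response.raise_for_status()
# The exact response structure is defined in the Service Metadata Protocol for REST.
print(response.json())
```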

More detailed information about the metaservices can be found in the Service Metadata Protocol for SOAP and the Service Metadata Protocol for REST.

Support for OpenAPI 3 Descriptions

Existing REST services can be published in X-Road as-is – just like in version 6.21.0. Unlike with SOAP services, the Security Server does not require X-Road-specific information to be present in the responses returned by REST services. Certain X-Road-specific information is still included in the response message returned to a client information system, but the Security Server takes care of adding the required information to the response message’s HTTP headers.

In version 6.21.0, publishing a REST API is done by defining the base URL of the REST API and a service code. This is still possible in version 6.22.0, but in addition it is possible to publish a REST API using an OpenAPI 3 description. When a new REST API is published, it is possible to choose whether it is added using the base URL of the API or the URL of an OpenAPI 3 description of the API. The description can be provided in either JSON or YAML format. This means that providing an OpenAPI 3 description is supported, but not mandatory. All REST APIs added using version 6.21.0 will continue to work without any changes in the configuration.
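
To illustrate what the Security Server picks up from a description, below is a minimal, hypothetical OpenAPI 3 document expressed as a Python dictionary and serialized to JSON – the paths and HTTP methods are exactly the information that is read and turned into endpoints.

```python
import json

# A minimal, hypothetical OpenAPI 3 description of a pet API.
# The Security Server reads the paths and HTTP methods defined here and
# lists them as endpoints in the UI for access rights management.
openapi_description = {
    "openapi": "3.0.0",
    "info": {"title": "Example Pet API", "version": "1.0.0"},
    "paths": {
        "/pets": {
            "get": {"summary": "List pets", "responses": {"200": {"description": "OK"}}},
            "post": {"summary": "Add a pet", "responses": {"201": {"description": "Created"}}},
        },
        "/pets/{id}": {
            "get": {"summary": "Get a single pet", "responses": {"200": {"description": "OK"}}},
        },
    },
}

# Hosted at a URL in JSON or YAML form, a description like this can be given
# to the Security Server when a new REST service is added.
print(json.dumps(openapi_description, indent=2))
```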

Image 1. Adding a REST API in the version 6.22.0.

The first benefit of providing an OpenAPI 3 description is that other X-Road members can query the OpenAPI description using the new getOpenAPI metaservice – just like it is possible to query the WSDL descriptions of SOAP services using the getWsdl metaservice. Another benefit is that the Security Server reads all the API endpoints defined in the description and makes them visible in the Security Server UI. The endpoints can then be used in access rights management – more about that later.

Image 2. List endpoints of a REST API.

It’s also possible to add endpoints manually. However, manually created endpoints are not visible to other X-Road members through the getOpenAPI metaservice, but they can be used in access rights management just like the endpoints read from an OpenAPI description. Manually created endpoints can be updated and deleted by Security Server administrators. In contrast, endpoints read from an OpenAPI description cannot be manually updated or deleted; they can be updated and/or deleted by updating the OpenAPI 3 description and then refreshing it on the Security Server. The same logic applies to updating SOAP services through WSDL descriptions.

Image 3. Adding an endpoint manually.

Beyond access rights management, the Security Server does not use the endpoint-related information for anything; for example, it does not validate whether an endpoint specified in a client information system’s request actually exists under the API. In other words, if a client information system has sufficient access rights to invoke an API endpoint, the Security Server forwards the request to the specified endpoint without any further validation.

More Fine-Grained Authorisation

In version 6.21.0, REST APIs are authorized on the API level. In practice, this means that access rights are defined for all endpoints of an API at once. Sometimes this is fine, but at other times access rights need to be defined on a more fine-grained level, e.g. access to a specific endpoint only, or read access only without permissions to add or modify data. The use of endpoints makes it possible to define access rights on this more fine-grained level.

Starting from version 6.22.0, access rights for REST APIs can be defined on two levels: the REST API level and the endpoint level. A REST API usually has multiple endpoints. When access rights are defined on the API level, they apply to all the endpoints of the API. Defining access rights on the endpoint level instead enables more fine-grained access rights management, as access rights are defined using an HTTP request method and path combination. Therefore, it is possible to define access rights for a single endpoint or, alternatively, for a subset of endpoints using wildcards.

When a client application has access rights on the API level, it can access all the endpoints of the API. If clients must not have access to all the endpoints, access rights must be defined on the endpoint level. The Security Server’s access rights management only supports allowing access – explicitly denying access is not supported, e.g. it is not possible to allow access to all endpoints on the API level and then deny access to a single endpoint.
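
To make the endpoint-level model more tangible, here is a rough Python sketch of the matching idea – not of the Security Server’s actual implementation. The service paths and patterns are made up for the example; the point is that each access right is an allowed HTTP method and path pattern combination, that wildcards can cover a subset of endpoints, and that anything without a matching allow rule is denied.

```python
from fnmatch import fnmatch

# Hypothetical endpoint-level access rights granted to one client subsystem:
# each entry is an allowed (HTTP method, path pattern) combination.
# Only "allow" rules exist - anything that does not match is denied.
allowed_endpoints = [
    ("GET", "/pets"),    # read access to the collection
    ("GET", "/pets/*"),  # read access to any single pet (wildcard)
    # no POST/PUT/DELETE rules, so the client cannot add or modify data
]

def is_allowed(method: str, path: str) -> bool:
    """Return True if the request matches at least one allowed endpoint pattern."""
    return any(
        method == allowed_method and fnmatch(path, pattern)
        for allowed_method, pattern in allowed_endpoints
    )

print(is_allowed("GET", "/pets/42"))     # True  - covered by GET /pets/*
print(is_allowed("DELETE", "/pets/42"))  # False - no matching allow rule
```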

What’s Next?

The version 6.22.0-beta is already out, and the official 6.22.0 release will follow in October 2019. The beta version already provides all the REST-related features included in the final release; the last weeks are reserved for fine-tuning and testing.

The REST support implementation has been done in phases, which means that REST-related features have been added over several X-Road versions – every new version adding something new. Version 6.23.0 does not have any new REST-related features planned yet, but it will likely contain smaller improvements to existing features, e.g. performance optimisations. In addition, we hope to receive feedback and enhancement requests from you regarding the existing REST functionality. Improvements and new features may be added to the roadmap based on the feedback received.

In case you have not checked out the X-Road REST support yet, it’s time to do it now!

X-Road and eDelivery – Identical Twins or Distant Relatives?

Building a digital society and digitizing public services both nationally and across borders are hot topics in Europe right now. Standardised and secure data exchange is one of the key enablers that must be in place to succeed in the task. The good news is that there are already solutions and building blocks available for the job. Instead of reinventing the wheel and building everything from scratch, it is possible to use off-the-shelf, battle-proven solutions that have already been used successfully in multiple implementations.

It goes without saying that X-Road is one of the available solutions. Another solution that is often mentioned in the same context is eDelivery – a building block of the Connecting Europe Facility (CEF). X-Road and eDelivery are both data exchange solutions that have been successfully used in multiple implementations in several countries and/or projects. Technically, they are both based on a distributed architecture, and both are enablers of decentralized data management. At first glance they may seem very similar, even competitors to each other. But is that really the case? Let’s find out.

Architecture

On the architectural level, eDelivery and X-Road have many characteristics in common as they are both based on the four-corner model. The basic idea of the model is that information systems do not exchange data directly with each other. Instead, information systems are connected through additional access points that implement the same technical specifications and are therefore able to communicate with each other. In addition, access points usually provide common features required in data exchange, e.g. message routing, security, logging, authentication etc. Both X-Road and eDelivery also have an address registry and tools for capability lookups that are used in message routing and service discovery.

Image 1. Four-corner model explained through X-Road architecture.

The similarities do not end there. In both X-Road and eDelivery the trust model is based on digital certificates, and both guarantee the non-repudiation of messages and of the identities of the message exchange parties using digital signatures. In addition, in both cases the message transport protocol used between the access points is based on MIME/multipart messages, even if the structure of the messages is not the same. X-Road and eDelivery can be used to exchange both data and documents. They are also both payload agnostic, which means that they can be used to transfer any kind of data (structured, non-structured and/or binary), e.g. purchase orders, invoices, JSON, XML, PDF etc.

Cross-Border and Cross-Sector Data Exchange

Technically, eDelivery supports both cross-border and cross-sector data exchange. However, eDelivery is typically implemented within a policy domain, and different policy domains have their own implementations with domain-specific operations and management models. In practice, each policy domain creates its own eDelivery subdomain, and all the eDelivery components that belong to the same subdomain trust each other. Usually, a component or participant from one subdomain, e.g. eHealth, is not considered trusted in another subdomain, e.g. eJustice. This means that there are multiple eDelivery subdomains, and it might not be possible to exchange data between them.

X-Road's main idea is that it enables data exchange between organisations from different sectors (public, private, non-profit etc.) and different policy domains. X-Road is typically deployed on a national level so that it provides a nationwide data exchange layer for all kinds of organisations across sector and policy domain boundaries. Two X-Road environments can be joined together, federated, which enables cross-border data exchange between the member organisations of the two ecosystems. Federation means that members of two different X-Road ecosystems can exchange data as if they were members of the same ecosystem.

Trust Models

Both X-Road’s and eDelivery’s trust models are based on digital certificates. X-Road and eDelivery both use certificates to secure the communication between access points (TLS encryption) and to sign the data and documents that are transferred. In addition, eDelivery also supports encrypting and decrypting the data and documents. However, how certificates are managed, configured and distributed differs between the systems.

eDelivery supports multiple trust models and the model that is used can vary between different eDelivery subdomains. Depending on the selected trust model the distribution of the digital certificates may be manual, automatic or something in between.

X-Road supports the use of multiple trust service providers within an ecosystem and the distribution of certificates is always handled automatically by X-Road. Two organisations may use certificates issued by different trust service providers, but this is fully transparent to the user organisations as the exchange and verification of certificates is automated. In X-Road's context, organisations and access points of the same ecosystem always trust each other. The trust can be expanded to cover other ecosystems using federation.

Messaging Models

One of the main technical differences between eDelivery and X-Road is related to the supported messaging models. X-Road is based on synchronous communication that is well suited for real-time data and document exchange. eDelivery, in contrast, is based on asynchronous communication that is well suited for reliable, non-time-critical document and data exchange. eDelivery also supports duplicate message detection and message retry/resending scenarios.

The difference between synchronous and asynchronous communication is that in synchronous communication a service consumer sends a request and waits for the response, whereas in asynchronous communication a service consumer sends a request and continues with other tasks. In synchronous communication the service consumer’s control flow is blocked until the service provider has processed the request; in asynchronous communication the service provider sends the response later, once it has processed the request.
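
The control-flow difference can be sketched with a few lines of Python. This is only an illustration of synchronous versus asynchronous consumption in general – it does not model eDelivery’s AS4 messaging or X-Road’s message protocols – and the simulated provider function is purely hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_provider() -> str:
    """Simulated service provider that takes a while to process the request."""
    time.sleep(2)
    return "response payload"

# Synchronous: the consumer's control flow is blocked until the provider responds.
result = call_provider()
print("synchronous:", result)

# Asynchronous: the request is handed off and the consumer continues with other work;
# the response is handled later, once the provider has processed the request.
with ThreadPoolExecutor() as executor:
    future = executor.submit(call_provider)
    print("doing other work while the provider processes the request...")
    print("asynchronous:", future.result())
```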

Connecting Information Systems

Connecting an information system to eDelivery means that the information system must implement the eDelivery AS4 Profile so that communication between an eDelivery access point and the information system is technically possible. This means that an additional adapter or connector is usually required between an access point and an information system. The adapter/connector acts as a converter between the eDelivery AS4 Profile and the information system’s native format. Exchanging all kinds of data is supported, but the data must always be wrapped inside a message that conforms to the eDelivery AS4 Profile.

X-Road supports two alternative messaging protocols that can be used in the data exchange – a message protocol for SOAP and a message protocol for REST. When the SOAP protocol is used, an additional adapter or connector is usually required, because the data to be transferred must be wrapped inside a message that conforms to the X-Road message protocol for SOAP. When the REST protocol is used, no additional adapter or converter component is required, as existing REST services can be published and consumed as-is.
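
As a sketch of the REST case, a consumer information system could call an existing REST service through its local Security Server roughly as shown below. The Security Server address, member identifiers and service code are hypothetical; the URL pattern and the X-Road-Client header follow the X-Road Message Protocol for REST.

```python
import requests

# Hypothetical values - replace with the consumer's own Security Server address
# and real X-Road identifiers.
SECURITY_SERVER = "https://security-server.example.org"
CLIENT_ID = "EXAMPLE/GOV/1234567-8/ClientSubsystem"            # consumer subsystem
SERVICE_ID = "EXAMPLE/GOV/8765432-1/ProviderSubsystem/petApi"  # provider subsystem + service code

# The REST request itself is passed through as-is; the X-Road routing information
# travels in the URL path and in the X-Road-Client header.
response = requests.get(
    f"{SECURITY_SERVER}/r1/{SERVICE_ID}/pets/42",
    headers={"X-Road-Client": CLIENT_ID},
)
response.raise_for_status()
print(response.json())
```

The response body is returned to the consumer as-is, while the X-Road-specific information is carried in the HTTP headers added by the Security Server, as discussed in the REST support posts above.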

Operations and Management Model

eDelivery prescribes technical specifications that can be used to enable secure and reliable exchange of documents and data, and the specifications are based on standards. There are multiple software implementations, both commercial and open source, of the eDelivery specifications, and organisations are free to choose which software they use when exchanging data over eDelivery. eDelivery is managed through the specifications – once they change, vendors of the implementations update their products accordingly. The operations and management model is policy-domain specific – each domain defines its own model, and the models of different domains may vary.

X-Road is a technical and organizational framework that provides a secure and standardised way to exchange data between data providers and consumers over the Internet. Multiple standards are used in X-Road’s implementation, but no X-Road-specific parts have been standardised. However, X-Road’s source code and all X-Road protocols are open, and the documentation is publicly available, so anyone is free to create an implementation of the X-Road protocol stack. Organisations that want to exchange data over X-Road must install the X-Road software’s Security Server component. X-Road and its protocol stack are managed as a software product – the protocol stack is managed and developed as part of the X-Road software product. In addition, X-Road defines an organizational framework that describes the roles and responsibilities of the different actors of an X-Road ecosystem.

Conclusions

On a high level it may seem that eDelivery and X-Road are very similar, but a more detailed review reveals that there are many significant differences between them. Even if they provide many of the same features and have many common components on a logical level, the implementation details vary greatly.

One of the key differences between eDelivery and X-Road is that eDelivery is a set of technical specifications with multiple implementations, whereas X-Road is a technical and organizational framework. In other words, eDelivery and X-Road are conceptually two different things. eDelivery also lacks a detailed organizational framework that defines the roles and responsibilities regarding the operations and management of an eDelivery policy domain.

Another important difference is related to the supported messaging models – asynchronous and synchronous communication. Both messaging models have their pros and cons, and it depends on the use case which one is a better fit. Choosing the wrong messaging model may result in additional complexity, which requires more implementation and maintenance effort.

All in all, eDelivery and X-Road are not identical twins, and they should not be considered competitors either. X-Road is well suited for synchronous, real-time data and document exchange, whereas eDelivery is a good fit for reliable, non-time-critical document and data exchange. Therefore, eDelivery and X-Road are not mutually exclusive, and they can be used side by side to fulfill different kinds of data exchange needs. In the future it might even be possible to exchange data between eDelivery and X-Road: NIIS is currently studying alternatives for implementing a gateway between the two that would enable data exchange between eDelivery and X-Road ecosystems. A technical proof-of-concept implementation has already been completed, but there are legal and administrative questions yet to be resolved. However, that’s another story.

Netflix of Public Services

Everyone knows Netflix, the online streaming service where users can watch films, documentaries and TV series online 24/7. Netflix has over 100 million subscribers globally, and they all expect the service to work flawlessly and provide first-class user experience and content each and every time. To meet users’ expectations, the service must be resistant to failure and it must adapt to changing demand quickly and automatically. Technically, this is a huge challenge for any information system – especially with over 100 million users.

Of course, the most important thing to the users is high-quality content. World-class technical solutions and architecture mean nothing if a service does not provide interesting and meaningful content to its users. When it comes to delivering the content to the users, however, technical solutions and architecture are key enablers: without them the content cannot be accessed at all, or the user experience is poor. Great technical solutions are transparent to their users – the users don’t even know that they’re there.

Netflix has been able to meet the expectations well. At the same time, they’ve managed to keep the underlying architecture fully transparent to users – as it should be. Netflix has built the underlying system so that it is highly available, fault tolerant, resilient and scalable. One of the key factors in this success lies in the architectural choices: instead of building one monolithic system, Netflix has built its system around multiple loosely coupled services. This approach is called microservice architecture. Another key factor in Netflix’s technical success is the use of cloud services.

Size does matter

The microservice architecture pattern has been one of the most commonly used architecture patterns in recent years. It is based on the idea that a system is composed of multiple small, independently deployable and loosely coupled services that communicate with each other using language-agnostic APIs. Usually the services are organized around business capabilities. Each service can be developed and deployed independently of the others, which simplifies the development and deployment of large, complex applications: each part of the application can be developed and deployed on its own, instead of deploying the whole application every time a single component is updated.

Image 1. Microservice architecture.

On the other hand, microservice architecture also increases the complexity of a system. The complexity comes from multiple fine-grained services operating together seamlessly. A single business feature may span multiple microservices, which requires an additional layer for coordination and orchestration, service discovery, error handling etc. Locating a malfunctioning component in such a system is not a trivial task. In addition, each service can be developed independently, but testing a business feature requires that all the related services or their mock versions are available.

At this point you might be wondering what all this has to do with X-Road. Keep on reading – you will find out soon.

What about X-Road?

X-Road is an open source data exchange layer solution that enables organizations to exchange information over the Internet. X-Road is a centrally managed distributed data exchange layer between information systems that provides a standardized and secure way to produce and consume services. X-Road ensures confidentiality, integrity and interoperability between data exchange parties. The data is always exchanged directly between a service consumer and a service provider, and no third parties have access to it.

X-Road is not based on microservice architecture, but the X-Road ecosystem shares many of the same characteristics – on a higher level, though. Instead of a single information system consisting of multiple small atomic services, X-Road is a data exchange layer between service consumers and business services provided by various information systems owned by different organisations. The services available via X-Road are independently deployable and loosely coupled, and they communicate with each other using language-agnostic APIs. Each service can be developed, deployed and scaled independently without affecting other services as long as the API remains unchanged. Sound familiar?

However, X-Road is just a data exchange layer – an enabler of secure and standardized data exchange that is transparent to end users – just like microservice architecture is an enabler for building scalable, fault-tolerant and highly available systems. The real value comes from the services that are built on top of the technical infrastructure and the content that they provide to users.

It’s all about content

X-Road enables citizens, entrepreneurs and officials to operate via different portals and applications (document management systems, institutional information systems) in a more efficient and flexible manner. For example, it helps to check for relevant information in various base registries or securely exchange documents between organisations.

X-Road is used nationwide in the Estonian data exchange layer X-tee and in the Suomi.fi Data Exchange Layer service in Finland. Both Estonia and Finland have their own state portals that provide users access to different public registers and services. In general, a state portal is a single point-of-entry to public services for citizens, entrepreneurs and officials. X-Road is used in the background to connect the portal to different information systems and registers maintained by various organizations. Instead of going through websites and portals of different authorities one by one, there’s one centralized place to search and access services.

Image 2. A state portal connected to various information systems and base registers via X-Road.

A state portal is a Netflix of public services. It is a centralized place that gives 24/7 access to public services provided by different authorities. It is a platform that citizens can use to communicate with different authorities, and to search, access and update information. New services are added to the platform and old ones are removed, and the platform itself is constantly developed based on feedback received from users. X-Road is a transparent data exchange layer in the background that enables secure and standardized data exchange between the portal and the various information systems and base registers. X-Road plays a key role in the architecture, but the most important thing is the actual content – what would Netflix be worth without all the films, documentaries, TV series etc.? The same goes for a state portal: it’s all about the available content and services.