
Interaction methods



Synchronous interaction

Synchronous interaction flow diagram

Definition

A synchronous interaction is an HTTP transaction in which a query or a command is submitted and the result is received by the client on the same HTTP connection (known as an HTTP Request/Response exchange). The requestor waits for the response and cannot continue processing until it has been received.
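
As a minimal illustration, the sketch below (Python, using the requests library) shows a client submitting a query and blocking until the response is returned on the same connection. The endpoint URL and resource are assumptions for illustration only, not a real NHS API.

  import requests

  # Hypothetical API endpoint - illustrative only.
  BASE_URL = "https://api.example.nhs.uk/fhir"

  def get_patient(patient_id: str) -> dict:
      # The client blocks here: processing cannot continue until the
      # response (or a timeout) is received on the same HTTP connection.
      response = requests.get(
          f"{BASE_URL}/Patient/{patient_id}",
          headers={"Accept": "application/fhir+json"},
          timeout=10,
      )
      response.raise_for_status()
      return response.json()

  patient = get_patient("9000000009")
  print(patient.get("id"))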

When you would use a synchronous interaction

Synchronous interactions can be used to support most scenarios where information or transactional services need to be accessed. The only requirement for a synchronous transaction is a trigger point to initiate it, which could be a user action (such as selecting a record to review) or a system action (such as a workflow trigger within a defined business process).

Synchronous interactions are therefore the default pattern for all user or system-initiated scenarios which involve small granular sets of information and can be used to support all integration patterns.

Of the patterns discussed in the last chapter, synchronous interactions support:

  • information retrieval
  • updates to information repositories where the update can be fully applied without involvement of a human actor
  • submission of update requests where it is not necessary for the submitting system to know the outcome of the request, only that the request has been received and understood - this is known as the 'promise of work' pattern (a minimal sketch follows this list)
  • creation of records in information repositories
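
As an illustration of the 'promise of work' pattern referenced above, the sketch below shows a provider acknowledging receipt of an update request without returning its outcome. It uses Flask; the endpoint path and payload check are assumptions for illustration.

  from flask import Flask, request, jsonify

  app = Flask(__name__)

  @app.route("/update-requests", methods=["POST"])
  def submit_update_request():
      payload = request.get_json(force=True)
      # Confirm the request is well formed and understood...
      if "resourceType" not in payload:
          return jsonify({"error": "unrecognised payload"}), 400
      # ...then acknowledge receipt only - the business outcome is not
      # returned; the work is queued for later processing (queue omitted).
      return jsonify({"status": "accepted"}), 202

  if __name__ == "__main__":
      app.run(port=8080)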

There are some potential exceptions where a synchronous interaction approach may not be the best option.

For example, long running transactions - transactions where the provider system cannot guarantee a response being provided in a time that would be reasonable for a user/system to wait. In this circumstance an asynchronous pattern may be more appropriate.

Another exception is where the availability of the target system cannot be guaranteed within required service levels (defined on a programme-by-programme basis), meaning it is not classed as 'highly available'.

Benefits and limitations of synchronous interactions

Benefits are that it:

  • provides immediacy, which makes it the most suitable approach for requests for information
  • offers better user experience, as the end user feels that the system is reacting to them, and provides visibility to consumers of the outcome of requests
  • drives design of the underlying systems/business processes to provide instant 'answers' to queries
  • supports easier application flow and simpler integration

Limitations are that it:

  • is a potentially less scalable approach than using asynchronous interactions - the implementation of multiple small requests for information can result in large network and 'handshake' overheads, which can lead to performance issues
  • introduces latency which is potentially problematic for use cases that demand near real-time communication between two parties or systems, where a real-time interaction pattern is the preferred approach

When to use synchronous interactions for updates and commands

Synchronous update interactions include those which perform RESTful updates on a resource, and also transactional command type interactions such as 'register a patient'.

Synchronous updates and commands are most appropriate when:

  • the API and associated service infrastructure is highly available
  • repeated (duplicate) update requests do not have a consequential impact on the service (that is, the update action is idempotent) - for example, repeated submission of the same updated email address to a patient’s demographic details
  • the update can be automatically applied – it does not depend on a human actor or other offline process as part of a long running update workflow
  • the client system which submits the update does not need to know about the result of the update

Use caution when choosing a synchronous implementation for non-idempotent transactions.

These are circumstances where a repeated transaction would have a consequential impact upon the provider system. For example, a repeated transaction leading to multiple test packs being delivered to a citizen.

Potential issues with using synchronous transactions for non-idempotent transactions can be resolved through the use of unique message IDs to identify duplicate transactions. Whilst a synchronous approach is therefore still preferred, it does add complexity to the providing system; where the level of additional complexity is high, an asynchronous transaction can be considered as an alternative.
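
A minimal sketch of this de-duplication approach is shown below. The endpoint, header name and in-memory store are assumptions for illustration - a real provider would use a durable store.

  from flask import Flask, request, jsonify

  app = Flask(__name__)
  processed = {}  # message id -> previously returned outcome

  @app.route("/test-pack-orders", methods=["POST"])
  def order_test_pack():
      message_id = request.headers.get("X-Message-Id")
      if not message_id:
          return jsonify({"error": "X-Message-Id header required"}), 400
      if message_id in processed:
          # Duplicate submission: return the original outcome rather than
          # dispatching a second test pack.
          return jsonify(processed[message_id]), 200
      outcome = {"status": "order accepted"}  # dispatch logic omitted
      processed[message_id] = outcome
      return jsonify(outcome), 201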


Synchronous interactions - implementation details

As stated above, a synchronous interaction is an instance of an HTTP Request/Response exchange.

More specifically, this means an HTTP Request/Response exchange as defined in the HTTP/2 specification:

"A client sends an HTTP request on a new stream, using a previously unused stream identifier (Section 5.1.1). A server sends an HTTP response on the same stream as the request."

Where a synchronous interaction pattern is appropriate, it is important that the implementation details of the pattern are followed in a standard way so that HTTP clients interacting with healthcare HTTP APIs can maximise re-use across multiple use cases.

We have provided a platform which enables this standardised implementation for synchronous interactions. This is called the API Management Platform.


Standardising access to APIs

To enable this standard approach, the API Management Platform sits between API consumers and the APIs themselves. API consumers do not interact directly with an API, but interact via a proxy for the API. It is this proxy which ensures that synchronous interactions are implemented in a standard way.

API management platform integration

A key part of this standardisation is the way in which security and authorisation is implemented. The API Management proxy applies a standard security and authorisation context within which the synchronous interaction takes place.

Our security and authorisation guide describes a set of methods available. Each synchronous interaction will take place within the security context of one of these options.

Contact [email protected] for help and support in creating and consuming synchronous healthcare APIs.


Synchronous interaction - implementation patterns

APIs which expose synchronous interactions can be classified into a set of types or 'implementation patterns'.

Recommended patterns are:

  • Simple pass through
  • Orchestration Façade

Patterns to use with caution include:

  • Transformation Façade
  • Wrapper Façade

Simple pass through

A single client HTTP request to the API Management proxy results in a single back-end target API call. The API will present data in the latest standard data model appropriate to the use case – FHIR R4 UK Core at the time of writing.

The API does not perform any mediation between on-the-wire data formats and does not orchestrate calls to other APIs.

The diagram below illustrates this:

Simple pass through synchronous integration implementation pattern

One client HTTP request results in a single back-end target API request without any message transformation. The request is 'passed through' the API Management layer which applies, for example, standard security, logging and alerting.
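
The sketch below illustrates the shape of this pattern: one inbound request produces exactly one back-end request, with no message transformation, and only cross-cutting concerns such as logging applied. The back-end URL is an assumption, and in practice the API Management layer is configured rather than hand-coded.

  import logging
  import requests
  from flask import Flask, Response, request

  app = Flask(__name__)
  logging.basicConfig(level=logging.INFO)

  BACKEND = "https://backend.example.nhs.uk/fhir"  # assumed back-end API

  @app.route("/fhir/<path:resource_path>", methods=["GET"])
  def pass_through(resource_path):
      logging.info("Proxying request for %s", resource_path)
      backend = requests.get(
          f"{BACKEND}/{resource_path}",
          headers={"Accept": request.headers.get("Accept", "application/fhir+json")},
          timeout=10,
      )
      # Relay the back-end response to the client unchanged.
      return Response(backend.content, status=backend.status_code,
                      content_type=backend.headers.get("Content-Type"))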

Orchestration Façade 

Where there is a set of back-end API services exposing granular functionality or data sets, it can often be very beneficial from a consumer viewpoint to create a middleware façade API which composes these granular APIs. The façade API is then made available to consumers to meet a business use case more simply.

One API client HTTP request results in the middleware API making multiple HTTP requests to back-end service APIs. There is often a need to implement some form of business logic in this middleware API to orchestrate these back-end calls. This type of façade presents a simple synchronous API for consumption by API clients, while hiding from clients the complexity of multiple back-end services which are involved in servicing the request.
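
A minimal sketch of an orchestration façade follows: one client request fans out to two hypothetical granular back-end APIs and the results are combined into a single FHIR Bundle-style response. The back-end URLs and endpoint paths are assumptions for illustration.

  import requests
  from flask import Flask, jsonify

  app = Flask(__name__)

  # Hypothetical granular back-end services (illustrative only).
  DEMOGRAPHICS_API = "https://demographics.example.nhs.uk/fhir"
  MEDICATIONS_API = "https://medications.example.nhs.uk/fhir"

  @app.route("/patient-summary/<nhs_number>", methods=["GET"])
  def patient_summary(nhs_number):
      # Orchestrate multiple back-end calls behind a single façade request.
      patient = requests.get(
          f"{DEMOGRAPHICS_API}/Patient/{nhs_number}", timeout=10).json()
      medications = requests.get(
          f"{MEDICATIONS_API}/MedicationStatement",
          params={"patient": nhs_number}, timeout=10).json()
      # Present one simple response, hiding the back-end complexity.
      bundle = {
          "resourceType": "Bundle",
          "type": "collection",
          "entry": [{"resource": patient}]
                   + [{"resource": e["resource"]}
                      for e in medications.get("entry", [])],
      }
      return jsonify(bundle)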

FHIR exchange paradigm

The strongly recommended means of exchanging data when synchronous interactions are used is to exchange UK Core resources using the FHIR http exchange paradigm. FHIR http expresses requests as Create/Read/Update/Delete (CRUD) operations upon resource endpoints which are exposed by an HTTP server.

Where the nature of the interaction is clearly the invocation of a business service, or the called function cannot be expressed easily as a granular CRUD operation, the Operations Framework extends FHIR http for these scenarios. These calls can be defined alongside RESTful CRUD requests in the same API.
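
As a short illustration of the two styles, the sketch below performs a RESTful CRUD read of a resource and then invokes a named operation carrying a Parameters resource. The base URL and the operation name ($register) are hypothetical.

  import requests

  BASE_URL = "https://api.example.nhs.uk/fhir"  # assumed endpoint

  # RESTful CRUD: read a Patient resource by its logical id.
  patient = requests.get(
      f"{BASE_URL}/Patient/9000000009",
      headers={"Accept": "application/fhir+json"},
      timeout=10,
  ).json()

  # Operation: invoke a business-level function that does not map cleanly
  # onto CRUD, expressed as a FHIR operation taking a Parameters resource.
  parameters = {
      "resourceType": "Parameters",
      "parameter": [{"name": "newStatus", "valueString": "registered"}],
  }
  outcome = requests.post(
      f"{BASE_URL}/Patient/9000000009/$register",  # hypothetical operation
      json=parameters,
      headers={"Content-Type": "application/fhir+json"},
      timeout=10,
  ).json()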

FHIR exchange paradigm - orchestration facade

Where a back-end API has legacy characteristics, elements of the transformation or wrapper pattern would also be required.

Use of the orchestration façade pattern assumes that the primary focus is on production of granular pass-through APIs. These pass-through APIs continue to be consumable by API clients even though for certain use cases a client does not need to interact with them directly but may use an orchestration façade.

Hiding granular pass-through APIs from consumers through construction of an orchestration layer could be considered an anti-pattern. It is normally better to provide API consumers with options - to consume granular APIs to meet granular data or functional needs, or to use an orchestration façade to fulfil specific use cases.

Transformation Façade

A middleware layer is introduced which acts as a façade to a back-end API. The back-end API may be an existing API which uses a deprecated message format. This pattern enables the existing API to be used quickly while hiding legacy implementation details from the API client.

Transformation facade synchronous integration implementation pattern - diagram

One client HTTP request results in a single back-end target API call, but this request is handled by an intermediary API. This middleware API mediates between a legacy or proprietary message format and a current standard. The API Management layer component is a proxy which applies standard security, monitoring and other platform benefits.

You would use the transformation façade typically where you have an existing legacy API which you want to expose quickly for consumption in a way which meets current standards.
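
A minimal sketch of such a façade is shown below: it calls a legacy back-end returning a proprietary JSON format and mediates the result into a FHIR Patient-style resource. The legacy URL and field names are assumptions for illustration.

  import requests
  from flask import Flask, jsonify

  app = Flask(__name__)
  LEGACY_API = "https://legacy.example.nhs.uk/patients"  # assumed legacy API

  def to_fhir_patient(legacy: dict) -> dict:
      # Mediate between the legacy on-the-wire format and the current standard.
      return {
          "resourceType": "Patient",
          "identifier": [{"system": "https://fhir.nhs.uk/Id/nhs-number",
                          "value": legacy["nhsNo"]}],
          "name": [{"family": legacy["surname"], "given": [legacy["forename"]]}],
          "birthDate": legacy["dob"],
      }

  @app.route("/fhir/Patient/<nhs_number>", methods=["GET"])
  def get_patient(nhs_number):
      legacy_response = requests.get(f"{LEGACY_API}/{nhs_number}", timeout=10)
      legacy_response.raise_for_status()
      return jsonify(to_fhir_patient(legacy_response.json()))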

However, you should prefer the simple pass-through API over the transformation façade pattern, as the latter adds complexity solely to hide technical debt from the API consumer. As such, if there is no plan in place to address the technical debt present in the back-end service, this is an anti-pattern.

It is typically acceptable therefore to use this pattern to meet short term business objectives, while also planning to reduce or remove technical debt which is hidden but still present in the overall architecture.

Wrapper façade 

A façade is placed between the API consumer and a back-end legacy API, and wraps (hides) the legacy implementation details of the back-end from the API consumer. Rather than hiding a legacy message format, as seen with the transformation façade, other legacy API implementation details are hidden from the consumer. 

Where these legacy implementation details are cross-cutting concerns such as the security model in use, the API Management layer implements the wrapper.

For example, a legacy back-end API may have a dependency on a legacy security method such as TLS Mutual Authentication, and it is desirable to hide this from API consumers. The diagram below illustrates API Management presenting an OAuth2/OpenID Connect based security model to consumers and mediating this, acting as a client implementing TLS Mutual Authentication against the back-end.

Wrapper facade synchronous integration implementation pattern - diagram
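
The sketch below gives a minimal illustration of this mediation: the wrapper checks an OAuth2 bearer token presented by the consumer, then calls the legacy back-end as a TLS mutual authentication client. The token check is a placeholder, and the URLs and certificate paths are assumptions.

  import requests
  from flask import Flask, Response, abort, request

  app = Flask(__name__)

  LEGACY_BACKEND = "https://legacy.example.nhs.uk/service"  # assumed back-end
  CLIENT_CERT = ("/etc/certs/client.pem", "/etc/certs/client.key")

  def token_is_valid(token: str) -> bool:
      # Placeholder for OAuth2/OpenID Connect token validation,
      # for example signature verification or token introspection.
      return bool(token)

  @app.route("/service/<path:path>", methods=["GET"])
  def wrapped(path):
      auth = request.headers.get("Authorization", "")
      if not auth.startswith("Bearer ") or not token_is_valid(auth[7:]):
          abort(401)
      # Call the back-end using TLS mutual authentication; the legacy
      # security model is hidden from the API consumer.
      backend = requests.get(f"{LEGACY_BACKEND}/{path}",
                             cert=CLIENT_CERT, timeout=10)
      return Response(backend.content, status=backend.status_code,
                      content_type=backend.headers.get("Content-Type"))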

The characteristics of a legacy API may mean that the API Management Layer is not the best place to hide all the legacy behaviour of the back-end API. This is typically when some business logic or processing state is required to hide back-end behaviour. An additional middleware API layer would be required in this case, possibly in addition to the role of the API Management Layer to mediate security approaches.

You would use the wrapper façade typically where you have an existing legacy API which you want to expose quickly for consumption in a way which meets current standards.

However, you should prefer the simple pass-through API over the wrapper façade pattern, as the latter adds complexity solely to hide technical debt from the API consumer. If there is no plan in place to address the technical debt present in the back-end service, this is an anti-pattern.

It is typically acceptable therefore to use this pattern to meet short term business objectives while also planning to reduce or remove technical debt which is hidden but still present in the overall architecture.


Asynchronous interaction 

Definition

An asynchronous interaction is a communication method used when a business interaction cannot be completed appropriately during the lifetime of a simple HTTP request/response as described in the synchronous interaction section. Business responses containing the outcome of the request are returned when available to the requestor using a separate communication which, typically, the provider initiates.

The diagram below illustrates the components involved:

Diagram - components involved in asynchronous interaction

The consuming system submits a request to the providing system. The consuming system must implement an additional component which listens for the outcome of this request in incoming responses. The consuming system does not know when these responses will arrive and must usually also implement an additional component which correlates these responses with the requests it has made.

When you would use an asynchronous interaction

The prime scenario where an asynchronous interaction is appropriate is where a human actor must be involved in the processing of the request before a business outcome is known.

Benefits and limitations of asynchronous messaging

There are some non-functional benefits to the use of asynchronous messaging which may be valid for integrations with legacy systems.

Performance and capacity – as there is no immediate need to process a message and return a response, the provider system can 'throttle' transactions, which can reduce overall load on systems.

Availability – asynchronous messaging does not require the systems involved in the message exchange to be available at the same time. If the provider system is unavailable, provided that reliable messaging is implemented (the message is persisted), there will be minimal impact, as the request will be processed once the provider system is available again.

Limitations of asynchronous messaging include:

  • lack of immediacy, which makes it less applicable for requests for information
  • potentially increased complexity for consumers, as they need to develop separate request and response processes
  • potentially increased complexity around transactional boundaries and exception handling

Business scenarios

Providing patient care – update an external healthcare repository

During scheduled or unscheduled care, new information arising from a patient encounter is shared with an external repository where the request must be reviewed before being accepted - for example, sending the summary of an encounter to the patient's GP practice.

Providing patient care – initiate a business process or task

Whether during scheduled or unscheduled care, an outcome of care delivery may be the need to initiate a business process or task(s) at an external organisation to ensure that a patient continues to receive appropriate care. For example, at the conclusion of an episode in a secondary care setting, it is necessary to inform the patient's GP practice of the outcome.


Technical scenarios for choice of asynchronous interactions

There are some scenarios where asynchronous interactions are required due to certain non-functional characteristics of the interaction. However, this usually means that the consumer must implement a more complex solution due to these constraints. It is almost always better to provide solutions which minimise the level of complexity for consumers – and therefore to locate this complexity in provider solutions.

Some examples are therefore given below where asynchronous interactions may be considered, along with the options that should first be explored to keep the integration as simple as possible from a consumer viewpoint.

1. A transaction potentially taking a long period of time

In such cases it is not practical for the requesting system to wait for the outcome within the lifetime of an HTTP request/response.

Before proceeding, consider whether the provider API infrastructure can be re-designed to deliver performance characteristics suitable for a synchronous interaction. Place particular focus on this consideration where the use case is about information retrieval needed to deliver direct care.

2. Inability to provide required availability levels

A provider system is unable to provide the availability levels which the use case demands, and therefore a middleware messaging layer is introduced to hide this issue from consumers.

Firstly, consider investing in delivery of a synchronous API meeting required availability SLAs. If this is not possible, consider whether a synchronous interaction façade can be provided to consumers which maintains guaranteed message delivery.

3. Provider system is unable to handle throughput

Where the provider system is unable to handle throughput associated with peaks in consumer demand, an asynchronous interaction approach enables incoming transactions to be throttled using a middleware component such as a message queue.

First, evaluate options which avoid requiring consumers to implement the complexity required to handle response messages. Synchronous interaction patterns via API Management enable client request throttling. Also consider whether investment should be made to reduce technical debt in the provider system which limits current capacity.

4. An update to an external system for which the sender does not require a business response

Such updates should normally be implemented using a synchronous interaction from a consumer’s viewpoint. If non-functional constraints in the provider systems are a concern, consider wrapper API options or a 'promise of work' approach as described in the synchronous interaction section.

5. Guaranteed once-only delivery of a message is required

An asynchronous transaction may be suitable here. It does require highly available middleware components (with no data loss) but, by being able to persist transactions, it reduces the risk of duplicate HTTP transactions where a retry is attempted after a timeout, or of issues where a provider system is unavailable for a period of time.

Take a consumer-first viewpoint. Use of unique message identifiers in synchronous requests, with de-duplication logic in the provider, is preferable. An asynchronous approach to this scenario introduces complexity for consumers around transactional boundaries and issue/exception handling.

6. Initiate and track tasks at another NHS organisation

These are use cases where one NHS organisation must initiate and track tasks at another NHS organisation.

It is possible to implement this scenario as a synchronous HTTP service by requiring the external organisation to expose a task state machine API which allows the consuming system both to create tasks and to query task status. Though this is an option, assessment of the relative complexity of an asynchronous messaging solution should inform the decision on approach.


Asynchronous interaction - implementation details

If, after having reviewed the considerations above, an asynchronous interaction is a valid approach, the main options available are:  

  • messaging via MESH middleware 
  • HTTP communication

Messaging via MESH middleware

This is a good option where a human actor is involved, and therefore it is not known when a business response will be available. MESH is the current NHSD platform for asynchronous messaging between healthcare entities and is therefore the preferred implementation option for asynchronous messaging use cases.

MESH is a middleware component which supports asynchronous messaging between NHS organisations. MESH implements the following message exchange (sketched in outline after the diagram below):

  1. A client places a message in a MESH mailbox (which it may need to look up using the ODS code of the intended recipient and a workflow ID).
  2. The recipient polls the MESH mailbox for new messages.
  3. The recipient retrieves messages and processes these.
  4. To deliver the business response to the organisation which sent the message, the recipient places a response message into the sender’s MESH mailbox for retrieval by the client.

Messaging via MESH middleware diagram
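
The sketch below outlines the exchange above using a hypothetical mailbox client; the class and method names are illustrative only and are not the real MESH client API.

  class MailboxClient:
      """Hypothetical store-and-forward mailbox client (illustrative only)."""

      def __init__(self, mailbox_id: str):
          self.mailbox_id = mailbox_id

      def send(self, recipient_mailbox: str, workflow_id: str, payload: bytes):
          ...  # place a message in the recipient's mailbox

      def list_inbox(self) -> list:
          ...  # poll the mailbox for new message ids

      def retrieve(self, message_id: str) -> bytes:
          ...  # download and acknowledge a message

  # 1. Sender places a message in the recipient's mailbox (located, for
  #    example, via the recipient's ODS code and a workflow id).
  sender = MailboxClient("SENDER_MAILBOX")
  sender.send("RECIPIENT_MAILBOX", "ENCOUNTER_SUMMARY", b"<FHIR message bundle>")

  # 2-3. Recipient polls its mailbox, then retrieves and processes messages.
  recipient = MailboxClient("RECIPIENT_MAILBOX")
  for message_id in recipient.list_inbox() or []:
      message = recipient.retrieve(message_id)
      # 4. ...process the message, then place a business response in the
      #    sender's mailbox for the original sender to collect...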

When selecting the MESH messaging option, be aware of the following:

  1. If the use case demands urgent action on the part of the message recipient, a synchronous interaction is preferable. Asynchronous messaging introduces a level of latency into message delivery and processing. As this level of latency varies by supplier implementation, where use cases require low levels of latency, the context of supplier implementation should be clearly understood in order to verify whether MESH messaging is a viable option.
  2. The complexity of error scenarios should be understood – handling of error conditions is more complex when using the asynchronous interaction approach.

FHIR exchange paradigm 

The recommended means of exchanging data when implementing asynchronous interactions using MESH is to exchange UK Core resources using the FHIR Messaging exchange paradigm. This paradigm is a good fit where there is a high level of de-coupling between sending and receiving systems, and where a traditional messaging approach is taken to deliver messages using a store-and-forward architectural pattern.

HTTP communication

A number of HTTP based options are available, which may be considered usually where technical constraints require divergence from a synchronous interaction approach. This may be acceptable as a transitional step towards a target architecture which aligns with best practice but in all other circumstances is considered to be an anti-pattern.

The following options with example scenarios describe circumstances where use of HTTP communication may be suitable, with an indication of what a target state should look like.

Client short polling

Overview

The provider system separates the HTTP Request/Response seen in the synchronous interaction approach into two stages, each corresponding to an API endpoint:

  • Stage 1 – submit request
  • Stage 2 – poll for business response

Stage 2 is repeated until the client receives the business outcome of the request. The client will usually employ a 'back-off algorithm' to limit the number of polls which must be made in order to receive the business response.
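
A minimal client-side sketch of the two stages, with an exponential back-off between polls, is shown below. The endpoint path, the use of the Content-Location header to return the polling URL, and the status codes are assumptions for illustration.

  import time
  import requests

  BASE_URL = "https://provider.example.nhs.uk"  # assumed provider API

  def submit_and_poll(payload: dict, max_attempts: int = 8) -> dict:
      # Stage 1 - submit the request; the provider returns a URL which can
      # be polled for the business outcome.
      submitted = requests.post(f"{BASE_URL}/requests", json=payload, timeout=10)
      submitted.raise_for_status()
      status_url = submitted.headers["Content-Location"]

      # Stage 2 - poll for the business response, backing off between polls.
      delay = 1.0
      for _ in range(max_attempts):
          poll = requests.get(status_url, timeout=10)
          if poll.status_code == 200:
              return poll.json()    # business outcome is available
          time.sleep(delay)         # not ready yet (for example 202 Accepted)
          delay *= 2                # exponential back-off
      raise TimeoutError("business response not available in time")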

In production scenarios where client requests must be performed at scale, in a non-blocking manner, a likely architecture is described below:

Client short polling architecture

When to use

This pattern may be employed where a request is made which does not depend upon a human actor, and where the provider system has fixed architectural constraints which preclude a synchronous transaction.

Target state

Removal of technical debt from the provider system enabling move to a fully synchronous interaction.

Client long-polling

HTTP based client long-polling is a modification to the short polling approach designed to avoid the need for multiple client HTTP requests. A single HTTP connection is kept open until the server is able to deliver the business outcome of the request.

When to use

This pattern may be employed where a request is made which does not depend upon a human actor, and where the provider system has fixed architectural constraints which preclude a synchronous transaction. 

However, use of long-polling is discouraged in favour of short polling or web-sockets as a tactical step. Long-polling introduces complexity and performance constraints at the server. Additionally, long-polling is incompatible with an API proxy approach, and therefore the benefits of API Management cannot be realised.

Where the business use case requires real-time communication, other mechanisms such as web sockets should be considered which are designed specifically to support full-duplex communication.

Target state

Removal of technical debt from the provider system enabling move to a synchronous interaction.

Callback HTTP request

An HTTP Request/Response is separated into two HTTP Request/Response interactions (a consumer-side sketch follows the diagram below). These are:

  • Request 1 – consuming system request to provider with request details
  • Request 2 – providing system request to consuming system with business response details

Callback HTTP request response diagram
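
A minimal consumer-side sketch follows: the consuming system submits the request with a correlation ID and callback URL, and runs an HTTP endpoint to receive the provider's business response. All URLs and header names are assumptions for illustration.

  import uuid
  import requests
  from flask import Flask, jsonify, request

  app = Flask(__name__)
  outstanding = {}  # correlation id -> original request payload

  def send_request(payload: dict) -> str:
      # Request 1 - submit the request, advertising where the business
      # response should be delivered.
      correlation_id = str(uuid.uuid4())
      outstanding[correlation_id] = payload
      requests.post(
          "https://provider.example.nhs.uk/requests",
          json=payload,
          headers={"X-Correlation-Id": correlation_id,
                   "X-Callback-Url": "https://consumer.example.nhs.uk/callbacks"},
          timeout=10,
      )
      return correlation_id

  # Request 2 - the providing system calls back with the business response,
  # which must be correlated with the original request.
  @app.route("/callbacks", methods=["POST"])
  def receive_callback():
      correlation_id = request.headers.get("X-Correlation-Id")
      if correlation_id not in outstanding:
          return jsonify({"error": "unknown correlation id"}), 404
      outstanding.pop(correlation_id)
      # ...apply the business outcome here...
      return jsonify({"status": "received"}), 200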

When to use

This pattern may be employed where a request is made which does not depend upon a human actor, and where the provider system has fixed architectural constraints which preclude implementing as a single synchronous transaction.

From the consuming system perspective, the complexity of the integration is greater than with a short polling approach. The consuming system must implement an HTTP server and provide a correlation layer to match requests with response outcomes. Therefore, short polling should be considered as a better tactical step.

Where the business use case requires real-time communication, other mechanisms such as web sockets should be considered which are designed specifically to support full-duplex communication.

Target state

Removal of technical debt from the provider system enabling move to a synchronous interaction.

FHIR exchange paradigm 

These HTTP communication options for implementing asynchronous interactions are transition steps towards a target synchronous interaction state. Therefore it is recommended where possible to exchange UK Core resources using the FHIR http exchange paradigm as this would minimise the degree of technical debt introduced, and thus limit the remedial work and cost associated with a later move to the synchronous interaction model.


Bulk Transfer

Definition

Bulk Transfer involves the movement of one or more files from one location to another over a network. Bulk file transfers may use data compression, data blocking and buffering to optimize transfer rates when moving large data files.

When you would use a Bulk Transfer

Bulk transfers can be used in circumstances where information has been captured, the sender knows who to share the information with and the recipients do not need the information immediately for direct care purposes. The pattern is mostly applicable where information is needed for background processing, such as management information reports or analysis of historical data.

Business Scenarios

Collection of information to support:

  • healthcare planning
  • commissioning of services
  • National Tariff reimbursement


Benefits and limitations of Bulk Transfer

Benefits of Bulk Transfer are that:

  • Batch Processing is ideal for processing large volumes of data/transactions - it also increases efficiency by processing records in bulk rather than processing each record individually
  • for AI/machine learning, the use of 'mini-bulk' transfers is a useful approach for minimising the expensive initialisation of some physical resources, such as Graphics Processing Units (GPUs), which are needed to perform some machine learning routines

Limitations of Bulk Transfer are:

  • lack of immediacy which makes it less applicable where the transfer of data is needed to trigger specific business activity or where a shared repository is being updated and users require a real-time view of the data in the repository
  • it leads to time lags in data processing
  • the time delays inherent in the pattern mean it is not a recommended approach for requests for information
  • it requires quality control and assurance to be built into the processing of the bulk file - exceptions cannot be handled on an individual basis as for message-based interactions, so even the failure of a single record can cause a major failure
  • it promotes poor business practice and system design

Bulk Transfer – implementation details

The standard approach for implementation of bulk file transfer is via the File Transfer Protocol (FTP) or Secure File Transfer Protocol (SFTP), with an FTP client connecting to an FTP server to transfer a file of data; a minimal transfer sketch follows the list below.

The FTP software is used to ensure:

  • the transfer is secure (data is encrypted)
  • the integrity of the file transfer (data has not changed during the transfer)
  • the file transfer is complete (all records have been transferred)
  • an audit record that the transfer took place
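
A minimal sketch of such a transfer, using the paramiko SFTP library with a companion checksum file for integrity checking, is shown below. The host name, credentials and paths are assumptions for illustration.

  import hashlib
  import paramiko

  LOCAL_FILE = "extract.csv"
  REMOTE_PATH = "/incoming/extract.csv"

  def sha256_of(path: str) -> str:
      digest = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def transfer():
      transport = paramiko.Transport(("sftp.example.nhs.uk", 22))
      transport.connect(username="bulk-sender",
                        pkey=paramiko.RSAKey.from_private_key_file("id_rsa"))
      sftp = paramiko.SFTPClient.from_transport(transport)
      try:
          # Transfer the data file, then a companion checksum file which the
          # recipient uses to confirm the transfer is complete and unchanged.
          sftp.put(LOCAL_FILE, REMOTE_PATH)
          with sftp.open(REMOTE_PATH + ".sha256", "w") as f:
              f.write(f"{sha256_of(LOCAL_FILE)}  {LOCAL_FILE}\n")
      finally:
          sftp.close()
          transport.close()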

Real-time interaction

Some business use cases may demand near real-time communication between two parties or systems. In such scenarios the latency associated with synchronous interactions is problematic.

Classic use cases for real-time interaction are those which facilitate text-based communication, such as instant messaging between practitioners, or between a practitioner and a patient.

Other use cases are provision of real-time dashboards.

When to use

This interaction approach can be considered where a business use case suggests real-time, near-zero latency type of communication may be required.

The following criteria should be met prior to diverging from a synchronous interaction approach:

  1. The frequency of information events which must be communicated between the two parties is very high, and thus not appropriate for implementation by multiple synchronous interactions.
  2. The nature of the communication is genuinely full-duplex so that both parties need to initiate communication with the other.
  3. The level of latency acceptable in the fulfilment of interaction is very low.

When these criteria are met, you can consider technical approaches such as web sockets, which are designed specifically to support full-duplex communication.
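
A minimal full-duplex sketch using the Python websockets library is shown below; the URL is an assumption for illustration. Either party can send at any time, which suits instant-message style communication.

  import asyncio
  import websockets

  async def chat():
      # A single long-lived, full-duplex connection.
      async with websockets.connect("wss://messaging.example.nhs.uk/ws") as ws:
          # Send a message to the other party...
          await ws.send("New results are available for review")
          # ...while also receiving messages pushed by the server at any time.
          async for incoming in ws:
              print("received:", incoming)

  asyncio.run(chat())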

