8/14/21

Web Development with Node.js, NPM and NVM

When building web applications with JavaScript frameworks like React or Angular, there are tools that enable the installation of these frameworks and aid in the development process. These tools are essential for building, testing and managing the package dependencies in our projects, and it is those package dependencies that we continuously need to keep up to date. To update those packages, we use the NPM CLI tool, which runs on the Node.js runtime environment.

ozkary nodejs, npm, nvm

When we need to update a package, we may find that the package, or a new version of it, is not supported by the current version of Node.js (see the error below) and that we need to update to a newer version. In this article, we discuss the tools that are used to manage this part of the software development process and how to best update Node.js using the command line interface (CLI).

 

npm WARN notsup Unsupported engine for create-react-app@5.0.0: wanted: {"node":">=14"} (current: {"node":"12.16.1","npm":"6.14.4"})

npm WARN notsup Not compatible with your version of node/npm: create-react-app@5.0.0

 

This error message indicates that the required version of Node.js is not installed on the system: create-react-app@5.0.0 requires Node.js 14 or later, while the current version is 12.16.1.

What is Node.js?

Node.js is a cross-platform JavaScript runtime environment. It is used by software engineers to build server-side and client-side web applications. When building client applications with popular frameworks like React, Angular and others, Node.js provides the runtime environment for the JavaScript applications to run. It also enables the build and test tools that are used during the implementation effort of those applications.

JavaScript applications are made of several libraries or packages that can be added to the project. Those libraries are commonly referred to as packages, and developers use a CLI tool to install and configure them.

What is NPM?

Node Package Manager (NPM) is a tool that runs on the Node.js runtime environment. It comes with the installation of Node.js. Its purpose is to download and install packages for a particular project. Those packages and their respective versions are tracked in a JSON file in the root folder of the project. With NPM, we can also install other CLI tools that are specific to scaffolding the startup codebase for a particular JavaScript framework. Some examples include, but are not limited to, Yeoman, create-react-app and the Angular CLI.
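For example, a new React project could be scaffolded from the command line with the create-react-app tool (the project name here is just a placeholder):

$ npx create-react-app my-app
$ cd my-app
$ npm start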

NPM has many commands, but the install command is the most basic and most important one, as this is the one that enables us to install and update packages. Let’s look at some of these commands:

$ npm install package-name --save
    Installs the latest version of a package and saves the reference in the package.json file

$ npm install package-name
    Installs a package but does not store any reference information

$ npm update package-name
    Updates a package with a new release. NPM decides what version to select

$ npm install package-name@latest
    To have better control over what version to install, we can provide the version number or the latest tag right after the package name, separated by the @ character

$ npm install -h
    Shows help information for the install command

$ npm run script-name
    Runs a script command defined in the package.json file for building, testing or starting the project

$ npm install -g npm@next
    Installs the next version of NPM itself. The -g flag installs it globally on the system

What is package.json?

Package.json is a metadata file which hosts all the project-related information, like the project name, licensing, authors, hosting location and, most importantly, the information to track project dependencies and the scripts to run.

When installing NPM packages to a project, the information is saved in a file at the root of the project, package.json. This file maintains the project information and all the package dependencies. By tracking the package dependencies, a development environment can be easily recreated. Developers only need to clone the repo or codebase and use NPM to download all the dependencies by typing the following command from the root folder of the project:

 

$ npm install

 

 

*Note:  package.json must exist in the same folder location where this command is typed

The scripts area of the package.json file provides commands that can be used to create production-quality builds, run test plans, validate coding standards and start the application. These are essential commands for the day-to-day development operations and for integration with CI/CD tools like GitHub Actions or Jenkins.
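As a simple illustration, a minimal package.json might look like the sketch below; the project name, scripts and package versions are only examples:

{
  "name": "sample-web-app",
  "version": "1.0.0",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test"
  },
  "dependencies": {
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  }
}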

Keep Node.js Updated

With a better understanding of Node.js and the purpose of NPM in the development process, we can now discuss how to deal with situations when NPM fails to install a package because our Node.js installation is a few versions behind and needs to be upgraded.

What is NVM?

The best way to update Node.js is by using another CLI tool, Node Version Manager (NVM). This tool enables us to manage multiple versions of Node.js in our development workspace. It is not a required tool, but it is useful because it enables us to upgrade and test the application against the latest releases, which can help us identify compatibility issues with the new runtime or NPM packages. It also enables us to downgrade to a previous version to help us verify when a feature started to break.

To install NVM on Linux, we can run the following command:

 

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash

 

Once the tool is installed, we can run a few commands to check the current Node.js version, install a new one and switch between versions. Let us review those commands:

$ nvm version
    Shows the currently selected Node.js version

$ nvm --version
    Shows the NVM CLI version

$ nvm ls
    Lists all the Node.js versions installed

$ nvm use version-number
    Selects a Node.js version to use

$ nvm install version-number
    Installs a Node.js version

To install a new version of Node.js, we can use the install command. This downloads and installs the new version. After the version is installed, the environment should default to the new version of Node.js. If the environment was encountering the unsupported Node.js version error, we can run the NPM command that failed again; since the new version is now installed, it should be able to install the new package.
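As an illustration, resolving the unsupported engine error from earlier might look like the following sequence; the version numbers are only examples:

$ nvm install 14          # download and install the latest Node.js 14.x release
$ nvm use 14              # switch the current shell to the new version
$ node --version          # confirm the active Node.js version
$ npm install create-react-app@5.0.0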

Conclusion

When working with JavaScript frameworks like React or Angular, the Node.js runtime environment must be installed and kept up to date. When new NPM packages need to be installed in our projects, we need to make sure that they are compatible with the current version of the runtime environment, Node.js. If that is not the case, the NPM package fails to install, and we need to update the runtime to the next or latest version. Using tools like NPM and NVM, we can manage the different versions of packages and runtimes respectively. Understanding the purpose of these tools and how to use them is part of the web development process, and it should help us keep the development environment up to date.

Have you used these CLI tools before? Do you like to use visual tools instead?
Send questions or comments on Twitter @ozkary

Originally published by ozkary.com

7/17/21

App Branding Strategy with GitHub Branches and Actions

Branding applications is a common design requirement. The concept is simple. We have an application with functional core components, but based on the partner or client, we need to change the theme or look of the design elements to match that of the client’s brand. Some of these design elements may include images, content, fonts, and theme changes. There are different strategies to support an app branding process, either in the build process or at runtime. In this article, we discuss how we can support a branding strategy using a code repository branching strategy and GitHub build actions.

Branching Strategy

A code repository enables us to store the source code for software solutions. Different branches are mostly used for feature development and production management purposes. In the case of branding applications, we want to be able to use branches for two purposes. The first is to be able to import the assets that are specific to the target brand. The second is to associate the branch with build actions that are used to build and deploy the branded application.

To help us visualize how this process works, let’s work on a typical branding use case. Think of an app for which there is a requirement to support two different brands, call them brand-a and brand-b. With this in mind, we should think about the design elements that need to be branded. For our simple case, those elements include the app title, logo, text, or messaging in JSON files, fonts, and the color theme or skin.

We now need to think of the build and deployment requirements for these two brands. We understand that each brand must be deployed to a different hosting resource with a different URL; let’s say those sites are hosted at brand-a.ozkary.com and brand-b.ozkary.com. These could be Static Web App or CDN hosting resources.

With the understanding that the application needs to be branded with different assets and must be built and deployed to different hosting sites, we can conclude that a solution is to create different branches, which help us implement the design changes to the app and, at the same time, enable us to deploy them correctly by associating a GitHub build action with each branch.

Branching Strategy for Branding Apps
GitHub Actions

GitHub Actions makes it easy to automate Continuous Integration / Continuous Delivery (CI/CD) pipelines. It is essentially a workflow that executes commands from a YML file to run actions like unit tests, NPM builds or any other commands that can be executed on the CLI to build the application.

A GitHub Action or workflow is triggered when there is a pull request (PR) into a branch. This is basically a code merge into the target branch. The workflow executes all the actions that are defined in the script. The typical build actions are to pull the current code, move the files to a staging environment, run the build and unit test commands, and finally push the built assets to the target hosting location.

A GitHub Action is a great automation tool to meet the branding requirements because it enables us to customize the build with the corresponding brand assets prior to building the application. There is, however, some additional planning required, so before we can work on the build, we need to define the implementation strategy to support a branding configuration.

Implementation Strategy

When coding a Web application with JavaScript frameworks, a common pattern is to import components and design elements into the containers or pages of the application from their folder/path location. This works by either dynamically loading those files at runtime or loading them at design/build time.

The problem with loading dynamic content at runtime is that it requires all the different brand assets to be included in the build. This often leads to a big and slow build process, as all those files need to be included. The design-time approach is more effective, as the build process only includes the brand-specific files, making the build smaller and faster.

Using the design-time approach does require a strategy. Even though we could make specific file changes on the branch, to add the brand-a files as an example, and commit them, this is a manual process that is error prone. We instead need an approach that is managed by the build process. For this process to work, we need to think of a folder structure within our project to better support it. Let’s review an approach.

Ozkary Branching Strategy Branding Folders

After reviewing the image of the folder structure, we should notice that the component files import the resources from the same specific folder location, content. This of course is not enough to support branding, but by looking carefully, we should see that we have brand resources outside the src folder of the project, in the brands folder. There are also additional folders for each brand with the necessary assets.

The way this works is that only the files within the src and public folders are used for the build process. Files outside the src folder are not included in the build, but they are still under source control. The plan is to copy the brand files into the src/content folder before the build action takes place. This is where we leverage a custom action in the GitHub workflow.
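As a reference, the folder layout described above could look something like this sketch; the file names are illustrative:

brands/
    brand-a/
        logo.svg
        _theme.scss
        _font.scss
        fonts/
    brand-b/
        logo.svg
        _theme.scss
        _font.scss
        fonts/
public/
    logo.svg
src/
    content/
        logo.svg
        _theme.scss
        _font.scss
        fonts/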

Custom Action

GitHub Actions enables us to run commands or actions during the build process. These actions are defined as steps within the build job, so a step to meet the branding requirements can be inserted into the job to handle copying the corresponding files to the content folders. Let’s look at a default workflow file that is associated with a branch, so we can see clearly how it works.

Ozkary Branching Strategy Build Action

By default, the workflow has two steps: it first checks out or pulls all the files from the code repo, and it then executes the build commands that are defined in the package.json file. This is the step that generates the build output, which is deployed to the hosting location. The logical change here is to insert a step, or multiple steps, to copy the files from the brand subfolders. After making this suggested change, the workflow file should look as follows:

Ozkary Branching Strategy Custom Action

The new steps just copy the files from the target brand folder into the src and public folders. This enables the build process to find those brand-specific files and build the application with the new logo, fonts and theme. The step that copies the fonts does some extra work: because the font files have different font family names, we first find and delete all the existing font files, and we can then copy the new files over.
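As a reference, a simplified sketch of a brand-a workflow with such steps might look like the YML below; the branch name, paths and build commands are illustrative and not the exact file shown in the images above:

name: Build and deploy brand-a
on:
  push:
    branches: [ brand-a ]
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # remove the default fonts and copy the brand specific assets before the build
      - name: Copy brand assets
        run: |
          rm -f src/content/fonts/*
          cp -r brands/brand-a/fonts/. src/content/fonts/
          cp brands/brand-a/*.scss src/content/
          cp brands/brand-a/logo.svg public/
      # build the app with the brand assets already in place
      - name: Build
        run: |
          npm install
          npm run build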

It is important to notice that the SASS files, with the SCSS extension, are key players in this process. Those are the files that provide the variable and font information to support the new color theme, styles and fonts. When using SASS, the rest of the components only import those files and use the variables for their corresponding styles. This approach minimizes the number of files that need to be customized. The _font.scss file, for example, handles the font file names for the different brands, as those files are named differently.

For cases where SASS is not used, it is OK to instead copy over the main CSS files that define the color theme and style for the app, but the point should be to minimize the changes by centralizing the customization with variables instead of changing all the style files, as this can become hard to manage.

Conclusion

Branding applications is a common design requirement which can become difficult to manage without the right approach. By using a branching strategy and a GitHub custom action, we can manage this requirement and prevent build problems by distributing the branded assets in separate directories to keep the build process small. This approach also helps eliminate the need for developers to make code commits just to change import references.

Thanks for reading.

Gist Files

Originally published by ozkary.com

6/12/21

Understand Your Users with App Insights to Improve Adoption

After spending several months on software requirements, scrum and design meetings, and hours of implementation and testing effort, our application is finally deployed, but the work for a sustainable product is not complete. After the app is deployed, it is important to understand the users to improve adoption, and to understand the users, many questions need to be answered. Some common questions may include:

How do we know about user acceptance? How can we measure the user experience? Are all those different features on the app being used? How can we learn from our users to make app improvements?

To answer those questions, we need to collect data that can be used to provide and validate the answers. The best approach to collect this data is by adding telemetry tracking to our apps. By tracking user activities like page views, clicks and custom events, we can use the Application Insights analytical and visualization tools to understand the users' behavior. In this article, we look at some of the analytical tools that are available in the Azure Monitor services, which we can leverage to help us answer the questions listed above and better understand our users. Before we get to those tools, let’s first provide an overview of the monitoring service that hosts them.

Application Insights (AppInsights)

AppInsights is part of the cloud monitoring services on Azure. It has two main areas that work tightly together: it is a monitoring service with statistical analysis tooling, and it is an application framework, with support for languages like JavaScript, C# and Python, that is used to instrument our applications and track telemetry information for our apps.

This service has multiple statistical analysis tools to help digest the telemetry information for many areas of the apps, which can include the front-end, back-end and APIs. The tools are also grouped by areas of concern, like the investigation of performance metrics and problems, monitoring/alerts, and application usage.

It is the usage statistical analysis tooling that can help us learn how users are interacting with the application. Some of these tools include the users, sessions, events, cohorts, funnels and flows tools. Let’s review each one of these tools and see how they can help us understand our users and their behavior.

Users Analysis

The user analysis tool helps us understand details about the users. We can visualize which events and pages/views the users are interested in, as well as some user specifics like the country or city of origin, the operating system on their devices, and the browser versions.

Those details can help us make decisions like improving advertisement in certain geographical areas to attract those users. They are also great indicators for understanding usability based on the device types and browsers being used. This leads to making decisions on open issues that are related to mobile device improvements, as an example.

App Insights User Analysis


Sessions Analysis

The session analysis tool is very similar to the users’ tool, but it provides session-based information about those users. Keep in mind that one user can have multiple sessions. These sessions can help us see how users are using the application. The sessions can be visualized by tracking page views and user events. This information is valuable for understanding which areas of the application are used the most. Analysis can be done to understand usability problems in some areas, so they can be improved or simply removed from the app altogether.

The session information is very valuable for some A/B testing analysis, in which design variants can be introduced to the user experience. This, for example, can reveal that users tend to have a much better user experience with one design over the other.

App Insights Session Analysis

Events

When we need to understand how users are responding to a particular design or feature of the application, we can use the events tool to see the telemetry information that is collected when the user takes actions like clicking on tabs, links or dropdowns to change selections. The collected insights from these actions can reveal whether a particular feature is relevant to users. This can help determine that perhaps a feature is not needed at all or that a design is too confusing for our users.

Custom events not only track user actions, but they can also be used to track data and/or integration telemetry. For example, we can track events about data variations that come from an external API. The application may not have any control over the source of the data, but custom events can help us track those variations, which leads to creating countermeasures to handle them.

Other areas that can be tracked with custom events include input validation, timeouts from user inactivity, and chat or help requests from specific areas of the application. All of this leads to better insights for the improvement of the overall user experience.
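As a reference, a minimal sketch of how a page view and a custom event could be tracked from a JavaScript app with the Application Insights web SDK is shown below; the event name, properties and instrumentation key are placeholders:

import { ApplicationInsights } from '@microsoft/applicationinsights-web';

// initialize the SDK with the app's instrumentation key (placeholder value)
const appInsights = new ApplicationInsights({
    config: { instrumentationKey: 'YOUR-INSTRUMENTATION-KEY' }
});
appInsights.loadAppInsights();

// track a page view and a custom event when the user clicks a tab
appInsights.trackPageView({ name: 'dashboard' });
appInsights.trackEvent({ name: 'tab-click', properties: { tab: 'reports' } });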

App Insights Event Analysis

Cohorts

Cohorts enable us to create a specific group of users who have completed a specific goal metric, like signing up for a newsletter, completing a purchase or landing on a particular area of the app. By predefining groups of users with a particular criterion, we can use that group as a filter on the users, sessions and events tools. This enables us to continuously analyze the same criteria and compare performance metrics.

The advantage of using cohorts is that we can use custom log query expressions to select the information we need from the customEvents and pageViews log tables. The customEvents table has all the events that are sent from the app, while the pageViews table contains all the page visit information. Combining these tables with a particular criterion can generate an excellent filter, which enables us to analyze the data in more detail.
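As an example, a cohort based on a custom event could be defined with a log query similar to this sketch; the event name is hypothetical:

customEvents
| where name == "newsletter-signup"
| distinct user_Id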

App Insights Cohorts

App Insights Cohorts Query


Funnels

Most apps have a particular workflow that we would ideally like our users to follow. However, this may not always be the case, and users may not follow the steps as expected. Therefore, we need a tool that can help us analyze the data to verify how the users are moving through each step of the workflow. Funnels represent a set of steps in the application and the percentage of users who follow those steps.

For example, an app workflow may consist of several pages/steps that allow the user to register. Since each page visit is tracked, we can create a funnel and select each page visit as a separate step, thus creating a workflow. The output should indicate which step the users are not completing or where they stop or abandon the workflow. This is an indication that a particular step may have some design issues that are causing our users to drop off and not complete the workflow. This information should help identify design improvements on those pages to help guide the user and improve the completion percentage.


App Insights Funnels

User Navigation

Whereas funnels can help us understand the percentage of users completing certain application steps, they do not provide enough information to clearly see where the remaining percentage of users are going.

The user navigation visualization tool provides this insight. The tool provides a page view visualization that can help us track how users are navigating the site. Ideally, the workflow should be a sequential flow, but if we see that users are flowing into other areas of the app or stopping at a particular step, we can analyze that information further and make some design changes that can help increase the completion percentage.

To use the tool, an initial event, like a page visit, can be selected as the starting point; then custom events and other page views can be included to see a flow diagram of the steps before and after the selected target event.

When it comes to users visiting pages, this is by far the most useful tool for us to examine, as it can lead us to a better understanding of our application and help us make design decisions on how to improve the user experience and app navigation, which in turn improves the page visit goals for our apps.

App Insights User Flow Events

Conclusion

When an application development life cycle starts, several decisions are made based on the information that is available during the design phase of the app. It is very important to validate those design decisions by instrumenting our application, so the necessary telemetry information can be gathered and analyzed post deployment. It is then that we can better understand the app users, validate the design decisions, or make changes to improve both the user experience and the adoption of our apps. This is an ongoing process, as user behavior can change over time, so continuous iteration is required.

Add AppInsights to Single Page Apps

Thanks for reading!

Originally published by ozkary.com

5/15/21

React Static Web Apps Manage ChunkLoadErrors on Dynamic Routes

The React JavaScript framework for Single Page Applications (SPA), which can be hosted on Azure Static Web Apps (SWA) or CDN hosting, supports the concept of Code Splitting by loading pages/routes dynamically instead of building one single package. The benefit of Code Splitting is that it enables faster load times in the user’s browser, as opposed to loading the entire application in one single request. This feature, however, also introduces other concerns that we need to manage with code; otherwise, the user can end up with a white page in front of them as the application is unable to render the requested content.

ozkary chunk load error


To leverage Code Splitting, we load the page routes by using the React lazy loading and dynamic import features. This way, the page, or chunk, for the selected route is loaded only when it is requested. Because this is done at run-time, a new request is made to the server to download the chunk of code that is needed to render the page. Yes, this is a server-side trip to get the additional resources, and the application still works as an SPA.

Note: Lazy loading is an architectural pattern that loads code into memory, or web pages, only when it is needed. This improves application performance and load time.

Because a server-side request must be made to download an additional chunk of code and render the route or page properly, there could be failures that are reflected as the following error:


Uncaught ChunkLoadError: Loading chunk 2 failed.(timeout)


The failure could be due to two main reasons:

  •  There is a network latency issue, and the content failed to download
  •  The client application may have been cached in the browser, and an app update has replaced those files with new hash codes

For the network problem, we can add some error handling and retry logic to allow the application to get the chunk again. We do need to be careful here and avoid locking the user in a retry loop, because the second case, an app update, could be what is happening, in which case the only way to solve the issue is by refreshing the entire app again. Let’s look at the code below and talk about the root cause of this issue.

 

Ozkary - React Dynamic Routes

After looking at the code, we can see that we are lazy loading dynamic imports by using promises. Those directives tell the compiler to create a chunk for each route, and when that route is dynamically requested by the user, the chunk is downloaded to the browser. The issue with this code is that there is no error handling, and the promises can fail to download the chunk, resulting in the ChunkLoadError.
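A minimal sketch of that pattern, assuming React Router and hypothetical page components, could look like this:

import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Switch, Route } from 'react-router-dom';

// each dynamic import becomes a separate chunk that is downloaded on demand
const Home = lazy(() => import('./pages/Home'));
const Reports = lazy(() => import('./pages/Reports'));

export default function App() {
    return (
        <BrowserRouter>
            <Suspense fallback={<div>Loading...</div>}>
                <Switch>
                    <Route exact path="/" component={Home} />
                    <Route path="/reports" component={Reports} />
                </Switch>
            </Suspense>
        </BrowserRouter>
    );
}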

To address this issue, we create a loader component that can manage both the error and the attempts to download the requested chunk. At the same time, this component needs to limit the number of retries, to avoid an infinite loop, and decide when to load the entire app again. Let’s look at a simple implementation of how that could be done.

Loader Component


ozkary route loader component


Using the Loader Component 

 

ozkary load routes with error handling


After looking at our solution, we can see that we are lazy loading through the loader component, which manages the promises, errors and retry activities. The component uses a default limit for the number of attempts it should make to download the next route. This is done by calling the same function recursively and decreasing the limit with every attempt until it reaches zero. When the limit is reached, it does the next best thing, which is to reload the application. If the chunk files for the current version of the application are still available, the retry logic should be able to solve the problem. Otherwise, a page reload takes place to download the application with the updated chunk information.
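A simplified sketch of such a loader, assuming a fixed retry limit and a full page reload as the fallback, could look like this:

import { lazy } from 'react';

// retries a dynamic import a limited number of times before reloading the app
function loadWithRetry(importFn, retriesLeft = 3, interval = 1000) {
    return new Promise((resolve, reject) => {
        importFn()
            .then(resolve)
            .catch((error) => {
                if (retriesLeft === 0) {
                    // the chunks were likely replaced by a new deployment, reload the app
                    window.location.reload();
                    reject(error);
                    return;
                }
                // wait and try to download the chunk again
                setTimeout(() => {
                    loadWithRetry(importFn, retriesLeft - 1, interval).then(resolve, reject);
                }, interval);
            });
    });
}

// usage: wrap each dynamic import with the loader
const Reports = lazy(() => loadWithRetry(() => import('./pages/Reports')));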

Code Gists

Route Loader Gist

Routes Gist

Conclusion

For this simple implementation, we decided to reload the application when the retries continue to fail. Depending on the use case, the approach can be different. For example, a nice message can be displayed to the user explaining that a new update is available and the application needs to be reloaded. This is the best user experience, as feedback is provided to the user.

An important concern to consider when using Code Splitting is that a ChunkLoadError can take place for users with network issues or when a new update is pushed to production. Therefore, additional design and architecture considerations must be thought through before adding the Code Splitting performance improvement to a React single page application.

Thanks for reading.

Originally published by ozkary.com

4/17/21

Modernize SOAP Web Service with Azure API Management

Simple Object Access Protocol (SOAP) is an XML-based protocol standard for building Web Services. It provides a communication channel for data exchange between applications that are built with different technologies. As a result, many companies have invested heavily in building SOAP Web Services.

SOAP, however, has not been the de facto approach for building modern Web applications for some time now. With the creation of new protocols and standards, REST (Representational State Transfer) and JSON (JavaScript Object Notation) are now widely used for new projects, leaving SOAP services mostly in maintenance or support mode.

ozkary-apim-gateway-soap


SOAP began to lose momentum because of its limitations. The XML message format is verbose compared to the JSON format. SOAP is also tightly coupled, as the client-server communication uses a strict contract definition, the WSDL (Web Service Description Language). RESTful APIs, in contrast, provide an architectural style with loose guidelines and support for several message formats like plain text, XML, JSON and HTML.

Replacing a SOAP Service

The decision to replace a SOAP service with other technologies can be a challenging one. Depending on how complex the project is, it may just be a tremendous investment full of risks for any company. In addition, there are the operational costs to support the new technology while still supporting clients that cannot migrate to the new API definition.

Some companies may also argue that their APIs are doing a great job and do not need to be updated with new technology. For some clients, however, it may be a risk to take on an integration investment with a company that supports what many perceive as older technology, or whose APIs have security limitations. Luckily, for situations like this, there are alternatives that can meet both sides of the argument.

Modernize a SOAP API

With the growth of cloud technology, there are ways to modernize a legacy API without having to rewrite it, which immediately becomes very appealing, as this saves companies project management and implementation costs. For a solution like this, we need a system that provides reverse-proxy and document format mapping capabilities. The Azure cloud provides the perfect solution with the API Management service offering.

API Management (APIM) Service

The Azure APIM service accelerates this kind of solution by providing multiple capabilities that enable the management of APIs, protecting the resources with authentication, authorization, IP whitelisting and usage limits. It also provides reverse-proxy capabilities that enable companies to publish REST-based APIs by creating Open API endpoints which act as facades for the legacy systems. It supports the transformation of XML documents into JSON documents and vice versa, which makes it the obvious solution for modernizing a legacy SOAP API.

Modernization Process

The process to modernize a legacy SOAP API is much simpler and takes a lot less effort than a rewrite. This is mostly due to the metadata reading capabilities of the APIM service. The process starts by importing a WSDL document into the service, which reads the metadata to create facades with mapping policies for the inbound and outbound stages of the request lifecycle. The facade is the Open API definition of the RESTful service. The mapping policies enable the mapping and transformation of JSON to XML documents for inbound processing, as well as XML to JSON transformation for outbound processing. The inbound stage handles the JSON payload request, which needs to be transformed into XML so it can be proxied to the legacy SOAP API. The outbound stage handles the XML response from the legacy SOAP service and transforms it into a JSON document, so the client application can consume it.

Sample Process

Now that there is a good conceptual understanding of the approach, let’s look at a practical example by using a publicly available SOAP API which returns a list of continents by name.  Note that any other SOAP API can be used instead as the process is really the same. Let’s review the SOAP API specifications for this Web service.

Service endpoint:

http://webservices.oorsprong.org/websamples.countryinfo/CountryInfoService.wso

Open any tool, like Postman, that can send API requests to the service. Send a POST request with the header Content-Type: text/xml. Also add the following request body payload.

 

<?xml version="1.0" encoding="utf-8"?>

<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">

  <soap12:Body>

    <ListOfContinentsByName xmlns="http://www.oorsprong.org/websamples.countryinfo">

    </ListOfContinentsByName>

  </soap12:Body>

</soap12:Envelope>
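As an alternative to Postman, the same request could be sent from the command line with curl, assuming the envelope above is saved in a file named request.xml:

$ curl -X POST "http://webservices.oorsprong.org/websamples.countryinfo/CountryInfoService.wso" \
    -H "Content-Type: text/xml" \
    --data-binary @request.xml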

 

The expected response should be an XML document that looks as follows:

 

<?xml version="1.0" encoding="utf-8"?>

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">

    <soap:Body>

        <m:ListOfContinentsByNameResponse xmlns:m="http://www.oorsprong.org/websamples.countryinfo">

            <m:ListOfContinentsByNameResult>

                <m:tContinent>

                    <m:sCode>AF</m:sCode>

                    <m:sName>Africa</m:sName>

                </m:tContinent>

                <m:tContinent>

                    <m:sCode>AN</m:sCode>

                    <m:sName>Antarctica</m:sName>

                </m:tContinent>

                <m:tContinent>

                    <m:sCode>AS</m:sCode>

                    <m:sName>Asia</m:sName>

                </m:tContinent>

                <m:tContinent>

                    <m:sCode>EU</m:sCode>

                    <m:sName>Europe</m:sName>

                </m:tContinent>

                <m:tContinent>

                    <m:sCode>OC</m:sCode>

                    <m:sName>Ocenania</m:sName>

                </m:tContinent>

                <m:tContinent>

                    <m:sCode>AM</m:sCode>

                    <m:sName>The Americas</m:sName>

                </m:tContinent>

            </m:ListOfContinentsByNameResult>

        </m:ListOfContinentsByNameResponse>

    </soap:Body>

</soap:Envelope>

 

On Postman, this configuration with the request and response should look like this:

ozkary-postman-soap-api

This example uses XML envelopes for the request and response. To modernize this API, we can import the WSDL definition of the API into Azure APIM. To get the WSDL file, just add the wsdl parameter to the query string and send a GET request. The response should be an XML schema file which describes the API operations and data types.

http://webservices.oorsprong.org/websamples.countryinfo/CountryInfoService.wso?wsdl

Import the WSDL

Note: The following steps should be done on a subscription that has an API Management service configured.

To import the API schema definition into APIM, open the Azure Portal and the API Management console. Click on the APIs link to open the existing API definitions. From this view, click on Add API and select the WSDL option. Configure the popup information as shown in the image below. When ready, press Create to generate the API definition with policies.

Note: Make sure to use the URL that has the wsdl parameter on the query string

ozkary-import-wsdl

A new API with the name CountryInfoService should be created. Click on that name to display the operations that are available from this API. Check the settings for this API and make a note of the Base URL. This is the URL that should be used to send requests to the new API.

We are interested in finding the ListOfContinentsByName operation. This is the same operation that we used for the Postman validation. Once it is selected, review the inbound and outbound policies by clicking on the code icon as shown below:

ozkary-apim-policy

Policy File

APIM Policy Gist

The policy has an inbound and an outbound node. The inbound node uses a liquid template to transform the incoming payload into a SOAP envelope, which is required by the SOAP service. The outbound node also uses a liquid template to transform the XML response into a JSON payload, which is then sent back to the calling application. The markup used in the liquid templates has many features, and we cannot cover all the details here. For this simple example, we can see how a new JSON document is created by adding the items from the continents collection.
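As a reference, a heavily simplified version of such a policy could look like the sketch below; the inbound liquid template is abbreviated, and the outbound transformation is shown here with the xml-to-json policy instead of a liquid template for brevity:

<policies>
    <inbound>
        <base />
        <set-header name="Content-Type" exists-action="override">
            <value>text/xml</value>
        </set-header>
        <!-- wrap the request in the SOAP envelope expected by the legacy service -->
        <set-body template="liquid">
            <soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
                <soap12:Body>
                    <ListOfContinentsByName xmlns="http://www.oorsprong.org/websamples.countryinfo" />
                </soap12:Body>
            </soap12:Envelope>
        </set-body>
    </inbound>
    <outbound>
        <base />
        <!-- convert the XML response into a JSON document for the client -->
        <xml-to-json kind="javascript-friendly" apply="always" consider-accept-header="false" />
    </outbound>
</policies>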

We should now have a JSON API ready, but we are not yet RESTful. The SOAP service requires documents to be sent with every request; as a result, it uses the POST request method even for reading information. A RESTful service should instead be able to get the continent information using a GET request method, with POST being used only for creating new resources. We can remove that constraint by using APIM to change that configuration as well. Let’s load the operation settings and click on the Frontend configuration. From this configuration, we can change the request method from POST to GET, and even change the URL route to something that aligns with our naming conventions. Also notice that in the inbound policy, we add the set-method policy to indicate that even if the request came in as a GET request, we want to forward it as a POST request, which is what the SOAP API expects. The new setting should look as follows:

ozkary-change-api-operation

Everything should now be set up to make requests to the new API. If no extra configuration was done, like adding a subscription key, and the APIM resource was not created in a private subnet, we should be able to access the API from the internet. Let’s load our tool of choice and make a request to the new API. Get the Base URL from the settings and append the operation name; that is the new RESTful route. We can now configure a request with the header Content-Type: application/json and set the method to GET. There is no need to add anything to the body, as there is no JSON payload to send during a GET operation. This is what the new configuration should look like:

ozkary-apim-json-response

As we can see, the results show the same list of continents, but instead of an XML document, the data is now returned as a JSON document.

Conclusion

Modernizing a legacy SOAP API can be a very expensive and long project. In cases where we need to accelerate the solution and minimize the risks, Azure API Management provides a solid approach. This service provides capabilities to create a REST-based API, policies to handle document transformation, and enhanced security for the APIs. All of this can be achieved without having to decommission the existing legacy SOAP API, enabling a business to provide modern APIs to its customers.

Originally published by ozkary.com