WeChat Mini Program Development 06 Automated Testing: Improve the delivery quality of Mini Programs


Testing is an essential step in any product iteration. Its purpose is to control quality: to detect and fix quality risks early, before the product reaches users, and to keep the product highly available. In other words, testing is a necessary means of indirectly improving a product's competitiveness.

As a client-side developer (whether you work on the traditional web front end or on Mini Programs), you are responsible, within the overall product module matrix, for the interaction logic layer that faces the user directly. The code you write directly affects the application's user experience, namely its UI and its interaction, and these two points are the main testing targets on the Mini Program client side.

I'd like to remind you that the importance of interaction and UI is often overlooked by developers, but for users the first impression comes from the visuals and from operational feedback. If I had to choose between two Mini Programs with identical functionality, I would certainly pick the one with friendlier interaction and a more comfortable UI. Testing of interaction and UI, which is also testing of the client side, is therefore very important.

To build a complete automated testing system, you need systematic testing knowledge and the right tools. In this lesson we will approach the topic from both angles: first consolidating the theory, then studying the best Mini Program testing tools currently available and using them to build an automated testing system for Mini Programs in a targeted way. I hope that after this lesson you will have a deeper understanding of automated testing and be able to apply it in your day-to-day work.

Let's get started. First we will cover some testing theory, including the testing pyramid model and the goals and implementation approach of each type of test, and then move on to task decomposition and technology selection.

 

The testing pyramid model

Automated testing has long since matured into a complete system in server-side development. However, unlike the server side, which focuses on data and logic, automated testing on the client side (front-end web, apps, Mini Programs, and so on) remains very difficult even today.

Take the front end as an example. Its late start and rapid evolution have limited the adoption of automated testing in the front-end field to some extent, but that is not the core factor. The real reason is that the front end, the GUI layer that interacts directly with the user, has always been hard to test automatically. The closer code is to the user, the harder it is to test, because:

Compared with the server side, the UI changes very frequently;

The UI is closely tied to design, and design carries inherently subjective factors.

Machines cannot reliably judge whether a UI matches expectations, so even today most technical teams cannot remove their dependence on people in front-end testing; manual testing still accounts for the majority of the overall testing effort, forming an anti-pattern that runs counter to automation:

[Figure 1]

 

From bottom to top, the layers are unit testing, integration testing, end-to-end testing, and acceptance testing, described here from the perspective of the application as a whole:

Unit tests cover the individual units of both the front end and the back end;

Integration tests correspond to scenarios where the front end and back end are integrated, commonly understood as front-end/back-end joint debugging tests;

End-to-end tests correspond to full functional tests of the project, that is, the automated GUI tests in Figure 1.

[figure]

If we refine this further and treat the front end as a relatively independent black box isolated from the back end, and then interpret the pyramid model in that context, the application-level "unit test" expands into a broader set: front-end unit tests in the narrow sense, front-end module integration tests, and end-to-end tests isolated from the back end. As shown in the following figure:

[figure]

In this way, apart from acceptance testing, the other three types of automated tests in the pyramid model can be brought into the front-end domain and carried out with mocks, without back-end cooperation. In other words, automated testing on the front end, or on the Mini Program client side, needs to include three types:

Unit testing;

Integration testing;

End-to-end testing.

However, there is some controversy over the boundary between unit tests and integration tests: a unit test targets the smallest unit, while an integration test targets functionality assembled from multiple smallest units.

Although the concepts are relatively clear, in practice (especially across different technical domains) the understanding of the "smallest unit" is not entirely consistent. For example:

From the perspective of the application as a whole, the front end and the back end can each be regarded as the smallest unit, so the integration test covers front end plus back end;

Within the front-end or back-end domain itself, a module can be defined as the smallest unit, and so can a class;

If functional programming is used, a function can also be defined as the smallest unit.

**This ambiguity in the definition of the smallest unit is the main cause of the dispute between unit testing and integration testing.** For example, a shopping cart module contains product introduction, quantity, price, and other components. Each component can be regarded as the smallest unit of the shopping cart, so the unit tests target these components and the integration tests target the shopping cart module.

However, the shopping cart is only one module of the e-commerce application as a whole, so at the application level the shopping cart itself can be regarded as the smallest unit: the unit tests then target the shopping cart module, and the integration tests target the application as a whole. This is similar to dynamic programming in algorithms, where you start from the smallest sub-problem and work upward until the complete, complex problem is solved.

The reason I dwell on the difference between unit testing and integration testing is to make one core point: applying theory to practice requires combining it with the characteristics of the specific domain and scenario. This is an important basis for the next step, decomposing the testing tasks for the Mini Program client side.

With this theory in hand, the next step is to put it into practice and clearly position and assign the tasks of unit testing, integration testing, and end-to-end testing on the Mini Program client side. Remember the custom components covered in Lesson 03? There I stressed that learning how to use a custom component is only the first step; it is more important to understand the componentization specifications and thinking behind it.

In front-end development (indeed, in all client-side development), componentization is one of the most central ideas. It determines not only how you organize the architecture and write code, but also how you test. Wherever it makes sense, many lessons recommend extracting a relatively independent piece of functionality into a component, which benefits both code reuse and testing, and the same holds for Mini Programs.

If we define a component as the smallest unit of a Mini Program, then unit testing targets components, integration testing targets functions or pages composed of multiple components, and end-to-end testing targets the overall interaction flow of the Mini Program.

Unit tests for components
The ideal state of componentization is to treat everything as a component and compose the application from many components. In real-world work, however, this state is hard to achieve fully, and some "anti-componentization" patterns inevitably creep into projects.

In a Mini Program project, we can encapsulate a navigation bar shared by multiple pages as a custom component, but on each page not everything outside the navigation bar is suitable for encapsulation as a component, for example the outer container of a component, or the container of that container. Such nodes are not components; they usually act as "glue" that joins components together, and they are not worth testing on their own. Their behavior only shows up within a complete function or page, which is the realm of integration or end-to-end testing. This illustrates that, for Mini Programs, unit tests target custom components.

However, the Mini Program runtime environment is rather special and the base library is not fully open source, so unit testing Mini Program custom components is somewhat difficult. Popular front-end frameworks such as Vue and React have complete component unit testing solutions, whereas tools that can unit test Mini Program custom components are scarce. I personally recommend miniprogram-simulate (hereinafter "simulate"), produced by the WeChat team: it is designed specifically for testing Mini Program custom components and is very easy to use.

simulate can be used with front-end testing frameworks such as Jest and Karma, and it can simulate all the behaviors of a custom component, including property injection, state manipulation, lifecycle triggering, and event handling. Its usage is straightforward and is covered in the documentation. What I want to emphasize is that simulate does not depend on the WeChat Developer Tools: it only needs a browser, or even a browser environment simulated by jsdom, to run tests. This is the basic precondition that allows it to be combined with tools from the front-end ecosystem and makes it easier to plug into a CI/CD pipeline.
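
To give a rough sense of what this looks like in practice, here is a minimal sketch of a Jest test using simulate. The component path, the `count` property, and the `.value`/`.plus` selectors are hypothetical stand-ins for your own component; consult the official documentation for the full API.

```js
// counter.test.js — a minimal sketch of unit testing a custom component with miniprogram-simulate.
// The component path, the `count` property, and the `.value` / `.plus` selectors are hypothetical.
const path = require('path')
const simulate = require('miniprogram-simulate')

test('counter renders its initial value and responds to a tap', async () => {
  // Load the custom component and render it with initial properties.
  const id = simulate.load(path.join(__dirname, '../components/counter/counter'))
  const comp = simulate.render(id, { count: 1 })

  // Attach to a container node so lifecycle hooks such as `attached` fire.
  const container = document.createElement('parent-wrapper')
  comp.attach(container)

  // Assert on the rendered output.
  expect(comp.querySelector('.value').dom.innerHTML).toContain('1')

  // Simulate a tap on the increment button, wait for the asynchronous re-render, then re-check.
  comp.querySelector('.plus').dispatchEvent('tap')
  await new Promise((resolve) => setTimeout(resolve, 10))
  expect(comp.querySelector('.value').dom.innerHTML).toContain('2')

  comp.detach()
})
```

Because such a test runs against jsdom, it can be executed with plain `jest` in a Node.js environment, which is exactly what makes it easy to fold into a CI pipeline.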

Once unit testing for custom components is in place, the next step is to move one level up to integration testing and end-to-end testing.

 

Feature-oriented integration testing and holistic end-to-end testing

Functions or pages composed of multiple components are the target of integration testing in a Mini Program, and the Mini Program as a whole, composed of all its pages, is the target of end-to-end testing.

In most Mini Program scenarios, the association between pages is very weak. The three most common associations are:

Jumping between pages;

Passing data via the URL query string;

Sharing data through storage (a sketch of these mechanisms follows below).
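
To make these associations concrete, here is a hedged sketch of two pages (which would live in separate files in a real project). The page paths, the `currentProduct` storage key, and the data fields are made up for illustration.

```js
// pages/list/list.js (hypothetical) — page A jumps to page B, passing an id via the URL
// and sharing a larger object through storage.
Page({
  data: { products: [] },
  goToDetail(e) {
    const product = this.data.products[e.currentTarget.dataset.index]
    wx.setStorageSync('currentProduct', product)                    // association 3: shared storage
    wx.navigateTo({ url: `/pages/detail/detail?id=${product.id}` }) // associations 1 & 2: jump + URL query
  },
})

// pages/detail/detail.js (hypothetical) — page B reads the URL parameter and the shared data.
Page({
  onLoad(options) {
    const id = options.id                               // parameter from the URL query string
    const product = wx.getStorageSync('currentProduct') // data shared through storage
    this.setData({ id, product })
  },
})
```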

Under this premise, end-to-end tests cover only a very limited number of scenarios, most of which can also be covered by integration tests, so we might as well discuss these two test types together.

When selecting automated testing technology for integration and end-to-end testing, we want the tool to provide the following basic capabilities:

Simulating the user's interaction with the Mini Program, such as tapping a button or swiping to switch pages;

Simulating the Mini Program's storage operations;

Simulating different states of network requests, including returned data and exceptions;

Taking screenshots to verify UI correctness.

These requirements are almost identical to those for front-end testing tools, but because of the closed nature of the Mini Program, some very useful tools from the front-end ecosystem cannot be used directly. Fortunately, WeChat officially provides a tool dedicated to automated testing of Mini Programs: miniprogram-automator (hereinafter "automator").

automator lets you control the behavior of a Mini Program from an external script to implement automated testing, including simulating interactions, manipulating storage, and mocking the network (the first three points above). Its usage and features are very close to those of Selenium and Puppeteer, which are commonly used on the front end, with the relevant capabilities migrated from the browser to the Mini Program: controlling the Mini Program to jump to a specified page, reading page data, actively triggering the events bound to Mini Program elements, calling any method on the wx object (the global variable of the Mini Program runtime), and so on. Unlike simulate, which unit tests custom components, automator does require the WeChat Developer Tools.
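
The sketch below, using Jest as the runner, shows roughly what an automator-driven test might look like. The project path, page route, `.submit` selector, mocked response, and the `submitted` data field are all hypothetical, and the automation port of the WeChat Developer Tools must be enabled for `automator.launch` to connect.

```js
// order.e2e.test.js — a minimal sketch of an integration/end-to-end test with miniprogram-automator.
// projectPath, the page route, the `.submit` selector, and the mocked response are hypothetical.
const automator = require('miniprogram-automator')

let miniProgram

beforeAll(async () => {
  // Launch the project through the WeChat Developer Tools (automation must be enabled).
  miniProgram = await automator.launch({ projectPath: '/path/to/your/miniprogram' })
}, 30000)

afterAll(async () => {
  await miniProgram.close()
})

test('order page submits successfully with a mocked network', async () => {
  // Mock wx.request so the test does not depend on a real back end.
  await miniProgram.mockWxMethod('request', { statusCode: 200, data: { ok: true } })

  // Jump straight to the page under test (it need not be the home page).
  const page = await miniProgram.reLaunch('/pages/order/order?id=123')
  await page.waitFor(500)

  // Simulate an interaction and verify the page data after the response handler runs.
  const button = await page.$('.submit')
  await button.tap()
  await page.waitFor(500)
  const data = await page.data()
  expect(data.submitted).toBe(true)

  // Take a screenshot for later UI comparison.
  await miniProgram.screenshot({ path: 'order-submitted.png' })

  await miniProgram.restoreWxMethod('request')
}, 30000)
```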

Beyond the first three capabilities, we also mentioned the fourth: taking screenshots for UI comparison. This work has two steps: first take the screenshot, then compare it.

The screenshot itself can be handled by automator. For judging whether the screenshot is correct there are more options, from the original manual review to automatic comparison with the help of tools. But as we noted when discussing the testing pyramid, whether a UI meets expectations is not purely a technical question, and a tool alone can hardly guarantee the correctness of the verdict, so even when a tool performs the image comparison, a second manual confirmation is still needed. There are many image-comparison tools, based on Node.js, Python, and even Go, which I will not introduce one by one.
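
As one possible option for the comparison step (not the only one), here is a sketch using the Node.js libraries pngjs and pixelmatch to diff a fresh screenshot against a stored baseline. The file paths and threshold are arbitrary choices, and any reported difference should still go through a second manual review.

```js
// compare-screenshot.js — a sketch of pixel-level screenshot comparison with pngjs + pixelmatch.
// Baseline/current file paths and the mismatch threshold are arbitrary choices.
const fs = require('fs')
const { PNG } = require('pngjs')
const pixelmatch = require('pixelmatch')

const baseline = PNG.sync.read(fs.readFileSync('baseline/order-submitted.png'))
const current = PNG.sync.read(fs.readFileSync('screenshots/order-submitted.png'))
const { width, height } = baseline
const diff = new PNG({ width, height })

// Returns the number of mismatched pixels; `threshold` controls per-pixel color sensitivity.
const mismatched = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1,
})

// Write the visual diff and flag for manual review rather than failing blindly:
// whether the UI is "correct" is partly a subjective, design-level judgment.
fs.writeFileSync('diff/order-submitted.png', PNG.sync.write(diff))
console.log(`${mismatched} pixels differ`)
if (mismatched > 0) process.exitCode = 1
```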

How integration and end-to-end testing break down into concrete tasks depends on the business logic, which differs across different kinds of Mini Programs, so the decomposition differs as well. We cannot cover every type in one lesson, so we will pick some highly generic tasks to explain. Let's look at the following diagram:

[figure]

Note that with automator you can jump to any page, so page A in the figure is not necessarily the home page; it can be any relatively independent page that has no prerequisite page.

The test simulation scenarios for a page fall roughly into three categories:

Simulation of rendering scenarios. Simulate normal and abnormal scenarios under different conditions (for example, an abnormal scenario may be caused by a network error or by a service interface error). The final output is a screenshot, and the results are compared by tools and by hand;

Simulation of interaction scenarios. Trigger an event bound to a component on the page, then verify that the response function produced the correct result. The final output can be data or, if rendering is involved, a screenshot. The comparison of results is handled by tools and by hand (if no screenshots are involved, tools alone suffice);

Validation of parameter passing and data sharing. This happens during jumps between pages, for example when page A in the figure jumps to page B: parameters may be passed through the URL, or data may be shared through storage. If either behavior exists, the correctness of the parameters and data needs to be checked (see the sketch after this list).
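
As promised above, here is a hedged sketch of the third category with automator, reusing the `miniProgram` instance launched in the earlier sketch. The page paths, the `.go-detail` selector, the `currentProduct` storage key, and the expected id are hypothetical.

```js
// A sketch of validating URL parameters and shared storage when page A jumps to page B.
// Page paths, the `.go-detail` selector, and the `currentProduct` storage key are hypothetical.
test('page A passes the product id and shares data when jumping to page B', async () => {
  const pageA = await miniProgram.reLaunch('/pages/list/list')
  await pageA.waitFor(500)

  // Trigger the jump from page A and give the navigation time to complete.
  const item = await pageA.$('.go-detail')
  await item.tap()
  await pageA.waitFor(500)

  // Check that page B received the expected URL parameter.
  const pageB = await miniProgram.currentPage()
  expect(pageB.path).toContain('pages/detail/detail')
  expect(pageB.query.id).toBe('123')

  // Check the shared data by calling wx.getStorageSync inside the Mini Program runtime.
  const shared = await miniProgram.callWxMethod('getStorageSync', 'currentProduct')
  expect(shared.id).toBe('123')
}, 30000)
```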

In fact, integration and end-to-end testing for most Mini Programs will involve the test cases listed in the figure above, although in actual development there will be variations depending on the specific business scenario.

 

Summary

Testing is an important measure for ensuring product delivery. As an engineer, guaranteeing the quality of your own output has a positive effect on overall R&D efficiency and quality assurance. Unit testing, integration testing, and end-to-end testing are three types of tests that progress from the bottom up, each with its own goals and supporting tools.

Because of the closed nature of its ecosystem, the Mini Program is somewhat constrained in its choice of automated testing technology. Fortunately, WeChat officially provides some easy-to-use tools, and these tools also have a degree of extensibility, so they can be combined with testing tools from the front-end ecosystem to build a complete automated testing system.

In this lesson we devoted most of the space to testing theory, which is independent of Mini Programs and applies to any technical field, including the cloud development topics we will study later.

Of course, we did not go into much detail about how to use the Mini Program testing tools, partly because that is all covered in the documentation, and partly because tools are only a medium for putting theory into practice. So this simple task is left to you as today's homework: read the documentation of the testing tools mentioned in this lesson (simulate and automator) and learn how to use them.
