Quickly writing integration tests with Cypress

Test automation is all about getting fast, consistent, quality feedback about the features of your site. A continuous deployment cycle is fast and should give you the confidence to deploy to production at any moment. So why do we still put software into production that our test automation can't vouch for?

A flaky test, by definition, is a test that fails even though nothing has changed code-wise.
Because of flaky tests, your build fails, you hit retry a few times, and it suddenly turns green. Or the web drivers you rely on are out of sync with the current version of your browser and need to be updated (again). This wastes time and creates an environment where your tests are a nice-to-have rather than a must-have.

Teams dealing with the above start with a clear goal: creating a Continuous Integration (CI) pipeline with back-end tests in the form of unit tests or API tests, and Selenium, Cucumber or SpecFlow for the front-end. But at a certain point the tests become flaky: the person who was in charge leaves the team, or the team spends less time maintaining the tests because the client wants new features as soon as possible. Debugging the project takes time, and analyzing (sometimes even guessing) what the issue is takes more, until in the end the whole test automation project is archived.

So, what if the team could see at any point in time what is happening to a particular test?

A few weeks ago, I tried out Cypress.io at Test Automation Days. With the knowledge I gained there, I tried Cypress out as a small Proof of Concept (PoC) project at one of the clients we work for. The big win Cypress has over the old-fashioned Selenium/Cucumber projects is that it reduces the effort and complexity of writing and debugging integration tests. Cypress doesn't use PhantomJS or Selenium WebDriver but Chromium, an open-source project that is used by Google Chrome and Microsoft Edge (used since version 75).

With the client I discussed what the project should look like; here are a few things they wanted to explore during the PoC:
• The language used should be clear to everyone in the team
• Supports the Gherkin language
• Clicking on elements not yet available in the DOM
• Waiting for elements to disappear

Cypress integration tests are written in JavaScript, a language commonly used by web developers today. This makes acceptance within the team easier and improves adoption throughout the team. For the Angular community, it also supports TypeScript!

Gherkin support is not native in Cypress; out of the box it bundles tools like Mocha and Chai. More info on that can be found here. Gherkin is a format for test specifications: a domain-specific language that helps you describe business behavior without going into implementation details.

So how did we add Gherkin support? We added a preprocessor called "cypress-cucumber-preprocessor"; the one we used can be found on GitHub.

Installation was easy: Just run a single npm command:
npm install --save-dev cypress-cucumber-preprocessor
Add a small registration line to your plugins/index.js and cypress.json, and you're ready to go.
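For reference, the registration looks roughly like this (a sketch based on the plugin's README, not taken from the PoC project itself):

```javascript
// cypress/plugins/index.js — register the cucumber preprocessor so that
// Cypress compiles .feature files before running them
const cucumber = require('cypress-cucumber-preprocessor').default;

module.exports = (on, config) => {
  on('file:preprocessor', cucumber());
};
```

In cypress.json you then point the test runner at the feature files, e.g. `"testFiles": "**/*.feature"`.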

├── README.md
├── cypress
│   ├── cucumber-json
│   │   ├── LanguageSelector.cucumber.json
│   │   └── Menu.cucumber.json
│   ├── integration
│   │   ├── LanguageSelector
│   │   │   └── languageselector.js
│   │   ├── Menu
│   │   │   └── menu.js
│   │   ├── LanguageSelector.feature
│   │   ├── Menu.feature
│   │   ├── __image_snapshots__
│   │   ├── __snapshots__
│   ├── plugins
│   │   └── index.js
│   ├── screenshots
│   │   ├── LanguageSelector.feature
│   │   └── Menu.feature
│   ├── support
│   │   ├── commands.js
│   │   ├── index.js
│   │   └── multiple-cucumber-html-reporter.js
│   └── videos
│       ├── Menu.feature.mp4
│       └── LanguageSelector.feature.mp4
├── cypress.json
├── package-lock.json
└── package.json

Here is a small example feature file where we check if an element is shown:

Feature: Language selector on Sentia.com

  Scenario: Show language selector to a first time user
    Given the user visits sentia.com for the first time
    Then The language selector is shown

The feature file shown above is named LanguageSelector.feature, so our integration test file is called languageselector.js. Here is a small snippet:

/// <reference types="cypress" />

import { Given, When, Then, And } from 'cypress-cucumber-preprocessor/steps';

Given('the user visits sentia.com for the first time', () => {
    cy.visit('https://sentia.com');
});

Then('The language selector is shown', () => {
    cy.get('div.text-center.s-language-welcome').should('be visible');
});
Here you see a few things. The first line (starting with ///) offers intelligent code suggestions directly in your IDE while writing tests.

The import { Given, When, Then, And } links the feature file with its JS implementation. For this example we put all the step definitions in the same JS file.

After all this, you can add the scenarios as shown above.

To run all available tests, you can run the following command:
cypress run --spec "**/*.feature"

To run the tests locally with the interactive Test Runner, you run the following command:
./node_modules/.bin/cypress open

Now you should see the following screen:

Cypress - Picture 2

Here you can see all the features you have added and select which browser they should run on. Once you click on a test, it opens a separate browser (in this case Chrome 86) and automatically runs all tests in that feature file.

If you looked closely, there is a small typo in the should assertion: a space instead of a dot (it should be be.visible). Cypress uses Chai as its assertion language; a full list of available assertions can be found here.
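For reference, the corrected assertion, plus a couple of other common Chai-style chainers Cypress supports, look like this (the first selector is taken from the example above; the other selectors are purely illustrative):

```javascript
// Corrected: the chainer is 'be.visible', dot-separated
cy.get('div.text-center.s-language-welcome').should('be.visible');

// A few other common Chai chainers (illustrative selectors)
cy.get('ul.menu > li').should('have.length', 5);
cy.get('a.home-link').should('have.attr', 'href', '/');
```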

Cypress instantly shows you after the run where the error is and what it was. If you run this project as a full test run, it also generates a small .mp4 file as video evidence showing the same as the screenshot.

Cypress - Picture 3

Here is another of those nifty Cypress tricks: if you make a change to your active test, it will automatically reload and rerun that test for you. So no more fix code -> compile -> rerun -> fix other errors -> etc.

After the small typo has been fixed, you will be greeted with this screen:

Cypress - Picture 4

Cypress gives the user some powerful tools to check for errors, or to see what happens at each step. One of those I have mentioned already: the video evidence; for each feature it runs, Cypress creates an .mp4 file showing all scenarios and which assertion failed. The other, in my opinion even better, feature is that Cypress takes a snapshot of the DOM at each step it performs. This allows you to step back in time and figure out precisely where the error occurred.

With all this in mind, let's return to the start and what test automation should offer:

  • Fast
  • Consistency
  • Quality feedback

Fast. The speed of tests is in the hands of the tester/developer: if you write long, complicated tests, they will take equally long to implement and to run. But all in all, creating a simple test will take you around 5 to 10 minutes from start to finish, and going from creating a project to running your first test can be done in less than an hour. No more installing web drivers in your project or maintaining support for different operating systems across the whole team.

Consistency. Not using Selenium WebDriver but the Chromium engine gives more precise tests. Sure, it is still possible to create flaky tests that succeed or fail at random, but the chance that it's because of an outdated web driver is gone. So how does Cypress prevent flaky tests? We still use selectors that can change during development, resulting in failed tests, but those are not flaky tests. Maybe an element loads slowly and you forgot to wait for it. Cypress has built-in retrying and waiting for elements: it retries finding an element a number of times, after which it says: sorry boss, this element is still not where you told me it would be. This might sound like a hacky way to prevent flaky tests, but it stops users from adding a wait-for-10-seconds command that runs on every test and makes a run take longer than brewing a cup of coffee.
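The idea behind that built-in retry can be pictured as a polling loop, roughly like this (a simplified sketch of the concept, not Cypress's actual implementation):

```javascript
// Simplified sketch of retry-until-timeout polling, the idea behind
// Cypress's element retrying. NOT Cypress's real code: keep re-checking
// a condition until it holds, or fail once the timeout expires.
async function retryUntil(check, { timeout = 4000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  for (;;) {
    if (check()) return true; // condition met: stop retrying
    if (Date.now() >= deadline) {
      // give up, like Cypress failing an assertion after its default timeout
      throw new Error('element is still not where you told me it would be');
    }
    // wait a short interval before polling again
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

Because the loop returns as soon as the condition holds, a fast page costs almost no waiting time, unlike a hard-coded 10-second sleep.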

Quality feedback. Cypress gives instant feedback in development mode and, in test mode, creates .mp4 files that show what happened. And stepping back through history is a blessing we were all waiting for.

The one thing I haven't seen is native support for screenshot-comparison assertions; it is possible, but only with plugins. A full list is available on the Cypress site, found here.
We chose cypress-plugin-snapshots, which allows for both text and image snapshots.
Using .toMatchSnapshot() for text or .toMatchImageSnapshot() for images, you can compare the state of the website at a point in time in history with the current situation. There is an option to allow for some fault tolerance by overriding the function's threshold option; this lets you say the image may be off by 1% or even more. This is handy if a test moves to a certain view but does not always end at exactly the same position.
Cypress - Picture 5

If an image doesn't exist yet, you will see a small SNAPSHOTS UPDATED tab; if the snapshot identifier is already present, you will see either SNAPSHOTS MATCH or a red error message asking you to compare the snapshots.

Cypress - Picture 6

After checking the snapshot, you can either accept the difference as the new baseline or acknowledge the error.

Reporting the outcome of the tests is done by default with "Mochawesome", or the two other built-in options, TeamCity and JUnit. For this project we are using Gherkin files, so the cucumber preprocessor spits out a .json file for each scenario. We therefore added another plugin called "Multiple Cucumber HTML Reporter", which can be found on GitHub. Another well-working option is a plugin for Jenkins, also on GitHub: you only need to point Jenkins to the location where the .json files can be found, and it produces a nice report after each build.

Cypress - Picture 7

For the Multiple Cucumber HTML Reporter some setup is required. We added a separate JS file at cypress/support/multiple-cucumber-html-reporter.js:

const report = require('multiple-cucumber-html-reporter');

report.generate({
    jsonDir: 'cypress/cucumber-json/',
    reportPath: './',
    displayDuration: true,
    hideMetadata: true,
    customData: {
        title: 'Run local',
        data: [
            {label: 'Project', value: 'PoC Cypress'},
            {label: 'Release', value: '1.0.0'}
        ]
    }
});

And in package.json, add the following line under scripts:

"cyreport": "node ./cypress/support/multiple-cucumber-html-reporter.js"

We tried using the prefix "post" on the script, as in "postcy:run", and it works, but only if there are no failing tests; otherwise it will not run afterwards. Another solution was to add an event listener in plugins/index.js on run:end, but this event is not yet available (they are working on it though; more info here).

So now, after each run, we need to start the reporter manually using npm run cyreport. It looks inside the cypress/cucumber-json/ directory, fetches all .json files, and creates a neat little report in an index.html. The downside of this plugin is that in a pipeline where you cannot expose the index.html to the team, it only serves team members on their local machines. That's why I also added the Jenkins plugin: usually a large part of the team, or rather the whole team, has access to Jenkins and can view the report there.

After all these steps you can simply add this project to your build pipeline in a .gitlab-ci.yml or Jenkinsfile and run it as a separate stage, adding more automated tests with each new feature.

In the end, Cypress takes a big step toward being a mature automated-testing framework, in a time when fast-paced development and deployment is more and more the new standard. Though some improvements are still needed, such as better support for event listeners and integrated support for screenshot comparison, overall Cypress is a real next step in test automation, making the implementation and maintenance of tests significantly easier for the team. I see a bright future ahead for testers and developers using automated tests in their product landscape.

Robbert Stevens

Web Consultant