endo IT innovates

Test automation framework for automated test data generation and test execution monitoring

Automation is a great way to free up valuable human resources from demanding and time-consuming repetitive tasks, such as software testing. It's a win for everyone: testers can work on more challenging tasks, while testing becomes faster and more reliable, reducing time-to-market and software costs.

Business operations grow more complex, and so does your software stack. Every new feature brings new test cases into the testing ecosystem. Maintaining software quality demands an ever-growing set of regression tests, consisting of hundreds of test cases, each simulating a real-life business scenario.

Completing a test case typically consists of the following steps:

Flowchart showing main tasks of a test case: planning, test data preparation, test step execution and test evaluation, where the last three steps are to be repeated in every release. The chart also shows the outputs of each task, which is the test case for the planning step, input data for the test data preparation step, test result for the test execution step and test report for the evaluation step.

Where is the difference?

The table below sums up the tasks required to conduct testing for each test case manually or in an automated way.

One-time tasks
  Manual:
    • Plan test steps
    • Define input data criteria
  Automated:
    • Plan test steps
    • Define input data criteria
    • Implement automated test case
Repeating tasks
  Manual:
    • Prepare test data
    • Execute test steps
    • Evaluate test case result
    • Examine test report
  Automated:
    • Start automated test
    • Examine test report

The difference is obvious: the laborious tasks of test data preparation and test case execution/evaluation are perfect candidates for automation. Once the automation is implemented, it can be run as many times as needed, with far less time and human labour than full manual testing.

Preparing test data

Finding or creating the necessary test data can be really time-consuming. Imagine a scenario where a test case requires contacts in some special order. Different test cases also need their own specific test data, and the task of finding or creating it repeats with every test cycle.
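As a minimal sketch of what generating such data might look like (the field names and the alphabetical-order requirement are illustrative, not the framework's actual API):

```python
import random
import string

def make_contact(last_name: str) -> dict:
    """Create one synthetic contact record (fields are illustrative)."""
    return {
        "first_name": random.choice(["Anna", "Bela", "Cecil", "Dora"]),
        "last_name": last_name,
        "email": f"{last_name.lower()}@example.com",
    }

def contacts_in_alphabetical_order(n: int) -> list:
    """Generate n contacts whose last names sort alphabetically,
    e.g. for a test case that expects a specially ordered result set."""
    last_names = sorted(
        "".join(random.choices(string.ascii_uppercase, k=6)) for _ in range(n)
    )
    return [make_contact(name) for name in last_names]

contacts = contacts_in_alphabetical_order(5)
```

Generating data like this on demand, instead of hunting for it in the database, is exactly the repetitive step the framework takes off the tester's plate.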

Meme for regression testing, slide 1: a happy tester trying to reach for the test case. Meme for regression testing, slide 2: a now sweating tester held back by a friendly character saying 'find suitable data'.

Our test automation framework has built-in connectors to automate this process. Test data preparation with these connectors is easy: they are no-code or low-code implementations and require neither deep coding nor Siebel expertise.

Building test cases

Test cases are the basic building blocks of automated testing. Test cases are used to create test suites, which can cover an entire business process. Finally, test plans are built from test suites to cover part of, or even all of, the regression test.

Hierarchy of objects for the test automation framework: test plan sits at the top, based on test suites, which are composed of one or more test cases.
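This hierarchy could be modeled as a simple object structure; the sketch below is illustrative only (the class and field names are assumptions, not the framework's data model):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str

@dataclass
class TestSuite:
    name: str
    cases: list  # one or more TestCase objects

@dataclass
class TestPlan:
    name: str
    suites: list  # one or more TestSuite objects

# A hypothetical plan: two suites covering two business processes.
plan = TestPlan("Release regression", [
    TestSuite("Order capture", [TestCase("Create order"), TestCase("Cancel order")]),
    TestSuite("Contact management", [TestCase("Merge duplicates")]),
])

total_cases = sum(len(suite.cases) for suite in plan.suites)
```

The point of the hierarchy is aggregation: executing one test plan fans out to every suite, and every case, beneath it.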

Once the test cases of a test suite are defined, all of the input data can be generated with a single click or a REST interface call. The generated metadata is accessible via a REST call from the testing software (e.g. SOAP UI), and the interface calls and test case assertions can be customized within the testing application. A message schema can be defined with template placeholders, which are populated from the generated input data, so the request message does not have to be assembled manually. With this request message the given interface can be tested, and both the request and the response message can be saved back to the test automation framework via a single REST call.
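The placeholder-substitution step can be sketched in a few lines; the schema, placeholder names, and input data below are invented for illustration, not taken from the framework:

```python
import string

# Hypothetical message schema with template placeholders. In practice the
# schema would be defined in the testing application and the values would
# come from the framework's generated input data.
schema = """<CreateContact>
  <FirstName>${first_name}</FirstName>
  <LastName>${last_name}</LastName>
</CreateContact>"""

input_data = {"first_name": "Anna", "last_name": "Kovacs"}

# Populate the placeholders to build the request message automatically,
# so it does not have to be written by hand for every test case.
request_message = string.Template(schema).substitute(input_data)
```

The same mechanism works for any interface: one schema per message type, reused across every test case that exercises that interface.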

Running tests

The beauty of automated testing lies in the simplicity of test execution:

  1. Generate input data with one click or a REST call
  2. Start the testing flow from a third party software
  3. Watch & evaluate the test execution report

It is that simple, and repeating it for every release is just as easy.
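The three steps above could be scripted, for instance, like this. Every function here is a stub standing in for a REST call to the framework or the third-party tool; all names are hypothetical:

```python
def generate_input_data(test_suite: str) -> dict:
    """Stub for the framework's input-data generation REST call."""
    return {"suite": test_suite, "records": 5}

def start_test_flow(data: dict) -> str:
    """Stub for kicking off the flow from a third-party testing tool."""
    return f"execution-{data['suite']}"

def fetch_report(execution_id: str) -> dict:
    """Stub for retrieving the execution report."""
    return {"id": execution_id, "passed": 5, "failed": 0}

def run_regression(suites: list) -> list:
    """Run every suite through the three-step loop and collect reports."""
    reports = []
    for suite in suites:
        data = generate_input_data(suite)       # step 1
        execution_id = start_test_flow(data)    # step 2
        reports.append(fetch_report(execution_id))  # step 3
    return reports

reports = run_regression(["Order capture", "Billing"])
```

Because each step is a single call, the whole regression can be chained into one scheduled job.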

Reporting

Another advantage of automated testing, with test results stored in Siebel, is the ability to inspect the outcome in real time in a clean user interface.

Screenshot of a test report from the test automation framework demo environment. The left side lists test plan executions, both ongoing and previous. For each test plan, summary diagrams display the running process as well as the ratio of failed and successful test suites and test cases within the plan. The right side displays the details of the test suites and individual test cases for the test plan selected on the left. Every entity uses colorful icons to represent its status (running, failed, passed), and the display is updated in real time. Response times are also shown for test cases.

Since test plan executions are stored historically in the database, the results can be checked at any time, along with data such as the request and response messages. Test managers get an overview of overall system health, such as changes in response time between releases.
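A release-over-release response-time comparison, for example, is straightforward to derive from such historical data. The figures and release labels below are made up for illustration:

```python
# Hypothetical historical response times (ms), keyed by release; in the
# framework these would come from the stored test plan executions.
history = {
    "24.1": {"Create order": 180, "Cancel order": 95},
    "24.2": {"Create order": 210, "Cancel order": 90},
}

def response_time_delta(history: dict, old: str, new: str) -> dict:
    """Per-test-case response-time change between two releases (ms);
    positive values mean the new release got slower."""
    return {
        case: history[new][case] - history[old][case]
        for case in history[old]
        if case in history[new]
    }

delta = response_time_delta(history, "24.1", "24.2")
# e.g. 'Create order' slowed by 30 ms, 'Cancel order' sped up by 5 ms
```

A report like this makes a performance regression visible before it reaches production.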

Let us arrange a demo for you...

...and put an end to the manual testing grind with automated testing.