Evidence for Automated Test Efficiency in Web / Mobile Services Development

This post gives a simple mathematical argument that creating automated tests in web-service projects is significantly more efficient than testing manually.

First, I define some constants.

a = (hours to create one automated test)

m = (hours to run one manual test)

Here I assume that creating an automated test takes 10 times as long as running the same test manually. (In practice I don't think it takes that long, but let's use a deliberately conservative figure.)

a = 10 × m ... (1)

I also define:

t = (total number of times the test is run over the software's lifetime)

Then manual testing takes (t × m) hours in total. ... (2)

Running an automated test, on the other hand, takes almost no additional time.

So from (2), you should create the automated test if

a < (t × m) ... (3)

Substituting (1) into (3):

(10 × m) < (t × m)

Dividing both sides by m, you should create the automated test if:

10 < t

So under these assumptions, automation pays off as soon as the test has to be run more than 10 times.
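
To make the break-even point concrete, here is a minimal Python sketch of inequality (3). The function name and the 30-minutes-per-manual-run figure are hypothetical illustrations, not part of the original argument:

```python
def should_automate(creation_hours: float, manual_hours: float, runs: int) -> bool:
    """Return True if writing the automated test is cheaper than
    running the same test manually `runs` times (inequality (3))."""
    return creation_hours < runs * manual_hours

# Hypothetical numbers: 30 minutes per manual run, a = 10 * m as in (1).
m = 0.5
a = 10 * m
for t in (5, 10, 11, 52):
    print(t, should_automate(a, m, t))
# Prints: 5 False, 10 False, 11 True, 52 True -> automation wins once t > 10.
```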

Many web-service and mobile-service projects use continuous integration, so they release frequently. At large-scale companies such as GitHub, I hear they release several times a day.

In principle, you have to test every function before each release.

Our project is run by a small team, so we may not release every day. Consider instead a project that releases once a week and runs its full test suite before each release. Then t grows by one per week, so automation pays off once the project runs for more than 10 weeks.
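
The same arithmetic can be phrased in calendar time. A small sketch under the same assumptions, where `break_even_weeks` and the five-releases-a-week cadence are made up for illustration:

```python
def break_even_weeks(cost_ratio: float, releases_per_week: float) -> float:
    """Weeks until cumulative manual-test time exceeds the cost of
    writing the automated test, assuming a = cost_ratio * m and one
    full test pass per release."""
    return cost_ratio / releases_per_week

print(break_even_weeks(10, 1))  # weekly releases -> 10.0 weeks
print(break_even_weeks(10, 5))  # five releases a week -> 2.0 weeks
```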

Almost no web service shuts down within 10 weeks (unless it fails outright), so creating automated tests is significantly more efficient.

This argument assumes a project that will be maintained long term, such as a web service. It may not hold for a contracted project that is delivered and released only once.