If you are a programmer, there’s a very high chance you have come across the term ‘testing’. For those of us who play video games, it’s something we associate with letting users play your game to find bugs, glitches, etc. before releasing it to the general public. However, this term is not limited to the world of gaming; it can be applied to almost any area.
For the purposes of this article (and my range of experience), I will only be talking about testing in web applications.
As the name implies, it means testing your code: running it against several scenarios and making sure you always get the expected result. Whether it’s verifying that the functions of your software work correctly, or putting a real user in front of your nearly finished application, testing is a (sometimes extra) step that must be taken to make sure your code runs as expected and produces the desired result. If I click ‘Browse’, I don’t want to see a ‘Thank you for your purchase’ message.
Well, how do you test something or someone in real life? You ask a question, you set up a challenge, etc. If the result is what you expect, the test passes; if not, it fails. Depending on what you are testing, you usually have to write additional functions to cover each of the scenarios or aspects you consider necessary (this is where writing testable code comes in, but that’s a whole different topic). Some frameworks can make this task go more smoothly, like RSpec for Ruby, PHPUnit for PHP, or Selenium for automating tests in the browser.
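To make that concrete, here is a minimal sketch of what such test functions might look like in Python. The function under test, `apply_discount`, is a made-up example; a runner such as pytest would discover and execute the `test_*` functions below.

```python
# A made-up function we want to test.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# One test function per scenario we consider necessary.
def test_normal_discount():
    # Expected scenario: 20% off 50.00 is 40.00.
    assert apply_discount(50.00, 20) == 40.00

def test_no_discount():
    assert apply_discount(50.00, 0) == 50.00

def test_invalid_percent_is_rejected():
    # A scenario that should fail loudly, not silently.
    try:
        apply_discount(50.00, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError")
```

Each test asks a question of the code; if every assertion holds, the test passes, and if any does not, it fails.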
Testing is usually seen as a grind. The truth is, hardly anyone wants to do it, but the benefits of testing your code greatly outweigh the risks of skipping it.
Not testing your code can lead to unexpected bugs, and even crashes. Code that has been properly tested, and structured to be testable, is much easier to maintain, improve and scale.
But why would I need to write all that extra code to make sure it works if I already know it does? I wrote it, ran some use cases, and it works flawlessly!
Well, it works now, but let’s say other people (or even YOU after some time) want to add a super cool function to your application. It’s possible that to implement new functions, old ones must be refactored, modified, or even deleted. You might or might not know which functions could be affected, and a new developer most certainly won’t. People would be adding code in the dark, not knowing if the changes they make are critical to another process.
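A regression test is one concrete way to avoid adding code in the dark: it pins down the current behaviour so a later refactor can’t silently change it. A hedged sketch in Python (the `normalize_email` helper is hypothetical):

```python
# Hypothetical helper that other parts of the application depend on.
def normalize_email(email):
    """Lowercase and strip whitespace so lookups match stored values."""
    return email.strip().lower()

# A test written when the helper was first added. If a later refactor
# changes the behaviour (say, someone drops the strip() while "cleaning
# up"), this test fails immediately instead of the bug reaching users.
def test_normalize_email_handles_padding_and_case():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
```

A new developer who breaks this behaviour finds out from the failing test, not from a customer.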
This actually happened to me in my first months on the job. I was adding a new function to an already existing application, and just a few days before the official release, my team discovered that one of my changes made another function unusable. Luckily the bug was caught in time and no customers were affected, but had it been released, it would’ve caused an uproar.
There are plenty of testing methods for any given application:
- Acceptance testing
- Unit testing
- Integration testing
- Performance testing
- Usability testing
among many others. I won’t go into detail, but each one of those types tests something different. Discussing with your development team WHAT and HOW you need to test is very useful.
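To give a feel for the difference between two of these, here is a small sketch in Python of a unit test versus an integration-style test; all the names (`calculate_total`, `checkout`, `FakeGateway`) are made up for illustration.

```python
def calculate_total(items):
    """Pure logic: sum of price * quantity for each (price, qty) pair."""
    return sum(price * qty for price, qty in items)

def checkout(items, payment_gateway):
    """Logic plus a collaborator: computes the total, then charges it."""
    total = calculate_total(items)
    return payment_gateway.charge(total)

# Unit test: exercises calculate_total in complete isolation.
def test_calculate_total():
    assert calculate_total([(10.0, 2), (5.0, 1)]) == 25.0

# Integration-style test: exercises checkout together with a (fake)
# gateway, checking that the pieces work correctly in combination.
class FakeGateway:
    def charge(self, amount):
        self.charged = amount
        return "ok"

def test_checkout_charges_the_total():
    gateway = FakeGateway()
    assert checkout([(10.0, 2), (5.0, 1)], gateway) == "ok"
    assert gateway.charged == 25.0
```

The unit test pins down one function’s logic; the integration-style test checks that the parts cooperate, which is exactly the kind of thing worth agreeing on with your team.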
I hope that after reading this, you can see tests in a new light. I used to be one of those people who didn’t really care for testing and saw it as unnecessary. But it’s a really useful thing to do for your code. As they say, ‘do something today that your future self will thank you for’.
- Testing means running your code against a scenario to make sure it works as expected.
- Testing might seem like a chore, but in the long run it’s the better option!
- Tests keep new developers aware of whether they are changing the behaviour of existing functions.
- Testable code is much easier to maintain and upgrade.
- Testing your code before committing definitive changes to the source saves a lot of time (and money!).
- You should discuss with your team which tests you need and how many.