Sunday, December 21, 2014

Your tests take too long to run

On a big application with lots of acceptance tests, integration tests, unit tests, performance tests, whatever kind of tests you want, you can sometimes wait more than an hour to get your results.

YOU SHOULDN'T WAIT

There is no excuse. Whatever your change is, it doesn't justify wasting one hour of your precious time.
Sometimes people say "During this time, I can use my brain to think about my next tasks." You know what? If I have to queue for one hour at the checkout, even if the cashier allows me to think about what I'll do when I get back home, I just don't care, and I want to burn the checkout, the cashier and the shop too.

SO BURN YOUR TESTS

Agree? You don't want that anymore? So, what next?

1. Refactor your tests

Profiling tools are not only for your running application. For instance, I gained 50% of the run time in my current project by changing a simple configuration in jBehave's step retrieval. It wouldn't have happened without profiling.


2. Use crowd testing!!!


You are not alone on your journey: there are people who can help you, namely your colleagues and your continuous integration server.

Sounds weird? Let me explain:
  • 90% of the time you know which tests may have been impacted by your change. (You might want to jump to the next chapter at this point, but don't do it unless you want to come back here later with it all scrambled in your mind)
  • Run those tests on your machine. (It should take less than 5 minutes)
  • Run your new crazy maven/make/ant/gradle/yourOwnStuffThatIsSoCool goal that runs only a chunk of the tests. (If you are 7 in your team, run only 1/5 of all tests, for instance)
  • Ask all your colleagues to do the same.
  • Grab the results, fix your tests.
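The chunk-splitting goal above can be sketched in a few lines. Here is a minimal, hypothetical Python version that assigns tests to chunks by hashing their names, so every machine computes the same split without talking to anyone (names and numbers are made up):

```python
import hashlib

def chunk_of(tests, num_chunks, chunk_index):
    """Return the subset of tests belonging to one chunk.

    Hashing the test name (instead of relying on list order) keeps the
    assignment stable even when tests are added or removed elsewhere.
    """
    def bucket(name):
        return int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16) % num_chunks
    return [t for t in tests if bucket(t) == chunk_index]

tests = ["CheckoutTest", "CartTest", "LoginTest", "SearchTest", "PaymentTest"]
# The chunks together cover the whole suite exactly once.
all_chunks = [chunk_of(tests, 5, i) for i in range(5)]
assert sorted(t for c in all_chunks for t in c) == sorted(tests)
```

Pass the chunk index as a parameter of your build goal and each colleague runs a different index.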

How to do that? That's up to you, but here are some ideas:

Randomly chosen chunk

If you and your colleagues run tests quite often, it may be reasonable to choose your tests randomly and hope to hit a failure fairly soon.
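A sketch of the random strategy, assuming each build simply samples a fraction of the suite (the test names here are invented):

```python
import random

def random_chunk(tests, fraction):
    """Pick a random subset of the suite to run in this build."""
    size = max(1, round(len(tests) * fraction))
    return random.sample(tests, size)

suite = [f"test_{i}" for i in range(100)]
chunk = random_chunk(suite, 0.1)
print(len(chunk))  # 10: one tenth of the suite per build
```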

Here are some examples with different chunk sizes, showing the chance that a given test has run at least once after a number of builds.


chunk size 0.1

Number of builds       0    5   10   15   20   30
Chance a test is run   0%  41%  65%  79%  88%  96%

By running only 1/10 of your tests, you have to run your build 50 times to be sure (>99%) that all tests have run. If your team of 7 runs the tests 5 times a day, you get a >96% chance that the failing test will be run within one day.
On top of that, your tests now virtually take 5 minutes to run.


chunk size 0.2

Number of builds       0    5   10   15   20   30
Chance a test is run   0%  67%  89%  96%  99%  100%


By running only 1/5 of your tests, you have to run your build about 20 times to be almost sure (≈99%) that all tests have run.
On top of that, your tests now virtually take 10 minutes to run.

chunk size 0.5

Number of builds       0    5   10   15   20   30
Chance a test is run   0%  97%  100%  100%  100%  100%

By running only 50% of your tests, you only have to run your build 7 times to be sure (>99%) that all tests have run.
On top of that, your tests now virtually take 30 minutes to run.


The good point of this strategy is that, whatever frameworks you use, it should be easy to implement.
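For the record, the numbers in the tables above follow from a single formula: with a chunk fraction p, the chance that a given test is never picked in n independent builds is (1-p)^n, so the chance it has run at least once is 1-(1-p)^n. A quick sanity check:

```python
def chance_run(p, n):
    """Probability that a given test has run at least once after n builds."""
    return 1 - (1 - p) ** n

for p in (0.1, 0.2, 0.5):
    row = [round(chance_run(p, n) * 100) for n in (0, 5, 10, 15, 20, 30)]
    print(f"chunk size {p}: {row}")
# chunk size 0.1 gives [0, 41, 65, 79, 88, 96], matching the first table
```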


Round Robin chunk

Better, but less easy to put in place, is to choose your chunk with an incremental system: all the tests in the chunk you run will be different from those of the next build. It is not always easy to implement since it has to be stateful. (Some ideas later, but you have to keep reading)
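A minimal stateful sketch, where the rotation index is persisted in a small local file between builds (the file name and format are invented for the example):

```python
import json
from pathlib import Path

STATE_FILE = Path("rotation_state.json")  # hypothetical state location

def next_chunk(tests, num_chunks):
    """Return a different chunk on each build by rotating a stored index."""
    index = 0
    if STATE_FILE.exists():
        index = json.loads(STATE_FILE.read_text())["index"]
    size = -(-len(tests) // num_chunks)  # ceiling division
    STATE_FILE.write_text(json.dumps({"index": (index + 1) % num_chunks}))
    return tests[index * size:(index + 1) * size]

STATE_FILE.unlink(missing_ok=True)
suite = [f"test_{i}" for i in range(10)]
seen = []
for _ in range(5):  # after num_chunks builds, every test has run exactly once
    seen.extend(next_chunk(suite, 5))
assert sorted(seen) == sorted(suite)
```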


Almost Round Robin chunk
 
Each user of your system builds a predefined chunk of tests, but here the idea is to ensure that only x% of the team is required to run all the tests. (Yes, some people dare to be sick in my team)

It means we run a few more tests than our base chunk. For instance, with a team of 5, if I want all tests to run even when 1 person is absent, I have to run 1/5 of the tests plus 1/4 of the tests to cover the other colleagues' chunks, which makes 45% of all tests.

Here is a reminder table (fraction of the tests each member must run):




                                  Number of colleagues
Colleagues potentially dead      4     5     6     7     8     9    10
             1                  58%   45%   37%   31%   27%   24%   21%
             2                  92%   70%   57%   48%   41%   36%   32%
             3                        95%   77%   64%   55%   49%   43%
             4                              97%   81%   70%   61%   54%
             5                                    98%   84%   74%   66%
             6                                          98%   86%   77%
             7                                                99%   88%
             8                                                      99%

So it becomes quite interesting and worth investigating with a large team.
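Reading the table, each member always runs their own 1/n of the suite plus k/(n-1) more to cover up to k absent colleagues; the percentages above can be reproduced with that formula (my reading of the scheme, sketched below):

```python
def share(n, k):
    """Fraction of the tests each of n members runs so the whole suite
    is still covered with up to k members absent."""
    return 1 / n + k / (n - 1)

print(round(share(5, 1) * 100))   # 45, the example above
print(round(share(10, 8) * 100))  # 99, bottom-right cell of the table
```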

Also, one thing to understand is that you have to assign an order to each member of your team and store it somewhere. (As a bash variable, for instance)




Continuous distributed testing

We can also think about having a daemon on each machine that runs a chunk of tests and communicates with the others, to get full test coverage in an acceptable time.

You can even think about having a predefined time budget and time-limited chunks. (A chunk should take less than 5 minutes)

Most of the time you can even use your own VCS and commit one single little file that tells the others what needs to be tested in the next build.
Also, using your VCS, you can easily verify that all the tests have been run for a given commit.
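The per-commit verification can be as simple as collecting, for each commit, the chunk indices that reported a run (in practice from small files committed next to the code; the layout below is invented):

```python
def all_chunks_run(reported, num_chunks):
    """Tell, per commit, whether every chunk has reported a test run."""
    return {commit: set(chunks) == set(range(num_chunks))
            for commit, chunks in reported.items()}

# Hypothetical reports gathered from the VCS for two commits.
reports = {"abc123": [0, 1, 2], "def456": [0, 1, 2, 3]}
print(all_chunks_run(reports, 4))
# {'abc123': False, 'def456': True}
```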

Although, even if it is clearly cool, this solution is not easy to implement once you consider that you want the test processes to run in parallel.

3. Test in priority what needs to be tested

Challenge your project: is it really useful to always run all the tests? Is it worth it? What are the benefits versus the cost? Don't be dogmatic.

If you want to segregate your tests, there are multiple strategies:
- Isolate business domains in your tests (annotations or other language artifacts may help you)
- Create separate goals in your build tool to run only subsets of tests
- Link your test coverage tool with your build and VCS tools. Unfortunately it's your job to do that here; the idea is that if you modify one piece of code, it should, most of the time, only impact newly created tests or tests that previously covered the chunk of code you modified.
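That last strategy boils down to keeping a map from source files to the tests that cover them. A toy sketch, where the map is hard-coded but would in reality be produced by your coverage tool:

```python
# Hypothetical coverage map: source file -> tests that execute it.
COVERAGE = {
    "cart.py": {"CartTest", "CheckoutTest"},
    "payment.py": {"PaymentTest", "CheckoutTest"},
    "search.py": {"SearchTest"},
}

def impacted_tests(changed_files):
    """Tests worth running for a change set, according to the coverage map."""
    tests = set()
    for f in changed_files:
        tests |= COVERAGE.get(f, set())
    return tests

print(sorted(impacted_tests(["payment.py"])))
# ['CheckoutTest', 'PaymentTest']
```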



