I recently read the book How Google Tests Software by James A. Whittaker, Jason Arbon, and Jeff Carollo. You might say this is a few years of test experience and research at Google boiled down to a short list of bullet points:
Three engineering positions are described in this book:
- SWE (Software Engineer): Engineers products.
· Time spent coding: 100%. (“Feature code”)
· Automated tests they create: Lots of small tests.
· Focus: Business goals.
· Main goal: Product innovation.
- SET (Software Engineer in Test): Engineers Test frameworks.
· Time spent coding: 100%. (“Test code”)
· Automated tests they create: Some small tests, many medium tests. Main challenge is in test framework design and integration with the product.
· Focus: Developers. Plan and create test frameworks that will aid the development process.
· Main Goal: Product testability.
· The technical requirements for SWEs and SETs are basically the same; they work in parallel much of the time.
- TE (Test Engineer): Customer experience tester.
· Time spent coding: 0-100%. (“User Experience code”)
· Automated tests they create: End-to-end tests. Manual testing and various forms of exploratory testing are also possibilities.
· Focus: Customers.
· Main Goal: Product usability.
· Challenges: Testing the customer experience at scale. Chrome is an example of a product that required many automated virtual instances, crawling the web at a scale similar to the way the actual search engine indexer does.
· TEs can get away with slightly less technical knowledge than SWEs or SETs.
· Crowdsourcing (testing done by Google-loving users) is an option when security or intellectual property isn’t a concern.
- Testers are nomadic at Google. Development teams have to justify their need for an SET or TE. This usually happens when a product is endorsed as something that makes business sense to invest in. The riskier the product, or part of a product, the more likely testing resources will be allocated to test it.
- There are far fewer testers at Google than SWEs. This is partially because a large percentage of company resources are allocated to creating new products that may never be embraced as viable products, and thus will never need testers.
- Google encourages all software engineers to move to a different team every 18 months. Teams are encouraged to “compete” for interest from SWEs.
- All resources, whether for product or test development, are supposed to be shareable (and reusable) within a common repository. Test and product software engineers are rewarded for creating software that is reusable by multiple products.
- Google pushes a “Test Certification” for SWEs. It’s a step-by-step program to build SWEs’ test knowledge and ensure that products incorporate testing at every level.
- Every major aspect of testing at Google has been incorporated into a centralized, shareable, software-driven design, often because someone built it in their “20% time”. Examples: bug reporting and tracking, test planning (breaking down test plans according to ACC – Attributes, Components and Capabilities), test case creation, and Test Analytics (similar to Webmaster Tools, but for testing).
The general message of this book is: “*This* is the ideal model for testing, and Google does its best to measure up to it most of the time.”
Interesting things I heard about thanks to this book:
- Protocol Buffers
- WebDriver, Google’s contribution to the Selenium project (now included with Selenium)
- 10 Minute Test Plans. The goal here is to create an entire test plan in 10 minutes, to avoid producing lengthy documentation that will eventually be neglected and discarded anyway. This brings