Create your own luck. The four organizational factors that decide whether a test team succeeds.
Most testing success is mistaken for luck — a cooperative dev team, an early engagement, shared tools. It isn't luck. It's four organizational factors you can engineer into almost any project. This talk names them, shows the good and bad versions of each from real engagements, and gives test leads a plan to move their next project from the bad version to the good one.
Abstract
There is no luck in testing.
Conference attendance is a great way to pick up new ideas for your test efforts. Most of us walk away from a talk thinking "their situation is just like ours — we can do that." Sometimes we're right. A similar tech stack, similar test activities, a similar application domain — those contextual factors really do make it easier to adopt a new practice. But they are not where success comes from.
Look at every successful test team you have ever been on and every failed one you have ever seen. The difference is almost never the technology or the domain. It is four organizational factors: clear roles and interfaces, early involvement, shared test artifacts, and a project culture that actually values what testers do. Every one of those is context-independent. Every one is something a test lead can influence. None of it is luck.
This talk walks through the four factors, names the anti-patterns that kill each of them, and shows what the good version looks like from eight anonymized engagements. The point is not to copy any of the engagements — the point is to recognize the pattern on your project before the first bug report lands, because by then it is usually too late.
“There's no luck involved at all — just careful planning, calm and reasoned persuasion, and lots of attention to organizational details.”
— Rex Black, Create Your Own Luck
Outline
What the talk covers, in order.
Clearly defined roles and interfaces
Test teams depend on everyone: the SUT comes from developers, the test environment from the sysadmin team or the NOC, bug-report hand-offs go to developers, and status reports go up to project management. When those interfaces are undefined, chaos follows — missed hand-offs, blame games, and an endless cycle of test-team firefighting.
- Define roles and interfaces in the test plan.
- Work with the project team to build support for those roles before they're needed.
- Reinforce the boundaries tactfully when hand-offs come due. Do not absorb responsibility that belongs somewhere else.
Early test-team involvement
The cost of fixing a bug rises throughout the project. Some test-development tasks take a long time — quality risk analysis, test tools, test harnesses, environment build-out. Relationships are easier to build under low stress than under deadline pressure. All of that argues for getting the test team into the room on day one.
- Promote awareness early: brief the project on the advantages of early test involvement.
- Start test-team work on day one of the project — not at integration, not at system test.
- Have the test team review requirements and design specifications. Finding errors there is the cheapest finding you will ever do.
- Use the early window to analyze quality risks, develop and sell an estimate, write the test plan, and build the test context.
Shared test tools, cases, and data
Test tools, cases, and data take real effort. When test artifacts built for unit or component testing can be leveraged at integration and system test, that effort compounds. When they can't — because each team built their own harness in isolation — you will rebuild everything, and the rebuild never quite catches up.
- Have test engineers work alongside developers on unit and component tests.
- Design tools, cases, and data for re-use from the start. Retrofitting is almost always more expensive than designing for it.
- Build automated harnesses out of commercial, open-source, or custom frameworks — whichever your team can actually maintain.
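One way to make re-use concrete is to separate the case table from the adapter that reaches the SUT. The sketch below assumes a toy expression-evaluating component; all names (`CASES`, `run_cases`, `unit_adapter`) are illustrative, not from the talk. The point is the shape: one shared set of cases, built once, driven through whatever adapter each test level needs.

```python
# Sketch of designing test cases for re-use across test levels.
# Assumes the SUT can be reached through more than one adapter: a
# direct function call at unit level, an API client or CLI wrapper
# at integration level. All names here are illustrative.
from typing import Callable

# One shared table of cases: (input, expected). Built once, used at
# every test level instead of being rebuilt per team.
CASES = [("2+2", 4), ("10/2", 5), ("3*3", 9)]

def run_cases(evaluate: Callable[[str], int]) -> list[str]:
    """Run the shared cases through any adapter; return failure messages."""
    failures = []
    for expr, expected in CASES:
        actual = evaluate(expr)
        if actual != expected:
            failures.append(f"{expr}: expected {expected}, got {actual}")
    return failures

# Unit level: call the implementation directly.
def unit_adapter(expr: str) -> int:
    return int(eval(expr))  # stand-in for the real component

# An integration-level adapter would wrap an HTTP client or CLI with
# the same signature, so the case table and runner are reused unchanged.
assert run_cases(unit_adapter) == []
```

The design choice being illustrated: the cost of the case table is paid once, and each test level only pays for its adapter.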
A project culture that values testing
Aligning the test effort with the project's critical quality risks requires teamwork. Test resources — time and money — are almost always insufficient. Expectations about what testing actually delivers are almost always unclear. A test-friendly culture promotes "important" testing, adequate resources, and the use of test results for project tracking. It is not optional.
- Explain testing benefits to managers and executives in their language — risk, cost, schedule, and quality, not test-case counts.
- Distinguish assessing quality from assuring it. You are measuring the product, not insuring it.
- Involve the right stakeholders in test design, estimation, planning, and development.
- Apply quality risk management techniques (FMEA, ISO/IEC 9126) so the priority argument is data-driven, not anecdotal.
Key takeaways
Six things to remember.
The four factors are context-independent
Your stack, your domain, and your team size do not change which four factors decide whether the test team succeeds. Get those right, almost anything works. Get them wrong, nothing works.
Clarity is a test-lead job
No project manager is going to write your role definitions for you. Do it in the test plan, sell it to the project, and reinforce it at every hand-off.
Get in on day one, or lose six weeks
Every week of late involvement costs you roughly a week of missed leverage. Requirements review, quality risk analysis, and early test environment work are not luxuries.
Shared artifacts beat heroic rebuilds
When development and test build test artifacts together, the re-use compounds. When they don't, you rebuild everything at integration, and the rebuild never catches up.
Sell the testing mission
A test-friendly culture is sold, not given. Executives need the testing pitch in their language: risk, cost, schedule, quality. Not case counts.
Plan, persuade, repeat
None of this is luck. It is careful planning, calm and reasoned persuasion, and attention to organizational details — repeated on every project, every release, every team.
Worked examples
Four factors, two engagements each: one that got it wrong, one that got it right.
Clearly defined roles — two engagements
Same factor, two engagements. In the undefined-role version, the test manager absorbed an entire build-engineering role by default and still got blamed for the test results.
No one on the project was responsible for making installable releases from source. The test manager accepted the role by default because she needed builds for system test. She figured out how to produce test and customer releases, but the added workload made planned testing impossible. The test effort was seen as a failure by peers, management, and executives — although the real failure was upstream.
Another test manager defined the release and environment-management processes for client, server, and hardware components up front. Builds arrived on time, as promised. Test-release installation problems were treated as mutual problems rather than finger-pointed. No unanticipated lab downtime occurred due to unapproved reconfiguration.
Early involvement — two engagements
Timing of the first test engagement decides whether the test team contributes strategically or is seen as a distraction.
The test manager spent his time dealing with previous maintenance releases while the test team waited. He spent what free time he had haranguing the project about bad processes. Testers did not engage with the new release until modules were written and integration was in progress. The window for creating an appropriate test context — including automation — was gone. The test team was unable to contribute fully and was seen as an extraneous distraction by the rest of the project.
Another test manager allocated two test engineers at the beginning of development. They reviewed early requirements and design specifications and found roughly 100 errors and omissions before a single module shipped. The full testing context — quality risk analysis, test plan, team, tools, cases, data, environment — was ready on day one of integration test. The test team was seen as a major player and brought credible quality assessments into every status meeting.
Shared artifacts — two engagements
When development and test share the tool-building effort, the re-use compounds. When they don't, you rebuild — and you never catch up.
One development team built a functional unit-test harness with no test participation. The harness relied on specialized tools tied to one specific environment and could not be reused downstream. A second development team built a load generator over the test engineers' objections. The load generator was worthless for performance testing because it was too intrusive. The test team had to recreate both tools — over a person-year of duplicated effort.
A development engineer and a test engineer adapted an automated test harness together to smoke-test nightly builds. The harness submitted 200+ queries against the SUT (a multi-OS, multi-DB reporting tool), compared results to baselines, and emailed a report to both teams. Regressions detected during system test were greatly reduced, test cycles shortened, and monthly maintenance releases became possible.
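The nightly loop described above can be sketched in a few lines. This is a minimal stand-in, not the actual harness: `run_query`, the query set, and the baseline store are hypothetical, and the real harness emailed the report rather than returning it.

```python
# Minimal sketch of a nightly smoke-test loop: run each query against
# the SUT, diff the result against a stored baseline, and summarize.
# run_query, queries, and baselines are hypothetical stand-ins for the
# real reporting tool and its baseline store.

def smoke_test(queries: dict, run_query, baselines: dict) -> str:
    """Run each query, compare to its baseline, return a report line."""
    passed, failed = [], []
    for qid, sql in queries.items():
        actual = run_query(sql)
        if actual == baselines.get(qid):
            passed.append(qid)
        else:
            failed.append(qid)
    status = "PASS" if not failed else "FAIL"
    return (f"Nightly smoke test: {status} "
            f"({len(passed)} passed, {len(failed)} failed: {failed})")

# Example run with a fake query executor; in the real harness this
# report string went out by email to both teams.
queries = {"q1": "SELECT count(*) FROM orders"}
baselines = {"q1": [(42,)]}
report = smoke_test(queries, lambda sql: [(42,)], baselines)
```

The value is less in the diffing than in the nightly cadence: a regression introduced on Tuesday is a Wednesday-morning email, not a system-test surprise.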
Test-friendly culture — two engagements
Culture does not appear. It is sold, project by project, by the test lead.
A client wanted a test engineer to come in, plan the test process, create a test context, build automated harnesses for web and legacy apps, and train the development team — at half her usual rate, in six weeks. One executive referred to his test manager as the "Quality Assurance manager" and expected testing to make quality problems go away. When the project team misunderstands what testing is, the test team cannot help but fail.
Another test manager clarified expectations and project context as the first step. He used quality risk management with the project team to determine scope. The test team worked with developers to build test tools, data, and cases. The team helped marketing, customer support, and development define "correct." The test dashboard became the project's key quality indicator. Test exit criteria became the ship criteria.
Closing
Lucky testers work on projects where they have clearly defined roles and hand-offs, get involved early, work with developers on re-usable test tools, cases, and data, and contribute clearly understood quality information to a test-friendly project team.
You can be lucky, too — because really, there is no luck involved. Just careful planning, calm and reasoned persuasion, and lots of attention to organizational details.
Want this talk delivered in-house?
Rex Black, Inc. delivers every talk on this site as a live workshop, a keynote, or a conference session. Tailored to your stack, your team, and your timeline.