WEBINAR RECAP
CONTRACTOR SAFETY MANAGEMENT: Using Leading Indicators for Better Contractor Outcomes
We had a full house for last week's session on Using Leading Indicators for Better Contractor Outcomes – and based on the engagement in the polls and the Q&A, the topic hit close to home for a lot of you. So here's a full recap, with the key concepts, the frameworks we walked through, and a few things worth sitting with afterward.
If you attended, this is your reference doc. If you missed it, consider this your crash course.
Stop Driving Forward by Looking in the Rear-View Mirror
That's not just a provocative tagline – it's a fair description of how most organizations actually manage contractor safety. We spend enormous energy measuring what already happened: injury rates, citations, recordables, costs. Those are lagging indicators, and they tell you what went wrong after it went wrong.
Leading indicators, by contrast, try to tell you what might happen. The honest framing here is that they're probabilistic, not deterministic. Nobody predicts that an electrician falls through a floor opening on the third level at 11:15 on a Tuesday. But project teams do know when new floors get poured. They know when floor openings need to be barricaded or covered. They can measure whether that's happening consistently and whether the controls are actually in place before someone gets hurt.
That's the whole game with leading indicators: identifying elevated risk states before they produce events.
"Leading indicators attempt to measure what's below the surface before it breaks through."
The iceberg model we showed early in the session made the point well. The visible stuff – injuries, citations, property damage – is the tip. Below the waterline is where the real action is: near misses, unsafe conditions, procedural drift, cultural fractures, system weaknesses. Leading indicators are our best tool for getting at what's below the surface.
One honest caveat: we can only measure what we can see and define. There's a real possibility we're still missing the most dangerous part of the iceberg.
The Validity Problem: Are You Measuring the Right Things?
Here's something worth sitting with: measuring the wrong things precisely doesn't make you safer.
Two examples that come up constantly:
- Training completion rates do not equal competence. Competence is only demonstrated at the work face, where a worker applies knowledge, experience, and skill on an actual task. Ticking the box on a training module tells you the training happened – it doesn't tell you it landed.
- Inspection counts do not equal hazard elimination. You can run a lot of inspections and still not be doing a good job of actually identifying and treating hazards. Volume isn't quality.
Goodhart's Law: When the Measure Becomes the Target
British economist Charles Goodhart put it plainly: when a measure becomes a target, it ceases to be a good measure.
In a safety context, this plays out in two predictable ways:
- If workers are incentivized to report near misses, you get over-reporting of trivial observations while genuine risk signals get lost in the noise.
- If managers are incentivized on OSHA and DART rates, systems get bent to ensure injuries become 'non-recordable'. Most of us have seen this.
The question worth asking at your own organization: which of those two scenarios is more damaging to your safety culture? There's no universally correct answer – but it's a conversation worth having.
The Leading Indicator Value Spectrum (previously Maturity Model)
This was one of the frameworks that generated the most discussion, and for good reason. It gives you a practical lens for evaluating whether your current metrics are just counting activities or building genuine resilience.
The model has five zones:
[Slide: The Five Zones of the Leading Indicator Value Spectrum – Zones 4 and 5 are described in detail below.]
Here's the honest news from the ASSP/JJ Keller construction industry survey of roughly 1,000 professionals conducted in mid-2025: the most commonly tracked leading indicators – equipment inspections, safety meeting participation, observations, JHA completion, near misses – cluster almost entirely in Zones 1 and 2. No representation at Zones 4 or 5.
That doesn't mean the lower-level indicators don't have value. It means there's a real opportunity gap for organizations willing to do more than count activities.
What Zones 4 and 5 Actually Look Like
Zone 4 – Outcome-Informed Decision Making (System and Process Health)
These indicators look at organizational capacity and how quickly the system responds to what it finds. Some examples:
- Time between hazard identification and corrective action – how fast does the contractor respond?
- Percentage of incidents with root cause identified
- Contractor safety prequalification scores – quality and completeness, not just whether they submitted.
- Audit finding recurrence rates – are problems getting scrubbed from the system or showing up over and over?
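To make two of these concrete, here's a minimal sketch of how the first and last indicators might be computed from a corrective-action log. All field names and figures are hypothetical, and a real system would pull this from your EHS platform rather than a hard-coded list:

```python
from datetime import date

# Hypothetical corrective-action log: when a hazard was identified,
# when it was closed out, and the audit finding category.
records = [
    {"identified": date(2025, 3, 1), "closed": date(2025, 3, 4), "finding": "fall protection"},
    {"identified": date(2025, 3, 2), "closed": date(2025, 3, 12), "finding": "housekeeping"},
    {"identified": date(2025, 4, 5), "closed": date(2025, 4, 8), "finding": "fall protection"},
]

# Indicator: mean days between hazard identification and corrective action.
days_to_close = [(r["closed"] - r["identified"]).days for r in records]
mean_days_to_close = sum(days_to_close) / len(days_to_close)

# Indicator: recurrence rate - the share of findings that have appeared before,
# i.e. problems that were "fixed" but keep showing up.
seen, repeats = set(), 0
for r in records:
    if r["finding"] in seen:
        repeats += 1
    seen.add(r["finding"])
recurrence_rate = repeats / len(records)

print(f"Mean days to close: {mean_days_to_close:.1f}")  # prints 5.3
print(f"Recurrence rate: {recurrence_rate:.0%}")        # prints 33%
```

The point of the sketch is that both indicators measure system responsiveness, not activity volume: a shrinking time-to-close and a falling recurrence rate are evidence the system is actually digesting what it finds.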
Zone 5 – Proactive and Adaptive Capacity
Zone 5 represents the leading edge of safety management thinking. The core idea is that the most resilient organizations don't just measure safety activities or even outcomes – they actively develop their capacity to sense change, learn from all operational experience, and adjust before failure occurs.
In practice this can include structured learning reviews following both successes and failures, genuine incorporation of frontline worker input into how work is designed, and deliberate attention to the gap between how work is planned and how it actually gets done in the field.
What does this look like in contractor management? After-action reviews that change something in your own system, not just the contractor's. Prequalification criteria that evolve in response to your own incident history. Kickoff meeting formats that get refined based on what you've learned about where alignment actually breaks down. The principle is learning that feeds back into the system. The specific mechanisms are yours to develop.
The Contractor Path of Risk: Where to Put Your Indicators
Before diving into specific leading indicators for contractor management, we framed where those indicators live. The Contractor Path of Risk maps six phases that every contract relationship moves through – from prequalification through to closeout and the feedback loop back to the beginning.
Each phase creates decisions that either generate risk or reduce it. Each phase has measurement opportunities. We focused on two areas where the gap between what organizations do and what they should do is widest:
Phase 3 β Alignment and Kickoff
This is the phase between contract award and boots on the ground. It includes the kickoff meeting and all the alignment activities that largely determine whether a contractor thrives once mobilized. And it is chronically under-managed.
The familiar scenario is the Checkbox Kickoff: a meeting held on or just before the day of mobilization, where the safety plan gets a cursory sign-off and there's no time left to fix anything before the contractor is on site.
What should happen instead: the kickoff meeting occurs early enough – a week to two weeks out, depending on complexity – that there's actual time to identify gaps in the execution plan and site safety plan, anticipate scope hazards, and fix problems before the contractor ever hits the site.
Decision-makers need to be in the room. The safety plan needs to be reviewed before the meeting, not during it. The meeting should be a discussion about execution married to site standards – with the contractor doing the majority of the talking, demonstrating knowledge of your requirements applied to their scope.
Two suggested leading indicators for this phase:
- How many days before mobilization does your kickoff actually occur?
- How often are submitted safety plans accepted without revision?
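Both of these indicators are straightforward to compute once the dates and review outcomes are recorded. Here's a minimal sketch under hypothetical data – the field names and figures are illustrative, not a prescribed schema:

```python
from datetime import date

# Hypothetical project records: kickoff date, mobilization date, and whether
# the submitted safety plan was accepted without revision.
projects = [
    {"kickoff": date(2025, 5, 1), "mobilization": date(2025, 5, 12), "accepted_as_submitted": False},
    {"kickoff": date(2025, 6, 9), "mobilization": date(2025, 6, 10), "accepted_as_submitted": True},
    {"kickoff": date(2025, 7, 7), "mobilization": date(2025, 7, 21), "accepted_as_submitted": False},
]

# Indicator: days between kickoff and mobilization. More lead time means
# more room to close gaps before boots hit the ground.
lead_times = [(p["mobilization"] - p["kickoff"]).days for p in projects]

# Indicator: share of safety plans accepted without revision. A very high
# rate can mean excellent plans - or a review process that isn't really looking.
accepted_rate = sum(p["accepted_as_submitted"] for p in projects) / len(projects)

print(lead_times)             # prints [11, 1, 14]
print(f"{accepted_rate:.0%}") # prints 33%
```

Note that the second indicator cuts both ways, which is exactly the validity point from earlier in the session: the number only means something once you know whether your reviews have teeth.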
If your kickoff is happening on the day of mobilization or after, you've already lost your ability to be proactive. That's not a judgment – it's a structural reality that's worth naming.
Safety Plan Quality: What You're Actually Looking For
Generic templates with the project name pasted in. Corporate manuals with no project specifics. These are extremely common and they tell you almost nothing useful. The real prize is a plan where the contractor's scope of work is married to your site's safety standards.
When assessing a safety plan for quality, look for these five elements:
- A site-specific hazard identification document
- Specific control measures with named responsible parties – not just 'line management', but an actual role: crew leader, foreman, craft supervisor
- An emergency response plan tailored to this specific work site
- A training matrix showing competencies required for the scope of work, incorporating your site minimums
- A subcontractor plan – subs are identified, and their plans are attached
Phase 6 β The Feedback Loop Nobody Closes
The Contractor Path of Risk is circular for a reason. What you learn at closeout should feed back into prequalification. What you learn during execution should refine your kickoff process. What you learn from incidents should change your contracting requirements.
Almost nobody actually closes this loop.
The closeout and performance review at the end of a project is something nearly every organization agrees is valuable. And almost none of them do it consistently or well. The realities of project work β defined start and end dates, fixed budgets, teams rolling off to the next job β make it genuinely difficult. But the value is real.
Here's a practical challenge. Pull the last five contractor incidents at your organization. For each one, ask:
- Was this contractor fully prequalified, or were they exempted?
- What did the kickoff meeting look like?
- Were there warning signs that could have been seen earlier?
- Did anything change in our process as a result?
If the answer to the last question is consistently no, the loop has not been closed.
Prequalification: The First Line of Defense – and How It Gets Compromised
Our live poll confirmed what most of us already suspected: the most common reason contractors bypass prequalification is cost. Geographic limitations, small contractor situations, and budget pressure round out the list.
Prequalification exemptions aren't always wrong – if you're a geographically dispersed organization working in small towns with limited service providers, that's a real constraint. But it's worth tracking the exemption rate over time and asking whether it's trending in the right direction.
A few leading indicators worth building into your prequalification process:
- Exemption/waiver rate – are you trending up or down? Does leadership understand the implications?
- Performance comparison: prequalified versus exempted contractors – over time this becomes a built-in validation of your prequalification criteria
- Percentage of complete submittals versus incomplete – someone at your organization is spending significant time chasing missing documents; knowing the scale of that gap is useful
- Prequalification submittal data versus actual site performance – if OSHA and DART rates on your site are two or three times what's reported in their submittal, that's a conversation worth having
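The first two indicators on this list can be sketched in a few lines. Everything here is hypothetical – contractor names, rates, and the 200,000-hour normalization are stand-ins for whatever your own records use:

```python
# Hypothetical contractor roster: prequalification status and recordable
# incident rate per 200,000 hours worked (an OSHA-style rate).
contractors = [
    {"name": "A", "prequalified": True,  "incident_rate": 1.2},
    {"name": "B", "prequalified": True,  "incident_rate": 0.8},
    {"name": "C", "prequalified": False, "incident_rate": 3.5},
    {"name": "D", "prequalified": False, "incident_rate": 2.1},
    {"name": "E", "prequalified": True,  "incident_rate": 1.0},
]

# Indicator: exemption/waiver rate - the share of contractors on site that
# bypassed prequalification.
exemption_rate = sum(not c["prequalified"] for c in contractors) / len(contractors)

# Indicator: performance comparison - mean incident rate for prequalified vs.
# exempted contractors. Tracked over time, this validates the criteria themselves.
def mean_rate(group):
    rates = [c["incident_rate"] for c in group]
    return sum(rates) / len(rates)

prequalified = [c for c in contractors if c["prequalified"]]
exempted = [c for c in contractors if not c["prequalified"]]

print(f"Exemption rate: {exemption_rate:.0%}")                  # prints 40%
print(f"Prequalified mean rate: {mean_rate(prequalified):.1f}") # prints 1.0
print(f"Exempted mean rate: {mean_rate(exempted):.1f}")         # prints 2.8
```

A persistent gap between the two group means is exactly the "built-in validation" the bullet above describes: it's evidence your prequalification criteria are selecting for something real.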
High-Value Leading Indicators by Phase
Here's a phase-by-phase summary of the indicators covered in the session. Not a comprehensive list – a starting point.
| Phase | High-Value Leading Indicators |
| --- | --- |
| Prequalification | Exemption/waiver rate (tracks selection discipline); performance comparison of prequalified vs. exempted contractors (validates criteria over time) |
| Kickoff & Mobilization | Structured kickoff meeting quality score (measures effectiveness, not just occurrence); days between plan approval and mobilization (measures whether gaps can realistically be addressed) |
| Alignment | Revision rate of site safety plans before acceptance (assesses plan quality and contractor's openness to input) |
| Execution | Contractor's ability to self-identify hazards (ownership vs. compliance mindset); evidence of subcontractor oversight (prime/GC accountability; controls cascaded down the chain); time from gap identification to resolution (responsiveness and accountability) |
| Closeout | Performance documented for future use (measures whether organizational learning actually occurred) |
Five Takeaways Worth Taking Back to Your Organization
- Leading indicators are a hypothesis, not a guarantee. They require ongoing validation throughout the life cycle of a contractor – and yours.
- Measure quality and effectiveness, not just activity. Counting things is easy. Knowing whether those things are actually working is harder and more valuable.
- Know your Theory of Connection. For every indicator you track, be able to answer: how does this genuinely connect to incident prevention? If you can't answer that, it's a feel-good metric – and tracking it takes time and effort that could be better spent.
- Use a balanced scorecard. Lagging indicators aren't going away, and many clients still require them. The goal is to put them in context alongside leading indicators – not to replace one with the other.
- Build adaptive capacity. The goal is learning and responding, not perfect prediction. If your organization can sense change, extract knowledge from operational experience, and adjust – that's the ballgame.
"The most valuable leading indicators measure how well something was done – not whether it was done."
One Final Challenge
Many contractors' safety metrics reward activity over outcomes. The contractor who submits a thick safety plan, attends every meeting, and completes every checklist may still be a significant field liability if nobody is assessing quality at each step.
Before the next webinar, pick one phase of the Contractor Path of Risk at your organization and ask: what are we actually measuring here? Is it an activity count, a compliance check – or something that tells us whether we're building toward a good outcome?
That's the shift worth making.
Presented by Patrick Robinson, Contractor Safety Management. Resources referenced in this session – including the ASSP/JJ Keller Construction Industry Survey – will be attached to the webinar follow-up materials.
Contractor Safety Insights
News, opinions and best practices for owners, prime and general contractors.