Measuring Success in Disability Employment Programs: KPIs and Benchmarks

Top TLDR:

Measuring success in disability employment programs requires balanced KPIs and benchmarks across access, process quality, placement, retention, wages, advancement, worker satisfaction, and equity. Programs that track only placement miss whether jobs were sustainable, well-paid, or matched to participant goals. Employers should also track representation, accommodation response times, and retention parity. In Greenville, SC, Kintsugi Consulting helps organizations build measurement frameworks alongside broader disability inclusion practices.

Why Measurement Matters — and What's Usually Broken About It

Most disability employment programs track some form of success, but the measurement frameworks in wide use today tend to miss what actually matters. The dominant metric in vocational rehabilitation, supported employment, and many employer-led disability hiring initiatives is placement — did the person get a job? — with limited attention to whether the job was a good fit, whether the person stayed, whether wages were adequate, whether advancement happened, and whether the people being served were representative of the population who needed services.

That narrow focus on placement has consequences. It incentivizes programs to place clients in low-wage, high-turnover jobs that don't lead to sustainable careers because those are the easiest placements to produce. It penalizes programs that serve people with more significant disabilities or more complex barriers, since placement times are longer. It hides the equity story inside aggregate numbers that don't reveal who's benefiting from services and who isn't. And it gives funders, policymakers, and employers a distorted view of what works.

Measuring success in disability employment programs well — through a balanced set of KPIs and benchmarks that capture access, process, outcomes, sustainability, advancement, equity, and the quality of the employment experience — is how programs improve over time and how organizations actually understand whether their disability employment efforts are producing the outcomes they claim.

This guide walks through the KPI categories that matter, specific indicators within each category, benchmarks where data exists, measurement pitfalls to avoid, and how to use the data for ongoing improvement rather than just reporting. It's written for service providers, employers, funders, policy folks, and program administrators designing or refining measurement systems — including organizations in Greenville, SC and the broader Upstate region where Kintsugi Consulting supports measurement and accountability work as part of broader disability inclusion consultation.

Categories of KPIs in Disability Employment Measurement

Effective measurement spans several distinct categories, each capturing a different dimension of program performance. No single category tells the full story; programs that balance across categories produce more accurate performance pictures and more useful improvement signals.

Access and participation metrics ask who is being served. How many people applied? How many were found eligible? How many actually received services? What was the demographic composition — by disability type, race and ethnicity, gender, age, geography, educational background? If your program serves a narrow slice of the eligible population, that's a finding worth knowing before you evaluate outcomes.

Process and service quality metrics ask how well services were delivered. How long did eligibility determination take? How long between eligibility and IPE (Individualized Plan for Employment) development? How long between IPE and service initiation? How many services were actually delivered as planned vs. delayed or dropped? Process metrics reveal where internal bottlenecks are slowing down the work.

Placement metrics ask whether participants were employed. Did they get jobs? Were the jobs competitive integrated employment (regular jobs in regular workplaces at competitive wages), sub-minimum-wage settings, or sheltered employment? Placement is necessary but not sufficient.

Retention metrics ask whether placements stuck. How many participants were still employed at 30 days? 90 days? 180 days? One year? Two years? Five years? Retention data reveals whether placements were sustainable or churned.

Wage and job quality metrics ask whether the employment was good employment. What were median and mean starting wages? Did workers earn enough to support themselves? Were they full-time or part-time? Did they have benefits? Did they have schedule predictability? A minimum-wage 15-hour-a-week placement without benefits is a placement, but it's not the same thing as a sustainable, quality job.

Advancement metrics ask whether careers grew. Did workers receive raises? Promotions? Additional training? New responsibilities? Lateral moves to better-fit positions? Workers connected to disability employment systems often plateau in entry-level roles even when they're capable of more; advancement metrics reveal whether that pattern is changing.

Worker experience and satisfaction metrics ask whether participants themselves considered the services effective. Were they satisfied with their counselor/coach? Did they feel their choices were respected? Did the job match their goals? Were their accommodation needs met? Satisfaction data, collected well, reveals whether informed choice is real or performative.

Disparity and equity metrics ask whether benefits are distributed equitably. Are outcomes similar across demographic groups? Across disability types? Across geographic regions? Or are some groups systematically getting worse outcomes? Aggregate numbers can obscure dramatic equity gaps.

Specific KPIs with Context and Benchmarks

Within each category, specific indicators can be measured with varying degrees of rigor. A starter set of disability employment program KPIs — with benchmarks where federal data or published research allows — includes the following.

Competitive Integrated Employment (CIE) Rate. The percentage of served participants who achieved employment in competitive integrated settings (regular jobs at competitive wages in inclusive workplaces, not sheltered workshops or sub-minimum-wage placements). Federal vocational rehabilitation reporting includes CIE, and WIOA significantly strengthened the emphasis on this outcome. Benchmark targets vary by state and service model; programs serving populations with more significant disabilities appropriately have lower CIE rates than programs with broader caseloads.

90-Day Retention Rate. The percentage of placed participants still employed 90 days after starting a job. This is the standard benchmark for "successful closure" in vocational rehabilitation case management. Federal VR targets vary but rates in the 70-80% range are common among well-functioning programs.

1-Year Retention Rate. The percentage of placed participants still employed one year after placement. This is a more demanding metric that reveals whether placements were sustainable beyond the initial post-placement support period. Quality programs typically see 1-year retention rates of 60-75% across their placement cohort.
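As an illustration, the 90-day and 1-year retention rates described above can be computed from basic placement records. This is a minimal sketch using hypothetical data; real programs would pull records from a case management system, and the cutoff date and record structure here are assumptions for demonstration.

```python
from datetime import date

# Hypothetical placement records: (start_date, end_date), with None
# meaning the participant is still employed as of the reporting date.
placements = [
    (date(2023, 1, 9), None),
    (date(2023, 2, 1), date(2023, 3, 15)),   # left after ~6 weeks
    (date(2023, 3, 6), date(2024, 6, 1)),    # stayed over a year
    (date(2023, 4, 3), None),
]

def retention_rate(records, days, as_of=date(2024, 12, 31)):
    """Share of placements still employed `days` after the start date.

    Only placements old enough for the checkpoint to be observable
    (start + days <= as_of) are counted in the denominator.
    """
    eligible = [(s, e) for s, e in records if (as_of - s).days >= days]
    if not eligible:
        return None
    retained = sum(
        1 for s, e in eligible
        if e is None or (e - s).days >= days
    )
    return retained / len(eligible)

print(f"90-day retention: {retention_rate(placements, 90):.0%}")
print(f"1-year retention: {retention_rate(placements, 365):.0%}")
```

Excluding placements too recent to have reached the checkpoint matters: counting a two-week-old placement as "retained at one year" inflates the rate, which is one way inconsistent definitions produce misleading numbers.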

Median Hourly Wage at Placement. The middle starting wage among placed participants. Federal VR data consistently shows national median VR closure wages in the $14-17 per hour range, though this varies significantly by state, industry, and disability population served. A program whose median placement wage is well below living wage for the local region is producing placements that don't translate to financial sustainability.
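The median-wage comparison above is simple to operationalize. A minimal sketch with hypothetical cohort wages; the living-wage figure is an assumption you would replace with a published estimate for your own region:

```python
import statistics

# Hypothetical starting wages ($/hour) for a placement cohort
wages = [12.50, 14.00, 15.25, 16.00, 17.50, 13.75, 22.00]

# Assumption: substitute the published living wage for your region
LOCAL_LIVING_WAGE = 18.00

median_wage = statistics.median(wages)
below_living = sum(1 for w in wages if w < LOCAL_LIVING_WAGE) / len(wages)

print(f"Median starting wage: ${median_wage:.2f}/hr")
print(f"Share of placements below local living wage: {below_living:.0%}")
```

Tracking the share of placements below the local living wage alongside the median catches cohorts where a few high-wage placements mask many low-wage ones.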

Time from Application to Employment. The total duration from initial application to the first day of a sustainable job placement. This varies dramatically by program design and participant complexity; median times of 6-18 months are common, with significant spread.

Participant Satisfaction Score. Typically measured via survey using consistent instruments administered at defined points in the service journey. Programs with genuine satisfaction data often find that participants' qualitative feedback reveals patterns that outcome numbers don't.

Informed Choice Indicator. Programs increasingly measure whether participants reported having meaningful input into their service plans, including the identity of providers and the design of their employment goals. This metric resists simple benchmarking but reveals cultural patterns in service delivery.

Job Match Quality Score. A measure of how well placements align with participant stated interests, skills, and goals. Requires upfront documentation of those interests and a defined scoring rubric; worth the investment for programs that claim to center informed choice.

Employer-Specific KPIs

Employers running internal disability employment initiatives or measuring their engagement with external programs have their own KPI set, distinct from but connected to program-level measurement.

Disability Representation in Workforce. The percentage of employees who self-identify as having a disability, tracked by organizational level, job category, and time. Section 503 sets a 7% utilization goal for federal contractors; employers not subject to Section 503 still benefit from tracking this metric as a baseline inclusion indicator.

Self-Identification Rate. The percentage of employees who are willing to self-identify their disability status. Low self-identification rates often indicate cultural issues that suppress disability disclosure, which affects both representation measurement and workers' access to accommodations.

Accommodation Request Response Time. The median time between an accommodation request and a decision. Long response times suggest process bottlenecks that discourage requests and undermine retention.

Accommodation Request Approval Rate. The percentage of requested accommodations that are approved. Very low approval rates suggest systemic barriers; very high approval rates without process scrutiny might suggest the formal process isn't filtering appropriately — either extreme is worth examining.

Retention Rate Parity. The retention rate of employees with disabilities compared to the overall workforce retention rate. Significantly lower retention for workers with disabilities signals cultural, accommodation, or management issues that need addressing.

Advancement Rate Parity. The rate at which employees with disabilities are promoted compared to the overall workforce promotion rate. Workers with disabilities often plateau in initial roles; measuring advancement parity reveals whether that's happening in your organization.
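Both parity metrics above reduce to the same calculation: the group's rate divided by the overall rate, where 1.0 indicates parity. A minimal sketch with hypothetical annual rates (the figures are illustrative, not benchmarks):

```python
def parity_ratio(group_rate, overall_rate):
    """Ratio of a group's rate to the overall workforce rate.

    1.0 means parity; values well below 1.0 flag a gap worth
    investigating (retention, advancement, or any other rate).
    """
    return group_rate / overall_rate

# Hypothetical annual rates for illustration
retention_disability, retention_all = 0.82, 0.90
promotion_disability, promotion_all = 0.06, 0.10

print(f"Retention parity: {parity_ratio(retention_disability, retention_all):.2f}")
print(f"Advancement parity: {parity_ratio(promotion_disability, promotion_all):.2f}")
```

In this illustration retention looks close to parity while advancement lags badly, which is exactly the plateau pattern the advancement metric is designed to surface.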

Inclusion Climate Measures. Survey-based indicators of how included, respected, and supported workers with disabilities feel in your organization. Disability:IN's Disability Equality Index (DEI) and similar benchmarking tools provide structured instruments for this measurement.

Our blog post on employee DEI training programs, from frontline to C-suite, covers how layered training across organizational levels supports the inclusion climate that drives these employer-side metrics.

How to Actually Measure These Things

Good measurement requires good data, and good data requires intentional infrastructure. A few practices make the difference between performative measurement and useful measurement.

Define metrics upfront and stick to the definitions. "Placement" can mean many things; "retention" can be calculated many ways. Document your definitions and apply them consistently across time. Inconsistent definitions produce data that's technically impressive but substantively meaningless.

Use standardized instruments where they exist. Federal VR reporting, Disability:IN's DEI, Council on Quality and Leadership's Personal Outcome Measures, and various validated satisfaction survey instruments all provide benchmarked tools that produce comparable data. Avoid reinventing measurement tools when standardized options exist.

Collect data at multiple time points. One-time measurement captures snapshots but misses trajectories. Programs that collect data at intake, at placement, at 90 days, at 6 months, at 1 year, and at annual follow-ups produce much richer pictures than programs measuring only at closure.

Include qualitative data alongside quantitative. Numbers tell you what; narratives and focus groups tell you why. Programs that combine quantitative tracking with qualitative participant feedback produce much more actionable improvement insights.

Disaggregate by demographic and disability categories. Aggregate data hides equity gaps. Disaggregating by race, ethnicity, gender, age, disability type, urban/rural geography, and other relevant categories reveals whether services are actually reaching and benefiting all populations.
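Disaggregation is mechanically straightforward once records carry the relevant categories. A minimal sketch on hypothetical closure records, grouping a CIE outcome by disability type (the group labels and counts are invented for illustration):

```python
from collections import defaultdict

# Hypothetical closure records: (disability_type, achieved_cie)
closures = [
    ("physical", True), ("physical", True), ("physical", False),
    ("psychiatric", True), ("psychiatric", False), ("psychiatric", False),
    ("I/DD", False), ("I/DD", True),
]

# group -> [successes, total]
groups = defaultdict(lambda: [0, 0])
for group, success in closures:
    groups[group][0] += int(success)
    groups[group][1] += 1

overall = sum(s for s, _ in groups.values()) / sum(t for _, t in groups.values())
print(f"Overall CIE rate: {overall:.0%}")
for group, (s, t) in sorted(groups.items()):
    print(f"  {group}: {s / t:.0%} (n={t})")
```

Reporting the group size (n) alongside each rate matters in practice: small-group rates swing widely, and suppressing or flagging tiny cells also protects participant privacy.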

Invest in data infrastructure. Case management systems, client databases, and reporting tools matter. Programs running disability employment on spreadsheets miss signals that integrated systems would catch.

Common Measurement Mistakes

Several predictable pitfalls come up in disability employment measurement, and naming them helps programs avoid them.

Confusing placement with success. Placement is the start, not the endpoint. Programs that celebrate placement and don't follow through on retention, advancement, and quality miss most of what matters.

Counting hours instead of outcomes. Service hour tracking measures what providers did, not what happened for participants. Hour-based metrics have their place (for billing, for capacity planning), but they should never be confused with outcome metrics.

Using aggregate numbers to hide disparities. Whole-program outcomes can look reasonable while specific demographic or disability groups experience dramatically worse results. Disaggregation is essential.

Measuring only at closure. If your measurement captures only what happened at case closure, you miss everything that happens afterward — including the retention patterns that reveal whether placements were sustainable.

Ignoring informed choice and satisfaction. Quantitative outcomes matter, but they don't measure whether participants felt respected, empowered, and well-served by the process. Programs that measure only outcomes miss cultural failures that show up in outcomes later.

Using benchmarks without context. National or statewide benchmarks provide useful reference points, but they don't necessarily apply to your specific population. Programs serving more complex populations appropriately have different patterns than programs serving broader caseloads.

Treating measurement as reporting rather than learning. Data collected for external reporting and then ignored internally is waste. Effective measurement is used — in case conferences, in program design reviews, in staff learning, in continuous improvement cycles.

Equity Considerations in Measurement

Disability employment measurement intersects with broader equity questions in important ways. People with disabilities are themselves a diverse population — racially, ethnically, economically, by gender, by disability type, by geographic location. Effective measurement surfaces whether disability employment services are serving that full diversity or reinforcing existing inequities.

Questions worth building into your measurement framework include: Are participants of color entering services at rates proportional to their representation in the eligible population? Are outcomes comparable across racial and ethnic groups? Are women with disabilities achieving advancement at rates comparable to men? Are LGBTQ+ participants experiencing service environments that respect their identities? Are rural participants accessing services comparable to urban participants? Are people with more significant or more stigmatized disabilities (including mental health and I/DD) receiving services of comparable quality?

Our broader work on industry-specific DEI training and the comprehensive guide to DEI training programs touch on how equity considerations shape effective programming across the broader DEI landscape — considerations that apply equally to disability employment measurement.

Using KPIs for Continuous Improvement

Measurement is only valuable if it changes practice. The gap between "we collect data" and "we use data to improve" is where many programs fall short.

Effective use of disability employment KPIs includes regular program review meetings that examine data trends, identify underperforming indicators, and set improvement goals; case-level use of data to inform individual service decisions; staff training that builds data literacy and connects measurement to practice; funder and stakeholder reporting that emphasizes learning rather than just accountability; and transparent sharing of data with participants and community, so measurement serves the people being measured rather than just the people doing the measuring.

Programs that build these practices produce improvement over time. Programs that treat measurement as a reporting obligation produce stagnation.

The Greenville and South Carolina Context

For programs, employers, and service providers in the Upstate region and the broader South Carolina context, disability employment measurement intersects with state-level systems and local realities. The South Carolina Vocational Rehabilitation Department's measurement framework follows federal RSA reporting requirements, with state-specific overlays. Employers running internal measurement can benchmark against state-level VR data to contextualize their own outcomes. Local service providers and nonprofits can partner with SCVRD for coordinated measurement and data sharing.

Local measurement work also benefits from connection to broader disability community voices. Independent Living Centers, disability-led advocacy organizations, and peer networks in the Upstate often have perspectives on service quality and equity that formal measurement systems don't capture. Building measurement frameworks that include these voices — through advisory committees, participant feedback loops, and community listening sessions — produces more accurate pictures of what's working and what isn't.

How Kintsugi Consulting Supports Measurement Work

For organizations in Greenville, SC and beyond looking to build or improve their disability employment measurement practices, Kintsugi Consulting provides training, consultation, and ongoing partnership that supports measurement alongside the broader disability inclusion work.

Our work includes disability inclusion assessment and audit work that establishes baselines for measurement; accommodation process design that includes tracking infrastructure; inclusive recruitment consultation that builds demographic and equity tracking into hiring processes; employer DEI training that produces the cultural foundation measurement depends on; and ongoing partnership as organizations mature their measurement and continuous improvement practices over time.

Our services page outlines specific offerings; prepared trainings cover topics organizations find useful as entry points; and collaborations and partnerships describes ongoing relationships we've built with organizations committed to this work.

Where to Go From Here

Measuring success in disability employment programs well takes real investment — in data infrastructure, in defined metrics, in sustained collection practices, in data-informed improvement cycles, and in equity-conscious measurement design. The alternative — simple placement counting — produces comfortable numbers that don't actually tell you whether your program is working.

For service providers: audit your current measurement framework against the categories above. What are you missing? What would it take to add?

For employers: build disability-inclusive measurement into your broader people analytics. Representation, retention parity, advancement parity, and accommodation metrics should be routine, not exceptional.

For funders and policymakers: push the programs you support toward more comprehensive measurement. Outcome quality, not just outcome existence, is what matters.

For organizations in Greenville, SC and beyond looking to build measurement practices that support authentic disability inclusion, contact Rachel Kaplan at Kintsugi Consulting directly or visit our scheduling page to set up a conversation.

Measurement done well produces programs that actually serve the people they're meant to serve. That's the goal — not the data itself, but what the data reveals about whether the work is changing lives.

Bottom TLDR:

Measuring success in disability employment programs well means defining metrics upfront, disaggregating data by demographic and disability groups, using standardized instruments, collecting at multiple time points, and pairing quantitative outcomes with qualitative participant feedback. Continuous improvement depends on using data, not just reporting it. Kintsugi Consulting in Greenville, SC partners with organizations to build these measurement practices as part of broader disability inclusion work. Contact Rachel Kaplan to begin.