By: Mark Lardner, LCSW, Center for Innovation in Population Health
When a system is looking for an assessment process to help improve its work with individuals, it often explores the TCOM approach. A simplified description of an initial implementation of TCOM tools would include the following activities. First, local versions of the tools are designed with input from a variety of stakeholders. Next, policies are developed that define the population (the youth, families or individuals) to be assessed, and the timeframes for the completion of the assessment. Finally, training is rolled out for the workforce and the system is off and running.
As an implementation gets off the ground, every system has an interest in measuring the success of the initiative. Tracking the timely completion of assessments seems like a natural next step for a handful of reasons:
- It is often the first data that a system has access to. You can’t analyze data you don’t have, and typically the first complete data set a system acquires is assessment completion information.
- They are easy to design. The parameters for the design of compliance reports are set within policy, and very few data fields are needed to create a compliance report (date completed, date expected, etc.).
- They are easy to consume. Most systems have trained their supervisors to manage frontline workers through a process of policy enforcement. They are used to seeing compliance reports.
- The reports are scalable. Systems can create complementary reports at the system, county/agency, program/site, supervisory and caseload level using the same basic data set (a minimal sketch of such a report follows this list).
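As an illustration of how little data such a report actually requires, here is a minimal sketch in Python (pandas assumed; the table and column names are hypothetical, not drawn from any specific TCOM data system). Swapping the grouping column scales the same report from caseload to supervisor to county to state.

```python
import pandas as pd

# Hypothetical assessment-level extract: one row per expected assessment.
# Column names are illustrative, not taken from any specific TCOM data system.
assessments = pd.DataFrame({
    "agency": ["A", "A", "B", "B", "B"],
    "date_expected": pd.to_datetime(
        ["2024-01-15", "2024-02-01", "2024-01-20", "2024-02-10", "2024-03-01"]),
    "date_completed": pd.to_datetime(
        ["2024-01-10", None, "2024-01-25", "2024-02-05", None]),
})

# Assumption: an assessment counts as "timely" if it was completed on or
# before the date expected under policy.
assessments["timely"] = (
    assessments["date_completed"].notna()
    & (assessments["date_completed"] <= assessments["date_expected"])
)

# The same two date fields support reports at every level: swap "agency" for
# county, program, supervisor, or worker to scale the report up or down.
compliance_pct = assessments.groupby("agency")["timely"].mean().mul(100).round(1)
print(compliance_pct)
```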
It is clear that these reports offer some value to the system, but they also come with some significant downsides. The most potentially damaging is that they unintentionally communicate that the priority is completing the form, not using the tool. The measurement of compliance is not inherently flawed, but attention does need to be paid to how it is measured and, more importantly, communicated.
In order to move beyond compliance, the system needs to broaden its focus using “quality of assessment” measures. Determining the quality of any single assessment is a subjective exercise, so when thinking about all assessments across a system we can define a quality assessment as one that is accurate, collaborative, and timely.

Our challenge is to create and utilize reports that communicate all three aspects of quality while being easy to consume and scalable for each level of the system. The shift beyond compliance enables a system to:
- have a more nuanced discussion about assessment quality,
- develop a road map for designing and delivering technical assistance, and
- create a process for measuring their progress towards building best practitioners.
Below are examples of scalable reports that look at timeliness and accuracy at the state, county/program, supervisor, and frontline staff levels. A narrative illustrates the potential utility of each report.
Let’s take a look at a series of reports used by a Child Welfare system (the data is not real; this is a public blog, after all) to get a quick snapshot of the accuracy and timeliness of assessments at the state and county level.

This first bar graph communicates:
- the % of Completed Assessments that had No Needs identified (all ratings on need items were less than “2”) during the past quarter
- the % of Completed Assessments that had All Zeroes on Need Items (all ratings on need items were “0”) during the past quarter
- the % of Completed Assessments that had No Identified or Useful Strengths (all ratings on strength items were “3”) during the past quarter
- the % of youth without a completed assessment within the policy time frame during the past quarter (a sketch of how these percentages might be computed follows this list)
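One way to read these definitions is as simple flags computed over the item-level ratings of each completed assessment. The sketch below is illustrative only; it assumes a hypothetical table with one row per completed assessment and the raw 0-3 ratings stored as lists.

```python
import pandas as pd

# Hypothetical item-level extract: one row per completed assessment, with the
# raw 0-3 ratings for need and strength items stored as lists. Illustrative only.
df = pd.DataFrame({
    "need_ratings":     [[0, 1, 0], [0, 0, 0], [2, 3, 1], [1, 1, 0]],
    "strength_ratings": [[3, 3, 3], [1, 2, 3], [0, 1, 2], [3, 3, 3]],
})

# Flags mirroring the bar graph's definitions above.
df["no_needs"] = df["need_ratings"].apply(lambda r: all(x < 2 for x in r))
df["all_zeroes"] = df["need_ratings"].apply(lambda r: all(x == 0 for x in r))
df["no_strengths"] = df["strength_ratings"].apply(lambda r: all(x == 3 for x in r))

# Percentages for the three accuracy bars; the timeliness bar would come from
# the expected/completed dates shown in the earlier compliance sketch.
print(df[["no_needs", "all_zeroes", "no_strengths"]].mean().mul(100).round(1))
```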
The State has concerns in all three areas.
- Given what they know about their population, they are concerned that 33% of the assessments completed in the last quarter have no needs and that 8% of assessments were completed with every need item defaulting to “0.”
- Additionally, the system has long prided itself on being strength-based, but 42% of assessments completed in the last quarter do not include any identified or useful strengths.
- Finally, given that they are in their second year of implementation, they have set a goal of having 80% of children receive an assessment each quarter (the goal for their first year was 70%). They have not reached their goal this quarter; 28% of youth did not receive a timely assessment.

The second bar graph keeps the same color scheme and presents a visual for:
- county level accuracy issues related to Need items, and
- timeliness of assessment.
Counties whose blue bars approach or cross the red state-average line for assessments with no needs are the first to receive offers of technical assistance. (A similar county-level report for the green “% No Strengths” bar is generated and utilized.)
In this case, the State would start its outreach with “County H,” “County C,” and “County B,” which are all having difficulties with accuracy. (County C and County H are also having challenges around the timeliness of their assessments.) Those counties would be asked to review their county-specific reports, which are organized by supervisory unit and caseload.
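As a sketch of how those outreach candidates might be flagged programmatically, counties can be compared against the state average for the “% No Needs” bar. The data and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical county-level summary; the values are made up for illustration.
counties = pd.DataFrame({
    "county": ["A", "B", "C", "H"],
    "pct_no_needs": [18.0, 36.0, 40.0, 45.0],
    "pct_untimely": [15.0, 22.0, 31.0, 35.0],
})

state_avg_no_needs = counties["pct_no_needs"].mean()  # the "red line"

# Counties at or above the state-average line are the first offered technical
# assistance; a parallel pass would do the same for a "% No Strengths" column.
outreach = counties[counties["pct_no_needs"] >= state_avg_no_needs]
print(outreach.sort_values("pct_no_needs", ascending=False))
```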
The Supervisor Caseload Intensity report provides the supervisor a snapshot of the timeliness (“# of Youth Assessed” and “# of Youth Missing CANS”) and accuracy (“# of Actionable Needs” and “# of Strengths”) for all the frontline workers that they supervise.
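A caseload-level roll-up of this kind could be produced with a simple grouping over youth-level records. The sketch below uses hypothetical field names, and the cut points for “actionable” needs and “useful” strengths are assumptions; in practice these are local decisions.

```python
import pandas as pd

# Hypothetical youth-level records for one supervisory unit: one row per youth
# on a worker's caseload, summarizing that youth's most recent CANS.
caseload = pd.DataFrame({
    "worker": ["R Dahl", "R Dahl", "R Dahl", "J Smith", "J Smith"],
    "cans_completed": [True, True, False, True, True],
    "actionable_needs": [0, 1, 0, 4, 3],   # assumption: need items rated 2 or 3
    "useful_strengths": [0, 0, 0, 2, 5],   # assumption: strength items rated 0 or 1
})

# One row per frontline worker: timeliness and accuracy at a glance.
intensity = caseload.groupby("worker").agg(
    youth_assessed=("cans_completed", "sum"),
    youth_missing_cans=("cans_completed", lambda s: (~s).sum()),
    actionable_needs=("actionable_needs", "sum"),
    useful_strengths=("useful_strengths", "sum"),
)
print(intensity)
```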
With a quick glance, the supervisor realizes that more than half of their supervisees are struggling to identify actionable needs and useful strengths. Determined to provide support to the staff, the supervisor reviews the worker-level report before their next supervisory session with R Dahl.

Realizing that there is not enough time to review each individual story in detail, the supervisor selects two individuals, M Wormwood and A Prewt, to review during the upcoming supervisory session. Instead of going through the completed assessments and asking for changes to be made, the supervisor asks R Dahl to describe his understanding of the youth, including any needs that could be resolved and strengths that could be useful. The supervisor then helps R Dahl organize the conversation using the common language of the CANS. The supervisor’s goal is to build agreement and understanding, and to model how conversations with the youth and their team can be organized around action. These skills are critical for staff trying to complete the CANS with accuracy and efficiency.

The reports above still lack a scalable, easy-to-consume approach for measuring the collaborative aspect of quality assessment. The following ideas require testing and further development:
- Use twice-yearly surveys of the experiences of youth, families, and adults to create a collaborative score for each case unit.
- Sum the number of signatures on each assessment (youth, family, individual, and team members) and use this total as another bar on the Supervisor Caseload Intensity Report.
- Use measures of skill development in the area of collaborative assessment (e.g. items from the CHQIN – the Collaborative Helping Quality Inventory).
- For youth with multiple team members completing assessments (e.g., caseworker and MH clinician), measure consistency across assessments completed within the same timeframe, and create a collaborative assessment score based upon the level of agreement between the two assessments (see the sketch after this list).
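The last idea lends itself to a simple item-level agreement computation. The function below is one hypothetical way to score agreement between two assessors’ ratings of the same youth in the same window; it is not an established TCOM metric, and a softer variant could count ratings within one point of each other as agreement.

```python
# Hypothetical agreement score between two assessments of the same youth
# completed in the same window (e.g., by a caseworker and an MH clinician).
def collaborative_agreement(ratings_a, ratings_b):
    """Return the percentage of items rated identically by both assessors."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("Assessments must cover the same, non-empty item set.")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return round(100 * matches / len(ratings_a), 1)

# Example: two assessors agree on 4 of 5 shared items.
caseworker = [0, 1, 2, 0, 3]
clinician = [0, 1, 1, 0, 3]
print(collaborative_agreement(caseworker, clinician))  # 80.0
```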
The level of collaboration in assessment is the most difficult for systems to measure; it is also potentially the most important metric. Truly collaborative approaches to assessment tend to increase the accuracy of the assessment because every voice is represented. Collaborative approaches to assessment and planning also tend to increase the timeliness of assessments. The work of completing the assessment is no longer the responsibility of one solitary worker, but instead the shared responsibility of the person-centered team.
So how does your system measure the quality of assessments? Share your thoughts and ideas by leaving a reply below.
Mark – this is a great article capturing firsthand what assessment data is struggling to tell us. In our agency, we also struggle with a lack of identified strengths and a low % of needs and actionable items captured, which doesn’t correspond well to the acuity of the clients we serve. We are always looking at better ways to understand the data and report out on it more accurately. Strengthening data accuracy is significant.
Thanks Shantal. It has been interesting to see the responses reports like this get from the assessors. There is usually some initial defensiveness, but once people are able to move past that I have heard people share things like (1) they were confused about the rating system, (2) they were just trying to get the assessment done, (3) they didn’t know anyone was looking at the data, (4) they put the “real” information in the comments. All of these responses become part of the conversation when designing training, technical assistance and coaching efforts.
Mark, this is an outstanding article. I’m saving as a reference and recommending to others.
Thank you for your time to prepare it so clearly.
Thanks for the feedback Jerome. Much appreciated.
These are really great suggestions for more meaningful monitoring of your TCOM data at a systems level. I might add the suggestion that many of these reports could also be improved by being looked at over time. That is to say, put dates on the X axis (maybe months, or quarters) and then look at how these values change over time. As an example, for our contract in Central Pennsylvania we track how many CANS are submitted each month, and we break up the CANS into one of four severity levels (https://communitydataroundtable.org/reports/outcomes-reports/_cabhc_community_report/). With this we can track not only rates of completed CANS, but also get a sense for how the severity of the population is shifting through time. Further, with tool tips you can track changes in the prescription of a certain service that the system is interested in. Putting this information over time makes a story, and helps understand the context for any observation.
Thanks Dan. I wholeheartedly agree that after a system defines its metrics for quality assessment, it would want to track how these shift over time. Utilizing a severity, or perhaps a service intensity, chart that tracks shifts in the population over time is another great suggestion for measuring the accuracy component of quality assessment.