By Dan Warner, Executive Director, Community Data Roundtable

I have seen many hard-working, diligent people fail TCOM certification because they over-focus on anchor definitions and do not properly understand the item they are rating.  They think that by focusing on the anchor definitions they are being “detail oriented,” but instead they are “missing the forest for the trees.”  The anchor definitions are not the “concrete” part of a TCOM tool; it is the items themselves that are concrete.  After all, this is the first rule of a communimetric tool: it is an item-level tool, and items are chosen because they lead to different action trajectories in the client’s treatment.  Attending to anchor definitions without fully understanding the items they are meant to anchor all but guarantees you will score the item incorrectly.

This is one of the key ways that TCOM tools differ from similar biopsychosocial assessments, which spend much of their certification processes making sure that the concrete differences between the levels of a “domain” (typically their “items” are actually whole domains) are well understood by people pursuing certification on their tool.  In those tools, the details of a “domain” are the most concrete part (see, for instance, the CAFAS and DLA-20).  In such a situation, being very attentive to the exact nuances of the anchor language for each level of a domain-item makes sense, and attending like this helps you pass those certification exams.  However … it doesn’t really help with treatment planning.  Further, it is not very flexible in accounting for all the diverse ways that needs and strengths can show up on any given item-domain in the world.

Anyone who has tried to use such tools in practice quickly realizes that many people simply do not fit well into the anchor definitions of the various levels.  Passing the testing vignettes is easy, because the vignettes for those certification exams are written to fit exactly into the concrete anchors specified in the tool; in the “real world,” however, no one quite looks like what the assessment captures.  Thus, workers find themselves finagling a person into the tool’s limited concrete definitions, cutting off what doesn’t fit and stretching other things to make them fit better.  It’s a procrustean process done in service of the tool, instead of designing optimal treatment.

In contrast, TCOM tools focus on items that we all have to address in our work.  These are the concrete parts of the tool.  If an item is not relevant for this person, score a 0!  If it is relevant, the issue becomes what we are going to do about it.  Here the action levels are essential: are we going to watch this problem (1), put it on the treatment plan (2), or act on it intensively (3)?  The anchors attached to each of these levels within an item are helpful for seeing what the item is describing, and for picking the appropriate action level.  However, those anchors are concrete only in the context of the item and the actions that need to be taken.  And frankly, once you understand the item, scoring these levels is usually a pretty straightforward process.

In conclusion, if you find that you are failing TCOM certifications over and over again, have someone quiz you on what the items on the tool you are testing on mean.  You might be fundamentally misunderstanding several of these items.  It is not that you think something is a crisis while the vignette designers felt the problem was moderate; rather, you are misunderstanding enough items that your scoring simply looks random.  Rein yourself in and slow down: look at what the item is about, and scoring it well enough will always get you past certification and, quite frankly, will get you working actively in the field making top-quality plans that also produce helpful, analyzable data.

Extra note: I wrote this blog post a few weeks ago, but I was teaching this morning and again saw a hard-working person furiously scrutinizing her CANS manual during her testing vignette.  She was working on an item and saw that the word “willful” appeared as part of the anchor definition for the level 2 rating, and she was trying to figure out whether the child in the vignette had acted willfully or not.  This was an absolute distraction from the question of whether this item was actionable for the client.  “Willful” is just a word that helps describe what an actionable need on this item looks like, but she was getting distracted by whether, in this particular vignette, we were seeing a willful problem.  My response?  I took the manual from her and asked her what the item she was looking at was about.  She couldn’t tell me.  So this helped get us back on course.  I didn’t give her back the manual for the rest of the exam, but asked her to ask her friends what items meant if she didn’t know.  She passed the exam with high reliability!  This is exactly what I am talking about: the anchor definitions were distracting her from actually understanding what needs to get done.  She was too preoccupied with fitting things into the test, and it wasn’t until I refused to allow her to do this that she could successfully pass the exam.  Hopefully, this little interchange will also help her be a better clinician, focusing on the work she needs to do, not the scoring she needs to do.

For additional support and references on training and certification, visit the TCOM Training FAQ page on the blog for tip sheets and blog posts about TCOM and its tools.

2 Responses

  1. I have this same experience when certifying staff on the CANS. Love that when you took the book away they were unsure which item they were rating. This blog will help me with reminding future classes the role of the manual. Thank you!

  2. Very well stated, Dan! You captured the essence of Communimetrics, CANS & TCOM: it is the work that we do with those we serve…the children/youth & families.

