I was reading this article about algorithmic bias as applied to teachers under the IMPACT framework. Under the framework, teachers are required to raise their students to a higher level as part of their KPIs. This effectiveness is measurable, traceable and . . . wrong.
That algorithms can give you these measures is great, but I would like you to consider this: is my teacher bad at teaching, or is my teacher taking on the hardest-to-teach students? The classic example (and I will have to Google it just now) is this: do the worst surgeons have the highest chance of killing their patients? This is a complex question because mortality is a very easy metric to apply to doctors. Consider the following cases and decide who is the good doctor and who is the bad doctor in each:
One doctor has had 100 patients die under the knife; the other has had 1.
One doctor, practicing for 50 years, has had 100 patients die under the knife; the other doctor has been practicing for a week and has had 1 patient die under the knife.
One doctor, practicing for 50 years in a trauma ward in South Central LA, has had 100 patients die under the knife; the other doctor is a dermatologist who has been practicing for a week and has had a patient die under the knife.
Obviously the first question is how many patients went under the knife in total: what was the success rate? Is it trending up or down? And that is the rabbit hole. If the success rate is trending down, is the doctor getting worse? Yes? No! What if a doctor skilled in saving lives works on progressively harder cases? What if he becomes the go-to person for that long-shot surgery?
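To make the rabbit hole concrete, here is a minimal sketch with invented numbers (the patient counts and `baseline_risk` figures are assumptions for illustration, not real data). It compares a raw death count, a raw death rate, and an observed-to-expected ratio that accounts for how risky each doctor's case mix is:

```python
# Hypothetical figures: a 50-year trauma surgeon vs. a first-week dermatologist.
# baseline_risk is the assumed expected death rate for that doctor's case mix.
doctors = {
    "trauma_surgeon": {"deaths": 100, "surgeries": 5000, "baseline_risk": 0.05},
    "dermatologist": {"deaths": 1, "surgeries": 10, "baseline_risk": 0.001},
}

for name, d in doctors.items():
    raw_rate = d["deaths"] / d["surgeries"]
    expected_deaths = d["baseline_risk"] * d["surgeries"]
    # Observed/expected ratio: below 1.0 means fewer deaths than the
    # case mix predicts, i.e. better than baseline despite more deaths.
    o_e = d["deaths"] / expected_deaths
    print(f"{name}: raw rate {raw_rate:.3f}, O/E ratio {o_e:.2f}")
```

With these made-up numbers the trauma surgeon has 100 times the deaths but an O/E ratio of 0.40 (better than his case mix predicts), while the dermatologist's single death gives an O/E ratio of 100. Rank them on raw deaths and you fire the wrong doctor.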
The case of the teacher is identical. I have had some great teachers, I have had some bad teachers, and I have had teachers who took on the children that few others would. What metric do you apply as your outcome? Increased grades? Well, bad news: I had a significant drop between my standard 9 (grade 11) and matric (grade 12) results. All my teachers would have been fired, and yet I passed my Bachelor's in Engineering with good marks.
So, in steps the human response, and we see this in private schools, medicine, public schools and the workplace. People start only working on projects that will succeed, because that's what the company measures. What about that enterprise data warehouse upgrade project that nobody wants to touch, that has to be done, but where anyone working on it will have a mark against their name because of the very high likelihood that it fails? Does that make them bad workers? They will absolutely be heroes if it works, though. Some systems (schooling, IT, medicine) require a special caliber of people to do the jobs that carry a high risk of failure. The danger is selecting the metric for "success" incorrectly, as it will lead to your best people either moving on or being unwilling to work on the risky tasks they may be best suited for. Algorithmic bias is very difficult to fix if you choose the wrong measures of success.