Medicinal computers and the importance of documentation in AI

(Sugree/Flickr Creative Commons)

AI has recently been used as a diagnostic tool for tuberculosis and HIV. In these cases, the computer taught itself: researchers fed data into the system, and the system then determined whether or not a given patient had the disease.

In this case, the computer did well, but the exact mechanism was unknown: the AI used a "black box" algorithm. Like the human brain, a black box is a mechanism whose inner workings the researchers cannot inspect. Black box algorithms were key to several advances, from Deep Blue beating Kasparov to machines mastering Go and other strategy games. However, some analysts found that the diagnostic machine was using top-down processing of the images, meaning that it drew on knowledge external to the features of the problem at hand; in this case, the computer relied on information about what type of x-ray machine was used to create the image rather than on the pathology itself. Cynthia Rudin argues that black boxes are problematic and that a better solution is to use interpretable models in the first place. She also argues that if an algorithm is built from clinical knowledge, it will be easier for medical professionals to verify and check its work.
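To make the shortcut problem concrete, here is a minimal sketch, not the researchers' actual pipeline, of how an interpretable model exposes this kind of leakage. The dataset, feature names and numbers are invented for illustration: a "scanner type" variable is correlated with the diagnosis, mimicking the x-ray confound described above, and a simple linear model's coefficients show that it is leaning on that confound.

```python
# Hedged sketch: an interpretable model makes a shortcut visible.
# All data and feature names here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# True disease status (0 or 1).
disease = rng.integers(0, 2, size=n)

# Hypothetical genuine clinical signal, weakly informative.
lesion_score = disease * 0.8 + rng.normal(0, 1, size=n)

# Hypothetical confound: sicker patients were imaged on a different
# scanner, so scanner type leaks the label almost perfectly.
scanner_type = (disease + (rng.random(n) < 0.1)).clip(0, 1)

X = np.column_stack([lesion_score, scanner_type])
model = LogisticRegression().fit(X, disease)

# Because the model is a simple linear one, its coefficients serve as
# documentation: a reviewer can see it leans heavily on the scanner.
for name, coef in zip(["lesion_score", "scanner_type"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A black box trained on the same data could exploit the same shortcut, but there would be no coefficients to read; this is exactly Rudin's case for interpretable models.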

Black box models matter outside of medicine as well. In criminal justice, they have been used to produce risk assessments of defendants. Yet a post-mortem analysis of COMPAS, a tool intended to assess criminal risk independent of prejudice, found that it assigned higher risk scores to African American defendants than to European Americans with comparable or worse records. Sometimes that was due to the coding of certain crimes, and the company behind COMPAS maintained that its algorithm was impartial. But this highlights the need for documentation, so that humans can run analyses independent of the given AI system and determine the reliability of an assessment.
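The kind of independent audit that documentation enables can be quite simple. Below is a hedged sketch in the spirit of such an analysis: given released risk labels and later outcomes, compare false positive rates across groups. The column names and toy records are invented for illustration, not taken from any actual data release.

```python
# Hedged sketch of an independent fairness audit.
# Column names and data are illustrative assumptions only.
import pandas as pd

def false_positive_rate(df: pd.DataFrame, group: str) -> float:
    """Share of people in `group` who did NOT reoffend but were
    still labeled high risk."""
    subset = df[(df["group"] == group) & (df["reoffended"] == 0)]
    return (subset["risk_label"] == "high").mean()

records = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "risk_label": ["high", "low", "high", "low", "low", "high"],
    "reoffended": [0, 0, 1, 0, 0, 1],
})

for g in ["A", "B"]:
    print(g, false_positive_rate(records, g))
```

None of this requires access to the model itself, only to its inputs, outputs and outcomes, which is precisely why documentation and data release matter.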

Furthermore, "black boxes" inject a false sense of impartiality, because there may be unknown priors that the original designers failed to account for when they built the system. One key epistemological tool of science is replicability: the principle that another scientist could, in theory, apply the same procedures and add evidence as to whether a significant finding reflects a true effect or a sampling quirk. Verifying that a "black box" made a proper decision is difficult, however, because there is no procedure that can be applied to confirm that a positive is correct, short of running conventional tests to rule out accidental misdiagnoses. With humans, you can repeat conventional procedures, and if the second test is also positive, you can apply Bayes' Theorem to determine whether the positive is a false positive or a true positive and then make treatment decisions, as the worked example below shows.

This also represents another foray of the machine into territory previously occupied by humans. The computers in this case did not make critical treatment decisions, but unless humans can verify the procedure and confirm the analysis, doctors would be forced to accept the output and thus make choices based on the decisions of a machine. Fortunately, no one is yet suggesting the replacement of humans, and Johns Hopkins already requires statistics and data science for its doctorates. There is merit to AI: it shows that humans understand how to create thinking machines, which indicates knowledge of information processing and suggests that the TRACE model and similar models of cognition are more likely to be valid. Documentation gives humans a means to verify the work and lets analysts use the same data to reproduce comparable results, detecting implicit biases the programmers did not account for, or flaws in the methodology the machine used to arrive at its conclusion.
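Here is a worked sketch of the repeat-testing arithmetic mentioned above. The prevalence, sensitivity and specificity figures are illustrative assumptions, not numbers from the study; the point is only how a second positive test shifts the posterior probability.

```python
# Worked Bayes' Theorem sketch for repeated diagnostic testing.
# Prevalence, sensitivity and specificity are assumed values.
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test) via Bayes' Theorem."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

prior = 0.01          # assumed disease prevalence: 1 in 100
sens, spec = 0.90, 0.95

after_one = posterior(prior, sens, spec)
# Treat the first posterior as the new prior for an independent retest.
after_two = posterior(after_one, sens, spec)

print(f"After one positive test:  {after_one:.2%}")   # ~15%
print(f"After two positive tests: {after_two:.2%}")   # ~77%
```

With these assumed numbers, a single positive still leaves an 85 percent chance of a false positive, while a second independent positive raises the probability of disease to roughly three in four. That reasoning is only possible because the test's error rates are documented.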


Jacob Ningen is a campus correspondent for The Daily Campus. He can be reached via email at jacob.ningen@uconn.edu
