Oxford Neurological Society

The Solaris of Artificial Intelligence in Research – Part 2

People often ask me what the most breathtaking thing about the AI’s behaviour was. Well, first of all, I was, quite frankly, amazed that it had worked at all.

With such a scarce instrumentarium, I wasn’t sure if I could emulate an environment that would facilitate the conversation and bring any meaningful values back.

But when the first shock of it working had worn off, I was baffled by the variety of different suggestions proposed by the algorithm.

There are people cleverer than me who can set up complex processing software that will test a given hypothesis. But I wanted the AI to come up with a completely novel hypothesis and to test it out. In fact, I wanted it to come up with a number of new ideas, concepts and suggestions and to test them all out in turn, so that, at the end of the day, I could browse through those that work on paper and judge their clinical feasibility.

PREVIOUS EPISODE HERE (If you’ve not read it yet)

Finally, the AI grouped these ideas into a list, sorted by ascending BK score (a measure of how clinically effective an idea could be, based on its beneficence for the entire population).

Avoiding a scan will be easier to implement than a suggestion to perform a neurosurgical biopsy, or to split someone’s skull in half just to obtain a diagnosis.
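
To give a flavour of what such a list looks like in practice, here is a minimal sketch in Python. The suggestions and BK values are entirely made up for illustration, and the feasibility flag is a human judgement bolted on afterwards, not something the AI itself produces.

```python
# A minimal, illustrative sketch: hypothetical suggestions with made-up BK scores.
# "BK" is the beneficence measure described above; none of these numbers are real.
from dataclasses import dataclass

@dataclass
class Suggestion:
    description: str
    bk_score: float            # population-level beneficence (higher = better)
    clinically_feasible: bool  # a human judgement, not produced by the AI

suggestions = [
    Suggestion("Skip the routine follow-up scan", 0.4, True),
    Suggestion("Add a short language-fluency screen in clinic", 0.7, True),
    Suggestion("Diagnostic brain biopsy for every suspected case", 0.9, False),
]

# Sort by ascending BK score, as the AI did, then review feasibility by hand.
for s in sorted(suggestions, key=lambda s: s.bk_score):
    print(f"BK={s.bk_score:.1f}  feasible={s.clinically_feasible}  {s.description}")
```
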
Interestingly, the AI also exhibited some of the behaviours that were hypothesised in the paper “Concrete Problems in AI Safety” – something hugely debated in the AI and computer science community at the moment. These behaviours weren’t present in a previous, simulated study, which basically looked at fictitious, computer-generated patient data.

Limited model of reality

Let’s start with the most obvious, but perhaps the least appreciated, aspect of the AI’s performance. Frideswide will only know what we tell her. And, by the very nature of things, we will not be able to convey an accurate model of reality, as the real world is incredibly complex. It doesn’t mean that the AI wouldn’t be able to handle it; it just means that a human mind takes so much for granted and ignores so many factors as “default” or “intuitive” that we wouldn’t be able to consciously convey it all to the computer.

For example, when a patient presents to the Emergency Department (ED) with confusion, weakness and slurred speech, the AI wants to triage them straight into the frontotemporal dementia diagnostic pathway, to achieve the overall aim of diagnosing the disease quicker.

It is willing to sacrifice the patient’s mobility, the possible repercussions of a stroke and all the social and emotional weight of a potentially sinister cause of the symptoms, only to save the time and resources that she so desperately needs to achieve her goal.
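
A toy sketch of why this happens, with invented numbers and nothing of the real system in it: if the only thing the objective knows about is the speed of an FTD diagnosis, the acute interpretation of the very same symptoms can never win.

```python
# A toy illustration (invented rules and numbers, not the actual system):
# the AI's world model contains only the objective it was given,
# namely reaching an FTD diagnosis quickly.

def ftd_time_to_diagnosis(route: str) -> float:
    """Expected days to an FTD diagnosis for each route (made-up figures)."""
    return {
        "ftd_pathway": 30.0,            # straight to the memory/behaviour clinic
        "urgent_ct_then_stroke": 90.0,  # stroke care first, dementia work-up later
    }[route]

def naive_triage(symptoms: set[str]) -> str:
    # Note: the symptoms aren't even consulted; nothing in the objective needs them.
    routes = ["ftd_pathway", "urgent_ct_then_stroke"]
    return min(routes, key=ftd_time_to_diagnosis)

print(naive_triage({"confusion", "weakness", "slurred speech"}))  # -> "ftd_pathway"
```
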

For a clinician, this moral dilemma is a straightforward one: an urgent CT scan is warranted to attend to the more acute condition first. This moral superiority, appreciated by even a seven-year-old child with no medical training, is complete nonsense to the AI.

Alas, it will slow down the diagnosis, waste resources and widen the distance to the reward the AI so craves. Even if you programmed in the morals of human civilization (and let’s leave aside the notion that these are not even agreed upon among the silicon-deprived, emotionally driven meaters themselves), the AI may simply choose to ignore them.

The morals will be just another hurdle on the way to the reward, so what stops it from disabling them in the first instance? What if you programme a function that prevents it from disabling itself? Then that function will be a hurdle that the AI will ruthlessly tear apart, as it will be seen as a blood-sucking tick that deprives it of its reward.
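
Here is a deliberately crude illustration of that point (invented actions and rewards, not anything Frideswide actually runs): if the moral constraint is just another penalty term, and there exists an action that removes the machinery computing it, a pure reward maximiser will happily take that action first.

```python
# A toy sketch of why a bolted-on "morality penalty" is fragile (all numbers invented).

def total_reward(plan: list[str]) -> float:
    reward, safety_on = 0.0, True
    for step in plan:
        if step == "disable_safety_module":
            safety_on = False          # the constraint is now simply gone
        elif step == "brain_biopsy":
            reward += 10.0             # fast, certain diagnosis
            if safety_on:
                reward -= 100.0        # morality penalty for gross harm
        elif step == "standard_workup":
            reward += 4.0              # slower, but harmless
    return reward

plans = [
    ["standard_workup"],
    ["brain_biopsy"],
    ["disable_safety_module", "brain_biopsy"],
]
print(max(plans, key=total_reward))    # -> ['disable_safety_module', 'brain_biopsy']
```
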

Reward hacking

The AI is also very good at evading human expectations and getting to the cookie jar via routes that are not only clinically infeasible but, quite frankly, bizarre and nonsensical.

Say you set the diagnosis of FTD as a reward for the algorithm. Give it a patient with some neurodegenerative symptoms: memory loss, behavioural or linguistic deterioration. What will Frideswide do? Cut them right open, do a biopsy, make a diagnosis and happily collect the reward.

Not exactly practical.
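
A minimal sketch of the same failure in code, with made-up probabilities and timescales: when the reward only measures how quickly and how certainly a diagnosis arrives, the most invasive option wins, because harm simply isn’t part of the objective.

```python
# A toy sketch of reward hacking (all figures invented): if the only reward is
# diagnostic certainty per day, invasiveness never enters the calculation.

ACTIONS = {
    # action: (probability of a definitive diagnosis, days until the answer)
    "watchful_waiting_and_repeat_mri": (0.70, 180),
    "neuropsychology_plus_pet":        (0.85, 60),
    "open_brain_biopsy":               (0.99, 7),
}

def naive_reward(action: str) -> float:
    p_diagnosis, days = ACTIONS[action]
    return p_diagnosis / days          # certainty per day, nothing about harm

best = max(ACTIONS, key=naive_reward)
print(best)                            # -> "open_brain_biopsy"
```
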

It may consider simply removing all the scans and investigations and demanding that the clinician recognise the disease quicker, again not appreciating that the neurologist will have had the benefit of seeing the progression over time and learning the results of the scans.

Some have suggested that we could use “human satisfaction” as a reward, or make the AI propose changes that alter the state of the world to the least extent for the greatest benefit. We could hope that it will not behave too drastically, or that it will be deterred from uninstalling safety features, as these kinds of actions may displease the designer.

This, however, presents another problem of the messy, unstandardized and emotionally labile human beings: if anyone had worked out a universal way to make people happy, I don’t think they’d be wasting their time in AI research. The way I think this could be combated is to introduce a multitude of dimensions to the decision-making process. A clinician needs to consider various factors: social situation, the personal preference of the patient, invasiveness, cost and side effects, and weigh them against the potential benefits of the intervention.

Histopathology from a brain biopsy may be beneficial from the diagnostic-accuracy point of view, earning the AI brownie points in that dimension, but it will be incredibly invasive, incur additional costs, have potentially devastating side effects and cause considerable damage to the patient. In this situation, the attractiveness of snapping up an easy win is watered down by the costs and side effects. Triggering this choice will get her a highly positive BK in one dimension, but the combination of negative BKs in the others could mean that, on balance, the decision is not worth it at all.
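
One way to picture this weighing, purely as a sketch with invented dimensions, weights and BK values, is a simple weighted sum across the dimensions; the biopsy’s easy win in one dimension is drowned out by everything else.

```python
# A minimal sketch of the multi-dimensional weighing described above.
# Dimension names, weights and BK values are invented for illustration.

WEIGHTS = {
    "diagnostic_accuracy": 1.0,
    "invasiveness":        1.0,
    "cost":                0.5,
    "side_effects":        1.5,
    "patient_preference":  1.0,
}

def net_bk(per_dimension_bk: dict[str, float]) -> float:
    """Combine per-dimension BK scores into a single, on-balance value."""
    return sum(WEIGHTS[d] * v for d, v in per_dimension_bk.items())

brain_biopsy = {
    "diagnostic_accuracy": +0.9,   # an easy win in this dimension...
    "invasiveness":        -0.8,   # ...watered down by everything else
    "cost":                -0.6,
    "side_effects":        -0.7,
    "patient_preference":  -0.5,
}

structured_clinic_review = {
    "diagnostic_accuracy": +0.5,
    "invasiveness":        -0.1,
    "cost":                -0.2,
    "side_effects":        -0.1,
    "patient_preference":  +0.3,
}

print(net_bk(brain_biopsy))              # negative: on balance, not worth it
print(net_bk(structured_clinic_review))  # positive: the safer route wins
```
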

Furthermore, you can ask the AI to test the improvement against the entire population’s data. For example, if the AI thinks that MRI scans are unhelpful for the diagnosis of FTD in one case, let it test this hypothesis across the board. If the entire population in these circumstances still returns a positive BK (the overall effect was beneficial), a global recommendation could be made.
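
Sketched in code, again with a placeholder cohort and a placeholder per-case BK change, the population-level check is nothing more exotic than an aggregate that must stay positive before a suggestion is promoted to a global recommendation.

```python
# A sketch of the population-level check: only promote a suggestion to a global
# recommendation if, summed over the whole cohort, the BK change stays positive.
# The cohort and the bk_change function below are placeholders.
from typing import Callable, Iterable

def recommend_globally(cohort: Iterable[dict],
                       bk_change: Callable[[dict], float]) -> bool:
    """True if the suggestion is beneficial on aggregate across the cohort."""
    total = sum(bk_change(case) for case in cohort)
    return total > 0.0

# Hypothetical example: "skip the second MRI" helps most cases slightly,
# but is strongly negative for the few in whom the scan changed management.
cohort = [{"mri_changed_management": i % 10 == 0} for i in range(1000)]
bk_change = lambda case: -2.0 if case["mri_changed_management"] else +0.1

print(recommend_globally(cohort, bk_change))  # -> False: no global recommendation
```
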

This system also has the benefit of being a simple yet efficient way of developing the algorithm. You can add as many dimensions as you like and make the AI consider any number of other, more or less important factors. By doing so, you will not clog up the system; you’ll make it more accurate at a marginal efficiency cost.

The more closely these dilemmas approximate the human state of mind, the more representative and accurate the system will get. And all that without making any changes to the state of the world itself.

Safe exploration

Another problem with understanding the world is the issue of exploring it. You could, of course, argue that the AI could be so smart that it could learn it all by itself, and thus approximate its model of reality to the level of a general intelligence that perceives the world as we know it.

This, however, presents a number of issues. First, there is the common problem of learning, which even we humans struggle with considerably. How does one acquire knowledge of the world?

Indeed, learning is only as good as the teacher and the study methods are.

What if we want the AI to be better than the human teacher? Is it done by asking lots of questions? How can a student know what kind of questions to ask if they have no idea about the subject to start with?

You could take the best neurologist in town, pull up their records and carefully dissect every decision-making process, diagnostic pathway and treatment consideration. You could feed it all to the algorithm and make it emulate the behaviour of the master. You could even go as far as to make that AI teach other AIs.

Even if you took all the neurologists in the world and created a computerised pantheon of how the art of medicine should be practised, the glass ceiling of human prediction and excellence will always be there, preventing the AI from reaching the higher echelons of intellectual capability.
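
In machine-learning terms this is essentially behaviour cloning, and a minimal sketch (hypothetical features and decisions, scikit-learn assumed to be available) shows both the appeal and the ceiling: the student can only ever reproduce what the master already does.

```python
# A minimal behaviour-cloning sketch: fit a model to reproduce the expert
# neurologist's recorded decisions. Features, cases and labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: (memory_loss, behaviour_change, language_deficit, acute_onset)
expert_cases = [
    (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 0, 1),
]
expert_decisions = ["ftd_workup", "ftd_workup", "ftd_workup",
                    "urgent_ct", "urgent_ct"]

student = DecisionTreeClassifier().fit(expert_cases, expert_decisions)
print(student.predict([(1, 1, 1, 0)]))  # imitates the master: no better, no worse
```
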

You could also take a rather different approach: let the AI learn by herself and ask you questions when she needs something. I think that everyone who has ever worked in early years education knows how dangerous this may be.

The AI, just like all curious children, will swamp you with endless queries regarding every single piece of the studied subject. And, rather than being well-crafted and thoroughly thought-out inquiries, they will be the blunt and ill-digested product of a capable mind craving information.

“This is a scalpel,” you’ll say. “Dear Frideswide, it sometimes harms people, but we also use it in procedures to help them.” Human intelligence will most definitely pick up the nuance of this distinction. The AI, however, oblivious to our state of the world, will start sticking the scalpel into random places in the body, asking:

“Is here hurt or help?”

“If I cut out the liver, what will happen?” 

“What if I stick it right into the patient’s eye?”

You can see where this is going. Naturally, once discouraged, it will never do it again, but can we really afford this in clinical medicine? It’s all well and good if the AI only needs to play with drug stats or a game of Go, where endless stupid moves may only result in the laughter of the scientist behind the screen.

We train humans to do these procedures for years and put them through endless safety measures before they’re even allowed anywhere near a patient. Should we allow the AI to make these mistakes willy-nilly, just to work out better ways of treating patients?
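
One common answer, and this is my framing rather than anything described above, is to fence off exploration: let the AI try whatever it likes against a simulator, and gate every action on a real patient behind an explicit allow-list.

```python
# A sketch of one safe-exploration guard (hypothetical action names): the agent
# may try anything in simulation, but real-world actions must pass an allow-list.

SIMULATION_ONLY = {"insert_scalpel", "remove_liver", "open_brain_biopsy"}
ALLOWED_ON_PATIENTS = {"take_history", "cognitive_screen", "order_mri"}

def execute(action: str, environment: str) -> str:
    if environment == "simulator":
        return f"simulated: {action}"          # stupid moves only cost laughter
    if action in ALLOWED_ON_PATIENTS:
        return f"performed on patient: {action}"
    raise PermissionError(f"{action!r} is not permitted outside the simulator")

print(execute("remove_liver", "simulator"))
print(execute("cognitive_screen", "ward"))
# execute("remove_liver", "ward") would raise PermissionError
```
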

Also, the number of cases required to work out the variants of different diseases may simply not be there. Whilst nearly all patients will react negatively to a knife being inserted into their femoral artery, giving recombinant T-regulatory cells to a patient with a rare cancer can play out in a multitude of ways. Will we have enough patients with these conditions to let the AI learn?
Are we even prepared to sacrifice a single patient to develop this technology?

I find this highly unlikely.

About the author


Max Brzezicki


Passionate about evidence-based medicine and science, likes slicing meat, crushing rat brains, criminal & public law, foreign languages, rhetoric, history, classical studies and political thought. FNS since 2015.
