Personalised Health Assistant
Large language models have made the creation of personalised health assistants easier than ever before. We have envisioned something like this in previous posts, for example in FI Support Model Example (in the context of helping the long-term unemployed) and in a previous post on how to iteratively build automated remote care.
Large language models are AI systems that can process human language and generate human-like responses: responses that are meaningful and correspond to what a human domain expert would give.
In the context of health, they can answer questions from patients or doctors. When a sick person describes their symptoms, the query to the LLM can be framed as follows: “what diagnostic steps are needed to make an initial analysis when the patient has these symptoms”. The answer will be a meaningful dialogue containing clarifying questions, perhaps a request to the patient to take a picture of the rash or to gently press some spot to see if it hurts, a question about how long they waited before making contact, and so on, finally leading to a recommendation.
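To make this concrete, here is a minimal sketch of how such a query could be framed in code. The `call_llm` helper and the prompt wording are illustrative assumptions, not any specific product's API:

```python
# Minimal sketch of framing a symptom description as an LLM query.
# `call_llm` is a hypothetical helper standing in for whatever LLM
# API the assistant is built on; the prompt wording is illustrative.

SYSTEM_PROMPT = (
    "You are a triage assistant. Given a patient's symptom description, "
    "ask clarifying questions (e.g. duration, location, a photo of a rash, "
    "whether gentle pressure causes pain) and recommend initial diagnostic steps."
)

def frame_triage_query(symptoms: str) -> list[dict]:
    """Build the message list sent to the model for an initial triage turn."""
    user_prompt = (
        "What diagnostic steps are needed to make an initial analysis "
        f"when the patient has these symptoms: {symptoms}"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# reply = call_llm(frame_triage_query("itchy rash on the forearm for three days"))
```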
The answer can either be used by a health professional or, in a fully automated option, by the patient themselves.
Some of these tasks will trigger calls to external scripts or API calls that fetch further information from existing sources. In a more automated system these calls can in turn cause self-diagnostic devices to be automatically sent to the user, or ask the user to take pictures with their own smartphone or to record a video of some set of actions. These are then fed to a machine learning system providing diagnostics.
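A sketch of how such tool calls could be dispatched might look like the following. The tool names and handler functions are hypothetical placeholders for real logistics and device integrations:

```python
# Sketch of dispatching tool calls requested by the model to backend
# actions. The tool names and handler functions are hypothetical; a
# real deployment would wire these to actual logistics and device APIs.

def order_test_kit(patient_id: str, kit_type: str) -> dict:
    """Placeholder: trigger shipping of a self-diagnostic device."""
    return {"status": "ordered", "kit": kit_type}

def request_photo(patient_id: str, instructions: str) -> dict:
    """Placeholder: ask the patient's phone app to capture a photo or video."""
    return {"status": "requested", "instructions": instructions}

TOOLS = {
    "order_test_kit": order_test_kit,
    "request_photo": request_photo,
}

def dispatch(tool_call: dict, patient_id: str) -> dict:
    """Route a tool call emitted by the LLM to the matching backend action."""
    handler = TOOLS[tool_call["name"]]
    return handler(patient_id, **tool_call["arguments"])

# Example: the model asks for a photo of the rash.
result = dispatch(
    {"name": "request_photo",
     "arguments": {"instructions": "Photograph the rash in daylight"}},
    patient_id="patient-123",
)
```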
The end result is a system that can take customer requests in spoken form, in the language that ordinary users use, understand them, and turn them into a personalised list of activities on the backend systems to make at least an initial diagnosis and give the patient guidelines on what to do. Sometimes much more.
To enable this, the health system needs to expose its capabilities behind APIs. This means turning the complex IT systems that staff now use into fine-grained APIs that automations weave into personalised responses.
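As an illustration, a single fine-grained endpoint could look something like this sketch (using FastAPI as an example framework; the path, parameters and response shape are assumptions, not an existing health API):

```python
# Sketch of one fine-grained endpoint a health system could expose.
# FastAPI is used only as an example framework; the path, parameters
# and response shape are assumptions.

from fastapi import FastAPI

app = FastAPI()

def fetch_from_record_system(patient_id: str) -> list[dict]:
    """Placeholder for the medical record backend."""
    return [{"type": "CRP", "value": 4, "unit": "mg/l"}]

@app.get("/patients/{patient_id}/lab-results")
def lab_results(patient_id: str, test_type: str | None = None) -> list[dict]:
    """Return the patient's lab results, optionally filtered by test type.

    In a real system this would read from the medical record backend
    and enforce authentication and consent checks.
    """
    results = fetch_from_record_system(patient_id)
    if test_type:
        results = [r for r in results if r["type"] == test_type]
    return results
```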
This is a personalised health assistant. The LLMs can be made open source, as can the diagnostics code and the machine learning models. So instead of health provision being expensive and centrally managed, everyone can have a free smartphone app helping with their health.
The health assistant offers you a way to ‘chat’ with your condition. You can ask how your case is progressing in the health system: where in the queue for an operation you are at the moment, what conditions must be met for you to be accepted for an operation and how your case compares to them, where in the treatment path you are and what is expected to happen next (i.e. what treatment is likely to be tried next and/or how your condition is predicted to progress), how many times you have visited a doctor about it, what types of tests and treatments you have received so far, and so on.
This would require that the enterprise resource planning (ERP) systems of the health system have open interfaces for enquiry (and that such ERPs exist in the first place). Over time this could lead to a model where cases are tracked systematically, which in turn makes it possible to detect bottlenecks in the system. Once they are identified, various solutions can be tried to resolve them.
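A sketch of the kind of enquiry the assistant could then make is below. The base URL, endpoint and response fields are hypothetical, since no such standard interface exists today:

```python
# Sketch of the kind of enquiry the assistant could make against an
# ERP's open interface. The URL, endpoint and response fields are
# hypothetical placeholders.

import requests

ERP_BASE = "https://erp.example-health.fi/api"  # hypothetical base URL

def case_status(case_id: str, token: str) -> str:
    """Fetch a case's queue position and next expected step."""
    resp = requests.get(
        f"{ERP_BASE}/cases/{case_id}/status",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return (
        f"Queue position: {data['queue_position']}, "
        f"next step: {data['next_step']}"
    )
```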
The health assistant can also have secondary effects that help the overburdened health systems of today, as it can reduce failure demand. So what is failure demand?
At least in Finland, the time you are allocated when visiting a doctor is sliced into small segments. The doctor is under constant time pressure and can treat only one thing at a single visit. There is a saying in the IT field that there is never time to do it right but always time to do it twice. In health and many other domains you have to use a number much bigger than two to understand the system. People visit a doctor, get a partial solution, and return for more and more and more.
Digital tools do not have these time limitations and can (if they are good enough, and we are a long way from that) look holistically at a patient and suggest solutions.
And digital tools are not limited to what the official health system has to offer. Many people need emotional support from their nearest and dearest, and there are non-governmental actors in the field in addition to the standard processes.
So, at least in principle, digital assistants can bring big improvements.
But nothing is without problems. One of the risks with such automated language-based assistants is that people can learn to say and show the right things to steer the results: for example, trying to trick the system into giving treatments they for one reason or another believe to be the right ones, or into issuing prescriptions for sedatives, and so on. This type of detailed instruction-giving to reach desired outcomes is called prompt engineering, and it can be used both for good and for bad.
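One common mitigation is to never let the model's output alone trigger a sensitive action. A sketch of such a server-side guardrail could look like this; the controlled-action list and the review queue are assumptions about how a deployment might work:

```python
# Sketch of a server-side guardrail: the model's output alone never
# triggers a sensitive action. The controlled-action list and the
# review queue are assumptions.

CONTROLLED_ACTIONS = {"issue_prescription", "order_sedative", "schedule_surgery"}

def run_backend_action(action: str, payload: dict) -> str:
    """Placeholder for the ordinary backend dispatcher."""
    return f"executed {action}"

def execute_action(action: str, payload: dict, review_queue: list) -> str:
    """Run low-risk actions directly; route controlled ones to a human."""
    if action in CONTROLLED_ACTIONS:
        # No matter what the conversation said, a clinician signs off.
        review_queue.append({"action": action, "payload": payload})
        return "queued for clinician review"
    return run_backend_action(action, payload)
```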
This is one topic that needs to be kept in mind when such systems are envisioned.