Artificial Intelligence – Report Back

Artificial Intelligence with PAUL DAVIES of Hilltop Digital Labs.

FRIDAY 19 APRIL 2024

Ok, what’s AI? The basic idea is to give computers loads of data and tell them to learn from it. That exercise is repeated, so that very quickly the application can detect patterns which would previously have taken months or years to discover. The program is then required to use that knowledge to refine its working practices, and to keep refreshing them. Simple, really.
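That loop – guess, measure the error, nudge the guess, repeat – can be sketched in a few lines of Python. Everything here (the data, the hidden linear rule) is an invented toy example, not any particular AI system:

```python
# Toy illustration of the learn-from-data loop: guess a rule,
# measure the error, nudge the rule, repeat many times.
# The "hidden pattern" y = 2x + 1 is made up for illustration.

data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # the program's current "working practice"
lr = 0.01         # how big a nudge to take each time

for _ in range(2000):        # the repeated exercise
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x    # refine the rule from the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # ends up close to the hidden pattern (2, 1)
```

After a couple of thousand passes the parameters settle near the rule that generated the data – the "pattern detected" – without anyone having told the program what that rule was.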

And as Paul Davies pointed out, this is nothing new. The first autonomous vehicle, a Mercedes van, appeared in 1986. Chess Grand Master Garry Kasparov was defeated by IBM’s Deep Blue back in 1997, while Go champion Lee Sedol held out a while longer, beaten by Google DeepMind’s AlphaGo in 2016 (he retired soon after, saying, “Even if I become the number one, there is an entity that cannot be defeated.”). Amazon, Netflix, YouTube and Alexa all detect patterns in our behaviour and make recommendations to us. Modern life would be unthinkable without it; a whole generation has grown up knowing that the machine can be smarter than the best human brain – though of course, only at what it is programmed to do. For the moment.

Indeed, the term “AI” only gets applied when anxieties about privacy, control or deep-fakery are being expressed, often via AI-dependent platforms such as TikTok and X. Familiarity breeds complacency. The irony seems lost on many people.

Paul started his working life in 1997 as an academic geophysicist, studying an impact crater in Mexico with banks of computers in tents with no air conditioning – clearly a memorable experience (he gained a PhD from Imperial College; they don’t get brighter than that). One big issue which surfaced, with repeated disappointments, was that computers were often being asked the wrong question, especially when pitched against human intelligence and experience. The example he gave was horseracing form. A computer should be able to pick winners and make betting a lucrative doddle, right? Wrong – the bookmakers got there first, and set their odds accordingly. So the question shouldn’t be, “Who’s going to win?” but “When are the bookies wrong?” And that is much harder.
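The "when are the bookies wrong?" reframing can be made concrete with a bit of arithmetic. Decimal odds of 5.0 imply the bookmaker rates the horse at 1/5.0 = 20%; a bet only has positive expected value when your own estimate beats that implied figure. The odds and probabilities below are invented for illustration:

```python
# Sketch of the value-betting arithmetic behind "when are the bookies wrong?"
# All figures are hypothetical.

def implied_probability(decimal_odds: float) -> float:
    """The win probability baked into the bookmaker's odds."""
    return 1.0 / decimal_odds

def expected_value(stake: float, decimal_odds: float, model_prob: float) -> float:
    """Average profit per bet, given your own estimate of the win probability."""
    win = model_prob * stake * (decimal_odds - 1)   # profit if the horse wins
    lose = (1 - model_prob) * stake                 # stake lost otherwise
    return win - lose

odds = 5.0
print(implied_probability(odds))        # 0.2 – the bookie's view
print(expected_value(10, odds, 0.20))   # ~0 – merely agreeing with the bookie earns nothing
print(expected_value(10, odds, 0.30))   # positive – profit only when the bookie is wrong
```

If your model simply agrees with the odds, the expected profit is zero (before the bookmaker’s margin, which in practice makes it negative) – which is why "who wins?" is the wrong question.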

Paul was employed in the NHS for five years and has worked with the sector ever since, including in United Health Group’s Optum Lab, where his budget for the UK was $60 million – “a rounding error for the business as a whole!” He feels the NHS has excellent data but “doesn’t have the outlook to use it well.” Again, it’s not a new problem. Efforts have been made since 1998 to computerise the NHS, leading to a £40bn package announced in 2002. Tony Blair, PM at the time, still believes that technology is the answer to many problems. He may well be right, but that top-down project collapsed in acrimony, along with many of the smaller businesses involved, after over £12bn had been spent. That left little appetite in government for similar large-scale projects. Perhaps, I reflected, given the awfulness of the Post Office Horizon scandal, it’s no bad thing that its failures were accepted relatively quickly.

So carrying through computerisation on smaller datasets has been the chosen path instead. And that’s where businesses like Paul’s come to the fore.

Public discourse is often about the dangers of AI, but in practice “you have to make sure it works,” Paul said. Sounds like a statement of the bleedin’ obvious, until you realise the implications. The FreeStyle Libre app, which enabled diabetics to monitor their blood glucose and warned when to inject insulin, morphed into an automatic delivery system. But when it went wrong, users’ lives were suddenly at risk. This can happen with an innocuous update, as every Tesla owner can attest. Ask the right question, though, and the outcomes are lifesaving: a 2010 screening service for diabetic retinopathy was aimed at preventing sight loss, but its AI turned out to be better at measuring blood pressure than any other system, and could thus detect those at risk of heart disease and strokes. If I were the Health Minister concerned, it’d be in every chemist’s right now.

During Covid, Paul’s company HD Labs won a £30,000 contract to develop a chatbot to help with the many calls to clinicians which turned out to be non-clinical, the idea being that inquirers could be redirected to the practical help and information they really needed. This required an interesting take – how do people say things, and what do they really mean? “The language we use gives verbal clues,” said Paul, for example to our commitment to change such behaviours as giving up smoking or adopting a better diet. Clinicians (and slimming club consultants) are trained to spot this and give the appropriate steer and support. Machines, however, are some way behind: “ChatGPT etc. are not doing this to any level of sophistication,” said Paul. But in a few years’ time it may be quite different.

Dealing with health matters throws up immediate ethical issues. When technologies are being deployed, the ethical questions must be incorporated from the start; not only will they cause damage if they go wrong, the bad publicity adds resistance to further AI systems in future. A skin cancer scan which had only been tested on white skin was found in 2021 to fail with black skin, because there was no data set of black skin on which to train it. Ask ChatGPT to picture a “chiropractor” or a “successful CEO” and it will show you a white man. No wonder people get angry.

Should we modify what we are expecting? “AI simply helps us get through a process more quickly,” said Paul. “It does not replace practitioners, but if done well should free up their time to concentrate on the human side.” His latest ambition is to develop a “Kitemark” with the British Standards Institution, a form of validation homework which developers could use to avoid basic errors.

The NHS phone app now has 33 million users – two-thirds of adults as of December 2023 – with over-65s the most active; so much for the “old people aren’t capable” argument. Some £2 billion has recently been allocated in the NHS for more digital health applications of one kind or another. The stated aims are “to empower the person, support the frontline, integrate services, manage the system and create the future.” If these apps work, the sky’s the limit. Said Paul: “In the USA, the insurers are very interested.” You bet.

An obvious area where AI could help allocate resources better and quicker is bed blocking, where patients are in hospital for ages while care packages are organised. That’s a nightmare for families; checking power of attorney alone can take months. Another example: Blackpool is trialling a system with young mothers, alert for early warning of post-natal depression. “The real problem is data sharing,” Paul said. “It’s available – but there’s no simple way of connecting the systems.” Good luck with that… or maybe, as Sir Tim Berners-Lee has suggested, one solution is Personal Online Datastores (Pods), which could empower each of us as citizens to decide when AI algorithms can access our data. Could be that’s where the NHS app is heading.

I was reminded of the efforts made in the 1980s as we introduced breast cancer screening. It wasn’t only the equipment we lacked (all of which had to be imported), or staff to operate it and interpret the results. GP patient lists were unreliable, theatre time was non-existent and lab capacity had to be created from scratch. There were no counsellors to help with recovery or trauma. We even had to change the rules on consent to ensure that implants remained the property of the patient. It was fiendishly complicated, far more than users realised. But we got it together and it is still working – let’s hope Paul and his team will say the same about their work in 30 years’ time.

A thought-provoking meeting, with lots of wonderful ideas.

If you would like to look back at what happened to the £12bn in the NHS, here are the Guardian’s takes on it:

https://www.theguardian.com/society/2002/apr/25/epublic.technology

https://www.theguardian.com/business/2010/mar/21/nhs-national-program-problems