Artificial intelligence (AI) continues to bring benefits across many industries, including healthcare diagnostics and consumer technology. However, as its applications expand, so too do concerns about its accuracy and potential for misuse. Two recent examples—the use of AI in detecting ovarian cancer and its controversial implementation in summarising news—highlight both the transformative potential and the risks of AI.

AI in early cancer detection

Ovarian cancer is notoriously difficult to detect in its early stages. Early intervention is critical for improving survival rates, yet current methods rarely identify the disease before it spreads.

A breakthrough by Dr Daniel Heller and his team at Memorial Sloan Kettering Cancer Center offers hope. They have developed an AI-powered blood test that uses nanotube technology—tiny tubes of carbon that react to molecules in the blood. These nanotubes emit fluorescent light based on what binds to them, creating a molecular “fingerprint.”

The challenge lies in interpreting this data. The molecular patterns are too subtle for humans to discern, but machine-learning algorithms excel at recognising such complexity. By training the system on blood samples from patients with and without ovarian cancer, the team can identify the disease far earlier than conventional methods allow.
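
To make the idea concrete, the sketch below shows the general shape of such a classification task: a simple model is trained to separate "cancer" from "control" samples based on sensor readings. It is purely illustrative — the data is synthetic, the sensor count is arbitrary, and it does not represent the research team's actual algorithms or pipeline.

```python
# Illustrative sketch only: synthetic data standing in for nanotube
# fluorescence "fingerprints"; not the Memorial Sloan Kettering pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each row is one blood sample; each column is the fluorescence response
# of one nanotube sensor (sample and sensor counts chosen arbitrarily).
n_samples, n_sensors = 400, 20
X = rng.normal(size=(n_samples, n_sensors))
y = rng.integers(0, 2, size=n_samples)   # 1 = cancer, 0 = control
X[y == 1] += 0.4                         # fake, subtle signal shift

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A simple classifier learns which combinations of sensor responses
# separate cancer samples from controls.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out samples: {roc_auc_score(y_test, probs):.2f}")
```

The key point is that the model, not a human, learns which combinations of subtle sensor responses matter — which is exactly why the quality of the training data determines the quality of the diagnosis.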

This innovation could revolutionise diagnostics, not just for ovarian cancer but for other diseases, including infections like pneumonia. However, as with any AI system, its effectiveness depends on the quality of the data and algorithms used—bringing us to an example that underscores the risks of misapplied AI.

The risks of misapplied AI

Apple’s AI-driven news summarising feature on its latest iPhones has faced criticism for generating inaccurate headlines. The feature is designed to reduce the number of notifications smartphone users receive, but the BBC noted that “these AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content.”

The BBC, along with the journalism body Reporters Without Borders, has called for Apple to withdraw the feature, citing the dangers of misinformation.

Apple has since announced a software update that will make it clearer that the summaries are AI-generated, but critics argue this step is insufficient. They point out that the responsibility for verifying accuracy will still rest with users, complicating access to reliable information and undermining trust in the news.

Lessons for businesses

These contrasting examples of AI in action offer valuable lessons for businesses seeking to integrate AI.

  1. Ensuring accuracy is paramount: This is particularly vital in high-stakes applications like healthcare, where false positives or negatives in diagnostics can have life-altering consequences. In any context, AI systems must undergo rigorous testing and validation.
  2. Clear communication about AI’s role is essential: Miscommunication about AI’s functions and limitations can damage trust. Apple’s initial failure to acknowledge that its summaries were AI-generated contributed to public confusion and backlash.
  3. AI systems must include safeguards: To prevent the dissemination of false information, AI needs to be designed with robust checks and balances, as illustrated in the sketch after this list.
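
As a rough illustration of the third lesson, the sketch below gates an AI-generated summary behind simple checks and falls back to the original headline when they fail. The checks, thresholds, and labelling are assumptions made for the example, not a description of Apple’s system.

```python
# Hedged sketch of a safeguard pattern: an AI-generated summary is only
# shown if it clears basic checks; otherwise the original headline is used.
# All thresholds and checks here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Summary:
    text: str
    confidence: float        # model-reported confidence, 0.0 to 1.0
    source_headline: str

def display_text(summary: Summary, min_confidence: float = 0.9) -> str:
    """Return the text to show the user, falling back to the original."""
    too_uncertain = summary.confidence < min_confidence
    too_long = len(summary.text) > len(summary.source_headline) * 2
    if too_uncertain or too_long:
        return summary.source_headline               # safe fallback
    return f"{summary.text} (AI-generated summary)"  # label the output

# Usage example with made-up values: low confidence triggers the fallback.
s = Summary(text="Celebrity denies allegations", confidence=0.62,
            source_headline="Celebrity speaks publicly for the first time")
print(display_text(s))
```

The design choice is deliberately conservative: when the system cannot be confident, it defaults to the verified original rather than the generated text, and any generated text it does show is explicitly labelled.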

Balancing promise with caution

AI has the potential to bring significant benefits, but as these two examples illustrate, it is not without risks.

In the race to innovate, the lesson is clear: AI should be approached with caution. By rigorously testing its applications and communicating transparently, businesses can harness its benefits while minimising potential downsides.

See: https://www.bbc.co.uk/news/articles/cq8v1ww51vno

https://www.bbc.co.uk/news/articles/cge93de21n0o