Advances in healthcare artificial intelligence (AI) are occurring rapidly and will soon have a significant real-world impact. Several new AI technologies are approaching feasibility and a few are close to being integrated into healthcare systems [1, 2]. Because of this rapid progress, there is a growing public discussion about the risks and benefits of AI and how to manage its development [13].
Specifically, in radiology, AI is proving to be highly useful for the analysis of diagnostic imagery [3, 4]. For example, researchers at Stanford have produced an algorithm that can interpret chest X-rays for 14 distinct pathologies in just a few seconds [5]. Moreover, radiation oncology, organ allocation, robotic surgery and several other healthcare domains also stand to be significantly impacted by AI technologies in the short to medium term [6, 7, 8, 9, 10]. In the United States, the Food and Drug Administration (FDA) recently approved one of the first applications of machine learning in clinical care—software to detect diabetic retinopathy from diagnostic imagery [11, 12].
Many technological discoveries in the field of AI are made in an academic research environment. However, commercial partners can be necessary for disseminating these technologies for real-world use. As such, these technologies often undergo a commercialization process and end up owned and controlled by private entities. In addition, some AI technologies are developed within biotechnology startups or established private companies [14].
The nature of AI implementation could mean that such corporations, clinics and public bodies will have a greater role than is typical in obtaining, utilizing and protecting patient health information. Furthermore, because AI itself can be opaque to oversight, a high level of engagement with the companies developing and maintaining the technology will often be necessary.
This raises privacy issues relating to implementation and data security. The first set of concerns involves the access, use and control of patient data in private hands. Some recent public–private partnerships for implementing AI have resulted in poor protection of privacy, prompting calls for greater systemic oversight of big data health research. Private custodians of data can be influenced by competing goals and should be structurally encouraged to ensure data protection and to deter the use of patient data for other purposes.
Another set of concerns relates to the external risk of privacy breaches through AI-driven methods. The ability to deidentify or anonymize patient health data may be compromised or even nullified in light of new algorithms that have successfully reidentified such data. This could increase the risk to patient data under private custodianship.
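To illustrate why deidentification can be compromised, the sketch below shows a basic linkage attack: an "anonymized" dataset is joined back to named records using only quasi-identifiers such as birth year, ZIP code and sex. The data, column names and datasets here are entirely hypothetical and serve only to illustrate the mechanism; real reidentification algorithms are far more sophisticated, but rely on the same principle of linking quasi-identifiers across datasets.

```python
import pandas as pd

# Hypothetical "deidentified" clinical records: direct identifiers removed,
# but quasi-identifiers (birth year, ZIP code, sex) retained.
deidentified = pd.DataFrame({
    "birth_year": [1956, 1987, 1987, 2001],
    "zip_code":   ["10001", "94305", "94110", "60601"],
    "sex":        ["F", "M", "M", "F"],
    "diagnosis":  ["diabetes", "asthma", "hypertension", "migraine"],
})

# Hypothetical public dataset (e.g., a voter roll) containing names
# alongside the same quasi-identifiers.
public_records = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Lee"],
    "birth_year": [1956, 1987, 2001],
    "zip_code":   ["10001", "94305", "60601"],
    "sex":        ["F", "M", "F"],
})

# Simple linkage attack: join on the quasi-identifiers. Records that match
# exactly one named individual are effectively reidentified.
linked = deidentified.merge(public_records, on=["birth_year", "zip_code", "sex"])
print(linked[["name", "diagnosis"]])
```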
Appropriate safeguards must be in place to maintain privacy and patient agency. We are currently in a familiar situation in which regulation and oversight risk falling behind the technologies they govern. Regulation should emphasize patient agency and consent, and should encourage increasingly sophisticated methods of data anonymization and protection.
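As one example of the more sophisticated protection methods referred to above, the sketch below adds calibrated Laplace noise to an aggregate query, a basic form of differential privacy. The query, threshold and epsilon value are illustrative assumptions, not recommendations for any particular deployment.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0, rng=None):
    """Return a differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy for this single query.
    """
    rng = rng or np.random.default_rng()
    true_count = int(np.sum(np.asarray(values) > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: number of patients with HbA1c above 6.5%.
hba1c = [5.9, 7.2, 6.8, 5.4, 8.1, 6.6]
print(dp_count(hba1c, threshold=6.5, epsilon=0.5))
```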
In terms of legislative developments, the European Commission has proposed legislation containing harmonized rules on artificial intelligence [16]. Additionally, the United States Food and Drug Administration is now certifying the institutions that develop and maintain AI, rather than focusing on the AI itself, which will constantly be changing [15].