Artificial intelligence is already in our hospitals. 5 questions people want answered

Artificial intelligence (AI) is already being used in health care. AI can look for patterns in medical images to help diagnose disease. It can help predict who in a hospital ward might deteriorate. It can rapidly summarise medical research papers to help doctors stay up to date with the latest evidence.

These are examples of AI making or shaping decisions health professionals previously made. More applications are being developed.

But what do consumers think of using AI in health care? And how should their answers shape how it’s used in the future?



What do consumers think?

AI systems are trained to look for patterns in large amounts of data. Based on these patterns, AI systems can make recommendations, suggest diagnoses, or initiate actions. They can potentially continually learn, becoming better at tasks over time.
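
To make that idea concrete, here is a minimal sketch in Python of the pattern-learning step described above. Everything in it is invented for illustration: the synthetic vital signs, the simulated "deterioration" label, and the simple logistic-regression model all stand in for the far more complex, clinically validated systems a real hospital would need.

```python
# Illustrative only: train a toy classifier on synthetic vital signs to
# "learn a pattern" that flags patients at risk of deterioration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: heart rate (bpm), respiratory rate (breaths/min),
# systolic blood pressure (mmHg).
X = np.column_stack([
    rng.normal(80, 15, n),
    rng.normal(16, 4, n),
    rng.normal(120, 20, n),
])

# Simulated label: patients whose vitals drift toward a risky pattern are
# marked as having deteriorated. This rule is made up for the demo.
risk = 0.04 * (X[:, 0] - 80) + 0.3 * (X[:, 1] - 16) - 0.05 * (X[:, 2] - 120)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

# Learn the pattern from past cases, then score unseen patients.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is only the workflow: the model is shown historical examples, finds a statistical pattern linking vital signs to outcomes, and can then score patients it has never seen.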

If we draw together international evidence, including our own and that of others, it seems most consumers accept the potential value of AI in health care.

This value could include, for example, increasing the accuracy of diagnoses or improving access to care. At present, these are largely potential, rather than proven, benefits.

But consumers say their acceptance is conditional. They still have serious concerns.

1. Does the AI work?

A baseline expectation is AI tools should work well. Often, consumers say AI should be at least as good as a human doctor at the tasks it performs. They say we should not use AI if it will lead to more incorrect diagnoses or medical errors.



2. Who’s responsible if AI gets it wrong?

Consumers also worry that if AI systems generate decisions – such as diagnoses or treatment plans – without human input, it may be unclear who is responsible for errors. So people often want clinicians to remain responsible for the final decisions, and for protecting patients from harms.

3. Will AI make health care less fair?

If health services are already discriminatory, AI systems can learn these patterns from data and repeat or worsen the discrimination. So AI used in health care can make health inequities worse. In our studies, consumers said this is not OK.
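
One way researchers check for this is to compare a model's error rates separately for each patient group, since a model can look accurate overall while failing one group badly. The sketch below is purely illustrative: the groups, outcomes, and predictions are all simulated, and the extra 30% miss rate for group B is an invented assumption, not a figure from any study.

```python
# Illustrative only: audit a model's mistakes per patient group.
# All data here are simulated; the disparity is built in on purpose.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

groups = np.array(["A"] * (n // 2) + ["B"] * (n // 2))
y_true = rng.integers(0, 2, n)  # true outcomes (1 = disease present)
y_pred = y_true.copy()          # start from a perfect "model"...

# ...then simulate a biased one that misses 30% of positive cases in group B.
miss = (groups == "B") & (y_true == 1) & (rng.random(n) < 0.3)
y_pred[miss] = 0

for g in ("A", "B"):
    positives = (groups == g) & (y_true == 1)
    fnr = np.mean(y_pred[positives] == 0)  # false-negative rate per group
    print(f"Group {g}: false-negative rate = {fnr:.2f}")
```

When the false-negative rates diverge like this, the model is missing real cases more often for one group: exactly the kind of inequity consumers rejected.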



4. Will AI dehumanise health care?

Consumers are concerned AI will take the “human” elements out of health care, consistently saying AI tools should support rather than replace doctors. Often, this is because AI is perceived to lack important human traits, such as empathy. Consumers say the communication skills, care and touch of a health professional are especially important when feeling vulnerable.



5. Will AI de-skill our health workers?

Consumers value human clinicians and their expertise. In our research with women about AI in breast screening, women were concerned about the potential effect on radiologists’ skills and expertise. Women saw this expertise as a precious shared resource: too much dependence on AI tools, and this resource might be lost.



Consumers and communities need a say

The Australian health-care system cannot focus only on the technical elements of AI tools. Social and ethical considerations, including high-quality engagement with consumers and communities, are essential to shape AI use in health care.

Communities need opportunities to develop digital health literacy: digital skills to access reliable, trustworthy health information, services and resources.

Respectful engagement with Aboriginal and Torres Strait Islander communities must be central. This includes upholding Indigenous data sovereignty, which the Australian Institute of Aboriginal and Torres Strait Islander Studies describes as:

“the right of Indigenous peoples to govern the collection, ownership and application of data about Indigenous communities, peoples, lands, and resources.”

This includes any use of data to create AI.


[Image: Respectful engagement with Aboriginal and Torres Strait Islander communities is vital. Thurtell/Getty Images]





This critically important consumer and community engagement needs to take place before managers design (more) AI into health systems, before regulators create guidance for how AI should and shouldn’t be used, and before clinicians consider buying a new AI tool for their practice.

We’re making some progress. Earlier this year, we ran a citizens’ jury on AI in health care. We supported 30 diverse Australians, from every state and territory, to spend three weeks learning about AI in health care and developing recommendations for policymakers.

Their recommendations, which will be published in an upcoming issue of the Medical Journal of Australia, have informed a recently released national roadmap for using AI in health care.

That’s not all

Health professionals also need to be upskilled and supported to use AI in health care. They need to learn to be critical users of digital health tools, including understanding their pros and cons.

Our analysis of safety events reported to the US Food and Drug Administration shows the most serious harms came not from a faulty device, but from the way consumers and clinicians used the device.

We also need to consider when health professionals should tell patients an AI tool is being used in their care, and when health workers should seek informed consent for that use.

Lastly, people involved in every stage of developing and using AI need to get accustomed to asking themselves: do consumers and communities agree this is a justified use of AI?

Only then will we have the AI-enabled health-care system consumers actually want.

This article was first published on The Conversation and was written by Stacy Carter, Professor and Director, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong; Emma Frost, PhD candidate, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong; Farah Magrabi, Professor of Biomedical and Health Informatics at the Australian Institute of Health Innovation, Macquarie University; and Yves Saint James Aquino, Research Fellow, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong.

 
As an aged, disabled person who often finds herself admitted to hospitals, I am very hesitant about the use of AI as a diagnostic aid! It's hard enough getting doctors to take what I say is happening as fact! They often decide to totally ignore whatever I say and go off on investigations that are neither needed nor wanted! The last time I was admitted was because of a severe allergic reaction to a medication; I couldn't stop vomiting. I had been drinking Pepsi-flavoured water beforehand, and naturally the vomit was dark brown, so they ignored what I told them I'd been drinking and instead decided I was vomiting blood, going down the road of multiple scans etc. when all that was required was a change to the medication I was taking... THERE WAS NEVER ANY BLOOD, IT WAS PEPSI! So, if human doctors won't listen, what hope is there for AI to make correct diagnoses?
 
If there is a caring human doctor in the Southern Highlands Region, please let me know. I haven't found one yet. They are usually too busy looking at their clocks to get you out.
 
Where I live, we have a roster of locum doctors. Most are not really interested, and just write the script I ask for. Not really good health care.
 
AI systems can make recommendations, suggest diagnoses, or initiate actions. Just my luck I'll wake up and find they've sewn my arsehole up!
 
The only experience with AI I have had is Alexa.
We use it for my hubby's aged care. She keeps him up to date with medications and appointments, he can switch lights on and off without having to heave himself out of his bed, and she gives him someone else to argue with because he keeps calling her Alexis. It is handy.
I am not happy with the idea of AI taking over medical care. I am the type of person who wants to be face to face so I can see reactions when I ask pointed questions about medical status and care.
AI is very pointed, without emotion and not able to consider a person's emotional status when being treated.
I guess there is a fine line between personal care and no care at all.
My grandchildren will most likely be the ones to make considered judgement about this new trend.
 
Just like when Madame Curie discovered radium, these things can be used for good and for destruction. Unfortunately, I do not trust our population enough to think this will have a positive outcome.
 
