AI image recognition technology used to detect traffic offences raises privacy concerns, Queensland audit finds
Queensland's transport department is "not effectively identifying" the ethical risks of artificial intelligence used in mobile phone and seatbelt cameras, a report has found.
Some of those include privacy concerns, inadequate human oversight to ensure fair decisions, inaccurate image recognition, and issues with photo handling and storage.
The Department of Transport and Main Roads (TMR) uses AI in its cameras to detect mobile and seatbelt offences, and in QChat — a virtual assistant for Queensland government employees.
A Queensland Audit Office (QAO) report urged TMR to perform department-wide ethical risk assessments for both of these technologies.
AI reducing need for external review
The AI image recognition technology used in mobile and seatbelt cameras filters out photos unlikely to have captured offences, making them easier for humans to review. From a total of 208.4 million assessments by AI in 2024, about 114,000 fines were issued.
AI reduced the volume of images that needed human review by an external vendor by 98.7 per cent, to 2.7 million, the report found.
After the external vendor's review, the Queensland Revenue Office also reviewed 137,000 potential offences in this period.
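As a rough check of the scale of that filtering, the arithmetic can be reproduced from the figures reported above. This is an illustrative sketch based only on the report's published numbers, not TMR's actual processing pipeline, and the variable names are hypothetical:

# Illustrative check of the figures cited in the QAO audit; not TMR's actual system.
total_ai_assessments = 208_400_000    # images assessed by AI in 2024
sent_to_external_review = 2_700_000   # images still needing human review by the external vendor
reviewed_by_qro = 137_000             # potential offences also reviewed by the Queensland Revenue Office
fines_issued = 114_000                # approximate fines issued

reduction = 1 - sent_to_external_review / total_ai_assessments
print(f"Reduction in external review workload: {reduction:.1%}")   # prints ~98.7%
print(f"Fines as a share of all AI assessments: {fines_issued / total_ai_assessments:.3%}")   # ~0.055%

Run as written, this reproduces the 98.7 per cent reduction cited in the report.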
While the mobile phone and seatbelt technology (MPST) program has included some mitigation strategies, including human review of potential offences, it must assess the "completeness and effectiveness of these arrangements," the report said.
The department has reviewed some general ethical risks in the MPST program, but without a more comprehensive review it "does not know whether all ethical risks for the MPST program are identified and managed", according to the QAO.

About 114,000 fines were issued in 2024 after AI assessments. (Supplied: Department of Transport and Main Roads)
'Lacks full visibility' over AI
The report also raised concerns that users of QChat may interact with the tool in an unintended or inappropriate manner, breaching ethical and legislative procedures. Users could also mistakenly upload protected information or receive misleading or inaccurate information from the virtual assistant.
To mitigate these risks, TMR must establish monitoring controls and take a more structured approach to staff training, according to the report.
Overall, the report found the department could do more to consider the risks of AI.
"It has taken initial steps, but lacks full visibility over AI systems in use," the QAO said.
It recommended the department strengthen its oversight of ethical risks, assess and update governance arrangements, and implement appropriate assurance frameworks.
In a response to the report, TMR Director-General Sally Stannard said the department had accepted the recommendations and was already progressing them.
"While TMR has implemented a range of controls to mitigate the ethical risks, we will ensure current processes are assessed against the requirements of the AI governance policy," she said.
Written by ABC News.