Human Resources
Double-edged sword: Inquiry digs into the promises and perils of AI in workplaces
People & Culture: At the public hearing on the Digital Transformation of Workplaces, stakeholders convened to explore the profound impact of generative AI on the labour market.
“Australian workplaces are changing the way they operate. The Committee wants to understand what these changes mean for employees and employers, our workplaces, and the way we regulate and govern our employment practices,” said Labor Party’s Lisa Chesters MP, Chair of the House Standing Committee on Employment, Education and Training Inquiry.
The 2024 Skills and Workforce Development survey by Ai Group revealed that 41% of Australian businesses are not yet engaging with artificial intelligence (AI), with small and medium-sized companies being the least involved.
Among the businesses that are embracing AI, over a third are leveraging AI tools to enhance business analytics (37%), optimise operations and reduce costs (36%), and automate labour-intensive processes (33%).
Australia plays a modest role in AI research, contributing 1.6% of global AI publications, yet an impressive 22% of Australian AI research ranks in the top 10% of published work worldwide. Despite this, the country lags in translating research into commercial success, with a mere 0.2% of global AI patent inventions originating in Australia. Meanwhile, countries like Canada, China, and Singapore are making strategic moves to boost their AI research and commercialisation through greater government investment.
The Promise and Perils of AI in Healthcare
In the healthcare sector, Dr Gerry Adam, Dean of the Faculty of Radiation Oncology, and Dr Rajiv Rattan, Dean of the Faculty of Clinical Radiology from the Royal Australian and New Zealand College of Radiologists (RANZCR), said that “artificial intelligence can benefit businesses and societies, provided that it is implemented and overseen carefully”.
However, they also caution about the legal uncertainties surrounding AI, particularly in cases where AI failure could lead to poor patient outcomes. These scenarios, they argue, require a clear understanding of the shared responsibilities between “clinicians, developers, and the site that implemented the AI”.
The rapid adoption of AI in healthcare underscores the need for end users to grasp the system’s limitations and outputs. Critical issues such as data ownership, privacy, and consent must be addressed at a national level as AI tools become more widespread.
RANZCR advocates for stringent regulation in the medical AI space to prevent patient harm while balancing the need for innovation, workforce development, and rapid AI integration in healthcare.
The Pharmacy Guild of Australia echoes these sentiments, emphasising that “the increased usage of digital technology in the pharmacy operations should improve the direct engagement between a primary healthcare worker and the consumer/patient to meet their health needs and not be confined to undertaking tasks/activities that constraint this enhancement opportunity.”
Challenges in AI Adoption: Skills Shortages and Ethical Concerns
The Australian Academy of Science warns that major barriers to AI adoption remain, including a lack of a diverse, skilled workforce and insufficient data literacy. The shortage of AI-skilled workers poses a significant risk for Australia, especially as competition for talent intensifies globally.
“AI offers great opportunities to Australia’s scientific workforce, but without guidance on how researchers should use AI tools, there is a risk of misuse,” the Academy stated.
Concerns about AI misuse extend beyond research, as there have already been instances of AI-generated papers being published. The risk of fraudulent or manipulated data being introduced into peer-reviewed journals could undermine the metrics used to allocate research funding. To mitigate these risks, the research sector requires immediate and robust guidelines.
The broader STEM workforce in Australia also faces challenges, with underrepresentation of women, people with disabilities, and First Nations individuals. Science & Technology Australia (STA) calls for government investment in AI to include guidelines that support workforce diversity.
Risk of Bias and Discrimination
The risk of bias and discrimination in AI adoption is a significant concern, as pointed out by Associate Professor Alysia Blackham of Melbourne Law School. Automated decision-making tools, she warns, carry the risk of perpetuating biases, leading to negative outcomes like stress and reduced workplace productivity.
The infamous Amazon recruitment tool, ultimately scrapped because it systematically discriminated against female applicants for software development and technical roles, is a stark example of such risks.
The tool had been trained on resumes submitted over a 10-year period. Because men are significantly over-represented in the field, they dominated both the pool of resumes and the pool of successful applicants. The tool ‘learnt’ to prefer male applicants and reportedly penalised applications containing the word “women’s” or the names of all-women’s colleges.
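The mechanism is easy to reproduce in miniature. The sketch below is hypothetical and deliberately simplified (it is not Amazon’s actual system): it scores resume words by how often they appeared in historically hired versus rejected applications, using invented synthetic data in which past hires skew male. Because the token “women’s” occurs only in the smaller, less-hired group, the model assigns it a negative weight.

```python
from collections import Counter

# Synthetic, hypothetical hiring history: men dominate past hires,
# so the token "women's" (e.g. "women's chess club") appears
# almost exclusively in the rejected pool.
hired = [
    "java developer chess club captain",
    "python engineer robotics team lead",
    "java engineer hackathon winner",
] * 30
rejected = [
    "java developer women's chess club captain",
    "python engineer women's robotics team lead",
] * 10 + ["junior developer no experience"] * 20

def word_scores(hired, rejected):
    """Score each word by P(word | hired) - P(word | rejected)."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    n_h = sum(h.values()) or 1
    n_r = sum(r.values()) or 1
    return {w: h[w] / n_h - r[w] / n_r for w in set(h) | set(r)}

def screen(resume, scores):
    """Rank a resume by summing the scores of its words."""
    return sum(scores.get(w, 0.0) for w in resume.split())

scores = word_scores(hired, rejected)

# Two resumes identical except for one word: the model, having
# seen "women's" only among rejections, ranks the second lower.
a = screen("java developer chess club captain", scores)
b = screen("java developer women's chess club captain", scores)
print(a > b)  # True
```

No word here carries any meaning to the model beyond its frequency in each historical pool, which is exactly why a skewed history produces a skewed screen.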
Transparency about the use of automated decision-making tools in workplaces remains minimal, with no current requirement for employers to disclose their use or address errors. In the private sector, where most Australians are employed, there are often no mechanisms to review AI-assisted decisions.
The Australian Public Service, for example, attempted to use AI-assisted technology to manage promotions; many of those promotions were later overturned for not having been made on the basis of merit. This came to light only because the Public Service has a dedicated Merit Protection Commissioner, a form of review private-sector workers rarely have.
Australia’s privacy laws are also lagging, particularly when compared to the EU’s robust regulatory framework. The EU’s General Data Protection Regulation (GDPR) mandates a human decision-maker in significant automated decisions, a safeguard that Australian law currently lacks.
The EU’s Artificial Intelligence Act, which categorises AI systems used in the workplace as high-risk, requires rigorous risk management, transparency, and human oversight. As Australia considers its own AI regulations, these international examples provide valuable lessons on protecting workers and workplaces by ensuring AI is used responsibly.
In reimagining healthcare, Health Industry Hub™ is the ONLY one-stop-hub uniting the diversity of Pharma, MedTech, Diagnostics & Biotech sectors to inspire meaningful change.