A 65-year-old former Commonwealth Bank of Australia (CBA) employee says she was left “shell-shocked” after discovering that the artificial intelligence system she helped train ultimately replaced her role.
Kathryn Sullivan, a bank teller who worked with CBA for 25 years, was made redundant in July after completing final tasks that involved scripting and testing responses for the bank’s Bumblebee AI.
“We just feel like we were nothing, we were a number,” Sullivan said. While acknowledging the benefits of AI, she warned against unregulated adoption: “I believe there needs to be some sort of regulation to prevent copyright infringements or replacing humans.”
Initially, CBA did not respond to her inquiries for more than a week. The bank later admitted its AI rollout was premature and offered to reinstate affected workers. Sullivan declined the offer, saying she no longer felt secure in the revised role.
A CBA spokesperson conceded that the bank’s initial assessment “did not adequately consider all relevant business considerations,” an error that led it to wrongly declare 45 roles redundant.
Despite the controversy, the bank continues to expand its AI strategy. CEO Matt Comyn recently announced a partnership with OpenAI to combat scams, fraud, cyber threats, and financial crime.
The case has fueled fresh debate over AI ethics, job security, and whether safeguards are needed to prevent companies from displacing workers with systems trained by those same employees.
The discussion resonates globally—including in Nigeria—where banks and tech firms are increasingly exploring AI applications in customer service, fraud detection, and financial operations.