Bleich’s whim sharply illustrates the dilemma of the education system. A teacher or lecturer can no longer be sure that students’ work is their own – existing plagiarism checkers have no defense against ChatGPT.
So far, three Australian states have blocked its use on school internet networks – NSW, Queensland and Tasmania – as have many other education authorities internationally. But they can’t block it everywhere. As Elon Musk noted in response to its arrival: “Goodbye homework.”
Jeremy Weinstein, a professor at Stanford University in the heart of Silicon Valley and co-author of System Error: Where Big Tech Went Wrong and How We Can Reboot, points out that the creator of ChatGPT – a San Francisco firm called OpenAI – “is only one company, and there are dozens of companies developing these large language models”.
Weinstein says that “it’s clearly a revolution” and that “like many technological advances before it, the world will be completely different as a result”.
In an anonymous survey of about 4,500 Stanford students conducted this month by the student newspaper, The Stanford Daily, 17 percent said they had used ChatGPT in their final exams and assignments even though it was a violation of the university’s honor code.
“One of the groups bearing the cost of this is teachers and the education system – we’re at a point where teachers and school districts are overwhelmed,” Weinstein tells me. “Are we approaching this new moment with attention to its potential harms? Certainly not.”
It should be possible to integrate a program like ChatGPT into teaching, just as the calculator, once a disruptive novelty, was eventually integrated into the teaching of mathematics. But schools, companies and regulators are not ready, Weinstein says: “Do any companies or governments have the infrastructure to enable the benefits of this technology and mitigate its potential harms? We don’t have standards or codes in our societies, and it’s a race between disruption and democracy – and democracy always loses.”
The world is in a “seat belt moment” with machine learning, as the auto industry was when basic safety equipment was introduced in the 1960s and 1970s – except that no one is installing the seat belts yet. “Government largely lacks the capacity to regulate the technology environment,” says Weinstein. “In AI, we are reliant on self-regulation. It puts things like platform moderation in the hands of a single individual, which is deeply uncomfortable for a lot of people” – a reference to Musk’s control of Twitter.
This can also produce perverse results, like the recruitment system created by Amazon. The machine-learning program ingested all of Amazon’s existing hiring data and applied it to new job applicants. The result was a bot that systematically discriminated against women. It was beyond repair and had to be scrapped.
It is one of the limitations of machine learning that it knows only what is in the data it was trained on. So ChatGPT can deliver impressively broad and fast answers drawn from the internet, but it’s only as accurate as what’s on the internet. And we all know how accurate that is. Caveat emptor.
The dilemma posed by ChatGPT extends far beyond education. “There’s going to be a lot of anxiety about [artificial intelligence] targeting white-collar jobs,” Bill Gates predicted during a visit to Sydney last week.
AI was already coming for blue-collar jobs, as Bleich well knows. Barack Obama’s ambassador to Australia from 2009 to 2013, he is now the chief legal officer of Cruise, which already has a hundred driverless taxis on the streets of San Francisco offering rides to the public.
Driverless vehicles have yet to be perfected, but they already have a better safety record than humans behind the wheel. The implications are clear for the millions of people who make a living as delivery drivers, couriers, truck drivers, taxi drivers and Uber drivers.
The release of ChatGPT is now sending chills through the white-collar set. Lawyers, doctors, journalists and academics all face the prospect of serious disruption as machine learning promises to do some of their work faster and at almost zero cost. Millions more jobs are exposed.
One member of the US Congress, Ted Lieu, a California Democrat with a degree in computer science, says he is “terrified” of AI. He has proposed that a federal commission consider how to regulate it, and hopes this will eventually lead to the creation of an agency for AI along the lines of the US Food and Drug Administration.
Weinstein agrees that this is the kind of ambition that is needed, and says Australian regulators can play an important role: “I think we’re in a moment of regulatory experimentation. So while small markets can’t influence the extraterritorial behavior of the big technology companies, they can experiment with new policy and regulatory approaches. That’s of huge value right now.”
As for Oscar, the sonnet failed to change his ways, Bleich says. “But the bell we finally managed to get around his neck seemed about right. A constant reminder that there are things machine learning can’t do. Yet.”