People say nothing funny ever happens on LinkedIn, and in my experience, people are right.
But the other day, a colleague showed me a post on the site from a man named Chris who said he had started joining online meetings 30 seconds early, all the better to be discreetly recorded by the AI note-taking assistants now used to transcribe virtual meetings.
He would then scream that he was on the Titanic, which had just hit an iceberg, and needed help pronto, before carrying on normally for the rest of the meeting.
“When the meeting ends,” he wrote, “everyone gets an emailed transcript where the AI summary is: ‘Chris hit an iceberg, is trapped on a sinking ship, and general Q2 pricing updates.’”
I enjoyed this story and hope it travels far on the grounds that finally, someone may have found a good use for AI in the office.
Obviously I hear constantly about the latest “use case” in the “AI space” that is going to make working life more productive, efficient and streamlined.
I also realise that scientists at Google DeepMind were joint winners of last year’s Nobel chemistry prize for an AI model that is already helping to speed up work on intractable problems such as antibiotic resistance and plastic pollution.
In the right hands, artificial intelligence can clearly be a force for great good. It’s just that I keep coming across people like Sarah Harrop, who know how dire it can be in the wrong hands.
Harrop is an employment partner at the Addleshaw Goddard law firm in London, which means she deals with claims of unfair dismissal, discrimination and other forms of mistreatment.
Since the arrival of ChatGPT, she says there has been a distinct rise in the volume of vastly more detailed, lengthy and outwardly credible correspondence sent to HR departments and employment tribunals.
The documents often contain citations of legal precedents and other references to the law that don’t always turn out to be accurate but take hours to sort through.
“We have seen examples where there are dozens and dozens of pieces of correspondence sent to the employment tribunal,” she told me. The length of the documents and the speed at which they are generated suggest they are almost certainly produced by robots rather than humans, she said, adding that this causes “significant pressure” for employers and tribunals.
I can well imagine what a tedious and costly burden this can be for employers, and I am sure they are not alone.
I mentioned Harrop’s observations to a few people last week and quickly learnt that employees are by no means the only ones using AI to turbocharge complaints.
A man who has been a school governor for many years said the volume and intensity of what were almost certainly AI-assisted complaints from parents had skyrocketed in the past 18 months.
The complaints were often well crafted and included convincing references to legal precedents that made them hard to ignore, he said, even though the longest ones invariably turned out to be specious.
I do not see this situation easing any time soon. The internet is now awash with sites offering to harness the power of AI to generate powerful, well-written complaints.
I tested one designed for employees by telling it I wanted a letter about the extent to which male lavatories outnumbered female ones in a large office I recently visited.
Within seconds, it spat back a brisk, grammatically correct and unnervingly persuasive indictment of what it called an unfair and discriminatory arrangement that “presents a negative and potentially unprofessional impression” to female guests.
I’m not going to deny that this induced a surge of gratitude, and the realisation that there must be many times when an AI-aided complaint is fully justified. Shoddy products, unfair parking tickets and malevolent employers are doubtless all grounds for a souped-up grievance letter.
But a world in which those charged with handling complaints are buried beneath an avalanche of AI-generated verbiage of questionable legitimacy is not a good one.
How tempting it must feel to stop diligently wading through the crud and go over to the dark side by simply using AI to respond.
I don’t suppose we will ever reach the point where we leave it to the bots to fight it out and get back to us once they are done. But then again, can we say for sure that we won’t?