When I was a junior doctor in the late 1990s writing my first scientific papers, submitting an article meant filling out an application form, printing out multiple copies and then walking to the post office at lunchtime to mail everything to the journal. Then I would wait weeks or months for an answer.

Don’t feel sorry for me – there was a sandwich shop by the post office, and I also thought it was quite exotic mailing something overseas. If you do want to sympathise with me, it can be because I submitted one of my first papers to the journal Neuro-Ophthalmology, which isn’t indexed in PubMed. Schoolboy error. In essence, the whole process was much more cumbersome, but in retrospect academia back then appears to have had more integrity. Of course, there have always been issues with falsifying data in scientific publications, but not on the same scale as there appears to be now. In this edition of Pete’s Bogus Journey, we’re going to look at how the threat of modern plagiarism has grown, but first, James Bond.


James Bond: I mean, sir, who would pay a million dollars to have me killed?
M: Jealous husbands! Outraged chefs! Humiliated tailors! The list is endless!


Those who, like me, are obsessed with Bond movies will recognise that quote from the start of The Man with the Golden Gun (1974), when Bond is informed by M that a million-dollar hit has been taken out on him and that the likely assassin is the golden-gun-wielding Francisco Scaramanga. This scene also demonstrates why I am so fond of these early-era films: the humour, usually delivered with perfect timing by Roger Moore and his characteristic, slightly raised eyebrow.

Another reason for my enjoyment is the breathtaking stunts. The Man with the Golden Gun includes one such feat, dubbed the ‘Astro Spiral Jump’, which remains one of the most audacious and memorable stunts in the history of cinema. It was filmed in rural Thailand and performed by the American stuntman Loren ‘Bumps’ Willert, who drove a modified AMC Hornet X at an unstable-looking wooden ramp and launched it across a narrow river, rotating the car through a full 360 degrees and landing it on the other side.

The brains behind the stunt was Raymond McHenry, who created a computer simulation model to predict the behaviour of the car from parameters such as launch-ramp and landing-ramp geometry, vehicle speed and roll velocity, and published his findings [1]. Ambulance crews and divers were on hand in case he got his sums wrong, but on the day everything went perfectly, and the stunt was performed successfully in one take.
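McHenry’s actual vehicle-dynamics model was far more sophisticated than anything I could reproduce here, but the basic physics of the jump can be sketched in a few lines of Python. Every number below (ramp angle, speed, gap) is an illustrative guess of mine, not a figure from the stunt or from McHenry’s paper:

```python
import math

# Back-of-the-envelope sketch of a ramp-to-ramp jump with a full roll.
# All parameter values are illustrative assumptions, not McHenry's figures.
G = 9.81                              # gravity, m/s^2
launch_angle = math.radians(18)       # assumed launch-ramp angle
speed = 40 * 0.44704                  # assumed launch speed: 40 mph in m/s
gap = 16.0                            # assumed width of the river gap, m

vx = speed * math.cos(launch_angle)   # horizontal velocity component
vy = speed * math.sin(launch_angle)   # vertical velocity component

t_flight = 2 * vy / G                 # time aloft (equal-height ramps)
distance = vx * t_flight              # horizontal distance covered
roll_rate = 360.0 / t_flight          # roll rate for one full 360-degree twist

print(f"Airborne for {t_flight:.2f} s, covering {distance:.1f} m "
      f"(gap: {gap:.0f} m); required roll rate ~{roll_rate:.0f} deg/s")
```

Even this toy version shows why the launch speed had to be so precisely controlled: the time aloft fixes the roll rate needed to land wheels-down, and both are locked to the speed at take-off.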

Looking back, what makes these stunts special to me is that they’re authentic, without the use of computer-generated imagery (CGI) [2,3]; I can trust them. At the time, CGI was in its infancy. The first movie to use the technology in live action was, ironically, the rogue artificial intelligence (AI) science-fiction film Westworld, in 1973. It wasn’t until the 1980s that the boundaries of computer power were pushed much further and CGI became more mainstream.

Nowadays, when I watch the endless Marvel and Fast and Furious titles with my kids, with their spectacular stunts, I find myself underwhelmed and mouthing a silent “whatever” to myself, as I have no idea what is real and what is not. You may say it doesn’t matter whether the stunts are real or CGI – just enjoy the visual spectacle; at the end of the day, it’s just entertainment. “Get over it, Boomer,” I can hear my kids saying. And maybe they’re right, and I am just a stick-in-the-mud purist.

“Could it be that it’s just an illusion, putting me back in all this confusion?” queried the British trio Imagination in their 1982 song, Just an Illusion. There are many aspects of life where what we observe may be artificially generated, thereby altering our perception of reality. With regard to my examples above from the arena of fiction, in the grand scheme of things it doesn’t really matter, and it keeps increasingly demanding audiences entertained. But there are many other areas where it clearly is important.

One of these is deepfakes: videos in which a face or body has been digitally manipulated using machine learning and AI so that it appears to be someone else’s. I first became aware of them in February 2021, when Chris Ume released a deepfake TikTok video of Tom Cruise teeing off on a golf course. At first glance, the only people one would have thought might be upset were the actor and his agent. However, it was so realistic that it fooled nearly every available deepfake detection tool and sparked security concerns around the globe, because the potential for this new technology to be used maliciously to spread false information was profound.


Indeed, there have already been examples of this occurring, including a one-minute video of the Ukrainian President, Volodymyr Zelenskyy. Released on 16 March 2022, shortly after the start of the latest Russian invasion of Ukraine, it showed Zelenskyy requesting that his soldiers lay down their arms and surrender. It is not hard to extrapolate from this disinformation industry to an Armageddon scenario in which a deepfake video of, say, US President Joe Biden declaring war on Russia is released – a possibility that has become deeply troubling for national security organisations. However, my primary concern regarding AI-generated media in this article is not a potential deepfake-induced nuclear holocaust, but the field of education and science.

ChatGPT, an AI chatbot that aims to mimic human conversation through text or voice, was released by OpenAI in November 2022. I have encountered chatbots over the past few years when dealing online with organisations such as energy companies, and they have almost all been singularly unhelpful. I have only really used them to try to avoid the inevitable soulless and often fruitless phoning of a human at a call centre [4].

"The stakes are high. With fake biomedical science articles being published on such a large scale, there is a potentially massive impact on society, endangering health and damaging trust in science"

However, ChatGPT has raised the game of the chatbot to a new level, as it mimics a human conversationalist and provides comprehensive and fluent answers across broad areas of knowledge. It is extremely versatile and can perform tasks such as composing music and writing poetry and song lyrics [5]. The chief worry, though, is that it can also answer test questions and write student essays above the level of an average human test taker [6]. With many student exams taking place virtually and in-course assessments based on essay writing, there is clearly an opportunity for ChatGPT to be abused for cheating. And it is being abused, with reports that at least half of school and university students are already using ChatGPT to cheat – the compounding problem being that this is almost impossible to detect.

Unsurprisingly, there has been much criticism of ChatGPT by educators and academics since its release, and an open letter with over 20,000 signatories, including Apple co-founder Steve Wozniak, called for an immediate pause in the development of this AI technology because it represents “profound risks to society and humanity”. Certainly, from my perspective, in the words of the pessimistic Private James Frazer from the sitcom Dad’s Army, “we’re doomed” if our future talent for society is assessed on whether they are able to type a few words into ChatGPT rather than on what now appear to be outdated traditional methods such as knowledge or, perish the thought, ability. This leads, finally, to my principal area of concern in this article: science and academia.

A recent paper from a team at the University of Magdeburg in Germany, headed by the psychologist and neuroscientist Professor Bernhard Sabel, has found that fake biomedical science publications are not only increasing but are far more common than previously thought [7].

Students, scientists and physicians are often judged by their publication output, and this creates pressure, especially when their jobs and livelihoods depend on it and they are unable to meet the demand legitimately. To fulfil these demands, a whole industry of paper mills has emerged, producing scientific papers with fake data and text at scale using AI; the annual revenues of this sector are estimated at £3–4bn [7].

In the study, the researchers looked for red-flag indicators in a set of PubMed-listed publications and estimated that 28% of published biomedical material in 2020 was fake, up from 16% in 2010. Of the roughly 1.3 million Scimago-listed biomedical publications in 2020, over 300,000 are therefore estimated to be fake. China was found to be the largest contributor at 55%, followed by India, Turkey and Russia. Sabel describes this as “the biggest science scam of all time”.
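As a quick sanity check on those headline numbers (the figures are the study’s; the trivial arithmetic below is mine):

```python
# Sanity check of the headline estimates from Sabel et al. [7]
pubs_2020 = 1_300_000   # Scimago-listed biomedical publications in 2020
fake_rate_2020 = 0.28   # estimated proportion of fake papers, 2020
fake_rate_2010 = 0.16   # estimated proportion of fake papers, 2010

print(f"Estimated fake papers in 2020: ~{pubs_2020 * fake_rate_2020:,.0f}")
# -> ~364,000, consistent with the article's "over 300,000"
print(f"Growth in estimated fake rate, 2010-2020: {fake_rate_2020 / fake_rate_2010:.2f}x")
# -> 1.75x
```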

One can’t help but wonder whether the scientific publishers themselves, in their quest to generate profits, are fuelling the fake AI paper-mill industry, in the knowledge that researchers and doctors are so desperate to get published. Open-access journals, which require authors to pay considerable sums for their work to be published, are particularly open to criticism. With such a financial incentive to accept papers for publication, there is potential for publishers not only to lower the quality bar for articles deemed worthy of publication but also to unwittingly publish these fake AI-generated papers with increased frequency. The scientific community is starting to address these concerns: only very recently, more than 40 leading scientists on the editorial board of the journal NeuroImage resigned en masse, objecting to the ‘greed’ of the publishing leviathan, Elsevier. It is hoped that this action represents the start of a fightback against what have been described as the ‘unethical’ charges to authors used to generate enormous profit margins for publishers, and that it may help to improve the quality control of published scientific papers.

The stakes are high. With fake biomedical science articles being published on such a large scale, there is a potentially massive impact on society, endangering health and damaging trust in science. With the increasing role that AI is playing in our lives and in medicine, there would be some irony in a future doomsday scenario of robot doctors bringing about the demise of humans by treating them according to flawed AI-generated scientific research. The recent COVID-19 pandemic and the race to develop vaccines showed the potential for fake scientific data to rapidly bring about the deaths of millions of people. There’s a dystopian science-fiction novel in that for anyone who is interested. You’re welcome!

In the chilling movie Shallow Grave (1994), in which a group of Edinburgh flatmates fall out when they discover a large sum of illicit money, Alex (Ewan McGregor) narrates in the opening soliloquy:

“Take trust, for instance, or friendship. These are the important things in life. These are the things that matter, that help you on your way. If you can’t trust your friends, well, what then? What then?” In similar fashion, I ask you: “If you can’t trust your science, well, what then?” It really is quite worrying that the future of biomedical science now hangs in the balance and that trust in academia has become as precarious as a house of cards.

Clearly, I cannot finish an article on AI without referencing The Terminator. The final quote therefore goes to John Connor, the future leader of the human resistance against the AI-driven apocalypse, in the second film, Terminator 2: Judgment Day (1991): “The whole thing goes: The future’s not set. There’s no fate but what we make for ourselves.”


References

1. McHenry R. The Astro Spiral Jump – An Automobile Stunt via Computer Simulation, SAE Technical Paper 760339, 1976.
2. Moonraker (1979) is another of these early Bond movies with a truly amazing non-CGI stunt, which left me awestruck as a kid. It involves Bond being pushed out of an aircraft without a parachute by the villain known as Jaws, and Bond ultimately wrestling a parachute from the pilot. I have subsequently learned that the filming took five weeks and involved a total of 88 skydives by cameramen and stuntmen, with all the footage in the film real. As John Cork, the author of James Bond: The Legacy, states: “when audiences saw Bond pushed out of an airplane and then go slicing through the air to take the pilot’s parachute away, it was one of the most amazing bits of cinema ever witnessed by anyone.”
3. Bullitt (1968) is another film that I wish to bring to the attention of Gen Z. It contains what is considered the greatest Hollywood car chase of all time, and again no CGI was used in its production. The scene involves a Ford Mustang GT Fastback in an exhilarating pursuit of a Dodge Charger through the streets of San Francisco. The director called for speeds to be limited to 75–80mph, but the drivers paid little attention and at times the cars reached speeds of over 110mph. The scene was shot over five weeks, and the editing of the footage won Frank Keller the 1969 Oscar for Film Editing. For a well-spent ten minutes and 53 seconds of your life, watch it on YouTube. Whenever I view this scene, which I do on a regular basis, I wonder whether films should state at the start whether CGI has been used in their production, similar to a Kitemark seal of approval, to guarantee the authenticity of the stunts for viewers.
4. There are many first-world (trivial) problems that have been reported, such as losing the mouse cursor when using more than one monitor. For me, there are also a couple related to phoning a call centre. The first is that the on-hold music is truly awful and usually contains an unbearable underlying distortion; one can only think that it is done to encourage the caller to hang up in despair. The second is the intermittent announcement of where you are in the queue, which often doesn’t change for prolonged periods – yet another inducement to just give up.
5. The use of AI to produce songs and lyrics has generated divergent views in the music industry. Recently, Neil Tennant of the enormously successful band Pet Shop Boys suggested that AI technology could be used as a tool to help musicians complete half-written songs. From my own perspective, however, when I listen to a song, much of the meaning comes from a connection with the performer, and I am not sure I would feel the same emotions if I thought the music or lyrics had been created by a robot. Perhaps once again I am a dinosaur, and authenticity will become as outdated a word as farthing or groovy.
6. ChatGPT has also passed the three-part United States Medical Licensing Examination (USMLE) required to practise medicine in the US, once again raising the spectre of robot doctors replacing humans in the future.
7. Sabel BA, Knaack E, Gigerenzer G, Bilc M. Fake Publications in Biomedical Science: Red-flagging Method Indicates Mass Production. medRxiv 2023. https://doi.org/10.1101/2023.05.06.23289563


CONTRIBUTOR
Peter Cackett

Princess Alexandra Eye Pavilion, Edinburgh, UK.
