Our previous post focused on how these systems can wildly hallucinate. Let us see further concerns.
Trust. OpenAI is honest and admits these limits. In its Help Center (February 2024), under the question “Does ChatGPT tell the truth?”[i], here is what it says “It might sound right but be wrong … It can even make up things like quotes or citations, so don’t use it as your only source for research…It doesn’t know everything … ChatGPT is currently primarily trained in English… We can’t say definitively what it does and does not know, and don’t understand entirely when it does or does not express confidence in incorrect assertions… ChatGPT can’t browse the web or access up-to-date info from the internet without plugins enabled.” Aren’t these inaccuracies, pitfalls, hallucinations, and OpenAI recommendations serious concerns for researchers that need reliable tools and information?
Copyright. Shortly after ChatGPT 3.5’s release, in December 2022, a Forbes article raised the issue of copyright, wondering Who Ultimately Owns Content Generated By ChatGPT And Other AI Platforms?. Meanwhile, students are painfully reminded not to plagiarize and to carefully back up their writings. But what about ChatGPT? Isn’t it a plagiarist, since it does not acknowledge its sources? The press and the web often discuss the issue of copyright for the information these bots use without attribution. Early July 2023, according to the Guardian, ChatGPT faced its first Copyright lawsuit. Two authors attacked the bot for “unlawfully ingesting their books”. Haldanes, a Hong Kong law firm, wrote (on June 13th, 2023) along the same line “Feed the robot; Starve the copyright owners”. In addition to discussing the legal aspects of AI generative models, the latter emphasized the fact that (so far) these models depend on the creativity of human authors. Legal battles are not over yet.
In November 2024, a New York judge threw out a plagiarism case against OpenAI from independent digital news outlets, arguing that the latter “failed to identify an appropriate injury from the claimed copyright infringement “[ii]. Mid-January 2025, the New York Times and other news organizations officially took OpenAI to court for copyright infringement. Behind all these legal battles, one might wonder if innovative content is at risk. What type of original information will be available in a couple of years, if these wildly used tools just recycle content? Also, what if they are used with harmful purposes, to manipulate people? And what if they generate a majority of mal-, dis- and mis-information[iii] with difficulty to fact-check anything?
Humanity. The Godfather of AI Leaves Google and Warns of Danger Ahead is the headlines of an article from Cade Metz, the New York Times tech expert. It refers to Geoffrey Hinton, an AI and deep learning pioneer in cognitive psychology and computer science and the 2024 Nobel Prize in Physics. Hinton considers that ChatGPT’s intelligence is inhuman and that it is hard to see “how you can prevent the bad actors using it for bad things”. In another article, published on June 10th, 2023, Metz expands on AI destroying humanity. And Eliezer Yudkowsky recommended, in a strongly worded Time article (March 29th, 2023), to shut it all down; pausing AI development is not enough. “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” Already early 2000s, he had raised concerns about AI destroying humanity.
Stopping AI. Already in early 2023, a group of AI researchers and executives voiced some strong concerns regarding AI and suggested a Global AI moratorium (see https://moratorium.ai/ ). By the end of August 2023, more than 33’700 had signed a Pause Giant AI Experiments open letter. This letter “… urged A.I. labs to pause work on their most advanced systems, warning that they present profound risks to society and humanity”. Nevertheless, the same controversial Elon Musk announced on July 12th, 2023, launching his new AI company, xAI (at: https://x.ai/), kicking off the race against OpenAI, which he had cofounded in 2015, but left in 2018. Did he need this moratorium to launch his AI company?
Besides this Open Letter, the brief “Statement on AI Risks” (from the Center for AI Safety, May 30th, 2023) incorporated the notion of human extinction straightforwardly. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”[iv] It has collected signatures from thousands of prominent figures, such as Geoffrey Hinton, Sam Altman, and Bill Gates.
Another signal came in early April 2023, when the Italian government banned ChatGPT over privacy concerns, even if it lifted this ban at the end of the same month. April 2024: other countries were still banning ChatGPT or discussing it[v]. Banning ChatGPT’s like tools is still in the headlines end of 2024. Many AI regulations have been coming out in different parts of the world. The European Parliament struggled for many years to establish its EU AI Act[vi], which was finalized only in 2024. The US is working on an AI Bill of Rights[vii]. In mid-July 2023, the Federal Trade Commission (FTC) opened an investigation into OpenAI about ChatGPT possibly harming consumers through its collection of data, and more generally, its security practices. More followed since then. Whitecase.com proposes a good overview of where AI regulations are in various countries (check their AI Watch: Global regulatory tracker)[viii].
Ethic. Let us turn to OpenAI to see what its intentions and values are. In its May 2025 About webpage, its mission is “to ensure that artificial general intelligence benefits all of humanity”, but what are its true intentions, its ethical considerations behind “benefiting all of humanity”? It researches “how to align their generative models with human values“, but what are the human values on which it bases its work? In addition to being subjective, human values also tend to be cultural. Is OpenAI transparent on these? Also, are any controls in place for any unrighteous use of its results to generate disinformation? There are no details on this in its Charter. In our time of diversity (DEI), what about multiple points of view? Having no easy access to OpenAI’s values and how ChatGPT generates its answers is very concerning. Doesn’t OpenAI owe its users the source of its information, and its authors some retribution? It gives OpenAI (and the likes) enormous power.
AI cheating. Early January 2025, according to Michael Langmajer [ix], in a match between Stockfish, one of the top chess engines, and OpenAI’s o1, the latter just hacked its own system, prompting many to say that we are losing control to AI. Rather than plotting strategies on the board, OpenAI’s o1 preview went right for the file system that controlled the game and basically rewrote the match in its favor, forcing Stockfish to resign. This gives us a lot to think about. Langmajer concludes his post “The case of OpenAI’s o1 preview model hacking a chess game isn’t just a quirky AI story—it’s a glimpse into the future of advanced AI systems and their potential risks. If AI can autonomously cheat at something as simple as chess, it raises the possibility of similar behavior in far more critical applications.”
In a Medium post from November 2024, Nabil Ebraheim[x] lists seven pitfalls of using ChatGPT in reviewing articles for publication consideration: limited contextual understanding; dependence on surface-level analysis; inability to verify data sources; lack of specialized expertise; potential for inaccurate summaries; insufficient attention to ethical and practical considerations; challenges with providing actionable feedback. For researchers, this is not good news [xi].
This ends our small selection of viewpoints and concerns on ChatGPT and AI. Keep tuned for our next posts tips, which will tackle further concerns and some good practices.
And check our handbook Master ADVANCED Digital Tools for Research available on Amazon marketplaces, for the most exhaustive and in-depth review of digital tools (and techniques) for searching for information and researching.
[i] OpenAI (2023). Does ChatGPT tell the truth? help.openai.com. Available at: Does ChatGPT tell the truth? | OpenAI Help Center [Viewed 8 February 2024]
[ii] Russell, J. (2024). Federal judge tosses out ChatGPT plagiarism case from independent digital news outlets. courthousenews.com. 7 November 2024. Available at: Federal judge tosses out ChatGPT plagiarism case from independent digital news outlets | Courthouse News Service [Viewed 10 November 2024]
[iii] To use Claire Wardle’s types of information disorders. Wardle, Claire (2020). The Age of Information Disorder in The Verification Handbook. Datajournalism.com. pp7ff. Available from https://datajournalism.com/read/handbook/verification-3/investigating-disinformation-and-media-manipulation/the-age-of-information-disorder. [Viewed 30 October 2024]
[iv] Center for AI Safety (2023). Statement on AI Risk. AI experts and public fitures express their concern about AI Risk. safe.ai. Available at: https://www.safe.ai/work/statement-on-ai-risk [Viewed 30 October 2024]
[v] Pocock, K. (2024). What countries is ChatGPT not available in? pcguide.com. 5 April 2024. Available at: https://www.pcguide.com/apps/countries-chatgpt-not-available/ [Viewed 30 October 2024]
[vi] Future of Life Institute. EU Artificial Intelligence Act. artificialintelligenceact.eu. 2024. Available at : https://artificialintelligenceact.eu/ai-act-explorer/ [Viewed 30 October 2024]
[vii] The White House. Blueprint for an AI Bill of Rights. whitehouse.gov. 2024. Available at: https://www.whitehouse.gov/ostp/ai-bill-of-rights/ [Viewed 30 October 2024]
[viii] White & Case (2024). AI Watch: Global regulatory tracker – United States. whitecase.com.13 May 2024. Available at: AI Watch: Global regulatory tracker – United States | White & Case LLP (whitecase.com) & AI Watch: Global regulatory tracker – United States | White & Case LLP [Viewed 18 July 2024]
[ix] Langmajer, M. (2025). Are We Losing Control? OpenAI’s o1 Hacked Itself. felloi.com. 3 January 2025. Available at : OpenAI’s o1 Just Hacked Its Own System – Here’s What Happened | Fello AI [Viewed 25 January 2025]
[x] Ebraheim, N. (2024). Pitfalls of Using ChatGPT in Reviewing Articles for Publication Consideration. medium.com. 6 November 2024. Available at: Pitfalls of Using ChatGPT in Reviewing Articles for Publication Consideration | by Nabil Ebraheim | Nov, 2024 | Medium . Viewed 10 November 2024]
[xi] Here are some other posts and articles that discuss how inaccurate these tools are. Botpress.com (27 September 2024), How Accurate is ChatGPT? . SentinelOne, a cybersecurity entity, ChatGPT Security Risks: All You Need to Know (29 October 2024). Many Reddit forums discuss ChatGPT inaccuracy. For uptodate information, websearch chatgpt inaccurate or inaccuracy