The ongoing lawsuit in which The New York Times accuses OpenAI of copyright infringement has come to light again. OpenAI stated on Monday that it collaborates with news organizations and that The Times was not telling the full story. According to OpenAI, The Times's lawsuit misrepresents how OpenAI and its technologies, such as ChatGPT, actually operate.
In the same statement, OpenAI called The New York Times lawsuit "without merit" and argued that its technologies support news organizations and create new opportunities for them. This is the crux of the debate over the use of unauthorized published work to train artificial intelligence technologies.
The Times sued Microsoft and OpenAI on Dec. 27, accusing them of using millions of its articles to train technologies such as the ChatGPT chatbot. The lawsuit added that these chatbots now compete with The Times as a source of reliable information.
In a blog post on Monday, OpenAI stated that it collaborates with news organizations and that using copyrighted works to train its technologies is fair use under the law. It added that The Times was not telling the full story of how these technologies operate.
“We look forward to continued collaboration with news organizations, helping elevate their ability to produce quality journalism by realizing the transformative potential of A.I.,” the company wrote.
This case is notable because The Times was the first major American media organization to sue OpenAI and Microsoft over copyright issues involving its written works. Novelists and computer programmers, however, have also filed copyright suits against AI companies. These suits have been spurred by the boom in "generative AI" technologies, which produce text and other media from short prompts.
The problem is that OpenAI and other AI companies build this technology by feeding their systems enormous amounts of media and digital data, some of which is almost certainly copyrighted. This has exposed the untapped value of online information, which can be profited from without permission or credit given to the producers of that content. The AI companies argue that because the material is public, they can use it to train their technologies without paying for it, and that they do not reproduce the material in its entirety. While OpenAI's blog post conceded its use of The Times's works, The Times disputes that this qualifies as fair use. As Ian Crosby, an attorney for The Times at the law firm Susman Godfrey, put it, "that's not fair use by any measure."
Having outlined the situation at hand, it is worth examining AI's potential in 2024. A New York Times article summarizes the scope of AI in the year ahead and asks whether 2024 will be AI's "leap forward." As the article states, AI is advancing at a rapid rate, becoming more powerful and increasingly present in the physical world as well.
When Sam Altman, the chief executive of the artificial intelligence company OpenAI, was asked what surprise the field would bring in 2024, he said without a pause, "ChatGPT will take a leap forward that no one expected." The most remarkable and rapid improvements involve AI's ability to generate new kinds of media, mimic human reasoning and enter the physical world through a new breed of robots.
In the coming months, AI-powered image generators like DALL-E and Midjourney are expected to instantly deliver videos as well as still images, and to gradually merge with chatbots like ChatGPT.
What does this mean for the current ChatGPT? It means it will soon be able to handle other, much more powerful forms of media and digital information. This will let it exhibit behavior resembling human reasoning, allowing it to carry out increasingly complex tasks in fields such as mathematics and science. And once digital technology merges with the physical structure of robots, these systems could solve problems beyond the boundaries of the digital world.
However interesting it is to explore AI's potential, one must also be cautious of the harm AI can bring. A New York Times article states that in the hands of nefarious users, AI can create waves of harassing and racist material, and that this has already been happening on the anonymous message board 4chan.
One instance occurred in October 2023, when the Louisiana parole board met to discuss the potential release of a convicted murderer and heard from a doctor with experience in mental health. While the parole board listened as expected, a group of online trolls took screenshots of the doctor and used AI to edit her image to make her appear naked. They then shared the image on 4chan, an anonymous message board known for propagating hateful content and conspiracy theories, and used numerous AI tools to spread false information about members of the parole board. This is just one of many instances of AI's nefarious potential that any user should be aware of.
So, the larger question this article keeps returning to is not just the potential of AI, but the potential of technology and of human intelligence. Should technology replace human effort simply because it can? Is there a way to collaborate? These are questions we must ponder. While the lawsuit may change some AI arrangements, the question of technology versus human intelligence is an ongoing one. What we must stop to consider is whether it need be a question of "versus" at all.