Artificial Intelligence and Ownership

Doghouse – AI

Article from Issue 266/2023
Author(s): Jon "maddog" Hall

If an artificial intelligence produces something new, who owns the new creation?

Some free software people do not believe in intellectual property and copyrights. I am not one of them. I do believe that people have the right to say what happens to their ideas and work, whether those are licensed as free and open source or whether they are closed and proprietary.

As such, I do not "spit on" people who decide to close their code and sell it, but I do believe that the best way of producing code for the end user is the free software model, which gives the end user the ability to maintain their system for as long as it is feasible.

Recently, more and more people have been asking me about the effects of artificial intelligence (AI) on the programming job market. They ask if I think that AI will take over and put programmers out of work. My answer might not be popular, but if you take AI to its ultimate end, the answer must be "yes."

I have been hearing about "artificial intelligence" since the 1950s, with science fiction books like I, Robot and movies and TV shows like Star Trek: The Next Generation (STNG) featuring androids such as Mr. Data. I have seen computers become faster, logically larger, physically smaller, and more complex. I have seen more people work on and produce what they consider artificial intelligence, and I am sure that some day in the future we will find the algorithm that allows the computer we call the human brain to learn and gain knowledge, and we will apply that to inorganic intelligence (what I prefer to call AI).

It is inevitable.

However, we have to think about what happens when this artificially intelligent artificial human (yes, there will probably first be AI dogs and AI birds) creates something new. Who owns that new thing? The artificial human? The "owner" of the artificial human? And if the artificial human is owned, is that slavery? Many of the same questions were asked and somewhat answered with regard to the android Data on STNG, as well as in many science fiction stories dating back to the 1950s.

But we may have a crisis a lot sooner, even without an artificial human.

Microsoft's Copilot, supposed AI software, has been trained on FOSS software that is both under copyright and under software licenses. The authors of this FOSS software probably did not consider or license the use of their software by AI, nor did they consider that some AI "mind" would use their software to generate its own code. This is causing consternation among some FOSS developers regarding attribution.

The creation of new and unique code, by itself, should not cause many problems, because human programmers might look at existing code, learn how to write new code, and then generate new code from that knowledge. Students have been doing this for decades, but we also teach students about plagiarism and how to create sandboxes so they do not copy the code verbatim.

One issue, with both flesh-and-blood and inorganic intelligence, is when the output is exactly what was first written (or very, very close to it), without the attribution requested by many licenses. In many places, this is known as plagiarism, and it could be a violation of copyright law unless the code is properly licensed and attributed.

The user of Microsoft's Copilot, which was trained through the use of FOSS source code, may not even realize that the code that Copilot outputs is an exact duplicate of a FOSS program, and the AI program might not even be "aware" that it created that exact copy. Therefore, in a court of law, when the original copyright holder brings a copyright infringement claim against the holder of the duplicate code, how does the Copilot user prove that it was an innocent copy, and what happens to that copied code? If Copilot is true AI, then even running Copilot with the same commands and the same input might not create the same output, making it difficult to prove that Copilot generated the code in question.

Does the AI system have access to all appropriate patents? What happens when the AI system inserts a patented algorithm without knowing it? Of course, this could happen with a human coder too, but filtering for this should be built into something like Microsoft's Copilot or any other AI "creative" system.

Matthew Butterick has been asking these questions, and many more [1], and it may behoove us to think about companies inserting these types of tools into platforms (such as GitHub) that FOSS developers use all the time. It is not necessarily bad that developers use these tools, but there should be some discussion and understanding regarding the legality and impact of using them.

The Author

Jon "maddog" Hall is an author, educator, computer scientist, and free software pioneer who has been a passionate advocate for Linux since 1994 when he first met Linus Torvalds and facilitated the port of Linux to a 64-bit system. He serves as president of Linux InternationalÆ.
