
GPL compliance and permissive training data theory

This is the second post in a new series that I might start one day, about how companies abuse common misunderstandings of the GNU General Public License (GPL) to sell their products. Today, a slightly scary example. Scary, because it is so far off the point.

The company Exafunction, Inc. claims that with their product “Codeium” they can provide intelligent programming assistance based on a large language model (LLM). Just like Copilot from GitHub, Inc., and even better, because they do not infringe any license, and specifically not the GPL. Their writeup “GitHub Copilot Emits GPL. Codeium Does Not.” offers an adventurous interpretation of the GPL: that you need consent to use GPL-licensed code in a commercial context, and that training your model purely on permissively licensed code will free you of any legal trouble.

Things are slightly different. Strange that nobody told them in their “… early conversations with the open source community”.

The GPL does not restrict commercial use. It does not even refer to it at all. You are fine in any field of endeavour as long as you respect and fulfil its obligations.

The main problem with generative AI and the current ML-based programming assistants is that you cannot trace verbatim copies of code back to their origin. Because of that, you cannot fulfil the most essential obligation of any Free and Open Source Software license: attribution, i.e. crediting the original authors.

It does not help to train your model on only permissively licensed code. You will infringe the underlying licensing terms if you do not provide any reference to the original authors and license(s), no matter whether it is a permissive or a copyleft license. Either way you will not have a valid legal basis, that is, a license, to re-use the original work, and it is as bad as any other copyright violation, with all of its consequences.
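To make the attribution obligation concrete: even under a permissive license such as MIT, a verbatim copy of a function must keep the copyright notice and license text with it. A minimal sketch, with a hypothetical author and project name for illustration:

```python
# Reusing MIT-licensed code: the copy below must retain the copyright
# notice and the (full) MIT permission text to be a licensed re-use.
# "Jane Doe" and "exampleutils" are hypothetical placeholders.

# --- begin code copied from the MIT-licensed project "exampleutils" ---
# Copyright (c) 2020 Jane Doe
# Licensed under the MIT License; the full license text must be
# preserved alongside this copy (shortened here for illustration).
def clamp(value, lo, hi):
    """Constrain value to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))
# --- end copied code ---

print(clamp(15, 0, 10))
```

An AI assistant that emits the function body alone, stripped of this notice, leaves the user without exactly this attribution, regardless of how permissive the original license was.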

For more details, or before starting the marketing campaign for your new programming assistant, it could be worth taking a closer look, for example at the ongoing GitHub Copilot litigation and its underlying motivation.

GitHub Copilot – Your AI-powered accomplice to steal code?

Last week GitHub and its parent company Microsoft announced “GitHub Copilot – their/your new AI pair programmer”. The New Stack, The Verge and CNBC, among others, have reported extensively about it, and there is a lot of buzz around this new service, especially within the Open Source and Free Software world – not only among its developers, but also among its supporting lawyers and legal experts. Yet the actual news is not that groundbreaking, because it is not the first of its kind: similar ML-/AI-based offerings like Tabnine, Kite, CodeGuru and IntelliCode, which have also been trained on public code, are already out there.

Copilot is currently in “technical preview” and, according to GitHub, planned to be offered as a commercial version.

Illustration: GitHub Inc. © 2021

The core of it appears to be OpenAI Codex, a descendant of the famous GPT-3 for natural language processing. According to its homepage it “[…] has been trained on a selection of English language and source code from publicly available sources, including code in public repositories on GitHub”. Update 2021/07/08: GitHub Support appears to have confirmed that all public code at GitHub was used as training data.

GitHub is the platform where the majority of the global Open Source community’s source code has accumulated over time: 65+ million developers and 200+ million repositories (as of 2021), or 23+ million owners of 128+ million public repositories (as of 2020). Alternatives to it have become scarce, unless you want to host your code yourself.

Great, what amazing times we are living in! It sounds like with Copilot you no longer need the human co-programmers who assisted you in the good old times in the form of pair programming or code review. Lucky you, and especially your employer. On top of that you will save precious time, because it will help you to directly fix a bug, write typical functions, or even “[…] learn how to use a new framework without spending most of your time spelunking through the docs or searching the web”. Not to mention copying and pasting useful code fragments from Stack Overflow or other publicly available sources like GitHub.

At the same time, two essential questions arise, if you care at all about authorship:

  1. Did the training of the AI infringe any copyright of the original authors who actually wrote the code that was used as training data?
  2. Will you violate any copyright by including Copilot’s code suggestions in your source code?

Let’s not talk about another aspect that GitHub mentions in their FAQs – personal data: “[…] In some cases, the model will suggest what appears to be personal data – email addresses, phone numbers, access keys, etc. […]”
