GitCode is a git-hosting website operated by Chongqing Open-Source Co-Creation Technology Co., Ltd., with technical support from CSDN and Huawei Cloud.
It is being reported that many users’ repositories are being cloned and re-hosted on GitCode without explicit authorization; a sketch of how such wholesale mirroring works follows below.
There is also a thread on Y Combinator’s Hacker News (archived link).
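For context, re-hosting someone else’s repository wholesale takes very little effort: a mirror clone copies every branch, tag, and ref, and a mirror push reproduces them on a new host. A minimal sketch in Python, with placeholder URLs (these are illustrative, not GitCode’s actual endpoints):

```python
import subprocess

# Hypothetical URLs, for illustration only.
SOURCE = "https://github.com/example-user/example-repo.git"
TARGET = "https://new-host.example/example-user/example-repo.git"

# --mirror copies every ref (all branches, tags, notes), not just HEAD.
subprocess.run(["git", "clone", "--mirror", SOURCE, "repo.git"], check=True)

# Pushing with --mirror recreates the full ref layout on the target host.
subprocess.run(["git", "push", "--mirror", TARGET], cwd="repo.git", check=True)
```

Nothing technical prevents this; whether it is permitted comes down to each repository’s license, which is what the discussion below turns on.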
It certainly can. Most licences require derivative works to be released under the same or a similar licence, and an AI trained on FOSS code would likely not respect those terms. It’s the same issue as AI training on music, images, and text: it’s a likely violation of copyright, and thus a violation of open source licensing terms.
Training on it is probably fine, but generating code from the model likely produces a whole host of licence violations.
Some, but probably not most. This is mostly an issue with “viral” licenses like GPL, which restrict the license of derivative works. Permissive licenses like the MIT license are very common and don’t restrict this.
MIT does say that “all copies or substantial portions of the Software” need to come with the license attached, but code generated by an AI is arguably not a “substantial portion” of the software.
How do you verify that, though?
And does the model need to include all of the licenses? Surely the “all copies or substantial portions” clause would apply to LLMs, since they literally include the source in the model as a derivative work. That’s fine if it’s for personal use (fair use laws apply), but if you’re going to distribute it (e.g. as a centralized LLM), then you need to be very careful about how licenses are used, applied, and distributed.
So I absolutely do believe that building a broadly used model is a violation of copyright, and that’s true whether the source is under an open source license or not.
By comparing it to the original work.
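If you already have a candidate original in hand, one crude mechanical check is a plain textual similarity score. A minimal sketch using Python’s standard-library difflib (the file names are hypothetical):

```python
from difflib import SequenceMatcher
from pathlib import Path

def similarity(path_a: str, path_b: str) -> float:
    """Return a similarity ratio between 0.0 and 1.0 for two source files."""
    text_a = Path(path_a).read_text()
    text_b = Path(path_b).read_text()
    return SequenceMatcher(None, text_a, text_b).ratio()

# Hypothetical file names, for illustration.
score = similarity("original.py", "generated.py")
print(f"similarity: {score:.2%}")
```

A plain text diff like this is easily defeated by renamed identifiers or reformatting, so real-world detection would more plausibly compare tokens or ASTs, but the principle is the same.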
And how will you know what original work(s) to compare it to?
How do you know anything about anything an LLM generates? Presumably, if you’re the author, you would recognize your own work?