01
The "AI clause" authors refer to appears in the contract they sign with the platform and reads, in relevant part:
Party A may use all or part of the contracted works and related information (including work titles, synopses, outlines, chapters, characters, the author's personal information, cover images, etc.), as well as associated data, corpora, text, and materials, for annotation, construction of synthetic data/databases, artificial intelligence (AI) research and development, machine learning, model training, deep synthesis, algorithm research and development, and other currently known or subsequently developed technology research/application fields, including but not limited to:
(1) Development and application of intelligent dialogue; intelligent editing, generation, conversion, and deep synthesis of text/image/audiovisual/speech works and products; virtual reality technology; etc.
(2) AI model training under any technology, or generation of synthetic data/databases to be supplied for model training.
(3) Any other new technology development or application scenarios.
Note: Party A refers to Tomato Novel; Party B refers to the author.
As the platform, Tomato Novel quickly provided a solution by removing the AI model training scenario clause from the original agreement and offering a supplementary agreement.
Tomato Novel author Yang Yang also noticed the latest supplementary clause and chose to continue signing. In her view, Tomato Novel is more suitable for new authors. Yang Yang joined Tomato Novel two months ago and recently completed a novel of over 200,000 words.
Not all authors are satisfied, however. Some are still posting on social media, questioning whether the punctuation and paragraph breaks in the platform's supplementary agreement conceal "traps"; the trust between the two sides has yet to be repaired.
Author Ping Ping is unhappy that her content may be used for AI training. She believes AI development is an inevitable trend, and that AI can assist work as a production tool and free up labor, "but it should not replace labor." Before signing the supplementary agreement, she hopes Tomato Novel will issue an announcement "promising not to launch one-click AI writing" or similar features.
Both of the above authors clearly stated that they would choose to leave Tomato Novel in the future.
There is a further problem at present: many authors worry that disclosing the contract's contents could get them sued by the platform, and if they lost, the damages would be a considerable expense.
02
With the rapid development of artificial intelligence, disputes between users or creators and platforms over AI-related infringement arise from time to time. Even OpenAI, which set off the current AI wave, has been sued repeatedly by American media outlets for using their news reports for AI training without permission; it has since begun signing cooperation agreements with media groups.
American actress Scarlett Johansson accused OpenAI of imitating her voice: "Sky," one of the built-in voices of the GPT-4o model in the company's ChatGPT product, sounded strikingly like hers. Although OpenAI denied this, it eventually suspended the "Sky" voice.
Meta, which recently released its most powerful model to date, likewise requires Instagram users to agree to let their uploaded content be used for AI training; users who refuse cannot use the platform.
In China, Interface News has observed that some startups openly scrape publicly published news reports and articles and use AI to "launder" them into rewritten content for profit. Some companies' AI-generated images have also been accused of plagiarism by artists.
Commenting on these situations, You Yunting, a senior partner at Shanghai Dabang Law Firm, stated that using works for AI training without the author's authorization may infringe the "other rights" enjoyed by copyright owners under the Copyright Law. To date, however, there have been no court judgments confirming whether such training constitutes infringement. And given how contested the issue is, even if a court found infringement, it would not amount to a crime.
You Yunting added that if a platform wants to use works for AI training, it must negotiate separately with authors and reach a new agreement. If it unilaterally alters the agreement's terms or forces signing through an update, it breaches the existing contract, and a court will not recognize the new contract as valid.
Moreover, if a platform argues that AI training merely improves service quality rather than seeking commercial profit, that defense is legally untenable: improving the quality of a commercial service is itself profit-seeking, and a court will not accept it.
03
The platforms' widespread "AI training" has further heightened creators' fears that private content may leak. Recently, some netizens claimed that WPS had "fed" authors' unpublished manuscripts to ByteDance's Doubao AI, because prompting the AI reportedly returned corresponding content.
In response, ByteDance said the rumors were entirely untrue: some book information in Doubao comes from public sources, Doubao and WPS have no cooperation of any kind at the AI-training level, and no users' unpublished private data has been used for training. WPS likewise responded that the claims are completely false.
Still, China's first case of an AI-generated voice infringing personality rights, decided by the Beijing Internet Court in April this year, offers content creators a reference for defending their rights. In that case, voice actor Yin discovered that a software company had processed his voice-over recordings with AI and sold the results to various platforms, where the works circulated widely across many well-known apps.
After trial, the court found that the defendant culture-and-media company held copyright and related rights in the sound recordings, but those rights did not include the authority to license others to use the plaintiff's voice for AI purposes. Its act of authorizing the software company to do so without the plaintiff's knowledge and consent had no legitimate basis in rights. The court ordered the relevant defendants to apologize to the plaintiff and pay a total of 250,000 yuan in damages.
The court noted that even lawful authorization to use a work does not by itself confer the right to use it for AI training. This signals that rights holders and creators should retain corresponding control over their works, and that vague authorization clauses, absent separately paid consideration, do not entitle platforms to conduct AI training.
04
The "Interim Measures for the Management of Generative Artificial Intelligence Services" (the "Measures"), implemented in China last year, stipulate that providers and users of generative AI services must respect intellectual property rights and business ethics, and must not exploit advantages in algorithms, data, platforms, etc., to engage in monopolistic or unfair competitive conduct.
The Measures also impose a series of requirements on generative AI service providers, including conducting training-data processing activities lawfully, assuming the responsibilities of a network information content producer and a personal information processor, and clarifying the user groups to whom services apply.
Determining infringement by generative AI nonetheless presents real difficulties, including defining the object of infringement, judging the originality of generated content, collecting evidence and performing technical analysis, and uncertainties in how the law applies.
Moreover, generative AI output is delivered to individual users and is not itself publicly disseminated, so large-scale direct infringement is unlikely to occur in the way it does in traditional online infringement.
The steady advance of AI technology also strains the legal "idea-expression dichotomy," and the traditional "access plus substantial similarity" test for infringement no longer fully applies. Generative AI can rapidly learn from human works and produce differently expressed results, making it hard to cleanly separate "ideas" from "expressions" and thus harder to determine infringement.
Overall, existing legal provisions cannot fully cover every AI application scenario and output form. To address these difficulties, the legal community will need to keep refining the relevant provisions and infringement-determination standards, weighing factors such as technical characteristics and social impact, so that generative AI infringement can be judged reasonably and regulated effectively.