OpenAI launched a product that became popular almost overnight: ChatGPT. But have you read the terms of use carefully before using it?
This article analyzes the terms of use provided by OpenAI and highlights important details you may have overlooked, from the perspectives of both users and online service operators.
Note: This article is based on the terms of use published by OpenAI on 2023/03/09.
[User side]
This part will show you what to pay attention to as a user of ChatGPT.
- Who wrote the article generated by ChatGPT?
Can I claim that an article generated by ChatGPT was written by me, or by OpenAI?
According to ChatGPT’s terms of use, users may not represent that output from the Services was human-generated when it is not. But can we say the text was generated by OpenAI and use its trademark? The answer is also no. Section 9(b) of the terms of use states: “Use of Brands. You may not use OpenAI’s or any of its affiliates’ names, logos, or trademarks, without our prior written consent.” In other words, we may not use OpenAI’s trademarks, names, or logos without obtaining its consent beforehand. In summary, when using text generated by ChatGPT, you can neither claim that the article was written by a human nor state that it was written by OpenAI.
The best way to use ChatGPT when writing an article is to edit its output manually, or to use ChatGPT only as a source of inspiration or a tool for collecting and organizing data, and then finish the article yourself. This is also the safer way to use ChatGPT.
- Confidential information belongs only to OpenAI
ChatGPT’s terms of use do mention confidentiality clauses and obligations. However, under their definition, confidential information refers only to information that OpenAI, its affiliates, or third parties designate as confidential, or that should reasonably be considered confidential under the circumstances, including software, specifications, and other non-public business information.
- Will OpenAI keep your confidential information secret? The answer is no. According to OpenAI’s own explanations, engineers may view user input for correction and training purposes: OpenAI hires people to review inputs, apply filtering, clean up data, and research how to make its tools better, so humans do read what you type. Therefore, if you need to use ChatGPT with personal or work-related confidential information, it is recommended that you mask or encode key words and sensitive details on your own computer beforehand, so that the information stays properly protected. If you do not want engineers to view the content you enter into ChatGPT, OpenAI provides a form in its terms of service for users who opt out of this review, but OpenAI does not guarantee that refusing engineer review will leave the services unaffected:
“If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form.”
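The masking advice above can be done locally with a short script before any text is pasted into ChatGPT. Below is a minimal sketch in Python; the regex patterns, placeholder labels, and the `mask` helper are illustrative assumptions, not part of any OpenAI tool.

```python
import re

# Illustrative patterns for common sensitive substrings; extend as needed.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-\s]?\d{3,4}[-\s]?\d{4}\b"),
}

def mask(text: str, keywords=()) -> str:
    """Replace pattern matches and listed keywords with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Keyword masking runs locally, so the real values never leave your machine.
    for i, kw in enumerate(keywords):
        text = text.replace(kw, f"[REDACTED_{i}]")
    return text

masked = mask("Contact Alice Chen at alice@example.com about Project Falcon.",
              keywords=["Alice Chen", "Project Falcon"])
print(masked)  # → Contact [REDACTED_0] at [EMAIL] about [REDACTED_1].
```

Keeping a local mapping of placeholders to original values lets you restore names in ChatGPT’s response afterward, without the sensitive terms ever being sent.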
- Opt out of the arbitration agreement within 30 days
As often seen in similar documents, there is a dispute resolution section in OpenAI’s TOU that contains a mandatory arbitration clause. It says: “You agree to arbitrate disputes related to the Service or this Usage Policy.”
However, OpenAI also provides that users may opt out of these arbitration terms, and of future changes to them, by sending an email to openai.com within 30 days of agreeing to the arbitration terms or the relevant changes. This article suggests that you check this when you first start using ChatGPT and consider sending an email to opt out of mandatory arbitration. If users must fully indemnify OpenAI under the TOU, they may at least want a jury trial: juries tend to be more protective of consumers in most cases, and in a court trial OpenAI would have to present complete evidence to claim compensation from users. That is not the case in arbitration, where arbitrators, aiming for quick dispute resolution, often do not scrutinize every piece of evidence as carefully as courts do, and where decisions generally cannot be overturned. Users who stay in arbitration therefore lose the protection a jury would provide.
[Online Service Operator]
If you also want to provide an online service, whether for global or local users, the sections below explain the shortcomings of OpenAI’s TOU so that you can avoid these issues before launching your own product or service.
- Whether people under 18 can use the service needs to be redesigned
- Validity of contract formation?
A common practice is to prevent minors from registering, state that the service is not intended for children under 14, and require that minors under 18 use the service only with a legal guardian present and consenting. However, is this enough? Probably far from it, especially for a dynamic, real-time response tool.
On the questions of whether legal guardians have actually consented and whether minors under 18 can register, OpenAI could refer to Google’s rules for minors creating accounts; OpenAI can do better on this point. This is also related to the next point that OpenAI needs to pay attention to.
- California Age-Appropriate Design Act
This is a more dangerous area where OpenAI’s regulatory compliance remains incomplete.
In September 2022, the California Governor approved the California Age-Appropriate Design Act, which is expected to take effect in July 2024. The act states: “When developing and providing online services that children may use, businesses shall consider children’s best interests during the design, development, and provision processes.”
An online service that children (under 18) “may use” includes services that: (A) are provided directly to children; (B) have credible evidence that their audience includes children; (C) are marketed toward children; (D) are the same as or similar to services under (B); (E) contain content appealing to children, such as cartoons or music; or (F) have, based on internal research, a considerable number of children in their audience. All of these fall within the scope of the regulation.
OpenAI’s ChatGPT obviously falls under the California Age-Appropriate Design Act, since its service will have a certain number of child users. But so far we have not seen how ChatGPT regulates or monitors accounts belonging to children or minors. This will be an especially important area to watch in the future.
- Personal data cross-border transfer appendix
For cross-border transfers of personal data, such as transfers from the EU to the US, or any data transfer scenario that triggers EU regulations, the parties are generally required to sign the official data transfer documents published by the EU.
If online service operators look closely at services such as GCP and AWS, their data transfer addenda are all incorporated by default.
OpenAI, however, chose to require users to request the relevant documents from OpenAI themselves; this is clearly insufficient from a regulatory compliance standpoint. An opt-in data processing addendum offers insufficient protection for businesses.
- The termination-of-service policy is not clear enough
The termination provision of OpenAI’s terms of use is evidently not clear enough. In the TOU, OpenAI states that users can terminate the service simply by ceasing to use it, while OpenAI must notify users 30 days in advance when it wants to terminate the agreement. This clause is quite inappropriate: does it count as terminating the service if a user does not use ChatGPT for one day? Does it count as continuing to use the service after termination if the user resumes the next day? If not using the service counts as termination, then OpenAI should delete the user’s personal data and account at that point; that would be a real termination of service.
Conclusion
This article has analyzed the points worth noting in OpenAI’s terms of use from both the user’s and the online service operator’s perspectives, so that both groups can see what they need to pay attention to.
If you have more questions, please feel free to contact me using the form below.