3 Questions: Joseph Carrigan on the Risk of ChatGPT

Winter 2024

The case made headlines earlier this year: An attorney used ChatGPT for legal research, and the AI chatbot cited fake rulings. The judge was not amused. The lawyer said he wasn’t aware that large language models could provide inaccurate information.

Joseph Carrigan, a senior security engineer at the Johns Hopkins Information Security Institute, wasn’t surprised. Not only may ChatGPT’s output “be factually incorrect,” he warns, but companies running these models might use your information in other unexpected ways.

1. What are the key security considerations for using large language models (LLMs) safely?

As exemplified by that legal case, you should not be using LLMs to educate yourself about any topic. Also, you need to know what happens to the information you type into the program, even in the form of a question. Though OpenAI’s end-user license agreement for ChatGPT states that the company does not use your content to improve its model, you are still submitting it. Assume it is stored on their systems. Other LLM providers may use your input to train other models or sell data about you.

2. How can users safeguard their data and ensure privacy?

Always bear in mind that anything you provide to an LLM may be kept by the company hosting it. So don’t share any information that you would like to keep confidential—for example, intellectual property. Some users can run their own LLM locally, but that requires technical knowledge and computing power not available to many.

3. How can people using LLMs avoid plagiarism and other issues?

It depends on the use case. Students should always generate their own work; using an LLM to produce academic coursework may be considered academic dishonesty. Those who generate content professionally are going to have to use LLMs to keep up. The ethics are going to be dictated by the use case. A blogger is probably fine using an LLM to generate content. A commentator writing an opinion piece may not be. Whatever the use case, make sure that you take the time to fact-check the output. You can use a plagiarism scanner if you are specifically concerned about plagiarism.
