Type of Presentation

Individual paper/presentation

Conference Strand

Ethics in Information

Target Audience

Higher Education

Second Target Audience

Other

AI enthusiasts and AI critics

Relevance

With so much hype and halo around GenAI, my presentation merits attention: it alerts AI enthusiasts, particularly those inclined to embrace the technology uncritically, to how much GenAI has put on the line.

Proposal

If an AI system like ChatGPT generates text used in a research paper, proper attribution and clear delineation of human-written versus AI-generated text are essential. Research suggests that many readers cannot reliably distinguish between human and AI writing, and failing to attribute AI writing could constitute plagiarism (Dobrin, 2023); guidelines therefore need to be established. Similarly, if ChatGPT is used to analyze sensitive interviews or user data from research study participants, appropriate consent, privacy protections, and data security controls must be implemented, and researchers should be transparent about any AI analysis or exposure of protected participant data. A related risk is that grounding research assumptions in AI-generated text could perpetuate harm (Cercone & McCalla, 1984); researchers therefore have an ethical duty to consider bias in AI tools.

Moreover, reporting when and how Generative AI was used throughout a research study takes on paramount importance. By attending to these issues of authorship, privacy, bias, and transparency, researchers can uphold strong ethical principles even as AI collaboration changes the research landscape. Both the human and the technical aspects require ongoing, thoughtful evaluation.

Goal: The goal of this presentation is cautionary. It aims to alert users of Generative AI to the potential ramifications of using AI for research and writing.

Audience: Anyone interested in the ethical repercussions of AI.

Sources:

Cercone, N., & McCalla, G. (1984). Artificial intelligence: Underlying assumptions and basic objectives. Journal of the American Society for Information Science, 35(5), 280-290.

Dobrin, S. I. (2023). Talking about Generative AI: A Guide for Educators. Broadview Press.

Keywords

Alpha persuasion, AI dilemma, ethical dilemma of AI, AI in research, AI pedagogy

Publication Type and Release Option

Presentation (Open Access)

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

Apr 19th, 4:15 PM to 5:00 PM

Ethical Considerations in Using Generative AI in Writing Studies Research
