Bioinspired Communication & Ethics

Module 4: Research Ethics

This module provides a focused exploration of ethical considerations in research and scientific communication, co-taught with Dr. Sarah Reckess, JD (Upstate Medical University). You’ll engage with real-world ethical dilemmas through case studies, debates, and interactive scenarios, developing the moral reasoning skills necessary to navigate complex ethical terrain in interdisciplinary research. The module moves from individual ethical principles to systemic pressures, then applies that framework to science communication, AI, authorship, and intellectual property.

Student Profile (Mentimeter survey, n≈16):
- "Research Ethics" associations: honesty, moral, integrity, responsibility.
- Top ethical challenge faced: ensuring data accuracy (13/16).
- Lowest confidence: navigating authorship disputes (2.8/5).
- Reading compliance: only 6/14 read pre-class materials.
- Perceived root cause of research problems (from the Alberts reading): inadequate ethical training (9/15); assumption of never-ending growth (6/15).


Module Structure: 7 Sessions

| # | Session | Key Topics | Guest |
|---|---------|------------|-------|
| 1 | Introduction to Ethics in Research | Ethics vs. law vs. morals; Resnik principles; systemic pressures | |
| 2 | Research Misconduct, COI & Industry | FFP, self-plagiarism, paper mills, conflicts of interest | Dr. Reckess |
| 3 | Dual-Use Research | DURC definition, risk-benefit analysis, oversight mechanisms | Dr. Reckess |
| 4 | Science Communication as Ethical Obligation | SciComm as ethics, public trust, genuine vs. false successes | |
| 5 | Artificial Intelligence in Research | AI ghostwriting, hallucinations, detection, regulatory frameworks | Dr. Reckess |
| 6 | Authorship & Credit | ICMJE criteria, CRediT taxonomy, AI authorship, disputes | Dr. Reckess |
| 7 | Intellectual Property & Data Ownership | Patents, copyright, data sharing, ownership of discoveries | Dr. Reckess |

📝 Running Assignment: Technology & Responsibility Report (Team)

The final team project runs alongside this module. Each team produces a professional report on a transformative technology:

| Session | Report Connection |
|---------|-------------------|
| 1–2 | Ethical frameworks and misconduct concepts → inform Section 4 (Ethical & Societal Analysis) |
| 3 | Dual-use analysis → directly applicable if the team chose CRISPR, synthetic biology, or nuclear tech |
| 4 | Science communication principles → inform Section 5 (Recommendations for Responsible Stewardship) |
| 5 | AI ethics → applicable to all topics; also informs how teams use AI in their own writing |
| 6–7 | Authorship and IP → inform Section 6 (Team Reflection on collaborative process) |

Topic choices: mRNA Technology, Advanced Robotics & Autonomous Systems, CRISPR & Gene Editing, Nuclear Technology, Synthetic Biology, Neurotechnology & Brain-Computer Interfaces, Quantum Computing.


Session 1: Introduction to Ethics in Scientific Research

Goal: Establish the distinction between ethics, law, and moral values. Apply Resnik’s ethical principles to real scenarios, then examine how systemic pressures (Alberts et al.) create the conditions for ethical failures.

📊 What Your Survey Told Us

| Skill | Confidence (1–5) |
|-------|------------------|
| Identify a potential conflict of interest | 3.9 |
| Handle a labmate behaving unethically | 3.5 |
| Navigate authorship disputes on a paper | 2.8 |

That 2.8 on authorship is the lowest confidence score in any module — Sessions 6–7 address it directly.

📖 Pre-Class Reading

🎯 In-Class Activities (~80 min)

Mentimeter warm-up (~10 min): “What’s the first word that comes to mind when you hear ‘Research Ethics’?” Reveal word cloud. Discuss: why are most responses about individual character, not systemic conditions?

Mini-lecture: Ethics vs. Law vs. Morals (~15 min): Establish the framework. Ethics is not just “following the rules” — it’s reasoning about what should be done when rules are ambiguous or conflicting.

Alberts deep dive (~20 min): Your survey showed 9/15 identified "inadequate ethical training" as the root cause, while 6 chose "assumption of never-ending growth." Discuss: Alberts actually argues the growth assumption is the root cause and the training gap is a symptom. What changes if you accept that framing?

Mentimeter: Resnik principles under pressure (~10 min): Rank which principles are most threatened by hypercompetition. Your results: (1) Honesty, (2) Responsible Publication, (3) Openness, (4) Objectivity. Discuss why honesty ranked #1 — is it because it’s the most valued or the most vulnerable?

Perverse incentives exercise (~15 min): Open-ended: “Name one perverse incentive that could lead to ethical deviation.” Your responses: “publish or die,” “quantity over quality,” “pressure to obtain funding,” “publication bias,” “overstating results.” Connect each to specific Resnik principles it threatens.

Closing poll (~10 min): “What is the most significant barrier to ethical research?” and “What is one action that could help create a more ethical research culture?”

💡 Key Takeaways


Session 2: Research Misconduct, COI & Industry Relationships

Goal: Define the three types of research misconduct (fabrication, falsification, plagiarism), explore the gray zone of “questionable research practices,” and engage with conflicts of interest through realistic scenarios.

📖 Pre-Class Reading

🎯 In-Class Activities (~80 min)

Mentimeter: Self-plagiarism (~10 min): “To what extent do you agree: self-plagiarism is a victimless crime because I’m only reusing my own work?” Then: “Which of the following is generally considered unethical self-plagiarism?” (Republishing a conference paper without new analysis, reusing methods with citation, publishing one study as a cohesive paper vs. salami-slicing, quoting your own thesis with citation). Reveal and discuss the nuances.

Mini-lecture: FFP + Questionable Practices (~15 min): The bright line (fabrication, falsification, plagiarism) vs. the gray zone (p-hacking, selective reporting, honorary authorship, salami-slicing). Most misconduct exists in the gray zone — these are the decisions students will actually face.

Paper mills case study (~15 min): The Washington Post article describes an "epidemic" of fraudulent papers. Discussion: What is the single biggest driver? Then rank solutions from most to least impactful: strengthen government oversight, abolish citation counts as a quality metric, retire the pay-for-play journal model.

COI scenario analysis (~25 min): Three scenarios, rated on a 1–5 “how ethically problematic” scale:

Teams discuss: Where is the line between acceptable and unacceptable? Does disclosure solve the problem, or just acknowledge it?

Closing: Most powerful change (~5 min): Open-ended: “Based on all the readings, what is the most powerful change we could make to reduce research misconduct?”

🔗 Connection to Module 2: In Module 2 Lecture 4, you discussed transparent peer review and how it affects accountability. Misconduct thrives in opacity — the same pressures that make transparent review uncomfortable also make it valuable.

💡 Key Takeaways


Session 3: Dual-Use Research

Goal: Identify the inherent tension between scientific openness and security, articulate the formal definition of DURC, and evaluate oversight mechanisms.

📖 Pre-Class Reading

📊 What Your Survey Told Us

| Scenario | Concern (1–5) |
|----------|---------------|
| Engineering a crop pathogen resistant to all pesticides | 4.7 |
| H5N1 transmissibility study in ferrets | 4.1 |
| Publishing full genome of a highly pathogenic agent | 3.4 |
| Reconstituting extinct smallpox virus for vaccines | 3.3 |

The crop pathogen scored highest — interesting because it’s the least discussed in popular media but arguably the most strategically concerning.

🎯 In-Class Activities (~80 min)

Mentimeter: Values in tension (~10 min): “What two values are in conflict in dual-use research?” Reveal responses. Frame the session around this tension.

Mini-lecture: DURC framework (~15 min): Formal definition, the 15 agents and toxins covered by U.S. DURC policy and its categories of experiments of concern, institutional oversight structure (IBC, IRE, federal review). The H5N1 ferret study as the canonical case.

Scenario analysis (~25 min): Rate concern for the 4 scenarios above. Reveal results. Deep discussion on the crop pathogen: Why did this score highest? Is agricultural bioterrorism under-regulated compared to human pathogens?

Zoloth case discussion (~15 min): The malaria gene drive argument — when does the moral imperative to act override the precautionary principle? Teams debate: “Should scientists proceed with gene drive research for malaria even if we can’t fully predict ecological consequences?”

Oversight mechanism ranking (~15 min): Reveal the ranking. Discuss: If education ranked #3, why is it the mechanism most of you will actually encounter (like this course)? Is that a problem?

💡 Key Takeaways


Session 4: Science Communication as Ethical Obligation

Goal: Frame science communication not merely as a practical skill but as an ethical obligation grounded in engineering and scientific ethics codes (NSPE, ABET, Resnik). Analyze why some communications succeed and others fail, using your own survey data as evidence.

📊 What Your Survey Told Us

This session produced the richest survey data in the module:

Your examples of successful sci-comm: cigarettes/anti-smoking, ozone depletion, personal hygiene, weather forecasts, Silent Spring, vaccines, Ig Nobel Prizes, the StarTalk podcast.

Your examples of unsuccessful sci-comm: vaccines cause autism, COVID-19 vaccines, climate change, Tuskegee studies, nuclear bomb, Radium Girls, pandemic masking guidelines, GLP-1, ChatGPT.

The productive surprise: Several of your “successes” (vaccines, COVID-19) also appear on your “failures” list — suggesting students already sense that communication outcomes are more complex than simple success/failure categories. This is exactly where we dig in.

📖 Pre-Class Reading

🎯 In-Class Activities (~80 min)

Mentimeter: Discovery sources (~10 min): Reveal the gap between where you learn about science (literature, papers) and where your family learns (social media, TV). This is the communication problem in one data point.

The ethical obligation argument (~15 min): Science communication is not optional. NSPE Code of Ethics requires engineers to “hold paramount the safety, health, and welfare of the public.” ABET requires graduates who can “communicate effectively with a range of audiences.” Resnik’s “social responsibility” principle demands it. If publicly funded research doesn’t reach the public, is that an ethical failure?

Genuine successes vs. false successes (~20 min): Analyze your examples. Genuine successes (seatbelts, iodized salt, handwashing, ozone/CFCs) share traits: non-politicized, simple actionable message, clear personal relevance, no powerful opposition, sustained institutional support. Many commonly cited “successes” (climate change, GMOs, mRNA vaccines) are actually mixed or failed cases when measured by public behavior change. The vaccine/COVID appearing on both your lists proves this point.

Press release analysis (~15 min): The coffee/cancer press release. You correctly identified the misleading headline (14/15). Now rewrite it: produce a headline that is both accurate and compelling. Teams compete. Discuss: Is it possible to be accurate without being boring?

Preliminary findings scenario (~10 min): You were split (9 decline, 5 share with caveats). Discuss: What if the supplement is widely used? Does urgency change the ethical calculus? Connect to Resnik’s principle of social responsibility vs. responsible publication.

Closing: Barriers and next steps (~10 min): Your #1 barrier was “don’t know where/how to start” (12/15). Brief overview of concrete starting points: lab website, LinkedIn posts summarizing papers, departmental newsletters, local media contacts. The barrier is practical, and practical barriers have practical solutions.

🔗 Connection to Module 2: In Module 2 Lecture 4, you examined transparent peer review and open access. The same principle applies here — making science accessible isn’t just a nice-to-have, it’s an ethical obligation grounded in the same transparency values you identified as benefits of open review.

💡 Key Takeaways


Session 5: Artificial Intelligence in Research

Goal: Examine AI’s ethical implications in scientific research — from ghostwriting to hallucinations to detection failures — building on what students already experienced in Modules 2 and 3.

📖 Pre-Class Reading (Core — all students)

📖 Optional Reading (consult as interested)

🎯 In-Class Activities (~80 min)

Bridge from Modules 2 & 3 (~10 min): Recall: In Module 2 (Lecture 6), you saw AI hallucinate the authors and lab of a paper it was analyzing. In Module 3 (Lecture 6), you saw AI fail to detect AI-written text and over-value structural polish in proposal evaluation. Those were capability questions. Today we ask the ethical questions.

The ghostwriting problem (~15 min): Brainard reports that far more authors use AI than disclose it. Discussion: Is undisclosed AI writing assistance the same as ghostwriting? As plagiarism? As a new category we don’t have a name for yet? Where is the line between “AI as spell-checker” and “AI as co-author”?

AI bias case study (~15 min): Lin’s Nature piece argues AI chatbots are already biasing research directions. If AI tools steer researchers toward certain hypotheses or methods, who is responsible for the resulting bias? The AI developer? The researcher? The institution that mandates AI tool use?

Detection paradox discussion (~15 min): Gerhard’s piece on the “double-edged sword.” Your Module 3 data showed AI detection achieved 0–20% accuracy. If we can’t detect AI-written text, should we stop trying? What replaces detection? Teams propose alternative accountability mechanisms.

Regulatory frameworks mini-debate (~15 min): Should AI in research be regulated through existing research integrity frameworks (extend current rules), through new AI-specific legislation (create new rules), or through professional self-regulation (scientists police themselves)? Teams take assigned positions.

Closing: Where do YOU draw the line? (~10 min): Mentimeter scale: Rate the ethical acceptability of these AI uses (1 = clearly unethical, 5 = clearly acceptable): using AI to check grammar, using AI to restructure an argument, using AI to draft a literature review section, using AI to generate hypotheses, using AI to write an entire methods section.

🔗 Connection to Module 2: The AI hallucination case study from Module 2 Lecture 6 (ChatGPT vs. DeepSeek evaluating the hydrogel abstract) demonstrated a capability limitation. Today’s session asks: given those limitations, what are the ethical obligations of researchers who use these tools? Resnik’s principle of honesty applies directly — presenting AI-generated content as your own work violates it, regardless of detection capability.

💡 Key Takeaways


Session 6: Authorship & Credit

Goal: Explore who qualifies as an author, how credit disputes arise, and how emerging tools (CRediT taxonomy, AI authorship policies) are reshaping these conversations. Directly addresses the lowest confidence score in the module (2.8/5 on authorship disputes).

📖 Pre-Class Reading

🎯 In-Class Activities (~80 min)

Mentimeter recall (~5 min): Your confidence in navigating authorship disputes was 2.8/5 at the start of the module. Let’s see if we can raise that today.

Mini-lecture: ICMJE criteria + CRediT (~15 min): The four ICMJE criteria (substantial contribution, drafting/revising, final approval, accountability). Why all four must be met. The CRediT taxonomy as a complementary tool that makes contribution types visible without gatekeeping authorship. Key question: under these criteria, can AI be an author? (Most journals now say no — but the reasoning matters.)

Scenario-based discussion (~25 min): Four authorship dilemmas — teams discuss and vote:

Connect to the CRISPR credit dispute from Module 1 (Lecture 4): scientific credit (Nobel) and legal credit (patents) can diverge entirely.

Self-plagiarism revisited (~15 min): This was introduced in Session 2 with the iThenticate reading. Now apply it to authorship: Is republishing your own conference paper in a journal (with minor revisions) ethical if you disclose it? What about reusing your methods section across papers? The nuance: copyright transfer agreements may make your own words legally someone else’s property.

AI authorship exercise (~15 min): Take a paragraph from your Technology & Responsibility Report. If AI helped draft it, apply the ICMJE criteria: did AI make a “substantial contribution”? Can AI approve the final version? Can AI be accountable? What does this mean for your disclosure obligations?

Closing (~5 min): Mentimeter: “How confident are you now in navigating authorship disputes?” Compare with 2.8 baseline.

💡 Key Takeaways


Session 7: Intellectual Property & Data Ownership

Goal: Understand the four types of intellectual property and how they impact scientific research output, data sharing practices, and the ownership of discoveries.

📖 Pre-Class Reading

🎯 In-Class Activities (~80 min)

Mini-lecture: Four types of IP (~20 min): Patents, copyrights, trade secrets, trademarks — each with different implications for scientific research. Key distinctions: patents protect inventions (but require disclosure), copyright protects expression (but not ideas), trade secrets protect information (but require secrecy). How each creates different incentive structures for openness vs. protection.

Patent case study (~15 min): Return to the CRISPR patent dispute from Module 1. The Broad Institute holds key U.S. patents despite Doudna/Charpentier winning the Nobel. Over $100M in licensing at stake. Discussion: Does the patent system reward the right people? Should fundamental biological discoveries be patentable at all?

Data sharing dilemma (~20 min): The Human Genome Project’s “Bermuda Agreement” (release all data within 24 hours) vs. Celera’s proprietary model (use public data, keep own data private). Connect to Module 1’s case study on trust and open science. Modern parallel: Should AI training data be open? Should datasets funded by public grants be freely available?

IP in your research (~15 min): Each student identifies one potential IP issue in their own research: Does your university have a technology transfer office? Who owns your thesis data? If you develop code or a method, who controls it? What happens if you leave your lab?

Closing discussion (~10 min): “What is the right balance between protecting intellectual property and sharing knowledge openly?” Connect to the full module arc: Resnik’s openness principle, Alberts’ critique of competition, DURC’s tension between openness and security, and science communication’s obligation to the public.

💡 Key Takeaways



📚 Additional Resources

Ethics Frameworks

Dual-Use & Biosecurity

Bioinspired Ethics (Supplementary)

Authorship & Publication


📚 Module Activities & Learning Objectives

Through seven sessions, students develop skills in:

» Detailed case studies, assignment instructions, and submission portals are available on the course Blackboard site.


← Proposal Writing & Review | Team Work →