Bioinspired Communication & Ethics

Module 3: Scientific Writing & Peer Review

This module provides comprehensive training in scientific communication through writing and the peer review process. Across seven lectures, you’ll learn evidence-based strategies for effective scientific writing, practice transparent peer review techniques, examine how AI tools interact with scientific writing, and develop skills to respond constructively to reviewer feedback—all grounded in real case studies and your own survey responses.

Student Profile (Mentimeter survey, n≈20; response counts vary by question): 8/15 have zero publications; 13/16 target 3–5 papers for PhD completion. Top pain points: generating ideas, writing introductions, deadlines, and revisions. Most-wanted support: feedback, collaboration, and good mentoring. Reading habits: 14/21 read papers weekly, but only 1/21 annotates deeply. 16/18 got their research idea from their advisor. Confidence in handling reviewer comments: 3.6/5.


Module Structure: 7 Lectures

| # | Lecture | Key Topics |
|---|---------|------------|
| 1 | Your Writing Journey: Where We Start | Publication landscape, pain points, reading habits |
| 2 | Writing Principles in Action | Whitesides/Weitz/Suo frameworks, conclusion-first, figures-first |
| 3 | The Art of Revision: An Abstract Case Study | 4 versions of a real abstract, what changed and why |
| 4 | Peer Review & Transparent Publishing | eLife model, open access, transparent review |
| 5 | Responding to Reviewers | Professional rebuttals, evidence-based responses |
| 6 | AI Tools in Scientific Writing | AI evaluation of abstracts, hallucinations, limitations |
| 7 | From Hypothesis to Publication | Research approaches, Nobel Prize patterns, bridge to proposals |

📝 Running Assignment: Writing Portfolio (Team)

The central deliverable is a team Writing Portfolio — a research abstract that evolves across lectures, mirroring the real publication cycle:

| Lecture | Portfolio Milestone |
|---------|---------------------|
| 1 | Select topic & write rough research description |
| 2 | Draft 1: Apply conclusion-first, outline-driven principles |
| 3 | Draft 2: Revise using abstract evolution case study lessons |
| 5 | Peer Review: Another team reviews your abstract; you respond formally |
| 6 | AI Stress Test: Submit to AI, compare feedback with human review |
| 7 | Final Version: Polished abstract + 1-page revision history |

Lecture 1: Your Writing Journey — Where We Start

Goal: Establish a shared baseline of the cohort’s writing experience, identify common challenges, and frame scientific writing as a learnable, structured skill — not a mysterious talent.

📊 What Your Survey Told Us

📖 Pre-Class Reading

🎯 In-Class Activities (~75 min)

Mentimeter warm-up (~10 min): Share your publication experience and biggest writing frustration.

Discussion: Why is the introduction so painful? (~10 min): Connect to the gap between knowing your work and communicating it to others.

CARS exercise (~20 min): Read a poorly structured introduction. In teams, identify what’s missing using the CARS model (Create A Research Space).

Speed-reading challenge (~30 min): Each team reads the same paper at a different depth level — title & abstract only (2 min), intro & conclusion (5 min), skim full paper (10 min), read deeply (15 min), annotate key sections (15 min). All teams then answer the same 5 questions. Compare: How much did reading depth actually matter for each question? When is skimming enough?

Why this activity? Your survey showed 9/21 read quickly and only 1 annotates. This exercise builds strategic reading skills — knowing when to skim and when to go deep.

📋 Writing Portfolio: Getting Started

Each team selects a research topic from their members’ work and writes a rough 1-paragraph description of the research question. No formatting rules yet — this is your starting point.

💡 Key Takeaways


Lecture 2: Writing Principles in Action

Goal: Master the core writing workflow — conclusion first, outline-driven, figure-led — and confront which principles feel counterintuitive.

▶️ Interactive Presentation (~25 min)

📊 What Your Survey Told Us

Your agreement ratings on writing principles revealed interesting tensions:

| Principle | Agreement (1–5) | Your Take |
|-----------|-----------------|-----------|
| Figures should be chosen before writing text | 4.4 | Strong consensus — visual storytelling resonates |
| Abstract should be written last | 4.1 | Most agree, but some resist |
| Start writing when you have a hypothesis | 3.4 | Split opinions — connects to hypothesis debate in Lecture 7 |
| Never use parentheses in main text | 2.7 | Controversial! Field-dependent |
| Write conclusion first | 2.6 | Most disagreed — yet this is the experts’ top recommendation |

The hardest principle to implement? “Write conclusion first” (11/19), followed by “Start writing immediately” (6/19). Both challenge the instinct to wait for “complete” data.

And yet — when asked “In your field, when do people typically start writing papers?”, 10/16 said “it varies widely”, only 4 said “as soon as initial results suggest a story,” and 2 said “after all experiments are finished.” The Whitesides principle — start writing immediately — contradicts what most of you observe in practice. That tension is exactly what makes this lecture productive.

📖 Pre-Class Reading

🎯 In-Class Activities (~50 min)

Mentimeter (~10 min): Rate your agreement with the 5 controversial principles. Reveal results and discuss where disagreement is highest.

Mini-debate (~15 min): “Should you write your conclusion first, or does that bias your analysis?” Teams argue both sides.

“Reviewer Simulator” (~25 min): Each team receives a badly written paragraph. Identify problems using the principles, then rewrite in 10 minutes. Compare versions.

📋 Writing Portfolio: Draft 1

Using the principles from today, each team writes a first-draft abstract. Requirements: must include a conclusion statement (1–2 sentences) written first, and at least one figure concept described in words. Bring to Lecture 3 for revision practice.

💡 Key Takeaways


Lecture 3: The Art of Revision — An Abstract Case Study

Goal: See how a real paper’s abstract evolved across four versions — from initial submission to Nature Materials publication — and understand why each change was made.

📋 Case Study: “Tough Bonding of Hydrogels to Diverse Nonporous Surfaces”

This abstract went through three rounds of revision (four versions in all) before publication. By tracking what changed at each stage, you’ll see the writing principles from Lecture 2 in action.

Key evolution across 4 versions:

| Version | Title Change | Major Abstract Revisions |
|---------|--------------|--------------------------|
| Initial | “Multifunctional bonding…” (descriptive) | Focused on strategy + method; mechanism described abstractly |
| 1st Revision | Same title | Added quantitative benchmark (~800 J/m²); simplified mechanism description |
| 2nd Revision | Same title | Added “silanation” as specific method; expanded mechanistic explanation |
| 3rd Revision | “Tough Bonding of Hydrogels…” (concise, impactful) | Restructured entirely — cleaner narrative flow, stronger opening hook |

What the revisions teach:

▶️ Interactive Presentation

📖 Materials

🎯 In-Class Activities (~50 min)

Side-by-side analysis (~20 min): Teams compare Version 1 and Version 4. List every change and categorize: clarity, specificity, narrative structure, or scope.

The “2-2-1” test (~15 min): Does the final abstract follow Weitz Lab’s formula (2 intro sentences, 2 result sentences, 1 significance sentence)? Where does it deviate and why?

Apply to your own work (~15 min): Each team applies the same revision lens to their Writing Portfolio abstract from Lecture 2. What needs to change?

📋 Writing Portfolio: Draft 2

Revise your team abstract based on the revision principles from today’s case study. Focus on: title (descriptive → impactful), opening hook, mechanism clarity, and narrative arc. Bring to Lecture 5 for peer review.

💡 Key Takeaways


Lecture 4: Peer Review & Transparent Publishing

Goal: Understand how peer review works, what transparent review changes, and form evidence-based opinions about modern publishing models.

📊 What Your Survey Told Us

Your familiarity with these concepts varied significantly:

📖 Pre-Class Reading

🎯 In-Class Activities (~75 min)

Mentimeter (~15 min): Gauge familiarity with eLife, Open Access, and transparent review. Reveal results.

Case study analysis (~35 min): Read the transparent peer review of “Autonomous self-burying seed carriers for aerial seeding” (8/12 read it; 4 did not — team up for cross-teaching). Each team answers: (1) What was the most challenging reviewer comment? (2) How did authors respond? (3) Was transparency helpful or harmful here?

Structured discussion: Open Access concerns (~20 min): Your top concern was predatory journals (12/20). How do you distinguish legitimate open-access journals from predatory ones? What role do APCs play? Is the current system equitable?

Closing poll (~5 min): “After seeing this example, are you more or less supportive of transparent peer review?”

🔗 Connection to Module 4 (Ethics): Transparent peer review raises fundamental ethical questions: Does reviewer anonymity protect honest critique, or enable abuse of power? What happens when early-career researchers review senior colleagues’ work publicly? We’ll return to these tensions in the Ethics module when we discuss systemic pressures in science (Alberts et al.).

💡 Key Takeaways


Lecture 5: Responding to Reviewers

Goal: Develop professional strategies for responding to reviewer comments — including how to disagree respectfully with evidence.

📊 What Your Survey Told Us

From your analysis of the seed carrier paper’s transparent review:

Peer Review Reflection (scale ratings):

| Question | Average |
|----------|---------|
| How well did authors handle Referee #2’s challenge about necessity? | 4.5 / 5 |
| How much did the manuscript improve through review? | 4.2 / 5 |
| How confident do you feel handling major reviewer comments? | 3.6 / 5 |

That 3.6 confidence score is exactly why we’re spending a full lecture on this.

📖 Pre-Class Reading

🎯 In-Class Activities (~75 min)

Anatomy of a rebuttal (~20 min): Dissect 3 real reviewer comments and author responses from the seed carrier paper. Categorize each response as: agreed and changed, agreed partially, or respectfully disagreed with evidence.

Response drafting exercise (~25 min): Each team receives a challenging reviewer comment (drawn from real reviews). Draft a response in 15 minutes. Teams then peer-review each other’s responses.

The “tone test” (~15 min): Compare two responses to the same comment — one defensive, one professional. Discuss what makes the difference.

Mentimeter closing (~5 min): “What’s your confidence level for handling reviewer comments now?” Compare with the 3.6 baseline.

📋 Writing Portfolio: Peer Review

Each team submits their Draft 2 abstract. Another team acts as “reviewers” — writing 3 specific comments (1 major, 2 minor) using the constructive review principles from today. The authoring team then drafts a formal response to each comment. Both the review and the response are graded.

💡 Key Takeaways


Lecture 6: AI Tools in Scientific Writing

Goal: Critically evaluate what AI tools can and cannot do in scientific writing — using a concrete case study that exposes both capabilities and dangerous failures.

📋 Case Study: Two AI Models Evaluate the Same Abstract

Remember the hydrogel abstract we traced through four versions in Lecture 3? We gave the final published version to ChatGPT (GPT-5) and DeepSeek (V3.1) and asked: “Is this a good abstract?” This creates a direct comparison — you already know what makes this abstract strong from human revision. Now see how AI evaluates it.

▶️ Interactive Presentation (~20 min)

What both AI models got right:

Where things went wrong — hallucinations and errors:

|   | ChatGPT (GPT-5) | DeepSeek (V3.1) |
|---|-----------------|-----------------|
| Authors | ✓ Correct (after follow-up) | ✗ Wrong — listed authors from a different paper |
| Lab | ✓ Correct — Zhao group at MIT | ✗ Wrong — attributed to Suo group at Harvard |
| Title | ✓ Correct | ✗ Wrong — gave a completely different title, then “corrected” to another wrong title |
| Year | ✓ Correct | ✗ Wrong — initially wrong, self-corrected to another wrong answer |

DeepSeek’s response is particularly instructive: it confidently provided incorrect details, then when asked to fact-check itself, produced an apologetic correction that was also wrong — attributing the paper to a completely different research group.
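The verification habit this failure motivates can be sketched in a few lines of Python. The trusted record and field names below are purely illustrative (they are not drawn from any real metadata service); the point is the discipline of checking every AI-claimed detail against a source you trust before reusing it:

```python
# Illustrative sketch: check AI-claimed paper metadata against a trusted record
# before citing it. All records and field names here are hypothetical examples.

TRUSTED_RECORD = {
    "title": "Tough Bonding of Hydrogels to Diverse Nonporous Surfaces",
    "venue": "Nature Materials",
}

def verify_claims(ai_claims: dict, record: dict) -> dict:
    """Compare each AI-claimed field to the trusted record.

    Returns a dict mapping field -> True (match), False (mismatch),
    or None (no ground truth in the record: flag for manual checking).
    """
    results = {}
    for field, claimed in ai_claims.items():
        if field not in record:
            results[field] = None  # cannot verify automatically
        elif isinstance(claimed, str) and isinstance(record[field], str):
            results[field] = claimed.strip().lower() == record[field].strip().lower()
        else:
            results[field] = claimed == record[field]
    return results

# A response like DeepSeek's in the case study: confident but partly wrong.
ai_claims = {
    "title": "A Completely Different Title",
    "venue": "Nature Materials",
    "lab": "Suo group",  # not in the trusted record: needs manual checking
}

report = verify_claims(ai_claims, TRUSTED_RECORD)
# Fields that come back False or None are exactly the ones a careful
# reader must resolve by hand before the citation is usable.
```

The design choice matters: anything the script cannot confirm is flagged rather than silently accepted, which mirrors the due-diligence standard discussed in the Ethics connection below.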

🎯 In-Class Activities (~55 min)

Reveal exercise (~15 min): Show the AI responses side by side. Teams identify what’s correct vs. hallucinated before seeing the answer key.

Test it yourself (~20 min): Each team submits an abstract from their field to an AI tool. Evaluate the feedback: What’s useful? What’s wrong? What’s missing?

Ethics discussion (~15 min): If AI can’t reliably identify authors of a paper it’s analyzing, what does that mean for using AI to write literature reviews? Where do you draw the line?

Mentimeter (~5 min): “After this exercise, how much do you trust AI feedback on scientific writing?” (Scale 1–5)

🔗 Connection to Module 4 (Ethics): AI hallucinations in citation are a research integrity issue. If a student uses AI-generated references without verification, is that fabrication? Negligence? We’ll explore this further in the Ethics module alongside discussions of responsible AI use in research (Resnik’s principles of honesty and due diligence).

📋 Writing Portfolio: AI Stress Test

Each team submits their revised abstract to an AI tool and evaluates the feedback. What did AI get right? What did it miss that your human peer reviewers caught? Write a 1-paragraph reflection comparing AI vs. human feedback quality.

💡 Key Takeaways


Lecture 7: From Hypothesis to Publication

Goal: Connect writing skills to research design by examining how different research approaches shape what and how you write — bridging to the Proposal Writing module.

📊 What Your Survey Told Us

Where do your research ideas come from? 16/18 said their idea came from their academic advisor or PI. Only 2 said “myself, based on my own curiosity.” Zero came from brainstorming, class projects, literature gaps, or existing projects. That’s completely normal at this stage — but it highlights a skill gap that the Proposal Writing module directly addresses. Writing a proposal requires you to generate and defend your own research question.

▶️ Interactive Presentations (~25 min)

📖 Pre-Class Reading

🎯 In-Class Activities (~50 min)

Mentimeter (~10 min): Estimate what % of Nobel-winning research is hypothesis-driven. Reveal and discuss the actual data by field.

Team exercise (~15 min): Each team categorizes 5 famous discoveries as hypothesis-driven, serendipitous, or method-driven. Compare with the research literature.

Writing connection (~15 min): “How does your research approach change how you write your introduction?” Teams draft two versions of the same introduction — one hypothesis-framed, one exploration-framed.

Bridge to Proposal Writing (~10 min): Preview how these principles apply to proposal writing. What does NSF look for in a hypothesis statement? How do you frame exploratory work in a proposal? The research idea origin data (16/18 from advisors) shows why this transition matters — proposals require you to generate your own questions.

📋 Writing Portfolio: Final Version

Submit your final polished team abstract. It should reflect all revisions from Lectures 2–6: conclusion-first structure, clear narrative arc, peer review responses incorporated, and a brief note on how AI feedback did (or didn’t) improve it. Include a 1-page revision history showing how the abstract evolved across drafts.

💡 Key Takeaways



📚 Additional Resources

Writing Guides

Journal Author Guidelines

Peer Review


📚 Module Activities & Learning Objectives

Through seven lectures and guided practice, students develop skills in:

» Detailed assignment instructions, rubrics, and submission portals are available on the course Blackboard site.


← Previous: Team Work | Next: Proposal Writing & Review →