Bioinspired Communication & Ethics

Module 2: Proposal Writing & Review

This module provides training in research proposal development and evaluation. We begin with the research landscape — how breakthroughs happen and how funding works — then build proposal writing skills through practice, case studies, and peer review, before examining how AI is reshaping the research enterprise.

Student Profile (Mentimeter survey, n≈18): Most students are in Years 2–3 and preparing for qualifying exams. 10/14 respondents must submit a written proposal; 8/16 must complete a formal proposal defense. Top concerns: novelty, clarity, and feasibility. Priority sections: Background & Significance (9), Research Design/Methods (9), Specific Aims/Hypotheses (7). Confidence in proposal writing: 3.4/5.


Module Structure: 8 Lectures

# | Lecture | Key Topics
1 | The Research Landscape | Nobel Prize discovery patterns, hypothesis vs. serendipity
2 | Research Framing & Word Choice | Hypothesis-driven framing, fundamental vs. applied, NSF directorate culture
3 | Funding Agencies & Your First Research Narrative | NSF vs. NIH, drafting challenge & objectives
4 | Writing the Research Narrative | Field-specific best practices, NIH specific aims, revision workshop
5 | Intellectual Merit, Broader Impacts & Case Studies | NSF review criteria, CAREER case study, review panel simulation
6 | AI in the Research Enterprise | AI history, hypothesis generation debate, AI detection failure
7 | Peer Review, Ethics & Responding to Critique | Mock panel review, AI evaluation limits, ethical framing
8 | GCR Team Proposal Workshop | Growing Convergence Research proposal drafting & cross-team review

📝 Running Assignments

Two threads build across the module — one individual, one team-based:

Individual: Research Narrative Draft

Lecture | Milestone
3 | Draft challenge statement (While…However) + 3 research objectives
3 | Peer review in pairs (cross-discipline)
4 | Revision workshop: strengthen challenge & sharpen objectives
4 | “Elevator Test” — 90-second pitch of your narrative

This produces a draft challenge/objectives statement students can use for their qualifying exam proposals.

Team: GCR Convergence Proposal

Lecture | Milestone
8 | Draft convergent research question + challenge + objectives + IM/BI
8 | Cross-team peer review using simplified rubric
Post-L8 | Final revised submission on Blackboard

Lecture 1: The Research Landscape — How Breakthroughs Happen

Goal: Before writing proposals, understand what the research enterprise actually looks like — how discoveries are made, what patterns exist across fields, and why this matters for how you frame your own work.

📊 Presentation (~35 min)

🎯 Team Activity: “Classify Your Own Field” (~30 min)

Each team (15 min discussion + 15 min share-out):

  1. Pick 3 landmark discoveries in your team members’ fields (can be Nobel-winning or not)
  2. Classify each as hypothesis-driven, discovery-driven, or method-driven — and defend your choice
  3. Identify one discovery that doesn’t fit neatly into any category. Why not?
  4. Discuss: Is your field becoming more or less hypothesis-driven over time? What’s driving the shift?

This exercise forces students to apply the Nobel analysis framework to their own disciplines, rather than treating it as an abstract historical overview.

💬 Mentimeter Discussion (~15 min)

💡 Key Takeaways


Lecture 2: Research Framing — Hypothesis, Fundamental vs. Applied, & Word Choice

Goal: Learn why “hypothesis-driven” became synonymous with “fundamental research” in grant writing, where that association breaks down, and how to frame your work strategically without misrepresenting it.

📊 Presentations (~30 min)

✍️ In-Class Practice: Paragraph-Level Reframing (~35 min)

Use the interactive exercises built into the Word Choice presentation (Slides 7–8). Teams work on the realistic paragraph-level prompts:

💬 Mentimeter Quick Poll

💡 Key Takeaways


Lecture 3: Funding Agencies & Your First Research Narrative

Goal: Understand the key differences between NSF and NIH (and other agencies), then immediately apply that knowledge by drafting your first challenge–objective statement.

🏛️ Mini-Lecture: Know Your Audience (~20 min)

NSF vs. NIH at a Glance:

Feature | NSF | NIH
Review criteria | Intellectual Merit + Broader Impacts | Significance, Investigator(s), Innovation, Approach, Environment
Scoring | Qualitative (E/VG/G/F/P) | Numerical 1–9 (lower is better)
Proposal length | 15 pages (project description) | 12 pages (research strategy, R01)
Specific aims | Integrated into narrative | Separate 1-page document (critical)
Preliminary data | Helpful but not required | Essentially required for R01
Resubmission | No formal response to reviews | 1-page Introduction responding to prior reviews
Broader impacts | Required, weighted equally | Not a separate criterion
Fundamental vs. applied | Strongly favors fundamental framing | Accepts translational and clinical framing

Other key agencies (brief overview): DOE Office of Science, DARPA, private foundations (Sloan, Gates, CZI). Fellowships: NSF GRFP, NIH F31, Ford, Hertz.

Growing Convergence Research (GCR): NSF’s emphasis on deep integration across disciplines — not just collaboration, but disciplines reshaping each other. This connects to the team GCR proposal assignment in Lecture 8.

📚 Key References

✍️ Practice Session: Draft Your Challenge & Objectives (~60 min)

Individual writing exercise — students work on their own research, not hypothetical examples:

Step 1 — Draft a Research Challenge (20 min):

Using the “While…However” template:

While [broad area] is critical for [benefit/goal], a major challenge is [specific knowledge gap], which limits our ability to [achieve something important]. This gap exists because [current state] fails to [explain/account for phenomenon].

Write one paragraph (4–6 sentences) framing the core challenge of your dissertation research. If you don’t have a dissertation topic yet, frame a challenge from your lab’s recent work.

Step 2 — Draft 3 Research Objectives (15 min):

Following the three-objective framework:

  1. Foundational: Establish the core tool, method, or framework
  2. Mechanistic: Elucidate underlying mechanisms or test core hypotheses
  3. Application/Validation: Apply findings to demonstrate utility

Write one sentence each. Use strong action verbs (test, measure, establish, elucidate, characterize).

Step 3 — Peer Review in Pairs (15 min):

Swap with a partner from a different field. Each reviewer answers:

Step 4 — Revise and Submit (10 min):

Revise based on partner feedback. Submit on Blackboard for instructor review.

Why this matters: The Mentimeter survey showed that 10/14 students need a written proposal for their qualifying exams. This exercise produces a draft they can actually use, making it the most directly career-relevant activity in the module.


Lecture 4: Writing the Research Narrative — From Challenge to Objectives

Goal: Deepen narrative writing skills with field-specific best practices, the NIH specific aims page structure, and hands-on revision of the challenge statements drafted in Lecture 3.

📊 Presentation (~25 min)

📋 NIH Specific Aims Page (~15 min)

The 1-page specific aims document is arguably the most important page of an NIH application:

Paragraph 1 — The Hook: Open with the problem’s significance. Why does this matter?

Paragraph 2 — The Gap: What is unknown? What has been tried and why did it fail?

Paragraph 3 — Your Solution: Long-term goal, objective of this application, central hypothesis, and its basis.

The Aims (numbered): 2–3 specific, measurable aims with brief rationale for each.

Paragraph 4 — The Payoff: Expected outcomes and significance.

NIH Content — Placeholder for Development: This section will be expanded with:
• 2–3 real NIH specific aims pages (strong and weak) for in-class analysis
• Before/after comparison showing a weak aims page revised to be competitive
• NIH study section simulation exercise (scoring with 1–9 scale)
• Guest lecturer from an NIH-funded lab or study section member (TBD)
• Key differences in how NIH vs. NSF reviewers evaluate “significance” vs. “intellectual merit”
These materials will be developed in consultation with NIH-experienced faculty and may include a dedicated guest lecture session.

✍️ Revision Workshop: Strengthen Your Lecture 3 Drafts (~40 min)

Students receive written feedback on their Lecture 3 challenge/objectives submissions. Working in pairs:

Round 1 — Strengthen the Challenge (15 min):

Round 2 — Sharpen the Objectives (15 min):

Round 3 — The “Elevator Test” (10 min):

Each student reads their challenge + objectives aloud in 90 seconds. Partner answers: “What will you learn?” and “Why should I care?” If the partner can’t answer both, revise.

💡 Key Takeaways


Lecture 5: Proposal Components — Intellectual Merit, Broader Impacts & Case Studies

Goal: Understand what makes Intellectual Merit and Broader Impacts compelling through real case studies, then practice evaluating and writing these sections.

📊 Presentations (~30 min)

🎯 Team Activity: “Review Panel Simulation” (~40 min)

Teams role-play as an NSF review panel evaluating two short (1-page) proposal excerpts provided by the instructor:

Step 1 — Individual Review (10 min): Each student reads both excerpts and assigns ratings (Excellent / Very Good / Good / Fair / Poor) for Intellectual Merit and Broader Impacts separately. Write 2–3 sentences of justification for each rating.

Step 2 — Panel Discussion (15 min): Teams discuss as a panel. Appoint a “panel chair” who must synthesize the group’s views. Where do you agree? Where do you disagree? What would you tell the PI to improve?

Step 3 — Panel Summary and Share-Out (15 min): Each panel presents their consensus rating and the single most important strength and weakness they identified. Class compares how different panels rated the same proposals.

Why simulate a panel, not just review? Because the panel discussion is where proposals live or die. Students need to experience how individual ratings get negotiated into a group consensus — and how a champion or detractor can swing the outcome.
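For the Step 3 comparison across panels, instructors may want to tally ratings quickly. A minimal sketch, assuming ratings are collected as the five NSF qualitative labels (the `panel_summary` helper is illustrative, not part of the course materials):

```python
from collections import Counter

# NSF qualitative scale, ordered best to worst
SCALE = ["Excellent", "Very Good", "Good", "Fair", "Poor"]

def panel_summary(ratings):
    """Tally individual panel ratings and report the modal rating
    plus the spread (number of distinct ratings given)."""
    counts = Counter(ratings)
    # Ties break toward the better rating, since SCALE is ordered best-first
    modal = max(SCALE, key=lambda r: counts.get(r, 0))
    return {"counts": dict(counts), "modal": modal, "spread": len(counts)}

# Example: one panel's Intellectual Merit ratings for excerpt A
merit = ["Very Good", "Good", "Very Good", "Excellent", "Good"]
print(panel_summary(merit))
```

A spread of 1 means the panel agreed unanimously; a larger spread flags excerpts worth extra deliberation time, which is exactly where the "champion or detractor" dynamic shows up.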

📚 References

💡 Key Takeaways


Lecture 6: AI in the Research Enterprise

Goal: Examine AI’s role in science through its history, capabilities, limitations, and ethical implications — connecting the proposal module to the paper writing module that follows.

🎮 AI Quiz & History (~25 min)

Interactive Mentimeter quiz on AI history and facts (competitive, with leaderboard):

The lesson: AI hype cycles repeat. The rhetoric of the 1960s is nearly indistinguishable from 2020s discourse. This is a science communication case study in real time.

🧩 AI Winter Jigsaw Discussion (~20 min)

Using the Toosi et al. (2021) paper “A brief history of AI: how to prevent another winter”:

🤖 Mini-Debate: Can AI Generate Research Hypotheses? (~20 min)

Prompt with data: “GPT-4 generated 100 hypotheses in 3 hours; experts rated 40% as ‘plausible.’ A PhD student generates 5–10 hypotheses over 3 years.”

🔎 AI Detection Failure (~15 min)

Brief presentation of the AI detection results from the CAREER proposal reviews:

Connect to students’ survey data: trust in AI for funding decisions was 2.7/5; 13/18 preferred human-centric AI role.

🌉 Bridge to Module 4

Close with: “You’ve now seen AI generate hypotheses, write proposal reviews, evaluate proposals, and create research timelines. In the Ethics module, we ask: what are the ethical responsibilities when using these tools? The same tensions between capability and judgment apply, but the stakes are different.”

📚 Reading

💡 Key Takeaways


Lecture 7: Peer Review, Ethics & Responding to Critique

Goal: Experience the reviewer’s perspective, develop constructive review skills, and engage with ethical questions about AI, framing, and responsible communication in science.

📊 Case Study: AI Over-Values Structure (~15 min)

Discussion prompt: “If AI can’t reliably evaluate proposals, what CAN it usefully do in the review process?” Connect to the student survey showing 13/18 prefer human-centric AI roles.

✍️ Mock Panel Review (~45 min)

Teams conduct a formal mock panel review of a provided proposal excerpt (different from Lecture 5):

Step 1 — Individual Written Review (15 min): Using simplified NSF criteria, each student writes a structured review:

Step 2 — Panel Deliberation (15 min): Teams discuss as a panel. The panel chair must:

Step 3 — Writing the Panel Summary (15 min): Each team writes a 1-paragraph panel summary that captures their consensus and the key reasons. Submit on Blackboard.

💬 Ethics Discussion (~20 min)

Structured around the Caltech case study (José Andrade’s mechanics course redesign: “When knowledge is instantly available, judgment becomes the differentiator”):

📚 References

💡 Key Takeaways


Lecture 8: GCR Team Proposal Workshop

Goal: Apply everything from the module by developing a Growing Convergence Research (GCR) proposal as a team, then conducting cross-team peer review.

📋 GCR Proposal Assignment Overview (~10 min)

Each team develops a Growing Convergence Research proposal that integrates skills from the entire module:

Requirements:

What “convergence” means (vs. multidisciplinary or interdisciplinary):

 | Multidisciplinary | Interdisciplinary | Convergent
Structure | Disciplines work side by side | Disciplines integrate methods | Disciplines reshape each other
Example | A biologist and engineer share data | An engineer uses biological models | Biology and engineering co-create a new framework neither could conceive alone
GCR standard | No | Partial | Yes

✍️ Team Working Session (~40 min)

Teams draft their GCR proposals. Instructor circulates to provide feedback. Key checkpoints:

🔄 Cross-Team Peer Review (~30 min)

Each team reviews another team’s draft using a simplified rubric:

  1. Convergence Quality: Do the disciplines genuinely reshape each other, or is this just collaboration? (1–5)
  2. Challenge Clarity: Can I understand the knowledge gap without being in this field? (1–5)
  3. Objective Logic: Do the three objectives follow the “if-then” chain? (1–5)
  4. Framing Match: Is the language appropriate for the target directorate? (1–5)
  5. One specific suggestion for strengthening the proposal

Teams return reviews; original teams have 10 minutes to discuss and plan revisions.

Final submissions (revised based on peer review) are due on Blackboard by the following week.

💡 Key Takeaways



🏥 NIH Content Development Plan

Current status: The module’s interactive materials are primarily NSF-focused. NIH content exists as structural guidance (specific aims page format, comparison table) but lacks the depth of interactive materials available for NSF.

Planned development (for future iterations):

Priority 1 — Real NIH Specific Aims Pages: Collect 2–3 publicly shared specific aims pages (strong and weak) for in-class analysis. Many funded PIs share these on lab websites or through institutional resources. These would anchor Lecture 4’s NIH section, replacing the current text-only template.

Priority 2 — Before/After Aims Page Revision: Develop a case study showing a weak specific aims page revised to be competitive — parallel to the NSF CAREER 2018→2019 case study. Ideally from a real resubmission, anonymized with permission.

Priority 3 — NIH Study Section Simulation: Design a mock study section exercise where students score a proposal excerpt using NIH’s 1–9 scale across all five criteria (Significance, Investigator, Innovation, Approach, Environment). This would parallel the NSF panel simulation in Lectures 5 and 7.
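The arithmetic behind the simulation's scoring is simple and worth showing students explicitly. A minimal sketch, assuming the standard NIH convention that the overall impact score is the mean of panel members' 1–9 impact scores multiplied by 10 (the function name is illustrative):

```python
def overall_impact(scores):
    """NIH-style overall impact score: mean of panel members' 1-9
    impact scores, multiplied by 10 and rounded (range 10-90)."""
    if not scores or not all(1 <= s <= 9 for s in scores):
        raise ValueError("impact scores must be in the range 1-9")
    return round(sum(scores) / len(scores) * 10)

# Example: a five-member mock study section
print(overall_impact([2, 3, 2, 4, 3]))  # mean 2.8 -> 28
```

Lower is better throughout: an average of 2.8 yields 28, while an average of 5.0 yields 50. This inversion relative to most grading scales is a common point of confusion for students meeting NIH scoring for the first time.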

Priority 4 — Guest Lecturer: Invite an NIH-funded faculty member or current/former study section member to discuss how NIH review actually works in practice — especially the differences from NSF that can’t be captured in a comparison table (e.g., the role of the Scientific Review Officer, triage processes, payline politics).

Priority 5 — NIH-Specific Framing Guide: Develop an interactive presentation on NIH framing conventions — how “significance” differs from NSF’s “intellectual merit,” how to frame translational work, and how the innovation criterion is evaluated.


📚 Module Activities & Learning Objectives

Through this module, students develop skills in:

» Detailed assignment instructions, rubrics, and submission portals are available on the course Blackboard site.


📚 Additional Resources

