This module provides training in research proposal development and evaluation. We begin with the research landscape — how breakthroughs happen and how funding works — then build proposal writing skills through practice, case studies, and peer review, before examining how AI is reshaping the research enterprise.
Student Profile (Mentimeter survey, n≈18): Most students are Year 2–3, preparing for qualifying exams. 10/14 require a written proposal; 8/16 do a formal proposal defense. Top concerns: novelty, clarity, and feasibility. Priority sections: Background & Significance (9), Research Design/Methods (9), Specific Aims/Hypotheses (7). Confidence in proposal writing: 3.4/5.
| # | Lecture | Key Topics |
|---|---|---|
| 1 | The Research Landscape | Nobel Prize discovery patterns, hypothesis vs. serendipity |
| 2 | Research Framing & Word Choice | Hypothesis-driven framing, fundamental vs. applied, NSF directorate culture |
| 3 | Funding Agencies & Your First Research Narrative | NSF vs. NIH, drafting challenge & objectives |
| 4 | Writing the Research Narrative | Field-specific best practices, NIH specific aims, revision workshop |
| 5 | Intellectual Merit, Broader Impacts & Case Studies | NSF review criteria, CAREER case study, review panel simulation |
| 6 | AI in the Research Enterprise | AI history, hypothesis generation debate, AI detection failure |
| 7 | Peer Review, Ethics & Responding to Critique | Mock panel review, AI evaluation limits, ethical framing |
| 8 | GCR Team Proposal Workshop | Growing Convergence Research proposal drafting & cross-team review |
Two threads build across the module — one individual, one team-based:
Individual: Research Narrative Draft
| Lecture | Milestone |
|---|---|
| 3 | Draft challenge statement (While…However) + 3 research objectives |
| 3 | Peer review in pairs (cross-discipline) |
| 4 | Revision workshop: strengthen challenge & sharpen objectives |
| 4 | “Elevator Test” — 90-second pitch of your narrative |
This produces a draft challenge/objectives statement students can use for their qualifying exam proposals.
Team: GCR Convergence Proposal
| Lecture | Milestone |
|---|---|
| 8 | Draft convergent research question + challenge + objectives + IM/BI |
| 8 | Cross-team peer review using simplified rubric |
| Post-L8 | Final revised submission on Blackboard |
Goal: Before writing proposals, understand what the research enterprise actually looks like — how discoveries are made, what patterns exist across fields, and why this matters for how you frame your own work.
Analysis of Physics, Chemistry, and Physiology/Medicine Nobel Prizes (1901–2025). Key findings: Physiology/Medicine has the highest rate of serendipitous discoveries (~50%); Physics is most theory-driven (~45–50% hypothesis-driven); Chemistry has the strongest tool-building tradition. All fields are shifting toward more hypothesis-driven research over time.
Each team applies the Nobel analysis framework to discoveries in its own discipline (15 min discussion + 15 min share-out).
This grounds the framework in students’ own fields, rather than leaving it as an abstract historical overview.
Goal: Learn why “hypothesis-driven” became synonymous with “fundamental research” in grant writing, where that association breaks down, and how to frame your work strategically without misrepresenting it.
Analysis of 494 funded grant abstracts (2024–2025) across BIO, PHY, CMMI, CHE, and CBET: physics abstracts favor measurement (47%), biology and chemistry favor hypothesis testing (~50–60%), and engineering emphasizes interdisciplinary integration (88% of CMMI abstracts). Open with this to show students what funded proposals actually look like.
The critical connection between hypothesis-driven and fundamental research. Includes word-choice guidance that corrects a common myth (no word is “forbidden”; context is everything), the 2×2 matrix (hypothesis vs. discovery × fundamental vs. applied), and the NSF quote explicitly endorsing optimization and control as research topics.
Use the interactive exercises built into the Word Choice presentation (Slides 7–8); teams work through its realistic paragraph-level prompts.
Goal: Understand the key differences between NSF and NIH (and other agencies), then immediately apply that knowledge by drafting your first challenge–objective statement.
NSF vs. NIH at a Glance:
| Feature | NSF | NIH |
|---|---|---|
| Review criteria | Intellectual Merit + Broader Impacts | Significance, Investigator(s), Innovation, Approach, Environment |
| Scoring | Qualitative (E/VG/G/F/P) | Numerical 1–9 (lower is better) |
| Proposal length | 15 pages (project description) | 12 pages (research strategy, R01) |
| Specific aims | Integrated into narrative | Separate 1-page document (critical) |
| Preliminary data | Helpful but not required | Essentially required for R01 |
| Resubmission | No formal response to reviews | 1-page Introduction responding to prior reviews |
| Broader impacts | Required, weighted equally | Not a separate criterion |
| Fundamental vs. applied | Strongly favors fundamental framing | Accepts translational and clinical framing |
Other key agencies (brief overview): DOE Office of Science, DARPA, private foundations (Sloan, Gates, CZI). Fellowships: NSF GRFP, NIH F31, Ford, Hertz.
Growing Convergence Research (GCR): NSF’s emphasis on deep integration across disciplines — not just collaboration, but disciplines reshaping each other. This connects to the team GCR proposal assignment in Lecture 8.
Individual writing exercise — students work on their own research, not hypothetical examples:
Step 1 — Draft a Research Challenge (20 min):
Using the “While…However” template:
While [broad area] is critical for [benefit/goal], a major challenge is [specific knowledge gap], which limits our ability to [achieve something important]. This gap exists because [current state] fails to [explain/account for phenomenon].
Write one paragraph (4–6 sentences) framing the core challenge of your dissertation research. If you don’t have a dissertation topic yet, frame a challenge from your lab’s recent work.
Step 2 — Draft 3 Research Objectives (15 min):
Following the three-objective framework, write one sentence per objective. Use strong action verbs (test, measure, establish, elucidate, characterize).
Step 3 — Peer Review in Pairs (15 min):
Swap drafts with a partner from a different field. Each reviewer answers a short set of guided questions covering the survey’s top concerns: novelty, clarity, and feasibility.
Step 4 — Revise and Submit (10 min):
Revise based on partner feedback. Submit on Blackboard for instructor review.
Why this matters: the Mentimeter survey showed that 10 of 14 students need a written proposal for qualifying exams. This exercise produces a draft they can actually use, making it the most directly career-relevant activity in the module.
Goal: Deepen narrative writing skills with field-specific best practices, the NIH specific aims page structure, and hands-on revision of the challenge statements drafted in Lecture 3.
Field-specific best practices from funded NSF proposals (BIO, PHY, CMMI, CBET, CHE). Covers the three-objective framework in depth with real examples, quantifying ambition, the “if-then” logic connecting challenge to objectives, and common mistakes to avoid.
The one-page Specific Aims document is arguably the most important page of an NIH application:
Paragraph 1 — The Hook: Open with the problem’s significance. Why does this matter?
Paragraph 2 — The Gap: What is unknown? What has been tried and why did it fail?
Paragraph 3 — Your Solution: Long-term goal, objective of this application, central hypothesis, and its basis.
The Aims (numbered): 2–3 specific, measurable aims with brief rationale for each.
Paragraph 4 — The Payoff: Expected outcomes and significance.
NIH Content — Placeholder for Development: This section will be expanded with:
• 2–3 real NIH specific aims pages (strong and weak) for in-class analysis
• Before/after comparison showing a weak aims page revised to be competitive
• NIH study section simulation exercise (scoring with 1–9 scale)
• Guest lecturer from an NIH-funded lab or study section member (TBD)
• Key differences in how NIH vs. NSF reviewers evaluate “significance” vs. “intellectual merit”
These materials will be developed in consultation with NIH-experienced faculty and may include a dedicated guest lecture session.
Students receive written feedback on their Lecture 3 challenge/objectives submissions. Working in pairs:
Round 1 — Strengthen the Challenge (15 min): pairs revise each challenge statement in light of the written instructor feedback.
Round 2 — Sharpen the Objectives (15 min): pairs tighten each objective into a single sentence with a strong action verb and a clear, measurable outcome.
Round 3 — The “Elevator Test” (10 min):
Each student reads their challenge + objectives aloud in 90 seconds. Partner answers: “What will you learn?” and “Why should I care?” If the partner can’t answer both, revise.
Goal: Understand what makes Intellectual Merit and Broader Impacts compelling through real case studies, then practice evaluating and writing these sections.
NSF’s two review criteria in depth. Side-by-side comparisons of weak vs. strong approaches. Key insight: the strongest proposals weave IM and BI together, rather than treating them as separate sections.
Two versions of the same CAREER proposal (2018 unfunded → 2019 funded). The “irregular lattice = FEM” proof was the killer preliminary result. Key lesson: scientific substance beats structural polish.
Teams role-play as an NSF review panel evaluating two short (1-page) proposal excerpts provided by the instructor:
Step 1 — Individual Review (10 min): Each student reads both excerpts and assigns ratings (Excellent / Very Good / Good / Fair / Poor) for Intellectual Merit and Broader Impacts separately. Write 2–3 sentences of justification for each rating.
Step 2 — Panel Discussion (15 min): Teams discuss as a panel. Appoint a “panel chair” who must synthesize the group’s views. Where do you agree? Where do you disagree? What would you tell the PI to improve?
Step 3 — Panel Summary and Share-Out (15 min): Each panel presents their consensus rating and the single most important strength and weakness they identified. Class compares how different panels rated the same proposals.
Why simulate a panel, not just review? Because the panel discussion is where proposals live or die. Students need to experience how individual ratings get negotiated into a group consensus — and how a champion or detractor can swing the outcome.
Goal: Examine AI’s role in science through its history, capabilities, limitations, and ethical implications — connecting the proposal module to the paper writing module that follows.
Interactive Mentimeter quiz on AI history and facts (competitive, with a leaderboard).
The lesson: AI hype cycles repeat. The rhetoric of the 1960s is nearly indistinguishable from 2020s discourse. This is a science communication case study in real time.
Discussion built around the Toosi et al. (2021) paper “A brief history of AI: how to prevent another winter.”
Hypothesis-generation debate. Prompt with data: “GPT-4 generated 100 hypotheses in 3 hours; experts rated 40% as ‘plausible.’ A PhD student generates 5–10 hypotheses over 3 years.”
Brief presentation of the AI detection results from the CAREER proposal reviews:
Three AI models achieved only 0–20% accuracy at identifying the AI-generated reviews. The “angry reviewer” persona (AI-generated) fooled all detectors. Key implication: detection is unreliable; focus on demonstrating understanding instead.
Connect to the students’ survey data: trust in AI for funding decisions was 2.7/5, and 13/18 preferred human-centric AI roles.
Close with: “You’ve now seen AI generate hypotheses, write proposal reviews, evaluate proposals, and create research timelines. In the Ethics module, we ask: what are the ethical responsibilities when using these tools? The same tensions between capability and judgment apply, but the stakes are different.”
Goal: Experience the reviewer’s perspective, develop constructive review skills, and engage with ethical questions about AI, framing, and responsible communication in science.
AI models attempted to evaluate which CAREER proposal version was stronger — and got it wrong. AI over-valued structural polish (“modular, reviewer-friendly”) and under-valued the scientific substance (preliminary data, demonstrated feasibility).
Discussion prompt: “If AI can’t reliably evaluate proposals, what CAN it usefully do in the review process?” Connect to the student survey showing 13/18 prefer human-centric AI roles.
Teams conduct a formal mock panel review of a provided proposal excerpt (different from Lecture 5):
Step 1 — Individual Written Review (15 min): Using simplified NSF criteria, each student writes a structured review of the excerpt.
Step 2 — Panel Deliberation (15 min): Teams discuss as a panel, and the panel chair must synthesize the individual reviews into a group consensus.
Step 3 — Writing the Panel Summary (15 min): Each team writes a 1-paragraph panel summary that captures their consensus and the key reasons. Submit on Blackboard.
The ethics discussion is structured around the Caltech case study (José Andrade’s mechanics course redesign: “When knowledge is instantly available, judgment becomes the differentiator”).
Goal: Apply everything from the module by developing a Growing Convergence Research (GCR) proposal as a team, then conducting cross-team peer review.
Each team develops a Growing Convergence Research proposal that integrates skills from the entire module.
Requirements: a convergent research question, a challenge statement, research objectives, and Intellectual Merit/Broader Impacts statements (the Lecture 8 milestones listed above).
What “convergence” means (vs. multidisciplinary or interdisciplinary):
| | Multidisciplinary | Interdisciplinary | Convergent |
|---|---|---|---|
| Structure | Disciplines work side by side | Disciplines integrate methods | Disciplines reshape each other |
| Example | A biologist and engineer share data | An engineer uses biological models | Biology and engineering co-create a new framework neither could conceive alone |
| GCR standard | ✗ | Partial | ✓ |
Teams draft their GCR proposals while the instructor circulates to provide feedback at key checkpoints.
Each team reviews another team’s draft using the simplified rubric available on Blackboard.
Teams return reviews; original teams have 10 minutes to discuss and plan revisions.
Final submissions (revised based on peer review) are due on Blackboard by the following week.
Current status: The module’s interactive materials are primarily NSF-focused. NIH content exists as structural guidance (specific aims page format, comparison table) but lacks the depth of interactive materials available for NSF.
Planned development (for future iterations):
Priority 1 — Real NIH Specific Aims Pages: Collect 2–3 publicly shared specific aims pages (strong and weak) for in-class analysis. Many funded PIs share these on lab websites or through institutional resources. These would anchor Lecture 4’s NIH section, replacing the current text-only template.
Priority 2 — Before/After Aims Page Revision: Develop a case study showing a weak specific aims page revised to be competitive — parallel to the NSF CAREER 2018→2019 case study. Ideally from a real resubmission, anonymized with permission.
Priority 3 — NIH Study Section Simulation: Design a mock study section exercise where students score a proposal excerpt using NIH’s 1–9 scale across all five criteria (Significance, Investigator, Innovation, Approach, Environment). This would parallel the NSF panel simulation in Lectures 5 and 7.
Priority 4 — Guest Lecturer: Invite an NIH-funded faculty member or current/former study section member to discuss how NIH review actually works in practice — especially the differences from NSF that can’t be captured in a comparison table (e.g., the role of the Scientific Review Officer, triage processes, payline politics).
Priority 5 — NIH-Specific Framing Guide: Develop an interactive presentation on NIH framing conventions — how “significance” differs from NSF’s “intellectual merit,” how to frame translational work, and how the innovation criterion is evaluated.
Through this module, students develop skills in framing research challenges, drafting narratives and objectives, evaluating proposals against NSF and NIH review criteria, conducting and responding to peer review, and using AI critically in the research enterprise.
» Detailed assignment instructions, rubrics, and submission portals are available on the course Blackboard site.