AI Prompting for Researchers
Getting useful results from AI with the RBTF framework
What we will cover
This workshop addresses why most people get mediocre results from AI and how to fix that. Specifically, we will look at:
- A simple framework to get better results from your prompts
- Why iteration beats perfection
- Common mistakes and how to avoid them
The Problem
The typical experience
What most people type:
“Help me with my literature review”
What the AI returns:
“A literature review is a comprehensive survey of scholarly sources on a specific topic. Here are some general steps: 1. Define your research question… 2. Search databases… 3. Evaluate sources…”
This is correct but useless — generic advice you could find on any study skills website.
Why does this happen?
The AI does not know who you are — whether you are a first-year student or a PhD candidate. It has no idea what you already have, be it 5 papers or 50, nor does it know what you actually need — a structure, a gap analysis, or a synthesis. And without being told, it cannot guess how you want the output — as a table, a paragraph, or bullet points.
The core problem
Without context, AI gives you the average answer for everyone — which helps no one.
The quality of the output is directly proportional to the quality of the input. This is true for every AI tool, whether it is ChatGPT, Claude, Gemini, or any other system.
Context and iteration
Context matters most
The more relevant detail you provide, the better the response. Think of it as briefing a new research assistant — they are brilliant but know nothing about your specific project.
Iteration beats perfection
Your first prompt does not need to be perfect. Treat AI conversations as a dialogue, not a one-shot query. Refine, redirect, and build on each response.
You would never hand a new colleague a one-sentence instruction and expect perfect results. Treat AI the same way.
The RBTF Framework
The framework: Role, background, task, format
| Component | Question to Ask Yourself | Example |
|---|---|---|
| Role | Who should the AI act as? | “Act as a methodologist in social science research” |
| Background | What context does it need? | “I am writing my Master’s thesis on climate policy in the EU” |
| Task | What exactly should it do? | “Identify gaps in my literature on carbon pricing mechanisms” |
| Format | How should the output look? | “Present this as a table with columns: gap, relevant authors, potential research questions” |
Why RBTF works
Each component serves a specific purpose:
- Role — by assigning a specific expertise, you shape the depth and vocabulary of the response, so the AI draws on the right domain knowledge instead of giving a generalist answer.
- Background — anchoring the prompt in your specific situation (your field, your data, your progress) prevents the AI from making assumptions that miss the mark.
- Task — a precisely stated task eliminates ambiguity, so the AI focuses on exactly what you need rather than guessing which of several possible requests you meant.
- Format — specifying the output structure (table, bullet list, APA paragraph) means you get something immediately usable instead of spending time reformatting.
You do not always need all four components. But the more you include, the better the result will be.
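The same four-part structure can be applied programmatically by researchers who query AI models through an API. A minimal sketch, assuming a hypothetical helper (the function name and wording are illustrative, not part of any library):

```python
# Hypothetical helper: assemble the four RBTF components into one prompt string.
def rbtf_prompt(role: str, background: str, task: str, fmt: str) -> str:
    """Combine Role, Background, Task, and Format into a single prompt."""
    return (
        f"Act as {role}. "
        f"{background} "
        f"{task} "
        f"{fmt}"
    )

prompt = rbtf_prompt(
    role="a methodologist in social science research",
    background="I am writing my Master's thesis on climate policy in the EU.",
    task="Identify gaps in my literature on carbon pricing mechanisms.",
    fmt=("Present this as a table with columns: gap, relevant authors, "
         "potential research questions."),
)
print(prompt)
```

The design point is simply that each component is an explicit, named slot: leaving one empty makes the omission visible, which mirrors the advice above.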
Context in action
“Explain machine learning”
AI Response: “Machine learning is a subset of artificial intelligence that enables systems to learn from data and improve over time without being explicitly programmed. There are three main types: supervised learning, unsupervised learning, and reinforcement learning…”
A textbook answer. Fine if you are a complete beginner, but not tailored to anyone.
Now compare that with a prompt that includes context:
“I am a social scientist studying voting behaviour. I have no programming background, but I need to understand how machine learning could help me predict election turnout from demographic data. Explain the relevant concepts I would need, avoiding technical jargon, and suggest which ML approach would suit my small dataset of 2,000 observations.”
This prompt gives the AI enough context to produce a useful, specific response — tailored to the researcher’s discipline, skill level, and dataset.
Iteration in practice
Refining your results
Your first response will rarely be exactly what you need. That is normal. Use follow-up prompts to steer the conversation.
| When you get… | Say this… |
|---|---|
| Too broad | “Focus specifically on [X] and go deeper” |
| Too technical | “Explain this for someone without a background in [field]” |
| Too superficial | “Provide more detail, especially regarding [aspect]” |
| Wrong angle | “Approach this from the perspective of [discipline/method]” |
| Missing something | “Also consider [factor/variable/theory]” |
| Too long | “Condense this to the 3 most important points” |
Iteration example
Prompt 1: “I need help choosing a statistical test for my research.”
Prompt 2: “I have survey data from 200 respondents. My dependent variable is satisfaction (measured on a 5-point Likert scale) and I want to compare results across 3 age groups. I am using SPSS.”
Prompt 3: “My data is not normally distributed according to the Shapiro-Wilk test. What non-parametric alternative should I use, and how do I run it in SPSS? Also, how should I report the results in APA format?”
Prompt 1 would get a generic overview of statistical tests. Prompt 2 gives enough context for a specific recommendation. Prompt 3 addresses a real complication and asks for usable output. Each iteration builds on the previous response, turning a vague question into a concrete answer because the AI retains context from the full conversation.
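For this scenario, the standard recommendation the final prompt would elicit is the Kruskal-Wallis H test, the non-parametric alternative to one-way ANOVA for comparing three or more groups. The prompt asks for SPSS, but the underlying analysis can be sketched in Python with scipy (the group data below is invented purely for illustration):

```python
# Kruskal-Wallis H test: non-parametric comparison of 3+ independent groups,
# suitable for ordinal data such as 5-point Likert satisfaction scores.
from scipy.stats import kruskal

# Hypothetical satisfaction scores for three age groups (illustrative only)
young = [4, 5, 3, 4, 5, 4, 3, 5]
middle = [3, 3, 4, 2, 3, 4, 3, 2]
older = [2, 3, 2, 1, 3, 2, 2, 3]

h_stat, p_value = kruskal(young, middle, older)

# Degrees of freedom = number of groups - 1; an APA-style report follows
# the pattern H(df) = statistic, p = value.
print(f"H(2) = {h_stat:.2f}, p = {p_value:.3f}")
```

In SPSS the equivalent path is Analyze → Nonparametric Tests → Independent Samples; the exact answer, including post-hoc comparisons, is the kind of detail the iterative conversation would fill in.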
From lazy prompt to RBTF prompt
“Help me with my literature review on climate change”
Result: A generic list of well-known climate change topics, basic search strategies, and textbook advice about synthesising sources. Nothing specific to your actual research.
Now apply RBTF to the same topic:
Role: “Act as an expert in environmental economics and systematic review methodology.”
Background: “I am a Master’s student at University of Hamburg writing my thesis on the effectiveness of carbon pricing mechanisms in the EU. I have already collected 35 papers focused on the EU Emissions Trading System from 2018–2025.”
Task: “Analyse the common themes across my collected literature and identify 3 potential research gaps that could form the basis of my thesis contribution.”
Format: “Present the themes as a structured table with columns: theme, findings, authors, and identified gap. Then provide a brief paragraph for each gap explaining why it matters.”
Without RBTF
- Generic advice that could apply to anyone
- No actionable output
- Requires extensive rework
With RBTF
- Tailored to your project and field
- Immediately usable structure
- Builds on your existing work
The RBTF prompt takes 2 minutes longer to write but saves hours of back-and-forth.
Wrap-up
Mistakes to avoid
- Stopping after the first answer. Always iterate: the first output is a starting point, not a final answer. Push the AI to go deeper, be more specific, or reconsider its approach.
- Being vague. “Give me some ideas” produces a generic list. “Give me 5 ideas, each with a one-sentence description and a feasibility rating from 1–5” produces something you can actually work with.
- Trusting without verifying. AI can produce confident-sounding but incorrect information. Always verify claims, citations, statistics, and dates against primary sources. Never cite an AI-generated reference without checking it exists.
What to remember
These are the key principles to take away:
- Context matters — the more relevant detail you provide, the better the output
- Use the RBTF Framework — Role, Background, Task, Format
- Iterate — treat it as a conversation, not a single query
- Specify your format — tell the AI exactly how you want the output
- Always verify — AI is a useful assistant, not an oracle
AI does not replace your expertise. It amplifies it. The better researcher you are, the better results you will get from AI tools.
Pick one task from your current research — a paragraph to draft, a method to choose, a source to summarise — and try using the RBTF Framework. Compare the result to what you would get from a one-line prompt.