Majeed Kazemitabaar

I'm Majeed, a PhD candidate in Computer Science at the University of Toronto, working with Prof. Tovi Grossman. My research in Human-AI Interaction focuses on balancing productivity and cognitive engagement in AI-assisted programming.

My work spans a wide range of programmers, from novices learning to code to end-user programmers and professional developers. I design, build, and rigorously evaluate novel AI user experiences and interventions that empower programmers to maintain control and verify AI outputs effectively, while preventing over-reliance and skill degradation to ensure long-term productivity.

My research has led to 12 publications, mostly in top-tier HCI and CS Education venues (CHI, UIST, SIGCSE, and IDC), and my first-authored publications have won a Best Paper award and a Best Late-Breaking Work award.

Research Projects:

Design Space of Cognitive Engagement Techniques with AI-Generated Code

University of Toronto - as PhD Student


Novice programmers increasingly rely on Large Language Models (LLMs) to generate code when learning programming concepts. However, this interaction can lead to superficial engagement, giving learners an illusion of learning and hindering skill development. To address this issue, we conducted a systematic design exploration and developed seven cognitive engagement techniques aimed at promoting deeper engagement with AI-generated code. In this paper, we describe our design process, the initial seven techniques, and results from a between-subjects study (N=82). We then iteratively refined the top techniques and further evaluated them through a within-subjects study (N=42). We evaluated the friction each technique introduces, its effectiveness in helping learners apply concepts to isomorphic tasks without AI assistance, and its success in aligning learners' perceived and actual coding abilities. Ultimately, our results highlight the most effective technique: guiding learners through the step-by-step problem-solving process, where they engage in an interactive dialog with the AI, prompting what needs to be done at each stage before the corresponding code is revealed.

Exploring the Design Space of Cognitive Engagement Techniques with AI-Generated Code for Enhanced Learning

In Submission · arXiv

Majeed Kazemitabaar, Oliver Huang, Sangho Suh, Austin Z. Henley, Tovi Grossman

Improving Steering and Verification in AI-Assisted Data Analysis

Microsoft Research UK - as Research Intern


LLM-powered tools, such as ChatGPT Data Analysis, have the potential to help users tackle the challenging task of data analysis programming, which requires expertise in data processing, programming, and statistics. However, our formative study (n=15) uncovered serious challenges in verifying AI-generated results and steering the AI (i.e., guiding the system to produce the desired output). We developed two contrasting approaches to address these challenges. The first (Stepwise) decomposes the problem into step-by-step subgoals with pairs of editable assumptions and code until task completion, while the second (Phasewise) decomposes the entire problem into three editable, logical phases: structured input/output assumptions, execution plan, and code. A controlled, within-subjects experiment (n=18) compared these systems against a conversational baseline. Users reported significantly greater control with the Stepwise and Phasewise systems and found intervention, correction, and verification easier compared to the baseline. The results suggest design guidelines and trade-offs for AI-assisted data analysis tools.

Improving Steering and Verification in AI-Assisted Data Analysis with Interactive Task Decomposition

UIST 2024 · ACM Symposium on User Interface Software and Technology (Conditionally Accepted)

Majeed Kazemitabaar, Jack Williams, Ian Drosos, Tovi Grossman, Austin Henley, Carina Negreanu, Advait Sarkar

Deploying an LLM-based Coding Assistant in the Classroom

University of Toronto - as PhD Student


We developed CodeAid, an LLM-powered programming assistant designed to offer students timely and personalized feedback without directly revealing code solutions. This tool aids in answering conceptual questions, generating pseudo-code, and suggesting corrections for incorrect code. We deployed CodeAid in a large class of 700 students and conducted a thematic analysis of its 8,000 usages, supplemented by weekly surveys and student interviews. Further feedback was obtained from eight programming educators. Results showed that most students used CodeAid for understanding concepts and debugging, though some students directly asked for code solutions. While educators valued its educational merits, they also highlighted concerns about occasional inaccuracies and the potential for students to rely on tools like ChatGPT.


We concluded with four key design considerations for AI assistants in educational contexts, centered on the four main stages of a student's help-seeking process:

  • Exploiting Unique Advantages of AI: Decision to use the AI tool, emphasizing the unique advantages of AI over other resources.
  • Designing the AI Querying Interface: Query formulation, providing context, and balancing user-friendliness with meta-cognitive engagement.
  • Balancing the Directness of AI Responses: Nature of AI responses, managing directness, scaffolding type, and learning engagement.
  • Supporting Trust, Transparency and Control: Post-response actions, ensuring accuracy, trust, transparency, and control.
These stages highlight important trade-offs between usability, educational value, and the role of AI in learning.

CodeAid: Evaluating a Classroom Deployment of an LLM-based Programming Assistant that Balances Student and Educator Needs

CHI 2024 · ACM Conference on Human Factors in Computing Systems

Majeed Kazemitabaar, Runlong Ye, Xiaoning Wang, Austin Z. Henley, Paul Denny, Michelle Craig, Tovi Grossman

Studying LLM-Based Code Generators in K-12 Computing Education

University of Toronto - as PhD Student


We studied the impact of Large Language Model (LLM)-based code generators, such as OpenAI Codex, on novice programmers (ages 10-17). In a controlled experiment involving 69 novices working on 45 Python tasks, we found that using Codex led to a 1.15x increase in code-authoring completion rate and 1.8x higher scores, without diminishing manual code-modification capabilities. Interestingly, those with prior Codex exposure had slightly improved performance in evaluations a week later. A deeper dive into data from 33 participants who used Codex revealed various ways they interacted with the tool. We identified four coding strategies: AI Single Prompt, AI Step-by-Step, Hybrid, and Manual coding. The AI Single Prompt strategy yielded the highest correctness in code-authoring but struggled in code-modification tasks. Our findings highlighted both the potential and pitfalls of LLMs in educational settings, emphasizing the need for balanced integration and curriculum development.

Studying the effect of AI Code Generators on Supporting Novice Learners in Introductory Programming

CHI 2023 · ACM Conference on Human Factors in Computing Systems

Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J Ericson, David Weintrop, Tovi Grossman

How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment

Koli Calling 2023 · ACM Koli Calling Conference on Computing Education Research

Majeed Kazemitabaar, Xinying Hou, Austin Z. Henley, Barbara J Ericson, David Weintrop, Tovi Grossman

From Blocks to Text-based Programming

University of Toronto - as PhD Student


We designed CodeStruct, an intermediary programming environment that helps learners transition from block-based programming, like Scratch, to text-based languages such as Python. CodeStruct bridges the learning curve between the two paradigms, offering design features that significantly reduce completion times and help requests compared to a direct transition. In a study with 26 high school students, those using CodeStruct had a smoother transition with fewer data-type and syntax issues, especially when initially aided by a structured editor. Once they moved to an unstructured editor, their rate of syntax errors increased, though they still outperformed peers who transitioned directly.

GitHub: github.com/MajeedKazemi/code-struct

CodeStruct: Design and Evaluation of an Intermediary Programming Environment for Novices to Transition from Scratch to Python

IDC 2022 · ACM Conference on Interaction Design and Children

Majeed Kazemitabaar, Viktar Chyhir, David Weintrop, Tovi Grossman

Scaffolding Progress: How Structured Editors Shape Novice Errors When Transitioning from Blocks to Text

SIGCSE 2023 · ACM Technical Symposium on Computer Science Education

Majeed Kazemitabaar, Viktar Chyhir, David Weintrop, Tovi Grossman

Embedded Programming Development Environment

University of California, Berkeley - as Visiting Graduate Researcher


A key challenge in developing and debugging custom embedded systems is understanding their behavior, particularly at the boundary between hardware and software. Bifröst automatically instruments and captures the progress of the user's code, variable values, and the electrical and bus activity occurring at the interface between the processor and the circuit it operates in. This data is displayed in a linked visualization that allows navigation through time and program execution, enabling comparisons between variables in code and signals in circuits.

Bifröst: Visualizing and Checking Behavior of Embedded Systems across Hardware and Software

UIST 2017 · ACM Symposium on User Interface Software and Technology

Will McGrath, Daniel Drew, Jeremy Warner, Majeed Kazemitabaar, Mitchell Karchemsky, David Mellis, Björn Hartmann

Programming by Demonstration for Kids

Microsoft Research, Redmond - as Research Intern


GestureBlocks incorporates a demonstrate-edit-review Machine Learning pipeline for authoring sensor-based gestures into Microsoft MakeCode and allows novices to program behaviors using both data-driven and conventional paradigms.

GitHub: github.com/microsoft/pxt-gestures

GestureBlocks: A Gesture Recognition Toolkit for Children

ICER 2017 Workshop · Workshop on Learning about Machine Learning

Majeed Kazemitabaar, Rob DeLine

Interactive Wearables using Tangible Programming

University of Maryland - as MSc Student


Wearable construction kits have shown promise in attracting underrepresented groups to STEM and empowering users to create personally meaningful computational designs. These kits, however, require programming, circuit, and manual craft skills. To lower the barriers to entry and help young children create interactive wearables, I led a two-year iterative design process that included participatory design sessions with children, design probe sessions with STEM educators, and iteratively building and pilot-testing prototypes with children.


Informed by these experiences, we built MakerWear, a modular wearable construction kit focused on enabling children to leverage the richness of wearability: their changing environments, body movements, and social interactions. Our novel approach enabled children to program complex trigger-action behaviors using tangible modules. Our evaluations of MakerWear at multi-session workshops show that children (ages 5-10) successfully created a wide variety of wearable designs and actively applied computational thinking.

GitHub: github.com/MajeedKazemi/MakerWear

MakerWear: A Tangible Approach to Interactive Wearable Creation for Children

CHI 2017 · ACM Conference on Human Factors in Computing Systems

Majeed Kazemitabaar, Jason McPeak, Alexander Jiao, Liang He, Thomas Outing, Jon E Froehlich

Best Paper Award

ReWear: Early Explorations of a Modular Wearable Construction Kit for Young Children

CHI 2016 LBW · ACM Conference on Human Factors in Computing Systems

Majeed Kazemitabaar, Liang He, Katie Wang, Chloe Aloimonos, Tony Cheng, Jon Froehlich

Best LBW Paper Award

MakerShoe: Towards a Wearable E-Textile Construction Kit to Support Creativity, Playful Making, and Self-Expression

IDC 2015 Demo · ACM Conference on Interaction Design and Children

Majeed Kazemitabaar, Leyla Norooz, Mona Leigh Guha, Jon Froehlich

Exploring Example Code Usage by Programmers

Sharif University of Technology - as Undergraduate Researcher


When programmers face new frameworks, they usually rely on example code to learn the API and accomplish their tasks. This work investigates and analyzes the activities programmers perform when using such example code to complete tasks.


Activities performed by programmers while using framework examples as a guide

SAC 2014 · ACM Symposium on Applied Computing

Reihane Boghrati, Abbas Heydarnoori, Majeed Kazemitabaar

Mentorships

Carl Ma

University of Toronto

Justin Chow

University of Toronto

Viktar Chyhir

University of Toronto

Jason McPeak

University of Maryland

Alexander Jiao

University of Maryland

Katie Wang

University of Maryland