The Pictorial Trapezoid

Adapting McCloud’s Big Triangle for Creative Semiotic Precision in Generative Text-to-Image AI

DOI:

https://doi.org/10.34314/9q63c849

Abstract

Generative AI is rapidly being adopted in diverse research contexts that, given the specificity of theoretical frameworks and research objectives, require a high degree of semiotic precision in AI output. With text-to-image generative models, both the selection of subject matter and subsequent stylistic variation have the potential to influence measurable desired outcomes. A major challenge in using generative models in design research is achieving fidelity between a visual representation and the concept it must convey. Scott McCloud’s Big Triangle categorizes a broad range of stylistic variation in visual representation, largely based on comic art. We extend McCloud’s work with a more systematically described framework, the Pictorial Trapezoid, which offers greater control in producing new pictures with generative AI. We provide a case study of the process by which we developed the Pictorial Trapezoid and demonstrate its efficacy in two additional research use cases. In each case we differentiate project-specific criteria for selecting what is represented and for visualizing that selection. Finally, we describe how an AI might be trained for semiotic precision in distinct research contexts using the Pictorial Trapezoid.

Author Biographies

  • Matthew Peterson, North Carolina State University

    Matthew Peterson, PhD, is Associate Professor of Graphic & Experience Design at North Carolina State University. His research focuses on visual representation, especially in the development of novel interfaces and environments. He integrates design into other disciplines, with publications, projects, and proposals in collaboration with experts in STEM education, engineering, psychology, advertising, biology, physics, and data science. Peterson is also engaged in advocating for and describing a more rigorous design discipline through publications and presentations.

  • Ashley L. Anderson, North Carolina State University

Ashley L. Anderson is a PhD candidate in Design at North Carolina State University, focusing on issues of visual representation, visual metaphor, generative AI, and design for social psychological intervention. Prior to entering the PhD program, Anderson earned her Master of Graphic Design at NC State. Her current dissertation work evaluates the efficacy of mediated rescripting, a picture-based intervention designed to improve belonging for Black undergraduate engineering students.

  • Kayla Rondinelli, North Carolina State University

Kayla Rondinelli is a UX designer, graphic illustrator, and artist based in Raleigh, NC. She is a Master of Graphic & Experience Design student at North Carolina State University, where she works as a graduate research assistant in the College of Design; she also works as a freelance graphic designer. Her interest in reducing the carbon impact of the built environment has led her to focus on sustainable design practices. Rondinelli hopes her work provides a platform for conversations concerning environmental conservation, equitable distribution of the burdens of climate change, and preservation of the natural world.

  • Helen Armstrong, North Carolina State University

    Helen Armstrong is Professor of Graphic & Experience Design at North Carolina State University, where she is director of the MGXD program. Her research focuses on accessible design, digital rights, and machine learning. Her books include Graphic Design Theory, Digital Design Theory, Participate, and Big Data, Big Design: Why Designers Should Care About Artificial Intelligence. Armstrong is a past member of the AIGA National Board of Directors, the editorial board of Design and Culture, and a former co-chair of the AIGA Design Educators Community. Her work has been recognized by Print and HOW, and included in numerous publications in the U.S. and the U.K. Armstrong is the proud mom of a kid with disabilities and a fierce advocate for designing interfaces and experiences that are inclusive and intelligent.

Published

2025-05-28

Section

Journal Article