|
### USER |
|
Many of my game design students vocally object to the use of Generative AI, but I suspect they quietly disagree about their reasons for doing so. Let's map out the space of concerns.
|
|
|
### ASSISTANT |
|
# Mapping the Space of Generative AI Concerns in Game Design |
|
|
|
You're right that "I object to GenAI" likely masks a heterogeneous coalition. Here's a framework for disaggregating those concerns: |
|
|
|
## Economic/Labor Concerns |
|
- **Job displacement** - "This will eliminate entry-level positions" |
|
- **Wage suppression** - "Even if jobs exist, leverage shifts to employers" |
|
- **Skill devaluation** - "Why did I spend years learning this craft?" |
|
|
|
## Ethical/Justice Concerns |
|
- **Training data consent** - "Artists didn't agree to this" |
|
- **Compensation** - "Value extraction without payment" |
|
- **Attribution erasure** - "Laundering style without credit" |
|
|
|
## Craft/Quality Concerns |
|
- **Aesthetic homogenization** - "Everything converges to average" |
|
- **Loss of intentionality** - "Design is problem-solving, not generation" |
|
- **Technical debt** - "Output requires extensive cleanup anyway" |
|
|
|
## Pedagogical Concerns |
|
- **Skill atrophy** - "Students won't develop foundational abilities" |
|
- **Process matters** - "Learning happens in the struggle" |
|
|
|
## Identity/Cultural Concerns |
|
- **What it means to be a designer** - existential threat to self-concept |
|
- **Community solidarity** - using AI feels like crossing a picket line |
|
|
|
--- |
|
|
|
**The diagnostic question:** Which of these concerns would be resolved if training data were consensually licensed and artists compensated? Those who answer "most of them" differ fundamentally from those who answer "almost none."
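The diagnostic question can be made concrete by encoding the taxonomy as data and treating "would consent + compensation resolve this?" as a filter. The sketch below is purely illustrative: the category names mirror the map above, but the boolean judgments are debatable placeholders for discussion (here, only the ethical/justice concerns are marked as resolved by consent and pay), not empirical claims.

```python
# Illustrative sketch: the concern map as data, with the diagnostic
# question expressed as a filter. The `resolved_by_consent_and_pay`
# flags are placeholder judgments meant to be argued over in class.
from dataclasses import dataclass


@dataclass(frozen=True)
class Concern:
    category: str
    name: str
    resolved_by_consent_and_pay: bool  # placeholder judgment, not a claim


CONCERNS = [
    Concern("economic/labor", "job displacement", False),
    Concern("economic/labor", "wage suppression", False),
    Concern("economic/labor", "skill devaluation", False),
    Concern("ethical/justice", "training data consent", True),
    Concern("ethical/justice", "compensation", True),
    Concern("ethical/justice", "attribution erasure", True),
    Concern("craft/quality", "aesthetic homogenization", False),
    Concern("craft/quality", "loss of intentionality", False),
    Concern("craft/quality", "technical debt", False),
    Concern("pedagogical", "skill atrophy", False),
    Concern("pedagogical", "process matters", False),
    Concern("identity/cultural", "designer self-concept", False),
    Concern("identity/cultural", "community solidarity", False),
]


def unresolved(concerns):
    """Concerns that survive even consensual, compensated training."""
    return [c for c in concerns if not c.resolved_by_consent_and_pay]


# The "residue" is what remains to argue about once the consent/compensation
# objections are (hypothetically) answered.
residue = unresolved(CONCERNS)
```

Under these placeholder judgments, ten of the thirteen concerns survive the hypothetical, spanning every category except ethical/justice; students can be asked to flip individual flags and defend the change, which surfaces exactly the disagreement the diagnostic question is probing.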
|
|
|
Want me to expand on any of these categories?
|
|
|
### USER |
|
Include https://arxiv.org/pdf/2504.09779 |
|
|
|
### ASSISTANT |
|
<details type="tool_calls" done="true" id="toolu_bdrk_01KJ8m6r6D5w93cwCJkvWwBW" name="use_jina_reader_api" arguments=""{\"url\": \"https://arxiv.org/pdf/2504.09779\"}"" result=""Title: arXiv:2504.09779v1 [cs.CY] 14 Apr 2025\n\nURL Source: https://arxiv.org/pdf/2504.09779\n\nPublished Time: Tue, 15 Apr 2025 01:19:23 GMT\n\nNumber of Pages: 8\n\nMarkdown Content:\n> arXiv:2504.09779v1 [cs.CY] 14 Apr 2025\n\n# “All Roads Lead to ChatGPT”: How Generative AI is Eroding Social Interactions and Student Learning Communities \n\n# Irene Hou \n\nTemple University Philadelphia, PA, USA ihou@ucsd.edu \n\n# Owen Man \n\nTemple University Philadelphia, PA, USA owen.man@temple.edu \n\n# Kate Hamilton \n\nTemple University Philadelphia, PA, USA kate.hamilton@temple.edu \n\n# Srishty Muthusekaran \n\nTemple University Philadelphia, PA, USA srishty.muthusekaran@temple.edu \n\n# Jeffin Johnykutty \n\nTemple University Philadelphia, PA, USA jeffin.johnykutty@temple.edu \n\n# Leili Zadeh \n\nTemple University Philadelphia, PA, USA leili.zadeh@temple.edu \n\n# Stephen MacNeil \n\nTemple University Philadelphia, PA, USA stephen.macneil@temple.edu \n\n## ABSTRACT \n\nThe widespread adoption of generative AI is already impacting learning and help-seeking. While the benefits of generative AI are well-understood, recent studies have also raised concerns about in-creased potential for cheating and negative impacts on students’ metacognition and critical thinking. However, the potential im-pacts on social interactions, peer learning, and classroom dynam-ics are not yet well understood. To investigate these aspects, we conducted 17 semi-structured interviews with undergraduate com-puting students across seven R1 universities in North America. Our findings suggest that help-seeking requests are now often medi-ated by generative AI. For example, students often redirected ques-tions from their peers to generative AI instead of providing assis-tance themselves, undermining peer interaction. 
Students also re-ported feeling increasingly isolated and demotivated as the social support systems they rely on begin to break down. These findings are concerning given the important role that social interactions play in students’ learning and sense of belonging. \n\n## CCS CONCEPTS \n\n• Social and professional topics → Computing education ; • \n\nComputing methodologies → Artificial intelligence .\n\n## KEYWORDS \n\nGenerative AI, LLMs, help-seeking, peer learning, social impacts \n\nACM Reference Format: \n\nIrene Hou, Owen Man, Kate Hamilton, Srishty Muthusekaran, Jeffin Johnykutty, Leili Zadeh, and Stephen MacNeil. 2025. “All Roads Lead to ChatGPT”: How Generative AI is Eroding Social Interactions and Student Learning Communities. In Proceedings of the 30th ACM Conference on Innovation \n\n> This work is licensed under a Creative Commons Attribution 4.0 International License.\n> ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands\n> © 2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1567-9/2025/06. https://doi.org/10.1145/3724363.3729024\n\nand Technology in Computer Science Education V. 1 (ITiCSE 2025), June 27-July 2, 2025, Nijmegen, Netherlands. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3724363.3729024 \n\n## 1 INTRODUCTION \n\nDuring the last two years, computing students have substantially increased their use of generative AI (genAI) tools [21], closing pre-viously identified usage gaps [20, 42]. This growth may be explained by the many associated benefits, such as personalized explanations [5, 30, 34], intelligent teaching assistants [12, 24, 25, 31], and support for identifying bugs and debugging code [33, 57]. However, prob-lems are also being identified, such as inequitable access to these tools [20, 59], negative impacts on students’ metacognition [45], and threats to assessment [16, 19, 27, 48]. 
Although cognitive, metacognitive, and ethical aspects are be-ginning to be understood, social aspects are still largely unexplored. Inspired by recent studies of how help-seeking behaviors are chang-ing due to genAI [20, 51], we investigate whether and how AI affects the social dynamics of the classroom. As students turn to these tools for help, their social interactions with peers, instructors, and broader learning communities are likely to be impacted. This is important because classrooms are not just spaces for individual learning; they are social communities where students support each other, and where knowledge is socially constructed [44]. If genAI disrupts social interactions, there may be negative consequences for learning and for students’ sense of belonging, a factor that is consistently linked to academic success and retention [2, 52, 54]. We investigate the following research question: \n\nRQ: What are the impacts of generative AI on peer interac-tions and learning communities? \n\nTo investigate this question, we conducted 17 interviews with computing undergraduates (8 women, 9 men) from seven R1 uni-versities across North America. This diverse sample of participants varied in programming experience and frequency of genAI usage. Participants first compared and contrasted their experiences re-ceiving help from peers, instructors, and the internet with genAI ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands Hou et al. \n\ntools such as ChatGPT. Participants were also asked to reflect on their peer interactions since the introduction of genAI. Our findings suggest that genAI tools are deeply embedded within the social dynamics of the classroom. \n\n• GenAI interferes with peer interactions. Instead of in-teracting with their classmates, students increasingly rely on AI tools for help. 
Students shared how GenAI acted as a mediator in their help-seeking process, since help providers often shared genAI outputs or redirected help requests to genAI rather than providing help themselves. \n\n• Students feel isolated, demotivated, and shameful. Stu-dents reported feeling isolated and missed solving problems collaboratively with friends. They also experienced shame associated with their use of AI tools in the presence of peers. These findings suggest that genAI may have harmful im-pacts on peer interactions and learning communities. Tra-ditional peer support networks appear to be eroding, which im-pacted both genAI users and non-users, by reducing opportunities for collaboration, mentorship, and community building. This also presents problems for students’ motivation and sense of belong-ing, especially for underrepresented groups who often benefit most from peer support and engagement [18, 36]. Educators must strike a balance between carefully integrating AI while fostering and sus-taining the social interactions that make learning meaningful. \n\n## 2 RELATED WORK \n\nRecent work suggests that the growing use of genAI tools, such as ChatGPT and GitHub Copilot, is already influencing how comput-ing students seek help and interact with course material [20, 39]. Increasingly, students report that they are relying on genAI tools instead of traditional resources like peers, instructors, or the in-ternet [20, 21]. These changes have prompted extensive research investigating the benefits and challenges that these tools present in computing education [42, 43]. Previous studies have examined the effects of genAI tools on individual learning outcomes and metacognitive processes [25, 45, 51, 58], while also sounding the alarm about threats to academic integrity and the potential for over-reliance on genAI tools [27, 42, 50, 59]. These works have provided valuable insight into how individual learners are affected by these tools. 
However, as students increasingly turn to genAI tools for help, a deeper understanding of its impacts on social learn-ing dynamics within computing education learning communities is needed. One key component of learning, help-seeking, is often fraught with challenges for students, who may encounter socio-emotional barriers [14] and decision-making challenges related to identify-ing and effectively using the appropriate resources [1, 8]. Students want to avoid burdening their peers, they may be worried about appearing incompetent, or they may fear being rejected when re-questing help. All of these factors can reduce their willingness to seek help from peers and instructors [23]. Moreover, although knowledge gained through social interactions can be invaluable, students may perceive it as coming with a social cost [9]. These barriers influence how and why students decide to seek help, the types of resources they use, and when they choose to engage with peers, instructors, or the internet (e.g. internet search, StackOver-flow, YouTube, etc.) [14, 23, 38, 46, 55]. With the emergence of genAI, prior work has shown that students increasingly prefer genAI because it lowers many of these help-seeking barriers, addressing fears of being burdensome or appearing foolish [20]. Unlike peers or instructors, genAI tools are accessible anytime and anywhere, effectively removing barriers that have historically hindered help-seeking [13]. With genAI usage also linked to perceptions of peer usage, some students maybe be more affected by these changes than others [39]. Given the social nature of help-seeking, research is needed to understand whether and how these changes affect peer interac-tions, relationships between students, or learning communities. 
Pre-vious research consistently shows the importance of collaboration, group work, and mentorship in promoting equitable access [10, 18, 26, 40], fostering a sense of belonging [15, 29, 47], support-ing self-regulated learning [44, 56], and developing essential soft skills [7, 41]. As genAI tools become embedded within education, it is critical to examine the potential impacts on social dynamics in the classroom. \n\n## 3 METHODOLOGY \n\nTo understand impacts of genAI on computing students’ social in-teractions, we conducted semi-structured interviews with 17 com-puting students across 7 R1 universities in North America. Each interview lasted 30-45 minutes. We recorded the interviews via Zoom with verbal consent, and participants were discouraged from sharing their screens or videos to protect their privacy. The re-search was approved by our university’s Institutional Review Board (IRB). \n\n## 3.1 Participant Recruitment \n\nTo ensure a diverse sample, we recruited participants from multi-ple universities through announcements made by CS faculty and within computing-related student organizations. Advertisements were also posted on relevant university subreddits and student Dis-cord servers. Each participant was compensated with a $10 gift card. The interviews were conducted in 2024 between June and October. Participants were all native English speakers. Further de-mographic information and genAI usage habits are summarized in Table 1. \n\n## 3.2 Interview Protocol and Rationale \n\nInterviews were semi-structured to provide flexibility in probing further into emerging themes. Participants were first asked demo-graphic questions about their major, year in university, program-ming experience, and how they use genAI tools. To ground par-ticipants’ perspectives to their actual experiences, we asked them about their help-seeking process. Similar to prior work on help-seeking [13, 19], students ranked help resources based on their usage and trust (e.g. 
peers, instructors, TAs, course discussion fo-rums, genAI, internet resources). We then asked participants to dis-cuss the pros and cons of using genAI resources versus resources like instructors and peers. Participants also compared their experi-ences with genAI versus these other resources. The use of compare How Generative AI is Eroding Social Interactions and Student Learning Communities ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands \n\nTable 1: We interviewed 17 undergraduate computing students at seven R1 universities across North America. The ‘Years’ col-umn indicates how many years the participant has been programming so far. The majors Computer Science (CS), Information Science (IS), Graphic Design (GD), Cognitive Science (CogSci), and Interaction Design (IxD) have been abbreviated. \n\n> ID Sex University Level Major Years Frequency Usage Type\n> P1 F4th-year IS/CS 5Never Previously used GenAI, but prefers not to use it P2 M3rd-year CS 5Daily Primary source of help P3 F1st-year DS 1Sporadic Conceptual questions P4 F2nd-year CE/CS 2Daily Primary source of help P5 F3rd-year (returning) CS 6Never Never used GenAI before, prefers not to use P6 M4th-year CS 4Sporadic Tertiary source of help P7 M1st-year CS 10 Sporadic Documentation, code-writing assistance (secondary) P8 F3rd-year CS <1 Sporadic Conceptual questions, code-writing assistance (secondary) P9 M2nd-year IS 2Sporadic Conceptual questions, starting assignments (secondary) P10 M3rd-year CS 3Daily Primary source of help P11 M1st-year CS 4Daily Primary source of help P12 M4th-year (returning) CS 16 Daily Primary source of help P13 M3rd-year CS 4Daily Primary source of help P14 F2nd-year DS 1Sporadic Debugging (secondary) P15 M3rd-year GD/CS 2Sporadic Code-writing assistance (tertiary) P16 F4th-year CS 6Daily Primary source of help P17 F4th-year CogSci, IxD 1Daily Debugging (primary)\n\nand contrast questions elicited deeper responses as participants naturally incorporated examples to justify 
their reasoning. We also asked participants to reflect on whether and how access to genAI tools affected their social interactions with peers. They were then asked to share observations about any changes they noticed among their peers or within their learning communities. Asking participants to share their observations had two purposes: 1) it mitigated potential biases by encouraging descriptive rather than purely evaluative responses, and 2) it allowed interviewers to probe at complex social dynamics and potential implicit biases. \n\n## 3.3 Thematic Analysis of Interview Transcripts \n\nTwo researchers conducted the interviews on Zoom, which au-tomatically transcribed the interview recordings. The transcripts were reviewed, corrected for transcription errors, and anonymized. The transcripts were then analyzed using a reflexive thematic anal-ysis [6]. Three researchers first worked individually to open-code the responses [53], developing their own interpretations and en-suring reflexivity. After this individual phase, researchers held pe-riodic group discussions to share and reflect on their insights. The purpose of these discussions was to deepen their interpretation, but not necessarily to form consensus, as that is not the goal of in-ductive analysis [6]. Rather than compromising reflexivity, the dis-cussions supported it by encouraging researchers to interrogate their assumptions and consider alternative perspectives. Themes were developed iteratively, and each theme is presented with quotes from participants to provide interpretive context. \n\n## 4 RESULTS \n\nTable 1 summarizes the demographics of the participants. The par-ticipants varied by gender (8 women, 9 men), university levels (3 first-year, 3 second-year, 6 third-year, and 5 fourth-year students. This included returning students who took gap years), comput-ing majors, and years of programming experience. 
GenAI usage patterns also varied: some students used it daily as their primary source of help, others used it more sporadically, and some avoided using it altogether. These varied usage patterns informed our un-derstanding of the changing peer help-seeking interactions. \n\n## 4.1 Peer-to-Peer Relationships \n\n4.1.1 GenAI as an Intermediary in Help-Seeking Interactions. When asked about their experiences seeking help from peers, most stu-dents (13 out of 17) described how help-seeking interactions were now often mediated by genAI tools, regardless of whether they personally used these tools. For example, P5, a self identified non-user [4] of genAI, described beginning to notice that her friends would share AI-generated code with her anytime that she asked for a ‘push in the right direction.’ She went on to explain: \n\n“Every sentence you hear: ‘Oh, GPT!’ Even if I don’t use it, I definitely still indirectly use it. You can’t really escape that ...like if I asked for help, and the help came from a human, well, they probably they got it from ChatGPT still. They don’t redirect me to GPT. They just give me what they got out of GPT ...which is why I say like, even though I haven’t personally used it, I feel it’s inevitable.” (P5) ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands Hou et al. \n\nP5’s experience illustrates a shift in help-seeking from authen-tic peer interactions to an AI-mediated exchange. Such mediated interactions were prevalent across participants, spanning both reg-ular and sporadic genAI users, suggesting that this phenomenon is not exclusive to non-users. From the perspective of P5, a student who was providing help, these ‘referrals’ to ChatGPT are typical: \n\n“Sometimes, they [peers] would ask me a question, and \n\nI would ChatGPT it and give it back. 
They’re like, ‘Thank you, you helped me so much!’ I’m like, ‘I did nothing.’ It’s such a thing now.” (P16) These ‘referrals’ to genAI, while efficient, appeared to erode opportunities for meaningful peer interaction. For some students, this shift appeared to cause harm. P3, reflecting on a time when she expressed vulnerability, shared: \n\n“If you say that you’re struggling, someone probably will respond, being like, ‘Oh, just ChatGPT that in-stead.’ And that’s like the biggest change I’ve seen.” \n\nStudents, like P3, who ask for help from peers and are rejected or redirected may be more reluctant to ask for help from their peers and friends in the future, especially given the pre-existing socio-emotional barriers for help-seekers [14]. Descriptions of these redirections and missed opportunities for authentic peer interaction were common in the interviews, espe-cially among more senior students (P3, P5, P6, P10, P13, P16), who described noticing a shift that has occurred with widespread genAI use. P13 lamented this as a loss but also acknowledged the trade-offs, sharing that the “sense of comfort, knowing that my friend will be able to help me...like that camaraderie because you know you’re both suffering in the assignment. [Now] most of the time, if GPT has been able to solve it, then we’re not gonna ask.” P13 elaborated by saying the perceived cost of asking a friend, whether it be time, so-cial capital, or effort, was often no longer worth paying given the convenience of genAI alternatives, despite finding it more comfort-ing and emotionally fulfilling to receive help from their friends. P5, a student who had left her R1 university for two years be-fore returning, described the prevalence of AI tools as a ‘culture shock,’ observing that “[unlike] how it was a few years ago, all roads lead to GPT. 
” This reflects a broader trend among partic-ipants, illustrated by the use of adjectives like ‘unavoidable’ (P2) and ‘inevitable’ (P5) to describe the mediated help-seeking interac-tions between peers and genAI. The use of this language suggests that these types of interaction may have rapidly evolved into an acceptable, and perhaps even expected, norm in help-seeking. \n\n4.1.2 Shame and Stigma Surrounding GenAI Usage. Despite the normalization and widespread adoption of genAI, their use is not without tension. Seven participants expressed experiencing shame or stigma associated with genAI usage. These concerns emerged unprompted, suggesting they may be highly salient aspects of stu-dents’ lived experiences. Students indicated that openly using genAI— or being perceived as overly reliant on it—carried social risks, often tied to broader perceptions of academic integrity and competence. Students shared fears of being judged as ‘lazy,’ ‘stupid,’ or ‘fool-ish’ (P4, P15, P16), and skepticism toward genAI users was com-mon, with some describing reliance on these tools as a marker of being ‘less intelligent’ (P14, P16). P4 and P14 recounted how these social risks were compounded by fears regarding the use of genAI in the presence of professors or authority figures, even with ex-plicit permission to use them. For example, P4 recounted a seminar where students avoided using genAI, despite its permitted use, out of fear of being judged or accused of cheating: \n\n“Half the people are kind of scared. They don’t want to use [ChatGPT] in class like they’ll use it at home, be-cause [at home] no one’s watching them, no one cares... People were scared to use AI because they didn’t wanna be looked down on or make it seem like they were cheating . But to be honest, the first few people that figured it out were using Gemini.” (P4) This reluctance to engage with genAI in public reflects a new social norm students where private reliance coexists with public hesitation. 
P14 shared the following related perspectives, “People definitely use it. They just don’t talk about it... [Professors] allow you to use it. It still feels like it’s wrong somehow. ”\n\nThe role of social context in mitigating shame is also evident. P15 contrasted using genAI in front of strangers versus friends: \n\n‘The stranger might look at you and see your failure ...but with friends, you just understand [why they use genAI].’ The term ‘failure’ here is striking, indicating that reliance on genAI may be internal-ized as a sign of personal or academic inadequacy, with potential implications for students’ self-efficacy. However, the contrast P15 draws between strangers and friends highlights the role of trust and shared understanding in mitigating these negative emotions. This speaks to the nuanced social dynamics, where students’ will-ingness to disclose reliance on genAI may depend on how they perceive their standing within their social groups or communities. \n\n4.1.3 Impacts on Diverse Perspectives. P11 noticed that ChatGPT has made people less willing to interact: “It has made people more lazy when it comes to learning and with each other...People are less social now ’cause my peers will tend to not ask me or our other peers questions when they might have [before].” However, when asked if this applied to him personally, P11 acknowledged it impacted him “only a little bit. I still ask my friends what they got for their solution.” When prompted about why he preferred help from friends over genAI, P11 likened programming to handwriting, of-fering an analogy: \n\n“AI will only give you the direct best answer...which will work. But it can’t give you the different style of pro-gramming that humans have. My friends will have a different style of coding than I will. It’s like handwrit-ing, which is something AI can’t replicate. 
AI will only give you Times New Roman, and like, people will give you handwriting.” (P11) Four other students (P6, P8, P10, P11) also spoke about genAI increased homogenization and diminished discourse in their learn-ing communities. P6 was concerned that genAI could flatten criti-cal discourse, “When people are more satisfied with generative AI as their main source of information, that creates less dis-cussion, which is often needed more in schools, because discus-sion is what lets people in education actually adjust to the individual.” \n\nAlthough the majority of students were able to observe changes to social dynamics, only a small minority of students were able How Generative AI is Eroding Social Interactions and Student Learning Communities ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands \n\nto articulate the advantages and disadvantages of these observed changes. \n\n4.1.4 Impacts on Mentorship. Reliance on genAI tools may poten-tially hinder students’ development of critical help-seeking skills and access to mentorship, resulting in a growing disconnect be-tween novice and experienced students. While many students dis-cussed the tangible benefits of genAI in addressing specific ques-tions, fewer acknowledged the intangible benefits of seeking hu-man assistance, such as navigating the hidden curriculum [35, 37], developing socio-emotional skills, and nurturing connections with peers and mentors. For example, P4 described the ability to avoid socio-emotional aspects by interacting with genAI tools, \n\n“There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush...versus \n\nwith ChatGPT, you don’t have to be polite. ” (P4) \n\nSeveral senior students highlighted an emerging disconnect, ex-acerbated by genAI, between novice and experienced students. 
P6, a fourth-year and a CS honor society board member, shared: \n\n“There’s a lot less interaction between entry-level and more experienced [students]...There’s this disconnect: an over-reliance on AI and not really understanding prob-lems and not asking people who actually work in the field for help.” (P6) This anecdote illustrates the well-documented, pre-existing socio-emotional barriers that come with help-seeking. Students, who may struggle to articulate questions or accept the vulnerability that comes with asking for help, can increasingly turn to genAI to avoid these challenges. In this case, AI may be reinforcing these avoid-ance behaviors. As P15, a senior student, sums up: “It seems that GPT has everything, every answer. So you find students not then in-teracting with other classmates or colleagues.” \n\nHowever, multiple students recognized the role instructors and peers have in helping them navigate the hidden curriculum [35]. P9 describes this value of finding information that you might not know to look for when interacting with peers and instructors: \n\n“Human conversations can have the added benefit of, like, you can get knowledge that you weren’t re-ally intending to get ... Professors who really know their stuff can explain it and also connect it to different concepts. I don’t think ChatGPT can do that.” (P9) \n\n4.1.5 Impacts on Motivation. According to students, peers provided unique value by inspiring and motivating them. For example, stu-dents described how engaging with peers exposed them to relevant opportunities (P2, P4), fueled their passion for computing (P6, P7, P15, P17), and helped them stay engaged while learning (P13, P15). P17 said that connecting with humans and sharing life experiences inspired their interest in computing: \n\n“[My classmates’] experiences can be shared, their feelings, whatever desires they have, what drives them - it can also impact me. Like, ‘That was pretty cool, I kind of want to go into that, too’... 
I had a friend interested in designing a product for image generation AI systems, and I just saw their passion. Being pas-sionate about it made it so interesting to me. ” (P17) Students also spoke about how friends played an important role in keeping them engaged and motivated. P13 explained that, “When we’re here with friends, there’s more of the social aspect, which makes it more fun. Whereas with GPT, you’re like, ‘Okay, well, this is an-other tool.”’ Similarly, P15 explains how shared accomplishments and working hard together was a major motivator: \n\n“With friends, when you get it right together, it feels like an achievement; it’s you and your friends grinding it out. I’m more motivated with my friends than GPT.” \n\nIn contrast, P11, a self-taught CS student, admitted that genAI made him “very unmotivated to learn programming, which is something I used to really care about. I feel like, what point is there to it any-more, since AI can do it so well.\" Unlike P15, whose motivation was tied to peer collaboration, P11’s motivation was tied to developing his own skills, which he felt had lost meaning due to genAI. For students who are motivated primarily by competence or mastery, genAI may make learning feel less meaningful, while students mo-tivated by peer collaboration may be impacted differently. Those who said genAI tools increased their motivation described reduced wheel-spinning (P12, P16) and the ability to explore topics more easily (P9, P12). For instance, P12 said, “Rather than spend a whole evening stuck on a problem, I can quickly identify the issues and...make a lot more progress, and then I spend less time frustrated and more time programming because I enjoy it.\" \n\nWhile genAI tools can provide significant benefits when it comes to solving the tangible problems that students face in their assign-ments and work, there are aspects that cannot be replaced by these tools. 
The drive, unique interests, and passions of one’s peers and community cannot be replicated by genAI tools. \n\n4.1.6 Impacts on Community and Feelings of Isolation. Towards the end of the interview, participants were asked how genAI usage may or may not be affecting their learning community and if they had any of their own experiences on the matter. The majority of students (11 out of 17) had noticed impacts to their community, such as increased feelings of isolation. P2, a daily user of genAI, shared how genAI tools made it easier for him to become more isolated: “I don’t really actively go out of my way to socialize with people... So if I’m relying more on GPT, I might be more isolated in my room, instead of having to go out and talk to people. ” P2 later observed how personal experience and insight from his peers was an important part of developing a better understanding of the field and finding future career oppor-tunities: “If you’re alone, you might not even know about what’s out there, how to bolster your resume, things like that.” However, this awareness did not appear to reduce his genAI reliance or feelings of isolation. In addition, students observed that activity in online communi-cation platforms like Discord was decreasing with the rise of genAI. As these crucial community spaces become less active, students are cut off from a source of social support. P16 highlights this problem: \n\n“‘We used to in every class have a Discord. It used to be like a lot of people just asking questions about maybe like, a lab or a homework... I guess everyone’s just Chat-GPT now. Like the new classes that I have now, we still ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands Hou et al. 
\n\nhave the Discord, but nobody really talks because most or all the questions are answered by ChatGPT.” \n\nP17, a student who no longer used Discord, shared a theory about why this is happening: “I did browse a lot more on like Discord and Slack [before genAI] for what other people asked...once I started using ChatGPT a bit more, I stopped browsing through Slack and Piazza.” \n\nStudents’ responses suggest feelings of isolation that are compounded by the erosion of social spaces on which they once relied. This raises concerns that learning communities may be at risk as students disengage from them. \n\n## 5 DISCUSSION \n\nOur findings suggest that genAI tools may be reshaping the social fabric of computing education. Students described that many of their peer interactions are now frequently mediated by genAI. For example, students reported that help requests were often redirected to genAI or included genAI outputs instead of direct support from peers, a trend that even affected non-users of genAI [4]. This mediation undermined the value of peer interactions, and students unanimously perceived a reduction in peer interactions as students receive help from genAI instead of their classmates. Traditionally, peer interactions fostered camaraderie and mutual support which contributed to the formation of informal student learning communities [3, 28]. However, as genAI disrupts these social interactions, the mechanisms that drive community building may be eroding. Older students also shared concerns that incoming students are becoming less connected to senior mentors. This loss of mentorship opportunities reduces access to the hidden curriculum (i.e., unwritten rules, strategies, and cultural norms that are essential for success [35]). 
Informal opportunities for interaction can serve as entry points into learning communities through legitimate peripheral participation [28], and this reduced access will disproportionately impact first-generation students, who can not rely on their family to help them navigate the hidden curriculum [22]. Reductions in peer interactions and mentorship appear to have emotional consequences. Many students reported feeling lonely; some described how their passion for computing was sparked and sustained through collaboration and commiseration with peers. In contrast, genAI tools improve efficiency but cannot replace a friend who provides that socio-emotional or motivational support. In addition to these social and emotional harms, our findings suggest that recently discovered metacognitive pitfalls associated with AI usage [45], such as being misled by suggestions, may be further exacerbated. Students often rely on socially shared regulation to scaffold their own self-regulation abilities by observing how their peers set goals, monitor progress, and adjust their strategies [17, 49]. Without this scaffolding, students must face these new metacognitive challenges with even less support. Our findings are both surprising and troubling. Students’ computing identities are socially constructed [32], they lean on each other for socio-emotional support and motivation [11, 44], and they develop a sense of belonging, which has been consistently linked to retention [54]. If genAI is interfering with these social help-seeking processes to the extent we observed through these interviews, research is urgently needed to protect these critical social aspects of our learning environments. The social fabric of our learning communities—the peer interaction and connection that is integral to student success—appears to be at risk as genAI alters how students seek help and collaborate. 
Given the growing use of genAI [20, 21], researchers and educators must be increasingly mindful about fostering healthy peer interactions and scaffolding the development of formal and informal learning communities. \n\n## 5.1 Limitations \n\nThis study focuses on the perspectives of undergraduate computing students and the perceived impacts of genAI on their peer interactions and learning communities. However, the changes described by participants are anecdotal and have not yet been corroborated empirically. Our sample size is small and focuses on the perspectives of students in North America. Consequently, our findings should not be generalized to all cultural contexts, as social interactions can differ across cultures. This study does not aim to generalize but to generate critical early insights into a changing social landscape, for which interviews are an appropriate method. Future work is needed to confirm the trends observed in this work. \n\n## 6 CONCLUSION \n\nIn this paper, we conducted interviews with 17 students from multiple universities to investigate the effects of genAI on social dynamics. Although exploratory in nature, our findings reveal concerning trends such as reduced social interactions, missed opportunities for mentorship, diminished motivation, and feelings of isolation. Despite some benefits, AI may be inadvertently destroying the very social fabric that supports meaningful learning. Going forward, it is necessary to balance the integration of AI with the irreplaceable value of human interaction. \n\n## REFERENCES \n\n> [1] Vincent Aleven, Elmar Stahl, Silke Schworm, Frank Fischer, and Raven Wallace. 2003. Help Seeking and Help Design in Interactive Learning Environments. Review of Educational Research 73, 3 (2003), 277–320. [2] Jeff Allen, Steven B Robbins, Alex Casillas, and In-Sue Oh. 2008. Third-year college retention and transfer: Effects of academic performance, motivation, and social connectedness. 
Research in Higher Education 49 (2008), 647–664. [3] Albert Bandura. 1977. Social learning theory. Englewood Cliffs (1977). [4] Eric PS Baumer, Jenna Burrell, Morgan G Ames, Jed R Brubaker, and Paul Dour-ish. 2015. On the importance and implications of studying technology non-use.\n> Interactions 22, 2 (2015), 52–56. https://doi.org/10.1145/2723667 [5] Seth Bernstein, Paul Denny, Juho Leinonen, Lauren Kan, Arto Hellas, Matt Little-field, Sami Sarsa, and Stephen Macneil. 2024. \" Like a Nesting Doll\": Analyzing Recursion Analogies Generated by CS Students Using Large Language Models. In Proceedings of the 2024 on Innovation and Technology in Computer Science Ed-ucation V. 1 . ACM, 122–128. [6] Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic anal-ysis. Qualitative Research in Sport, Exercise and Health 11, 4 (2019), 589–597. [7] Quincy Brown, Frank Lee, and Suzanne Alejandre. 2009. Emphasizing soft skills and team development in an educational digital game design course. In Proceed-ings of the 4th international Conference on Foundations of Digital Games .[8] Kun-Hung Cheng and Chin-Chung Tsai. 2011. An investigation of Taiwan Uni-versity students’ perceptions of online academic help seeking, and their web-based learning self-efficacy. Internet and Higher Education - INTERNET HIGH EDUC 14 (07 2011), 150–157. https://doi.org/10.1016/j.iheduc.2011.04.002 [9] Chao-Min Chiu, Meng-Hsiang Hsu, and Eric T.G. Wang. 2006. Understanding knowledge sharing in virtual communities: An integration of social capital and social cognitive theories. Decision Support Systems 42, 3 (2006), 1872–1888. [10] Michela Cozza. 2011. Bridging gender gaps, networking in computer science.\n> Gender, Technology and Development 15, 2 (2011), 319–337.\n\nHow Generative AI is Eroding Social Interactions and Student Learning Communities ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands \n\n[11] Sloan Davis, Audrey Rorrer, Cori Grainger, and Sepi Hejazi Moghadam. 2023. 
Equitable student persistence in computing research through distributed career mentorship. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 . ACM, 94–100. [12] Paul Denny, Stephen MacNeil, Jaromir Savelka, Leo Porter, and Andrew Luxton-Reilly. 2024. Desirable characteristics for ai teaching assistants in programming education. In Proceedings of the 2024 on Innovation and Technology in Computer Science Education V. 1 . 408–414. [13] Augie Doebling and Ayaan M. Kazerouni. 2021. Patterns of Academic Help-Seeking in Undergraduate Computing Students. In Proceedings of the 21st Koli Calling International Conference on Computing Education Research . ACM. [14] Eureka Foong, Steven P Dow, Brian P Bailey, and Elizabeth M Gerber. 2017. On-line feedback exchange: A framework for understanding the socio-psychological factors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems . 4454–4467. [15] Michail N Giannakos, Ilias O Pappas, Letizia Jaccheri, and Demetrios G Samp-son. 2017. Understanding student retention in computer science education: The role of environment, gains, barriers and usefulness. Education and Information Technologies 22 (2017), 2365–2382. [16] Sebastian Gutierrez, Irene Hou, Jihye Lee, Kenneth Angelikas, Owen Man, Sophia Mettille, James Prather, Paul Denny, and Stephen MacNeil. 2024. See-ing the Forest and the Trees: Solving Visual Graph and Tree Based Data Struc-ture Problems using Large Multimodal Models. arXiv preprint arXiv:2412.11088 \n\n(2024). [17] Allyson Hadwin, Sanna Järvelä, and Mariel Miller. 2011. Self-regulated, co-regulated, and socially shared regulation of learning in collaborative learning en-vironments . 65–84. [18] Susan Horwitz, Susan H Rodger, Maureen Biggers, David Binkley, C Kolin Frantz, Dawn Gundermann, Susanne Hambrusch, Steven Huss-Lederman, Ethan Munson, Barbara Ryder, et al. 2009. 
Using peer-led team learning to increase par-ticipation and success of under-represented groups in introductory computer science. ACM SIGCSE Bulletin 41, 1 (2009), 163–167. [19] Irene Hou, Owen Man, Sophie Mettille, Sebastian Gutierrez, Kenneth Angelikas, and Stephen MacNeil. 2023. More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems. In Proceed-ings of the 26th Australasian Computing Education Conference (ACE ’24) .[20] Irene Hou, Sophia Mettille, Owen Man, Zhuo Li, Cynthia Zastudil, and Stephen MacNeil. 2024. The Effects of Generative AI on Computing Students’ Help-Seeking Preferences. In Proceedings of the 26th Australasian Computing Educa-tion Conference (ACE ’24) . ACM, 39–48. https://doi.org/10.1145/3636243.3636248 [21] Irene Hou, Hannah Vy Nguyen, Owen Man, and Stephen MacNeil. 2024. The Evolving Usage of GenAI by Computing Students. In Proceedings of the 56th ACM Technical Symposium on Computer Science Education V.2 (SIGCSETS 2025) . ACM. [22] Anthony Abraham Jack. 2016. (No) Harm in Asking: Class, Acquired Cultural Capital, and Academic Engagement at an Elite University. Sociology of Education \n\n89, 1 (2016), 1–19. https://doi.org/10.1177/0038040715614913 [23] Stuart A Karabenick. 2003. Seeking help in large college classes: A person-centered approach. Contemporary educational psychology 28, 1 (2003), 37–58. [24] Majeed Kazemitabaar, Justin Chow, Carl Ka To Ma, Barbara J Ericson, David Weintrop, and Tovi Grossman. 2023. Studying the effect of AI code generators on supporting novice learners in introductory programming. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23) . ACM. [25] Majeed Kazemitabaar, Runlong Ye, Xiaoning Wang, Austin Zachary Henley, Paul Denny, Michelle Craig, and Tovi Grossman. 2024. Codeaid: Evaluating a class-room deployment of an llm-based programming assistant that balances student and educator needs. 
In Proceedings of the CHI Conference on Human Factors in Computing Systems . 1–20. [26] Celine Latulipe, Stephen MacNeil, and Brian Thompson. 2018. Evolving a data structures class toward inclusive success. In 2018 IEEE Frontiers in Education Con-ference (FIE) . IEEE, 1–9. [27] Sam Lau and Philip J. Guo. 2023. From ‘Ban It Till We Understand It’ to \"Resis-tance is Futile\": How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. In Proceedings of the 2023 ACM Conference on International Com-puting Education Research V.1 (ICER ’23 V1) . ACM. [28] Jean Lave and Etienne Wenger. 2001. Legitimate peripheral participation in com-munities of practice. In Supporting lifelong learning . Routledge, 121–136. [29] Kathleen J Lehman, Kaitlin NS Newhouse, Sarayu Sundar, and Linda J Sax. 2023. Nevertheless, they persisted: Factors that promote persistence for women and racially/ethnically minoritized students in undergraduate computing. Computer Science Education 33, 2 (2023), 260–285. [30] Juho Leinonen, Paul Denny, Stephen MacNeil, Sami Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran, and Arto Hellas. 2023. Comparing Code Expla-nations Created by Students and Large Language Models. arXiv preprint arXiv:2304.03938 (2023). [31] Mark Liffiton, Brad Sheese, Jaromir Savelka, and Paul Denny. 2023. Codehelp: Using large language models with guardrails for scalable support in program-ming classes. arXiv preprint arXiv:2308.06921 (2023). [32] Stephanie Lunn, Monique Ross, Zahra Hazari, Mark Allen Weiss, Michael Geor-giopoulos, and Kenneth Christensen. 2021. How do educational experiences predict computing identity? ACM Transactions on Computing Education (TOCE) \n\n22, 2 (2021), 1–28. https://doi.org/10.1145/3470653 [33] Stephen MacNeil, Paul Denny, Andrew Tran, Juho Leinonen, Seth Bernstein, Arto Hellas, Sami Sarsa, and Joanne Kim. 2023. 
Decoding Logic Errors: A Com-parative Study on Bug Detection by Students and Large Language Models. In \n\nProceedings of the 26th Australasian Computing Education Conference (ACE ’24) .[34] Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Develop-ment E-Book. In Proc. SIGCSE’23 . ACM, 6 pages. [35] Eric Margolis. 2001. The hidden curriculum in higher education . Psychology Press. [36] Allison Mishkin. 2019. Applying Self-Determination Theory towards Motivat-ing Young Women in Computer Science. In Proceedings of the 50th ACM Tech-nical Symposium on Computer Science Education (SIGCSE ’19) . Association for Computing Machinery, 1025–1031. https://doi.org/10.1145/3287324.3287389 [37] Kendall Nakai and Philip J Guo. 2023. Uncovering the Hidden Curriculum of University Computing Majors via Undergraduate-Written Mentoring Guides: A Learner-Centered Design Workflow. In Proceedings of the 2023 ACM Conference on International Computing Education Research-Volume 1 . 63–77. [38] Richard S. Newman. 1990. Children’s help-seeking in the classroom: The role of motivational factors and attitudes. Journal of Educational Psychology (1990). [39] Aadarsh Padiyath, Xinying Hou, Amy Pang, Diego Viramontes Vargas, Xingjian Gu, et al. 2024. Insights from social shaping theory: The appropriation of large language models in an undergraduate programming course. In Proceedings of the 2024 ACM Conference on International Computing Education Research .[40] Heather Pon-Barry, Becky Wai-Ling Packard, and Audrey St. John. 2017. Expand-ing capacity and promoting inclusion in introductory computer science: a focus on near-peer mentor preparation and code review. Computer Science Education \n\n27, 1 (2017), 54–77. [41] Leo Porter, Cynthia Bailey Lee, Beth Simon, Quintin Cutts, and Daniel Zingaro. 2011. 
Experience report: a multi-classroom report on the value of peer instruc-tion. In Proceedings of the 16th Annual Joint Conference on Innovation and Tech-nology in Computer Science Education . ACM, 138–142. [42] James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, et al. 2023. The Robots Are Here: Navigating the Generative AI Revolution in Computing Education. In Proceedings of the 2023 Working Group Reports on Inno-vation and Technology in Computer Science Education . Association for Comput-ing Machinery. https://doi.org/10.1145/3623762.3633499 [43] James Prather, Juho Leinonen, Natalie Kiesler, Jamie Gorson Benario, et al. 2024. Beyond the Hype: A Comprehensive Review of Current Trends in Generative AI Research, Teaching Practices, and Tools. arXiv preprint arXiv:2412.14732 (2024). [44] James Prather, Lauren Margulieux, Jacqueline Whalley, Paul Denny, Brent N Reeves, Brett A Becker, Paramvir Singh, Garrett Powell, and Nigel Bosch. 2022. Getting by with help from my friends: Group study in introductory program-ming understood as socially shared regulation. In Proceedings of the 2022 ACM Conference on International Computing Education Research-Volume 1 . 164–176. [45] James Prather, Brent N Reeves, Juho Leinonen, Stephen MacNeil, Arisoa S Ran-drianasolo, Brett A Becker, Bailey Kimmel, Jared Wright, and Ben Briggs. 2024. The Widening Gap: The Benefits and Harms of Generative AI for Novice Pro-grammers. In Proceedings of the 2024 ACM Conference on International Comput-ing Education Research-Volume 1 . 469–486. [46] Thomas W Price, Yihuan Dong, and Dragan Lipovac. 2017. iSnap: towards intel-ligent tutoring in novice programming environments. In Proc. of the 2017 ACM SIGCSE Technical Symposium on computer science education . 483–488. [47] Mary Beth Rosson, John M Carroll, and Hansa Sinha. 2011. Orientation of un-dergraduates toward careers in the computer and information sciences: Gen-der, self-efficacy and social support. 
ACM Transactions on Computing Education (TOCE) 11, 3 (2011), 1–23. https://doi.org/10.1145/2037276.2037278 [48] Jaromir Savelka, Arav Agarwal, Marshall An, Chris Bogart, and Majd Sakr. 2023. Thrilled by Your Progress! Large Language Models (GPT-4) No Longer Struggle to Pass Assessments in Higher Education Programming Courses. (2023), 78–92. [49] Dale Schunk and Barry Zimmerman. 1997. Social origin of self-regulatory com-petence. Educational Psychologist - EDUC PSYCHOL 32 (09 1997), 195–208. [50] Judy Sheard, Paul Denny, Arto Hellas, Juho Leinonen, Lauri Malmi, and Simon. 2024. Instructor Perceptions of AI Code Generation Tools-A Multi-Institutional Interview Study. In Proceedings of the 55th ACM Technical Symposium on Com-puter Science Education V. 1 . 1223–1229. [51] Brad Sheese, Mark Liffiton, Jaromir Savelka, and Paul Denny. 2024. Patterns of student help-seeking when using a large language model-powered program-ming assistant. In Proceedings of the 26th Australasian Computing Education Con-ference .[52] Calvin Smith and Debra Bath. 2006. The role of the learning community in the development of discipline knowledge and generic graduate outcomes. Higher Education 51 (2006), 259–286. ITiCSE 2025, June 27-July 2, 2025, Nijmegen, Netherlands Hou et al. \n\n[53] Anselm L Strauss and Juliet Corbin. 2004. Open coding. Social research methods: A reader (2004), 303–306. [54] Vincent Tinto. 1997. Classrooms as communities: Exploring the educational character of student persistence. The Journal of higher education 68, 6 (1997). [55] Elizabeth Wirtz, Amy Dunford, Edward Berger, Elizabeth Briody, Gireesh Gu-ruprasad, and Ryan Senkpeil. 2018. Resource usage and usefulness: academic help-seeking behaviours of undergraduate engineering students. Australasian Journal of Engineering Education 23, 2 (2018), 62–70. [56] Carolin Wortmann and Jan Vahrenhold. 2024. Regulation, Self-Efficacy, and Par-ticipation in CS1 Group Work. 
In Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1 (ICER ’24) . Association for Computing Machinery. https://doi.org/10.1145/3632620.3671115 [57] Stephanie Yang, Hanzhang Zhao, Yudian Xu, Karen Brennan, and Bertrand Schneider. 2024. Debugging with an AI Tutor: Investigating Novice Help-seeking Behaviors and Perceived Learning. In Proceedings of the 2024 ACM Con-ference on International Computing Education Research-Volume 1 . 84–94. [58] Ramazan Yilmaz and Fatma Gizem Karaoglan Yilmaz. 2023. The effect of gener-ative artificial intelligence (AI)-based tool use on students’ computational think-ing skills, programming self-efficacy and motivation. Computers and Education: Artificial Intelligence 4 (2023), 100147. [59] Cynthia Zastudil, Magdalena Rogalska, Christine Kapp, Jennifer Vaughn, and Stephen MacNeil. 2023. Generative ai in computing education: Perspectives of students and instructors. In 2023 IEEE Frontiers in Education Conference (FIE) .IEEE, 1–9. https://doi.org/10.1109/FIE58773.2023.10343467\n"" files="" embeds=""""> |
|
<summary>Tool Executed</summary> |
|
</details> |
|
This paper is directly relevant to your question! Let me integrate its findings into an expanded map of the concern space. 
|
|
|
--- |
|
|
|
# Mapping the Space of Generative AI Concerns in Game Design Education |
|
|
|
You're right that "I object to GenAI" likely masks a heterogeneous coalition. The Hou et al. (2025) paper—an interview study of 17 undergraduate computing students across seven North American R1 universities—adds empirical texture, particularly around **social and pedagogical harms** that students may not even consciously articulate. 
|
|
|
--- |
|
|
|
## 1. Economic/Labor Concerns |
|
- **Job displacement** — "This will eliminate entry-level positions" |
|
- **Wage suppression** — "Even if jobs exist, leverage shifts to employers" |
|
- **Skill devaluation** — "Why did I spend years learning this craft?" |
|
- **Portfolio homogenization** — harder to differentiate yourself when everyone outputs similar work |
|
|
|
--- |
|
|
|
## 2. Ethical/Justice Concerns |
|
- **Training data consent** — "Artists didn't agree to this" |
|
- **Compensation** — "Value extraction without payment" |
|
- **Attribution erasure** — "Laundering style without credit" |
|
- **Power asymmetry** — benefits accrue to platform owners, not creators |
|
|
|
--- |
|
|
|
## 3. Craft/Quality Concerns |
|
- **Aesthetic homogenization** — "Everything converges to average" / "AI will only give you Times New Roman; people give you handwriting" (P11, Hou et al.) |
|
- **Loss of intentionality** — "Design is problem-solving, not generation" |
|
- **Technical debt** — output requires extensive cleanup |
|
- **Flattened discourse** — "Less discussion, which is often needed... because discussion is what lets people adjust to the individual" (P6) |
|
|
|
--- |
|
|
|
## 4. Pedagogical Concerns |
|
- **Skill atrophy** — "Students won't develop foundational abilities" |
|
- **Process matters** — "Learning happens in the struggle" |
|
- **Metacognitive harm** — students lose practice in recognizing *what they don't know* |
|
- **Hidden curriculum loss** — GenAI can't transmit unwritten professional norms, strategies, or mentorship (Hou et al., §5) 
|
|
|
--- |
|
|
|
## 5. Social/Relational Concerns *(the paper's core contribution)* |
|
|
|
| Phenomenon | Evidence from Hou et al. | |
|
|------------|--------------------------| |
|
| **AI as intermediary** | "All roads lead to ChatGPT"—even non-users receive AI-mediated help when peers copy-paste ChatGPT output (P5, P16) | |
|
| **Redirection instead of help** | "If you say you're struggling, someone will probably respond, 'Oh, just ChatGPT that'" (P3) | |
|
| **Erosion of peer learning** | Discord/Slack channels going silent: "We still have the Discord, but nobody really talks" (P16) | |
|
| **Mentorship disconnect** | Senior → junior knowledge transfer declining; first-gen students lose access to tacit knowledge (P6) | |
|
| **Isolation & loneliness** | "I might be more isolated in my room instead of having to go out and talk to people" (P2) | |
|
| **Loss of camaraderie** | "That sense of comfort knowing my friend will help me... that camaraderie because you're both suffering" (P13) | |
|
| **Motivational hollowing** | "I feel like, what point is there to [learning programming] anymore, since AI can do it so well" (P11) | |
|
|
|
--- |
|
|
|
## 6. Identity/Cultural Concerns |
|
- **What it means to be a designer** — existential threat to self-concept |
|
- **Community solidarity** — using AI feels like crossing a picket line |
|
- **Shame & stigma** — students hide AI use even when permitted, fearing they'll be seen as "lazy" or "less intelligent" (P4, P14, P15) |
|
- **Performative non-use** — public avoidance, private reliance |
|
|
|
--- |
|
|
|
## Diagnostic Questions for Your Students |
|
|
|
| Question | What it reveals | |
|
|----------|-----------------| |
|
| *"Would your concerns be resolved if training were fully consensual and artists compensated?"* | Separates **justice-based** objectors from those with craft/social/pedagogical concerns | |
|
| *"Would your concerns be resolved if AI output were obviously distinguishable and labeled?"* | Tests whether **attribution/authenticity** is the core issue | |
|
| *"Do you worry more about what AI does to the **industry** or what it does to **you as a learner**?"* | Distinguishes **economic** from **developmental** concerns | 
|
| *"Has your use of AI changed how often you talk to classmates about assignments?"* | Surfaces **social erosion** concerns—possibly ones they haven't consciously noticed | |
|
| *"When a peer asks you for help, do you ever just send them a ChatGPT answer?"* | May trigger recognition of their own role in the dynamic Hou et al. describe | |
|
|
|
--- |
|
|
|
## A Possible Hidden Fault Line |
|
|
|
Hou et al. reveal something your students may not articulate: **even objectors may be *participating* in the erosion**. A student who refuses to use GenAI personally but receives AI-mediated help from friends—or redirects others to ChatGPT when asked—is still inside the system. This could produce cognitive dissonance, moral residue, or quiet resentment that manifests as vocal objection without clear rationale. |
|
|
|
--- |
|
|
|
Want me to draft a classroom exercise or anonymous survey to surface these distinctions? |
|
|
|
### USER |
|
There's also energy and water usage (training and inference), likewise for building the hardware used, there's corporate control, corporate support and collaboration with militaries and governments or other parties, being an active or passive contributor to one or more harms, erasing indigenous and other non-dominant cultures, etc. Some concerns are immediate and others focus on outcomes of trends. Some are centered on the behavior of specific models or specific companies while others are systemic. Some are about perceived advantage, conditional on actual effectiveness, etc. 
|
|
|
Let's organize all this from the perspective of students deciding as a project team about their team's internal GenAI policy this academic quarter (to be written in AGENTS.md so that tools can conform automatically). |
|
|
|
### ASSISTANT |
|
# Framework for Student Project Teams Developing GenAI Policies |
|
|
|
## The Decision Context |
|
|
|
Your team must write an `AGENTS.md` file that will govern AI tool behavior for your project this quarter. This isn't a personal moral stance—it's a **collective operational policy** that must be: |
|
|
|
1. **Specific enough** for tools to parse and conform to |
|
2. **Justified enough** that teammates with different underlying concerns can commit to it |
|
3. **Revisable** as circumstances or understanding change |
|
|
|
The challenge: teammates who all say "I have concerns about GenAI" may discover they disagree fundamentally about *which* concerns matter, *when* they apply, and *what* follows from them. |
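
Requirement 1 is concrete enough to test: if the policy uses a predictable checklist format, a tool can extract the adopted rules mechanically. Below is a minimal sketch, assuming checkbox-style `### Permitted Uses` / `### Prohibited Uses` sections; the section names and checkbox format are illustrative, not an established AGENTS.md schema.

```python
import re

def parse_policy(markdown: str) -> dict[str, list[str]]:
    """Collect checked items ("- [x]") under each "### <Section>" heading.

    Assumes the checkbox-list layout sketched in this framework;
    a real agent tool may expect a different schema.
    """
    sections: dict[str, list[str]] = {}
    current = None
    for line in markdown.splitlines():
        heading = re.match(r"###\s+(.+)", line)
        if heading:
            current = heading.group(1).strip()
            sections[current] = []
        elif current:
            item = re.match(r"-\s*\[[xX]\]\s*(.+)", line)
            if item:
                sections[current].append(item.group(1).strip())
    return sections

policy = parse_policy("""\
### Permitted Uses
- [x] Grammar/spelling checking
- [ ] Code autocompletion (single-line suggestions)

### Prohibited Uses
- [x] Generating art assets for final deliverables
""")
# Only checked boxes count as adopted rules.
assert policy["Permitted Uses"] == ["Grammar/spelling checking"]
assert policy["Prohibited Uses"] == ["Generating art assets for final deliverables"]
```

A tool that can read this structure can also refuse to act when a request falls under a checked prohibition, which is what makes the policy operational rather than aspirational.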
|
|
|
--- |
|
|
|
## A Taxonomy of Concerns |
|
|
|
### Dimension 1: Locus of Harm |
|
|
|
| Locus | Example Concerns | |
|
|-------|------------------| |
|
| **Self** | Skill atrophy, grade integrity, portfolio authenticity, shame/stigma, motivational hollowing ("why bother learning?") | |
|
| **Team** | Free-rider dynamics, knowledge silos, eroded collaboration, unequal AI access/literacy, communication breakdown | |
|
| **Community** | Silent Discords, lost mentorship pipelines, weakened peer networks, norm erosion (Hou et al.) | |
|
| **Industry/Profession** | Job displacement, wage suppression, devalued credentials, homogenized aesthetics | |
|
| **Creators whose work was used in training** | Consent violations, unpaid value extraction, style laundering, attribution erasure | 
|
| **Broader society** | Epistemic pollution, cultural flattening, indigenous/minority erasure, misinformation | |
|
| **Environment** | Training energy/water, inference costs, hardware manufacturing, e-waste | |
|
| **Geopolitics** | Corporate power concentration, military/surveillance partnerships, authoritarian collaboration | |
|
|
|
### Dimension 2: Temporal Scope |
|
|
|
| Scope | Question | |
|
|-------|----------| |
|
| **Immediate/This Quarter** | Does using GenAI on *this project* harm *us* right now? | |
|
| **Developmental/Career** | Will reliance now compromise skills we need later? | |
|
| **Trend/Systemic** | Does participation—even if locally harmless—contribute to harmful trajectories? | |
|
|
|
### Dimension 3: Conditionality |
|
|
|
| Type | Structure | |
|
|------|-----------| |
|
| **Unconditional** | "GenAI is wrong regardless of outcomes" (deontological) | |
|
| **Conditional on effectiveness** | "If it actually helps, maybe okay; if it doesn't, why bother with the costs?" | |
|
| **Conditional on alternatives** | "Wrong if human labor was available; acceptable if not" | |
|
| **Conditional on specific actors** | "OpenAI no, local open-weights models yes" / "Midjourney no, Adobe Firefly yes" | |
|
| **Conditional on use type** | "Brainstorming yes, final assets no" / "Code scaffolding yes, art generation no" | |
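
Conditionality is what makes a policy machine-checkable rather than a vibe. Each conditional type above can be encoded as a predicate over a usage context; here is a sketch under that assumption, where the rule names, context keys, and model label are all hypothetical.

```python
from typing import Callable

# A usage context, e.g.
# {"model": "local-open-weights", "use": "brainstorming", "effective": True}.
Context = dict
Rule = Callable[[Context], bool]

# Illustrative encodings of the conditionality types in the table above.
rules: dict[str, Rule] = {
    "unconditional-ban": lambda ctx: False,
    "if-effective":      lambda ctx: bool(ctx.get("effective")),
    "actor-specific":    lambda ctx: ctx.get("model") == "local-open-weights",
    "use-type":          lambda ctx: ctx.get("use") in {"brainstorming", "code-scaffolding"},
}

ctx = {"model": "local-open-weights", "use": "brainstorming", "effective": True}
assert not rules["unconditional-ban"](ctx)
assert all(rules[name](ctx) for name in ("if-effective", "actor-specific", "use-type"))
```

Writing the conditions down this explicitly often surfaces the real disagreement: two teammates who both say "it depends" may depend on entirely different context keys.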
|
|
|
### Dimension 4: Agency & Complicity |
|
|
|
| Stance | Reasoning | |
|
|--------|-----------| |
|
| **Active contribution** | "Using it increases demand, funds the companies, normalizes the practice" | |
|
| **Passive contribution** | "Even abstaining, I benefit from others' use and can't escape the ecosystem" | |
|
| **Complicity through silence** | "Not objecting publicly makes me part of the problem" | |
|
| **Moral taint regardless** | "The tools exist; my hands aren't clean either way" | |
|
| **Individual action futility** | "My choice doesn't matter at scale, so why sacrifice?" | |
|
|
|
### Dimension 5: What Counts as "Use" |
|
|
|
This matters for writing enforceable policy: |
|
|
|
| Category | Examples | |
|
|----------|----------| |
|
| **Direct generation** | Prompting ChatGPT, Midjourney, Copilot for outputs you use | |
|
| **Indirect generation** | Teammate generates, you incorporate | |
|
| **AI-assisted search** | Bing/Google AI summaries, Perplexity | |
|
| **AI-enhanced tools** | Photoshop generative fill, Premiere auto-captions | |
|
| **Training contribution** | Your work being scraped for future models | |
|
| **Consultation/learning** | Asking AI to explain a concept you then apply yourself | |
|
| **Editing/refinement** | Human-created base, AI polish | |
|
| **Rejection sampling** | Generate many, select one, modify heavily | |
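
Once the team agrees on which of these categories exist, a review or logging tool can classify each AI interaction against the policy. A minimal sketch; the category tags and the permitted/prohibited split are placeholders a team would fill in from the table above.

```python
from dataclasses import dataclass

# Placeholder policy: which use categories this team allows or bans.
PERMITTED = {"consultation", "editing"}
PROHIBITED = {"direct-generation", "indirect-generation"}

@dataclass
class UseEvent:
    actor: str
    category: str  # lowercased tag from the "What Counts as Use" table
    artifact: str

def verdict(event: UseEvent) -> str:
    """Return 'permitted', 'prohibited', or 'discuss' for an event."""
    if event.category in PROHIBITED:
        return "prohibited"
    if event.category in PERMITTED:
        return "permitted"
    return "discuss"  # e.g. rejection-sampling, AI-enhanced tools

assert verdict(UseEvent("alice", "consultation", "shader concept")) == "permitted"
assert verdict(UseEvent("bob", "rejection-sampling", "logo drafts")) == "discuss"
```

The "discuss" bucket matters: categories the team never anticipated should trigger a conversation, not a silent default.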
|
|
|
--- |
|
|
|
## Surfacing Team Disagreements: A Workshop Protocol |
|
|
|
### Step 1: Individual Concern Mapping (10 min, private) |
|
|
|
Each teammate answers: |
|
|
|
1. List your top 3 concerns about GenAI use on this project |
|
2. For each, identify: |
|
- Locus of harm (self → geopolitics) |
|
- Temporal scope (this quarter → systemic trend) |
|
- Is this conditional? On what? |
|
3. What uses (if any) would you find acceptable? Unacceptable? |
|
|
|
### Step 2: Reveal & Cluster (15 min, shared) |
|
|
|
Post anonymously (or not). You may discover: |
|
|
|
- **Surface agreement, deep disagreement**: "We all oppose GenAI" but for completely different reasons with different implications |
|
- **Unexpected alliances**: The person worried about skill atrophy and the person worried about artist exploitation might agree on "no image generation" but disagree on "code assistance" |
|
- **Incommensurable values**: Someone's unconditional objection can't be reconciled with someone else's "it's fine if effective" 
|
|
|
### Step 3: The Diagnostic Questions (10 min, discussion) |
|
|
|
| Question | What it distinguishes | |
|
|----------|----------------------| |
|
| "Would your concerns disappear if training were consensual and compensated?" | Justice-focused vs. other concerns | |
|
| "Would your concerns disappear if AI output were perfect and undetectable?" | Quality/authenticity vs. process concerns | |
|
| "Would your concerns disappear if only *you* used AI and no one else did?" | Individual vs. systemic thinking | |
|
| "Does it matter which company's model we use?" | Corporate-specific vs. technology-general objections | |
|
| "Is using AI to learn a concept different from using AI to produce a deliverable?" | Process vs. product orientation | |
|
| "If AI use became universal in industry, would that change your view?" | Idealism vs. pragmatic adaptation | |
|
|
|
### Step 4: Policy Drafting (20 min) |
|
|
|
Given the disagreements surfaced, find the **operational intersection**—what can everyone live with for 10 weeks? |
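
The "operational intersection" is literally a set intersection over what each member can live with. A hypothetical sketch (member names and categories invented for illustration):

```python
# Each member maps a use category to True (can live with it for the
# quarter) or False (cannot). Names and categories are hypothetical.
votes = {
    "alice": {"brainstorming": True,  "image generation": False, "code assistance": True},
    "ben":   {"brainstorming": True,  "image generation": False, "code assistance": False},
    "cara":  {"brainstorming": True,  "image generation": True,  "code assistance": True},
}

def operational_intersection(votes):
    """Return the uses every member accepts -- the most a team can
    permit without overriding anyone's objection. A missing vote is
    treated as an objection."""
    members = list(votes.values())
    categories = set().union(*(m.keys() for m in members))
    return sorted(c for c in categories if all(m.get(c, False) for m in members))
```

Here `operational_intersection(votes)` yields `['brainstorming']`: code assistance fails on one objection, image generation on two. That asymmetry is the design choice worth naming aloud: intersection gives each member a veto, which protects unconditional objectors but can shrink the permitted set to nearly nothing.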
|
|
|
--- |
|
|
|
## AGENTS.md Template |
|
|
|
```markdown |
|
# Project GenAI Policy |
|
## Team: [Name] |
|
## Quarter: [Term Year] |
|
## Last Updated: [Date] |
|
|
|
### Policy Scope |
|
This policy governs AI tool usage for [Project Name]. All team members have agreed to these constraints. AI-assisted tools should parse and respect these rules. |
|
|
|
### Permitted Uses |
|
- [ ] Conceptual brainstorming (not incorporated verbatim) |
|
- [ ] Explaining error messages or documentation |
|
- [ ] Code autocompletion (single-line suggestions) |
|
- [ ] Grammar/spelling checking |
|
- [ ] Accessibility (captions, alt-text generation) |
|
- [ ] [Other: specify] |
|
|
|
### Prohibited Uses |
|
- [ ] Generating art assets for final deliverables |
|
- [ ] Writing design documents or narrative content |
|
- [ ] Multi-line code generation without line-by-line review |
|
- [ ] Generating content in styles of identifiable artists |
|
- [ ] Using outputs from models trained on non-consensual data (specify which) |
|
- [ ] [Other: specify] |
|
|
|
### Conditional Uses (requires team discussion) |
|
- [ ] [Describe use case and conditions] |
|
|
|
### Approved Tools |
|
- [Tool name]: [permitted uses] |
|
- [Tool name]: [prohibited] |
|
|
|
### Prohibited Providers |
|
- [Company/model]: [reason] |
|
|
|
### Documentation Requirements |
|
- All AI-assisted work must be [logged/marked/disclosed in commit messages] |
|
- Team members must be able to explain and modify any AI-assisted code |
|
|
|
### Review & Amendment |
|
- This policy may be revised by [unanimous/majority] team agreement |
|
- Disagreements escalate to [instructor/TA/process] |
|
|
|
### Signatures |
|
- [Name]: [Date] |
|
- [Name]: [Date] |
|
``` |
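
The template asks tools to "parse and respect these rules," which means the checkbox lists must be machine-readable. A minimal sketch of such a parser, assuming the conventions above (an item counts as agreed when its box is checked `[x]`); this is a hypothetical illustration, not an existing tool's API:

```python
import re

def parse_policy(text: str) -> dict:
    """Collect checked '- [x]' items under each '### Section' heading
    of an AGENTS.md policy file. Unchecked '- [ ]' template entries
    are ignored."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        if m := re.match(r"^###\s+(.+)", line):
            current = m.group(1).strip()
            sections.setdefault(current, [])
        elif current and (m := re.match(r"^-\s*\[([xX])\]\s+(.+)", line)):
            sections[current].append(m.group(2).strip())
    return sections

sample = """### Permitted Uses
- [x] Grammar/spelling checking
- [ ] Code autocompletion (single-line suggestions)

### Prohibited Uses
- [x] Generating art assets for final deliverables
"""
# parse_policy(sample) returns only the checked items per section.
```

One implication for the template: phrase each checkbox item so it stands alone without its section heading's context, since a tool will see "Generating art assets for final deliverables" as a flat string.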
|
|
|
--- |
|
|
|
## Common Policy Archetypes |
|
|
|
| Archetype | Typical Justification | AGENTS.md Flavor | |
|
|-----------|----------------------|------------------| |
|
| **Full Prohibition** | Skill development, solidarity, unconditional objection | All generation prohibited; AI-enhanced tool features disabled | |
|
| **Learning Only** | Process matters, but AI can teach | Consultation/explanation permitted; no generated deliverables | |
|
| **Scaffolding Permitted** | Efficiency gains without replacement | Brainstorming, outlining, debugging OK; final work human-authored | |
|
| **Human-in-the-Loop** | Quality control + skill maintenance | Generation OK if every output is reviewed, understood, modified | |
|
| **Asset-Type Specific** | Different labor markets, different ethics | Code assistance yes, image generation no (or vice versa) | |
|
| **Provider Specific** | Corporate accountability matters | Open-weights models only; no OpenAI/Midjourney | |
|
| **Disclosure-Based** | Transparency over prohibition | Any use permitted if documented and disclosed | |
|
| **Pragmatic Adaptation** | Industry reality, competitive pressure | Match expected professional norms | |
|
|
|
--- |
|
|
|
## The Meta-Question for Your Team |
|
|
|
You're not just deciding what to *do*—you're deciding what kind of team you want to be, what kind of practitioners you're becoming, and what world you're helping build. |
|
|
|
The Hou et al. research suggests a warning: **even teams that don't use AI may find their collaboration eroded** by the ambient shift in norms. Your policy might need to address not just "should we use GenAI?" but "how do we protect the collaboration and learning that GenAI makes it easy to skip?" |
|
|
|
--- |
|
|
|
Want me to develop specific discussion prompts for teams with particular fault lines (e.g., one member strongly opposed, others pragmatic)? |
|
|
|
### USER |
|
Use svg to make a kind of tasteful grayscale flyer for this dimension system. We need recognition at a distance and precision up close. |
|
|
|
### ASSISTANT |
|
<details type="reasoning" done="true" duration="7"> |
|
<summary>Thought for 7 seconds</summary> |
|
> The user wants me to create an SVG flyer for the GenAI policy dimension system. They want it to be: |
|
> 1. Tasteful grayscale |
|
> 2. "Recognition at a distance" - meaning clear visual hierarchy, bold shapes, readable from far away |
|
> 3. "Precision up close" - meaning detailed information, fine typography, useful when examined closely |
|
> |
|
> This is essentially an information design challenge. I need to create something that works at multiple viewing distances - like a good poster. |
|
> |
|
> Let me think about the structure: |
|
> - A bold title/header that's visible from distance |
|
> - The 5 dimensions as the core organizational structure |
|
> - Clean typography |
|
> - Good use of white space |
|
> - Grayscale palette (blacks, whites, grays) |
|
> |
|
> For "recognition at a distance": |
|
> - Large bold title |
|
> - Clear geometric shapes to distinguish sections |
|
> - High contrast |
|
> - Maybe numbered dimensions with large numbers |
|
> |
|
> For "precision up close": |
|
> - The detailed taxonomy content |
|
> - Smaller text with the specifics |
|
> - Clean hierarchy within each dimension |
|
> |
|
> Let me design this as a vertical poster format, maybe around 800x1200 or similar proportions. |
|
> |
|
> I'll use: |
|
> - A strong header |
|
> - 5 dimensions as the main sections |
|
> - Each dimension gets a visual treatment |
|
> - Grayscale only |
|
> - Clean sans-serif feel (using system fonts available in SVG) |
|
> |
|
> Let me create this: |
|
</details> |
|
```svg |
|
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 800 1200"> |
|
<defs> |
|
<pattern id="dots" patternUnits="userSpaceOnUse" width="20" height="20"> |
|
<circle cx="10" cy="10" r="1" fill="#999"/> |
|
</pattern> |
|
<linearGradient id="fade" x1="0%" y1="0%" x2="0%" y2="100%"> |
|
<stop offset="0%" stop-color="#1a1a1a"/> |
|
<stop offset="100%" stop-color="#2d2d2d"/> |
|
</linearGradient> |
|
</defs> |
|
|
|
<!-- Background --> |
|
<rect width="800" height="1200" fill="#fafafa"/> |
|
|
|
<!-- Header block --> |
|
<rect width="800" height="200" fill="url(#fade)"/> |
|
<text x="50" y="85" font-family="Georgia, serif" font-size="14" fill="#888" letter-spacing="4">TEAM POLICY FRAMEWORK</text> |
|
<text x="50" y="145" font-family="Georgia, serif" font-size="52" fill="#fff" font-weight="bold">GenAI</text> |
|
<text x="235" y="145" font-family="Georgia, serif" font-size="52" fill="#666">Decisions</text> |
|
<text x="50" y="175" font-family="system-ui, sans-serif" font-size="13" fill="#999">Five dimensions for project teams writing AGENTS.md policies</text> |
|
|
|
<!-- Dimension 1: Locus of Harm --> |
|
<g transform="translate(0, 220)"> |
|
<rect x="0" y="0" width="8" height="160" fill="#1a1a1a"/> |
|
<text x="50" y="30" font-family="system-ui, sans-serif" font-size="64" fill="#e0e0e0" font-weight="200">1</text> |
|
<text x="110" y="30" font-family="Georgia, serif" font-size="22" fill="#1a1a1a" font-weight="bold">Locus of Harm</text> |
|
<text x="110" y="50" font-family="system-ui, sans-serif" font-size="11" fill="#666">Who or what is affected by this choice?</text> |
|
|
|
<!-- Scale visualization --> |
|
<g transform="translate(110, 70)"> |
|
<circle cx="0" cy="12" r="10" fill="#1a1a1a"/> |
|
<text x="0" y="16" font-family="system-ui, sans-serif" font-size="8" fill="#fff" text-anchor="middle">Self</text> |
|
|
|
<line x1="20" y1="12" x2="65" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="80" cy="12" r="10" fill="#333"/> |
|
<text x="80" y="16" font-family="system-ui, sans-serif" font-size="8" fill="#fff" text-anchor="middle">Team</text> |
|
|
|
<line x1="100" y1="12" x2="145" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="160" cy="12" r="10" fill="#555"/> |
|
<text x="160" y="16" font-family="system-ui, sans-serif" font-size="7" fill="#fff" text-anchor="middle">Com.</text> |
|
|
|
<line x1="180" y1="12" x2="225" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="240" cy="12" r="10" fill="#777"/> |
|
<text x="240" y="16" font-family="system-ui, sans-serif" font-size="7" fill="#fff" text-anchor="middle">Ind.</text> |
|
|
|
<line x1="260" y1="12" x2="305" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="320" cy="12" r="10" fill="#999"/> |
|
<text x="320" y="16" font-family="system-ui, sans-serif" font-size="6" fill="#fff" text-anchor="middle">Creators</text> |
|
|
|
<line x1="340" y1="12" x2="385" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="400" cy="12" r="10" fill="#aaa"/> |
|
<text x="400" y="16" font-family="system-ui, sans-serif" font-size="6" fill="#333" text-anchor="middle">Society</text> |
|
|
|
<line x1="420" y1="12" x2="465" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="480" cy="12" r="10" fill="#bbb"/> |
|
<text x="480" y="16" font-family="system-ui, sans-serif" font-size="7" fill="#333" text-anchor="middle">Env.</text> |
|
|
|
<line x1="500" y1="12" x2="545" y2="12" stroke="#ccc" stroke-width="1"/> |
|
<circle cx="560" cy="12" r="10" fill="#ccc"/> |
|
<text x="560" y="16" font-family="system-ui, sans-serif" font-size="6" fill="#333" text-anchor="middle">Geo.</text> |
|
</g> |
|
|
|
<text x="110" y="115" font-family="system-ui, sans-serif" font-size="9" fill="#888">Self: skill atrophy, shame • Team: free-riders, silos • Community: silent Discords</text> |
|
<text x="110" y="130" font-family="system-ui, sans-serif" font-size="9" fill="#888">Industry: jobs, wages • Creators: consent, compensation • Society: cultural erasure</text> |
|
<text x="110" y="145" font-family="system-ui, sans-serif" font-size="9" fill="#888">Environment: energy, water, hardware • Geopolitics: corporate power, military ties</text> |
|
</g> |
|
|
|
<!-- Dimension 2: Temporal Scope --> |
|
<g transform="translate(0, 395)"> |
|
<rect x="0" y="0" width="8" height="120" fill="#444"/> |
|
<text x="50" y="30" font-family="system-ui, sans-serif" font-size="64" fill="#e0e0e0" font-weight="200">2</text> |
|
<text x="110" y="30" font-family="Georgia, serif" font-size="22" fill="#1a1a1a" font-weight="bold">Temporal Scope</text> |
|
<text x="110" y="50" font-family="system-ui, sans-serif" font-size="11" fill="#666">When does the harm manifest?</text> |
|
|
|
<!-- Timeline visualization --> |
|
<g transform="translate(110, 65)"> |
|
<rect x="0" y="8" width="180" height="4" fill="#ddd"/> |
|
<rect x="0" y="8" width="60" height="4" fill="#1a1a1a"/> |
|
<rect x="60" y="8" width="60" height="4" fill="#666"/> |
|
<rect x="120" y="8" width="60" height="4" fill="#999"/> |
|
|
|
<text x="30" y="35" font-family="system-ui, sans-serif" font-size="10" fill="#1a1a1a" text-anchor="middle" font-weight="bold">This Quarter</text> |
|
<text x="30" y="47" font-family="system-ui, sans-serif" font-size="8" fill="#888" text-anchor="middle">Immediate effects</text> |
|
|
|
<text x="90" y="35" font-family="system-ui, sans-serif" font-size="10" fill="#666" text-anchor="middle" font-weight="bold">Career</text> |
|
<text x="90" y="47" font-family="system-ui, sans-serif" font-size="8" fill="#888" text-anchor="middle">Developmental</text> |
|
|
|
<text x="150" y="35" font-family="system-ui, sans-serif" font-size="10" fill="#999" text-anchor="middle" font-weight="bold">Systemic</text> |
|
<text x="150" y="47" font-family="system-ui, sans-serif" font-size="8" fill="#888" text-anchor="middle">Trend trajectories</text> |
|
</g> |
|
</g> |
|
|
|
<!-- Dimension 3: Conditionality --> |
|
<g transform="translate(0, 530)"> |
|
<rect x="0" y="0" width="8" height="145" fill="#666"/> |
|
<text x="50" y="30" font-family="system-ui, sans-serif" font-size="64" fill="#e0e0e0" font-weight="200">3</text> |
|
<text x="110" y="30" font-family="Georgia, serif" font-size="22" fill="#1a1a1a" font-weight="bold">Conditionality</text> |
|
<text x="110" y="50" font-family="system-ui, sans-serif" font-size="11" fill="#666">Under what circumstances might policy change?</text> |
|
|
|
<!-- Condition boxes --> |
|
<g transform="translate(110, 65)"> |
|
<rect x="0" y="0" width="130" height="28" fill="none" stroke="#1a1a1a" stroke-width="2"/> |
|
<text x="65" y="12" font-family="system-ui, sans-serif" font-size="9" fill="#1a1a1a" text-anchor="middle" font-weight="bold">UNCONDITIONAL</text> |
|
<text x="65" y="23" font-family="system-ui, sans-serif" font-size="8" fill="#666" text-anchor="middle">"Wrong regardless"</text> |
|
|
|
<rect x="140" y="0" width="130" height="28" fill="none" stroke="#555" stroke-width="1.5"/> |
|
<text x="205" y="12" font-family="system-ui, sans-serif" font-size="9" fill="#555" text-anchor="middle" font-weight="bold">IF EFFECTIVE</text> |
|
<text x="205" y="23" font-family="system-ui, sans-serif" font-size="8" fill="#666" text-anchor="middle">"Only if it works"</text> |
|
|
|
<rect x="280" y="0" width="130" height="28" fill="none" stroke="#888" stroke-width="1"/> |
|
<text x="345" y="12" font-family="system-ui, sans-serif" font-size="9" fill="#888" text-anchor="middle" font-weight="bold">IF NO ALTERNATIVE</text> |
|
<text x="345" y="23" font-family="system-ui, sans-serif" font-size="8" fill="#666" text-anchor="middle">"Last resort only"</text> |
|
|
|
<rect x="420" y="0" width="130" height="28" fill="none" stroke="#aaa" stroke-width="1" stroke-dasharray="4,2"/> |
|
<text x="485" y="12" font-family="system-ui, sans-serif" font-size="9" fill="#999" text-anchor="middle" font-weight="bold">BY PROVIDER</text> |
|
<text x="485" y="23" font-family="system-ui, sans-serif" font-size="8" fill="#666" text-anchor="middle">"OpenAI no, local yes"</text> |
|
</g> |
|
|
|
<text x="110" y="120" font-family="system-ui, sans-serif" font-size="9" fill="#888">Also: conditional on use-type (brainstorm vs. deliverable), on disclosure, on team consensus</text> |
|
</g> |
|
|
|
<!-- Dimension 4: Agency & Complicity --> |
|
<g transform="translate(0, 690)"> |
|
<rect x="0" y="0" width="8" height="130" fill="#888"/> |
|
<text x="50" y="30" font-family="system-ui, sans-serif" font-size="64" fill="#e0e0e0" font-weight="200">4</text> |
|
  <text x="110" y="30" font-family="Georgia, serif" font-size="22" fill="#1a1a1a" font-weight="bold">Agency &amp; Complicity</text>
|
<text x="110" y="50" font-family="system-ui, sans-serif" font-size="11" fill="#666">How does individual choice relate to collective outcome?</text> |
|
|
|
<!-- Spectrum --> |
|
<g transform="translate(110, 65)"> |
|
<rect x="0" y="0" width="90" height="50" fill="#1a1a1a"/> |
|
<text x="45" y="20" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle" font-weight="bold">ACTIVE</text> |
|
<text x="45" y="32" font-family="system-ui, sans-serif" font-size="8" fill="#aaa" text-anchor="middle">Using = funding</text> |
|
<text x="45" y="43" font-family="system-ui, sans-serif" font-size="8" fill="#aaa" text-anchor="middle">and normalizing</text> |
|
|
|
<rect x="95" y="0" width="90" height="50" fill="#444"/> |
|
<text x="140" y="20" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle" font-weight="bold">PASSIVE</text> |
|
<text x="140" y="32" font-family="system-ui, sans-serif" font-size="8" fill="#aaa" text-anchor="middle">Benefiting from</text> |
|
<text x="140" y="43" font-family="system-ui, sans-serif" font-size="8" fill="#aaa" text-anchor="middle">others' use</text> |
|
|
|
<rect x="190" y="0" width="90" height="50" fill="#777"/> |
|
<text x="235" y="20" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle" font-weight="bold">TAINTED</text> |
|
<text x="235" y="32" font-family="system-ui, sans-serif" font-size="8" fill="#ddd" text-anchor="middle">No clean hands</text> |
|
<text x="235" y="43" font-family="system-ui, sans-serif" font-size="8" fill="#ddd" text-anchor="middle">either way</text> |
|
|
|
<rect x="285" y="0" width="90" height="50" fill="#aaa"/> |
|
<text x="330" y="20" font-family="system-ui, sans-serif" font-size="9" fill="#333" text-anchor="middle" font-weight="bold">FUTILE</text> |
|
<text x="330" y="32" font-family="system-ui, sans-serif" font-size="8" fill="#555" text-anchor="middle">Individual choice</text> |
|
<text x="330" y="43" font-family="system-ui, sans-serif" font-size="8" fill="#555" text-anchor="middle">doesn't matter</text> |
|
</g> |
|
</g> |
|
|
|
<!-- Dimension 5: What Counts as "Use" --> |
|
<g transform="translate(0, 835)"> |
|
<rect x="0" y="0" width="8" height="170" fill="#aaa"/> |
|
<text x="50" y="30" font-family="system-ui, sans-serif" font-size="64" fill="#e0e0e0" font-weight="200">5</text> |
|
<text x="110" y="30" font-family="Georgia, serif" font-size="22" fill="#1a1a1a" font-weight="bold">What Counts as "Use"</text> |
|
<text x="110" y="50" font-family="system-ui, sans-serif" font-size="11" fill="#666">Where do you draw the operational line?</text> |
|
|
|
<!-- Use spectrum --> |
|
<g transform="translate(110, 62)"> |
|
<!-- Row 1 --> |
|
<rect x="0" y="0" width="150" height="22" fill="#1a1a1a"/> |
|
<text x="75" y="15" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle">Direct generation → deliverable</text> |
|
|
|
<rect x="155" y="0" width="150" height="22" fill="#444"/> |
|
<text x="230" y="15" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle">Teammate generates, you use</text> |
|
|
|
<rect x="310" y="0" width="150" height="22" fill="#666"/> |
|
<text x="385" y="15" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle">AI-enhanced tool features</text> |
|
|
|
<!-- Row 2 --> |
|
<rect x="0" y="27" width="150" height="22" fill="#888"/> |
|
<text x="75" y="42" font-family="system-ui, sans-serif" font-size="9" fill="#fff" text-anchor="middle">AI-assisted search/summaries</text> |
|
|
|
<rect x="155" y="27" width="150" height="22" fill="#aaa"/> |
|
<text x="230" y="42" font-family="system-ui, sans-serif" font-size="9" fill="#333" text-anchor="middle">Explaining concepts to learn</text> |
|
|
|
<rect x="310" y="27" width="150" height="22" fill="#ccc"/> |
|
<text x="385" y="42" font-family="system-ui, sans-serif" font-size="9" fill="#333" text-anchor="middle">Generate many, select, modify</text> |
|
|
|
<!-- Row 3 --> |
|
<rect x="0" y="54" width="150" height="22" fill="#ddd"/> |
|
<text x="75" y="69" font-family="system-ui, sans-serif" font-size="9" fill="#333" text-anchor="middle">Grammar/spell check</text> |
|
|
|
<rect x="155" y="54" width="150" height="22" fill="#eee"/> |
|
<text x="230" y="69" font-family="system-ui, sans-serif" font-size="9" fill="#555" text-anchor="middle">Accessibility tools</text> |
|
|
|
<rect x="310" y="54" width="150" height="22" fill="none" stroke="#ccc"/> |
|
<text x="385" y="69" font-family="system-ui, sans-serif" font-size="9" fill="#999" text-anchor="middle">Autocomplete (single-line)</text> |
|
</g> |
|
|
|
<text x="110" y="158" font-family="system-ui, sans-serif" font-size="9" fill="#666" font-style="italic">Your policy must specify which cells are permitted, prohibited, or conditional</text> |
|
</g> |
|
|
|
<!-- Footer --> |
|
<rect x="0" y="1020" width="800" height="180" fill="url(#dots)"/> |
|
<rect x="50" y="1040" width="700" height="130" fill="#fff" stroke="#ddd"/> |
|
|
|
<text x="80" y="1070" font-family="Georgia, serif" font-size="16" fill="#1a1a1a" font-weight="bold">The Key Diagnostic</text> |
|
<text x="80" y="1095" font-family="Georgia, serif" font-size="12" fill="#555" font-style="italic">"Would your concerns disappear if training were consensual and artists compensated?"</text> |
|
<text x="80" y="1120" font-family="system-ui, sans-serif" font-size="10" fill="#888">YES → Justice/consent is your core concern</text> |
|
<text x="80" y="1135" font-family="system-ui, sans-serif" font-size="10" fill="#888">NO → Craft, pedagogy, social fabric, environment, or power are your deeper concerns</text> |
|
<text x="80" y="1155" font-family="system-ui, sans-serif" font-size="10" fill="#1a1a1a" font-weight="bold">Teams that skip this question often discover false consensus too late.</text> |
|
|
|
<!-- Corner mark --> |
|
<polygon points="750,1150 780,1150 780,1180" fill="#1a1a1a"/> |
|
|
|
<!-- Top corner accent --> |
|
<rect x="720" y="220" width="60" height="3" fill="#1a1a1a"/> |
|
<rect x="777" y="220" width="3" height="60" fill="#1a1a1a"/> |
|
</svg> |
|
``` |
|
|
|
This flyer uses: |
|
|
|
**For recognition at a distance:** |
|
- Large dimension numbers (64px, light weight) as visual anchors
|
- Strong left-edge bars with graduated grayscale (darkest = most proximate concerns) |
|
- High-contrast header block |
|
- Clear vertical rhythm—five distinct bands |
|
|
|
**For precision up close:** |
|
- Detailed spectrums and matrices within each dimension |
|
- Fine typography for specific examples (skill atrophy, silent Discords, etc.) |
|
- The operational grid in Dimension 5 showing exactly where policy lines might be drawn |
|
- The diagnostic question callout at bottom—the actionable takeaway |
|
|
|
**Design logic:** |
|
- The grayscale gradient across dimensions subtly suggests the "locus of harm" moving from personal (dark) to diffuse (light) |
|
- Dotted pattern in footer creates visual separation without heavy color |
|
- Serif for conceptual headers, sans-serif for operational details |
|
- Corner marks suggest a document meant to be printed and posted |