Measuring integrity and responsibility in AI integration

Are we still creative if our creative processes are increasingly invaded by machines?

MIRAI (Measuring Integrity and Responsibility in AI Integration) is a research-creation project that explores how students creatively interact, engage, and experiment with Generative AI, unpacking how it influences their decisions, workflows, and sense of agency. Is it possible to assess AI involvement in creative work the way Turnitin quantifies plagiarism? Should we? Through hands-on art-making and reflective discussions, MIRAI challenges students to move beyond binary claims of “using” or “not using” AI, urging them to articulate how and why they use it. One key outcome is the Creative AI Manifesto, a collective articulation of values and practices for responsible, transparent, and critically aware use of AI in creative education.

This group project is funded by the Generative AI Laboratory (GAIL) at the University of Edinburgh and was initiated during the Generative Creative Visions workshop. The name MIRAI also echoes the Japanese word 未来 (mirai), meaning “future”.


WORKSHOP METHODS

Mapping human and AI contributions

Using the double diamond design thinking model as a foundation, the creative process was broken down into idea-driven (discover & define) and production-driven (develop & deliver) phases, layered with interaction points: human intent, AI contribution, and human feedback. This separation between the “mind” (conceptual thinking) and “hands-on” (execution) stages proved helpful in prompting participants to rethink their creative process more critically. Some initially overvalued GenAI due to its visible outputs, but later recognised that much of the creative direction stemmed from human input, especially during ideation. A recurring paradox emerged: even when ideas seemed wholly human, they may not have been possible without AI, as the technology inspired them. Ethical concerns also arose regarding GenAI systems trained on copyrighted content, leading to more complex questions on how this prior training should fit into an ethical creative process.

Contextualising AI roles

Participants assigned varied roles to AI in their creative processes, reflecting an evolving understanding of Generative AI. While tool and assistant were the most commonly assigned roles, several participants identified AI as a medium, muse, performer, analyst, and even a critic. This range reveals that AI’s perceived role is highly context-dependent, shaped by the creator’s intent and interaction style. Rather than a fixed actor, AI was seen as a flexible agent whose role could shift across the ideation and production process. The diversity of assigned roles also suggests a growing awareness of AI as a collaborator with substantial influence, capable of sparking ideas, shaping direction, or enacting performances, while still being guided by human authorship.

Rethinking AI involvement through transparency mapping

The transparency meter prompted participants to assess AI involvement in their creative process using a more structured method. Initially, participants provided percentage estimates based on intuition. After applying the meter, which rated each of the discover, define, develop, and deliver phases on a 7-point Likert scale (with zero indicating no AI use), they recalculated AI involvement more precisely. The results revealed a mixed pattern: while some artworks showed a higher AI percentage post-assessment (e.g., Artwork 1: 40→46, Artwork 4: 20→25), most saw a decrease (e.g., Artwork 2: 90→54, Artwork 8: 90→43).

This suggests that intuitive estimations often overemphasise AI’s role, particularly when visible outputs dominate perception.

Still, one participant preferred their intuitive guess, suggesting that numbers alone can’t capture creative nuance. Others questioned whether all phases should be weighed equally, noting that a concept generated solely by AI could still shape the entire project, even if AI played no visible role in the final output. This led to debates on fairly measuring AI’s influence when it operates at a conceptual level. Many agreed that intent—the purpose and vision behind decisions—is central to authorship, reinforcing that tools like GenAI don’t create meaning independently. Instead, agency is distributed: AI tools act under human direction and inherit purpose from the creator. The transparency meter thus became less about assigning a number and more about encouraging ethical reflection on authorship.
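The article does not publish the meter's aggregation formula, but the idea of turning per-phase Likert ratings into a single involvement percentage can be sketched as follows. Equal phase weights, linear scaling of the 7-point scale, and the function name `ai_involvement` are illustrative assumptions, not the project's actual method; the optional `weights` parameter merely illustrates the participants' debate over whether conceptual phases should count differently from production phases.

```python
# Hypothetical aggregation of transparency-meter ratings into an
# AI-involvement percentage. Equal default weights and linear scaling
# of the 7-point Likert scale are assumptions for illustration only.

PHASES = ("discover", "define", "develop", "deliver")

def ai_involvement(ratings, weights=None):
    """ratings: phase -> Likert score 1-7, or 0 (or absent) if AI unused.
    weights: optional phase -> relative weight, reflecting the debate over
    whether ideation phases should count more than production phases."""
    weights = weights or {}
    weighted_max = sum(7 * weights.get(p, 1.0) for p in PHASES)
    weighted_sum = sum(ratings.get(p, 0) * weights.get(p, 1.0) for p in PHASES)
    return round(100 * weighted_sum / weighted_max, 1)

# Equal weighting: AI heavy in production, absent from ideation.
print(ai_involvement({"develop": 6, "deliver": 7}))

# Doubling the ideation weights lowers the score for the same ratings,
# since AI contributed nothing in the discover and define phases.
print(ai_involvement({"develop": 6, "deliver": 7},
                     {"discover": 2, "define": 2, "develop": 1, "deliver": 1}))
```

Under this toy scheme, a work with no AI in ideation scores lower once ideation is weighted more heavily, mirroring the participants' point that a purely AI-generated concept could dominate a project even when the final output shows no visible AI involvement.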

WORKSHOP FINDINGS

Between expansion and constraint

Participants identified key limitations in AI models that both hinder and enhance the creative process. While AI can restrict original thinking—especially when constrained by prompts or when it clings to generic or widely accepted ideas—it can also act as a catalyst for expanding thought. Some noted that AI’s tendency to generate unexpected or surreal suggestions can distract from their intended direction or create an overwhelming stream of possibilities. However, these constraints and surprises also encouraged them to reflect on future potentials and refine their concepts more deliberately. Like a snowball effect, AI-driven ideation grows rapidly, pushing boundaries but sometimes at the cost of clarity.

Reflecting on AI’s creative autonomy

Participants generally viewed AI as having limited creative autonomy. Its outputs often felt predictable, repetitive, and lacking the depth to challenge their original ideas. While AI could shape or expand a concept to some extent, it rarely challenged the author’s core vision. Some noted that AI’s surprises could still provoke new insights, even when flawed or inaccurate, and that these moments influenced future thinking. Others experimented with granting AI more autonomy, such as letting chatbots act as characters in a drama, but withheld decision-making power in key creative stages. Participants also expressed a desire for multimodal interactions beyond text and suggested that the current one-to-one exchanges limit AI’s creative richness.

Redefining authorship

Participants expressed mixed feelings about authorship in the age of AI-generated content, questioning whether it’s possible—or even necessary—to disclose AI involvement. Rather than a binary of “used” or “not used,” many advocated for more nuanced approaches, as AI’s influence often exists on a spectrum that is hard to quantify. Analogies were drawn to using pre-made Photoshop brushes or pens to write, raising the question of whether AI should be treated differently from other creative tools. Although the transparency meter helped initiate reflection, participants felt the resulting percentage could not fully represent the complexity of creative agency. Unlike Turnitin, it wasn’t about detection, but interpretation. The discussion also highlighted grey areas in authorship, such as tracing over AI-generated images. While some confidently believed they could outperform AI creatively, others questioned the legitimacy of claiming AI-assisted work as entirely their own. Ultimately, many agreed that drawing the ethical line depends on the context in which AI is used, not just the tools involved.

Fairness beyond access

Participants’ concerns around fairness focused less on access to AI tools and more on the ethics and transparency behind the datasets. Some expressed curiosity about the origins of open-source training data, while others experimented with training AI on their own sketches or using open-access resources, which led to outputs they felt were more ethical and personalised. These reflections revealed a growing interest in understanding both what AI produces and how it learns, as participants did not want to contribute, even indirectly, to an unfair process. Underpinning all this was a shared concern: as AI-generated content becomes easier to produce, clients increasingly undervalue creative labour, assuming they can replicate results at minimal cost, further threatening the perceived worth of human creativity.

MANIFESTO

Section 1: Technology as a social construct

[…]

Section 2: The value of human creativity

[…]

Section 3: Tools or companions? Redefining human-AI relationship

[…]

Section 4: Art, ownership, and the free market

[…]

Section 5: Academia’s AI dilemma

[…]

Section 6: Ethical stewardship in technology development

[…]

Section 7: Content overload & the productivity trap

[…]

Section 8: Human agency in systemic change

[…]

This is just the beginning of a new era and a new story for humanity.

How it continues depends on the responsible, active role we take in these changes as members of society.

This is an ongoing MANIFESTO!


The full manifesto can be accessed here.

Jean-Louis Forain, Tight-Rope Walker (1885)
