Jääskeläinen, Petra (ORCID iD: orcid.org/0000-0002-0028-9030)
Publications (10 of 16)
Jääskeläinen, P., Kaila, A.-K. & Holzapfel, A. (2024). Uncovering Challenges and Changes in Artists’ Practices as a Consequence of AI. In: Workshop Proceedings of GenAICHI - CHI 2024 Workshop on Generative AI and HCI. Paper presented at the ACM CHI Conference on Human Factors in Computing Systems, Honolulu, USA, May 11–16, 2024.
2024 (English). In: Workshop Proceedings of GenAICHI - CHI 2024 Workshop on Generative AI and HCI, 2024. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

Artistic uses of AI technologies are fast gaining prominence in a number of creative domains. In this paper, we describe our preliminary research exploring the challenges and changes that working with AI poses to artists based on interviews with N=20 artists. We present preliminary themes relating to challenges and changes that artists are encountering and highlight the importance of studying AI further in situated artistic practices.

National Category
Human Computer Interaction
Identifiers
urn:nbn:se:kth:diva-345778 (URN)
Conference
ACM CHI Conference on Human Factors in Computing Systems, Honolulu, USA, May 11–16, 2024
Note

QC 20240502

Available from: 2024-04-18. Created: 2024-04-18. Last updated: 2024-05-02. Bibliographically approved.
Jääskeläinen, P. & Åsberg, C. (2024). What’s the Look of “Negative Gender” and “Max Ethnicity” in AI-Generated Images? A Critical Visual Analysis of the Intersectional Politics of Portrayal. In: AltCHI. ACM Conference on Human Factors in Computing Systems (CHI). Paper presented at the ACM Conference on Human Factors in Computing Systems (CHI), Honolulu, USA, May 11–16, 2024. ACM Digital Library.
2024 (English). In: AltCHI. ACM Conference on Human Factors in Computing Systems (CHI), ACM Digital Library, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

In this exploratory paper, we focus on the intersecting design-political and visual processes of gendering and racializing in online AI image generators, in particular ArtBreeder and Midjourney. While AI image generators are becoming an integrated part of our contemporary society, they draw on cultural and historical imaging conventions of sorting and ordering the world and the people in it. These tools’ powerful visual rhetoric can aggravate existing discrimination if not critically reflected upon. We argue that these design-facilitated representations position the ‘user’ within a cultural imagery of representations with political implications. With an intersectional perspective from feminist visual analysis, we critique and uncover how gender and ethnicity are represented and built into the systems, both in terms of visual culture and in designed interactions. We problematize these design strategies and urge the HCI community to engage in further design-political inquiries regarding the visual culture mediated by AI image generators.

Place, publisher, year, edition, pages
ACM Digital Library, 2024
National Category
Human Computer Interaction
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-345776 (URN)
10.1145/3613905.3644057 (DOI)
Conference
ACM Conference on Human Factors in Computing Systems (CHI), Honolulu, USA, May 11–16, 2024
Note

QC 20240503

Available from: 2024-04-18. Created: 2024-04-18. Last updated: 2024-05-03. Bibliographically approved.
Zhu, H., Faradynawati, I. A. & Jääskeläinen, P. (2023). Can AI evoke customers' sustainable investment preferences? A user study of Robo-advisors. In: The 2023 International Conference on Sustainability, Environment, and Social Transition in Economics and Finance (SESTEF 2023). Paper presented at the 2023 International Conference on Sustainability, Environment, and Social Transition in Economics and Finance (SESTEF 2023).
2023 (English). In: The 2023 International Conference on Sustainability, Environment, and Social Transition in Economics and Finance (SESTEF 2023), 2023. Conference paper, Oral presentation only (Refereed)
Abstract [en]

AI-empowered financial advisory services (robo-advisory services) have emerged as one channel for attracting customers to invest in financial products that integrate environmental, social, and governance (ESG) criteria into their investment objectives. This reasoning is often based on the assumption that AI is more transparent about fees and less biased than conventional human advisors, while also lowering the entry bar for young and low-budget customers. In automated services with little or no human intervention, users’ experience and ability to use this type of service play a critical role in supporting consumer investment decisions. However, the impact of robo-advisor user experience on customers’ preference for sustainable investment choices has not been addressed by previous studies. In the long run, can robo-advisory services significantly support or promote a more sustainable investment portfolio for customers? We provide initial insights into these questions based on a mixed-method user test, including a pre-test survey, observations of robo-advisor usage, and a post-test retrospective interview. The preliminary results show that this AI-empowered system and its service have not fulfilled expectations to support customers’ sustainable investment decision-making, due to a lack of comprehensible information and transparency. First, we explain the relationships between customers’ intention to select a sustainable portfolio and their characteristics and attitudes towards AI. The observation and interview data reveal that a standardized definition of and criteria for sustainable investments and portfolios are needed for customers to establish a fundamental understanding of this service. Customers also demand more explanation of the final recommendation automatically generated by the AI.

National Category
Business Administration
Research subject
Business Studies
Identifiers
urn:nbn:se:kth:diva-340318 (URN)
Conference
The 2023 International Conference on Sustainability, Environment, and Social Transition in Economics and Finance (SESTEF 2023)
Note

QC 20231207

Available from: 2023-12-07. Created: 2023-12-07. Last updated: 2023-12-07. Bibliographically approved.
Jääskeläinen, P. (2023). Environmental Ethics of Creative-AI: Shifting from Modernist Rationality to Entangled, More-than-Human Felt Care. In: Creativity Unleashed - What Creative Technologies Would, Could, and Shouldn’t be. Paper presented at Creativity Unleashed - What Creative Technologies Would, Could, and Shouldn’t be. Session: Impact & Interactions with Humans, April 27, 2023, Online/Virtual.
2023 (English). In: Creativity Unleashed - What Creative Technologies Would, Could, and Shouldn’t be, 2023. Conference paper, Oral presentation only (Other (popular science, discussion, etc.))
National Category
Design
Identifiers
urn:nbn:se:kth:diva-337722 (URN)
Conference
Creativity Unleashed - What Creative Technologies Would, Could, and Shouldn’t be. Session: Impact & Interactions with Humans, April 27, 2023, Online/Virtual
Note

QC 20231009

Available from: 2023-10-06. Created: 2023-10-06. Last updated: 2023-10-09. Bibliographically approved.
Kaila, A.-K., Jääskeläinen, P. & Holzapfel, A. (2023). Ethically Aligned Stakeholder Elicitation (EASE): Case Study in Music-AI. In: NIME. Paper presented at New Interfaces for Musical Expression (NIME), Mexico City, Mexico, 31 May–2 June.
2023 (English). In: NIME, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

Engineering communities that feed the current proliferation of artificial intelligence (AI) have historically been slow to recognise the spectrum of societal impacts of their work. Frequent controversies around AI applications in creative domains demonstrate insufficient consideration of ethical predicaments, but the abstract principles of current AI and data ethics documents provide little practical guidance. Pragmatic methods are urgently needed to support developers in ethical reflection of their work on creative-AI tools. In the wider context of value sensitive, people-oriented design, we present an analytical method that implements an ethically informed and power-sensitive stakeholder identification and mapping: Ethically Aligned Stakeholder Elicitation (EASE). As a case study, we test our method in workshops with six research groups that develop AI in musical contexts. Our results demonstrate that EASE supports critical self-reflection of the research and outreach practices among developers, discloses power relations and value tensions in the development processes, and foregrounds opportunities for stakeholder engagement. This can guide developers and the wider NIME community towards ethically aligned research and development of creative-AI.

Keywords
computational creativity, music, ethics, stakeholder, Value Sensitive Design
National Category
Information Systems, Social aspects
Identifiers
urn:nbn:se:kth:diva-328254 (URN)
Conference
New Interfaces for Musical Expression (NIME), Mexico City, Mexico, 31 May–2 June
Note

QC 20230614

Available from: 2023-06-06. Created: 2023-06-06. Last updated: 2023-08-14. Bibliographically approved.
Jääskeläinen, P. (2023). Explainable Sustainability for AI in the Arts. In: The 1st International Workshop on Explainable AI for the Arts, ACM Creativity and Cognition Conference, 2023. Paper presented at The 1st International Workshop on Explainable AI for the Arts, ACM Creativity and Cognition Conference.
2023 (English). In: The 1st International Workshop on Explainable AI for the Arts, ACM Creativity and Cognition Conference, 2023. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

AI is becoming increasingly popular in artistic practices, but the tools for informing practitioners about the environmental impact (and other sustainability implications) of AI are adapted to contexts other than creative practice, making both the tools and the sustainability implications of AI inaccessible to artists and creative practitioners. In this position paper, I describe two empirical studies that aim to develop environmental sustainability reflection systems for AI Arts, and discuss and introduce Explainable Sustainability for AI Arts.

Keywords
sustainability, AI art, explainable sustainability
National Category
Human Computer Interaction
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-327919 (URN)
Conference
The 1st International Workshop on Explainable AI for the Arts, ACM Creativity and Cognition Conference
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 2020.0102
Note

QC 20230614

Available from: 2023-06-01. Created: 2023-06-01. Last updated: 2024-05-03. Bibliographically approved.
Jääskeläinen, P. (2023). Generative AI for First-Person Meta Reflection in Design Research: More-than-Human Storying the Ecologies of AI Arts. In: Towards a Design (Research) Framework with Generative AI. Paper presented at Designing Interactive Systems (DIS) '23 Workshop: Towards a Design (Research) Framework with Generative AI, Pittsburgh, Pennsylvania, 11 July 2023.
2023 (English). In: Towards a Design (Research) Framework with Generative AI, 2023. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

While sensitivities toward ecologies and more-than-human design and research have become visible in HCI, design, and the humanities [1–3, 7–11], inquiries from these perspectives are not common in the specific case of AI Arts [6]. This is a challenge, as AI arts is often approached from a technological post-humanist perspective, mixed with values of techno-positivism, in an attempt to find new ways of expression and creativity or ways of utilizing AI for creative work, while overlooking feminist care for the environment and more-than-human ecologies. In this position paper, I explore Generative AI as a tool for reflecting on such challenges that relate to ‘doing research in AI arts’ from a first-person perspective. In order to explore alternative ways of knowledge-making, I use the method of storying speculative conversations with ChatGPT’s “More-than-Human Alter Ego” gAIa. The results of this process surfaced that using Generative AI for more-than-human storying can work as a method for first-person reflection on the challenges, experiences, and situated context of doing design research work through a first-person lens.

National Category
Design
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-337723 (URN)
Conference
Designing Interactive Systems (DIS) '23 Workshop: Towards a Design (Research) Framework with Generative AI, Pittsburgh, Pennsylvania, 11 July 2023
Funder
Marianne and Marcus Wallenberg Foundation, 2020.0102
Note

QC 20240429

Available from: 2023-10-06. Created: 2023-10-06. Last updated: 2024-04-29. Bibliographically approved.
Ren, Y., Sivakumaran, A., Niemelä, E. & Jääskeläinen, P. (2023). How to Make AI Artists Feel Guilty in a Good Way?: Designing Integrated Sustainability Reflection Tools (SRTs) for Visual Generative AI. In: ICCC '23 International Conference of Computational Creativity. Paper presented at the ICCC International Conference of Computational Creativity, Waterloo, Canada, 19–23 June 2023.
2023 (English). In: ICCC '23 International Conference of Computational Creativity, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

AI can be energy intensive, and artists currently lack access to empowering information. With growing concerns about climate change and calls for environmental sustainability, there is a real need to explore strategies for communicating sustainability information to artists who use generative AI, given its increasing presence and widening accessibility. This paper presents an exploratory Research-through-Design study (including a design-informing survey, design prototyping, and user testing) of integrating sustainability reflection features into generative AI systems, and provides preliminary knowledge of the design characteristics that can be leveraged, including artists’ experiences of them. The paper finds that granular, relatable data visualizations and informed use of colors are effective in communicating about energy consumption. Furthermore, artists were positive towards “feeling bad” in the process of becoming aware of their impacts, and called for systems that could provide them with low-energy settings during exploratory stages of the artistic process.

Keywords
Sustainability of Generative AI, Sustainability Reflection Tools, Explainable Sustainability
National Category
Human Computer Interaction
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-327918 (URN)
Conference
ICCC International Conference of Computational Creativity, Waterloo, Canada, 19–23 June 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 2020.0102
Note

QC 20231122

Available from: 2023-06-01. Created: 2023-06-01. Last updated: 2023-11-22. Bibliographically approved.
Jääskeläinen, P. (2023). On Creative-AI, Imaginaries, and Sustainability. In: Harald Pechlaner, Michael de Rachewiltz, Maximilian Walder, Elisa Innerhofer (Eds.), Shaping the Future: Sustainability and Technology at the Crossroads of Arts and Science. Eurac Research Center for Advanced Studies, Graffeg.
2023 (English). In: Shaping the Future: Sustainability and Technology at the Crossroads of Arts and Science / [ed] Harald Pechlaner, Michael de Rachewiltz, Maximilian Walder, Elisa Innerhofer, Eurac Research Center for Advanced Studies, Graffeg, 2023. Chapter in book (Other (popular science, discussion, etc.))
Place, publisher, year, edition, pages
Eurac Research Center for Advanced Studies, Graffeg, 2023
National Category
Media Studies
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-328138 (URN)
Note

QC 20230612

Available from: 2023-06-06. Created: 2023-06-06. Last updated: 2023-06-12. Bibliographically approved.
Jääskeläinen, P. (2023). Speculation and Fiction in Exploring Values and Ethics in Creative-AI Technologies. Paper presented at Disruptive Imaginations – Joint Annual Conference of the Science Fiction Research Association (SFRA) and the Gesellschaft für Fantastikforschung (GfF).
2023 (English). Conference paper, Oral presentation only (Refereed)
Abstract [en]

I see speculation and imaginaries as interesting objects for examining our values, norms, and onto-epistemological orientations: they have the capacity both to actively transform our thinking and future-making and to provide rich material for reflecting on our societal and individual situated perspectives and the tensions within them. My work is directed towards addressing structures of oppression: gendering, racialization, and disregard for non-humans (which also ties to the environmental crisis that serves as a gloomy backdrop of our times), in the situated context of emerging AI art technologies. My work has included speculative first-person and workshop projects to explore probable, possible, and alternative pathways for AI Art technologies. In my presentation, I will describe my recent research related to this. In the first work, through first-person speculation I imagined AI Art tools that would incorporate a post-humanist and feminist care ethics orientation, exposing tensions in how care for non-humans is absent from the artifacts/technologies and from people's mental landscapes. For the second study, I organized speculative sketching workshops in which artists and engineers imagined future AI art tools for their own practice and discussed critical considerations for these imaginary tools. I then analyzed these imaginary tools from the perspective of what values and labor arrangements are embedded in them. A third research project that I am currently working on focuses on imaginary research abstracts: organizing workshops in which scientists (computational creativity scholars) write fictional future research paper abstracts. These abstracts are then critically discussed and analyzed from the perspective of the values, ideologies, ontologies, and epistemologies embedded in them. To conclude, I see speculation and imaginaries as interesting objects for examining our situated orientations, directing my work towards addressing structures of oppression: gendering, racialization, and disregard for non-humans.

National Category
Media Studies
Research subject
Art, Technology and Design
Identifiers
urn:nbn:se:kth:diva-337721 (URN)
Conference
Disruptive Imaginations – Joint Annual Conference of the Science Fiction Research Association (SFRA) and the Gesellschaft für Fantastikforschung (GfF)
Funder
Marianne and Marcus Wallenberg Foundation, 2020.0102
Note

QC 20231009

Available from: 2023-10-06. Created: 2023-10-06. Last updated: 2023-10-09. Bibliographically approved.