Service Design (MA)

Melanie Glöckler

Melanie completed her BA in Industrial Design at Burg Giebichenstein in Germany and studied Design Thinking at the HPI d.school before joining the RCA Service Design program in 2019.

Throughout her education, she has had a strong interest in materials and technologies. Fascinated by how these advancements have shaped the human environment and humanity itself, her work explores their intersections, pushes boundaries, questions current approaches, and imagines multiple futures with and through technology.

On a professional level, she interned in automobility at BMW and worked in design education before moving to the RCA in London on a DAAD scholarship. Her work has been exhibited in museums and at design festivals in Germany, Poland, Belgium, Italy, China and Cuba.



The unstoppable desire to understand the world a little more is what drives humanity.

Understanding something means, to some extent, being able to make use of it and to transform it. But altering something for the sake of human ‘need’ also has an impact on the entire system in return. It is like starting a domino effect.

With technological advancements such as Artificial Intelligence or Biotechnology, we are at a point where we constantly face decisions that are deeply ethical and will shape humanity. But it is in our hands to decide what happens with these tools, and with our future. We therefore need to make informed decisions from a holistic, multi-angled point of view: understanding ecosystems, asking the right questions, and questioning the status quo in order to create a responsible and inclusive future.

The future of healthcare will be technology-driven at its core, and data is the ‘material’ that drives this transformation towards higher-quality care at scale.

Datagym is a course that educates healthcare staff across the board to understand why data is important and how they can use it to improve healthcare. This is done through a collaborative, multidisciplinary, experiential learning approach in which learners work on real-world projects and expand their horizons.


Context:

Healthcare produces data daily in the form of surveys, lab test results, wearables, patient data and more. This data can be used in care, e.g. to monitor patients remotely, diagnose diseases and facilitate internal clinical processes that allow more seamless and faster planning.

The potential of data is enormous, but its use is far from efficient!

At the moment, many healthcare systems are collecting vast amounts of data. The challenge is that the healthcare workforce is not equipped to handle it or make good use of it, and this lack of skills is not sufficiently covered by existing education. All of this impedes the transformation needed to provide quality care at scale in the future.

The skills needed

To enable the workforce across healthcare, meaning clinicians, technologists and administrative staff, to improve and innovate using data as building blocks, they need to be data literate: able to read, analyse and interpret data at a level that allows them to collaborate with professionals from other disciplines within healthcare and to understand each other's professions on a foundational level. This, together with the ability to deal with ambiguity where a problem has no clear solution, is an essential skill set. Datagym therefore aims to develop the critical, technical, creative and human-centred skills needed to understand and work with data as part of professional practice.


The course

Over the course of three months, participants will be led through the Datagym learning approach while still being able to pursue their jobs. The course is based on collaboration, multidisciplinarity, hands-on learning, and mentorship.

1 Participants will start with an initial self-assessment that identifies which areas they need to improve, and will be given the chance to take a foundational preparatory course before jumping into the main project.

2 They will then be placed in multidisciplinary teams and follow the double-diamond approach in facilitated workshops, where they identify and define an existing problem from their work environment using design methods. A two-day ideation session will get them started on first ideas and quick prototypes of potential solutions.

3 In the final two months, they will be provided with online lectures and work more independently, supported by weekly mentoring, learning as they test, fail, iterate and refine their idea together in their teams.

1 The Origin Of Impact
2 Looking Beyond The Known
3 AI Impact Assessment For Wider Social Implications
4 Phase 1
5 Benefits Phase 1+2
6 Certified by users

Project Description

With the ultimate goal of developing responsible AI products and services, FIAAI is a two-stage AI impact assessment that helps companies work iteratively towards the ethical implementation of emerging technologies, by means of governmental frameworks and public participation in an impact-assessment game.


Challenge

AI (Artificial Intelligence) is increasingly, though invisibly and silently, part of our everyday lives. “Is there a need to explain AI? You probably don’t care as long as AI systems do a good job.” But what if they don’t? Unintended, negative, short- or long-term impacts can have consequences of varying severity for people, their direct and indirect surroundings, the providing companies, and the environment. Examples include a wrong diagnosis in healthcare, or a biased AI that selects only certain kinds of people in an application process. So why explain impact only after something has gone wrong, if we could avoid it happening in the first place? Where does impact come from, and how is it assessed at the moment? Tracing this back leads to tech companies and to governmental frameworks that aim to guide product development.

Looking Beyond The Known

But governmental frameworks are just one side of the coin. Insights showed that the people in charge of implementing policy into products and services often feel insecure: they struggle with the usability of policy text. Timing, scope, structure, content, language and embodiment are all reasons why policy may not be effectively translated into something actionable for companies. And with regard to a more socio-technical impact assessment, interviewees found it hard to look beyond known risks, since frameworks such as the DPIA (Data Protection Impact Assessment) focus strongly on data and provide no examples. The question is therefore: how can companies be enabled to carry out a socio-technical AI impact assessment informed by actionable policy?

FIAAI

FIAAI (Foresight Impact Assessment for AI) is a two-stage tool provided by regulators for companies. It consists of a platform for impact self-assessment and a game in which the public tests wider social implications, in order to help companies develop responsible AI products.

In the first phase, companies are given access to a government platform where they answer questions about their AI models. In this way they can self-assess whether their AI product would be categorised as a high- or low-risk model, and policy recommendations can be made in a more targeted way.

Companies assessed as high-risk would move on to phase two, where they would upload their models into a digital twin: a testing environment in the form of a game played by the public. People would select AI models as challenges to be gamed and give feedback on the impacts they experience.

Phase 1+2

The first phase would be a set of questions about the intended model. These ask for general information about the model and its purpose, the countries in which it is to be deployed, the data involved, and any policy already applied in its development. The answers would then select appropriate policy recommendations and provide the company with relevant information.
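The questionnaire-to-recommendation flow described above can be pictured as a simple decision rule. The sketch below is purely illustrative: FIAAI is a design concept, so the question names, scoring weights, high-risk threshold, and recommendation texts are all hypothetical assumptions, not part of the actual proposal.

```python
# Illustrative sketch of the phase-1 self-assessment logic.
# All question keys, weights, and thresholds are hypothetical.

HIGH_RISK_DOMAINS = {"healthcare", "hiring", "credit", "policing"}

def assess_model(answers: dict) -> dict:
    """Categorise an AI model as high or low risk from questionnaire
    answers and attach matching policy recommendations."""
    score = 0
    if answers.get("domain") in HIGH_RISK_DOMAINS:
        score += 2
    if answers.get("uses_personal_data"):
        score += 1
    if answers.get("automated_decisions_affect_people"):
        score += 1

    risk = "high" if score >= 2 else "low"

    # Recommendations are keyed to individual answers, so they can be
    # targeted rather than generic (the point made in the text above).
    recommendations = []
    if answers.get("uses_personal_data"):
        recommendations.append("Complete a DPIA before deployment.")
    if risk == "high":
        recommendations.append("Proceed to phase 2: public impact-assessment game.")

    return {"risk": risk, "recommendations": recommendations}

result = assess_model({
    "domain": "hiring",
    "uses_personal_data": True,
    "automated_decisions_affect_people": True,
})
print(result["risk"])  # high
```

The design point the sketch tries to capture is that each answer maps to both a risk contribution and a specific recommendation, so the output a company receives is tailored to what it declared rather than a one-size-fits-all checklist.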

The second phase gives people the choice between creating scenarios, playing scenarios, and reviewing other players' assessments to double-check responsible gaming.

At the start, every challenge would explain the model's purpose and the data used for its features. This would have a strong educational function for the people playing the game.

The gathered feedback would inform the company's development iteratively until the AI model is approved through the game.

Certified By Users

Every AI model that passes FIAAI testing receives a seal and is listed, with its version, on the FIAAI website.

This listing would be publicly accessible online and serve as a seal of quality.
