Global Innovation Design (MA/MSc)

Jingyi Li

I’m a human-computer interaction designer and researcher whose work promotes digital inclusion.


Education:

MA/MSc Global Innovation Design (RCA & ICL) - Distinction

BA Internet and New Media (Sun Yat-sen University) - First Class Honours


Experience:

I worked as an interaction designer at NetEase Games before starting my master’s.


Technological innovations have made life easier for many. However, people with capability limitations, whether permanent, temporary, or situational, have disabilities thrust upon them by inconsiderate and inadequate interaction design.


The overarching theme of my practice and research has been designing inclusive interaction languages to enable the broadest possible audience to benefit from novel digital technologies.


I have a keen interest in exploring how different modalities and interaction metaphors can define and shape the inclusivity of digital technologies. By integrating inclusivity as a fundamental guideline, we’ll be able to ensure accessible and human-centred experiences, revolutionising the power and reach of emerging technologies at a crucial point of near maturity.


Please do get in touch if you fancy a chat.

Moception short film

Text entry and editing have always been cumbersome for visually impaired users. Current solutions for eyes-free text entry demand high cognitive and motor abilities. Mainstream touchscreen-based solutions usually involve both hands, making them difficult to use for visually impaired users with a cane in one hand.


Moception is a single-handed, eyes-free text entry and editing method that uses a hybrid of speech and mid-air gesture input. Initiated by intuitive gestures, speech-to-text is used to enter text content and to correct unwanted text with an ‘audio patching’ technique. Mid-air gestures, detected by a wrist-mounted wearable, are used to navigate through the text, locate the insertion point, and perform discrete commands.
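
To make the division of labour concrete, here is a minimal sketch of how such a gesture-to-action dispatcher might look, in Python; the event names, recogniser, and editor interface are hypothetical stand-ins, not the Unity prototype’s actual code.

```python
# Minimal sketch of Moception's hybrid speech + gesture dispatch
# (hypothetical event names and editor interface; the real prototype
# was built in Unity with a Leap Motion Controller).

EDITOR_ACTIONS = {
    "fist_close":   "start_speech_to_text",  # dictate new text content
    "fist_open":    "stop_speech_to_text",
    "pinch_close":  "start_audio_patch",     # dictate replacement text
    "pinch_open":   "apply_audio_patch",     # cover the unwanted text
    "draw_square":  "speak_all_text",        # full audio review
    "draw_hline":   "speak_current_line",    # line-by-line review
    "swipe_right":  "next_line",
    "wrist_rotate": "move_insertion_point",  # continuous caret control
}

def handle_gesture(event, editor):
    """Dispatch a recognised gesture event to the matching editor action."""
    action = EDITOR_ACTIONS.get(event.name)
    if action is not None:
        getattr(editor, action)(event)
```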


Moception applies principles of inclusive design and extreme-character methodologies. By addressing the needs of extreme users (visually impaired users) in extreme situations (with a cane in one hand), Moception provides a solution that could also be applied in many other contexts to benefit user groups beyond visually impaired users (e.g., elderly users or everyone with one hand occupied).


Special thanks to:

Wei Lin, staff of Tree of Life Disability Innovation Centre

Moception Wrist-worn Wearable
Schematic of Wearable
Shake your arm twice to get started
Make a fist to start speech-to-text input, and release it when finished
Draw a square to review all text content (audio speak-out)
Draw a horizontal line to review it line by line, and swipe right to go next
Rotate your wrist to locate the insertion point (audio speak-out)
Make a pinch to input audio-patch, release it to cover the unwanted text

Wrist-worn Wearable:

Moception presents a wrist-worn wearable for gesture recognition and audio/haptic feedback. Mid-air gestures are detected by an embedded accelerometer. Audio feedback is given through bone conduction headphones, and haptic feedback through an embedded vibration module. With two stretchable junctions, the wristband enables eyes-free interaction.
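
As a rough illustration of accelerometer-based gesture detection, here is a sketch of how the ‘shake your arm twice’ wake gesture might be spotted; the sampling rate, threshold, and reading interface are all assumptions.

```python
from collections import deque

SAMPLE_HZ = 100        # hypothetical accelerometer sampling rate
SHAKE_THRESHOLD = 2.5  # magnitude (in g) counted as a shake peak
WINDOW_SECONDS = 1.0   # two peaks within this window wake the device

def detect_double_shake(magnitudes):
    """Yield True whenever two shake peaks fall within the time window.

    `magnitudes` is a stream of accelerometer magnitude samples in g,
    e.g. sqrt(ax**2 + ay**2 + az**2) from the wristband's accelerometer.
    """
    window = int(SAMPLE_HZ * WINDOW_SECONDS)
    peaks = deque()
    for t, mag in enumerate(magnitudes):
        if mag > SHAKE_THRESHOLD:
            peaks.append(t)
        while peaks and t - peaks[0] > window:
            peaks.popleft()
        yield len(peaks) >= 2
```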


Mid-air Gestures:

Wrist-rotation:

Even with our eyes closed, we can still tell the positions of different body parts confidently. Taking advantage of proprioception, Moception maps the spatial layout of a line of text to the spherical space under our palm, which improves visually impaired users’ awareness and perception of the text layout. By rotating their wrist (as if rolling a hand over a football), users can control the insertion point and navigate through a line of text.
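
A minimal sketch of that mapping, assuming the wearable reports a wrist roll angle in degrees and the comfortable roll range is roughly ±60° (both assumptions):

```python
def insertion_point(roll_deg, line, min_deg=-60.0, max_deg=60.0):
    """Map a wrist roll angle to a caret index within a line of text.

    The comfortable roll range [min_deg, max_deg] is spread linearly
    over the characters of `line`, so sweeping the wrist moves the
    insertion point from the start of the line to the end.
    """
    clamped = min(max(roll_deg, min_deg), max_deg)
    fraction = (clamped - min_deg) / (max_deg - min_deg)
    return round(fraction * len(line))
```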

Gestures for discrete commands:

Low-key, intuitive gestures such as swiping and drawing a line were defined by 10 visually impaired participants in a gesture elicitation study, following the participatory design paradigm proposed by Wobbrock et al. Each gesture was mapped to a basic command of the text entry system.

Literature review
User research with visually impaired users at Tree of Life Disability Innovation Centre
Hardware Design Development
Prototypes used in modality study
Final working prototype built in Unity with a Leap Motion Controller
Functional validation using the final working prototype

Moception aims to provide a one-handed eyes-free text entry and editing method that is intuitive, effortless, and efficient to use. Three user studies were conducted to understand and address the challenges in a holistic way. 


User study 1 combined semi-structured interviews with 4 visually impaired participants and an online survey of 21 participants to define the challenges. Difficulty orienting when locating the insertion point and difficulty correcting erroneous text were identified as the two challenges Moception would tackle.

User study 2 was a participatory workshop in which four prototypes using different modalities (head, arm, finger, and wrist), along with the baseline method, were tested with 4 users, who graded them after their sessions. Wrist rotation for text review and audio patching for text correction were the options that enabled the best user performance and provided the best user experience.

User study 3 was a gesture elicitation study where 140 gestures for 9 commands were defined by 10 visually impaired participants. The final gesture set was generated based on the participatory design paradigm (Wobbrock et al., 2009). 
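
For reference, Wobbrock et al.’s agreement score groups identical proposals for a command and sums the squared proportion of each group; a small sketch in Python (the example counts are hypothetical, not the study’s data):

```python
def agreement_score(proposals):
    """Wobbrock et al.'s agreement score for a single command.

    `proposals` maps each distinct gesture to the number of participants
    who proposed it for this command, e.g. {"swipe": 6, "line": 3, "tap": 1}.
    Returns a value in (0, 1]; 1.0 means all participants agreed.
    """
    total = sum(proposals.values())
    return sum((count / total) ** 2 for count in proposals.values())

# Example: 6 of 10 participants propose the same gesture for a command:
# agreement_score({"swipe": 6, "line": 3, "tap": 1}) -> 0.46
```

Gestures with higher agreement across participants are the natural candidates for the final set.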


The final working prototype was built in Unity with a Leap Motion Controller for gesture recognition and tested with 4 visually impaired participants. The average completion time of the ‘Enter-review-edit’ task was reduced by 53.2% compared with the current speech-based input method on an iPhone. Moception was also perceived to be more effortless and intuitive to use.

--

Reimagination of inclusive human interactions in smart kitchens

Mid-air gesture interface: Select control areas (cupboard/appliances)
Mid-air gesture interface: Select appliances
Mid-air gesture interface: Adjust parameters
Mid-air gesture interface: Move cupboards
Superpower workshops: Investigating modalities and interaction metaphors
Information architecture
Gesture experiments with Leap Motion Controller
User testing with Leap Motion Controller and a projector & Exclusion calculation

Novina is a mid-air gesture interface that consolidates the controls of all our kitchen appliances, together with a redesigned cupboard whose movable inner layer is controlled by the same gesture interface. With simple, intuitive gestures such as air tap, drag, and move up on a near-body mid-air interface, users can operate all of their kitchen appliances and move the cupboard’s inner layer to the desired position when they need to get a plate.
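
To make the interaction flow concrete, here is a minimal sketch of the selection hierarchy implied by the interface captions above; the state names and method signatures are hypothetical, not Novina’s actual implementation.

```python
# Hypothetical sketch of Novina's gesture-driven selection hierarchy.

class NovinaInterface:
    def __init__(self):
        self.area = None       # "cupboard" or "appliances"
        self.appliance = None  # e.g. "oven", "hob"

    def air_tap(self, target):
        """Air tap selects a control area first, then an appliance in it."""
        if self.area is None:
            self.area = target
        elif self.area == "appliances":
            self.appliance = target

    def drag(self, delta):
        """Drag adjusts the selected appliance's active parameter."""
        if self.appliance is not None:
            print(f"adjust {self.appliance} parameter by {delta}")

    def move_up(self, distance_cm):
        """Move-up raises the cupboard's movable inner layer."""
        if self.area == "cupboard":
            print(f"raise inner layer by {distance_cm} cm")
```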


Gesture interactions in the Novina system were designed from behavioural insights on modalities and interaction metaphors acquired through experiments. Direct manipulation by hand and world-in-hand spatial navigation were the two metaphors that proved intuitive and inclusive in a kitchen setting, significantly reducing cognitive load and interaction burden.


The interactive prototype of Novina was tested with a Leap Motion gesture controller. The exclusion caused by Novina and by the current kitchen setup was calculated with the Exclusion Calculator developed by the University of Cambridge. Based on the results, Novina is friendlier to users with different levels of motor, sensory, and cognitive capability.
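
The underlying idea of an exclusion calculation is to estimate the share of the population whose capabilities fall below a design’s demands. A minimal sketch of that idea, with a hypothetical capability dataset and demand scale (not the Cambridge tool’s actual data or API):

```python
def exclusion(population, demands):
    """Estimate the fraction of a population excluded by a design.

    `population` is a list of per-person capability dicts, e.g.
    {"vision": 7, "dexterity": 4, "cognition": 6} on an ordinal scale;
    `demands` maps each capability to the minimum level the design needs.
    A person is excluded if any of their capabilities falls below demand.
    """
    excluded = sum(
        any(person[cap] < level for cap, level in demands.items())
        for person in population
    )
    return excluded / len(population)

# e.g. exclusion(survey_data, {"vision": 3, "dexterity": 5, "cognition": 2})
```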

--

Capturing evidence on a mobile phone could put victims of domestic violence into dangerous situations if they get caught. When all the ‘visible’ interaction modalities are disabled, how do we design a workaround for them?

Using pre-defined trigger words to start capturing audio evidence
Toxicity estimation & what other victims in similar situations did
Disguise mechanism: ‘fake’ and ‘real’ way to log in
Disguise mechanism: changeable app icon and name & connect to other devices
Information architecture of Cepi
Demo of the trigger word function

Cepi is a platform enabling victims of domestic violence to safely collect admissible audio evidence using speech modalities embedded in everyday conversations. Users can start capturing audio evidence or place an emergency call by saying trigger words they define themselves.
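
As an illustration, spotting user-defined trigger words in a live transcript might look like the following sketch; the transcript stream, trigger words, and action callbacks are hypothetical stand-ins for Cepi’s actual speech pipeline.

```python
# Hypothetical sketch: watch a live speech transcript for user-defined
# trigger words and fire the matching action.

TRIGGERS = {
    "marigold": "start_recording",   # innocuous-sounding, user-chosen words
    "lighthouse": "emergency_call",
}

def watch_transcript(words, actions):
    """Scan a stream of transcribed words and invoke actions on triggers.

    `words` is an iterable of lower-cased transcribed words;
    `actions` maps action names to zero-argument callbacks.
    """
    for word in words:
        action = TRIGGERS.get(word)
        if action is not None:
            actions[action]()
```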


To help victims make more informed decisions, Cepi’s algorithms help identify abusive language related to insult, threat, and manipulation, as well as emotional cues indicating violence. Users can also see what other victims whose recordings had similar toxicity did: how many of their peers reported their cases, what percentage obtained a protective order, and so on.
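
A minimal sketch of how per-category scores might be combined into a single toxicity estimate (the categories follow the description above; the weights and scoring functions are hypothetical):

```python
# Hypothetical sketch: combine per-category abuse scores into a single
# toxicity estimate. A real system would use trained classifiers; here
# each scorer is a placeholder callable returning a score in [0, 1].

CATEGORY_WEIGHTS = {"insult": 0.3, "threat": 0.4, "manipulation": 0.3}

def toxicity(transcript, scorers):
    """Weighted average of per-category scores for one audio transcript."""
    return sum(
        weight * scorers[category](transcript)
        for category, weight in CATEGORY_WEIGHTS.items()
    )
```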


Several disguise mechanisms were designed to keep victims safe when abusers constantly check their phones.
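
One of those mechanisms, the ‘fake’ and ‘real’ log-in shown above, could work along these lines; this is a sketch with hypothetical plain-text PINs, where a real implementation would store only salted hashes:

```python
# Hypothetical sketch of the dual log-in disguise: a "fake" PIN opens a
# harmless decoy screen, while the "real" PIN opens the evidence vault.
# A real implementation would store only salted hashes of the PINs.

def log_in(pin, real_pin, fake_pin):
    if pin == real_pin:
        return "evidence_vault"  # recordings, toxicity reports, peer stats
    # Both the fake PIN and wrong guesses land on the decoy, so failed
    # attempts by an abuser reveal nothing about the hidden content.
    return "decoy_screen"
```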


Beyond its current scope, Cepi has the potential to be applied in many other contexts, including toxicity estimation in court systems and evidence collection in cases of sexual harassment and school bullying.