Global Innovation Design (MA/MSc)

Harika Adivikolanu

I am a designer and developer focused on the future of human experiences. I use a nonintrusive design approach in multimodal interaction systems to push the boundaries of emerging and immersive technologies.


Experience:

I currently work as a designer for Long Distance, a performance fashion company in L.A. focused on wearable technology, wellness, and longevity.

Before my Master’s, I worked as a software application developer at Workday, a FinTech company in California, and as a freelancer in UX Design. 

I recently helped design and produce The Future Happened, an online exhibition for the Museum of Design Atlanta, and authored an education initiative to empower teens in their creative expression.


Education:

MA/MSc Global Innovation Design, Royal College of Art & Imperial College London

University Scholars Program, Study Abroad, National University of Singapore

BSc Computer Information Systems, Barrett Honors College, Arizona State University

Minor in Digital Art | Minor in Fine Art | Certification in International Business


Guest Speaker:

Women’s Leadership Conference in Southern Oregon (2018)

Lesbians Who Tech & Allies (2018)

Grace Hopper Conference of Women in Computing (2017)


I design for the future of immersion to remove barriers between people and technology by designing intuitive and nonintrusive interaction systems.

As disparate technologies such as artificial intelligence, voice interfaces, smartwatches, and augmented reality converge, they present new challenges in our relationship with machines. Through GID, I have researched and applied cutting-edge technologies and practices to design multimodal interaction systems, which are essential to addressing these emerging challenges.

I believe in designing multisensorial solutions to improve and maintain presence, creating technology that becomes a more ambient extension of the human. My work is contextualized within fashion, wearable technology, augmented environments, and experience design.

My practice focuses on designing interventions for delightful and seamless experiences that enhance connectivity, introspection, and well-being.

Encounter

Encounter has garnered validation and interest from industry experts across the fields of wearable technology, voice user interface, machine learning, and color psychology, from leading companies including Google and Pear Sports. 

Please get in touch for further details.


— Encounter Technology Vision
— Encounter Concept Vision

The future of personalized immersion will enable humans to be augmented and empowered to make time-efficient, customized decisions about how they engage with their data and their environments in their cities.


Today, activity and purchase decisions in local and international travel begin with an Instagram post. Tech companies like Pinterest, Instagram and LinkedIn have reported a 276 percent increase in small-town travel, with an emphasis on people wanting more unexpected destinations, local knowledge, and eco-friendly means.


Could we address this desire in the day-to-day, local context by creating serendipitous moments closer to home? Most people have found themselves not knowing how to fill the time between planned events in their calendar, which can lead to precious minutes lost to searching, decision-making, and wayfinding on a smartphone.


Encounter is an AI-driven augmented reality experience that helps people discover serendipitous experiences and geotag audio memories in their cities, leveraging their digital information to enhance their physical journeys through voice assistance.


Encounter’s geotagged audio connects users with a rich tapestry of lived experience in their physical environment for a timeless connection to loved ones.


Click here to see the audio-visual prototype.

— Encounter is an AI-driven digital assistant that augments your journey with serendipitous experiences and geotagged audio. The system comprises a hearable device, a smartwatch, and the Encounter application.
— Encounter works through a customized hearable device and a smartwatch application to provide information on physical experiences near you, along with walking time.
— Modes of Encounter allow users to receive passive haptic feedback for encounters or active feedback to engage with the voice assistant.
— Encounter's Visual Design provides glanceable feedback using physical and digital data, specifically, personal data, time and GPS location.
— Encounter's bone conduction adhesive hearable was designed using flexible electronics to feel like a second skin.
Encounter Audio-Visual Interaction — When the Encounter voice assistant is asked a question, it pushes only the information people need directly to their watch.
Encounter Geo-located Audio — Encounter's geolocated audio allows friends and family to send audio memories into physical locations for users to discover on their journey.
Encounter Spatial Audio — Encounter's spatial audio navigation uses a virtual corridor to guide people to a destination and can be prompted through voice.

Users begin the journey by wearing a custom hearable that connects to a smartwatch application. Encounter has three different layers: the first layer reflects your routine or previously tried encounters, the second layer recommends encounters based on your social media and manually entered data, and the third layer creates serendipitous encounters, such as the perfect sunset view at exactly the right time.
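The three-layer model above can be read as a ranking over candidate encounters. The sketch below is purely illustrative, not Encounter's actual implementation; the candidate names, scores, and selection logic are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    layer: int    # 1 = routine/tried, 2 = recommended, 3 = serendipitous
    score: float  # hypothetical relevance from personal and contextual data

def pick_encounters(candidates, per_layer=1):
    """Return the top-scoring candidate(s) from each of the three layers."""
    picks = []
    for layer in (1, 2, 3):
        ranked = sorted((c for c in candidates if c.layer == layer),
                        key=lambda c: c.score, reverse=True)
        picks.extend(ranked[:per_layer])
    return picks

candidates = [
    Candidate("morning coffee walk", 1, 0.9),
    Candidate("pop-up gallery from your feed", 2, 0.7),
    Candidate("street market you saved", 2, 0.8),
    Candidate("sunset viewpoint, 12 min away", 3, 0.95),
]
print([c.name for c in pick_encounters(candidates)])
```

In this sketch, surfacing one encounter per layer keeps the routine, recommended, and serendipitous suggestions distinct rather than letting one layer crowd out the others.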

Users can record and leave geotagged audio files for friends and family through their phone or watch, creating opportunities to revisit places through new perspectives.
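A geotagged audio memory can be modelled as a record with coordinates that is "discovered" when a walker comes within a small radius of the spot. The sketch below is a hypothetical illustration using the standard haversine formula; the data model and field names are assumptions, not Encounter's actual schema:

```python
import math
from dataclasses import dataclass

@dataclass
class AudioMemory:
    sender: str
    lat: float
    lon: float
    clip_url: str  # hypothetical reference to the recorded audio

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres (haversine)."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_memories(memories, lat, lon, radius_m=50):
    """Memories a walker would discover within radius_m of their position."""
    return [m for m in memories
            if distance_m(m.lat, m.lon, lat, lon) <= radius_m]
```

A small discovery radius (tens of metres) keeps each memory tied to the specific place where it was left.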

Users can effortlessly navigate a city using a voice assistant designed around the needs of mobility, as personalized serendipitous encounters extend and enhance their physical outings into meaningful experiences.
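One common way to realise a spatial-audio "virtual corridor" like Encounter's is to pan the guiding sound toward the target bearing relative to the walker's current heading. This is a hedged sketch of that general technique, not Encounter's actual implementation:

```python
import math

def stereo_pan(user_heading_deg, target_bearing_deg):
    """Pan value in [-1, 1]: -1 = fully left, +1 = fully right, 0 = ahead.

    Derived from the signed angle between the walker's heading and the
    bearing to the destination, so the cue drifts to one ear when the
    walker strays from the corridor.
    """
    # Normalise the angular difference into [-180, 180)
    diff = (target_bearing_deg - user_heading_deg + 180) % 360 - 180
    return math.sin(math.radians(diff))
```

For example, a destination 90 degrees to the walker's right pans the audio fully to the right ear, and the cue centres again as they turn toward it.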

Encounter demonstrates the potential of personalized immersion to save precious time, replacing planning with more meaningful, customized experiences. It presents a vision of the future of X Reality (XR) and Human-Computer Interaction (HCI).

User & Expert Engagement — Surveys were conducted with 67 participants and synthesized using sentiment and keyword analysis software.
— A speculative design approach was initially used to scope the technology and context to design each component of the multimodal design systems. The components were individually validated by industry experts and users before being tested in a consolidated prototype.
— Visual designs were created using secondary research on glanceable notifications and color theory expert validation.
— An immersive walk and paper prototypes tested the overall interaction system of the multimodal design.
— The voice interface design was tested through stakeholder engagement feedback and run through Google Assistant.
— The hearable design process was informed by secondary research, participatory design and expert interviews.
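A minimal example of the keyword-analysis step described above, assuming simple frequency counts over free-text survey responses (the actual analysis software is unspecified; the stopword list and sample responses here are illustrative):

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real analysis tools use larger ones.
STOPWORDS = {"the", "a", "an", "i", "to", "and", "of", "my", "it", "is", "in"}

def keyword_counts(responses, top_n=5):
    """Count the most frequent non-stopword terms across free-text responses."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(top_n)

responses = [
    "I waste time deciding where to go",
    "Deciding between places takes too much time",
]
print(keyword_counts(responses, top_n=2))
```

Even this simple counting surfaces recurring themes (here, "time" and "deciding") that sentiment analysis can then qualify as positive or negative.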

The research behind Encounter focuses on three key concepts: wearable technology for more natural digital extension of the human body; ambient information design to enhance environmental presence; and the use of voice assistance in personalized immersion.

The design process of Encounter used a methodology driven by user research and stakeholder feedback in the testing of the multimodal design system.


— Footsteps Wearable
— What would it be like to remove phone screens in a museum space where people want to introspect but also want to capture their favorite moments?
— Locking the phone serves as a physical reminder that visitors no longer need it for the exhibition experience that follows. However, for emergencies, they can return to the lobby to use their phones.
— Each wristband is unique to the user. At the ticket counter, a wristband will be scanned under an individual’s account so that their personal experience and artwork can be synced to the integrated phone and web app.
— Footsteps System Infrastructure
— After the visit, people can access a personalized collection of their exhibition visits and interactive captures of artwork which tell a story of their curiosity and interests.
— Footsteps Features
— Footsteps Research

Footsteps is an exploration of space and how we can decouple the human experience from our over-dependence on screens.

This reimagined museum experience allows users to practice being more present in the moment by replacing their phone interactions with a non-intrusive wearable device. The ultimate hope is to assist people in becoming more self-aware of their movements, habits, interests, and behaviors.

The Experience:

The wristband acts as a medium for cultivating attention and awareness in how people might move through an exhibition space as new or returning members. The wristband does this silently and unobtrusively, activating only when necessary.

At the end, visitors’ journeys are accessible online from the comfort of their homes where they can view a personalized exhibition collection and see changes over multiple visits, compare paths with friends and revisit artwork as they would a photo.

Museums can make this more immersive by adding elements of ambiance into interactive 360-degree images of artwork, such as:

  • natural lighting from windows
  • exhibit lighting
  • the sound of crowds
  • the curated exhibit sounds

These elements can change based on museum layout, outdoor events, wildlife sites, life-size animated models, and other special requirements.

Designing and adapting environments not only for humans but also for the technology we carry can help us create digital interactivity in unobtrusive ways, aiding mental and physical presence and overall well-being.


Synesthesia is a condition in which one sense is simultaneously perceived through one or more additional senses.

For the BST Hyde Park festival, we hoped to give concertgoers a multisensorial experience where they can explore music through taste. This was an attempt to break away from the algorithm-based music distribution systems widely used today. 

The Experience:

We designed a spoon with a sensor that helps the user taste the specific music they are listening to and alter it to their pleasure. We designed four interactive shakers to change the flavor of music. At the end of the experience, we generated taste receipts to help the user understand their taste and expand their music "palate".

Tastebuds is a multisensorial dining experience that allows users to listen to, adjust, and ultimately “taste” music through a speculative design that breaks the physical boundaries of how we process and enjoy music.

This project invites more thought about today's algorithm-driven music consumption and provides a livelier, more delightful experience through engaging and thought-provoking interactions.

This project was done in collaboration with:

  • Ziqq Rafit
  • Ayana Enomoto-Hurst
  • Serra Umut



Showcased:

Museum of Design Atlanta, The Future Happened: Designing the Future of Music (2021)

San Francisco Design Week, Designing the Future of Music Presents: The look, sound and feel of expression, artistry, and connection. (2020)