A compassionate chatbot experience that helps individuals navigate complexity and choose a birth control that aligns with their personal goals and values.
FemTech Collaborative at the University of Pittsburgh
Our challenge was to design a mobile application that allows individuals of any gender to learn about, compare, and choose a contraception method.
The client, FemTech Collaborative, has been developing a suite of tools focused on helping people clarify their reproductive health goals, navigate decision-making, and communicate with their providers. We aimed to augment their existing suite of tools with a mobile experience.
A mobile chatbot that helps users navigate personal and complex decisions around reproductive health through compassionate, private conversation.
We began by researching other digital healthcare experiences, including Planned Parenthood and Bedsider. We found existing tools to be overly clinical and prescriptive, even though goals around reproductive health aren't always black and white. Furthermore, resources designed for women don't address the diversity of identities and needs of those who may use contraception for reasons beyond pregnancy prevention.
In search of something less prescriptive, we identified conversational AI as a promising tool for navigating complex, personal decisions in a user-empowered way.
We looked into two existing resources that use chatbot technology for individual healthcare needs. Both chatbots take on clearly non-human personas to make asking for and receiving help feel less judgmental. Both use earnest tones and are extremely transparent about their abilities and limitations as AI.
"Roo" from Planned Parenthood, a sexual health education chatbot for teenagers.
"Woebot," a Cognitive Behavioral Therapy chatbot providing therapeutic tools to individuals.
"How might we empower users to access contraceptive care on their own terms?"
We wanted to understand who would benefit the most from a conversational tool, and what their needs, values, and personal goals around reproduction might be. We created profiles to empathize with a diversity of potential users.
From our user profiles, we mapped out dimensions of values, goals, and motivations to search for opportunities. Targeting users without access to traditional healthcare emerged as a high-impact scenario.
A personable, accessible chatbot can have wide application but be especially supportive to people who feel unsafe in medical settings, face barriers due to stigma, or don't have insurance.
Our early wireframes divided the app into two main sections: the chat, where users can get guided assistance, and explore-and-compare, where users can get oriented to different types of methods on their own.
We conducted user testing with a total of eight users; two took part in quantitative data collection and six shared preferences through informal surveys. We sought to answer the following questions:
What values do our users hold, and how might this tool be relevant to them?
What persona of chatbot would users enjoy interacting with?
What is the most intuitive way to navigate through the app?
We originally designed an abstract identity for the chatbot, assuming users would prefer to have sensitive conversations with a clearly non-human persona. In testing, we found that users across genders preferred to interact with an image of a woman doctor — even knowing it's an AI. We redesigned the chat persona, including the icon and tone of voice, to reflect a warm, direct woman doctor.
Our original design recorded conversations by default, to make revisiting old data more accessible. After talking to our users, we redesigned data collection around an "opt-in" approach across all features: an incognito chat mode, PIN-protected history, and optional intake questions.
Initially, users could navigate between two major sections of the app: a tab to freely explore and compare methods, and a chatbot function. We found that having both functions at the same hierarchy was confusing for users and distracted from the chatbot as the main service of the app. We redesigned the site map to center the user experience on conversation.
Ask Shirley's visual style is warm and professional. Shirley, our chatbot doctor, is named after Shirley Chisholm, the first Black woman elected to Congress and a powerful advocate for women's reproductive rights.
In designing for disruptive technology, it was important that Ask Shirley maintain integrity in centering its users. We determined three core values based on our research that guided our design.
Kind and noninvasive. Makes no assumptions about a user's gender or values.
Private and secure. Transparent about abilities, strengths, and limitations.
Users have control over their personal data, decisions, and access to care.
Natural Language Processing
Break down input sentences to component parts
Respond based on understanding intent
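The two NLP steps above can be sketched in code. This is a hypothetical illustration only: a production chatbot like Ask Shirley would use a trained natural-language-understanding model, while this keyword matcher (with made-up intents and replies) simply shows the flow of breaking input into parts and responding based on the best-matching intent.

```python
import re

# Hypothetical intent table: keyword sets mapped to canned replies.
INTENTS = {
    "side_effects": ({"side", "effects", "symptoms"},
                     "Every method has different side effects. "
                     "Which method are you curious about?"),
    "compare": ({"compare", "versus", "difference"},
                "Sure, we can compare two methods side by side."),
}
FALLBACK = "I'm not sure I understood. Could you rephrase that?"


def tokenize(sentence: str) -> set[str]:
    """Step 1: break the input sentence into lowercase word tokens."""
    return set(re.findall(r"[a-z']+", sentence.lower()))


def respond(sentence: str) -> str:
    """Step 2: pick the intent whose keywords best match the tokens."""
    tokens = tokenize(sentence)
    best_name, best_overlap = None, 0
    for name, (keywords, _reply) in INTENTS.items():
        overlap = len(tokens & keywords)
        if overlap > best_overlap:
            best_name, best_overlap = name, overlap
    return INTENTS[best_name][1] if best_name else FALLBACK
```

The fallback reply reflects the transparency value above: when the bot cannot infer intent, it says so rather than guessing.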
Private, compassionate, and nonjudgmental. Ask Shirley is a conversational health experience for individuals making decisions about contraception.
The opening sequence, an earth-toned screen featuring a ginkgo leaf, grounds the user and sets the stage for a safe, warm experience. Ginkgo trees grow as distinct male and female individuals, yet single trees have been known to change sex, a nod to the app's commitment to gender inclusivity.
Upon first launch, users are prompted to complete intake questions, which are optional and transparent. Data gathered through the onboarding process provides context for the AI to create more personalized and contextualized guidance, but users can use the app with or without providing personal data.
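One way to model this "optional by design" intake is a profile in which every field defaults to empty, so the app behaves identically whether a user shares everything, something, or nothing. The field names below are hypothetical, assuming a simple local data model rather than the project's actual schema.

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class IntakeProfile:
    """Hypothetical intake answers; every field is optional."""
    age: Optional[int] = None
    pronouns: Optional[str] = None
    goals: Optional[str] = None  # e.g. "prevent pregnancy", "manage acne"
    has_insurance: Optional[bool] = None

    def answered_fields(self) -> list[str]:
        """Fields the user chose to share; used to personalize guidance."""
        return [f.name for f in fields(self)
                if getattr(self, f.name) is not None]


# A user may skip the intake entirely...
empty = IntakeProfile()
# ...or answer only what they are comfortable sharing.
partial = IntakeProfile(goals="manage acne")
```

Because personalization keys off `answered_fields()`, guidance degrades gracefully instead of requiring personal data up front.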
1. Toggle between incognito mode off/on
2. Select a suggested pathway
Receive personalized recommendations
Prepare for a doctor's appointment
Explore various topics
3. Type in a question
4. Choose from commonly asked questions
The All Methods tab allows users to freely explore available birth control methods. Tapping a method displays a short summary of key information; from there, users can bring the method into the chat for a conversation or compare it with another method.
Users can access PIN-protected history, including saved conversations, charts, and reports. Anchor points are automatically created at key conversational turning points and can also be created manually.
Impact & Reflections
The client was excited by our exploration of applying AI to healthcare in a personal, values-driven way. From this project, I learned about designing for multi-dimensional user needs and applying emerging technologies to solve human problems.
Because this was a speculative project, we didn't have the opportunity to explore as many aspects of the tool as we would have liked. Moving forward, the next steps for the project would include validating use cases, researching dimensions of user trust, expanding input methods, and off-platform implementation.
We were also left with some bigger questions to tackle in the field of healthcare AI:
Do we trust AI to have sensitive conversations?
What does it mean to take human interaction out of healthcare?