[Case Study 01]
CertAIn: Detecting the factual from the fake
IBM
We won 3rd place and increased users’ confidence levels by 60%
[Project Overview]
CertAIn is an AI-powered browser extension that helps users quickly verify online information during moments of crisis. This project was developed during IBM’s Spark Design Festival, a two-week internal hackathon where IBMers come together to solve real-world problems.
[Challenge]
More than 4 in 5 global online citizens believe they’ve been exposed to misinformation.
So how can IBMers leverage the power of AI to develop solutions that effectively mitigate the global impact of misinformation, ensuring the content we consume is reliable and trustworthy?
[My Role]
Lead Researcher
[Team]
1 UX Designer
1 Researcher
[Contributions]
Research
User testing
Prototyping
[Timeline]
2023
Here’s the gist
With so much content flooding our feeds, how do we know what—or who—to trust?
My research revealed that users wanted validation. They wanted something that would provide them with transparency around sources and claims, especially during uncertain times.
Our team ultimately designed CertAIn: an AI-powered tool that uses heatmaps, claim scores, and source context to help users feel more confident digesting, and even sharing, what they read.
Understanding the people
I interviewed 5 people with two research goals: 1) understand how people consume information online, and 2) more importantly, learn what makes them trust it. Here’s what I asked:
1. Can you walk me through what prompts you to source an article?
2. How do you decide if what you’re reading is reliable?
3. Is there anything that would help you feel more confident in the accuracy?
These conversations revealed an interesting insight:
Though a lot of folks admitted to not fact-checking their information, high-stakes topics like general healthcare questions, or uncertain times like the outbreak of COVID-19, left a lot of people guessing.
So how might we help users establish credibility in their research when the world feels chaotic?

Aditi Raj
Marketing Manager
Age: 35
Location: New York City
[Goal]
Staying informed with news
Sharing accurate news with friends and family
Making smart, informed decisions based on what she reads
[Frustrations]
Overwhelmed with conflicting information online
Finds it hard to verify news, especially during fast-moving events
Click-bait fatigue and lack of transparency
I took a closer look at how others were tackling misinformation
While most focused on B2B and flagging false claims, few offered transparency into why something was considered unreliable.
That gap became our opportunity: to design a solution that didn’t just call out misinformation, but helped people better understand how to evaluate it.
[01] Logically
Uses AI and human fact-checkers to assess the credibility of online content and news.
✅ Also offers analysis of what’s being said, how it’s spreading, and where it’s gaining ground
❌ Geared towards specific industries
[02] Newsguard
Rates news sites based on journalistic standards and source reliability.
✅ Offers detailed “nutrition labels” for each news source, including history, ownership, and reliability.
❌ Doesn’t analyze individual articles
[03] Fact Check
Categorizes media outlets by political bias and factual reporting record.
✅ Provides bias rating and a factual reporting rating.
❌ Lacks article-level detail in its analysis
[04] Meta
Scans posted content and reduces the reach of pieces flagged as false or altered by third-party fact-checkers.
✅ Has potential to flag rapidly
❌ Very little visibility into how AI determines what's “false.”
To help Aditi assess content credibility in a transparent and trustworthy way, we ideated on what that might look like, and I took a deeper dive into the pros and cons of each idea. (Option 3 was our winner!)
[01] ChatBot Assistant
Users can verify links with an AI chatbot
✅ Conversational
❌ Chatbot has to be integrated into every site (or used as separate software)
[02] Search Engine
Users paste links into a dedicated search engine to check credibility
✅ Simple, familiar search experience
❌ Would not be in feed/platform
[03] Heatmap
Highlights sections of article based on credibility + provides scoring
✅ Visual feedback, helps readers scan at a glance
❌ Potential accessibility barriers, and can be visually overwhelming
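To make the heatmap concept concrete, here is a minimal sketch (hypothetical function names and thresholds, not CertAIn’s actual implementation) of how a browser extension’s content script might bucket per-sentence credibility scores into highlight levels and roll them up into an article-level score:

```typescript
type Bucket = "low" | "medium" | "high";

// Hypothetical thresholds; the real cut-offs would come from the scoring rubric.
function bucketForScore(score: number): Bucket {
  if (score < 0 || score > 1) throw new RangeError("score must be in [0, 1]");
  if (score < 0.4) return "low";    // e.g. red highlight
  if (score < 0.7) return "medium"; // e.g. yellow highlight
  return "high";                    // e.g. green highlight
}

// Article-level claim score as the mean of per-sentence scores.
function articleScore(sentenceScores: number[]): number {
  const sum = sentenceScores.reduce((acc, s) => acc + s, 0);
  return sentenceScores.length ? sum / sentenceScores.length : 0;
}

const scores = [0.9, 0.35, 0.62];
console.log(scores.map(bucketForScore)); // ["high", "low", "medium"]
console.log(articleScore(scores).toFixed(2)); // "0.62"
```

A content script could then apply each bucket as a CSS background on the matching sentence, so readers scan credibility at a glance without leaving the page.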
I sketched initial iterations of what this could look like
The main idea in these sketches was to show how users could quickly evaluate the credibility of content using a heatmap, without leaving the page.
I ran user testing with 5 people, and only 40% of users felt they could interpret the data provided
Goals of the test included:
Are the visual cues clear? (e.g., what the colors meant)
Can users interpret the data provided?
Was the overall experience helpful and intuitive?
While most users found the transparency helpful, they wanted more: the justification behind the score, along with a breakdown of the details. I wanted to explore further iterations that surfaced these critical details in a simplified way.
Back to iterating
As I went back to the drawing board, I focused on the side panel. I explored ways the scoring could be 1) broken down and 2) stacked against criteria. Based on the testing, our users wanted more clarity, so I created designs that helped them anchor the score within a “rubric”. The team decided to move forward with option 4 for testing.
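One way to picture the rubric idea is a weighted breakdown, where each criterion contributes to the overall score users see. This is a hypothetical sketch (the criterion names and weights are illustrative, not the rubric we shipped):

```typescript
// Each rubric criterion carries a weight and a score in [0, 1].
interface Criterion {
  name: string;
  weight: number;
  score: number;
}

// Overall claim score = weighted average of the criteria, so the side
// panel can show both the total and the per-criterion "why".
function rubricScore(criteria: Criterion[]): number {
  const totalWeight = criteria.reduce((acc, c) => acc + c.weight, 0);
  const weighted = criteria.reduce((acc, c) => acc + c.weight * c.score, 0);
  return totalWeight ? weighted / totalWeight : 0;
}

const breakdown: Criterion[] = [
  { name: "Source reliability", weight: 0.4, score: 0.8 },
  { name: "Claim verification", weight: 0.4, score: 0.5 },
  { name: "Transparency", weight: 0.2, score: 0.9 },
];
console.log(Math.round(rubricScore(breakdown) * 100)); // 70
```

Surfacing the breakdown alongside the total is what lets users see not just the what, but the why behind each rating.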
Results after I tested again with version 4
I tested the new wireframes using version 4, with a focus on clarity, as our users requested. The goal was to help users feel more confident in interpreting the data and to ensure they understood not just the what, but the why behind each rating. We saw better outcomes after this change:
Our final product
What I learned (after placing in the top 3!)
Simplification is key
Keep asking yourself: is there a way to make this even simpler?
Testing pays off
Testing the wireframes really helped ensure the designs met the needs of users like Aditi.
Focus on the must-haves first
It’s easy to want to include every feature you’re thinking through; I learned to focus on the most critical ones first.
[View Other Projects]
LawHub— Framer Website
Development








