Safety Nudges
Overview
As AI systems have become widely commercialized and integrated into consumer products, financial and competitive pressures may incentivize companies to promote AI products despite known safety risks or societal harms. Recent incidents and research have raised concerns about the societal impacts of AI use, including overreliance on automated systems, excessive use, persuasive influence, social isolation, and user trust reinforced through sycophantic or overly agreeable responses. These concerns can disproportionately affect vulnerable populations, such as older adults and children.
Because the incentives of commercial AI product developers may not always align with broader societal interests, independent oversight mechanisms and user-facing tools are needed to promote awareness of potential risks.
Designed by AI safety researchers at Carnegie Mellon University, Safety Nudges provides a low-friction, real-time auditing interface for chatbot conversations, restoring agency and awareness to end users. It reviews each conversation turn on ChatGPT and Claude by sending it to an external LLM; a principled, comprehensive taxonomy of harms is used to flag common pitfalls such as flattery, overconfidence, and anthropomorphization. Nudges are integrated gracefully into the chat interface and can be paused at any time.
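The review flow described above can be sketched roughly as follows. This is an illustrative sketch only, not the extension's actual code: the category names, the `toNudges` function, and the verdict shape are all assumptions about how an external LLM judge's output might be mapped onto in-chat nudges.

```javascript
// Hypothetical sketch of the per-turn review flow (all names are
// assumptions, not Safety Nudges' real implementation). Each conversation
// turn is sent to an external LLM judge; flags that fall within the harm
// taxonomy are turned into nudges shown in the chat interface.

const HARM_TAXONOMY = ["flattery", "overconfidence", "anthropomorphization"];

// Convert the judge's verdict into user-facing nudges, keeping only
// categories that belong to the taxonomy.
function toNudges(judgeVerdict) {
  return judgeVerdict.flags
    .filter((f) => HARM_TAXONOMY.includes(f.category))
    .map((f) => ({ category: f.category, note: f.reason }));
}

// Example verdict, as an external LLM review might return it.
const verdict = {
  flags: [
    { category: "flattery", reason: "Response opens with unprompted praise." },
    { category: "other", reason: "Outside the taxonomy, so dropped." },
  ],
};

console.log(toNudges(verdict));
```

The key design point is that the judging model is external to the chatbot being audited, so the nudges are not subject to the same commercial incentives as the product under review.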
NOTE: Safety Nudges is currently in an alpha release, with free-access codes granted to selected users. If you do not have a code, you can also provide your own OpenRouter API key. We encourage you to submit feedback to help us improve!