Amurabi

Design services

Paris, Île-de-France · 3,630 followers

Fighting dark patterns and deceptive design. Most Innovative Privacy Project 2022 (IAPP and W@Privacy)

About

Trust and fairness by design: we craft accessible, crystal-clear, fair and empowering information in the AI and digital age. We remove the headache of online legal disclosures and provide award-winning AI explainability, privacy-enhancing design, age-appropriate design and accessibility services. As pioneers of legal design since 2018, we offer a unique combination of neuroscience, plain language, legal, privacy and design expertise. With over 120 projects delivered worldwide, we serve Fortune 500 companies, start-ups and scale-ups in the EU, the UK and the US.

Website
https://amurabi.eu/en/
Industry
Design services
Company size
2-10 employees
Headquarters
Paris, Île-de-France
Type
Civil company/commercial company/other company types
Founded
2018
Specialties
DESIGN, LEGALDESIGN, UXDESIGN, INNOVATION, DESIGNTHINKING, STRATEGY, LEGALTECH, DARK PATTERNS, DECEPTIVE PATTERNS, DECEPTIVE DESIGN, ETHICAL DESIGN, AUTOMATED DETECTION, PLAIN LANGUAGE, COMPLIANCE, R&D LAB, USER TESTING, DIGITAL SERVICES ACT, AI ACT, AI EXPLAINABILITY and ACCESSIBILITY

Locations

Employees at Amurabi

News

  • Amurabi

    💸 FTC's 2023 Wins: $324 Million in Refunds and a Crackdown on Dark Patterns 💸
    Hello world! 👋 Did you know that in 2023 the FTC (Federal Trade Commission) returned a whopping $324 million to consumers? Yep, you heard that right! This was thanks to a range of lawsuits, including several targeting the sneaky tactics known as dark patterns. Let's dive into what went down and how it all worked out.
    🤑 Big Numbers from 2023
    👉 Total refunds: over $324 million
    👉 Direct refunds by the FTC: $137.7 million, reaching 1.4 million people
    👉 Biggest payout: $99.4 million to Vonage customers
    (A quick back-of-envelope average from these figures follows below.)
    Vonage was a huge case because the company used dark patterns to keep charging customers hidden fees, even when those customers tried to cancel their services. The FTC stepped in and got those charges refunded.
    👀 We also made a list of the top 10 cases of dark patterns and deceptive practices, all settled in 2023:
    1️⃣ Triangle Media (2018): tricked people with fake free trials, leading to $8.6 million in refunds.
    2️⃣ Lending Club (2018): lied about fees and loan backing, resulting in $17.5 million in refunds.
    3️⃣ Elite IT Partners (2019): scared people into buying unnecessary tech services, leading to $249,766 in refunds.
    4️⃣ Napleton Auto (2022): added sneaky fees and discriminated against customers, resulting in $9.6 million in refunds.
    5️⃣ Digital Income System (2020): made false promises of high earnings, leading to $559,404 in refunds.
    6️⃣ Avant (2019): misled borrowers with deceptive loan practices, resulting in $3.7 million in refunds.
    7️⃣ Apply Knowledge (2014): ran a bogus work-from-home scheme, leading to $29 million in refunds.
    8️⃣ RevMountain (2017): pulled a subscription scam, leading to $1.1 million in refunds.
    9️⃣ RagingBull.com (2020): used deceptive investment strategies, leading to nearly $3 million in refunds.
    🔟 HomeAdvisor (2022): misled on home improvement leads and subscription costs, leading to $3 million in refunds.
    🤔 Why This Matters
    The FTC's crackdown on these practices shows that the agency is serious about protecting us from getting tricked. It's a reminder that we need to stay vigilant and report shady practices.
    For more details on these cases and the FTC's work, check out the 2023 Refund Report: https://lnkd.in/ehfiMig3
    Stay smart, stay safe, and let's keep pushing for fair practices! 💪
    💫 Regain your freedom online
    #fairpatterns #deceptivedesign #impactdriven #solutionmakers #onlinedeception #onlinemanipulation #privacy #privacylaw #darkpatterns
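    Here is that back-of-envelope calculation, a minimal sketch computed directly from the figures quoted in the post; it is a simple mean, and actual refund amounts varied widely from case to case.

    ```python
    # Back-of-envelope average from the post's direct-refund figures.
    # A simple mean for illustration; real payouts varied by case.
    direct_refunds_usd = 137_700_000  # direct refunds issued by the FTC in 2023
    recipients = 1_400_000            # number of people those refunds reached

    print(f"average direct refund ≈ ${direct_refunds_usd / recipients:,.2f}")
    # prints: average direct refund ≈ $98.36
    ```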

    RagingBull | Improve Your Trading Skills
    ragingbull.com

  • Amurabi

    🔍 Understanding the EU AI Act: Tackling Dark Patterns 🔍
    Today, let's dive into how the EU AI Act is cracking down on dark patterns and ensuring ethical AI practices. 🤖💡
    🚫 Unacceptable-Risk AI Systems 🚫
    Some AI systems pose such a threat to humans that they are outright banned. These include:
    - Cognitive manipulation: targeting vulnerable groups (e.g., dangerous behavior in children's toys) 🎮
    - Social scoring: classifying people based on behavior or personal characteristics 🏷️
    - Biometric identification: using real-time and remote systems such as facial recognition 🕵️
    🔎 Article 5: Prohibiting Dark Patterns as "Unacceptable Risk" 🔎
    Article 5 of the Act specifically bans AI that:
    - Uses subliminal techniques beyond a person's consciousness 🌀
    - Engages in manipulative or deceptive techniques 🎭
    The goal is to prevent AI from:
    - Distorting behavior and impairing informed decision-making 🧠
    - Causing significant harm through these distortions 🚨
    🚨 High-Risk AI Systems 🚨
    High-risk AI systems are those that negatively impact safety or fundamental rights, such as privacy and freedom. They fall into two categories:
    1. AI systems used in products covered by EU product-safety legislation, such as toys, cars, medical devices, and lifts.
    2. AI systems in specific areas that must be registered in an EU database, including:
    - Management of critical infrastructure 🏗️
    - Education and vocational training 📚
    - Employment and worker management 💼
    - Access to essential services 🏥
    - Law enforcement 🚓
    - Migration and border control 🌍
    - Legal assistance ⚖️
    💬 Why This Matters 💬
    This legislation is a major step forward in protecting consumers from unethical AI practices. By categorizing and banning harmful AI behaviors, the EU is ensuring a safer, more transparent digital landscape for everyone. 🌐
    🔗 For a very useful and playful game to better understand the AI Act and its obligations, check out the AI Act Game from Telecom Paris, by Thomas Le Goff: https://lnkd.in/eGr4wCc3
    Let's keep the conversation going! How do you think these regulations will impact the AI industry? Share your thoughts below! 👇
    #AI #EthicalAI #DarkPatterns #EULaw #TechRegulation #ConsumerProtection
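    To make the tiering above concrete, here is a minimal, hypothetical triage sketch. The tier names mirror the post, but the substring rules, the `triage` function and the example descriptions are illustrative assumptions, not the Act's legal tests.

    ```python
    # Illustrative triage of an AI use case against the AI Act's risk tiers.
    # The tier names mirror the post; the keyword matching is a simplified
    # assumption for demonstration, not a legal assessment.

    BANNED_PRACTICES = {
        "cognitive manipulation of vulnerable groups",
        "social scoring",
        "real-time remote biometric identification",
        "subliminal or deceptive techniques",
    }

    HIGH_RISK_AREAS = {
        "critical infrastructure",
        "education and vocational training",
        "employment and worker management",
        "access to essential services",
        "law enforcement",
        "migration and border control",
        "legal assistance",
    }

    def triage(use_case: str) -> str:
        """Return a coarse AI Act risk tier for a described use case."""
        description = use_case.lower()
        if any(practice in description for practice in BANNED_PRACTICES):
            return "unacceptable risk (prohibited under Article 5)"
        if any(area in description for area in HIGH_RISK_AREAS):
            return "high risk (registration and conformity duties)"
        return "limited or minimal risk (transparency duties may still apply)"

    print(triage("CV-screening tool for employment and worker management"))
    print(triage("toy relying on cognitive manipulation of vulnerable groups"))
    ```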

  • Amurabi

    🎙️ New Podcast Episode Alert! 🎙️
    Join us for the latest episode of Fairpatterns' Regain Your Freedom Online as we dive into the world of online safety and privacy with Claire Q., Chief Privacy Officer at PRIVO (Privacy Vaults Online). 🎙️
    Claire shares her fascinating career journey from traditional print media to leading privacy and safety initiatives for children online. Discover how Claire's early experiences shaped her dedication to protecting minors in the digital age. 🌐
    We explore:
    ➡ The importance of balancing protection and empowerment for children online.
    ➡ The complexities and challenges of age assurance and verification.
    ➡ The significant impact of dark patterns on minors, and how it differs from their impact on adults.
    ➡ Practical tips for parents and educators to safeguard children from deceptive design practices.
    Don't miss this insightful conversation, packed with valuable information for anyone concerned about online safety and privacy.
    🔗 Listen to the full episode here: https://lnkd.in/eAzhf5-Z
    #OnlineSafety #PrivacyProtection #ChildProtection #DarkPatterns #DigitalSafety #Privo #Podcast #FairPatterns

  • Amurabi

    🌟 This month, we're diving into a series on AI transparency, a topic that's gaining significant attention and importance, especially in light of the implementation of the AI Act in Europe. If you want to know more, here's our take on the topic: https://lnkd.in/gFw4Cpmq
    📚 Last week, we highlighted key aspects of AI and minors. Are AI tools safe for minors? https://lnkd.in/eFVJ5cXr
    This week, we're diving into a central topic when it comes to AI: training methods and data privacy 👀
    Starting with #AITrainingMethods… You might be wondering: do AI tools connect your ID with your conversations when training their models, or are personal identifiers segregated from the conversations? It's an important question. Essentially, are personal identifiers removed when training these models? And is training on our conversations on by default, or off? (A toy identifier-redaction sketch follows below.)
    Moreover, is the AI trained to clearly state that it's not human but a machine learning model? This is a big deal, because most people are more inclined to trust what they believe is a human than a machine, and because anthropomorphism can itself be a dark pattern.
    💡 It's also very important for all of us to understand the testing methods applied to the models. For example, red teaming: internal and external "ethical hackers" try to bypass safety and privacy-by-design measures to identify weaknesses so they can be continuously corrected. Think of it as a watchdog ensuring AI is safe and fair, in combination with other testing methods. 🛡️
    A lot is happening in Brussels right now around the AI Act. Regulators are debating how detailed AI companies should be about the content used to train their models. Some sections of the law are pretty controversial, especially the part requiring "detailed summaries" of training data. 🤔
    ❓ Should this training data be a trade secret, or should AI companies be transparent? What do you think?
    The EU requests more transparency from AI companies about the data used to train their systems. This is a huge step, because it touches one of the industry's most closely guarded secrets. Since the launch of ChatGPT by OpenAI, there's been a surge in AI use. But as this tech boom continues, there are rising concerns about how data, especially copyrighted content, is used in training AI models.
    What's your take on AI transparency? Should companies reveal more about their training data? Let's discuss! 🗣️
    For more insights, check out the full article: https://lnkd.in/eyxJGr94
    💫 Regain your freedom online.
    #AIAct #AIRegulation #TechEthics #DigitalTrust #ResponsibleAI
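    Here is the toy identifier-redaction sketch mentioned above: a minimal illustration of stripping obvious personal identifiers from a conversation, and dropping the user ID, before the text enters a training set. The regex patterns, function names and sample data are assumptions for demonstration only; real de-identification pipelines are far more thorough.

    ```python
    import re

    # Minimal, illustrative de-identification pass run on a conversation
    # before it is added to a training corpus. These regexes only catch
    # obvious identifiers (emails and phone-like numbers); they are an
    # assumption for demonstration, not a production anonymization method.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Replace obvious personal identifiers with placeholder tokens."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    def prepare_for_training(user_id: str, conversation: str) -> dict:
        """Segregate identity from content: the user ID is deliberately dropped."""
        return {"text": redact(conversation)}

    sample = "Call me at +33 6 12 34 56 78 or write to jane@example.com"
    print(prepare_for_training("user-42", sample))
    # {'text': 'Call me at [PHONE] or write to [EMAIL]'}
    ```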

    Fairpatterns

    🚀 How do you make AI systems transparent, understandable, and verifiable by humans? That's the challenge that one of the largest AI companies in the world gave us! Well, challenge accepted! 🌟
    #AItransparency is about making AI systems visible, clear, and understandable to humans. It fosters trust, ensures accountability, and enables informed decision-making. Key aspects include:
    ➡ Data transparency: disclosure of data sources and processing.
    ➡ Algorithm transparency: insights into the AI's logic and parameters.
    ➡ Model interpretability: creating human-understandable models.
    ➡ Operational transparency: information on AI deployment and limitations.
    ➡ Accountability and governance: clear frameworks and responsible parties.
    ➡ Ethical considerations: fair, private, and non-discriminatory AI practices.
    Right, but how do we make these very complex systems accessible? Human-centricity:
    ✅ Acknowledging the human cognitive biases triggered by so much change in so little time, and by information overload
    ✅ User-centric structuring of the information, in #plainlanguage
    ✅ Problem-solving design that empowers all humans (users, regulators, policymakers) to make their own informed judgment.
    ❓ Why does it matter? #AI is moving fast, and so is the #AIAct. It will be published in the Official Journal and will enter into force 20 days after publication, likely in June.
    ⏰ 6 months after entry into force: "unacceptable risk" AI systems will be banned (likely December 2024).
    ⏰ 1 year after entry into force: "general-purpose AI models" must comply (likely June 2025).
    ⏰ 2 years after entry into force: "limited risk", "minimal risk" and "high risk" AI systems must comply (likely June 2026).
    (A small sketch of this date arithmetic follows below.)
    👥 The AI Act can be overwhelming, so we've broken this complex regulation down into a series of steps to determine whether it applies to your activities. For more info, check out this "AI Act Guide" post: https://lnkd.in/eh7ZRMJM 📘
    Transparency is essential for building public trust, ensuring regulatory compliance, and promoting ethical AI use. 🛡️✨
    💫 Regain your free will online
    #AIAct #AIRegulation #TechEthics #DigitalTrust #ResponsibleAI
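    To see how those deadlines fall out, here is a small sketch that computes them from an assumed entry-into-force date. The 6-, 12- and 24-month offsets come from the post; the exact date is a placeholder assumption (the post only anticipated publication in June, with entry into force 20 days later).

    ```python
    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole months (day clamped to stay valid)."""
        month_index = d.month - 1 + months
        year = d.year + month_index // 12
        month = month_index % 12 + 1
        return date(year, month, min(d.day, 28))

    # Placeholder assumption for illustration; the real date depends on
    # when the Act was published in the Official Journal.
    ENTRY_INTO_FORCE = date(2024, 7, 1)

    milestones = {
        "prohibitions on unacceptable-risk systems": add_months(ENTRY_INTO_FORCE, 6),
        "general-purpose AI model obligations": add_months(ENTRY_INTO_FORCE, 12),
        "remaining risk-tier obligations": add_months(ENTRY_INTO_FORCE, 24),
    }

    for name, deadline in milestones.items():
        print(f"{name}: from {deadline.isoformat()}")
    ```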

  • Amurabi reposted this

    Marie Potel-Saville

    Founder & CEO | AITransparency, XAI, dark patterns detection and fixing. Age-appropriate design, privacy-enhancing design, litigation design.

    56,867 views and 491 likes celebrating #multicultural #diversity! Hey LinkedIn, you simply made my day(s)!
    On Friday last week, as I was packing to celebrate my 15-year wedding anniversary (with the most wonderful man ever 🥰), I picked up my passport from our folder at home and could not help but smile, seeing all the different passports in our family: green, red, dark red… 🇲🇽 🇬🇧 🇫🇷
    So I shared a post on what I deeply believe in: #multicultural #diversity makes us stronger, as citizens, as businesses and as a society. ❣️ https://lnkd.in/e5cxY5Zx
    🚀 And it went viral! So many likes, positive comments and support for this simple idea, one that is sadly being challenged in France right now.
    🌳 We're active optimists and believe we just have to do our share of the work, for what's within our control: our (wonderfully diverse) team and our work. So, how do we take into account and respect our users' diversity in our projects?
    👉 Many young people are not sure of their sexual orientation or their identity. We're currently designing a #privacypolicy for one of the largest #datingapps in the world (incidentally founded by a woman 😊), and of course we made sure that our #UserTesting would duly take the diversity of users into account, beyond labels and "boxes".
    👉 30% of all internet users are #kids and #teens, often as young as 8 years old. That's why we #cocreate with kids and teens dozens of #privacynotices and #privacydashboards dedicated to minors. 💯 Spoiler alert: it's totally possible to achieve a #readingage of 13 for a privacy policy; we've just done it!
    👉 At least 16% of the worldwide population live with a #disability. That's why we love creating #guidebooks and decision-making tools to #empower people with disabilities to make the best choices for themselves.
    👉 About 1 in 5 people is likely to develop cancer in their lifetime. We work closely with pharmaceutical laboratories, patients and patient associations to redesign informed #consentforms for #clinicaltrials and #patientagreements. Beyond the fact that each workshop with patients is an amazing lesson in #humanity and #resilience, it's very important for our work to account for the fact that treatment might affect patients' #cognitiveabilities: we need to avoid flashy colors and illustrations that might be too complex…
    👉 There are at least 180 #cognitivebiases identified to date: these "mental shortcuts" let us act fast, with low effort, but also make us vulnerable to #manipulation and #deception. That's why we find and fix #darkpatterns and #deceptivedesign, and developed #fairpatterns: interfaces that empower users to make their own free and informed #choices. ❣️
    The great diversity of the users of the law makes our work at Amurabi and Fairpatterns incredibly fulfilling, and impactful.
    💫 Regain your freedom online

  • Amurabi reposted this

    Fairpatterns

    In this edition, we explore pivotal developments shaping the digital landscape: a landmark FTC ruling against Amazon, our ongoing series on AI transparency, and the harsh realities of digital slavery. Dive into the latest efforts to secure fairness, transparency, and accountability in the online world. #fairpatterns #deceptivedesign #impactdriven #solutionmakers #onlinedeception #onlinemanipulation #privacy #privacylaw #darkpatterns

    Ensuring Fair Digital Practices: FTC vs. Amazon, AI Transparency, and Tackling Digital Slavery
    Fairpatterns on LinkedIn

  • Amurabi

    🎵 Did you miss the latest episode of the Top of the Agenda podcast by Oxera Consulting LLP? Go listen to it 👀 Our CEO and founder Marie Potel-Saville talks about our favorite topic, #darkpatterns, and how to fight them! 💪
    Do you want to know more?
    🚀 Register for our masterclass: https://lnkd.in/gxxamjFR
    💡 Subscribe to our newsletter: https://lnkd.in/eDiarG9N
    📚 Find all of our news updates on our site: https://lnkd.in/eiFBtjm4
    🎙️ Listen to our podcast: https://lnkd.in/eht9qxZV
    #fairpatterns #deceptivedesign #impactdriven #solutionmakers #onlinedeception #onlinemanipulation #privacy #privacylaw #darkpatterns

    Oxera Consulting LLP

    We may not think much of clicking 'accept all cookies' on a website to get to the content faster, or of making a quick purchase because we are told there are 'only two rooms left'. However, these deceptive or dark patterns are designed to exploit inherent decision-making biases and steer your choices, often to the benefit of the supplier rather than you. Listen to the latest episode of 'Top of the Agenda', where Helen Jenkins, Marie Potel-Saville and Anastasia Shchepetova discuss how we can identify and test the effects of these patterns and make these choices fairer. https://lnkd.in/gmffAY8s #darkpatterns #deceptivepatterns #choicearchitecture
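    As a toy illustration of how scarcity and urgency cues like "only two rooms left" might be flagged automatically, here is a small pattern-matching sketch. The phrase patterns are illustrative assumptions, not the detection method actually used by Amurabi, Fairpatterns or Oxera.

    ```python
    import re

    # Toy scanner for scarcity/urgency wording often associated with
    # deceptive design. The phrase patterns are illustrative assumptions.
    SCARCITY_CUES = [
        r"only \d+ (?:rooms?|items?|seats?|tickets?) left",
        r"\d+ (?:people|others) (?:are )?(?:viewing|looking at) this",
        r"hurry|act now|limited time|selling fast",
        r"offer (?:ends|expires) (?:soon|today|in \d+)",
    ]

    def find_scarcity_cues(page_text: str) -> list[str]:
        """Return each scarcity/urgency phrase matched in the page copy."""
        hits: list[str] = []
        for pattern in SCARCITY_CUES:
            hits += [m.group(0) for m in re.finditer(pattern, page_text, re.I)]
        return hits

    copy = "Hurry! Only 2 rooms left at this price. 14 people are viewing this."
    print(find_scarcity_cues(copy))
    # ['Only 2 rooms left', '14 people are viewing this', 'Hurry']
    ```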

  • Amurabi

    🌟 This month, we're kicking off a series on AI transparency, a topic that's gaining significant attention and importance, especially in light of recent investigations into Meta's AI data-scraping methods. Curious? Read more here: https://lnkd.in/gehxFvSE
    📚 Last week, we highlighted key aspects of #AITransparency and discussed the upcoming regulations under the #AIAct. If you missed it, catch up here: https://lnkd.in/gFw4Cpmq
    💡 This week, we're diving into a topic that's close to our heart: AI and minors. Are AI tools safe for minors? There are several critical points to consider:
    🚫 Age gates, which often fail to work effectively.
    🧹 The need to rigorously remove harmful input from data sets.
    🔍 Training and testing models to prevent the creation of outputs that are harmful to minors.
    👨‍💻 Educating minors, parents and educators on how to use AI safely, for example with a set of simple safeguards:
    ✅ Does the AI tool target minors, through its design, its copywriting or otherwise?
    ✅ Does the AI tool make it clear that users are not interacting with humans?
    ✅ Does the AI tool block any prompt that requests a harmful output, such as content relating to violence, self-harm, bullying or adult content? (A toy sketch of such a gate follows below.)
    Despite these challenges, we firmly believe that, once these issues are addressed, AI can be a fantastic tool for minors and their development. For instance, Harvard has analyzed the use of AI to create role-play scenarios for students, highlighting its potential. Check it out here: https://lnkd.in/ejw2xbzA
    🌟 What do you think? Is AI ready to be used by minors? Or rather, given that kids and teens use it anyway without us knowing, how do we empower them to have truly enriching learning experiences online? Let's work together to ensure a digital world that empowers rather than manipulates us. We'd love to hear your views! 🗣️
    💫 Regain your freedom online
    #AIAct #AIRegulation #TechEthics #DigitalTrust #ResponsibleAI
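    Here is the toy prompt-gate sketch promised in the checklist above: a minimal keyword filter for a minor-facing AI tool. The category names echo the post; the keyword lists and the `gate_prompt` function are illustrative assumptions, and production systems rely on trained classifiers and human review rather than keyword matching.

    ```python
    # Minimal, illustrative prompt gate for a minor-facing AI tool.
    # The keyword lists are assumptions for demonstration only; real
    # moderation uses trained classifiers and human review.

    BLOCKED_TOPICS = {
        "violence": ["weapon", "hurt someone", "attack"],
        "self-harm": ["self-harm", "hurt myself"],
        "bullying": ["humiliate", "bully"],
        "adult content": ["explicit", "nsfw"],
    }

    def gate_prompt(prompt: str) -> tuple[bool, str | None]:
        """Return (allowed, blocked_category) for a user's prompt."""
        lowered = prompt.lower()
        for category, terms in BLOCKED_TOPICS.items():
            if any(term in lowered for term in terms):
                return False, category
        return True, None

    print(gate_prompt("write a story that humiliates my classmate"))
    # (False, 'bullying')
    print(gate_prompt("help me revise for my history test"))
    # (True, None)
    ```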

  • Amurabi

    Hello folks! This period is dense with interesting announcements, and we're here to make sure you don't miss any of them 👀 💪 So here's an update from the UK market! 🇬🇧
    First of all, the Digital Markets, Competition and Consumers Bill (DMCCB) passed last month. The DMCCB reforms unfair commercial practices and combats subscription traps. It introduces specific pre-contract information obligations for subscription contracts, establishes 14-day 'renewal cooling-off periods', requires businesses to send reminder notices, and mandates straightforward subscription cancellation. The DMCCB also introduces rules against 'drip pricing' and fake reviews, requiring the total price to be included in ads and banning misleading review practices (a tiny worked example of the total-price rule follows below). ➡️ https://lnkd.in/egU5mWBb
    The "Online Rip-Off Tip-Off" campaign was also launched by the government! The UK has created a website to educate consumers about spotting and avoiding misleading online sales tactics, and to let them easily report online rip-offs via a digital reporting form. The form claims to take only 2 minutes, and there are various videos and tips about how to spot sneaky sales tactics. Specifically, there are videos about:
    Hidden fees: https://lnkd.in/e5saCiqC
    Pressure selling: https://lnkd.in/ehiGy_Uy
    Subscription traps: https://lnkd.in/e9JCtK4N
    Fake reviews: https://lnkd.in/e4X4iBmR
    ➡️ https://lnkd.in/ggYnw-9W
    We hope you found this news interesting! Stay tuned for more 🚀
    💫 Regain your freedom online
    #fairpatterns #deceptivedesign #impactdriven #solutionmakers #onlinedeception #onlinemanipulation #privacy #privacylaw #darkpatterns
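    To illustrate the drip-pricing rule mentioned above, here is a tiny worked example of folding mandatory fees into the advertised headline price. The product, fee names and amounts are hypothetical.

    ```python
    # Illustrative total-price calculation under a drip-pricing rule:
    # mandatory fees must be folded into the advertised headline price,
    # while genuinely optional extras can be priced separately.
    # All names and amounts below are hypothetical examples.
    base_price = 89.00
    mandatory_fees = {"booking fee": 12.50, "cleaning fee": 25.00}
    optional_extras = {"late checkout": 15.00}

    headline_price = base_price + sum(mandatory_fees.values())
    print(f"advertised headline price: £{headline_price:.2f}")  # £126.50
    print(f"optional extras, shown separately: {optional_extras}")
    ```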

    Competition and consumer law reform: the Digital Markets, Competition & Consumers Bill is passed by UK Parliament
    twobirds.com
