Welcome to SafeLab
SafeLab is a non-profit organization founded in Paris in 2022, dedicated to developing and promoting safe artificial intelligence (AI) systems that benefit humanity. Our mission is to ensure AI technology is created and used responsibly by researching, developing, and advocating for beneficial AI that respects human values.
Key Focus Areas
- AI Safety Research - We conduct technical research into making AI systems more robust, transparent, and aligned with human values. This includes studying AI alignment, interpretability, verification, and techniques to reduce biases.
- Mathematical Tools for Machine Learning - The laws of thermodynamics were not yet understood when the steam engine was invented; likewise, machine learning was made to work before we understood why it is effective. We believe that mastering the mathematics underlying deep learning theory will lead to safer and more efficient systems.
- Educational Outreach - We educate the public and policymakers on the responsible development and use of AI through conferences, workshops, publications, and engaging with media. We aim to raise awareness about AI safety issues.
- AI Auditing and Testing Tools - We build open source tools for auditing AI systems and datasets to detect issues like biases, flaws, and misalignment with stated objectives. These tools are used by companies, researchers, and the public to evaluate AI.
- AI Policy Guidance - We provide analysis and policy recommendations to governments and companies to ensure AI is developed and used ethically. This covers topics like transparency, accountability, mitigating harm, and protecting privacy.
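To make the auditing idea above concrete, here is a minimal sketch of one common fairness check, the demographic parity gap: the largest difference in positive-prediction rates between demographic groups. The function name and toy data are illustrative assumptions, not part of SafeLab's actual tools, and real audits combine many such metrics.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two demographic groups (0.0 = perfect parity)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy audit: a model that approves 3/4 of group A but only 1/4 of group B.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # → 0.5
```

A gap near zero suggests the model treats groups similarly on this axis; a large gap, as in the toy data above, flags the model for closer human review.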
Current Projects
- An LLM text classifier that detects harmful AI-generated text content, for use by academic researchers and students. This helps identify text such as hate speech or misinformation.
- A deepfake detection tool that analyzes audio and video to determine whether the media is synthetically generated or manipulated. This is used to combat disinformation.
- Ongoing research into safe and beneficial reinforcement learning, machine learning transparency, natural language processing fairness, and other technical AI safety areas.
- Public workshops and lectures to educate policymakers on AI governance, product safety standards, and responsible publication norms.
- Resources such as practical guides on AI safety for companies and technical standards for building safe AI systems.
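The text-classification project above can be illustrated with a toy bag-of-words naive Bayes model. This is a pedagogical sketch only: the class name and training phrases are invented, and a production harmful-content detector would build on large pretrained language models rather than word counts.

```python
import math
from collections import Counter

class TinyTextClassifier:
    """A minimal bag-of-words naive Bayes classifier, sketching the
    statistical core of a text-content detector (illustrative only)."""

    def __init__(self):
        self.word_counts = {"harmful": Counter(), "benign": Counter()}
        self.doc_counts = {"harmful": 0, "benign": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        # Shared vocabulary size for add-one (Laplace) smoothing.
        vocab = len({w for c in self.word_counts.values() for w in c})
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            # Log prior plus smoothed log likelihood of each word.
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                score += math.log((counts[word] + 1) / (total_words + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

clf = TinyTextClassifier()
clf.train("you are terrible", "harmful")
clf.train("everyone hates you", "harmful")
clf.train("have a nice day", "benign")
clf.train("thanks for the help", "benign")
print(clf.predict("you are hateful"))  # → harmful
```

The design choice here is deliberate simplicity: naive Bayes is transparent enough to audit by hand, which is the same interpretability property SafeLab's research aims to recover for much larger models.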
As a non-profit, SafeLab is supported through grants, donations, and partnerships with organizations aligned with our mission of benefiting society through safe and ethical AI development. We work closely with researchers, companies, government agencies, and civil society organizations to promote AI safety for the common good.