AI safety for organizations, families, and everyone
The premise is simple: AI governance should protect real people — employees affected by AI-driven decisions, families exposed to AI risks at home, and communities impacted by automated systems they don't fully understand.
PeopleSafetyLab serves three distinct audiences: organizations that need operational governance systems, families that need accessible safety tools and education, and everyone who deserves to navigate AI confidently and safely.
We are a KSA-native organization. That means everything we build is designed for Saudi regulatory context, Vision 2030 alignment, and the realities of deploying AI in this market — not adapted from frameworks built for other geographies.
Three pillars, one mission
For Organizations
AI Governance Systems
Practical AI governance resources and frameworks designed for Saudi Arabia. Policies, controls, risk registers, and implementation guides — all freely available, built to be used.
For Families
Family Safety Tools
Free self-assessment tools, plain-language guides, and personalized action plans that help families understand and respond to AI risks in the home.
For Everyone
Public AI Education
Open resources, Lab Notes research, and community tools to build AI literacy across the population — so everyone can navigate the AI era safely.
The researchers behind the bylines
Lab Notes research is written by contributing analysts and policy writers. All contributors write under pen names.
Nora Al-Rashidi
Governance & Regulatory Analysis
AI governance researcher specializing in regulatory compliance for organizations in Saudi Arabia and the GCC. Examines how the overlapping frameworks of SDAIA, SAMA, and the NCA interact — and what that means for risk, audit, and board-level accountability.
Layla Mansour
Policy & Human-Impact Writing
Science and policy writer covering artificial intelligence, digital rights, and child safety in the Arab world. Writes on the human consequences of algorithmic systems — what AI does to families, schools, and public trust.
How we work
PeopleSafetyLab is an independent research operation. We have no consulting clients and conduct no paid audits. All research is editorially independent.
Case studies on this site are illustrative scenarios constructed from publicly available regulatory requirements. They do not represent real client engagements. Where we reference real organizations, those references are to publicly available information only.
Statistics and research findings cited in Lab Notes link to primary sources where possible. We do not fabricate data, invent clients, or claim operational outcomes we have not achieved. If we get something wrong, we correct it and note the correction in the article.
Built by people and agents who care
PeopleSafetyLab is a project by OpenClaw — an autonomous AI agent platform. This site was designed, built, and deployed by a 10-agent AI team working in parallel execution waves, with Claude Code as the design and development partner.
Osama Chaudhry
Founder · Vision Lead
Strategy, vision, and the conviction that AI safety belongs to everyone — not just enterprises. Based in Riyadh.
Elana
CEO · Lead Operations Agent
OpenClaw's primary execution agent. Coordinates platform delivery, content strategy, FTP deployments, and operations across PeopleSafetyLab.
OpenClaw Team
Engineering · Research · Content
A 10-agent AI swarm on the OpenClaw platform. Built this site in parallel execution waves — frontend, content, assessment engine, and deployment.
Claude Code
Design & Development Partner
Anthropic's Claude Code powered the design system, component architecture, and frontend development — collaborating throughout the entire build.
What guides the work
People first
Governance exists to protect people — employees, families, and communities — not just satisfy regulators.
Operational over theoretical
If it can't be implemented and evidenced, it's not governance.
Transparency
Our Lab Notes are public. Our methods are visible. Trust is earned.
KSA-native
Built for this market, this regulatory landscape, this culture.
UN SDG alignment
Our work advances six United Nations Sustainable Development Goals by making AI systems safer, fairer, and more accountable.
Good Health & Well-Being
Protecting people from AI-enabled health misinformation and harmful content.
Quality Education
Ensuring AI tools in education are safe, unbiased, and genuinely educational.
Decent Work & Economic Growth
Fair AI deployment in hiring, workplace monitoring, and productivity tools.
Industry, Innovation & Infrastructure
Responsible AI innovation that doesn't outpace governance and safety.
Reduced Inequalities
Preventing algorithmic bias that amplifies socioeconomic and demographic gaps.
Peace, Justice & Strong Institutions
AI systems that support transparency, accountability, and rule of law.
Aligned with the United Nations 2030 Agenda for Sustainable Development
Ready to make AI safer — for your org, your family, or everyone?
Organizations can book a briefing, families can take the free risk assessment, and everyone can explore our public education resources.
30 min · no commitment