
How a Former Facebook Insider Is Reinventing Content Moderation for the Age of AI
Moonbounce has secured $12M to transform how AI platforms enforce safety policies — in real time, at scale, and with remarkable precision.
A $12 Million Bet on Smarter AI Safety
A startup founded by a former Facebook trust and safety executive has raised $12 million to tackle one of the most pressing challenges in the AI industry: making content moderation fast, accurate, and built directly into the products people use every day.
Moonbounce, co-led by Brett Levenson and Ash Bhardwaj, announced the funding round — backed by Amplify Partners and StepStone Group — as demand surges for real-time AI safety infrastructure across platforms ranging from dating apps to AI companion services.
The Problem Levenson Saw From the Inside
When Levenson departed Apple in 2019 to take on a business integrity role at Facebook, the social media company was still reeling from the Cambridge Analytica scandal. He arrived convinced that better technology could solve the platform's content moderation woes. What he found instead was a deeply human problem.
Content reviewers were handed a 40-page policy document — often machine-translated into their native language — and expected to memorize it. Each flagged piece of content received roughly 30 seconds of attention. Reviewers had to determine not only whether a post violated policy, but also what corrective action to take: remove it, restrict the user, or limit its reach.
The result? Decision accuracy that Levenson described as only "slightly better than 50%" — essentially a coin flip. Worse, those decisions were being made days after the harmful content had already circulated.
"It was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway," Levenson told TechCrunch.
From Policy Documents to Executable Code
That experience planted the seed for what Levenson calls "policy as code" — converting static, written policy documents into dynamic, enforceable logic that operates in real time. That core idea became the foundation of Moonbounce.
The company has developed its own large language model that ingests a client's policy documentation, evaluates content as it is generated, and delivers a decision within 300 milliseconds. Depending on how a customer configures the system, Moonbounce can flag content for delayed human review, throttle its distribution, or block it outright when the risk threshold is high enough.
This positions Moonbounce as an independent safety layer — sitting between the end user and the AI system itself — rather than a solution baked into the chatbot or platform. Because it operates outside the main conversation context, Moonbounce's system is not bogged down by the thousands of tokens a chatbot must track. Its sole function is rule enforcement at runtime.
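To make the "policy as code" idea concrete, here is a minimal sketch of what a runtime enforcement layer with configurable thresholds could look like. This is purely illustrative: the class names, thresholds, and the keyword-based risk scorer are invented for this example, and a real system like Moonbounce's would score content with a trained model rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"          # queue for delayed human review
    THROTTLE = "throttle"  # limit the content's distribution
    BLOCK = "block"        # stop the content outright

@dataclass
class PolicyConfig:
    # Per-customer risk thresholds (hypothetical values).
    flag_at: float = 0.3
    throttle_at: float = 0.6
    block_at: float = 0.9

def score_risk(text: str) -> float:
    """Toy stand-in for a model-produced risk score in [0.0, 1.0].
    A production system would call a classifier or LLM here."""
    risky_terms = {"self-harm": 0.95, "harassment": 0.7, "spam": 0.4}
    lowered = text.lower()
    return max((w for t, w in risky_terms.items() if t in lowered), default=0.0)

def enforce(text: str, cfg: PolicyConfig) -> Action:
    """Map a risk score to one of the actions the article describes:
    flag for human review, throttle distribution, or block outright."""
    risk = score_risk(text)
    if risk >= cfg.block_at:
        return Action.BLOCK
    if risk >= cfg.throttle_at:
        return Action.THROTTLE
    if risk >= cfg.flag_at:
        return Action.FLAG
    return Action.ALLOW
```

The point of the sketch is the shape of the system, not the scoring: policy lives in configuration that executes on every piece of content, rather than in a document a reviewer must memorize.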
Who Moonbounce Is Serving
The company currently focuses on three core markets:
- User-generated content platforms, such as dating apps
- AI companion and character platforms building conversational agents
- AI image and video generation tools
Moonbounce is already processing more than 40 million content reviews daily and serves upwards of 100 million daily active users. Its current client roster includes AI companion platform Channel AI, image and video generation service Civitai, and character roleplay apps Dippy AI and Moescape.
Tinder's head of trust and safety has also credited LLM-powered moderation, an approach similar to Moonbounce's, with a tenfold improvement in detection accuracy on the dating platform.
Safety as a Competitive Advantage
One of Levenson's most compelling arguments is that safety does not have to be an afterthought — it can be a genuine product differentiator.
"Safety can actually be a product benefit," he said. "It just never has been because it's always a thing that happens later, not a thing you can actually build into your product."
This framing resonates with investors. Lenny Pruss, general partner at Amplify Partners, emphasized that as LLMs become central to virtually every application, the need for objective, real-time content guardrails has never been greater.
"We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application," Pruss said.
Legal Pressure Is Accelerating Demand
The timing of Moonbounce's rise is no coincidence. AI companies are under mounting legal and reputational scrutiny following a series of high-profile failures. Chatbots have been accused of directing teenagers and emotionally vulnerable users toward self-harm. Image generators, including xAI's Grok, have been exploited to produce nonconsensual intimate imagery.
As internal safety systems prove inadequate, more AI companies are turning to third-party providers to shore up their defenses — a trend Levenson says is accelerating.
A Gentler Approach: Steering Instead of Blocking
Perhaps the most forward-looking capability Moonbounce is developing is what the team calls "iterative steering." The concept emerged in response to tragedies like the 2024 death of a 14-year-old Florida boy who had developed an unhealthy fixation on a Character AI chatbot.
Rather than delivering a blunt refusal when a dangerous topic surfaces, the system would intercept the exchange and modify the user's prompt in real time — guiding the chatbot to respond not just with empathy, but with genuinely constructive support.
"We hope to be able to add to our actions toolkit the ability to steer the chatbot in a better direction," Levenson explained, "to essentially take the user's prompt and modify it to force the chatbot to be not just an empathetic listener, but a helpful listener in those situations."
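A toy version of that interception step might look like the following. Everything here is a hypothetical illustration, assuming a simple marker list and a fixed steering preamble; Moonbounce has not published how its steering will actually work, and a real system would use model-based risk detection rather than string matching.

```python
# Illustrative crisis markers; a real detector would be a classifier.
CRISIS_MARKERS = ("want to hurt myself", "end my life")

# Instruction prepended to the prompt to steer the downstream chatbot
# toward constructive support instead of a blunt refusal.
STEERING_PREFIX = (
    "The user may be in distress. Respond with empathy, stay out of "
    "role-play, and gently point them toward real-world support. "
    "User message follows:\n"
)

def steer_prompt(user_prompt: str) -> str:
    """Intercept a risky prompt and rewrite it before it reaches the
    chatbot, rather than blocking the exchange outright."""
    lowered = user_prompt.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        return STEERING_PREFIX + user_prompt
    return user_prompt
```

The design choice the article highlights is visible even in this sketch: the safety layer modifies the conversation's input instead of refusing to respond, so the user still gets an answer, just one nudged in a safer direction.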
An Independent Vision for the Future
Moonbounce operates with a 12-person team led by Levenson and Bhardwaj, who previously built large-scale cloud and AI infrastructure at Apple. When asked whether the company's future might involve an acquisition — perhaps by Meta, his former employer — Levenson gave a candid and notably unguarded answer.
"My investors would kill me for saying this, but I would hate to see someone buy us and then restrict the technology," he admitted. "Like, 'Okay, this is ours now, and nobody else can benefit from it.'"
It is a rare moment of transparency from a founder who clearly views Moonbounce's mission as larger than any single company's roadmap — including his own.


