Content Safety Policy
How we protect Discord communities and how we report illegal content.
Effective April 30, 2026
1. Our commitments
Phantom exists to help moderators run safer Discord communities. We back that up with concrete commitments:
- We do not tolerate the use of Phantom to host, distribute, or enable child sexual abuse material (CSAM), terrorist or violent-extremist content, non-consensual intimate imagery, or other content that's illegal under applicable law.
- We act quickly on credible reports — within hours for the most serious categories.
- We cooperate with Discord's Trust & Safety team, the National Center for Missing & Exploited Children (NCMEC), and other competent authorities to address abuse on the platform.
- We design our features to give moderators effective tools — automod, logging, audit history, cross-server bans where admins opt in — and make those tools easy to use.
2. What's prohibited
You may not use Phantom, whether directly, indirectly, or by configuring its features to produce the same result, to:
- Distribute, link to, store, or facilitate access to child sexual abuse material (CSAM) or any sexualised content involving minors. There is no in-jest or "satire" exception for this category.
- Promote, distribute, or organise terrorism, violent extremism, or imminent real-world violence.
- Distribute non-consensual intimate imagery, doxxing material, stolen credentials, or content that violates someone's privacy rights.
- Facilitate the sale or distribution of illegal drugs, weapons regulated under your local law, or stolen goods.
- Engage in human trafficking, child grooming, or coordinated harassment campaigns.
- Distribute malware, phishing kits, or content designed to compromise other users' accounts or devices.
- Process or store content in violation of intellectual-property law (see also the DMCA Policy).
The full list of prohibited uses, including uses that are unacceptable even where they are not illegal, is in the Acceptable Use Policy. This page covers the safety-critical subset.
3. Child safety
Our position on CSAM is zero-tolerance. We:
- Cooperate with NCMEC's CyberTipline and equivalent international organisations as required by law.
- Preserve evidence as required by 18 U.S.C. § 2258A and analogous statutes in other jurisdictions when we report.
- Report to Discord Trust & Safety any user-generated content we encounter through our service that we have actual knowledge constitutes CSAM.
- Permanently terminate access to Phantom for any account, server, or Custom Branding application connected to such activity.
We rely on the Discord platform's own mechanisms (including its hash-matching for known CSAM) for detection on Discord itself, and on community reports for content that flows through bot features (custom commands, embed builder, welcome backgrounds, ticket attachments). Reports about content you've encountered through Phantom should go through the support server — see Section 4.
4. Reporting illegal or harmful content
If you've encountered content through Phantom that you believe falls into the prohibited categories above, contact us. We treat these reports as urgent. The fastest channel is the Phantom support server — open a private ticket so the report goes directly to the safety team and isn't visible to other members.
- Child safety reports: open a ticket flagged as a safety report. Include enough context (server ID, channel ID, message link) for us to investigate. Do not attach copies of CSAM to the ticket — describe the content and link to the location, or provide the message link without the file. We'll work with Discord and the relevant authorities from there.
- Terrorist or violent-extremist content: open a ticket flagged as a safety report with the same context.
- Other illegal content (doxxing, stolen credentials, copyright infringement, etc.): open a ticket so the right team picks it up. Copyright complaints follow the formal process in the DMCA Policy.
- Imminent threats of violence: contact local emergency services first. Then report to us through the support server so we can preserve whatever evidence is available.
We aim to acknowledge child-safety and imminent-violence reports within 4 hours, around the clock, and to act within 24. Other categories are reviewed within two business days.
5. Working with platforms and law enforcement
We cooperate with valid legal process (subpoenas, court orders, and other lawful requests) issued by competent authorities in the jurisdictions in which we operate. We challenge requests we believe are overbroad where the law allows. Where the law permits, we notify affected users before complying with a request; where it doesn't (for example, under a gag order or in cases of imminent harm), we don't.
For voluntary disclosure of personal data in emergency circumstances, we follow the standards of the operating jurisdiction's law (in the US, 18 U.S.C. § 2702(b)(8), which permits disclosure on a good-faith belief that an emergency involving danger of death or serious physical injury requires it).
6. Working with Discord Trust & Safety
Phantom is a third-party developer on Discord's platform. Discord operates the underlying communications platform and is the primary point of escalation for content posted in servers. We share information with Discord Trust & Safety where:
- We've encountered content through our service that may violate Discord's Terms of Service or Community Guidelines.
- Discord asks us, through legitimate channels, for information needed to address abuse on the platform.
- We're terminating a Custom Branding application for safety reasons and Discord needs to know to evaluate the underlying app.
If you believe Discord itself needs to act on content (deleting messages or accounts), Discord's in-product reporting tools and discord.com/safety are usually the fastest path.
7. Custom Branding obligations
If you use Custom Branding to operate your own bot through Phantom, you're responsible for safety in the servers where your bot operates. You commit to:
- Providing your own user-facing safety reporting channel (e.g. a contact email or a Discord support server).
- Preserving evidence of CSAM or terrorist content if you encounter it through your bot, and reporting it as required by applicable law (in the US, to NCMEC under 18 U.S.C. § 2258A).
- Cooperating with our requests for information when Phantom Trust & Safety needs it to investigate a report involving your bot.
- Not configuring features to undermine our enforcement — for example, you can't use the bot to mass-DM minors or to scrape and republish private messages.
We may suspend Custom Branding immediately for any application implicated in a credible safety violation, pending investigation.
8. Enforcement and consequences
Depending on severity and history, our responses to violations include:
- Removing the offending content from our systems where we host it.
- Suspending the offending feature module on the affected server.
- Terminating Phantom's access to the affected server and refusing further service to the responsible administrators.
- Suspending or terminating Custom Branding applications.
- Referring the matter to Discord Trust & Safety.
- Reporting to NCMEC or law enforcement as required.
For the most serious categories (CSAM, terrorism, imminent violence), termination and reporting are automatic — there is no "first warning".
9. Contact
Safety reports and legal process both go through the Phantom support server. Open a private ticket and flag whether it's safety- or legal-process-related so it reaches the right team immediately.