Child Safety Standards Policy
Effective Date: April 29, 2026
Our Commitment
Ready to Mingle™ (“RTM,” “we,” “us,” or “our”) is a community-first connection platform whose mission is to fight the loneliness epidemic by fostering authentic, in-real-life connections for dating, friendship, networking, and learning. The integrity of that mission depends on maintaining an environment that is unequivocally safe for all users.
Our platform is strictly intended for adults aged eighteen (18) and over.
We do not permit minors to create accounts or access the service under any circumstances. RTM maintains a zero-tolerance policy toward Child Sexual Abuse and Exploitation (“CSAE”), Child Sexual Abuse Material (“CSAM”), and any form of sexualization of minors. We strictly prohibit the use of our platform by any individual seeking to identify, contact, groom, exploit, traffic, or otherwise harm a minor. This represents RTM’s absolute, non-negotiable commitment to protecting children, and we treat any attempt to circumvent our age requirements as among the most serious violations of our Terms of Service.
This policy aligns with federal obligations, including 18 U.S.C. § 2258A (reporting requirements), and with California law, including the California Consumer Privacy Act as amended by the California Privacy Rights Act (CCPA/CPRA) (consent and privacy matters) and applicable criminal statutes on the exploitation of minors.
This Child Safety Standards Policy (the “Policy”) is incorporated by reference into our Terms of Service and Privacy Policy.
Scope & Applicability
This Policy applies to all users, content, communications, profiles, photographs, video, audio, and conduct on or through the Ready to Mingle™ service, including the RTM mobile application (iOS and Android), the RTM website, RTM-hosted events (including Ready Zones, Ready Hours, and launch parties), RTM-curated communications, and any third-party integrations operated by RTM. It applies to every employee, contractor, moderator, vendor, ambassador, event partner, and venue partner who acts on RTM’s behalf.
Age Assurance & Minimum Age Enforcement
Minimum Age
RTM users must be at least 18 years old to create and maintain an account. Where local law defines a “child” or “minor” at a higher age, the higher age applies.
Age Assurance Measures
We implement layered safeguards to prevent minors from accessing the platform:
Self-attestation (date of birth). Users cannot edit their date of birth after account creation without identity verification and human review.
Email/phone verification.
Risk-based ID verification checks (government-issued ID or trusted third-party services).
AI-assisted age estimation on profile images (with human review).
Enforcement
Accounts suspected to belong to minors are immediately restricted.
Users must complete age verification or face permanent removal of their account.
Any confirmed minor account is deleted and associated data handled per legal requirements.
Prohibited Content & Conduct
The following content and conduct are strictly prohibited on Ready to Mingle™.
Child Sexual Abuse and Exploitation (CSAE): Any sexual abuse or exploitation of a minor, including grooming, sextortion, sexual solicitation, sexual harassment, trafficking, livestreaming of abuse, and the production, possession, distribution, advertising, or solicitation of CSAM. CSAE includes conduct directed at fictional or AI-generated depictions of minors.
Child Sexual Abuse Material (CSAM): Any visual depiction, including photographs, videos, illustrations, anime, computer-generated imagery, deepfakes, and AI-generated content, of sexually explicit conduct involving a minor or that sexualizes a minor.
Grooming: Deliberate actions taken by an adult to build trust with a minor or with the minor’s caregiver in order to facilitate sexual contact, sexual content, or exploitation, whether online or offline. Actions include misrepresenting age in order to interact with minors and attempting to move conversations with suspected minors off the platform.
Sextortion: The threatened release or distribution of sexual imagery (real, fabricated, or AI-generated) of a minor in exchange for money, additional imagery, sexual contact, or any other consideration.
Violations result in immediate removal, permanent account and device-level bans, preservation of evidence, and reporting to law enforcement and the National Center for Missing and Exploited Children (NCMEC) where required by law.
Prevention, Detection, & Proactive Content Moderation
RTM employs a combination of automated and human moderation systems to prevent, detect, and respond to prohibited conduct.
Current Safeguards
All users are required to complete identity verification through Persona. New profiles are restricted from sending messages until verification is complete.
Profiles flagged as suspicious are subject to review prior to or after publication, as appropriate.
In-App Reporting Mechanism
Every member of the RTM community has access to clear, prominent, and easy-to-use in-app reporting tools for submitting concerns about CSAE, suspected minors, grooming, harassment, or any other safety issue.
Reports are available for profiles, messages, content including photos, voice/video clips, and events.
Dedicated reporting categories cover underage or suspected-minor users, child sexual abuse or exploitation, grooming, and other safety concerns.
High-risk reports (CSAE, CSAM, suspected minors, imminent-harm scenarios) are flagged and prioritized for urgent review and immediate triage. Reports may be submitted anonymously, and the reporting user is never exposed to the reported user.
Outside the app, users, parents, guardians, educators, the public, and trusted-flagger organizations may submit concerns via email to trust-safety@readytomingle.com.
Enforcement, Response, & Remediation
When RTM obtains actual knowledge of prohibited content or conduct, we take appropriate action consistent with this Policy, our published standards, and applicable law, as follows:
Immediate removal of the violating content.
Suspension or permanent ban of the account and, where supported, the device(s) associated with the violation.
Restriction of access to platform features.
Preservation of evidence for investigation.
Mandatory Reporting & Law Enforcement Cooperation
Federal reporting obligations
RTM complies with applicable laws and reports suspected child exploitation, including CSAM, to:
The National Center for Missing & Exploited Children (NCMEC).
Other applicable agencies.
Cooperation
RTM complies with lawful requests from U.S. and California law enforcement;
Provides preserved records where legally required; and
Supports investigations into child exploitation.
Privacy, Data Retention, and Evidence Handling
RTM does not knowingly collect personal information from anyone under 18, and where we discover that we have, we delete the information promptly except where preservation is legally required.
Suspected CSAM and sensitive data, including age-verification data, are preserved only in secure, access-restricted infrastructure for the period required by 18 U.S.C. § 2258A(h) and are not viewable by general staff.
Reviewer access to suspected CSAE material is logged, audited, and limited to specially trained personnel.
Material associated with confirmed CSAE may be retained indefinitely for the purpose of preventing re-registration and improving detection.
California users have rights under the CCPA/CPRA, including the right to access personal data, the right to deletion (subject to legal exceptions), and the right to opt out of the sale or sharing of personal data (if applicable).
Safety by Design
We incorporate protective design principles including:
Controls/limits on unsolicited messaging (especially from new or unverified users)
Detection of suspicious or high-risk interactions
Profile moderation before visibility (for high-risk accounts)
Staff Training, Governance, and Accountability
Annual mandatory Trust & Safety training for all personnel on:
Grooming detection
CSAM identification and escalation
Legal reporting obligations
Background screening of personnel with access to user content or moderation tools, consistent with applicable law.
Regular policy audits and updates.
Transparency & Accountability
Maintain periodic transparency reports that include:
Number of reports received
Enforcement actions taken
CSAM reports submitted
Maintain internal audit logs of moderation decisions
Continuous Improvement
Regular updates to detection technologies
Engagement with external safety experts and NGOs
Continuous training of moderation teams
Monitoring emerging risks and evolving California and federal regulatory requirements
Designated Child Safety Point of Contact
RTM has designated the following point of contact to receive notifications from users, the public, Google Play, Apple, regulators, NCMEC, and other safety partners regarding potential CSAM and content that violates our CSAE standards. This representative is empowered to speak to RTM’s enforcement and review procedures and to take action.
Role: Head of Trust & Safety / Child Safety Officer
Name: Gia Sullivan (Founder, acting interim Child Safety Officer)
Primary Email: child-safety@readytomingle.com
Backup Email: trust-safety@readytomingle.com
Mailing Address: RTM Technologies LLC, Attn: Child Safety Officer, 600 B Street, Suite 2100, San Diego, CA 92101 USA
Offline & Event Safety (Ready Zones, Ready Hours, Launch Events)
Because Ready to Mingle™ exists to drive real-life connections, our child-safety obligations extend to in-person experiences:
All RTM-hosted events are 18+ and require an RTM-verified account for entry.
Venue partners are contractually required to confirm 21+ or 18+ door checks (whichever is applicable to the venue and jurisdiction) and to enforce house safety rules.
Ambassadors and event hosts are vetted and trained on this Policy, and may be subject to background checks consistent with applicable law.
Photography and video at events are reviewed before publication; images that may include minors are not published, and incidental minors must be blurred or excluded.
No Retaliation; Whistleblower Protection
RTM prohibits retaliation against any user, employee, contractor, ambassador, or partner who, in good faith, reports a suspected violation of this Policy. Reports may be submitted to the Child Safety Point of Contact identified above.
Policy Updates
RTM reviews this Policy at least annually and updates it as the regulatory landscape, threat landscape, and best practices evolve. Material updates are communicated via the website and in-app notifications. The current version, version history, and effective date are always published at the same URL.