Safety Center

Effective date: October 11, 2025

Our commitment to safety

Soultrob is committed to creating a place where people can share honestly and safely. This Safety Center explains how we prevent abuse, how to report problems, and what we do after a report. We take safety seriously — for users, moderators, and our community at large.

What this page covers

  • How to report abusive, illegal, or dangerous content.
  • How we moderate and the types of enforcement actions we take.
  • How anonymity works and its limits.
  • Tips to protect your privacy and stay safe.
  • How we work with law enforcement and external partners.
  • Support resources for self-harm, exploitation, or emergency situations.

Content categories we do not allow (summary)

The following are prohibited on Soultrob and will be removed when detected or reported:

Harassment & Hate

Targeted harassment, threats, and demeaning language directed at an individual or protected class.

Doxxing & Privacy Violations

Publishing or soliciting private personal information (addresses, phone numbers, identity documents) about others.

Sexual Exploitation & CSAM

Any sexual content involving minors, or otherwise exploitative material, is strictly forbidden and will be escalated to law enforcement immediately.

Violence & Criminal Activity

Praise of, instructions for, or organization of violent or illegal activity (e.g., bomb-making, drug distribution).

Self-harm & Suicide Content

Content encouraging self-harm or suicide that could cause imminent harm will be prioritized for removal and support referrals.

Impersonation & Fraud

Impersonating others to deceive, scam, or manipulate is prohibited.

Spam, Bot Abuse & Fake Engagement

Coordinated inauthentic activity, bot-driven engagement, and automated spam are not allowed.

How to report harmful content

We provide multiple reporting channels. Use whichever fits the urgency of the situation.

In-app reporting (recommended)

  1. Find the post, comment, or profile you want to report.
  2. Select Report.
  3. Select the reason category (Harassment, Self-harm, Sexual, Doxxing, Spam, etc.).
  4. Provide context explaining why the content violates our rules. Attach screenshots if helpful.
  5. Submit the report. You will receive an acknowledgement when appropriate.

Email or secure forms

If you cannot use the app, or need to provide sensitive information, email [email protected] or use our secure reporting form at TODO: link to secure form.

What to include in a report

  • Link or permalink to the post/profile (if available).
  • Date and approximate time (timezone).
  • Short description of the issue and why it violates rules.
  • Any screenshots or attachments.

What happens after you report

  1. Automated triage: Reports are initially categorized by automated systems (spam/abuse classifiers) to prioritize urgent cases.
  2. Human review: A trained moderator reviews prioritized reports and takes action.
  3. Possible actions: content removal, warning, temporary suspension, permanent ban, visibility reduction, or referral to law enforcement.
  4. Notifications: We will notify reporters and affected users when action is taken where appropriate and permitted.
  5. Appeals: If you or someone you reported disagrees, an appeal can be submitted (see Appeals section).

Typical timelines: urgent reports (self-harm, minors, imminent threats) are expedited; other reports are processed in order. We aim to acknowledge or act quickly, but response times may vary depending on volume.
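
For illustration only, here is a minimal, hypothetical sketch of how automated triage might order reports before human review. The category names, fields, and prioritization rule are assumptions made for this example and do not describe Soultrob's actual systems.

  # Hypothetical triage sketch: urgent categories are surfaced first,
  # then older reports before newer ones within each tier.
  from dataclasses import dataclass

  URGENT_CATEGORIES = {"self-harm", "minors", "imminent-threat"}  # illustrative labels

  @dataclass
  class Report:
      report_id: str
      category: str
      created_at: float  # Unix timestamp of submission

  def triage(reports: list[Report]) -> list[Report]:
      """Sort so urgent categories come first, oldest first within each tier."""
      return sorted(
          reports,
          key=lambda r: (r.category not in URGENT_CATEGORIES, r.created_at),
      )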

Appeals & disputes

If content was removed or action was taken against your account and you believe this was incorrect, you may appeal. The appeals process typically follows these steps:

  1. Submit an appeal via the link in the action notification or via [email protected] with a clear explanation.
  2. An independent reviewer (not the initial reviewer when possible) examines the case.
  3. We respond with the outcome and rationale. Further escalation is available for complex cases.

Note: appeals are reviewed in order and may take longer during high-volume periods.

Our moderation approach

We use a mix of automated systems and human teams to keep the platform safe:

  • Automated detection: machine learning models and rule-based filters detect spam, explicit content, and policy violations.
  • Human moderation: trained moderators evaluate edge cases, appeals, and sensitive reports.
  • Trusted flaggers: vetted partners and organizations may flag content for priority review.
  • Third-party moderation partners: we may use external vendors for additional moderation capacity.

Transparency: automated tools can make mistakes. If content was incorrectly flagged, please use the appeals process.
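
As a further illustration, the sketch below shows one way a flagged item could be routed between automated action and human review by combining a classifier score with trusted-flagger status. The thresholds, field names, and outcomes are assumptions for the example, not a description of Soultrob's production pipeline.

  # Hypothetical routing sketch: combine an automated classifier score with
  # trusted-flagger status to decide which review queue an item enters.
  from dataclasses import dataclass

  @dataclass
  class FlaggedItem:
      content_id: str
      classifier_score: float     # 0.0 (likely benign) to 1.0 (likely violating)
      from_trusted_flagger: bool  # flagged by a vetted partner organization

  def route(item: FlaggedItem) -> str:
      if item.classifier_score >= 0.98:
          return "automated action, queued for human confirmation"
      if item.from_trusted_flagger or item.classifier_score >= 0.70:
          return "priority human review"
      return "standard human review"

Because automated tools can make mistakes, decisions like these remain subject to human review and appeal.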

Data we collect while investigating

To investigate and address reports we may retain and process:

  • Post content, comments, and attached media.
  • Reporter-provided materials (screenshots, messages).
  • Metadata: IP addresses, device identifiers, timestamps, and interaction logs.

This data is retained per our retention policy and used solely for safety, compliance, and legal purposes.

Law enforcement & legal requests

Soultrob will comply with valid legal requests (subpoenas, court orders). When appropriate and lawful, we will attempt to notify affected users before complying, unless prohibited. Emergency or exigent requests involving imminent harm may be handled on an expedited basis and may involve disclosure to authorities.

Law enforcement inquiries should be directed to our legal team at [email protected] and follow the guidelines in our Terms of Service.

Safety features you can use right now

  • Report: report posts, comments, or profiles directly.

Practical safety tips

  • Don't share personal contact details (phone, address, government ID) in public posts.
  • Before posting a photo, strip location/EXIF metadata; many camera apps and editors can remove it for you (see the sketch after this list).
  • Use anonymous posting if you don't want your identity shared — but recognize anonymity has limits.
  • Be careful in direct messages with people you meet online; agree to video calls only when there is mutual, verified trust.
  • If someone threatens your safety, contact local authorities immediately and submit a report to us.
  • Educate yourself on phishing and account-security best practices (unique passwords, password manager).
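
If you prefer to strip photo metadata yourself, the following is a minimal sketch using the Pillow imaging library in Python. It re-saves only the pixel data, which drops EXIF tags such as embedded GPS location; the file names are placeholders.

  # Minimal sketch (using the Pillow library): re-save an image without its
  # EXIF metadata, removing embedded GPS location and camera details.
  from PIL import Image

  def strip_exif(src_path: str, dst_path: str) -> None:
      with Image.open(src_path) as img:
          clean = Image.new(img.mode, img.size)  # same size/mode, no metadata
          clean.putdata(list(img.getdata()))     # copy pixel data only
          clean.save(dst_path)

  strip_exif("photo.jpg", "photo_clean.jpg")     # placeholder file names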

Self-harm, suicide or exploitation — immediate help

If you or someone else is in immediate danger, call your local emergency services right away. If you are experiencing a mental health crisis or suicidal thoughts, please seek immediate help from local health services or a trusted person.

We are not a crisis hotline. Below are actions you can take:

  • Contact local emergency services or a crisis hotline in your country.
  • Reach out to a trusted friend, family member, or health professional.
  • Use regional resources (national mental health services) to find immediate support.

Note: If you report content involving immediate risk of harm, we will prioritize it for review and seek to provide appropriate referrals.

Protecting minors

Content involving minors (sexual content, exploitation, or abuse) is taken extremely seriously. We will remove it and report to appropriate authorities. If you suspect a minor is at risk, report the content immediately and include as much context as you can.

Parents and guardians: use device-level parental controls and have conversations with children about online safety. If you believe a child is in danger, contact local child protection services and law enforcement.

Moderator & staff safety

Moderating disturbing content can be stressful. We provide internal support, counseling resources, and rotation to ensure moderators receive support and time off as needed.

Transparency & reporting

We may publish a quarterly transparency report containing anonymized moderation metrics: number of reports, removal rates, appeals, and government requests. This helps our community hold us accountable and helps us improve moderation practices.

Technical safeguards we use

  • Rate-limiting and CAPTCHAs to slow automated abuse.
  • Automated classifiers to surface priority reports.
  • Image and media scanning for known illegal content via hash matching (a simplified sketch follows this list).
  • IP/device/user blocks and throttling for repeat offenders.
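
To illustrate the hash-matching idea above, here is a simplified, conceptual sketch that compares a file's cryptographic digest against a hypothetical set of known-bad hashes. Production systems generally rely on vetted industry hash lists and perceptual hashing rather than plain SHA-256, so treat this purely as an illustration of the concept.

  # Conceptual sketch of hash matching: hash an uploaded file and check it
  # against a (hypothetical) set of known-bad content hashes.
  import hashlib

  KNOWN_BAD_HASHES: set[str] = set()  # hypothetical: loaded from a vetted hash list

  def sha256_of_file(path: str) -> str:
      digest = hashlib.sha256()
      with open(path, "rb") as fh:
          for chunk in iter(lambda: fh.read(8192), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def is_known_bad(path: str) -> bool:
      return sha256_of_file(path) in KNOWN_BAD_HASHES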

Frequently Asked Questions

How long until I hear back after I report?

We triage urgent reports first. For non-urgent reports we aim to acknowledge or act within 24–72 hours; times vary with volume.

Will you share my identity if I report someone?

We treat reporter privacy carefully. In certain legal situations we may be required to disclose information to authorities; we will notify you unless legally prohibited.

Can I request content removal even if I didn't post it?

Yes — if you believe content violates our rules or harms your privacy, submit a report with evidence. We will review per our policy.

Changes to Safety practices

We update our safety practices periodically. Material changes will be announced in-app or via email (if you provided one). This page's "Effective date" will reflect the last major update.

Contact & escalation

For safety concerns, questions about the reporting tools, or escalations:
Email: [email protected]
Law enforcement & legal requests: [email protected]

For urgent threats, always contact local emergency services first.

Final notes

Safety is a community effort. If you see something dangerous or illegal, report it. If you're struggling, seek help. We are committed to improving this platform and learning from each incident. Please help us by using the reporting tools responsibly and providing accurate information when you report.