Voice of the Customer - Q3 2022
Customer feedback is vital to our decision-making process at Grindr. We regularly pull qualitative and quantitative data from customer feedback, content moderation reports, UserVoice, app reviews, social media, and more to inform what we do as a company.
We want to share with you what we are hearing, and what we’re doing as a result.
Image Moderation
This quarter, our customer support team fielded some questions about how we moderate profile images. Specifically, we’re hearing from some people who believe our decisions about which profile images are approved vs. rejected are inconsistent, and who are questioning the process we use to review photos.
We’ve designed our image moderation system to be as fair and unbiased as possible, while still being efficient enough to approve images quickly. We’re always working to make our process fairer and faster, but here’s how it works right now:
- When you first upload a profile image, we run it through a Machine Learning system which checks for Community Guidelines violations, such as nudity. (You can read about our other Machine Learning system for text here.)
- Images that are found to be within our guidelines are approved
- Any image that is borderline, or probably violates our Community Guidelines, is sent for human review
- We do not auto-reject borderline photos – every borderline photo gets reviewed by a real person
- Our team then manually approves or rejects the borderline image
- When we reject an image, we’ll tell you why
- If an image is approved that shouldn’t have been, please report that image, and our human review team will take a look
- Similarly, if you would like to appeal an image rejection, contact our support team
It would be cheaper and more efficient for us to auto-reject photos based on the Machine Learning labels, in addition to auto-approving, but we don’t do that. We know that it can feel personal when an image is rejected, so we want to be as sure as we can that we’re making a fair decision whenever we don’t allow a photo to be posted.
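To make that flow concrete, here’s a minimal sketch of the decision logic in Python. This is illustrative only: the names (`classify_image`, `AUTO_APPROVE_THRESHOLD`) and the threshold value are hypothetical placeholders, not our production code.

```python
from enum import Enum


class Verdict(Enum):
    APPROVED = "approved"
    NEEDS_HUMAN_REVIEW = "needs_human_review"


# Hypothetical cutoff: scores below this are treated as clearly within guidelines.
AUTO_APPROVE_THRESHOLD = 0.2


def classify_image(image_bytes: bytes) -> float:
    """Placeholder for the Machine Learning model.

    Returns a violation score in [0.0, 1.0], where 0.0 means clearly
    within guidelines and 1.0 means a clear violation.
    """
    return 0.0  # stub so the sketch runs


def moderate_upload(image_bytes: bytes) -> Verdict:
    score = classify_image(image_bytes)

    if score < AUTO_APPROVE_THRESHOLD:
        # Clearly within guidelines: auto-approve.
        return Verdict.APPROVED

    # Borderline or probable violations are never auto-rejected.
    # A real person always makes the final call, and any rejection
    # comes with an explanation of why.
    return Verdict.NEEDS_HUMAN_REVIEW
```

Notice that there’s no auto-reject branch: the only two outcomes of the automated step are “approved” or “send to a person.”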
Specifically, we want to moderate nudity fairly. As an app, we must follow Apple and Google’s guidelines for nudity, yet we still want Grindr to be the expressive, sex-positive place that our community enjoys. We also have many members who are trans and non-binary, and although we have to follow the app stores' guidelines on female nipples, we do not want to reject images based on perceived gender. For this reason, we send potentially nude photos to our human review team, who see each image alongside the user’s self-reported gender when available.
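Continuing the hypothetical sketch above, the ticket a reviewer sees might bundle the image with the user’s self-reported gender, so nudity rules can be applied without guessing gender from the photo. Again, these field names are illustrative, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ReviewTicket:
    image_id: str
    violation_score: float
    # Included when the user has self-reported a gender, so reviewers
    # can apply the app stores' nudity rules without inferring gender
    # from the photo itself.
    self_reported_gender: Optional[str] = None
```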
We also must be careful not to allow any images that are pornographic (again: app store rules). Pornography is difficult to define, and you may not agree with where we draw the line, but in general we try to be as permissive as possible while still being allowed on the app stores. After all, we want to make sure the app remains available for everyone to use. (You can read more here.)
Finally, we’ve specifically trained our team to look out for personal bias when moderating photos. We address potential bias around race and ethnicity, gender, body hair, body shape or size, age, and more. It’s important to us that our community feels able to express themselves freely and joyfully, and our photo moderation system is designed to support that.
Ban Appeals and Privacy
Data privacy, access, and control are things that people are becoming more and more concerned about. We think this is a great thing! Grindr has a privacy-by-design philosophy.
We sometimes hear a complaint from banned users: “I got no explanation for my ban.” However, it’s important to protect the privacy of users who may have reported an issue. We can’t say “you were reported for [X] reason” because that could help the banned user identify who reported them, putting the reporting user at risk of retaliation.
This makes some people think we’re “banning for no reason,” but the truth is we can’t share the reason without putting someone else at risk, so in this particular case we prioritize safety over transparency. We’re okay with that tradeoff. (You can read more about why “banning for no reason” makes no business sense in this blog post I wrote last year.)
We can never reveal details about someone else’s profile or reports. We’ve written more information about how we investigate and enforce our Community Guidelines here, along with information on our industry-leading ban appeal process.