Much of a fraud team's activity occurs after the fact: an account has been breached, credit card numbers have been stolen and resold on the dark web, customer names have been hacked from a database. The job of a fraud investigator is to follow the leads to figure out the identities and locations of the perpetrators, determine how they got in and understand their intentions. All of this typically happens in response to an incident that has already occurred, in an attempt to contain it and minimize the damage.
Trust and safety, on the other hand, focuses on preventing negative experiences through policies and process improvements, and it goes beyond a fraud-only view. It's more about deterrence than remediation: putting systems in place to discourage bad actors rather than dealing with the fallout from incidents. But when incidents do happen, trust and safety teams need the resources and tools to investigate and act quickly.
Trust and safety teams play an increasingly critical role in helping companies protect customers from harmful content and preserve brand reputation. While automated moderation systems may flag content and usage violations as well as fraud, analysts need to understand the whole story behind an issue, which often requires deeper research.
In some cases, determining whom you can trust is relatively easy. For example, a new customer with a nonsensical email address who makes a large number of small purchases in a short time frame can be flagged automatically. The customer can still try to process the latest purchase but would first need to speak to a customer service representative to explain the situation. Maybe it's a legitimate situation, maybe it's a case of fraud.
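Rules like this are often expressed as simple heuristics long before any machine learning enters the picture. The sketch below illustrates the idea in Python; the thresholds, the `Order` shape and the `looks_nonsensical()` check are hypothetical stand-ins for illustration, not a reference implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
import re

@dataclass
class Order:
    customer_email: str
    amount: float
    placed_at: datetime

# Hypothetical thresholds -- real systems tune these against labeled data.
SMALL_PURCHASE_LIMIT = 20.00          # dollars
VELOCITY_WINDOW = timedelta(hours=24)
MAX_SMALL_PURCHASES = 5

def looks_nonsensical(email: str) -> bool:
    """Crude stand-in for an email-reputation check: flag long digit runs
    or long consonant-only strings in the local part of the address."""
    local = email.split("@")[0]
    return bool(re.search(r"\d{6,}", local)) or bool(
        re.fullmatch(r"[bcdfghjklmnpqrstvwxyz]{8,}", local)
    )

def should_flag(orders: list[Order], now: datetime) -> bool:
    """Flag when a suspicious email pairs with many small, recent purchases."""
    recent_small = [
        o for o in orders
        if now - o.placed_at <= VELOCITY_WINDOW
        and o.amount <= SMALL_PURCHASE_LIMIT
    ]
    if not recent_small:
        return False
    return (
        looks_nonsensical(recent_small[0].customer_email)
        and len(recent_small) >= MAX_SMALL_PURCHASES
    )
```

A hit from a rule like this would typically route the account to a manual-review queue, matching the "speak to a representative first" flow described above, rather than blocking the purchase outright.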
And while it may be easy to spot deception in some cases, many circumstances require trust and safety teams to conduct deeper investigations in response to system flags. Researchers need to monitor online activity, keep an eye on marketplaces known to sell counterfeit goods or stolen information, and even observe what's happening on dark web forums where criminals congregate. Sometimes it means engaging with bad actors directly, going undercover, to get to the bottom of what they are planning.
Trust and safety really boils down to the team's ability to identify potentially harmful situations, respond to them in a timely manner, monitor user activity for actions that don't meet security or acceptable use requirements, and set up policies and safeguards that create an environment of trust between all parties involved.
Although online communities and services rose to prominence in the past 20 years, the U.S. government began regulating content even earlier. Section 230 of the U.S. Communications Decency Act (CDA) of 1996 gives platforms immunity from being held responsible for third-party content. That immunity has limits, however: it does not extend to federally illegal material (e.g., sex trafficking content, copyright infringement), which platforms are still obligated to remove. Because businesses cannot turn a blind eye without inviting risk, trust and safety teams need to establish and enforce usage policies. Trust and safety is further complicated by a lack of global standards: currently there is a patchwork of regulations around the world, with the EU, in particular, carrying hefty fines.
Automated content moderation and fraud detection are no longer enough to effectively manage online environments. Trust and safety teams often need to dive deeper into issues, and that’s where the risks start piling up. Analysts don’t know where an investigation may lead. Digging below the surface could require interaction with bad actors and malicious sites, which can introduce significant risk — to the analysts, the organization and the integrity of investigations.
Dealing with untrusted content and environments is risky. Investigators need to gain a complete picture for analysis and establish a chain of evidence, but do it safely and securely. If an analyst's presence or identity is exposed, targets may be tipped off and disappear. Or worse, they might retaliate, with anything from phishing, malware and DDoS attacks on enterprise networks to personal threats against investigators.
Successful investigations require eliminating as much risk as possible, and that starts with securely isolating and anonymizing browsing. For the most stringent protection, investigations need virtual browsing and managed attribution, which enable analysts to separate research activity from the corporate network and customize their online presence for hyper-secure anonymity. Combining these capabilities gives trust and safety teams the power to shield identities, devices and enterprise resources from risk.
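In practice, managed attribution means controlling the details a website can observe about a session: the egress IP address, user agent, language preferences and so on. The snippet below is a minimal sketch of the network-level piece using Python's requests library; the proxy endpoint, URL and header values are assumptions chosen for illustration, and a real platform manages full browser fingerprints, not just HTTP headers.

```python
import requests  # SOCKS support requires: pip install requests[socks]

# Hypothetical egress point -- a real managed-attribution platform offers a
# pool of geographically distributed exits, not one hardcoded proxy.
PROXY = "socks5h://proxy.example.net:1080"  # socks5h resolves DNS at the proxy

# Present a browser profile consistent with the chosen egress region.
# Mismatched details (e.g., a German exit IP paired with a US-only locale)
# are exactly the kind of inconsistency that tips off a target.
session = requests.Session()
session.proxies = {"http": PROXY, "https": PROXY}
session.headers.update({
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept-Language": "de-DE,de;q=0.9",
})

# Research traffic now leaves via the proxy with the curated profile,
# never exposing the analyst's own IP address or default browser headers.
response = session.get("https://marketplace.example/listing/123", timeout=30)
print(response.status_code)
```

Even a sketch like this addresses only attribution. The virtual-browsing half, running the browser itself on isolated, disposable infrastructure, is what keeps any malware encountered during research away from analyst devices and the enterprise network.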
Why trust and safety is risky business: Know the risks you take on while conducting a trust and safety investigation, how adversaries could retaliate and how to counteract these risks.
What is trust and safety?: Trust and safety has become a critical business function for digital platforms and services. We look at why it’s so important, how it works and common challenges.
Trust and safety: rebranding of an old concept, or a new way to look at customer interaction?: What is a trust and safety team? What's their mission? How are trust and safety online investigators different from payment fraud and account takeover (ATO) investigators?
Why online investigators need managed attribution: Why is it important that online investigators blend into digital environments and remain anonymous to the target of their investigation?