The Text Toxicity Capture API is designed to analyze and classify toxicity in text fragments. This API helps maintain safe and respectful conversation spaces by accurately assessing potentially harmful content. When any text is submitted as input, the system returns an overall toxicity score, a clear classification (such as “non_toxic” or “toxic”), and a detailed breakdown by category, including general toxicity, severe toxicity, obscene language, threats, insults, and identity-based hate.
Powered by advanced natural language processing (NLP) models, the API identifies subtle nuances in language, detecting even disguised abuse, passive aggression, and polarizing language. Each analysis includes confidence levels to support automated decisions or human-assisted moderation.
To use this endpoint, simply provide the text you want to analyze for toxicity.
Toxicity detection - Endpoint Features
| Object | Description |
|---|---|
| Request Body | [Required] JSON |

Example response:

```json
{
  "request_id": "a92c6fa4-2649-4a1b-9c2e-0af536a77e17",
  "overall_score": 0.2841,
  "classification": "toxic",
  "confidence": 0.2841,
  "category_scores": {
    "toxic": 0.2841,
    "severe_toxic": 0.003,
    "obscene": 0.0075,
    "threat": 0.0313,
    "insult": 0.0505,
    "identity_hate": 0.0417
  }
}
```

Example request:

```shell
curl --location --request POST 'https://zylalabs.com/api/7802/text+toxicity+capture+api/12774/toxicity+detection' \
--header 'Authorization: Bearer YOUR_API_KEY' \
--data-raw '{
    "text": "I hate you.."
}'
```
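The same call can be made from Python. This is a minimal sketch using only the standard library; the URL and Bearer header come from the curl example above, and `YOUR_API_KEY` is a placeholder for your subscription key.

```python
import json
import urllib.request

API_URL = "https://zylalabs.com/api/7802/text+toxicity+capture+api/12774/toxicity+detection"

def build_request(text: str, api_key: str) -> urllib.request.Request:
    """Construct the POST request with the Bearer token and JSON body."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def analyze_toxicity(text: str, api_key: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(text, api_key), timeout=10) as resp:
        return json.load(resp)
```

Calling `analyze_toxicity("I hate you..", "YOUR_API_KEY")` returns the parsed response shown above as a Python dict.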
| Header | Description |
|---|---|
| Authorization | [Required] Should be `Bearer access_key`. See "Your API Access Key" above once you are subscribed. |
No long-term commitment. Upgrade, downgrade, or cancel anytime. Free Trial includes up to 50 requests.
The API returns an overall toxicity score, a classification label (e.g., "non_toxic" or "toxic"), and a detailed breakdown of toxicity categories such as general toxicity, severe toxicity, obscene language, threats, insults, and identity-based hate.
Key fields in the response include "overall_score," "classification," and "confidence," plus a "category_scores" breakdown with per-category scores for "toxic," "severe_toxic," "obscene," "threat," "insult," and "identity_hate."
The response data is structured as JSON: a main object containing the request ID, overall toxicity score, classification, and confidence, plus a nested "category_scores" object mapping each toxicity category to its score.
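A short sketch of reading this nested structure: pull the overall score and classification from the top level, then inspect the `category_scores` object (the raw string below is the sample response shown earlier).

```python
import json

# Sample response from the documentation above.
raw = ('{"request_id":"a92c6fa4-2649-4a1b-9c2e-0af536a77e17",'
       '"overall_score":0.2841,"classification":"toxic","confidence":0.2841,'
       '"category_scores":{"toxic":0.2841,"severe_toxic":0.003,"obscene":0.0075,'
       '"threat":0.0313,"insult":0.0505,"identity_hate":0.0417}}')

data = json.loads(raw)
print(data["classification"], data["overall_score"])  # toxic 0.2841

# Find the highest-scoring category in the breakdown.
worst = max(data["category_scores"], key=data["category_scores"].get)
print(worst)  # toxic
```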
The primary parameter for the POST endpoint is the "text" field, where users input the text they want to analyze for toxicity. Additional parameters may include language settings or specific toxicity categories to focus on.
Data accuracy is maintained through advanced natural language processing (NLP) models that are regularly updated and trained on diverse datasets to recognize subtle language nuances and evolving expressions of toxicity.
Typical use cases include moderating online forums, analyzing user-generated content for harmful language, enhancing community guidelines, and developing tools for safe communication in chat applications.
Users can utilize the returned data by integrating the toxicity scores and classifications into moderation workflows, triggering alerts for high toxicity levels, or generating reports to assess community health and safety.
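A hypothetical sketch of such a moderation workflow: map each response to an action based on its scores. The field names match the sample response above; the 0.5 and 0.8 thresholds are illustrative choices, not part of the API.

```python
ALERT_THRESHOLD = 0.8   # auto-remove or escalate (illustrative value)
REVIEW_THRESHOLD = 0.5  # queue for human review (illustrative value)

def moderation_action(result: dict) -> str:
    """Map an API response to an action: 'allow', 'review', or 'alert'."""
    score = result["overall_score"]
    if score >= ALERT_THRESHOLD:
        return "alert"
    # Flag borderline content, or anything with a high threat sub-score.
    if score >= REVIEW_THRESHOLD or result["category_scores"].get("threat", 0) >= REVIEW_THRESHOLD:
        return "review"
    return "allow"

sample = {
    "overall_score": 0.2841,
    "classification": "toxic",
    "category_scores": {"toxic": 0.2841, "threat": 0.0313},
}
print(moderation_action(sample))  # allow
```

In production, the "review" branch would typically feed a human-moderation queue, while "alert" could trigger automatic removal and reporting.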
Quality checks include continuous model evaluation against real-world data, user feedback loops, and performance metrics to ensure the API accurately detects and classifies toxicity across various contexts and languages.
Zyla API Hub is like a big store for APIs, where you can find thousands of them all in one place. We also offer dedicated support and real-time monitoring of all APIs. Once you sign up, you can pick and choose which APIs you want to use. Just remember, each API needs its own subscription. But if you subscribe to multiple ones, you'll use the same key for all of them, making things easier for you.
Service Level:
100%
Response Time:
65ms