
DetectToxicContent

Performs toxicity analysis on the list of text strings that you provide as input. The API response contains a results list that matches the size of the input list. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.

Request Syntax

{ "LanguageCode": "string", "TextSegments": [ { "Text": "string" } ] }

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

The request accepts the following data in JSON format.

LanguageCode

The language of the input text. Currently, English is the only supported language.

Type: String

Valid Values: en

Required: Yes

TextSegments

A list of up to 10 text strings. Each string has a maximum size of 1 KB, and the maximum size of the list is 10 KB. A sketch of splitting longer text to fit these limits follows the parameter details below.

Type: Array of TextSegment objects

Array Members: Minimum number of 1 item.

Required: Yes
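Because of these limits, longer documents must be split by the caller before they are sent. The following is a minimal Python sketch of one way to do that; it assumes the 1 KB per-segment limit applies to the UTF-8 byte length of each string, and the helper name build_text_segments is illustrative, not part of the API.

MAX_SEGMENT_BYTES = 1000  # assumption: 1 KB limit applies to UTF-8 byte length
MAX_SEGMENTS = 10         # maximum number of entries per request

def build_text_segments(text):
    """Illustrative helper: split text into request-sized batches of segments."""
    segments = []
    chunk = ""
    for word in text.split():
        candidate = (chunk + " " + word).strip()
        if len(candidate.encode("utf-8")) > MAX_SEGMENT_BYTES and chunk:
            segments.append({"Text": chunk})
            chunk = word
        else:
            chunk = candidate
    if chunk:
        segments.append({"Text": chunk})
    # A single request accepts at most MAX_SEGMENTS entries; batch the rest.
    return [segments[i:i + MAX_SEGMENTS]
            for i in range(0, len(segments), MAX_SEGMENTS)]

Note that a single word longer than 1 KB would still overflow a segment; this sketch does not handle that edge case.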

Response Syntax

{ "ResultList": [ { "Labels": [ { "Name": "string", "Score": number } ], "Toxicity": number } ] }

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

ResultList

Results of the content moderation analysis. Each entry in the list contains the toxic content types identified in the corresponding text segment, along with a confidence score for each content type and an overall toxicity score for that segment. A sketch of reading these results follows the type information below.

Type: Array of ToxicLabels objects
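Each entry in ResultList corresponds positionally to the segment at the same index in the request's TextSegments list. The following Python sketch pairs inputs with results and flags entries above a threshold; response is the parsed JSON from a successful call, and the 0.5 cutoff is an arbitrary example, not a service recommendation.

THRESHOLD = 0.5  # example cutoff only; tune per application

def flag_toxic_segments(text_segments, response):
    """Pair each input segment with its result and flag high-toxicity entries."""
    flagged = []
    for segment, result in zip(text_segments, response["ResultList"]):
        if result["Toxicity"] >= THRESHOLD:
            labels = {label["Name"]: label["Score"] for label in result["Labels"]}
            flagged.append((segment["Text"], result["Toxicity"], labels))
    return flagged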

Errors

For information about the errors that are common to all actions, see Common Errors.

InternalServerException

An internal server error occurred. Retry your request.

HTTP Status Code: 500

InvalidRequestException

The request is invalid.

HTTP Status Code: 400

TextSizeLimitExceededException

The size of the input text exceeds the limit. Use a smaller document.

HTTP Status Code: 400

UnsupportedLanguageException

Amazon Comprehend can't process the language of the input text. For a list of supported languages, see Supported languages in the Comprehend Developer Guide.

HTTP Status Code: 400
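When the operation is called through an SDK, these errors surface as modeled exceptions. The following Python (boto3) sketch retries the transient 500-level error and treats the 400-level errors as request bugs; the retry count and backoff values are illustrative, and boto3 also applies its own retry policy by default.

import time

import boto3

comprehend = boto3.client("comprehend")

def detect_with_retry(segments, attempts=3):
    """Retry transient server errors; surface client errors immediately."""
    for attempt in range(attempts):
        try:
            return comprehend.detect_toxic_content(
                LanguageCode="en", TextSegments=segments)
        except comprehend.exceptions.InternalServerException:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff (illustrative)
        except (comprehend.exceptions.InvalidRequestException,
                comprehend.exceptions.TextSizeLimitExceededException,
                comprehend.exceptions.UnsupportedLanguageException):
            raise  # not retryable: correct the request instead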

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the SDK documentation for your language of choice.
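As an illustration, a minimal call with the AWS SDK for Python (boto3), which exposes this operation as detect_toxic_content, might look like this; the region, sample text, and printed fields are examples only.

import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")  # example region

response = comprehend.detect_toxic_content(
    LanguageCode="en",
    TextSegments=[{"Text": "You are a wonderful person."}],
)

for result in response["ResultList"]:
    print(f"Toxicity: {result['Toxicity']:.3f}")
    for label in result["Labels"]:
        print(f"  {label['Name']}: {label['Score']:.3f}")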
