Use StartSpeechSynthesisTask with an AWS SDK or CLI - AWS SDK Code Examples

There are more AWS SDK examples available in the AWS Doc SDK Examples GitHub repo.


Use StartSpeechSynthesisTask with an AWS SDK or CLI

The following code examples show how to use StartSpeechSynthesisTask.

CLI
AWS CLI

To synthesize text

The following start-speech-synthesis-task example synthesizes the text in text_file.txt and stores the resulting MP3 file in the specified Amazon S3 bucket.

aws polly start-speech-synthesis-task \
    --output-format mp3 \
    --output-s3-bucket-name my-s3-bucket \
    --text file://text_file.txt \
    --voice-id Joanna

Output:

{ "SynthesisTask": { "TaskId": "70b61c0f-57ce-4715-a247-cae8729dcce9", "TaskStatus": "scheduled", "OutputUri": "https://s3.us-east-2.amazonaws.com/my-s3-bucket/70b61c0f-57ce-4715-a247-cae8729dcce9.mp3", "CreationTime": 1603911042.689, "RequestCharacters": 1311, "OutputFormat": "mp3", "TextType": "text", "VoiceId": "Joanna" } }

For more information, see Creating Long Audio Files (CLI) in the Amazon Polly Developer Guide.
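The TaskStatus of scheduled in the output above reflects that the task runs asynchronously. You can check on it later with aws polly get-speech-synthesis-task --task-id <task-id>, or from code. A minimal Boto3 sketch, using the task ID from the sample output above:

import boto3

polly = boto3.client("polly")

# Task ID taken from the sample output above.
task = polly.get_speech_synthesis_task(
    TaskId="70b61c0f-57ce-4715-a247-cae8729dcce9"
)["SynthesisTask"]
print(task["TaskStatus"], task["OutputUri"])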

  • For API details, see StartSpeechSynthesisTask in the AWS CLI Command Reference.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set up and run in the AWS Code Examples Repository.

import json
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class PollyWrapper:
    """Encapsulates Amazon Polly functions."""

    def __init__(self, polly_client, s3_resource):
        """
        :param polly_client: A Boto3 Amazon Polly client.
        :param s3_resource: A Boto3 Amazon Simple Storage Service (Amazon S3)
                            resource.
        """
        self.polly_client = polly_client
        self.s3_resource = s3_resource
        self.voice_metadata = None

    def do_synthesis_task(
        self,
        text,
        engine,
        voice,
        audio_format,
        s3_bucket,
        lang_code=None,
        include_visemes=False,
        wait_callback=None,
    ):
        """
        Start an asynchronous task to synthesize speech or speech marks, wait
        for the task to complete, retrieve the output from Amazon S3, and
        return the data.

        An asynchronous task is required when the text is too long for
        near-real time synthesis.

        :param text: The text to synthesize.
        :param engine: The kind of engine used. Can be standard or neural.
        :param voice: The ID of the voice to use.
        :param audio_format: The audio format to return for synthesized speech.
                             When speech marks are synthesized, the output
                             format is JSON.
        :param s3_bucket: The name of an existing Amazon S3 bucket that you
                          have write access to. Synthesis output is written to
                          this bucket.
        :param lang_code: The language code of the voice to use. This has an
                          effect only when a bilingual voice is selected.
        :param include_visemes: When True, a second request is made to Amazon
                                Polly to synthesize a list of visemes, using
                                the specified text and voice. A viseme
                                represents the visual position of the face and
                                mouth when saying part of a word.
        :param wait_callback: A callback function that is called periodically
                              during task processing, to give the caller an
                              opportunity to take action, such as to display
                              status.
        :return: The audio stream that contains the synthesized speech and a
                 list of visemes that are associated with the speech audio.
        """
        try:
            kwargs = {
                "Engine": engine,
                "OutputFormat": audio_format,
                "OutputS3BucketName": s3_bucket,
                "Text": text,
                "VoiceId": voice,
            }
            if lang_code is not None:
                kwargs["LanguageCode"] = lang_code
            response = self.polly_client.start_speech_synthesis_task(**kwargs)
            speech_task = response["SynthesisTask"]
            logger.info("Started speech synthesis task %s.", speech_task["TaskId"])

            viseme_task = None
            if include_visemes:
                kwargs["OutputFormat"] = "json"
                kwargs["SpeechMarkTypes"] = ["viseme"]
                response = self.polly_client.start_speech_synthesis_task(**kwargs)
                viseme_task = response["SynthesisTask"]
                logger.info("Started viseme synthesis task %s.", viseme_task["TaskId"])
        except ClientError:
            logger.exception("Couldn't start synthesis task.")
            raise
        else:
            bucket = self.s3_resource.Bucket(s3_bucket)
            audio_stream = self._wait_for_task(
                10, speech_task["TaskId"], "speech", wait_callback, bucket
            )

            visemes = None
            if include_visemes:
                viseme_data = self._wait_for_task(
                    10, viseme_task["TaskId"], "viseme", wait_callback, bucket
                )
                visemes = [
                    json.loads(v) for v in viseme_data.read().decode().split() if v
                ]
            return audio_stream, visemes
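The wrapper delegates waiting and retrieval to self._wait_for_task, which is not included in this excerpt. The sketch below is a hypothetical stand-in, not the repository's helper: it polls GetSpeechSynthesisTask until the task finishes, then reads the output object whose key is the last path segment of OutputUri.

import time


def wait_for_polly_task(polly_client, s3_resource, task_id, bucket_name, poll_seconds=10):
    """Hypothetical polling helper: wait for a synthesis task and return its output stream."""
    while True:
        task = polly_client.get_speech_synthesis_task(TaskId=task_id)["SynthesisTask"]
        if task["TaskStatus"] in ("completed", "failed"):
            break
        time.sleep(poll_seconds)
    if task["TaskStatus"] == "failed":
        raise RuntimeError(task.get("TaskStatusReason", "Synthesis task failed."))
    # Amazon Polly names the output object after the task; the key is the
    # last path segment of OutputUri.
    key = task["OutputUri"].rsplit("/", 1)[-1]
    return s3_resource.Object(bucket_name, key).get()["Body"]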
  • For API details, see StartSpeechSynthesisTask in the AWS SDK for Python (Boto3) API Reference.
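A minimal usage sketch of the wrapper above. The bucket name and input text are placeholders rather than part of the original example; the bucket must already exist and be writable with your credentials.

import boto3

polly_wrapper = PollyWrapper(boto3.client("polly"), boto3.resource("s3"))

audio_stream, visemes = polly_wrapper.do_synthesis_task(
    text="A passage that is too long for near-real time synthesis...",
    engine="standard",
    voice="Joanna",
    audio_format="mp3",
    s3_bucket="amzn-s3-demo-bucket",  # placeholder: an existing, writable bucket
    include_visemes=True,
)
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(audio_stream.read())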