AWS Doc SDK Examples
Use SynthesizeSpeech with an AWS SDK

The following code examples show how to use SynthesizeSpeech.
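Each example below targets a specific SDK, but the request shape is the same across languages: you provide the text, a voice ID, and an output format, and the service returns an audio stream. The following minimal sketch uses the Python (Boto3) client to illustrate that shape; the voice, output format, and file name are illustrative choices, not taken from the examples below.

import boto3

# Minimal sketch of the SynthesizeSpeech call shape.
# VoiceId, OutputFormat, and the output file name are illustrative choices.
polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The response contains a streaming AudioStream; write it to disk.
with open("hello.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())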
- .NET
- AWS SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

using System;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeech
{
    public static async Task Main()
    {
        string outputFileName = "speech.mp3";
        string text = "Twas brillig, and the slithy toves did gyre and gimbol in the wabe";

        var client = new AmazonPollyClient();
        var response = await PollySynthesizeSpeech(client, text);

        WriteSpeechToStream(response.AudioStream, outputFileName);
    }

    /// <summary>
    /// Calls the Amazon Polly SynthesizeSpeechAsync method to convert text
    /// to speech.
    /// </summary>
    /// <param name="client">The Amazon Polly client object used to connect
    /// to the Amazon Polly service.</param>
    /// <param name="text">The text to convert to speech.</param>
    /// <returns>A SynthesizeSpeechResponse object that includes an AudioStream
    /// object with the converted text.</returns>
    private static async Task<SynthesizeSpeechResponse> PollySynthesizeSpeech(IAmazonPolly client, string text)
    {
        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Mp3,
            VoiceId = VoiceId.Joanna,
            Text = text,
        };

        var synthesizeSpeechResponse = await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);

        return synthesizeSpeechResponse;
    }

    /// <summary>
    /// Writes the AudioStream returned from the call to
    /// SynthesizeSpeechAsync to a file in MP3 format.
    /// </summary>
    /// <param name="audioStream">The AudioStream returned from the
    /// call to the SynthesizeSpeechAsync method.</param>
    /// <param name="outputFileName">The full path to the file in which to
    /// save the audio stream.</param>
    private static void WriteSpeechToStream(Stream audioStream, string outputFileName)
    {
        var outputStream = new FileStream(
            outputFileName,
            FileMode.Create,
            FileAccess.Write);
        byte[] buffer = new byte[2 * 1024];
        int readBytes;

        while ((readBytes = audioStream.Read(buffer, 0, 2 * 1024)) > 0)
        {
            outputStream.Write(buffer, 0, readBytes);
        }

        // Flushes the buffer to avoid losing the last second or so of
        // the synthesized text.
        outputStream.Flush();
        Console.WriteLine($"Saved {outputFileName} to disk.");
    }
}
Synthesize speech from text, including speech marks, with Amazon Polly using an AWS SDK.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public class SynthesizeSpeechMarks
{
    public static async Task Main()
    {
        var client = new AmazonPollyClient();
        string outputFileName = "speechMarks.json";

        var synthesizeSpeechRequest = new SynthesizeSpeechRequest()
        {
            OutputFormat = OutputFormat.Json,
            SpeechMarkTypes = new List<string>
            {
                SpeechMarkType.Viseme,
                SpeechMarkType.Word,
            },
            VoiceId = VoiceId.Joanna,
            Text = "This is a sample text to be synthesized.",
        };

        try
        {
            using (var outputStream = new FileStream(outputFileName, FileMode.Create, FileAccess.Write))
            {
                var synthesizeSpeechResponse = await client.SynthesizeSpeechAsync(synthesizeSpeechRequest);
                var buffer = new byte[2 * 1024];
                int readBytes;

                var inputStream = synthesizeSpeechResponse.AudioStream;
                while ((readBytes = inputStream.Read(buffer, 0, 2 * 1024)) > 0)
                {
                    outputStream.Write(buffer, 0, readBytes);
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }
    }
}
For API details, see SynthesizeSpeech in the AWS SDK for .NET API Reference.
- Java
- SDK for Java 2.x
Note
There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

import javazoom.jl.decoder.JavaLayerException;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.polly.PollyClient;
import software.amazon.awssdk.services.polly.model.DescribeVoicesRequest;
import software.amazon.awssdk.services.polly.model.Voice;
import software.amazon.awssdk.services.polly.model.DescribeVoicesResponse;
import software.amazon.awssdk.services.polly.model.OutputFormat;
import software.amazon.awssdk.services.polly.model.PollyException;
import software.amazon.awssdk.services.polly.model.SynthesizeSpeechRequest;
import software.amazon.awssdk.services.polly.model.SynthesizeSpeechResponse;
import java.io.IOException;
import java.io.InputStream;
import javazoom.jl.player.advanced.AdvancedPlayer;
import javazoom.jl.player.advanced.PlaybackEvent;
import javazoom.jl.player.advanced.PlaybackListener;

/**
 * Before running this Java V2 code example, set up your development
 * environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class PollyDemo {
    private static final String SAMPLE = "Congratulations. You have successfully built this working demo "
            + " of Amazon Polly in Java Version 2. Have fun building voice enabled apps with Amazon Polly (that's me!), and always "
            + " look at the AWS website for tips and tricks on using Amazon Polly and other great services from AWS";

    public static void main(String[] args) {
        PollyClient polly = PollyClient.builder()
                .region(Region.US_WEST_2)
                .build();

        talkPolly(polly);
        polly.close();
    }

    public static void talkPolly(PollyClient polly) {
        try {
            DescribeVoicesRequest describeVoiceRequest = DescribeVoicesRequest.builder()
                    .engine("standard")
                    .build();

            DescribeVoicesResponse describeVoicesResult = polly.describeVoices(describeVoiceRequest);
            Voice voice = describeVoicesResult.voices().stream()
                    .filter(v -> v.name().equals("Joanna"))
                    .findFirst()
                    .orElseThrow(() -> new RuntimeException("Voice not found"));

            InputStream stream = synthesize(polly, SAMPLE, voice, OutputFormat.MP3);
            AdvancedPlayer player = new AdvancedPlayer(stream,
                    javazoom.jl.player.FactoryRegistry.systemRegistry().createAudioDevice());

            player.setPlayBackListener(new PlaybackListener() {
                public void playbackStarted(PlaybackEvent evt) {
                    System.out.println("Playback started");
                    System.out.println(SAMPLE);
                }

                public void playbackFinished(PlaybackEvent evt) {
                    System.out.println("Playback finished");
                }
            });

            // play it!
            player.play();

        } catch (PollyException | JavaLayerException | IOException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    public static InputStream synthesize(PollyClient polly, String text, Voice voice, OutputFormat format)
            throws IOException {
        SynthesizeSpeechRequest synthReq = SynthesizeSpeechRequest.builder()
                .text(text)
                .voiceId(voice.id())
                .outputFormat(format)
                .build();

        ResponseInputStream<SynthesizeSpeechResponse> synthRes = polly.synthesizeSpeech(synthReq);
        return synthRes;
    }
}
For API details, see SynthesizeSpeech in the AWS SDK for Java 2.x API Reference.
- Python
- SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

import json
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


class PollyWrapper:
    """Encapsulates Amazon Polly functions."""

    def __init__(self, polly_client, s3_resource):
        """
        :param polly_client: A Boto3 Amazon Polly client.
        :param s3_resource: A Boto3 Amazon Simple Storage Service (Amazon S3) resource.
        """
        self.polly_client = polly_client
        self.s3_resource = s3_resource
        self.voice_metadata = None

    def synthesize(
        self, text, engine, voice, audio_format, lang_code=None, include_visemes=False
    ):
        """
        Synthesizes speech or speech marks from text, using the specified voice.

        :param text: The text to synthesize.
        :param engine: The kind of engine used. Can be standard or neural.
        :param voice: The ID of the voice to use.
        :param audio_format: The audio format to return for synthesized speech. When
                             speech marks are synthesized, the output format is JSON.
        :param lang_code: The language code of the voice to use. This has an effect
                          only when a bilingual voice is selected.
        :param include_visemes: When True, a second request is made to Amazon Polly
                                to synthesize a list of visemes, using the specified
                                text and voice. A viseme represents the visual position
                                of the face and mouth when saying part of a word.
        :return: The audio stream that contains the synthesized speech and a list
                 of visemes that are associated with the speech audio.
        """
        try:
            kwargs = {
                "Engine": engine,
                "OutputFormat": audio_format,
                "Text": text,
                "VoiceId": voice,
            }
            if lang_code is not None:
                kwargs["LanguageCode"] = lang_code
            response = self.polly_client.synthesize_speech(**kwargs)
            audio_stream = response["AudioStream"]
            logger.info("Got audio stream spoken by %s.", voice)
            visemes = None
            if include_visemes:
                kwargs["OutputFormat"] = "json"
                kwargs["SpeechMarkTypes"] = ["viseme"]
                response = self.polly_client.synthesize_speech(**kwargs)
                visemes = [
                    json.loads(v)
                    for v in response["AudioStream"].read().decode().split()
                    if v
                ]
                logger.info("Got %s visemes.", len(visemes))
        except ClientError:
            logger.exception("Couldn't get audio stream.")
            raise
        else:
            return audio_stream, visemes
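The wrapper above only issues the requests; a minimal usage sketch, assuming default credentials and an AWS Region are already configured, might look like the following. The text, engine, voice, and format values are illustrative and not part of the example above.

import boto3

# Hypothetical driver for the PollyWrapper class shown above.
# All parameter values below are illustrative.
polly_wrapper = PollyWrapper(boto3.client("polly"), boto3.resource("s3"))
audio_stream, visemes = polly_wrapper.synthesize(
    text="Hello from Amazon Polly.",
    engine="standard",
    voice="Joanna",
    audio_format="mp3",
    include_visemes=True,
)

# The audio stream is a botocore StreamingBody; save it as an MP3 file.
with open("speech.mp3", "wb") as audio_file:
    audio_file.write(audio_stream.read())
print(f"Wrote speech.mp3 and received {len(visemes)} visemes.")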
For API details, see SynthesizeSpeech in the AWS SDK for Python (Boto3) API Reference.
- Ruby
- SDK for Ruby
Note
There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

require 'aws-sdk-polly' # In v2: require 'aws-sdk'

begin
  # Get the filename from the command line
  if ARGV.empty?
    puts 'You must supply a filename'
    exit 1
  end

  filename = ARGV[0]

  # Open file and get the contents as a string
  if File.exist?(filename)
    contents = IO.read(filename)
  else
    puts "No such file: #{filename}"
    exit 1
  end

  # Create an Amazon Polly client using
  # credentials from the shared credentials file ~/.aws/credentials
  # and the configuration (region) from the shared configuration file ~/.aws/config
  polly = Aws::Polly::Client.new

  resp = polly.synthesize_speech({
    output_format: 'mp3',
    text: contents,
    voice_id: 'Joanna'
  })

  # Save output
  # Get just the file name
  #  abc/xyz.txt -> xyz.txt
  name = File.basename(filename)

  # Split up name so we get just the xyz part
  parts = name.split('.')
  first_part = parts[0]
  mp3_file = "#{first_part}.mp3"

  IO.copy_stream(resp.audio_stream, mp3_file)

  puts "Wrote MP3 content to: #{mp3_file}"
rescue StandardError => e
  puts 'Got error:'
  puts 'Error message:'
  puts e.message
end
For API details, see SynthesizeSpeech in the AWS SDK for Ruby API Reference.
- Rust
- SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run it in the AWS Code Examples Repository.

async fn synthesize(client: &Client, filename: &str) -> Result<(), Error> {
    let content = fs::read_to_string(filename);

    let resp = client
        .synthesize_speech()
        .output_format(OutputFormat::Mp3)
        .text(content.unwrap())
        .voice_id(VoiceId::Joanna)
        .send()
        .await?;

    // Get MP3 data from response and save it
    let mut blob = resp
        .audio_stream
        .collect()
        .await
        .expect("failed to read data");

    let parts: Vec<&str> = filename.split('.').collect();
    let out_file = format!("{}{}", String::from(parts[0]), ".mp3");

    let mut file = tokio::fs::File::create(out_file)
        .await
        .expect("failed to create file");

    file.write_all_buf(&mut blob)
        .await
        .expect("failed to write to file");

    Ok(())
}
For API details, see SynthesizeSpeech in the AWS SDK for Rust API reference.