This example application analyzes and stores customer feedback cards. Specifically,
it fulfills the need of a fictitious hotel in New York City. The hotel receives feedback
from guests in various languages in the form of physical comment cards. That feedback
is uploaded into the app through a web client.
After an image of a comment card is uploaded, the following steps occur:

- Text is extracted from the image using Amazon Textract.
- Amazon Comprehend detects the language of the extracted text and determines its sentiment.
- The extracted text is translated to English using Amazon Translate.
- Amazon Polly synthesizes an audio file from the translated text.
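Taken together, the steps form a pipeline in which each function's output becomes the next function's input. The following sketch stubs out the service calls to show only that data flow. The field names (`source_text`, `sentiment`, `language_code`, `translated_text`) match the excerpts later in this example; the stub return values are illustrative only.

```javascript
// Stubbed data flow between the four Lambda functions. Each stub stands
// in for a real service call (Textract, Comprehend, Translate, Polly).
const extractText = (s3Event) => ({ source_text: "Excelente servicio" });
const analyzeText = ({ source_text }) => ({
  sentiment: "POSITIVE",
  language_code: "es",
});
const translateText = ({ source_text, language_code }) => ({
  translated_text: "Excellent service",
});
const synthesizeAudio = ({ translated_text, object }) => `${object}.mp3`;

// Wire the steps together the way the deployed app does.
const event = { bucket: "feedback-bucket", object: "card-1.png" };
const extracted = extractText(event);
const analysis = analyzeText(extracted);
const translation = translateText({ ...extracted, ...analysis });
const audioKey = synthesizeAudio({ ...translation, object: event.object });
```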
The full app can be deployed with the AWS CDK. For source code and deployment
instructions, see the project on GitHub. The following excerpts show how the
AWS SDK for JavaScript is used inside the app's Lambda functions.
The following handler uses Amazon Comprehend to detect the language of the extracted text and determine its sentiment.

```javascript
import {
  ComprehendClient,
  DetectDominantLanguageCommand,
  DetectSentimentCommand,
} from "@aws-sdk/client-comprehend";

export const handler = async (extractTextOutput) => {
  const comprehendClient = new ComprehendClient({});

  // Detect the dominant language of the extracted text.
  const detectDominantLanguageCommand = new DetectDominantLanguageCommand({
    Text: extractTextOutput.source_text,
  });
  const { Languages } = await comprehendClient.send(
    detectDominantLanguageCommand,
  );
  const languageCode = Languages[0].LanguageCode;

  // Detect the sentiment of the text in its dominant language.
  const detectSentimentCommand = new DetectSentimentCommand({
    Text: extractTextOutput.source_text,
    LanguageCode: languageCode,
  });
  const { Sentiment } = await comprehendClient.send(detectSentimentCommand);

  return {
    sentiment: Sentiment,
    language_code: languageCode,
  };
};
```
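The handler above takes the first entry of `Languages` as the dominant language. The sample result below is illustrative, not a real API response; sorting by `Score` before taking the top candidate makes the choice explicit even if the ordering of candidates is not guaranteed.

```javascript
// Illustrative shape of a DetectDominantLanguage result: candidate
// languages with confidence scores.
const Languages = [
  { LanguageCode: "es", Score: 0.97 },
  { LanguageCode: "pt", Score: 0.03 },
];

// Sort descending by confidence and take the top candidate.
const dominant = [...Languages].sort((a, b) => b.Score - a.Score)[0];
const languageCode = dominant.LanguageCode;
```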
The following handler uses Amazon Textract to extract the text from an uploaded image in Amazon S3.

```javascript
import {
  DetectDocumentTextCommand,
  TextractClient,
} from "@aws-sdk/client-textract";

export const handler = async (eventBridgeS3Event) => {
  const textractClient = new TextractClient({});

  // Detect text in the image that was uploaded to Amazon S3.
  const detectDocumentTextCommand = new DetectDocumentTextCommand({
    Document: {
      S3Object: {
        Bucket: eventBridgeS3Event.bucket,
        Name: eventBridgeS3Event.object,
      },
    },
  });
  const { Blocks } = await textractClient.send(detectDocumentTextCommand);

  // Keep only WORD blocks and join them into a single string.
  const extractedWords = Blocks.filter((b) => b.BlockType === "WORD").map(
    (b) => b.Text,
  );
  return extractedWords.join(" ");
};
```
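Textract returns a flat list of `Blocks` of different types, such as `PAGE`, `LINE`, and `WORD`. The handler keeps only `WORD` blocks and joins their text with spaces, as this illustrative sample shows:

```javascript
// Illustrative Blocks array; a real response also carries geometry,
// confidence scores, and relationship data.
const Blocks = [
  { BlockType: "PAGE" },
  { BlockType: "LINE", Text: "Great stay" },
  { BlockType: "WORD", Text: "Great" },
  { BlockType: "WORD", Text: "stay" },
];

// Same filter-map-join logic as the handler above.
const extractedWords = Blocks.filter((b) => b.BlockType === "WORD").map(
  (b) => b.Text,
);
const sourceText = extractedWords.join(" ");
// sourceText === "Great stay"
```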
The following handler uses Amazon Polly to synthesize speech from the translated text and uploads the resulting audio to Amazon S3.

```javascript
import { PollyClient, SynthesizeSpeechCommand } from "@aws-sdk/client-polly";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

export const handler = async (sourceDestinationConfig) => {
  const pollyClient = new PollyClient({});

  // Synthesize speech from the translated text.
  const synthesizeSpeechCommand = new SynthesizeSpeechCommand({
    Engine: "neural",
    Text: sourceDestinationConfig.translated_text,
    VoiceId: "Ruth",
    OutputFormat: "mp3",
  });
  const { AudioStream } = await pollyClient.send(synthesizeSpeechCommand);

  // Store the audio next to the source image, reusing the image's key.
  const audioKey = `${sourceDestinationConfig.object}.mp3`;

  // Upload from @aws-sdk/lib-storage uploads the streamed audio to
  // Amazon S3 without requiring its length up front.
  const s3Client = new S3Client({});
  const upload = new Upload({
    client: s3Client,
    params: {
      Bucket: sourceDestinationConfig.bucket,
      Key: audioKey,
      Body: AudioStream,
      ContentType: "audio/mp3",
    },
  });
  await upload.done();

  return audioKey;
};
```
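The handler names the audio object by reusing the source image's key with an `.mp3` suffix. A small helper (hypothetical, for illustration) makes the convention explicit:

```javascript
// Derive the S3 key for the synthesized audio from the source image key,
// e.g. "cards/feedback-1.png" becomes "cards/feedback-1.png.mp3".
const audioKeyFor = (objectKey) => `${objectKey}.mp3`;

const key = audioKeyFor("cards/feedback-1.png");
// key === "cards/feedback-1.png.mp3"
```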
The following handler uses Amazon Translate to translate the extracted text to English.

```javascript
import {
  TranslateClient,
  TranslateTextCommand,
} from "@aws-sdk/client-translate";

export const handler = async (textAndSourceLanguage) => {
  const translateClient = new TranslateClient({});

  // Translate the extracted text from its detected language to English.
  const translateCommand = new TranslateTextCommand({
    SourceLanguageCode: textAndSourceLanguage.source_language_code,
    TargetLanguageCode: "en",
    Text: textAndSourceLanguage.extracted_text,
  });
  const { TranslatedText } = await translateClient.send(translateCommand);

  return { translated_text: TranslatedText };
};
```
Services used in this example:

- Amazon Comprehend
- Lambda
- Amazon Polly
- Amazon Textract
- Amazon Translate