Prompt templates and examples for Amazon Bedrock text models
Some of the most common tasks that LLMs on Amazon Bedrock support include text classification, summarization, and question answering (with and without context). For these tasks, you can use the following templates and examples to build prompts for Amazon Bedrock text models.
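The templates in this section are plain prompt text; the request format you send them in depends on the model provider. As a rough illustration only (not part of the official examples), the following sketch sends a prompt string to an Amazon Titan Text model through the Bedrock Runtime API with boto3. The model ID, AWS Region, and generation settings are assumptions that you would adjust for your own environment.
```python
# Sketch (assumed model ID, Region, and settings): send a plain-text prompt
# to an Amazon Titan Text model through the Bedrock Runtime API.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = "What is the capital of France?"

body = json.dumps({
    "inputText": prompt,
    "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0},
})

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```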
Text classification
In text classification, the prompt contains a question with several possible answer options, and the model must respond with the correct option. LLMs on Amazon Bedrock also return more accurate answers when you include the answer options in your prompt.
The first example is a simple multiple-choice classification question.
Prompt template for Titan
"""{{Text}}
{{Question}}? Choose from the following:
{{Choice 1}}
{{Choice 2}}
{{Choice 3}}"""
User prompt:
San Francisco, officially the City and County
of San Francisco, is the commercial, financial, and cultural
center of Northern California. The city proper is the fourth
most populous city in California, with 808,437 residents,
and the 17th most populous city in the United States as of 2022.
What is the paragraph above about? Choose from the following:
A city
A person
An event
Output:
A city
(Prompt source: Wikipedia on San Francisco, model used: Amazon Titan Text)
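The {{...}} placeholders in these templates mark the text to substitute; the documentation does not prescribe a particular templating mechanism. As one possible approach (an assumption, not part of the original example), you can fill the placeholders with Python's str.format before sending the prompt:
```python
# Sketch (assumed approach): fill the classification template's placeholders
# with str.format. The values mirror the Titan example above.
template = (
    "{text}\n"
    "{question}? Choose from the following:\n"
    "{choice_1}\n"
    "{choice_2}\n"
    "{choice_3}"
)

prompt = template.format(
    text="San Francisco, officially the City and County of San Francisco, ...",
    question="What is the paragraph above about",
    choice_1="A city",
    choice_2="A person",
    choice_3="An event",
)
print(prompt)
```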
Sentiment analysis is a form of classification in which the model chooses the sentiment expressed in the text from a list of answer options.
Prompt template for Titan:
"""The following is text from a {{Text Type, e.g. “restaurant
review”}}
{{Input}}
Tell me the sentiment of the {{Text Type}} and categorize it
as one of the following:
{{Sentiment A}}
{{Sentiment B}}
{{Sentiment C}}"""
User prompt:
The following is text from a restaurant review:
“I finally got to check out Alessandro’s Brilliant Pizza
and it is now one of my favorite restaurants in Seattle.
The dining room has a beautiful view over the Puget Sound
but it was surprisingly not crowded. I ordered the fried
castelvetrano olives, a spicy Neapolitan-style pizza
and a gnocchi dish. The olives were absolutely decadent,
and the pizza came with a smoked mozzarella, which
was delicious. The gnocchi was fresh and wonderful.
The waitstaff were attentive, and overall the experience
was lovely. I hope to return soon.”
Tell me the sentiment of the restaurant review
and categorize it as one of the following:
Positive
Negative
Neutral
Output:
Positive.
(Prompt source: AWS, model used: Amazon Titan Text)
The following example uses Anthropic Claude models to classify text. As suggested in the Anthropic Claude guides, use XML tags such as <text></text> to mark important parts of the prompt. Asking the model to generate output enclosed directly in XML tags can also help it produce the desired responses.
Prompt template for Anthropic Claude:
"""
Human: {{classification task description}}
<text>
{{input text content to be classified}}
</text>
Categories are:
{{category name 1}}
{{category name 2}}
{{category name 3}}
Assistant:"""
(Source: Written by AWS)
User prompt:
Human: Classify the given product description into given
categories. Please output the category label in
<output></output> tags.
Here is the product description.
<text>
Safe, made from child-friendly materials with smooth edges.
Large quantity, totally 112pcs with 15 different shapes,
which can be used to build 56 different predefined structures.
Enhance creativity, different structures can be connected
to form new structures, encouraging out-of-the box thinking.
Enhance child-parent bonding, parents can play with their
children together to foster social skills.
</text>
Categories are:
(1) Toys
(2) Beauty and Health
(3) Electronics
Assistant:
Output:
<output>Toys</output>
(Prompt source: AWS, model used: Anthropic Claude)
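Because the prompt asks Claude to return the label inside <output></output> tags, the label is easy to pull out of the completion afterwards. The following post-processing sketch (an assumption, not part of the original example) uses a regular expression:
```python
# Sketch: extract the category label that the prompt asked Claude to wrap
# in <output></output> tags. The completion string is an assumed example.
import re

completion = "<output>Toys</output>"

match = re.search(r"<output>(.*?)</output>", completion, re.DOTALL)
label = match.group(1).strip() if match else None
print(label)  # Toys
```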
Question and answer, without context
In a question-answer prompt without context, the model must answer the question with its internal knowledge, without using any context or document.
Prompt template for Titan
"""{{Question}}"""
User prompt:
What is Robert Frost's "Stopping by the woods on a
snowy evening" about metaphorically?
Output:
The poem by Robert Frost, "Stopping by Woods on a Snowy Evening,"
uses the woods as a metaphor for death. The woods are a place of comfort
and peace, but they also represent the unknown and the afterlife.
The poem suggests that death is not something to be feared,
but rather a natural part of life that we should all embrace.
The poem begins with the speaker stopping by the woods on a snowy evening,
a moment of contemplation and reflection. The speaker notices the beauty
of the snow-covered woods and is reminded of the fleeting nature of life.
The poem continues as the speaker considers the temptation of continuing
on to his destination, but instead chooses to pause and take in the beauty
of the woods.
The woods are used as a metaphor for the unknown and the afterlife.
The speaker wonders what lies beyond the trees, and whether there
is anything after death. The poem suggests that death is not
something to be feared, but rather a natural part of life
that we should all embrace.
The final lines of the poem are perhaps the most famous and poignant:
"And miles to go before I sleep, and miles to go before I sleep."
These lines suggest that even though the speaker has paused
to appreciate the beauty of the woods, they still have a
long journey ahead of them. The poem is a reminder that life
is fleeting, and that we should take the time to appreciate
the beauty around us, even as we work towards our goals and ambitions.
Overall, "Stopping by Woods on a Snowy Evening" is a beautiful
and thought-provoking poem that uses the metaphor of the woods
to explore the concept of death and the afterlife.
(Prompt source: AWS, model used: Amazon Titan Text)
Model encouragement can also be helpful for question-answer tasks.
Prompt template for Titan
"""{{Model encouragement}}
{{Question}}"""
User prompt:
You are excellent at answering questions,
and it makes you happy when you provide the correct answer.
What moon in the solar system is most likely to host life?
Output:
The moon of Titan, which is a moon of the planet Saturn,
is the most likely to support life in the solar system.
Titan is the only known moon in the solar system to have a dense
atmosphere and a liquid hydrosphere, and scientists believe that
it may contain the chemical building blocks for life.
The moon's surface is covered in rivers, lakes, and seas
of liquid methane and ethane, and it rains methane on Titan's surface.
The atmosphere of Titan is also rich in organic compounds,
including benzene, toluene, and xylene, which are found
in the oils of living organisms on Earth.
Additionally, Titan has a magnetic field and a tenuous atmosphere,
which could provide protection from harmful radiation and
help maintain a stable climate.
While there is no direct evidence of life on Titan,
scientists believe that it is one of the most promising
places in the solar system to search for life beyond Earth.
(Prompt source: AWS, model used: Amazon Titan Text)
Prompt template for Titan
"""{{Model encouragement}}
{{Question}}
{{Constraints}}"""
User prompt:
You feel rewarded by helping people learn more about climate change.
Could you please explain what climate change is?
Assume your audience is composed of high school students.
Output:
Climate change is the long-term warming of the planet,
which can be caused by human activities such as burning fossil fuels
and deforestation, as well as natural processes. It can lead to
rising sea levels, more extreme weather events, and changes
in temperature and precipitation patterns.
(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
Question and answer, with context
In a question-answer prompt with context, the user provides an input text followed by a question. The model must answer the question based on the information given in the input text. Placing the question at the end, after the text, can help LLMs on Amazon Bedrock answer the question better. Model encouragement also works for this use case.
Prompt template for Titan
"""{{Text}}
{{Question}}"""
User prompt:
The red panda (Ailurus fulgens), also known as the lesser panda,
is a small mammal native to the eastern Himalayas and southwestern China.
It has dense reddish-brown fur with a black belly and legs, white-lined ears,
a mostly white muzzle and a ringed tail. Its head-to-body length is 51–63.5 cm
(20.1–25.0 in) with a 28–48.5 cm (11.0–19.1 in) tail, and it weighs between
3.2 and 15 kg (7.1 and 33.1 lb). It is well adapted to climbing due to its
flexible joints and curved semi-retractile claws.
The red panda was first formally described in 1825. The two currently
recognized subspecies, the Himalayan and the Chinese red panda, genetically
diverged about 250,000 years ago. The red panda's place on the evolutionary
tree has been debated, but modern genetic evidence places it in close
affinity with raccoons, weasels, and skunks. It is not closely related
to the giant panda, which is a bear, though both possess elongated
wrist bones or "false thumbs" used for grasping bamboo.
The evolutionary lineage of the red panda (Ailuridae) stretches
back around 25 to 18 million years ago, as indicated by extinct
fossil relatives found in Eurasia and North America.
The red panda inhabits coniferous forests as well as temperate broadleaf
and mixed forests, favoring steep slopes with dense bamboo cover close
to water sources. It is solitary and largely arboreal. It feeds mainly
on bamboo shoots and leaves, but also on fruits and blossoms.
Red pandas mate in early spring, with the females giving birth
to litters of up to four cubs in summer. It is threatened
by poaching as well as destruction and fragmentation of habitat
due to deforestation. The species has been listed as Endangered
on the IUCN Red List since 2015. It is protected in all range countries.
Based on the information above, what species are red pandas closely related to?
Output:
Red pandas are closely related to raccoons, weasels, and skunks.
(Prompt source: https://en.wikipedia.org/wiki/Red_panda, model used: Amazon Titan Text)
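One way to apply the guidance above (placing the question after the input text) is to assemble the prompt programmatically. The variable names below are assumptions for illustration:
```python
# Sketch: build a context-plus-question prompt with the question at the end,
# as recommended above. `document` and `question` are assumed inputs.
document = "The red panda (Ailurus fulgens), also known as the lesser panda, ..."
question = (
    "Based on the information above, what species are red pandas "
    "closely related to?"
)

prompt = f"{document}\n\n{question}"
print(prompt)
```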
When prompting Anthropic Claude models, it is helpful to enclose the input text in XML tags. In the following example, the input text is wrapped in <text></text>.
Prompt template for Anthropic Claude:
"""
Human: {{Instruction}}
<text>
{{Text}}
</text>
{{Question}}
Assistant:"""
User prompt:
Human: Read the following text inside <text></text>
XML tags, and then answer the question:
<text>
On November 12, 2020, the selection of the Weeknd to headline
the show was announced; marking the first time a Canadian solo artist
headlined the Super Bowl halftime show. When asked about preparations
for the show, the Weeknd stated, "We've been really focusing
on dialing in on the fans at home and making performances
a cinematic experience, and we want to do that with the Super Bowl."
The performance featured a choir whose members were dressed in white
and wore masks over their faces with glowing red eyes, and were
standing within a backdrop of a neon cityscape. The performance
opened with a white figure dressed the same as the choir being
lowered into the backdrop where the choir was standing while singing
“Call Out My Name". At this time, the Weeknd sat in a convertible
against a skyline backdrop designed to resemble the Las Vegas Strip.
For the next part of the performance, the backdrop then split open
to reveal the Weeknd, who then performed "Starboy", followed by "The Hills".
Next, performing the song "Can't Feel My Face", the Weeknd traveled
through a labyrinth constructed behind the stage, joined by dancers
dressed in red blazers and black neckties similar to his,
but with their faces covered with bandages, in keeping with
the aesthetic of his fourth studio album After Hours (2020).
The dancers would wear these bandages throughout the performance.
In the labyrinth section of the performance, camerawork was visually unsteady.
The next songs performed were "I Feel It Coming", "Save Your Tears",
and "Earned It". For the "Earned It" performance, the Weeknd
was accompanied by violinists. For the finale of the show,
the Weeknd took to the field of the stadium with his dancers to perform
“Blinding Lights". He and the dancers entered the field by performing
"House of Balloons / Glass Table Girls". The performance ended
with an array of fireworks.
</text>
Based on the text above, what songs did the Weeknd play
at the Super Bowl halftime show?
Assistant:
Output:
Based on the text, the songs the Weeknd played
at the Super Bowl halftime show were:
- Call Out My Name
- Starboy
- The Hills
- Can't Feel My Face
- I Feel It Coming
- Save Your Tears
- Earned It
- Blinding Lights
- House of Balloons / Glass Table Girls
(Prompt source: Wikipedia on the Super Bowl LV halftime show, model used: Anthropic Claude)
Summarization
In a summarization task, the prompt is a passage of text, and the model must respond with a shorter passage that captures the main points of the input. For this use case, it is helpful to specify the length of the output (number of sentences or paragraphs).
Prompt template for Titan
"""The following is text from a {{Text Category}}:
{{Text}}
Summarize the {{Text Category}} in {{length of summary,
e.g., “one sentence” or “one paragraph”}}"""
User prompt:
The following is text from a restaurant review:
“I finally got to check out Alessandro’s Brilliant Pizza
and it is now one of my favorite restaurants in Seattle.
The dining room has a beautiful view over the Puget Sound
but it was surprisingly not crowded. I ordered the fried
castelvetrano olives, a spicy Neapolitan-style pizza
and a gnocchi dish. The olives were absolutely decadent,
and the pizza came with a smoked mozzarella, which was delicious.
The gnocchi was fresh and wonderful. The waitstaff were attentive,
and overall the experience was lovely. I hope to return soon.”
Summarize the above restaurant review in one sentence.
Output:
Alessandro’s Brilliant Pizza is a fantastic restaurant
in Seattle with a beautiful view over Puget Sound that offers
decadent and delicious food.
(Prompt source: AWS, model used: Amazon Titan Text)
In the following example, Anthropic Claude summarizes the provided text in one sentence. To include input text in your prompts, format the text with XML markup: <text>{{text content}}</text>. Using XML within prompts is common practice when prompting Anthropic Claude models.
Prompt template for Anthropic Claude:
"""
Human: Please read the text:
<text>
{{text context}}
</text>
Summarize the text in {{length of summary, e.g., “one
sentence” or “one paragraph”}}
Assistant:"""
User prompt:
Human: Please read the text:
<text>
In game theory, the Nash equilibrium, named after the mathematician
John Nash, is the most common way to define the solution
of a non-cooperative game involving two or more players.
In a Nash equilibrium, each player is assumed to know
the equilibrium strategies of the other players,
and no one has anything to gain by changing only one's own strategy.
The principle of Nash equilibrium dates back to the time of Cournot,
who in 1838 applied it to competing firms choosing outputs.
</text>
Summarize the text in one sentence.
Assistant:
Output:
The Nash equilibrium is a concept in game theory
where each player has adopted a strategy that they have no incentive
to deviate from given the other players' strategies.
(Prompt source: Wikipedia on the Nash equilibrium, model used: Anthropic Claude)
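To send a prompt like this to an Anthropic Claude text model on Amazon Bedrock, you can wrap the input text in <text></text> tags and place it between the Human: and Assistant: turns. The sketch below uses the legacy text-completions request shape with an assumed model ID and token limit; treat it as an illustration rather than the definitive request format for your model version.
```python
# Sketch (assumed model ID and parameters): wrap input text in <text></text>
# tags and send a Human/Assistant prompt to an Anthropic Claude text model
# through the Bedrock Runtime API.
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

text = "In game theory, the Nash equilibrium ..."
prompt = (
    "\n\nHuman: Please read the text:\n"
    f"<text>\n{text}\n</text>\n"
    "Summarize the text in one sentence."
    "\n\nAssistant:"
)

body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 300})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model ID
    body=body,
)

result = json.loads(response["body"].read())
print(result["completion"])
```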
Text generation
Given a prompt, LLMs on Amazon Bedrock can respond with a passage of text that matches the description. Here is an example:
Prompt template for Titan
"""Please write a {{Text Category}} in the voice of {{Role}}."""
User prompt:
Please write an email in the voice of a friend
congratulating someone on a new job.
Output:
Hey [Name],
I just found out about your new job, and I'm so happy for you!
Congratulations! You deserve this opportunity since you worked so hard
to get it. I know you'll do great and make a positive impact on your team.
I'm here to support you every step of the way. Wishing you all the best
in this exciting new chapter of your life!
Cheers,
[Your Name]
(Prompt source: AWS, model used: Amazon Titan Text)
For text generation use cases, specifying detailed task requirements can work well. In the following example, we ask the model to generate a response with exclamation points.
Prompt template for Titan
"""{{Task specification}}
Please write a {{Text Category}} in the voice of {{Role}}."""
User prompt:
Write text with exclamation points.
Please write a formatted email in the voice of a friend
congratulating someone on a new job.
Output:
Hey [name]!
I just saw the news and I am so, so happy for you!
You deserve this position and this company with all your heart.
I can't wait to see you soar and accomplish all the things
you've been working towards. If anyone is deserving
of a promotion, it's you!
Congratulations again, friend!
Love,
[name]
(Prompt source: AWS, model used: Amazon Titan Text)
In the following example, the model is asked to take on the role of a specific person when generating the text. Notice how the signature reflects the role the model assumes in the response.
Prompt template for Titan
"""{{Role assumption}} {{Task description}}
"""
User input:
My name is Jack. Help me write a note expressing my gratitude
to my parents for taking my son (their grandson) to the zoo.
I miss my parents so much.
Output:
Dear Mom and Dad,
I wanted to take a moment to express my heartfelt gratitude
for taking my son to the zoo last weekend. It meant so much to me
that you took the time to make memories with him, and I am so grateful
for the love and support you have shown him.
I miss you both so much, and I wish we could be together more often.
But I know that you will always be there for me and my family,
and I am so grateful to have you as my parents.
Love,
Jack
(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
Code generation
The prompt describes the task or function and the programming language for the code that the model should generate.
Prompt template for Titan
"""Write a function in {{programming language}} to
{{task or purpose}}"""
User input:
Write a python function to download a s3 file to local disk.
Output:
The following code can be used to download an s3 file to the local disk.
```python
import boto3

# Creates a boto3 client object
s3 = boto3.client('s3')

def download_file(bucket_name, object_key, file_path):
    try:
        # Downloads the s3 file to the specified file path
        s3.download_file(bucket_name, object_key, file_path)
    except Exception as e:
        print(e)

# Specifies the values of bucket_name, object_key, and file_path
bucket_name = "samplebucketname"
object_key = "sampleobjectkey"
file_path = "/samplefilepath"

download_file(bucket_name, object_key, file_path)
```
The given code uses the download_file function to download an s3 file
to the local disk. The download_file function takes the bucket name,
object key, and file path as input parameters. The bucket name
is the name of the s3 bucket where the file is located,
the object key is the unique identifier of the file within the bucket,
and the file path is the destination path on the local disk
where the file will be downloaded. The function uses the download_file method
of the s3 client to download the file from the specified bucket
and write it to the specified file path.
(Prompt source: AWS, model used: Amazon Titan Text)
Prompt template for Anthropic Claude
"""
Human: Write a function in {{programming language}} to
{{task or purpose}}
Assistant:"""
User prompt:
Human: Write some python code that quickly approximates
the square root of a number.
Assistant:
Output:
Here is a simple function to approximate
the square root of a number using Newton's method:
```python
def sqrt(n, iterations=10):
    x = n
    for i in range(iterations):
        x = (x + n/x) / 2
    return x
```
To use it:
```python
print(sqrt(25))
# 5.0
print(sqrt(15, iterations=20))
# 3.872983346207417
```
This repeatedly takes the average of x and n/x to converge on the square root.
More iterations produce a more accurate approximation.
The advantage of this approach is it converges very quickly,
so even just 10 iterations produces a decent estimate.
(Prompt source: AWS, model used: Anthropic Claude)
Mathematics
The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric, or of another kind. For such a problem, it is helpful to ask the model to work through the problem piece by piece by adding instructions with phrases such as Let’s think step by step or Think step by step to come up with the right answer.
Prompt template for Titan
"""{{Math problem description}}
Let’s think step by step."""
User prompt:
A triangle has two angles of 70 degrees and 50 degrees.
What is the third angle in degrees?
Let’s think step by step.
Output:
The sum of the three angles in a triangle is always 180 degrees.
The two given angles are 70 degrees and 50 degrees. 70 + 50 = 120.
Therefore, the third angle is 180 - 120 = 60 degrees.
The third angle is 60 degrees.
(Prompt source: AWS, model used: Amazon Titan Text)
The following example also shows how to ask the model to work step by step:
Prompt template for Titan
"""{{Math problem description}}
Think step by step to come up with the right answer."""
User prompt:
Lucy has 12 colorful marbles, and she wants to share them
equally with her 4 friends. How many marbles will each friend receive?
Think step-by-step to come up with the right answer.
Output:
1. Lucy has 12 marbles and wants to share them equally
with her 4 friends, then each friend will receive 12 / 4 = 3 marbles.
2. Therefore, each friend will receive 3 marbles.
(Prompt source: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
Reasoning/logical thinking
For complex reasoning tasks or problems that require logical thinking, we can ask the model to make logical deductions and explain its answers.
Prompt template for Titan and AI21 Labs Jurassic:
"""Question: {{question}}
Please output the answer and then explain your answer:"""
User prompt:
Question: Which word is the odd one out?
A. accomplished
B. good
C. horrible
D. outstanding
Please output the answer and then explain your answer:
Output:
The odd one out is C. horrible. The other three words describe
positive qualities or achievements, whereas horrible is a negative adjective.
(Prompt source: AWS, model used: Amazon Titan Text)
Here is another example using the Anthropic Claude model:
Prompt template for Anthropic Claude:
"""
Human: {{question}}
Please provide the answer and show the reasoning.
Assistant:"""
User prompt:
Human: The barber is the "one who shaves all those, and those only,
who do not shave themselves". Does the barber shave himself?
Why is this a paradox? Please provide the answer and show the reasoning.
Assistant:
Output:
This is an example of a paradox known as the Barber paradox.
Here is an explanation:
- The statement defines a barber who shaves all and only those people
who do not shave themselves.
- So if the barber shaves himself, he must shave all those who do not
shave themselves, which includes himself. But the barber cannot shave
himself as per the initial statement.
- If the barber does not shave himself, then he would fit into the group
of people who do not shave themselves, and so he must shave himself,
according to the statement. But again this contradicts the initial statement.
- So the barber cannot shave himself and also cannot not shave himself.
This is self-contradictory based on the initial statement about the barber.
- Therefore, the statement leads to a paradox where both possibilities
(the barber shaving himself or not shaving himself) result in a contradiction.
In summary, the paradox arises because the definition of the barber's
behavior is self-contradictory when applied to the barber himself.
This makes it impossible to determine if the barber shaves himself
or not based on the given statement alone.
(Prompt source: https://en.wikipedia.org/wiki/Barber_paradox, model used: Anthropic Claude)
Entity extraction
Entity extraction pulls entities out of a provided input question. The model extracts the entities into XML tags in the generated text so they can be processed further.
Prompt template for Titan
"""You are an expert entity extractor from provided input question. You are responsible for extracting following entities: {{ list of entities}}
Please follow below instructions while extracting the entity A, and reply in <entityA> </entityA> XML Tags:
{{ entity A extraction instructions}}
Please follow below instructions while extracting the entity B, and reply in <entityB> </entityB> XML Tags:
{{ entity B extraction instructions}}
Below are some examples:
{{ some few shot examples showing model extracting entities from given input }}"""
(Prompt source: AWS, model used: Amazon Titan Text G1 - Premier)
Example:
User: You are an expert entity extractor who extracts entities from provided input question.
You are responsible for extracting following entities: name, location
Please follow below instructions while extracting the Name, and reply in <name></name>
XML Tags:
- These entities include a specific name of a person, animal or a thing
- Please extract only specific name entities mentioned in the input query
- DO NOT extract the general mention of name by terms of "name", "boy", "girl",
"animal name", etc.
Please follow below instructions while extracting the location, and reply
in <location></location> XML Tags:
- These entities include a specific location of a place, city, country or a town
- Please extract only specific location entities mentioned in the input query
- DO NOT extract the general mention of location by terms of "location", "city", "country",
"town", etc.
If no name or location is found, please return the same input string as is.
Below are some examples:
input: How was Sarah's birthday party in Seattle, WA?
output: How was <name>Sarah's</name> birthday party
in <location>Seattle, WA</location>?
input: Why did Joe's father go to the city?
output: Why did <name>Joe's</name> father go to the city?
input: What is the zipcode of Manhattan, New york city?
output: What is the zipcode of <location>Manhattan, New york city</location>?
input: Who is the mayor of San Francisco?
Bot:
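Because the extracted entities come back wrapped in XML tags, they can be collected for downstream processing with a small amount of parsing. This post-processing sketch (an assumption, not part of the original example) uses regular expressions:
```python
# Sketch: collect the <name> and <location> entities that the model was
# asked to tag. The output string is an assumed example in the requested format.
import re

model_output = "Who is the mayor of <location>San Francisco</location>?"

names = re.findall(r"<name>(.*?)</name>", model_output, re.DOTALL)
locations = re.findall(r"<location>(.*?)</location>", model_output, re.DOTALL)
print(names)      # []
print(locations)  # ['San Francisco']
```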
Chain-of-thought reasoning
Analyze step by step how the answer was derived. Verify the facts and validate the reasoning that led the model to its answer.
Prompt template for Titan
""" {{Question}}
{{ Instructions to Follow }}
Think Step by Step and walk me through your thinking
"""
(Prompt source: AWS, model used: Amazon Titan Text G1 - Premier)
Example:
User: If Jeff had 100 dollars, and he gave $20 to Sarah,
and bought lottery tickets with another $20. With the lottery
tickets he bought he won 35 dollars. Jeff then went to buy
his lunch and spent 40 dollars on lunch. Lastly he made a
donation to charity for $20. Stephen met with Jeff and wanted
to borrow some money from him for his taxi. What is the maximum
amount of money Jeff can give to Stephen, given that he needs
to save $10 for his ride back home? Please do not answer
immediately, think step by step and show me your thinking.
Bot: