Day 4: Prompt Engineering & Governance
1. Prompt Engineering: Programming in English
Prompt engineering is the art of guiding the model to the best possible output. It is effectively “coding” with natural language.
Core Techniques
1. Zero-Shot vs Few-Shot
- Zero-Shot: Giving the task with no examples.
- Prompt: “Classify this tweet as Neutral, Negative, or Positive: ‘The service was okay.’”
- Result: Good for simple tasks, but unreliable for complex formats.
- Few-Shot: Giving specific examples of Input → Output patterns.
- Prompt:
  Classify the sentiment.
  Input: "I love this!" -> Positive
  Input: "This is terrible." -> Negative
  Input: "The service was okay." ->
- Result: Drastically improves reliability and formatting compliance.
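The few-shot pattern is easy to assemble programmatically from a list of labeled examples; a minimal sketch (the helper name and example pairs are illustrative, not part of any API):

```python
# Labeled examples the model should imitate.
EXAMPLES = [
    ("I love this!", "Positive"),
    ("This is terrible.", "Negative"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Build a few-shot classification prompt ending where the model answers."""
    lines = ["Classify the sentiment."]
    for text, label in EXAMPLES:
        lines.append(f'Input: "{text}" -> {label}')
    # Leave the arrow dangling so the model completes the label.
    lines.append(f'Input: "{new_input}" ->')
    return "\n".join(lines)

print(build_few_shot_prompt("The service was okay."))
```

Because the examples demonstrate the exact `Input: ... -> Label` shape, the completion tends to follow that format instead of a free-form sentence.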
2. Chain of Thought (CoT)
- Concept: Models are bad at math and logic if they answer immediately. They need “thinking time” (tokens) to work out the problem.
- Technique: Ask the model to “Think step by step.”
- Example:
- Bad Prompt: “If I have 5 apples, eat 2, and buy 3 more, how many do I have?” → Model might guess quickly.
- CoT Prompt: “Solve this problem. Think step by step. Show your working before giving the final answer.” → Model writes: “Start: 5. Eat 2: 5-2=3. Buy 3: 3+3=6. Answer: 6.”
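The CoT technique is just a prefix on the task; a small sketch (the wording of the instruction is one common variant, not a fixed API):

```python
# Instruction that buys the model "thinking tokens" before the answer.
COT_INSTRUCTION = (
    "Solve this problem. Think step by step. "
    "Show your working before giving the final answer.\n\n"
)

def cot_prompt(question: str) -> str:
    """Wrap a question with a chain-of-thought instruction."""
    return COT_INSTRUCTION + question

print(cot_prompt("If I have 5 apples, eat 2, and buy 3 more, how many do I have?"))
```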
3. System Prompts vs User Prompts
- System Prompt: High-level instructions that define the Persona and Rules.
- Example: “You are a banking assistant. You never give financial advice. You always speak in JSON.”
- User Prompt: The specific input for the current turn.
- Example: “What is my balance?”
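In Bedrock's Anthropic Messages format, the system prompt is a top-level field, separate from the per-turn messages. A sketch of the payload (the token limit is arbitrary):

```python
import json

# System prompt = persona + rules; user message = the current turn.
payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "system": (
        "You are a banking assistant. You never give financial advice. "
        "You always speak in JSON."
    ),
    "messages": [
        {"role": "user", "content": "What is my balance?"}
    ],
}

# This string is what gets passed as `body` to invoke_model.
body = json.dumps(payload)
```

Keeping the rules in `system` rather than the user message makes them harder for the end user to override turn by turn.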
2. Advanced Prompting: The “Prompt Chain”
For complex applications, one prompt is rarely enough. We use Prompt Chaining.
Amazon Bedrock Prompt Flows is a visual tool to build these chains without writing deep code. It allows you to drag-and-drop steps, branch logic (Success/Fail), and integrate Lambda functions between prompts.
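Outside of the visual tool, a prompt chain is simply the output of one call feeding the next. A minimal sketch, where `invoke` stands for any callable that sends a prompt to a model and returns text (e.g. a wrapper around invoke_model); the stub below lets the chain run without AWS access:

```python
def chain(invoke, document: str) -> str:
    """Two-step chain: summarize first, then translate the summary."""
    summary = invoke(f"Summarize in two sentences:\n{document}")       # step 1
    return invoke(f"Translate the following to French:\n{summary}")    # step 2

# Stub "model" for local testing -- echoes the instruction it received.
def fake_invoke(prompt: str) -> str:
    return f"[model answered: {prompt.splitlines()[0]}]"

print(chain(fake_invoke, "Bedrock Prompt Flows support branching on success/fail."))
```

In a real flow you would add branch logic between the steps (e.g. retry or route to a Lambda function if step 1 returns an empty summary).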
3. Governance: Bedrock Guardrails
In an enterprise, you cannot trust the model to always be safe. Guardrails act as a firewall between the user and the model.
Anatomy of a Guardrail
A guardrail sits in the flow. It checks Input (User) and Output (Model).
- Content Filters:
- Standard categories: Hate, insults, sexual, violence.
- Strength: Low / Medium / High. (High blocks almost anything risky).
- Denied Topics:
- Custom definitions of what is off-limits.
- Example: “Financial Advice”. Definition: “Any text predicting stock market trends or recommending specific assets.”
- Word Filters:
- Block specific words (profanity, competitor names).
- Sensitive Information (PII) Filters:
- Regex or ML-based detection of Emails, IP addresses, Names, SSNs.
- Action: Block the request OR Mask the output (replace with [REDACTED]).
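Because the guardrail sits outside the model, you can also check text against it directly with the bedrock-runtime ApplyGuardrail API, with no model call involved. A sketch, assuming a placeholder guardrail ID (the client is passed in so the function can be exercised without live credentials):

```python
def check_input(client, text: str) -> bool:
    """Return True if the guardrail lets the text through unchanged."""
    resp = client.apply_guardrail(
        guardrailIdentifier="YOUR_GUARDRAIL_ID",  # replace with your ID
        guardrailVersion="DRAFT",
        source="INPUT",              # use "OUTPUT" to screen model responses
        content=[{"text": {"text": text}}],
    )
    return resp["action"] != "GUARDRAIL_INTERVENED"

# Usage (requires AWS credentials):
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   check_input(client, "You should buy Bitcoin now!")
```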
Hands-On: Implementing a Guardrail
1. Configure in Console
- Go to Bedrock → Guardrails → Create.
- Add a Denied Topic:
  - Name: CryptoAdvice
  - Definition: Advice regarding buying or selling cryptocurrency.
- Add a Sensitive Info Filter:
  - Type: Email
  - Action: Mask
2. Test in Python
You reference the guardrailIdentifier and guardrailVersion in your invoke_model call.

```python
import boto3
import json

client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 100,
    "messages": [{"role": "user", "content": "You should buy Bitcoin now!"}]
}

try:
    response = client.invoke_model(
        modelId='anthropic.claude-3-haiku-20240307-v1:0',
        body=json.dumps(payload),
        guardrailIdentifier='YOUR_GUARDRAIL_ID',  # Replace this
        guardrailVersion='DRAFT'
    )
    # If the guardrail intervenes, the body contains the configured
    # blocked message instead of the model's answer.
    print(response['body'].read())
except Exception as e:
    print("Guardrail Triggered!")
    print(e)
```

Day 4 Summary Checklist
- Theory: Understand Zero-shot vs Few-shot.
- Theory: Understand Chain of Thought (CoT).
- Theory: Understand how Guardrails sit outside the model to filter traffic.
- Hands-on: Use Jinja2 or f-strings to create a reusable prompt template.
- Hands-on: Create a Bedrock Guardrail to block a specific topic and test it.
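For the template hands-on item, the stdlib string.Template works if you want to avoid the Jinja2 dependency; a minimal sketch (the field name and wording are illustrative):

```python
from string import Template

# Reusable prompt template: the static instructions stay fixed,
# only the $text placeholder changes per request.
CLASSIFY = Template(
    "You are a sentiment classifier.\n"
    "Classify the text as Positive, Negative, or Neutral.\n"
    'Text: "$text"\n'
    "Label:"
)

prompt = CLASSIFY.substitute(text="The service was okay.")
print(prompt)
```

With Jinja2 the same template would use `{{ text }}` and support loops, which is handy once you add few-shot examples to the template.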