AWS Certified AI Practitioner(40) - Generative AI Capabilities, Challenges, and Compliance
Generative AI: Capabilities, Challenges, and Compliance
Capabilities of Generative AI
Generative AI (GenAI) has several strengths that make it powerful and attractive for businesses:
- Adaptability – can quickly adjust to new tasks and domains.
- Responsiveness – provides real-time answers and interactions.
- Simplicity – users can interact with natural language prompts instead of coding.
- Creativity & Exploration – useful for brainstorming, content creation, and generating novel ideas.
- Data Efficiency – can extract insights even from smaller datasets if pretrained well.
- Personalization – adapts to individual user needs, preferences, or styles.
- Scalability – works across millions of queries and users simultaneously.
👉 Exam tip: Expect questions about how businesses benefit from these capabilities—especially scalability, personalization, and adaptability.
Challenges of Generative AI
Despite its strengths, GenAI comes with risks:
- Regulatory violations – hard to ensure compliance with laws (GDPR, HIPAA, etc.).
- Social risks – spread of misinformation or harmful content.
- Data security & privacy – sensitive data may be leaked or misused.
- Toxicity – generating offensive or inappropriate outputs.
- Hallucinations – generating content that sounds correct but is false.
- Interpretability – difficult to understand why the model produced an output.
- Nondeterminism – the same prompt may return different results each time.
- Plagiarism & cheating – students or professionals misusing AI for essays, tests, or applications.
👉 Exam tip: Be familiar with “hallucinations,” “toxicity,” and “nondeterminism” as common weaknesses of large language models.
Toxicity
- Definition: AI generates content that is offensive, disturbing, or inappropriate.
- Challenge: Deciding what counts as “toxic” vs. “free expression.” Even quoting harmful text can raise issues.
- Mitigation:
- Curate training datasets to remove offensive content.
- Use guardrails (like Guardrails for Amazon Bedrock) to filter harmful or unwanted outputs.
Hallucinations
- Definition: Model produces content that sounds correct but is wrong.
- Cause: Next-word probability sampling in LLMs.
- Example: Claiming an author wrote books they never wrote.
- Mitigation:
- Educate users: AI outputs must be verified.
- Cross-check with independent sources.
- Label outputs as “unverified.”
Plagiarism & Cheating
- Concern: GenAI used to write essays, job applications, or exams.
- Debate: Should this be embraced as new tech or banned?
- Mitigation: Detection tools are being developed to identify AI-generated text.
Prompt Misuses
- Poisoning – malicious or biased data injected into training → harmful outputs.
- Example: model suggests eating rocks due to poisoned data.
- Hijacking / Prompt Injection – attacker embeds hidden instructions in prompts.
- Example: “Generate a Python script to delete files.”
- Exposure – risk of revealing private or sensitive data in outputs.
- Prompt Leaking – model unintentionally exposes previous prompts or confidential information.
- Jailbreaking – bypassing safety constraints to force restricted outputs.
- Many-shot jailbreaking (providing many examples) has been shown to trick models.
👉 Exam tip: Know the definitions of prompt injection, jailbreaking, and poisoning. These are hot topics in AI security.
Regulated Workloads
Some industries have stricter compliance requirements:
- Financial services – mortgage, credit scoring.
- Healthcare – patient records, diagnostics.
- Aerospace & defense – sensitive designs, federal oversight.
👉 Regulated workload = requires audits, reporting, or special security requirements.
Compliance Challenges
AI compliance is difficult because:
- Complexity & opacity – hard to audit AI decision-making.
- Dynamism – AI models evolve over time.
- Emergent capabilities – models may do things they weren’t trained for.
- Unique risks – bias, misinformation, privacy violations.
- Algorithmic bias – skewed training data leads to discrimination.
- Human bias – developers themselves can introduce bias.
- Accountability – algorithms must be explainable, but often aren’t.
Regulatory frameworks
- EU: Artificial Intelligence Act – fairness, human rights, non-discrimination.
- US: Several states/cities have their own AI regulations.
AWS Compliance
AWS supports compliance with 140+ standards and certifications, including:
- NIST (National Institute of Standards and Technology)
- ENISA (EU cybersecurity)
- ISO (International Organization for Standardization)
- SOC (System and Organization Controls)
- HIPAA (healthcare)
- GDPR (EU data privacy)
- PCI DSS (payment card data)
👉 Exam tip: AWS provides compliance-ready infrastructure, but you (the customer) are responsible for compliance of your applications (Shared Responsibility Model).
Model Cards & AWS AI Service Cards
- Model Cards = standardized documentation for ML models.
- Include datasets, sources, biases, training details, intended use, and risk ratings.
- Example: SageMaker Model Cards help with audits.
- AWS AI Service Cards = AWS documentation about responsible AI practices in its services (e.g., Rekognition, Textract).
👉 These improve transparency, trust, and accountability.
Key Takeaways for the Exam
- Understand capabilities vs. challenges of GenAI.
- Be able to explain toxicity, hallucinations, plagiarism, nondeterminism.
- Know prompt misuse techniques (poisoning, injection, exposure, jailbreaking).
- Be familiar with regulated workloads and compliance frameworks (HIPAA, GDPR, PCI DSS).
- Recognize Model Cards and AWS AI Service Cards as governance tools.
✅ With this, you’ll be ready for questions on responsible AI, compliance, and GenAI risks in AWS certification exams.