Teaches students to work effectively with AI tools across all professional contexts. Covers prompting techniques, critical evaluation of AI output, building AI-augmented workflows, understanding the AI tool landscape, AI ethics and responsible use, and code/technical literacy for non-developers. This is the foundational course that every other Core and Domain course builds on.
Levels: Remember · Understand · Apply · Analyze · Evaluate · Create — the higher levels demand the most original thinking.
Real examples of how LLMs generate text through pattern completion. Map the AI landscape and capture your baseline thinking before AI influence.
Exercise: First Map
Learn prompt construction using five components. Choose your tool, build a reusable prompt, and test it with different inputs.
Exercise: Build a Real Prompt
Prompt for documents, analytical reasoning, and code. Learn the document editor workflow — generate, paste, edit, and annotate.
Exercise: Three Output Types
A systematic method for evaluating AI output: mark claims, verify against sources, identify errors, and write a corrected version.
Exercise: Red Team a Response
Beyond factual errors: evaluate logic flaws, tone mismatches, and context blindness across three dimensions.
Exercise: Evaluation Practice
Learn a 30-minute tool evaluation framework. Evaluate hosting platforms and AI tools by capabilities, limitations, pricing, and data handling. Build and deploy your portfolio site.
Exercise: Tool Evaluation + Portfolio Build
Practical ethics through scenarios: disclosure norms, bias detection, privacy in AI use, and responsible deployment boundaries.
Exercise: Ethical Scenarios
When to use AI, and when not to. Map AI integration into three real tasks — heavy use, selective use, no use — with verification and risk assessment.
Exercise: AI Integration Map
Minimum viable technical literacy: read AI-generated code, spot problems, interpret errors, and ask for help — without being a developer.
Exercise: Code Reading Workshop
Complete your AI Workflow Audit portfolio artifact. Review your First Map from Session 1, document growth, and assemble the final portfolio piece.
Exercise: AI Workflow Audit
Crafting effective prompts, multi-turn refinement, context management, output format specification.
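A reusable prompt built from labeled components might look like the sketch below. The five component names here (role, task, context, constraints, output format) are illustrative assumptions, not the course's official framework.

```python
# Minimal sketch of a reusable prompt template.
# The five component names are illustrative assumptions,
# not the course's official list.
def build_prompt(role, task, context, constraints, output_format):
    """Assemble a prompt from five labeled components."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="You are a careful technical editor.",
    task="Summarize the attached meeting notes.",
    context="Notes are from a weekly product sync.",
    constraints="Under 150 words; flag any unverifiable claims.",
    output_format="Bullet points with a one-line summary on top.",
)
```

Because the template is a plain function, it can be retested with different inputs — the same habit the exercises encourage.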
Detecting hallucination, verifying facts against sources, assessing quality and completeness.
Knowing when, where, and how to insert AI into professional workflows; calibrating AI use by task type.
Evaluating new tools, building a personal tool stack, adapting as tools evolve, tracking emerging capabilities.
Bias recognition in AI output, disclosure norms (when to tell stakeholders AI was used), data privacy (never paste PII/PHI without safeguards), responsible deployment boundaries.
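As one flavor of the "never paste PII without safeguards" habit, here is a minimal illustrative sketch that scrubs obvious email addresses and US-style phone numbers before text is shared with an AI tool. Real PII/PHI safeguards require far more than regexes; the patterns and placeholder names below are assumptions for illustration only.

```python
import re

# Illustrative sketch only: redact obvious emails and US-style phone
# numbers before pasting text into an AI tool. Real PII/PHI handling
# needs proper tooling and policy, not two regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(text):
    """Replace matched email and phone patterns with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

sample = "Reach Dana at dana@example.com or 555-123-4567."
print(redact(sample))  # Reach Dana at [EMAIL] or [PHONE].
```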
Reading code at a conceptual level, understanding AI-generated code output, interpreting error messages, understanding version control concepts as a collaborator.
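The kind of code reading this skill targets can be sketched with a hypothetical AI-generated Python snippet containing a classic subtle bug — a mutable default argument — followed by a corrected version. The function names are invented for illustration.

```python
# Hypothetical AI-generated function with a subtle bug:
# the mutable default argument [] is created once and
# shared across every call that omits `tags`.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag("draft")   # ["draft"]
second = add_tag("final")  # ["draft", "final"] — not a fresh list!

# Corrected version: create a new list on each call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

Spotting this kind of issue — and asking the AI to explain or fix it — is the level of literacy described above, not writing the code from scratch.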
AI Workflow Audit — Document your AI-assisted workflow for a real task from your life or target career: the prompts used, output evaluation decisions, corrections made, ethical considerations, and a reflection on where AI helped vs. hindered. Includes a comparative evaluation of 2-3 AI tools for the task with a recommendation.
Anthropic's AI assistant for writing, analysis, coding, and research tasks with strong reasoning capabilities.
OpenAI's conversational AI for drafting, brainstorming, and general-purpose assistance across professional tasks.
AI coding assistant integrated into code editors. Used in demonstration mode to understand AI-generated code output.
AI image generation tools. Used in demonstration mode to understand the AI creative tool landscape.
Spreadsheet tool used for data tasks, analysis exercises, and organizing AI evaluation results.
Read-only exploration of code environments to build conceptual understanding of AI-generated code.
Your first 2 sessions are free. No credit card. Start building AI fluency today.
Start Session 1 Free