# M3 - Working with Generative AI
# Learning Objectives
These topics will be covered in this module’s knowledge check and entry ticket.
- Understand how to use large language models (LLMs) in applications by calling an API endpoint (a minimal example follows this list).
- Describe the role of Large Language Models (LLMs) as a category of generative AI.
- Understand the basic process of how LLMs produce text using the attention mechanism.
- Identify common strengths and weaknesses of LLMs.
- Explore popular web interfaces for interacting with LLMs, such as ChatGPT, Gemini, and Claude.
- Compare the performance of different LLMs based on factors such as quality, speed, and price.
- Understand the concept of prompt engineering and its role in improving LLM performance for specific use cases.
- Understand and apply the three fundamental elements of successful prompt engineering: clarity, context, and scope.
- Identify ways in which various kinds of AI can be incorporated in applications.
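For the API objective above, here is a minimal sketch of calling an LLM over plain HTTP. It assumes the Google Gemini REST endpoint and an API key stored in a `GEMINI_API_KEY` environment variable; the model name is only an example, and other providers follow a similar request/response pattern.

```python
# Minimal sketch: calling an LLM API endpoint over plain HTTP.
# Assumes the Gemini REST API and an API key in the GEMINI_API_KEY
# environment variable; the model name below is illustrative.
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]
MODEL = "gemini-1.5-flash"  # example model; check the provider's docs
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

def ask_llm(prompt: str) -> str:
    """Send a single prompt to the model and return the generated text."""
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    response = requests.post(URL, json=payload, timeout=30)
    response.raise_for_status()
    data = response.json()
    # The reply text is nested inside the first candidate's content parts.
    return data["candidates"][0]["content"]["parts"][0]["text"]

if __name__ == "__main__":
    print(ask_llm("Explain what a large language model is in one sentence."))
```

Other providers (OpenAI, Anthropic, Groq, etc.) expose similar endpoints; mainly the URL, authentication header, and JSON shape differ.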
# How Large Language Models Work
# Working with Generative AI and LLMs
# Chat Web UIs
# Platforms / APIs
# LLM Comparison Tool
# Calling LLM APIs
# Example of an AI-Driven Application
Note: This application uses Google and Gemini, but all of the major cloud providers have similar tools. You can use your Green River student account to sign up for most of these without credit card information and get some free credits to experiment with. You can also use platforms like Groq that offer free services. When building your own applications, I would encourage using free-tier services so you don't run out of credits partway through development.
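As a sketch of what a free-tier setup might look like, the snippet below calls Groq's OpenAI-compatible chat completions endpoint. The model name and environment variable are assumptions; check Groq's documentation for the models currently available on the free tier.

```python
# Sketch of a free-tier LLM call using Groq's OpenAI-compatible endpoint.
# Assumes an API key in the GROQ_API_KEY environment variable; the model
# name below is illustrative and may change over time.
import os
import requests

API_KEY = os.environ["GROQ_API_KEY"]
URL = "https://api.groq.com/openai/v1/chat/completions"

def chat(prompt: str) -> str:
    """Send one user message and return the model's reply text."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    payload = {
        "model": "llama-3.1-8b-instant",  # example model on Groq
        "messages": [{"role": "user", "content": prompt}],
    }
    response = requests.post(URL, headers=headers, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Give me one idea for a small AI-driven class project."))
```

Because the endpoint is OpenAI-compatible, the same request shape works if you later switch to another provider; only the base URL, API key, and model name need to change.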