
The LLM Use Case Framework: Simplifying Generative AI with Six Categories

A practical framework categorizing LLM use cases into six types to streamline generative AI workflows.

Keeping track of every application, announcement, and breakthrough in generative AI is hard. New releases from OpenAI, Microsoft, and others are exciting, but the pace can be overwhelming. How do you keep up and make the most of these tools?

A clear framework can help. By categorizing large language model (LLM) use cases into six types—Expansion, Compression, Conversion, Seeker, Action, and Reasoning—you can simplify your workflows. This framework is practical, easy to use, and makes navigating the world of generative AI much smoother.


Why Categorize? The Case for Simplicity

Why does categorizing your AI work matter? Experimentation is important, but a clear structure can save you time and effort. Here’s how:

  1. Faster Decisions: Each category requires different tools and strategies. Categorizing helps you choose the right one quickly.

  2. Simpler Prompts: Knowing the category of your task makes it easier to design effective prompts.

  3. Reusable Benchmarks: You can measure success differently for each category. This makes refining your approach easier.

  4. Better Workflows: When building AI agents, clear categories make designing workflows simpler.


The Six Categories of LLM Use Cases

1. Expansion

Expansion prompts generate larger outputs from small inputs. They are ideal for creating content, learning new ideas, or brainstorming.

Use Cases:

  • Writing blog posts, essays, or documentation
  • Brainstorming ideas
  • Explaining complex concepts in detail

Example Prompt: Write the introduction to a blog post about the impact of AI on software development.

2. Compression

Compression prompts shrink large amounts of information into concise summaries. These prompts help you quickly extract key points.

Use Cases:

  • Summarizing research papers or meeting notes
  • Condensing blog posts into bullet points
  • Extracting highlights from documents

Example Prompt: Summarize the latest updates from the Gemini 2.0 release in three bullet points.

3. Conversion

Conversion prompts change information from one format to another. The core data stays the same, but the format shifts.

Use Cases:

  • Turning natural language queries into code
  • Translating between programming languages
  • Formatting data for APIs

Example Prompt: Convert this natural language query into an SQL statement: “Show all customers and their total spending in the last 30 days.”
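
To make this concrete, here is what wiring a conversion prompt into code might look like. This is a minimal sketch that assumes the OpenAI Python SDK and a hypothetical customers/orders schema; swap in your own model, tables, and column names.

    from openai import OpenAI  # assumes the OpenAI Python SDK is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "Show all customers and their total spending in the last 30 days."

    # A conversion prompt: the information stays the same, only the format
    # changes (natural language -> SQL). The schema below is hypothetical.
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Convert the user's question into a single SQL query. "
                    "Tables: customers(id, name), orders(id, customer_id, amount, created_at). "
                    "Return only the SQL, no explanation."
                ),
            },
            {"role": "user", "content": question},
        ],
    )

    print(response.choices[0].message.content)
    # Expected shape of the output (not guaranteed verbatim):
    # SELECT c.name, SUM(o.amount) AS total_spending
    # FROM customers c JOIN orders o ON o.customer_id = c.id
    # WHERE o.created_at >= NOW() - INTERVAL '30 days'
    # GROUP BY c.name;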

4. Seeker

Seeker prompts help you find specific information. They focus on retrieving precise data instead of summarizing large volumes.

Use Cases:

  • Querying databases or knowledge bases
  • Extracting specific data points from reports
  • Searching for answers in documents

Example Prompt: Find the best-performing product from this sales report for Q3.

5. Action

Action prompts execute commands with real-world effects. They trigger workflows or tool calls.

Use Cases:

  • Generating git commands
  • Triggering API calls
  • Running predefined scripts

Example Prompt: Generate the git commands needed to commit and push recent changes to the main branch.
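
Because Action prompts have real-world side effects, it pays to put a confirmation step between the model and execution. The sketch below is a hypothetical pattern: generate_git_commands stands in for whatever LLM call you use, and the point is the human-in-the-loop gate, not a production tool.

    import subprocess

    def generate_git_commands() -> list[str]:
        # Placeholder for an LLM call that returns shell commands as strings.
        # In practice, this would send the Action prompt above to your model.
        return [
            "git add -A",
            'git commit -m "Describe recent changes"',
            "git push origin main",
        ]

    commands = generate_git_commands()
    print("Proposed commands:")
    for cmd in commands:
        print(" ", cmd)

    # Human-in-the-loop gate: never run model-generated commands blindly.
    if input("Run these commands? [y/N] ").strip().lower() == "y":
        for cmd in commands:
            subprocess.run(cmd, shell=True, check=True)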

6. Reasoning

Reasoning prompts handle complex decision-making and problem-solving. They analyze multiple inputs to provide insights or recommendations.

Use Cases:

  • Evaluating trade-offs in technical approaches
  • Planning workflows or strategies
  • Conducting risk assessments

Example Prompt: Evaluate three approaches for implementing user authentication: custom JWTs, classic OAuth, and Firebase Auth. Recommend the best option for a team with minimal backend expertise.


Applying the Framework

Categorizing LLM use cases doesn’t just make your work more efficient. It also helps you unlock new possibilities:

  • Agentic Workflows: Build AI agents by chaining prompts across categories. For example, a Seeker prompt could find data, a Reasoning prompt could analyze it, and an Action prompt could execute the next steps (see the sketch after this list).
  • Reusable Tools and Patterns: Grouping similar tasks lets you standardize tooling, benchmarks, and metrics for each category.
  • Faster Development Cycles: Clear categories help you quickly identify the best tools or models (e.g., GPT-4, Gemini, Llama) for your needs.
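
To make the chaining idea concrete, here is a minimal sketch of a Seeker → Reasoning → Action pipeline. The ask_llm function is a placeholder for whichever model API you prefer, and the prompts are illustrative rather than prescriptive.

    def ask_llm(prompt: str) -> str:
        # Placeholder: call your preferred LLM API (OpenAI, Gemini, a local
        # model, ...) and return its text response.
        raise NotImplementedError

    def agent_pipeline(sales_report: str) -> str:
        # Seeker: pull a specific fact out of a larger document.
        best_product = ask_llm(
            f"Find the best-performing product in this Q3 sales report:\n{sales_report}"
        )

        # Reasoning: analyze the retrieved fact and decide what to do next.
        plan = ask_llm(
            f"The best-performing product in Q3 was: {best_product}. "
            "Recommend one concrete next step to capitalize on this."
        )

        # Action: turn the decision into an executable command or API call.
        action = ask_llm(
            f"Write the API call or command needed to carry out this step: {plan}"
        )
        return action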

Conclusion: Build Smarter, Not Harder

Generative AI is advancing rapidly. By using this six-category framework, you can simplify your workflows and focus on creating value. Whether you’re working with OpenAI’s latest model, Microsoft’s breakthroughs, or another tool, this approach will make your efforts more effective.

What do you think? Are there other categories you use in your workflows? Drop a comment to share your thoughts!

Stay curious and keep building!


In a way, this article was inspired by videos from Vicky Zhao, who is a master of Framework Thinking:

https://www.youtube.com/@VickyZhaoBEEAMP
