Master prompt engineering in under 8 minutes

INSIDE: Prompt Engineering, Single-Shot Prompting, LLMs

Hey everyone,

If you’re new here, welcome to Thoughts by Adel, where I brain dump interesting stuff I’ve been working on or thinking about over the last week.

In today's blog, let's talk about how I mastered prompt engineering.

If you've checked out my previous blogs, especially the AI projects I've built, they're all LLM-based. This means that the quality of those projects heavily relied on how good my prompts were.

Nail prompt engineering, and you're golden. Miss the mark, and you can forget about using AI voice systems, creating AI agents like GPTs, or integrating AI tasks into automations. None of it will work.

So, let’s dive into the strategies I used to vastly improve the quality of my prompts.

Different Prompting Techniques

First things first—conversational prompting ≠ single-shot prompting. Writing prompts for ChatGPT should differ from writing prompts for your AI projects.

Conversational prompting works when you have the luxury of follow-ups and human tweaking. It’s forgiving in the sense that you can start with an unstructured, low-quality prompt and still have all the chances in the world to correct it.

Single-shot prompting? It’s a one-and-done deal—no second chances, no human intervention. The prompt only runs once per task or action, with virtually no room for correction. This is the kind of prompting you should use for your AI projects.

If you try using conversational prompting techniques for single-shot tasks, you’ll end up with low-quality results. On the other hand, using single-shot techniques for conversational tasks can be overkill and unnecessarily complex.

Since single-shot prompting is crucial for AI projects, we'll focus on optimizing it in this blog.
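
To make that concrete, here's what a single-shot prompt typically looks like when it's wired into a project rather than typed into a chat window. This is just a minimal sketch, assuming you're using OpenAI's official Python SDK with an API key in your environment; the model name is a placeholder, not a recommendation.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_single_shot(prompt: str) -> str:
    """One prompt in, one answer out: no follow-ups, no human in the loop."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Whatever this single call returns is what your automation ships with,
# so everything the model needs has to live inside the prompt itself.
print(run_single_shot("Write a LinkedIn connection request."))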

Components of a Good Prompt

A solid prompt consists of five main components:

  1. Role Prompting

  2. Chain of Thought Prompting

  3. Emotion Prompting

  4. Few-Shot Prompting

  5. Lost in the Middle

Now let's break each of them down using the example of writing a LinkedIn connection request message.

1/ Role Prompting

Role prompting involves assigning a clear role to the LLM. This helps the model understand its function and purpose better. When the LLM knows its specific role, it can tailor its responses to fit that role, leading to more relevant and accurate results.

Example:

  • Before: “Write a connection request.”

  • After: “You are a job seeker with extensive expertise in crafting engaging and personalized outreach messages for LinkedIn connection requests. Write a connection request.”

Why it works: Assigning a role helps the model focus on the specific context and nuances needed for the task, which enhances the relevance and quality of the output. Learn more.
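
Side note: if you're calling a chat-style API instead of typing into the ChatGPT UI, the role usually goes into the system message. A minimal sketch, again assuming OpenAI's Python SDK and a placeholder model name:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROLE = (
    "You are a job seeker with extensive expertise in crafting engaging and "
    "personalized outreach messages for LinkedIn connection requests."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": ROLE},  # the assigned role
        {"role": "user", "content": "Write a connection request."},
    ],
)
print(response.choices[0].message.content)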

2/ Chain of Thought Prompting

Chain of thought prompting involves giving the LLM step-by-step instructions on how to approach the task. This method helps the model follow a logical sequence, improving the accuracy and quality of the output, especially for complex tasks.

Example:

  • Before: “You are a job seeker with extensive expertise in crafting engaging and personalized outreach messages for LinkedIn connection requests. Write a connection request.”

  • After: “You are a job seeker with extensive expertise in crafting engaging and personalized outreach messages for LinkedIn connection requests.

    Write a LinkedIn connection request message to connect with a potential hiring manager from XYZ company. Follow these steps:

    1. Start by mentioning a mutual interest in AI technology.

    2. Briefly describe your professional background.

    3. Highlight why you want to connect with this person.

    4. Suggest a meeting to discuss potential collaboration.

    5. Close with a friendly sign-off.”

Do note that this isn’t required for simple tasks! Straightforward tasks usually do just fine with straightforward prompts.

Why it works: Step-by-step instructions help the model understand the process and structure it needs to follow, resulting in more coherent and well-structured responses, as opposed to letting the LLM figure the steps out all on its own. Learn more.

Alright, a quick side note from the author: It might seem obvious that adding more context and detail improves a prompt’s performance. After all, we’re just giving the LLM more to work with to meet our needs. But get ready to be surprised: the next three components will completely change how you think about prompting!

3/ Emotion Prompting

Emotion prompting involves adding phrases that convey the importance and emotional tone of the task. This can enhance the model’s engagement and improve the quality of the response.

Example:

  • “This message is crucial for me to build a valuable professional connection.”

  • “Your careful crafting of this message is highly appreciated.”

Why it works: Surprisingly enough, adding emotional context makes the AI take the task more seriously, resulting in more thoughtful and carefully crafted responses. Learn more.

4/ Few-Shot Prompting

Few-shot prompting involves providing the model with a few examples of the desired output. This helps the model understand the format and content you expect without cluttering the chain-of-thought section of your prompt.

Example:

  • Output 1: “Hi [Name], I noticed we both share a passion for AI technology. I'd love to connect and discuss how we can collaborate. Best, Adel”

  • Output 2: “Hello [Name], your recent post about AI in business really resonated with me. Let's connect and explore potential synergies. Regards, Adel”

  • Output 3: “Hi [Name], as a fellow AI enthusiast, I believe we could have some insightful discussions. Let's connect. Cheers, Adel”

Why it works: Providing examples gives the model a clear understanding of what you want. This significantly improves its ability to produce accurate and relevant responses. Learn more.
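
If your prompt is assembled by code (as it usually is in an automation), it helps to keep the examples as plain data and render them into their own section, so they never bleed into the step-by-step instructions. A tiny sketch in plain Python, using two of the example outputs from above:

# Keep few-shot examples as data, rendered into their own prompt section,
# separate from the chain-of-thought steps.
EXAMPLE_OUTPUTS = [
    "Hi [Name], I noticed we both share a passion for AI technology. "
    "I'd love to connect and discuss how we can collaborate. Best, Adel",
    "Hello [Name], your recent post about AI in business really resonated "
    "with me. Let's connect and explore potential synergies. Regards, Adel",
]

examples_section = "# Examples\n" + "\n".join(
    f'- Output {i}: "{text}"' for i, text in enumerate(EXAMPLE_OUTPUTS, start=1)
)
print(examples_section)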

5/ Lost in the Middle

The “lost in the middle” effect suggests that LLMs perform better when critical information is placed at the beginning or end of the input context. The beginning of the prompt is usually where the role and tasks go. The end, on the other hand, is where we remind the model of the most important points in the prompt.

Example:

  • “As a recap, write a LinkedIn connection request message to a hiring manager. Make sure to describe our mutual interests in AI. Be friendly with your tone, but maintain a hint of professionalism.”

Why it works: Placing important information at the start and end of the prompt ensures the model pays more attention to these details, resulting in better comprehension and output quality. Learn more.

The Perfect Prompt

Now that we’ve learned all about improving our single-shot prompts, how should we glue things together?

Specifically for GPT models, although there are no studies out there crowning Markdown as king, I still end up using it a lot. For one, it’s both highly human-readable and machine-readable. Plus, I’m an engineer who likes to document stuff, and what better way to document prompts than Markdown?

Cool, so combining all the components above, here’s the final prompt we should end up with in Markdown format.

# Role
You are a job seeker with extensive expertise in crafting engaging and personalized outreach messages for LinkedIn connection requests.

# Task
Write a LinkedIn connection request message to connect with a potential hiring manager from XYZ company. Follow these steps:
1. Start by mentioning a mutual interest in AI technology.
2. Briefly describe your professional background.
3. Highlight why you want to connect with this person.
4. Suggest a meeting to discuss potential collaboration.
5. Close with a friendly sign-off.

# Context
- This message is crucial for me to build a valuable professional connection.
- Your careful crafting of this message is highly appreciated.

# Examples
- Output 1: "Hi [Name], I noticed we both share a passion for AI technology. I'd love to connect and discuss how we can collaborate. Best, Adel"
- Output 2: "Hello [Name], your recent post about AI in business really resonated with me. Let's connect and explore potential synergies. Regards, Adel"
- Output 3: "Hi [Name], as a fellow AI enthusiast, I believe we could have some insightful discussions. Let's connect. Cheers, Adel"

# Recap
As a recap, write a LinkedIn connection request message to a hiring manager. Make sure to describe our mutual interests in AI. Be friendly with your tone, but maintain a hint of professionalism.
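
If you’d rather build and send this prompt from code instead of pasting it somewhere, here’s a minimal sketch of how the five components can be glued together and fired off as a single-shot call. It assumes OpenAI’s Python SDK, a placeholder model name, and shortened stand-ins for the full section texts shown above; note that the role and task go first and the recap goes last, per the lost-in-the-middle idea.

from openai import OpenAI

# Shortened stand-ins; in practice, use the full section texts from above.
SECTIONS = {
    "Role": "You are a job seeker with extensive expertise in crafting "
            "personalized LinkedIn connection request messages.",
    "Task": "Write a LinkedIn connection request message to a potential "
            "hiring manager at XYZ company.",
    "Context": "- This message is crucial for me to build a valuable professional connection.\n"
               "- Your careful crafting of this message is highly appreciated.",
    "Examples": '- Output 1: "Hi [Name], I noticed we both share a passion for AI technology..."',
    "Recap": "As a recap, write a friendly but professional LinkedIn connection "
             "request to a hiring manager, mentioning our mutual interest in AI.",
}

# Role and task come first, the recap comes last (lost in the middle).
prompt = "\n\n".join(f"# {name}\n{body}" for name, body in SECTIONS.items())

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)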

Yep, there you have it! By mastering the five components of a good prompt, you can significantly enhance the effectiveness of your single-shot prompts. Go ahead, try it, and as always, let me hear your thoughts and experiences!

That’s all for this week…but one more thing. If you’re enjoying this, can you do me a favor and forward it to a friend? Thanks.