The recent rise of generative AI tools has elevated the discussion around prompt engineering. Prompt engineering is the art and science of giving generative AI models the right prompts to get the output you want. A broader definition also includes the ability to choose the right model for the right purpose and to decide whether to use generative AI in the first place.
The value you can gain from generative AI — both as a company and as an individual knowledge worker — is directly correlated with your prompt engineering ability. However, there has been a lot of debate lately about what role this skill will play in the future of work. Some argue that a small group of people who become experts at prompt engineering now will dominate the labor market tomorrow. But I see it unfolding the way Zoom and Google search did: both became baseline technology skills for any knowledge worker.
There might be a very small group of specialized prompt engineers or highly paid generative AI specialists with deep industry-specific expertise, but most people will simply add prompting to their regular toolbox without being paid explicitly for it. This is partly because the organizations behind AI research are incentivized to drive broad adoption and accessibility; that is why ChatGPT was released in the first place. Just like the Internet, it will only become easier to use. In many cases, users will not even know generative AI is being used.
Prompt Engineering Principles
In the spirit of bringing everyone on board the generative AI movement, here are some value-added prompt engineering principles you can apply in your company. They come from a mix of sources: my personal experience applying generative AI tools at my startup, advice from friends building AI companies, a ChatGPT prompting course run in partnership with OpenAI that I just finished, a LinkedIn feed that has been almost exclusively AI-related for the past few months, and this excellent podcast episode from Andreessen Horowitz and Guy Parsons, a designer who compiled a book on prompt engineering.
- Prompting is a highly iterative process. As with building a startup, the best strategy is to start as quickly as possible. Even if you’re not sure exactly what output you want, begin with a prompt that heads in the right direction; you can refine your follow-up prompts based on the model’s first response. Meticulously planning out your early prompts is like renting an office and buying a fancy espresso machine for your startup before you have a single customer.
- Write clear and specific prompts — that doesn’t mean short. Your prompts can be entire paragraphs with multiple sections, specific sub-requests, or examples.
- Use delimiters like quotes, brackets, or dashes to help the model tell the parts of your prompt apart. For example, you can summarize an article by pasting it into ChatGPT or an API call to GPT-3, but you should mark the quoted text with a delimiter and leave your questions about it unmarked. This helps the model parse your input (the first sketch after this list shows delimiters and a structured-output request together).
- Ask for a structured and specific output by telling the model exactly what form you want the response to take. If you’re doing market research on competitors, for instance, ask for a list of 30 bullet points in which each bullet contains a competitor’s name, employee count, and monthly recurring revenue (MRR), separated by a delimiter.
- Be flexible on details. Sometimes models will not accommodate all your given rules. GPT-3, for example, often will not adhere to exact word count limits. In these cases, you need some flexibility, perhaps by asking for a sentence count limit instead.
- Few-shot prompting is when you give one or more completed examples and ask the model to continue the pattern with different content (see the second sketch after this list). Try this technique if the model is not following the guidelines you set in your input. That problem can arise either because your desired output is easier to show than to describe in words or because the model doesn’t have much training data on the topic.
- To reduce hallucinations, or outputs that are blatantly false but sound convincing, ask the model to pull actual quotes or pieces of information from a source document and ask for the source so you can fact-check the output. This is especially relevant if you’re using the AI-powered Bing or ChatGPT plugins.
- Play around with temperature, which is a large language model's (LLM) degree of randomness or freedom of exploration. For creative applications, a higher temperature can yield more original results. This parameter is accessible through API calls to models like GPT-3.5, not through the ChatGPT web interface.
- Fine-tuned, application-specific LLMs beat base models like GPT-4. Current AI systems, which are nowhere near full-fledged consciousness, still trade breadth against depth of capability. Fine-tuning an LLM on custom data sets for your industry-specific application will yield better results, and you can use base models, like the GPT-n series, as a starting point.
- It’s easy to get outputs 80% right, but nailing the final 20% is often impossible. Editing small details within a generated output is hard; compare touching up an image in Photoshop with trying to coax the same fix out of Midjourney. This is one reason why companies need to keep a human in the loop wherever generative AI sits in a critical workflow. If generated assets are being posted to company social media accounts, for example, there should be human oversight, and that human should be able to make manual tweaks when needed. Oftentimes, generative AI is just a starting point. This is especially true if you’re generating code to be used in production.
- Leverage resources like PromptBase, AIPRM, and Parsons’ book on prompting to get inspiration, find effective prompts for your application or industry, and supercharge your use of generative AI.
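To make the delimiter, structured-output, and temperature principles concrete, here is a minimal sketch using the openai Python package (the pre-1.0 ChatCompletion interface). The model name, the research notes, and the requested format are placeholders for illustration, not a recommendation.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder research notes -- in practice, paste in your own raw material.
notes = """Competitor A raised a Series B and lists ~120 employees on LinkedIn.
Competitor B is bootstrapped, roughly 15 people, MRR rumored around $200k.
Competitor C just launched; team of 4, no public revenue figures."""

# Dashes delimit the pasted notes so the model can separate them from the
# instructions, and the instructions spell out the exact output format.
prompt = f"""
Using only the research notes delimited by --- below, produce a bullet list
in which each bullet has the form: competitor name | employee count | MRR.
Write "unknown" for anything the notes do not state.

---
{notes}
---
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,         # low temperature: extraction task, not a creative one
)

print(response.choices[0].message["content"])
```

Raising the temperature toward 1.0 is the knob to turn when you want the model to explore, for example when brainstorming taglines rather than extracting facts.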
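And here is a similarly minimal few-shot sketch under the same assumptions: two completed examples establish the pattern, and the model is asked to finish the open one. The product notes are invented for illustration.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Two worked examples show the desired style; the final item is left open
# for the model to complete in the same way.
few_shot_prompt = """Rewrite each product note as a one-line, customer-facing benefit.

Note: Added SSO support for enterprise accounts.
Benefit: Your whole team can now log in securely with the credentials they already use.

Note: Cut report export time from 40 seconds to 5 seconds.
Benefit: Exports that used to take most of a minute now finish in seconds.

Note: Dashboard now auto-refreshes every 60 seconds.
Benefit:"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0.7,  # a bit more freedom is fine for copywriting
)

print(response.choices[0].message["content"].strip())
```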
Here are some additional in-depth prompt engineering principles described in an enterprise context, courtesy of Microsoft.
Evolving AI Skillsets
The current definition of prompt engineering is certainly not final. Companies of all shapes and sizes must keep adapting their teams’ skills around new tools and trends. The possibilities are many, whether that means encouraging your engineers to build a ChatGPT plugin in a weekend hackathon, as we recently did at the incubator where I work, or supercharging your growth team with generative AI-powered A/B testing.
One tool to highlight is AutoGPT, which is essentially GPT-3.5 paired with a bot that autonomously completes tasks. A friend of mine, for example, recently ordered a pizza using only the AutoGPT interface. AutoGPT takes a user’s high-level instruction — like ordering a pizza, writing a business plan, or booking a hotel — and uses the GPT-3.5 LLM in combination with various programs to complete the task, asking follow-up questions when it needs information from you, such as your login. This is similar to ChatGPT plugins, which let users access real-time data streams, perform functions with existing services, and make transactions directly through the ChatGPT interface.
Being a new project, AutoGPT comes with many limitations; its breadth and flexibility are offset by its limited capability. For now, ChatGPT plugins such as those from Instacart or Expedia, which have direct access to company databases, seem like the better option for consumers. Nonetheless, AutoGPT highlights an exciting trend: combining traditional software stacks with natural language processing. That combination is powering a new wave of convenience for consumers and internal teams.
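None of this is AutoGPT’s actual code, but the underlying pattern is simple enough to sketch: a loop in which the LLM picks the next action as structured output, ordinary software executes it, and the result is fed back in until the goal is met. The tool names, prompt, and model below are hypothetical placeholders, again assuming the pre-1.0 openai package.

```python
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical tools the agent may call. In a real system these would wrap
# actual services (search, ordering, messaging, and so on).
TOOLS = {
    "ask_user": lambda question: input(f"{question} "),
    "note": lambda text: f"Noted: {text}",
    "finish": lambda summary: summary,
}

SYSTEM = (
    "You are an agent working toward a goal. Reply ONLY with JSON of the form "
    '{"tool": "ask_user" | "note" | "finish", "argument": "<string>"}. '
    "Use ask_user to get missing information, note to record intermediate "
    "results, and finish when the goal is complete."
)

def run_agent(goal, max_steps=8):
    history = []
    for _ in range(max_steps):
        messages = [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Goal: {goal}\nHistory so far: {json.dumps(history)}"},
        ]
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages, temperature=0
        ).choices[0].message["content"]
        step = json.loads(reply)               # production code would retry on malformed JSON
        result = TOOLS[step["tool"]](step["argument"])
        history.append({"action": step, "result": result})
        if step["tool"] == "finish":
            return result
    return "Stopped: step limit reached."

print(run_agent("Plan a pizza order for a team of 10 with two vegetarians."))
```

Most of the engineering in real agent frameworks goes into what this sketch glosses over: validating the model’s chosen action, recovering from malformed output, and deciding which tools are safe to expose.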
Final Thoughts
Altogether, prompting is an amazing skill to have. However, as these tools become more widespread and easier to use, I believe prompt engineering will not remain a rare skill. It probably won’t give anyone a major advantage in labor markets five or 10 years down the line but will, rather, become a baseline necessity. As such, it is vital for companies to upskill their workforces as we continue into the AI era. The internal innovation, and even product development, that comes from doing so won’t hurt either.