AI and Copilots

How ‘Prompt Engineering’ Optimizes Generative AI Output and Impacts the Future of Work

By Toni Witt | May 23, 2023 | 7 Mins Read

The recent rise of generative AI tools has intensified the discussion around prompt engineering. Prompt engineering is the art and science of giving generative AI models the right prompts to receive the output you want. A broader definition could also include your ability to choose the right models for the right purposes and to determine whether to use generative AI in the first place.

The value you can gain from generative AI — both as a company and as an individual knowledge worker — is directly correlated to your prompt engineering ability. However, there has been a lot of debate lately about what role this skill will play in the future of work. Some people argue that a small subset of people who become experts at prompt engineering now will dominate the labor market tomorrow. But I see it unfolding much the way using Zoom or searching on Google became a technology cornerstone for any knowledge worker.

There might be a very small group of specialized prompt engineers or highly-paid generative AI specialists with deep industry-specific expertise, but most people will add it to their regular toolbox without being paid explicitly for it. This is partly because the organizations behind AI research are incentivized to empower broad adoption and accessibility; this is why ChatGPT came out in the first place. Just like the Internet, it will only become easier to use. In many cases, users will not even know generative AI is being used.

Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.

Prompt Engineering Principles

In the spirit of bringing everyone on board the generative AI movement, here are some value-added prompt engineering principles you can apply in your company. These principles come from a mix of my personal experience having applied generative AI tools in my startup, advice from friends building AI companies, a ChatGPT prompting course run in partnership with OpenAI that I just finished, my LinkedIn feed that’s been almost exclusively AI-related for the past few months, and this excellent podcast episode from Andreessen Horowitz and Guy Parsons, a designer who compiled a book on prompt engineering.

  1. Prompting is a highly iterative process. Just like building a startup, the best strategy for prompting is to start as quickly as possible. Even if you’re not sure exactly what output you want, start with a prompt that heads in the right direction. You can refine your follow-up prompt based on the model’s first response. Meticulously planning out your early prompts is like renting an office and buying a fancy espresso machine for your startup before having a customer.
  2. Write clear and specific prompts — that doesn’t mean short. Your prompts can be entire paragraphs with multiple sections, specific sub-requests, or examples.
  3. Use delimiters like quotes, brackets, or dashes to help the model differentiate each part of your prompt. For example, you can summarize an article by copy-pasting it into ChatGPT or an API call to GPT-3, but you need to mark the quoted text with quotation marks and leave your questions about the text unmarked. This helps the model parse your input.
  4. Ask for a structured and specific output by telling the model exactly what kind of output you want. If you’re doing market research on competitors, for instance, ask for a list of 30 bullet points wherein each bullet contains the name of a competitor separated by their employee count and their monthly recurring revenue (MRR).
  5. Be flexible on details. Sometimes models will not accommodate all your given rules. GPT-3, for example, often will not adhere to exact word count limits. In these cases, you need some flexibility, perhaps by asking for a sentence count limit instead.
  6. Few-shot prompting is when you give a successful example or setup, and ask the model to complete it or repeat it but with different content. Try this technique if the model is not following the guidelines you set in your input. This problem can arise either because your desired output is harder to describe in words than in examples or because the model doesn’t have much training data relating to the topic.
  7. To reduce hallucinations, or outputs that are blatantly false but sound convincing, ask the model to pull actual quotes or pieces of information from a source document and ask for the source so you can fact-check the output. This is especially relevant if you’re using the AI-powered Bing or ChatGPT plugins.
  8. Play around with temperature, which is a large language model's (LLM) degree of randomness or freedom of exploration. For creative applications, a higher temperature can yield more original results. This variable is accessible through API calls to models like GPT-3.5, not through the web ChatGPT interface.
  9. Fine-tuned, application-specific LLMs beat base models like GPT-4. When it comes to current AI systems, which don't have full-fledged consciousness yet, breadth trades off against depth of capability. Fine-tuning an LLM on custom, industry-specific data sets will yield better results for your application. You can use base models, like the GPT-n series, as a starting point.
  10. It’s easy to get outputs 80% right but nailing the final 20% is often impossible. It’s difficult to edit small details within outputs compared to, say, using Photoshop instead of Midjourney. This is one reason why companies need to keep a human in the loop if generative AI is being used in a critical workflow. If generated assets are being posted on company social media accounts, for example, there should be human oversight. That human should also be able to make manual tweaks if needed. Oftentimes, generative AI is just a starting point. This is especially true if you’re generating code to be used in production.
  11. Leverage resources like PromptBase, AIPRM, and Parsons’ book on prompting to get inspiration, find effective prompts for your application or industry, and supercharge your use of generative AI.
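Several of the principles above — clear and specific instructions (2), delimiters (3), a pinned output format (4), and an explicit temperature (8) — can be combined in a single prompt-building helper. This is a minimal sketch, not the author's own workflow; the commented-out API call follows the OpenAI Python library's chat-completions convention, and the model name is an assumption you would swap for whatever you actually use.

```python
# Sketch of principles 2-4: delimit the source text so the model can tell
# instructions apart from content, and pin down the exact output format.

def build_summary_prompt(article_text: str, num_bullets: int = 5) -> str:
    """Build a prompt with a delimited source text and a structured output request."""
    return (
        f"Summarize the article delimited by triple quotation marks in "
        f"exactly {num_bullets} bullet points. Each bullet should be one "
        f"sentence and start with '- '.\n\n"
        f'"""{article_text}"""'
    )

prompt = build_summary_prompt("Generative AI adoption is accelerating across industries...")

# Hypothetical API call (requires an API key; shown for illustration only):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",   # assumption: any chat-completion model works here
#     messages=[{"role": "user", "content": prompt}],
#     temperature=0.2,         # principle 8: low temperature -> more deterministic output
# )
# print(response.choices[0].message.content)
```

Keeping the prompt construction in a function like this also makes principle 1 easier: you can iterate on the template in one place and rerun it against the same inputs.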

Here are some additional in-depth prompt engineering principles described in an enterprise context, courtesy of Microsoft.
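Principle 6 above, few-shot prompting, is easiest to see as a chat message list: a couple of worked examples precede the real request, so the model infers format and tone from the examples rather than from a long written description. The sketch below uses the common system/user/assistant message convention; the example products and taglines are invented for illustration.

```python
# Few-shot prompting: two example input/output pairs, then the real request.
# The model completes the pattern established by the examples.

def few_shot_messages(new_product: str) -> list[dict]:
    return [
        {"role": "system", "content": "You write one-line product taglines."},
        # Shot 1: an example input and the desired style of output.
        {"role": "user", "content": "Product: reusable water bottle"},
        {"role": "assistant", "content": "Hydration that never hits the landfill."},
        # Shot 2: a second example reinforces the pattern.
        {"role": "user", "content": "Product: noise-cancelling headphones"},
        {"role": "assistant", "content": "Silence, on demand."},
        # The real request: the model extends the pattern to new content.
        {"role": "user", "content": f"Product: {new_product}"},
    ]

messages = few_shot_messages("solar phone charger")
```

Passing `messages` to a chat-completion endpoint would then yield a tagline in the demonstrated style; if the model drifts from the format, adding a third shot usually helps more than lengthening the instructions.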

Evolving AI Skillsets

The current definition of prompt engineering is certainly not final. Companies of all shapes and sizes must keep adapting their teams' skills to new tools and trends. Whether it's encouraging your engineers to build a ChatGPT plugin through a weekend hackathon, which we recently held at the incubator where I work, or supercharging your growth team with generative AI-powered A/B testing, there are many possibilities.

One tool to highlight is AutoGPT, which is essentially GPT-3.5 paired with a bot that autonomously completes tasks. For example, a friend of mine recently ordered a pizza using only the AutoGPT interface. AutoGPT will take a user's high-level instruction — like ordering a pizza, writing a business plan, or booking a hotel — and use the GPT-3.5 LLM in combination with various programs to complete the task. It will ask follow-up questions to receive the information it needs from you, like your login. This is similar to ChatGPT plugins, which give users access to real-time data streams, perform functions with existing services, and make transactions directly through the ChatGPT interface.

As a new project, AutoGPT comes with many limitations: its breadth and flexibility come at the cost of capability. For now, ChatGPT plugins like those from Instacart or Expedia, which have direct access to company databases, seem like the better option for consumers. Nonetheless, AutoGPT highlights an exciting trend of combining traditional software stacks with natural language processing. This combination is powering a new wave of convenience for consumers and internal teams.
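The AutoGPT pattern described above boils down to a loop: the LLM proposes the next action, a tool executes it, and the result is fed back until the model declares the task done. The toy sketch below illustrates that control flow only — the tool names, the pizza scenario, and the mock "model" are all invented stand-ins, not AutoGPT's actual implementation.

```python
# Toy agent loop: an "LLM" proposes actions, tools execute them, results
# accumulate in a history that informs the next proposal.

def mock_llm(history: list[str]) -> str:
    """Stand-in for a real LLM call: picks the next action from progress so far."""
    if not any("menu" in step for step in history):
        return "TOOL:fetch_menu"
    if not any("ordered" in step for step in history):
        return "TOOL:place_order pepperoni"
    return "DONE"

# Hypothetical tool registry; a real agent would wrap actual integrations.
TOOLS = {
    "fetch_menu": lambda arg: "menu: pepperoni, margherita",
    "place_order": lambda arg: f"ordered: {arg}",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history = [f"goal: {goal}"]
    for _ in range(max_steps):  # cap iterations so the loop can't run away
        action = mock_llm(history)
        if action == "DONE":
            break
        name, _, arg = action.removeprefix("TOOL:").partition(" ")
        history.append(TOOLS[name](arg))
    return history

print(run_agent("order a pizza"))
```

The `max_steps` cap is the important design choice: autonomous loops driven by a model that can always propose another action need an external stopping condition, which is one reason real agent frameworks keep a human in the loop for consequential steps.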

Final Thoughts

Altogether, prompting is an amazing skill to have. However, as tools become more widespread and easier to use, I believe prompt engineering will not remain a rare skill. It probably won't give anyone a major advantage in labor markets five or 10 years down the line but will instead become a baseline necessity. As such, it is vital for companies to upskill their workforces as we continue into the AI era. The internal innovation, and even product development, that comes from doing so won't hurt either.



Toni Witt

Co-founder, Sweet
Cloud Wars analyst

Areas of Expertise
  • AI/ML
  • Entrepreneurship
  • Partners Ecosystem

In addition to keeping up with the latest in AI and corporate innovation, Toni Witt co-founded Sweet, a startup redefining hospitality through zero-fee payments infrastructure. He also runs a nonprofit community of young entrepreneurs, influencers, and change-makers called GENESIS. Toni brings his analyst perspective to Cloud Wars on AI, machine learning, and other related innovative technologies.

