Writing a Smart Email Responder with GPT-3 and C#

In this post I’ll show you how to create your own smart email responder using the GPT-3 API.

If you’re not familiar, GPT-3 is a huge model trained for the purpose of generating text. It was trained by OpenAI and is available for (paid) public access at https://beta.openai.com/. At the time of this writing each new account gets $18 of credit which will get you a LOT of text generation. To follow along with this blog post you’ll want to start by signing up for an OpenAI account and finding your API key.

It should be noted that this model is only accessible through the API and cannot be hosted in your own environment. If that is something that interests you then you might want to check out OPT-175B, a competing model to GPT-3 which can be downloaded and hosted on capable hardware.

For those of you following along, I am using Visual Studio 2022 Community and starting off with a new C# Azure Function project. In the project template screen I’m choosing .NET 6 with a Timer trigger. With the default schedule of 0 */5 * * * * the function will execute automatically every 5 minutes.

So what do we want the function to do each time it executes? Let’s start with an outline:

  1. Read some configuration like email details and OpenAI API key (keep this out of the source code)
  2. Pre-load some FAQs. I’ll be hard coding this part in. In a production app you’ll probably want to get this from a database or something along those lines.
  3. Connect to the email inbox and look for any new emails.
  4. Every time we get a new email, run it through a process like this:
    1. Does the FAQ have an answer to this question already?
    2. If yes, which FAQ question is the right one?
    3. Formulate an email reply from the FAQ answer.
    4. Reply to the sender
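The steps in the outline above can be sketched as one orchestrating method. This is just an illustration of the flow; every name here is a placeholder standing in for the real services built later in the post:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical outline of the per-execution flow; all names are placeholders.
public static class ResponderOutline
{
    // newEmails: step 3 (messages pulled from the inbox)
    // faqs: step 2 (question -> answer)
    // matchFaq: steps 4.1-4.2 (GPT-3 picks a matching FAQ question, or null)
    public static List<string> RunOnce(IEnumerable<string> newEmails,
                                       Dictionary<string, string> faqs,
                                       Func<string, string?> matchFaq)
    {
        var replies = new List<string>();
        foreach (var email in newEmails)
        {
            var question = matchFaq(email);
            if (question != null && faqs.TryGetValue(question, out var answer))
                replies.Add(answer); // steps 4.3-4.4: reply with the FAQ answer
        }
        return replies;
    }
}
```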

Grab the full source code on GitHub.

Reading Configuration

The first and least interesting thing is reading in the configuration. It’s important to do this in a way that doesn’t let secrets accidentally get into your source code and make their way into source control. One way to do this in an Azure Function project is to use the local.settings.json for local development, and environment variables when deployed to Azure. I’ve added four properties which makes my local.settings.json look like this:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "EmailUser": "",
    "EmailPassword": "",
    "PopServer": "",
    "OpenAiKey": ""
  }
}
To run this locally you’ll need to populate these properties. The PopServer is the mail server that the inbox is accessible through. EmailUser and EmailPassword define the user and password to authenticate with, and finally OpenAiKey is your API key from OpenAI. Note, you can get this API key from the account area at https://beta.openai.com/ once you have created your account.
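Because the Functions host surfaces local.settings.json values as environment variables (both locally and when deployed to Azure), a settings reader can be sketched like this. The Settings record is my own illustration, not necessarily how the repo organizes it:

```csharp
using System;

// Illustrative settings holder; the real project may structure this differently.
public record Settings(string EmailUser, string EmailPassword,
                       string PopServer, string OpenAiKey)
{
    public static Settings Load()
    {
        // local.settings.json values arrive as environment variables at runtime.
        string Get(string name) =>
            Environment.GetEnvironmentVariable(name)
            ?? throw new InvalidOperationException($"Missing setting: {name}");

        return new Settings(Get("EmailUser"), Get("EmailPassword"),
                            Get("PopServer"), Get("OpenAiKey"));
    }
}
```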

Because we need a database of questions to start with, I’ve created a simple POCO named Faq to hold a question and answer. FaqLoader loads in the hard coded list of questions that we’ll be using. In your application you’ll want to replace these with your own questions, or write a quick loader to grab them from a text file or database.
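The shape of the POCO and loader is roughly this. The gift-card entry is an illustrative stand-in; the coffee-bean answer comes from the sample run later in the post:

```csharp
using System.Collections.Generic;

// Simple POCO holding one question/answer pair.
public class Faq
{
    public string Question { get; set; } = "";
    public string Answer { get; set; } = "";
}

// Hard-coded list; in production, swap this for a file or database lookup.
public static class FaqLoader
{
    public static List<Faq> Load() => new()
    {
        new Faq { Question = "What type of coffee beans do you use?",
                  Answer   = "We use 100% Arabica coffee beans." },
        new Faq { Question = "Do you sell gift cards?",
                  Answer   = "Yes, gift cards are available in store and online." },
    };
}
```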

Checking for new Emails

Now that the configuration is loaded we’re ready to check for emails. We do that using the NuGet package MailKit. The code for this part is in the CheckInboxService. It is quite simple: it creates an instance of the MailKit Pop3Client class and then uses it to connect to the inbox and start retrieving messages.

In the demo I use a HashSet called ProcessedMessages to store a list of emails that the service has seen before so that we don’t keep processing the same ones over and over again. The HashSet, though, is just an in-memory collection, so it will reset each time the function restarts.

Before deploying this code it is critical that you persist the list of processed messages so you can skip emails you have already handled.
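One simple way to persist that set between runs is a small file-backed wrapper; this is a sketch of my own, not the repo’s code, and a production deployment would more likely use Azure Table Storage or a database:

```csharp
using System.Collections.Generic;
using System.IO;

// File-backed replacement for the in-memory ProcessedMessages HashSet.
public class ProcessedMessageStore
{
    private readonly string _path;
    private readonly HashSet<string> _seen;

    public ProcessedMessageStore(string path)
    {
        _path = path;
        // Reload previously seen message IDs, one per line.
        _seen = File.Exists(path)
            ? new HashSet<string>(File.ReadAllLines(path))
            : new HashSet<string>();
    }

    public bool AlreadyProcessed(string messageId) => _seen.Contains(messageId);

    public void MarkProcessed(string messageId)
    {
        if (_seen.Add(messageId))
            File.AppendAllLines(_path, new[] { messageId });
    }
}
```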

Build the Auto-responder

This is the meat of the exercise, and where we make the calls to GPT-3 through the OpenAI API. The NuGet package used here is Betalgo.OpenAI.GPT3 (https://github.com/betalgo/openai). This is a very nice wrapper around the GPT-3 API that makes it much faster to work with.

Sending a call to the API is rather simple in that you send a text input and receive a text output. You can find my code for this in the EmailAnsweringService class.
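With the Betalgo package, a completion call has roughly this shape. This is sketched from the library’s documented API at the time of writing; the model choice and token limit are illustrative, not the repo’s exact settings:

```csharp
using System.Linq;
using System.Threading.Tasks;
using OpenAI.GPT3;
using OpenAI.GPT3.Managers;
using OpenAI.GPT3.ObjectModels;
using OpenAI.GPT3.ObjectModels.RequestModels;

// Sketch of a text-in, text-out completion call via Betalgo.OpenAI.GPT3.
public static class CompletionClient
{
    public static async Task<string> CompleteAsync(string apiKey, string prompt)
    {
        var service = new OpenAIService(new OpenAiOptions { ApiKey = apiKey });
        var result = await service.Completions.CreateCompletion(new CompletionCreateRequest
        {
            Prompt = prompt,
            Model = Models.TextDavinciV2, // illustrative model choice
            MaxTokens = 256
        });
        // Choices holds the generated completions; take the first one's text.
        return result.Choices.First().Text;
    }
}
```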

The way this code works is that given an email body, I’m returning an EmailDisposition: a simple POCO to hold the results of the processing. The actual processing itself is done by crafting special text inputs (called prompts) which will be sent to the API. These prompts are crafted in the static PromptService class.

The code first calls CanAnswer() which takes the email body and crafts the first prompt. Here is a sample of the first prompt it has crafted:

Here is an email with a question:
Hi there!
I was wondering if you could tell me a little bit about your coffee beans. What type of coffee beans do you use?
Thank you for your time!

And here is a list of frequently asked questions:
-What are your hours?
-Do you have any gluten-free or vegan options?
-What type of coffee beans do you use?
-How do you make your coffee?
-Do you have any iced coffee options?
-What are your most popular drinks?
-Do you have any seasonal drinks?
-Do you have any food options?
-Do you offer catering?
-Do you have a loyalty program?
-Do you sell gift cards?

Is there a frequently asked question that answers the email? Answer yes or no:

The prompt has three parts:

  • Define the email with the question
  • The list of frequently asked questions
  • The question: Is there a frequently asked question which answers this email?
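Assembling those three parts can be sketched like this. It is a reconstruction based on the sample prompt above; the exact strings in the repo’s PromptService may differ:

```csharp
using System.Collections.Generic;
using System.Text;

// Builds the "can this be answered?" prompt from its three parts:
// the email, the FAQ list, and the yes/no question.
public static class PromptService
{
    public static string BuildCanAnswerPrompt(string emailBody,
                                              IEnumerable<string> faqQuestions)
    {
        var sb = new StringBuilder();
        sb.AppendLine("Here is an email with a question:");
        sb.AppendLine(emailBody);
        sb.AppendLine();
        sb.AppendLine("And here is a list of frequently asked questions:");
        foreach (var q in faqQuestions)
            sb.AppendLine($"-{q}");
        sb.AppendLine();
        sb.Append("Is there a frequently asked question that answers the email? Answer yes or no:");
        return sb.ToString();
    }
}
```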

The output from the API might be more verbose than you want, but it will start with either a yes or a no. In the code I look for a yes that does not come after a no; if I find one, CanAnswer() returns true.
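That check can be sketched as a small helper. This is my own illustration of the rule, not the repo’s exact code:

```csharp
// Interprets a free-form completion as yes/no: true when a "yes" appears
// and no "no" occurs before it.
public static class YesNoParser
{
    public static bool IsYes(string completion)
    {
        var text = completion.ToLowerInvariant();
        int yes = text.IndexOf("yes");
        int no = text.IndexOf("no");
        return yes >= 0 && (no < 0 || yes < no);
    }
}
```

A word-boundary check (or a lower temperature on the completion) would make this more robust, since "no" can appear inside other words; for short yes/no completions the simple version is usually enough.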

The next function, WhichQuestionAnswers(), works in a similar fashion. But in this case it looks for OpenAI to return the matching question from the list, and uses that to choose the corresponding Faq object. This object contains both the question and the answer, so we can pull out the answer: “We use 100% Arabica coffee beans.”
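Mapping the model’s free-form reply back to a known question can be as simple as a case-insensitive containment check. Again, a sketch of one way to do it rather than the repo’s exact logic:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Picks the FAQ question that appears in the model's reply, or null if none does.
public static class FaqMatcher
{
    public static string? MatchQuestion(string completion,
                                        IEnumerable<string> faqQuestions)
    {
        return faqQuestions.FirstOrDefault(q =>
            completion.Contains(q, StringComparison.OrdinalIgnoreCase));
    }
}
```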

The full output from running the program can be seen here:

An email has been processed:
Email Body: Hi there!
I was wondering if you could tell me a little bit about your coffee beans. What type of coffee beans do you use?
Thank you for your time!

Can answer: Yes
The Answer: We use 100% Arabica coffee beans.

Which is quite good. It gets us the specific FAQ question which answers the question asked by the sender.

If you’d like to try it yourself, grab the full source code on GitHub. Thanks for reading!


There are a couple of things I’ve left out of a fully functional solution, which I will leave as exercises for the reader. If you feel so inclined, you could implement logic to forward the “FAQ miss” emails to someone else to respond to manually, or to flag or move them in the inbox.

I’ve also left out the code that sends the email. In its current state the final response is just the answer from the FAQ, but you could also add a 3rd trip to the OpenAI API which would write a nice customized email reply to the customer.
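That third trip could use a prompt as simple as this; a hypothetical sketch of what that prompt builder might look like:

```csharp
// Hypothetical third prompt: ask GPT-3 to turn the bare FAQ answer
// into a friendly email reply to the original message.
public static class ReplyPromptBuilder
{
    public static string Build(string emailBody, string faqAnswer) =>
        "Here is an email from a customer:\n" +
        emailBody +
        "\n\nWrite a short, friendly reply email that answers it using this fact:\n" +
        faqAnswer +
        "\n\nReply:";
}
```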

If you found this information interesting but don’t want to implement it yourself, you may want to consider using my consulting company which specializes in Custom C# Software Development. We’d love to help you out with your next project.

Further Reading: The Complete Guide to Generating Content with Big ML Models