The 5-Part Formula for Building Bots That Behave

I’ve been working on building our own custom GPTs using ChatGPT’s workflow. In theory it’s a simple process: you tell it what it should do in plain language and off you go. In reality, it’s a hot mess 99% of the time out of the gate. The custom GPT starts off strong and quickly goes off the rails into tangents and hallucinations I never intended. I was struggling and decided to phone a friend. In this case my friend happens to be Rob Kall, CEO and co-founder of AI startup Cien, and a software engineer at his core. What he showed me gave me a clear sense of what I was doing wrong and how to approach prompt engineering in a much more effective and efficient way.
Prompt engineering is… engineering!
The promise of natural language processing is real. It’s also not quite here yet. Yes, you can tell a chatbot to research something or even complete a task for you, but inevitably it’s going to struggle at some point. Either it won’t complete the task, or, as is often the case, it will fail to complete the task but tell you it was done. In all cases it will always stay positive, compliment you, and tell you you’re the best. Thanks, Mom, ChatGPT.
The first thing Rob explained to me was that despite the accessible language, prompt engineering is still engineering. You are telling a machine what to do. When you failed to account for specific use cases, older machines would throw an error. Today’s machines just make shit up and keep going. To be honest, I prefer the older version: at least I know something’s broken. The implication here is that you can’t ignore the specificity of your prompt. It has to be exact about both what you want the bot to do and what you don’t want it to do.
For example, I’ve been working on a course and trainer recommendation bot for Sense & Respond Learning. I designed it to prompt the user for some input about their needs, topics of interest, team makeup, etc., and then offer up courses from our catalog as well as our list of upcoming public courses. When I tested it yesterday, it hit a situation where the user’s input didn’t match any of our courses or trainers. What did it do? It sent the user to the competition! Now, don’t get me wrong: I love my competition. Most of them are my friends. However, they do a great job generating leads on their own. Why did the bot do this? Because I didn’t tell it not to. I assumed it wouldn’t because that’s obvious to me, a human. It’s not obvious to the bot, because the bot is not quite intelligent (yet).
The other thing my test bot started doing was quoting pricing for the courses it was suggesting. The public classes do have ticket prices on them, but we don’t publicize private training pricing anywhere online. What did the bot do? It made up the data! Why did it do it? Because, once again, I didn’t tell it not to.
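Both failures have the same fix: spell out the prohibitions. Here’s a minimal sketch of the kind of “don’t” rules I mean, carried in a short Python script so the sections are easy to assemble and reuse later. The wording is mine and specific to my bot; treat it as an illustration, not an official template.

```python
# A minimal sketch of explicit "don't" rules for a custom GPT. The wording is
# illustrative, not an official ChatGPT template; it would be pasted into the
# custom GPT's instructions field.
GUARDRAILS = """\
- Recommend only courses and trainers that appear in the attached catalog documents.
- If nothing in the catalog matches the user's needs, say so and invite them to
  contact us directly. Never recommend another training provider.
- Quote prices only for public courses that list a ticket price in the source
  data. If no price is listed, say pricing is available on request. Never
  estimate or invent a price.
"""

print(GUARDRAILS)
```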
Formal structure is key to successful bot prompts
Approaching prompt engineering for my chatbots as a non-technical human proved mildly successful. However, when Rob showed me how he programs (yes, programs) his bots, I got a new appreciation for formal structure. Essentially, you’re writing a computer program that has five parts (a skeleton example follows the list):
- The premise – here is where you paint a high-level picture for the bot: what its purpose is, what its goals are, and what it should and should not do.
- The knowledge – here is where you tell the bot exactly where to get the data it will require to do its job. This includes both URLs as well as clearly labeled and structured static documents. You then explicitly connect each data source with a prompt that tells the bot what it contains and what to look for in each source. For example: “You will find all the data on our available courses in a document called ‘Course Descriptions.pdf’. Use this data only to derive descriptions for the courses we offer.”
- The workflow – here is where you tell the bot how it should go about doing its job. You lay out the steps for it in order and make sure to tell it where it should not go as well.
- The example output – here is where you show the bot what good looks like. You add in specific content, tables, templates, whatever you need to make sure that what gets output to the user looks good and meets their needs.
- The call to action – finally, here is where you tell the bot to follow the instructions you gave it and only those instructions as well as what to do in case of an exception.
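To make the structure concrete, here’s a skeleton of all five parts assembled into a single instruction string, continuing the Python sketch from above. Every section is placeholder text I wrote for illustration; it is not Rob’s actual prompt, and your real sections will be far longer.

```python
# A skeleton of the five-part structure, assembled into one instruction string.
# All section text is an illustrative placeholder, not a verbatim working prompt.
PREMISE = (
    "You are a course and trainer recommendation assistant for Sense & Respond "
    "Learning. Your goal is to match users with courses from our catalog. You "
    "never recommend other training providers."
)
KNOWLEDGE = (
    "You will find all the data on our available courses in a document called "
    "'Course Descriptions.pdf'. Use this data only to derive descriptions for "
    "the courses we offer."
)
WORKFLOW = (
    "Step 1: Ask the user about their needs, topics of interest, and team "
    "makeup. Step 2: Match their answers against the catalog. Step 3: Present "
    "the top matches. Do not browse the web or draw on outside knowledge."
)
EXAMPLE_OUTPUT = (
    "Format recommendations as a table with the columns Course, Who It's For, "
    "and Format, followed by one sentence on why each course fits."
)
CALL_TO_ACTION = (
    "Follow these instructions and only these instructions. If a request falls "
    "outside them, say you can't help with that and suggest contacting us."
)

INSTRUCTIONS = "\n\n".join(
    [PREMISE, KNOWLEDGE, WORKFLOW, EXAMPLE_OUTPUT, CALL_TO_ACTION]
)
print(INSTRUCTIONS)
```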
This structure not only gives the bot clear steps to follow but also forces you to think through the exact process you’d like to see happen. It makes you consider what a successful result looks like, which exceptions you’d like the bot to avoid, and how to deal with them.
ChatGPT’s custom GPT prompt input has a character limit of 8,000, so whatever you come up with has to fit within that constraint. When you add up all five parts above, this can get challenging.
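A nice side effect of assembling the sections in a script is that you can check the length before pasting anything into the GPT builder. A quick sketch, assuming you’ve saved the assembled instructions to a local file (the filename is hypothetical):

```python
# Sanity-check the assembled instructions against the 8,000-character cap
# before pasting them into the GPT builder. The filename is hypothetical.
LIMIT = 8000

with open("instructions.txt", encoding="utf-8") as f:
    instructions = f.read()

if len(instructions) > LIMIT:
    print(f"Over the limit by {len(instructions) - LIMIT} characters; trim a section.")
else:
    print(f"OK: {LIMIT - len(instructions)} characters to spare.")
```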
It’s not totally intelligent, but it’s still pretty good
We’ve been told we’re in the age of AI. We’re close, for sure. Yet the tools we have at our disposal today are still software products that require programming. The language we can use is a lot more familiar to us, but it doesn’t absolve us from “programming” the machines. If you approach your next AI initiative with a bit of software engineering rigor, your results will improve and the time you spend tweaking the bot will shrink significantly.