If I have some canned ChatGPT prompts that are parameterized (e.g., [Name], [Email], etc.), does it make more sense to (1) create them as separate scenarios, or (2) combine them into one scenario where I pass a parameter that says which prompt to use, in addition to the name=value parameters it needs?
In case (2), I guess I’d want to put the prompts into separate nodes and use a filter to determine which one to use based on the input parameter (?).
Hi! You need to embed your variables directly into the prompt so that they are passed correctly; essentially, each variable acts as a placeholder inside the prompt text. For example:
Prompt with variables:
Generate a welcome email for a new user. Their name is {{$71.user_name}}, they signed up for the {{$71.plan_type}} plan, and their registration date is {{$71.registration_date}}.
Prompt with the values filled in:
Generate a welcome email for a new user. Their name is Alex, they signed up for the Pro plan, and their registration date is May 15, 2025.
And this is how it will respond:
Subject: Welcome to the Pro Plan, Alex!
Hi Alex,
Thank you for signing up for the Pro plan on May 15, 2025. We’re thrilled to have you on board!
With the Pro plan, you’ll get access to all our premium features, priority support, and exclusive updates.
If you have any questions or need assistance getting started, feel free to reach out to our support team.
Welcome aboard,
The Team
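If it helps to see the same idea outside the editor, here is a rough Python sketch of what the substitution amounts to; the template string and variable names are just illustrative, not anything the platform uses internally:

```python
# Rough sketch, not platform code: the same {{...}} substitution done in plain Python.
PROMPT_TEMPLATE = (
    "Generate a welcome email for a new user. Their name is {user_name}, "
    "they signed up for the {plan_type} plan, and their registration date "
    "is {registration_date}."
)

params = {
    "user_name": "Alex",
    "plan_type": "Pro",
    "registration_date": "May 15, 2025",
}

# Fill the placeholders to get the final prompt sent to the AI node.
prompt = PROMPT_TEMPLATE.format(**params)
print(prompt)
```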
I realized I was using the wrong terminology, so I edited the question.
This looks like the one-prompt-per-scenario approach.
What about having several prompts in the same scenario and selecting among them with a separate parameter? (I asked about how to do this earlier and I was pointed to the “Adding and configuring routes” help topic.)
(I guess the blobs in the editor are “nodes” and the entire thing in the editor is a “scenario”; I think of a scenario as the equivalent of a normal “function” in programming.)
In programming terms, I lean towards something like a single function that has a case or switch statement in it and a param that says which of several prompts to use, rather than a bunch of functions (scenarios) that each handle just one.
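Something like this rough Python sketch is what I have in mind (the prompt keys and parameters are made up for illustration):

```python
# Illustrative only: one "scenario" as a single function that selects a prompt
# by key, instead of one function (scenario) per prompt.
PROMPTS = {
    "welcome": "Generate a welcome email for {name}, who signed up for the {plan} plan.",
    "renewal": "Generate a renewal reminder for {name}, whose {plan} plan expires on {expires}.",
}

def run_prompt(which: str, **params: str) -> str:
    """Pick the template named by `which` and fill in its parameters."""
    template = PROMPTS.get(which)
    if template is None:
        raise ValueError(f"Unknown prompt: {which}")
    return template.format(**params)

# The caller passes the selector plus the name=value parameters that prompt needs.
print(run_prompt("welcome", name="Alex", plan="Pro"))
```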
I’m asking for some insight into which approach might be best: separate scenarios for each query, or running them all through one? Aside from the obvious, is one better than the other because of platform quirks or anything like that?
Could you please describe an example of your use case, including the expected inputs and their processing? We can then suggest the best way to proceed.
I don’t have anything specific, but you used an example of a parameterized email. Imagine five different emails with slightly different parameters that are sent in sequence (#1, then #2, #3, #4, #5) at different intervals.
Would it be best to make 5 separate scenarios?
Or just one scenario with 5 nodes, where you pass in one param for the seq# plus the 3-5 parameters required by each email?
But in this case they would be AI nodes rather than email nodes.
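To make it concrete, here is a rough Python sketch of option (2); the five templates, the seq# key, and the parameter names are all invented for the example:

```python
# Hypothetical sketch of one scenario handling the whole sequence:
# a seq# parameter picks the template, the rest are that email's parameters.
EMAIL_PROMPTS = {
    1: "Write a welcome email for {name}, who just joined the {plan} plan.",
    2: "Write a getting-started tips email for {name}.",
    3: "Write a check-in email asking {name} how the {plan} plan is going.",
    4: "Write an email highlighting an advanced feature for {name}.",
    5: "Write an email asking {name} for feedback after their first month.",
}

def build_prompt(seq: int, **params: str) -> str:
    """Select the prompt for this step of the sequence and fill in its parameters."""
    return EMAIL_PROMPTS[seq].format(**params)

# Step #3, triggered by whatever schedules the intervals.
print(build_prompt(3, name="Alex", plan="Pro"))
```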