
Hi all

Following an interaction with support and a recent silent JO failure, I’d like to request that these silent failures disappear altogether.

Scenario:

  • Data set A is modified because some parts of it are not relevant to a region, which impacts custom fields and how they’re used in version criteria
  • The email version filters are accidentally not remapped/removed - the field shows as blank in the versions
  • The program is set to publish later - everything appears to work, and it shows as “scheduled”
  • At publication time, the program silently reverts to draft

Yes, it sends an email, but it simply should not allow you to save (just as it doesn’t when you have forgotten to map email tokens in some versions).

When I publish for later, I need to know right then, at the moment of scheduling, that something’s missing, with an error message like the one we get for unmapped email tokens. Not in the middle of the night, because I’m not there - that is exactly why I published it for later.
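To make this concrete, below is roughly the kind of save-time check I’m asking for. It’s a rough sketch with made-up structures and field names - not Gainsight’s actual data model or validation logic, just the idea:

```python
# Purely illustrative -- hypothetical structures, not Gainsight's data model.

def validate_program(program):
    """Return blocking errors; an empty list means it is safe to schedule."""
    errors = []
    for version in program.get("email_versions", []):
        # A blank filter field (e.g. left behind after the source data set
        # changed) should block the save, exactly like an unmapped token does.
        for criterion in version.get("filter_criteria", []):
            if not criterion.get("field"):
                errors.append(
                    f"Version '{version['name']}': blank filter field "
                    "(was the source field removed or renamed?)"
                )
        for token in version.get("tokens", []):
            if token.get("mapping") is None:
                errors.append(
                    f"Version '{version['name']}': token '{token['name']}' is unmapped"
                )
    return errors

# Example: one version lost its filter field after the data set was modified.
program = {
    "email_versions": [
        {"name": "EMEA", "filter_criteria": [{"field": ""}], "tokens": []}
    ]
}
for error in validate_program(program):
    print("Cannot schedule:", error)  # surface it NOW, not at send time
```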

I would also argue the default email version should send anyway.

In short: no silent failures or later failures. If something’s wrong, it needs to be in our face immediately.

Last rant of the week, thank you 😉.

This should really be considered a bug. It also happens when tokens in versions are not mapped. I have experienced this at two companies now.


Almost anything you can do to make JO less terrifying would be great. I might be overly cautious, but in general the whole thing often feels like it’s one errant misclick away from emailing my personal shopping list to fifty thousand reporters around the world.


Circling back to this because I’ve encountered a growing issue with “silent” JO failures - instances where a Participant fails out and no email is even attempted. These slide under the radar because we can’t report on a bounce or a reject (no email was ever sent, of course), but we ALSO cannot report effectively on the failed Participants.

 

AO Failed Participant is a fantastic idea, but it feels like it was designed in a hurry - out of the box it is practically worthless for any ACTUAL work. The JSON data used for these fields isn’t supported as a filter condition in Report Builder, you cannot apply ad hoc column-header filters to it in reports (so a dashboard is out), and none of the fields is usable as an automatic filtering condition for deployment to the C360. What options are left? I can’t see any practical application for this otherwise-perfect fix to our problem.
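In case it helps anyone in the same spot, here is the workaround sketch I’ve been toying with: if you can export the AO Failed Participant records as JSON (the field names below are guesses, not the real schema), you can flatten them into a CSV yourself and filter in a spreadsheet or BI tool:

```python
import csv
import json

# Hypothetical export of AO Failed Participant records. Field names are
# invented for illustration -- substitute whatever your export actually has.
raw = """[
  {"participant_id": "P-001", "program": "Onboarding",
   "failure": {"step": "Evaluate", "reason": "Blank version filter"}},
  {"participant_id": "P-002", "program": "Onboarding",
   "failure": {"step": "Send Email", "reason": "Unmapped token"}}
]"""

records = json.loads(raw)

# Flatten the nested JSON so each failure attribute becomes a plain column
# that ordinary report and dashboard filters can handle.
with open("failed_participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["participant_id", "program", "failed_step", "failure_reason"]
    )
    writer.writeheader()
    for r in records:
        writer.writerow({
            "participant_id": r["participant_id"],
            "program": r["program"],
            "failed_step": r["failure"]["step"],
            "failure_reason": r["failure"]["reason"],
        })
```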

 

I’d like to lay out a few premises that I think are true so the context is perfectly clear here…

  1. JO is a powerful feature, and a large part of the value proposition that Gainsight presents to many customers.
  2. Many business emails are VERY IMPORTANT and must be RELIABLY delivered.
  3. Things happen, and backend failures can never be fully eliminated.
  4. Any failure to deliver a JO email poses a potential threat to the business, which must be addressed and corrected with the associated client.
  5. Scalability is a vital component of any solution, since the nature of JO is to handle bulk automation at scale with very little human intervention.

If we all agree on these 5 points, it seems to me that we’ve got a serious gap in our ability to satisfactorily respond to them. Please help us deliver reliable automation, at scale, for our companies. Thank you for your time.


We’re looking at expanding our use of JO and this is not a problem we should have to consider. @revathimenon can we get some eyes on this?


How did I never see this post a year ago? If I could vote more than once, I would.

Silent “system errors” are my biggest pet peeve. We don’t have time to go into every program and chase down individual participant failures, especially at scale. And the new dynamic JO’s inability to re-add participants makes things difficult when we need to bring them back through an active program.

There are so many things wrong with the way these silent errors are being handled.

Actionable notifications would be a gamechanger. It’d be fantastic if we were able to look at a notification, see that a participant (or a group of participants) had failed a step due to whatever error, and push them back through that step, either in bulk or individually.
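Conceptually, something like this - the structures below are invented purely to show the grouping and bulk action I have in mind, not an actual JO payload:

```python
from collections import defaultdict

# Invented failure records, just to illustrate the grouping.
failures = [
    {"participant": "a@example.com", "step": "Send Email 2", "error": "Unmapped token"},
    {"participant": "b@example.com", "step": "Send Email 2", "error": "Unmapped token"},
    {"participant": "c@example.com", "step": "Evaluate", "error": "Blank filter field"},
]

# Group by (step, error) so one notification covers a whole cohort...
groups = defaultdict(list)
for f in failures:
    groups[(f["step"], f["error"])].append(f["participant"])

# ...and each group becomes a single actionable item: fix the cause once,
# then push everyone in the group back through the failed step together.
for (step, error), participants in groups.items():
    print(f"{len(participants)} participant(s) failed '{step}' ({error}): {participants}")
    # retry_step(step, participants)  # <- the hypothetical bulk action we want
```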


Thank you for sharing your feedback and bringing your concerns to our attention. I understand the challenges you've been facing with JO, and I want to assure you that we are fully committed to addressing these issues. Our current top priority is to improve the stability of the product while expanding JO's capabilities.

  • In parallel, we are gradually implementing features that add validations during the publish flow to alert you to any missing tokens, filters, or other critical elements. This will help prevent silent failures by providing clear, informative error messages and improving transparency.
  • We will be introducing capabilities that let you preview and test the evaluation step of your programs, add execution history to queries, and enrich snapshot counts to give you better visibility into your program's progress (see the illustrative sketch below).
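To give a flavour of the evaluation preview mentioned above, here is a purely illustrative sketch, not the shipped design: a dry run that reports per-branch participant counts without sending anything or performing any actions.

```python
# Illustrative only -- a dry run of an evaluate step that counts participants
# per branch without performing any actions.

def preview_evaluate(participants, branches):
    """branches: ordered mapping of branch name -> predicate over a participant."""
    counts = {name: 0 for name in branches}
    counts["(no branch matched)"] = 0
    for p in participants:
        for name, predicate in branches.items():
            if predicate(p):
                counts[name] += 1
                break  # first matching branch wins
        else:
            counts["(no branch matched)"] += 1
    return counts

participants = [{"nps": 9}, {"nps": 3}, {"nps": 7}]
branches = {
    "Promoters": lambda p: p["nps"] >= 9,
    "Detractors": lambda p: p["nps"] <= 6,
}
print(preview_evaluate(participants, branches))
# -> {'Promoters': 1, 'Detractors': 1, '(no branch matched)': 1}
```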

While we acknowledge that there have been delays in releasing some planned features, such as the test run of programs and post-publish editing, please know that our team is actively working to stabilize the product and deliver these features to you as quickly as possible. We are also planning meetings with you all to discuss the roadmap and the improvements currently in progress. Your input during these discussions will be invaluable as we move forward.

If you have any further questions or suggestions, please don't hesitate to reach out or tag me directly: @vmallya

 

-Vijay, Product Manager - JO



  1. In parallel, we are gradually implementing features that add validations during the publish flow to alert you to any missing tokens, filters, or other critical elements. This will help prevent silent failures by providing clear, informative error messages and improving transparency.
  2. We will be introducing capabilities that let you preview and test the evaluation step of your programs, add execution history to queries, and enrich snapshot counts to give you better visibility into your program's progress.
  3. We are also planning meetings with you all to discuss the roadmap and the improvements currently in progress. Your input during these discussions will be invaluable as we move forward.

Thank you for the detailed post, @vmallya!

To your points above:

  1. This is awesome -- seeing the ⚠️ icon on a step but not knowing what the issue is, is definitely frustrating. Getting a notice that spells out the issue would be great.
  2. Previewing and testing the evaluation steps, as well as enriching the snapshot counts, will be particularly useful. I’m testing an internal dynamic JO today, and it would be much more helpful to be able to test those evaluate steps without having to clone the program, adjust the filters to preview the participants, and then clone yet again to create a program that filters to the specific subset of participants that lets us test each branch of the evaluate step -- all while changing the recipient email address on the subsequent email actions to ensure the necessary actions are performed.
  3. I’d be more than happy to join any and all of these planning meetings. Please share whenever you have them scheduled!

Thank you again! 🙌