Hello Team,





Currently we don't have the option to change or specify the batch size for a rule that updates records.





Can we have that option in a future release?





Thanks,





Sushanto



Hi Sushanto, can you share a bit more about the use case you're trying to solve with this enhancement?




I agree with this. I currently have to update 16,000+ records and am running into various limitations when trying to update more than 20 records at a time. If I had the ability to set a max batch of 20, I could set the rule to run a few times a day and then just wait for the records to complete, rather than constantly having to update the rule. I don't care which of the 16,000 records are updated when, so any random sample is fine.
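To make this concrete: the pattern being requested is essentially "process at most N records per scheduled run". Here is a minimal Python sketch of that idea; `fetch_unprocessed` and `update_records` are hypothetical stand-ins for whatever query/update hooks the rule engine would expose, and `MAX_BATCH` is the per-rule setting being asked for.

```python
MAX_BATCH = 20  # hypothetical per-rule setting being requested here

def run_once(fetch_unprocessed, update_records):
    """One scheduled execution: update at most MAX_BATCH records.
    Order doesn't matter, so any subset of the backlog is fine."""
    batch = fetch_unprocessed(limit=MAX_BATCH)
    if batch:
        update_records(batch)
    # Returns how many records were handled; 0 means the 16,000+ backlog is drained.
    return len(batch)
```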




Hi Dan,





Sorry for the late reply.





We have the following use case:





We update a lot of Post Sales records; Post Sales is a child object of Account.





In our SFDC org, Account is one of the most complex business objects, with a lot of business logic built on it.





When we update, say, 12k Post Sales records via a GS rule, it fails with a CPU time limit exception. We can't lower the batch size in this scenario because we don't have that option. We have to raise a case with Gainsight, and they ultimately change it for the rule.





We saw that reducing the batch size from 100 to 60 improved performance by a big margin.





So if that option were made customizable for us, we could at least test it before pushing such changes to production, and we wouldn't have to raise a GS support case every time.
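For anyone who wants to see what client-side batch control looks like, here is a minimal sketch that drives updates through Salesforce's standard sObject Collections REST endpoint. The instance URL and token are placeholders, the object name is hypothetical, and the batch size of 60 just mirrors the value that worked above; this illustrates the knob being requested, not how the GS rule engine works internally.

```python
import requests

INSTANCE = "https://example.my.salesforce.com"  # placeholder org URL
TOKEN = "00D...session-token"                   # placeholder OAuth token
BATCH_SIZE = 60  # the reduced size that outperformed 100 above

def update_in_batches(records, batch_size=BATCH_SIZE):
    """Update records in fixed-size chunks via the sObject Collections API
    (max 200 per call), keeping each server-side transaction small enough
    to stay under CPU limits. Each record dict needs an 'attributes' key,
    e.g. {"attributes": {"type": "Post_Sales__c"}, "Id": ..., <fields>}
    (the object name here is made up for illustration)."""
    url = f"{INSTANCE}/services/data/v58.0/composite/sobjects"
    headers = {"Authorization": f"Bearer {TOKEN}"}
    for i in range(0, len(records), batch_size):
        chunk = records[i:i + batch_size]
        resp = requests.patch(url, headers=headers,
                              json={"allOrNone": False, "records": chunk})
        resp.raise_for_status()
```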




@susbhatt and @john_apple We understand your concern. The only workaround for now is to do it via a support ticket; we have a process to reduce the batch size from the backend. This may be helpful for others, so I'm posting it here again. The batch size can also be reduced for a particular action.





@rajesh_yaddanapalli and @indraneel_pampati could you confirm?




Yes @sai_ram_pulluri,





I know the backend option is there, but I wanted something in the UI so that we don't have to raise a ticket and go back and forth.





It would be great if this could be made a product feature, enabled so users can choose the value themselves.





We are currently using the backend option only, but a UI option would be really helpful as a product feature.




@susbhatt Got you and I have redirected this to our product team. Thanks for bringing this up.





FYI: just to let everyone know, there is a workaround to reduce the batch size; I highlighted that point here.





@All, I know many of you are facing the same issue. Please show your presence here so that I can raise an alarm to change the priority on the roadmap.




We have a rule which triggers a job in SF, and if more than 5 records are sent through at once it times out. So we would like to be able to send a maximum of 5 at a time (again, we don't care what order they are processed in), and then the next time it runs, another 5, and so on.


@HollySimmons I will let you know. Thanks!


So it turns out support can only do the reduction for those on SFDC, and we’re on NXT. So we’re still stuck. 


@HollySimmons

There might be some technical nuance here.

For SFDC we have a custom implementation for Load to SFDC, where the batch size can be customized. For NXT, we use the Bulk API provided by SFDC. I'll check with the team whether we can make the batch size customizable.

 

Edit: I've reconfirmed with the engineering team that because we use the Bulk APIs that Salesforce itself provides, we should not have any problems, since Salesforce takes care of the optimizations. Is anyone facing challenges with Load to SFDC in the NXT edition too?
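For reference, the Bulk API 2.0 ingest flow mentioned here looks roughly like the sketch below (the URL and token are placeholders). The point to notice is that there is no batch-size parameter anywhere: you upload one CSV and Salesforce chunks it internally, which is exactly why the batch size can't be reduced on NXT, as the support reply further down explains.

```python
import requests

INSTANCE = "https://example.my.salesforce.com"  # placeholder org URL
TOKEN = "00D...session-token"                   # placeholder OAuth token
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Content-Type": "application/json"}

def bulk_update(object_name, csv_payload):
    """Create a Bulk API 2.0 ingest job, upload CSV data, and close the job.
    Salesforce splits the data into internal chunks itself; the caller
    cannot set a batch size, unlike Bulk API 1.0 or Data Loader."""
    base = f"{INSTANCE}/services/data/v58.0/jobs/ingest"
    job = requests.post(base, headers=HEADERS, json={
        "object": object_name, "operation": "update",
        "contentType": "CSV", "lineEnding": "LF",
    }).json()
    # Upload the full CSV in one go; note there is no per-batch control here.
    requests.put(f"{base}/{job['id']}/batches",
                 headers={**HEADERS, "Content-Type": "text/csv"},
                 data=csv_payload).raise_for_status()
    # Mark the upload complete so Salesforce starts processing.
    requests.patch(f"{base}/{job['id']}", headers=HEADERS,
                   json={"state": "UploadComplete"}).raise_for_status()
    return job["id"]
```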


@HollySimmons did you get a chance to view the comments here?


Thanks for the nudge @sai_ram 

@rakesh this was the answer/suggestion from support:
 

We actually reduce the batch size for instances which are on the SFDC edition, not the NXT ones. In an NXT instance, the rule action uses 'SFDC Bulk API V2' (https://developer.salesforce.com/docs/atlas.en-us.api_bulk_v2.meta/api_bulk_v2/) for your tenant. This means we are no longer relying on the Apex REST APIs from the Gainsight managed package (used in SFDC edition instances) and are instead using the API which SFDC has provided. In this case, the batching of records is handled by SFDC, and we cannot reduce the batch size for rules.

In order to fix this, there are two options:

Either disable or adjust the triggers that are causing the issue, or
Exclude the Integration user (the one who authorizes the SFDC connection) from the SFDC triggers.

 

But the options suggested aren't feasible, as the update we're doing is precisely what kicks off the trigger. We know from our own testing with Data Loader that it can handle about 5 records at a time (we've managed more, but at 5 it never fails), and the API listed above creates batches of 10,000 records.

What we're hoping for is something like the Data Loader settings, where we can choose to override that with our own batch size (per action).

 

PS: in case you're wondering, what we're doing is converting leads to contacts via a trigger.

In SFDC we have a duplication rule which means an email must be unique across both leads and contacts, so if someone exists as a Person in Gainsight but as a lead in SFDC, we need to convert them to a contact in SFDC. So what we do is check leads in SFDC for a match, and if there is one, we just set a boolean field to true on that lead record, which then triggers the process to convert them.
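To round out the picture, here is a minimal sketch of that matching-and-flagging step, using the standard Salesforce REST query and update endpoints. The flag field name `Convert_To_Contact__c` is made up for illustration, as are the URL and token; the actual conversion is done by the org's own trigger once the flag flips.

```python
import requests

INSTANCE = "https://example.my.salesforce.com"  # placeholder org URL
TOKEN = "00D...session-token"                   # placeholder OAuth token
HEADERS = {"Authorization": f"Bearer {TOKEN}",
           "Content-Type": "application/json"}
API = f"{INSTANCE}/services/data/v58.0"
FLAG_FIELD = "Convert_To_Contact__c"  # hypothetical boolean that fires the trigger

def flag_lead_for_conversion(email):
    """Find an unconverted Lead by email and set the flag field; the org's
    own trigger then performs the lead-to-contact conversion.
    Assumes `email` is a trusted, properly escaped value."""
    soql = ("SELECT Id FROM Lead "
            f"WHERE Email = '{email}' AND IsConverted = false LIMIT 1")
    rows = requests.get(f"{API}/query", headers=HEADERS,
                        params={"q": soql}).json().get("records", [])
    if not rows:
        return None  # no matching lead; nothing to convert
    lead_id = rows[0]["Id"]
    requests.patch(f"{API}/sobjects/Lead/{lead_id}", headers=HEADERS,
                   json={FLAG_FIELD: True}).raise_for_status()
    return lead_id
```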


Hi Holly, the Bulk API route for Load to SFDC is the approach we want to take in NXT because of the massive performance improvements we gain there (for 1 million records the performance improvement is ~1000%). One enhancement we are considering is to use REST calls when the number of records is small; as part of that, we can consider the enhancement you are proposing.

cc: @swaroop_badam 

 



Highly needed! Where are we on this? It looks like this isn't being considered? What's the workaround?