Best Practice for .saveLog() with PlatformEvent #244
-
I'm just wondering if there is a "best practice" for using the default .saveLog() with Platform Events method of Nebula. Am I free to litter .saveLog() all over my code-base in all of my exception handling code? Or should I build it so that exception handlers just create log records in the buffer, and then relegate the .saveLog() call to the final step of, say, our trigger handler?

Edit: I know that when using Publish Immediately, the transaction limit of 150 publish events per transaction applies, but other than that, what other concerns should I have (if any) with calling .saveLog() ~100 times in a single transaction? Has there been any testing done with that type of .saveLog() volume from a performance or governor perspective?
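For context, here's a minimal sketch of the "buffer everywhere, save once" pattern I'm describing - the service class and its names are just hypothetical placeholders:

```apex
// Hypothetical service class - exception handlers only buffer entries
public without sharing class OrderService {
    public static void processOrders(List<Order> orders) {
        try {
            // ... business logic that might throw ...
        } catch (Exception ex) {
            // Buffers a log entry in memory; nothing is published yet
            Logger.error('Order processing failed', ex);
        }
    }
}

// Later, as the final step of the trigger handler, a single call
// publishes everything that was buffered during the transaction
Logger.saveLog();
```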
Replies: 1 comment
-
Hi @MorganMarchese - that's a great question! You will definitely want to put some thought into where you call `saveLog()`, since there are some transactional limits that could be exceeded if you call it too many times.

I've implemented Nebula Logger in several different trigger handler frameworks over the years (Nebula Logger was originally part of a trigger handler framework from another repo I maintained), and I definitely recommend incorporating some calls to `saveLog()` directly into your trigger framework. When using Logger within a trigger handler framework, I personally aim for 2 uses:

1. Within a catch block - log the error, save the log, then rethrow the exception
2. As the final step of the after context - once all other logic has executed for the current SObjectType, save the log
Here's an example lightweight trigger handler framework I've used in the past. This is without incorporating Nebula Logger, just to show a "before" version:

```apex
public abstract class TriggerHandler {
    public void execute() {
        // Route the current trigger context to the corresponding virtual method
        switch on Trigger.operationType {
            when BEFORE_INSERT { this.executeBeforeInsert(Trigger.new); }
            when BEFORE_UPDATE { this.executeBeforeUpdate(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap); }
            when BEFORE_DELETE { this.executeBeforeDelete(Trigger.old, Trigger.oldMap); }
            when AFTER_INSERT { this.executeAfterInsert(Trigger.new, Trigger.newMap); }
            when AFTER_UPDATE { this.executeAfterUpdate(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap); }
            when AFTER_DELETE { this.executeAfterDelete(Trigger.old, Trigger.oldMap); }
            when AFTER_UNDELETE { this.executeAfterUndelete(Trigger.new, Trigger.newMap); }
        }
    }

    // Subclasses override only the contexts they need
    protected virtual void executeBeforeInsert(List<SObject> newRecords) {}
    protected virtual void executeBeforeUpdate(List<SObject> updatedRecords, Map<Id, SObject> updatedRecordsById, List<SObject> oldRecords, Map<Id, SObject> oldRecordsById) {}
    protected virtual void executeBeforeDelete(List<SObject> deletedRecords, Map<Id, SObject> deletedRecordsById) {}
    protected virtual void executeAfterInsert(List<SObject> newRecords, Map<Id, SObject> newRecordsById) {}
    protected virtual void executeAfterUpdate(List<SObject> updatedRecords, Map<Id, SObject> updatedRecordsById, List<SObject> oldRecords, Map<Id, SObject> oldRecordsById) {}
    protected virtual void executeAfterDelete(List<SObject> deletedRecords, Map<Id, SObject> deletedRecordsById) {}
    protected virtual void executeAfterUndelete(List<SObject> undeletedRecords, Map<Id, SObject> undeletedRecordsById) {}
}
```

With this example trigger framework, I would then make these changes to incorporate Nebula Logger in this "after" version:

```apex
public abstract class TriggerHandler {
    public void execute() {
        // Wrap your trigger handler's logic in a try/catch block
        try {
            switch on Trigger.operationType {
                when BEFORE_INSERT { this.executeBeforeInsert(Trigger.new); }
                when BEFORE_UPDATE { this.executeBeforeUpdate(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap); }
                when BEFORE_DELETE { this.executeBeforeDelete(Trigger.old, Trigger.oldMap); }
                when AFTER_INSERT { this.executeAfterInsert(Trigger.new, Trigger.newMap); }
                when AFTER_UPDATE { this.executeAfterUpdate(Trigger.new, Trigger.newMap, Trigger.old, Trigger.oldMap); }
                when AFTER_DELETE { this.executeAfterDelete(Trigger.old, Trigger.oldMap); }
                when AFTER_UNDELETE { this.executeAfterUndelete(Trigger.new, Trigger.newMap); }
            }
        } catch (Exception ex) {
            // Within the catch block, log the error, save it, then rethrow it
            Logger.error('Trigger exception occurred', ex);
            Logger.saveLog();
            throw ex;
        }

        // After all other logic has executed for the current SObjectType, save the log
        if (Trigger.isAfter == true) {
            Logger.saveLog();
        }
    }

    protected virtual void executeBeforeInsert(List<SObject> newRecords) {}
    protected virtual void executeBeforeUpdate(List<SObject> updatedRecords, Map<Id, SObject> updatedRecordsById, List<SObject> oldRecords, Map<Id, SObject> oldRecordsById) {}
    protected virtual void executeBeforeDelete(List<SObject> deletedRecords, Map<Id, SObject> deletedRecordsById) {}
    protected virtual void executeAfterInsert(List<SObject> newRecords, Map<Id, SObject> newRecordsById) {}
    protected virtual void executeAfterUpdate(List<SObject> updatedRecords, Map<Id, SObject> updatedRecordsById, List<SObject> oldRecords, Map<Id, SObject> oldRecordsById) {}
    protected virtual void executeAfterDelete(List<SObject> deletedRecords, Map<Id, SObject> deletedRecordsById) {}
    protected virtual void executeAfterUndelete(List<SObject> undeletedRecords, Map<Id, SObject> undeletedRecordsById) {}
}
```

This approach has worked fairly well for me in several orgs/implementations, but you should definitely do some of your own testing to make sure that this approach works well for your org. Hope this helps, but let me know if you have any follow-up questions!
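For reference, here's a quick sketch of how a concrete handler and trigger could plug into this framework - the `AccountTriggerHandler` name and its logic are just hypothetical placeholders:

```apex
// AccountTriggerHandler.cls - a hypothetical subclass of the framework above
public class AccountTriggerHandler extends TriggerHandler {
    protected override void executeAfterUpdate(
        List<SObject> updatedRecords, Map<Id, SObject> updatedRecordsById,
        List<SObject> oldRecords, Map<Id, SObject> oldRecordsById
    ) {
        // Entries are only buffered here - the base class's execute() method
        // calls Logger.saveLog() once, after the 'after' context finishes
        Logger.info('Processing ' + updatedRecords.size() + ' updated accounts');
        // ... business logic ...
    }
}

// AccountTrigger.trigger - the trigger itself stays a one-liner
trigger AccountTrigger on Account(after update) {
    new AccountTriggerHandler().execute();
}
```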