This repository has been archived by the owner on Jun 7, 2023. It is now read-only.

UNABLE_TO_LOCK_ROW unable to obtain exclusive access to this record or 1 records #67

Open
jackdserna opened this issue Jan 4, 2018 · 2 comments

Comments

@jackdserna

I am getting this error when bulk updating records in a custom object.
batchSize = 2000; the data frame has more than 8,000 rows (5 batches), and it was cleaned and deduplicated prior to updating.
The issue is that I was prevented from updating several thousand records, yet when I changed batchSize = 10000 there were no errors. When I looked up what this error means, I found that it occurs when the same record is updated simultaneously, which Salesforce prevents. However, that cannot be the case here.

Further information can be found here: https://developer.salesforce.com/forums/?id=9060G000000I2r1QAC
The locking statements referred to are documented here: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_locking_statements.htm

Is there a locking statement that can be included in the R update function, or is that included in the query?

@StevenMMortimer
Contributor

@jackdserna Set the concurrencyMode argument equal to Serial when you create the bulk job.

Here is a passage from the Bulk API PDF:

Use Parallel Mode Whenever Possible
You get the most benefit from the Bulk API by processing batches in parallel, which is the default mode and enables faster loading of data. However, sometimes parallel processing can cause lock contention on records. The alternative is to process using serial mode. Don't process data in serial mode unless you know this would otherwise result in lock timeouts and you can't reorganize your batches to avoid the locks.

You set the processing mode at the job level. All batches in a job are processed in parallel or serial mode.

Essentially, parallel mode is faster, but sometimes not smart enough to avoid lock contention, so try serial processing to see if that resolves the issue.
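
For illustration, a minimal sketch of what that could look like from R. The wrapper name (`rforcecom.createBulkJob`), its signature, and the object name below are assumptions for the example only, not confirmed by this thread, so adjust them to the package version you are running:

```r
# Sketch only: the function name and arguments are assumed, not confirmed here.
# The key point is passing the Bulk API's concurrencyMode field as "Serial"
# when the job is created, so its batches are processed one at a time.
# `session` is the object returned by logging in to Salesforce.
job <- rforcecom.createBulkJob(
  session,
  operation       = "update",
  object          = "My_Custom_Object__c",  # hypothetical custom object name
  concurrencyMode = "Serial"                # default is "Parallel"
)
```

That would also explain why batchSize = 10000 made the errors disappear: a data frame of just over 8,000 rows fits into a single 10,000-row batch, so there are no parallel batches left to contend for the same records.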

@jackdserna
Author

I could do that, but it defeats the purpose of running a low batch size and speeding the process up by taking advantage of parallel processing. It's a very simple flaw. I'm not entirely sure whether this was user error on my part, but there were no other batches running through the UI or code except my own. I thoroughly clean the data before every update, as anyone should, so the fact that there are errors seems to have no cause. Hence, this issue.
