SQLAlchemy database-level locking

I wonder if it is possible to somehow lock a couple of database tables for protected write, or something like that, to prevent another application from modifying them while a transaction is in progress?

I now have something like this: tables A, B and C, with one-to-many relations between them, A->B and B->C. This function receives data from RabbitMQ and either updates A, B and/or C (usually only C) or creates new rows if they are missing.

from sqlalchemy.orm import scoped_session
from sqlalchemy.orm.exc import NoResultFound

Session = scoped_session(session_factory)

# A must already exist; give up on this message if it does not.
try:
    foo = Session.query(A).filter(....).one()
except NoResultFound:
    Session.remove()
    return

# B is looked up and created if missing.
try:
    bar = Session.query(B).filter(......).one()
except NoResultFound:
    bar = B(field1=x, field2=y etc.)
    Session.add(bar)

# C is looked up and created if missing as well.
try:
    xyzzy = Session.query(C).filter(...).order_by(...).limit(1).one()
except NoResultFound:
    xyzzy = C(.......)
    Session.add(xyzzy)

foo.fieldn = var1
bar.fieldn = var2
xyzzy.fieldn = var3
etc.

Session.commit()
Session.remove()

This all works fine. The problem is that I have another program (an entirely different Python script) that does a cleanup on request. It basically deletes everything from A, B and C. This also works.

My first program crashes immediately after a deletion with this:

sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'C' expected to update 1 row(s); 0 were matched.

I can catch this exception and react accordingly, but do I need to? This problem would be solved if I could, in both programs, reserve A, B and C at the database level for a short while. All these transactions are short in duration. I understand from the SQLAlchemy documentation that a session should also start a transaction, but apparently this means something different from a transaction at the database level.

What seems to happen is that my Session.query(C) finds a row, but before the first program issues Session.commit(), the other program has deleted it.
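
For what it is worth, catching it would just mean guarding the commit, roughly like this (a minimal sketch; what to do after the rollback, such as dropping or re-queueing the RabbitMQ message, is an assumption on my part):

from sqlalchemy.orm.exc import StaleDataError

try:
    Session.commit()
except StaleDataError:
    # The row was deleted by the cleanup script between the query and the commit.
    Session.rollback()
    # react accordingly, e.g. drop or re-queue the message
finally:
    Session.remove()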

20+ years ago, with '90s databases and C, I always started a transaction with DECLARE TRANSACTION ... RESERVING A, B, C FOR PROTECTED WRITE; or something like that. Is that gone now, and do I just need to catch the exception, or can I still benefit from locking at the database level?

Hannu

over 3 years ago · Santiago Trujillo
1 Answer


You can perform explicit locking in PostgreSQL with the LOCK command:

LOCK A, B, C IN ACCESS EXCLUSIVE MODE;
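
If you go that route, from SQLAlchemy you would issue it as raw SQL at the start of the transaction, along these lines (a sketch; the lowercase table names are an assumption about how the tables are actually named in PostgreSQL):

from sqlalchemy import text

# The locks are held until the transaction is committed or rolled back;
# PostgreSQL has no explicit UNLOCK.
Session.execute(text("LOCK TABLE a, b, c IN ACCESS EXCLUSIVE MODE"))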

However, locking three tables is a very heavy-handed way to solve the problem. Since you have a one-to-many relationship, you can instead consider locking the relevant row in A at the beginning of each transaction:

SELECT * FROM A WHERE ... FOR UPDATE;

The way to do this in SQLAlchemy is:

foo = Session.query(A).filter(....).with_for_update().one()
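
Note that this only serializes the two programs if the cleanup script takes the same lock before it starts deleting, for example (a sketch, assuming the cleanup also goes through a SQLAlchemy Session and that the foreign keys require deleting C before B before A):

# Cleanup script: lock the rows in A first, so it waits for any
# in-flight worker transaction (and vice versa) before deleting.
Session.query(A).with_for_update().all()
Session.query(C).delete()
Session.query(B).delete()
Session.query(A).delete()
Session.commit()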
over 3 years ago · Santiago Trujillo