cannot perform operation: another operation is in progress #258

Closed
filantus opened this issue Feb 8, 2018 · 5 comments
filantus commented Feb 8, 2018

  • asyncpg version: 0.14.0
  • PostgreSQL version: PostgreSQL 10.1 (official Docker Hub image) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18) 6.3.0 20170516, 64-bit
  • Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?:
  • Python version: 3.6.4
  • Platform: Linux (Debian)
  • Do you use pgbouncer?: no
  • Did you install asyncpg with pip?: yes
  • If you built asyncpg locally, which version of Cython did you use?:
  • Can the issue be reproduced under both asyncio and uvloop?:

Hi! I want to make DB queries from different async tasks, but when I try, I get this:

Task exception was never retrieved
future: <Task finished coro=<worker() done, defined at /home/.../test.py:15> exception=InterfaceError('cannot perform operation: another operation is in progress',)>
Traceback (most recent call last):
  File "/home/.../test.py", line 18, in worker
    value = await con.fetchval(f'SELECT hash FROM images WHERE url = $1 LIMIT 1;', url)
  File "/usr/local/lib/python3.6/dist-packages/asyncpg/connection.py", line 363, in fetchval
    data = await self._execute(query, args, 1, timeout)
  File "/usr/local/lib/python3.6/dist-packages/asyncpg/connection.py", line 1287, in _execute
    with self._stmt_exclusive_section:
  File "/usr/local/lib/python3.6/dist-packages/asyncpg/connection.py", line 1657, in __enter__
    'cannot perform operation: another operation is in progress')
asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress

Code to reproduce it:

import time
import asyncio
import asyncpg


DB_SETTINGS = {
    'host': 'localhost',
    'port': 5432,
    'database': 'postgres',
    'user': 'root',
    'password': '***',
}


async def worker(prefix):
    for i in range(4):
        url = f'{prefix}/{time.time()}'
        # both tasks share the single global connection `con`
        value = await con.fetchval('SELECT hash FROM images WHERE url = $1 LIMIT 1;', url)
        print(url, value)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    con = loop.run_until_complete(asyncpg.connect(**DB_SETTINGS))

    asyncio.ensure_future(worker('aaa'))
    asyncio.ensure_future(worker('bbb'))
    loop.run_forever()

Please, can you help?


filantus commented Feb 8, 2018

I found the solution: use a pool with many connections instead of a single connection:

async def worker(prefix):
    for i in range(4):
        url = f'{prefix}/{time.time()}'

        # each query borrows a connection from the pool and returns it afterwards
        async with db_pool.acquire() as conn:
            value = await conn.fetchval('SELECT hash FROM images WHERE url = $1 LIMIT 1;', url)
        print(url, value)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    db_pool = loop.run_until_complete(asyncpg.create_pool(**DB_SETTINGS))

    asyncio.ensure_future(worker('aaa'))
    asyncio.ensure_future(worker('bbb'))
    loop.run_forever()

Thank you! =)

@filantus filantus closed this as completed Feb 8, 2018
@danieljfarrell

Does this occur because the same connection is used to handle multiple transactions with the database?

@charterchap

In my case this same error message was caused by using one pool across different event loops. I had been using a singleton to simplify access to my connection pool; that was a bad idea. Now I use one pool per loop with no issues.
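The "one pool per event loop" idea above can be sketched as a small cache keyed by the running loop. Here `make_pool` is a hypothetical factory standing in for `asyncpg.create_pool(**DB_SETTINGS)`, so the pattern can be shown without a live database:

```python
# Sketch: cache one pool object per event loop instead of a global
# singleton. `make_pool`/`fake_make_pool` are stand-ins (assumptions),
# not asyncpg API; in real code the factory would call
# asyncpg.create_pool(**DB_SETTINGS).
import asyncio

_pools = {}  # maps each event loop to its own pool object

async def get_pool(make_pool):
    loop = asyncio.get_running_loop()
    if loop not in _pools:
        _pools[loop] = await make_pool()
    return _pools[loop]

async def fake_make_pool():
    # placeholder for: await asyncpg.create_pool(**DB_SETTINGS)
    return object()
```

Within one loop every caller gets the same pool; a fresh loop (e.g. a second `asyncio.run`) transparently gets its own.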


ghost commented Mar 2, 2021

This is how I fixed the issue:

Previously I had one session shared among multiple coroutines which made concurrent queries to the database:

async def calculate_scores(ids: List):
    async with AsyncSession(async_engine) as session:
        tasks = [create_task(
            calculate_score(id, session)
        ) for id in ids]
        await wait(tasks)
        scores = [task.result() for task in tasks]

    return scores

async def calculate_score(id, session):
    score = (await session.execute(select(Item.score).filter(Item.id == id))).scalar_one_or_none()
    return score

I fixed it by having one session per coroutine:

async def calculate_scores(ids: List):
    tasks = [create_task(
        calculate_score(id)
    ) for id in ids]
    await wait(tasks)
    scores = [task.result() for task in tasks]

    return scores

async def calculate_score(id):
    async with AsyncSession(async_engine) as session:
        score = (await session.execute(select(Item.score).filter(Item.id == id))).scalar_one_or_none()

    return score
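The same one-resource-per-task pattern can be written more compactly with `asyncio.gather`, which collects results in input order and avoids the manual `create_task`/`wait`/`task.result()` bookkeeping. A runnable sketch, where `fetch_score` is a stand-in (an assumption) for the per-task `AsyncSession` query above:

```python
# Sketch: one short-lived resource per task, gathered in order.
# fetch_score stands in for "open an AsyncSession, run one query,
# close it" -- it is a placeholder, not SQLAlchemy API.
import asyncio

async def fetch_score(id_):
    # in real code: async with AsyncSession(async_engine) as session: ...
    await asyncio.sleep(0)  # pretend to wait on the database
    return id_ * 10

async def calculate_scores(ids):
    # gather preserves input order and propagates exceptions directly
    return await asyncio.gather(*(fetch_score(i) for i in ids))
```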


Valt25 commented Dec 9, 2024

I hope someone will see this message.

Why must I use a connection pool to perform multiple operations at the same time?

I understand that if I use the same connection for several read and write queries, I cannot guarantee consistency of the results, since I don't know beforehand how long a query will take to execute.

However, for read-only queries there is no consistency or race-condition problem. I cannot use a single query to fetch several tables (I think I actually can, but that is much more complicated than a few simple SELECT queries).

Are there any plans to allow concurrent queries on the same connection? Or is this a constraint on the PostgreSQL side (or of databases in general)?
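For context on the question above: a PostgreSQL session processes one statement at a time in request/response fashion, so the connection is busy until the previous result has been read; asyncpg's exclusive-section check surfaces that constraint early instead of corrupting the protocol state. If a pool is not an option, concurrent tasks can share one connection by serializing access with an `asyncio.Lock`. A minimal runnable sketch, where `FakeConn` is a stand-in for a real asyncpg connection (an assumption, used so the example runs without a server):

```python
# Sketch: serialize queries over a single connection with a lock so
# concurrent tasks queue up instead of hitting InterfaceError.
# FakeConn mimics asyncpg's "another operation is in progress" check;
# it is illustrative only, not asyncpg API.
import asyncio

class FakeConn:
    def __init__(self):
        self._busy = False

    async def fetchval(self, query, *args):
        if self._busy:
            raise RuntimeError(
                'cannot perform operation: another operation is in progress')
        self._busy = True
        await asyncio.sleep(0)  # pretend to wait on the server round-trip
        self._busy = False
        return args[0] if args else None

class LockedConn:
    """Wrap a connection so only one query is in flight at a time."""

    def __init__(self, conn):
        self._conn = conn
        self._lock = asyncio.Lock()

    async def fetchval(self, query, *args):
        async with self._lock:
            return await self._conn.fetchval(query, *args)

async def main():
    con = LockedConn(FakeConn())
    # four concurrent tasks, one underlying connection, no error
    return await asyncio.gather(
        *(con.fetchval('SELECT $1', i) for i in range(4)))
```

Note this only avoids the error; the queries still run one after another, which is why a pool is the usual answer when you actually want parallelism.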
