I have a sequence of pairs of async requests, where each pair consists of Request A and Request B, and Request B depends on Request A: I need to pass data from Response A into Request B. Therefore, I need to schedule tasks such that each task sends Request A, then sends Request B only after Response A has returned.
from typing import List, Tuple
import asyncio

from aiohttp import ClientSession


async def request_A(url: str, session: ClientSession) -> dict:
    async with session.request('get', url) as response:
        return await response.json()


async def request_B(url: str, data: dict, session: ClientSession) -> dict:
    async with session.request('post', url, json=data) as response:
        return await response.json()


async def request_chain(url_A: str, url_B: str, session: ClientSession) -> dict:
    # Response A's data feeds into Request B.
    response_A_data = await request_A(url_A, session)
    response_B_data = await request_B(url_B, response_A_data, session)
    return response_B_data


async def schedule(url_chains: List[Tuple[str, str]]) -> list:
    tasks = []
    async with ClientSession() as session:
        for url_A, url_B in url_chains:
            tasks.append(asyncio.create_task(request_chain(url_A, url_B, session)))
        # Gather inside the 'async with' so the session stays open until all
        # chains finish.
        return await asyncio.gather(*tasks)


def run_tasks(url_chains: List[Tuple[str, str]]) -> list:
    return asyncio.run(schedule(url_chains))
Now, my question: for each task's pair of requests, is Request A guaranteed to return before Request B is sent? Please explain. My concern is that, within a task, Request B might execute while Request A is still being awaited.
If not, how can I keep the tasks async and non-blocking while still ensuring that, within each task, Request A blocks execution of Request B until Response A has returned?
I understand that I could run all Request A calls in one batch and then all Request B calls in another, but for reasons specific to my use case, I need to run a single batch of (Request A, Request B) pairs.
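For context, here is a stripped-down sketch of the same pattern, with asyncio.sleep standing in for the HTTP calls (fake_request, chain, and main are illustrative names, not part of my real code). This is exactly the within-task ordering I am trying to confirm is guaranteed:

```python
import asyncio


async def fake_request(name: str, delay: float) -> str:
    # Stand-in for an HTTP call: asyncio.sleep yields to the event loop,
    # just as awaiting an aiohttp response would.
    await asyncio.sleep(delay)
    return name


async def chain(pair_id: int, log: list) -> None:
    # Within one coroutine, 'await' suspends only this coroutine;
    # the second line should not run until the first result is in hand.
    log.append(await fake_request(f"A{pair_id}", 0.01))
    log.append(await fake_request(f"B{pair_id}", 0.01))


async def main() -> list:
    log: list = []
    # Three concurrent pairs, analogous to schedule() above.
    await asyncio.gather(*(chain(i, log) for i in range(3)))
    return log


order = asyncio.run(main())
print(order)  # pairs interleave, but each A entry precedes its matching B entry
```

In my test runs each "A" entry appears before its matching "B" entry, but I want to confirm this is guaranteed by the asyncio scheduling model rather than an artifact of timing.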