I'm trying to retrieve a big file from an API and save it to an Azure Storage account, so I'm designing an Azure Function. I don't want my code to download all the data and then write it all at once: the API gives me an input stream, and I'd like to stream that data into an output blob.
Here is a small example:

import azure.functions as func


def main(req: func.HttpRequest, outputblob: func.Out[func.InputStream]) -> func.HttpResponse:
    name = "stranger"
    # mimic a stream by writing one character at a time
    for char in name:
        outputblob.set(char)
    return func.HttpResponse(
        "Hello " + name,
        status_code=200
    )
Here is my function.json:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "outputblob",
      "path": "container/hello.txt",
      "connection": "connection_storage"
    }
  ]
}
But when I open container/hello.txt in my storage account, it contains only the last character, "r", and is just 1 byte.
It looks like outputblob.set(data) overwrites the blob's contents on every call rather than appending.
How can I stream data and append it to my output blob? I'd rather keep the output blob binding, but I'm open to using a ContainerClient object directly.
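For reference, this is roughly the SDK-based fallback I have in mind (a rough sketch, assuming the azure-storage-blob and requests packages; the container name, blob name, and connection setting mirror my binding above, and the function name and URL handling are illustrative):

import os

import requests
from azure.storage.blob import BlobServiceClient


def stream_to_blob(url: str) -> None:
    # The app setting "connection_storage" holds the storage connection string,
    # same as the "connection" property of the blob binding above.
    service = BlobServiceClient.from_connection_string(os.environ["connection_storage"])
    blob_client = service.get_blob_client(container="container", blob="hello.txt")

    # stream=True stops requests from buffering the whole body in memory;
    # resp.raw is a file-like object that upload_blob can read in chunks.
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        blob_client.upload_blob(resp.raw, overwrite=True)

That would stream end to end, but it bypasses the binding entirely, which is what I'd like to avoid if possible.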
(EDIT: in the docs, they specify that we can declare streams as func.Out[func.InputStream].)
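For completeness: buffering the whole payload and calling set() once should write the full file (a minimal sketch below, with a placeholder URL), but that is exactly the all-in-memory approach I'm trying to avoid:

import azure.functions as func
import requests


def main(req: func.HttpRequest, outputblob: func.Out[bytes]) -> func.HttpResponse:
    # Placeholder URL; the real API serves a file too large to hold in memory.
    resp = requests.get("https://example.com/big-file")
    resp.raise_for_status()
    outputblob.set(resp.content)  # single set() call, entire payload in memory
    return func.HttpResponse("done", status_code=200)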