Different meanings of "in-place"
Part of the confusion with the other post comes from what exactly is to be changed in-place.
One interpretation is changing the size of the mapped file without copying any of the underlying data. Your code accomplishes this: no copy takes place. At the end of your code snippet, you have an arr object that refers to the same data on-disk and, insofar as that data is still in the page cache, the same physical pages in memory (not counting the newly added memory, of course).
This data will be mapped to a different virtual address, as you can see by inspecting arr.data, but it is the same physical memory. If you keep the old arr object around, you will find that both map the same memory and writing to one will be visible to the other. This works across processes. The mapping is said to be coherent. This also works on Windows. However, the mapping may or may not be coherent with regular reads and writes. That's what the flush() method is for.
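A small demo of that coherence, with a made-up file name: two memmap objects over the same file share the same physical pages, so writes through one are visible through the other without any copy, even though the virtual addresses differ.

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "coherent.mm")
a = np.memmap(path, dtype=np.uint8, mode="w+", shape=(8,))
b = np.memmap(path, dtype=np.uint8, mode="r+", shape=(8,))

a[0] = 42          # write through the first mapping
print(int(b[0]))   # 42: visible through the second mapping immediately
# Two separate mappings, so the virtual addresses differ:
print(a.ctypes.data == b.ctypes.data)  # False
```

The same holds across processes, since both mappings are backed by the same file.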
Another interpretation of "in-place" is changing the arr object itself to represent the larger memory range. In this regard, the snippet is not in-place, while a functioning memmap.resize() (see bug) would be. I believe this is what the comments on the other question refer to. As discussed above, you get a new arr object and have to replace all references to the old arr yourself.
Portability
There are some concerns about differing system behaviors and your code. If this were C/C++, simply mapping a region larger than the file would result in undefined (or OS-specific) behavior. For example, the POSIX standard for mmap allows larger regions to be mapped, but accessing the portion beyond the end of the file raises SIGBUS. On Windows, the file would be extended instead.
You would get this behavior if you used the mmap module and then created a numpy array with np.frombuffer. However, numpy.memmap takes care of extending the file across platforms. I've not inspected the code but traced the system calls. Numpy checks the file size and expands the file to the appropriate minimum. Unless Numpy changes its behavior, your code is perfectly fine.
The way this is accomplished, at least on Linux with my particular numpy version (1.26.4), is to seek to the end of the file and write a single zero byte, which creates a sparse file. I don't know why they don't use os.ftruncate; probably portability concerns.
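The same trick can be shown in isolation (file name and size are arbitrary choices for the demo): seek to one byte before the desired end and write a single zero byte. The logical file size grows without writing the intermediate data.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
size = 1 << 20  # 1 MiB logical size, arbitrary for the demo
with open(path, "wb") as f:
    f.seek(size - 1)  # jump past the (empty) file contents
    f.write(b"\0")    # a single byte at the very end

print(os.path.getsize(path))  # 1048576
# On POSIX file systems with sparse-file support, far fewer blocks are
# actually allocated than the logical size suggests:
print(os.stat(path).st_blocks * 512)
```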
Performance
Expanding the array like this is not necessarily the fastest way. In general, memory-mapped IO works best for repeated, random access to data that is already in memory (the OS's page cache). Large sequential IO and appending to a file are often faster with normal file routines. Additionally, not all file systems support sparse files, in which case the file expansion done by Numpy may actually fill the file on disk with zeros before the data is read back through the mapping. Exact performance depends on the use case. Consider something like this (and always benchmark!):
arr = np.memmap("fingerprints.mm", …)
arr.flush()  # ensure pending mapped writes reach the file before appending
new_data = np.array(…)
with open("fingerprints.mm", "ab") as fout:
    fout.write(new_data)  # plain sequential append, no mapping involved
new_shape = (len(arr) + len(new_data), ) + arr.shape[1:]
arr = np.memmap("fingerprints.mm", …, shape=new_shape)
In other words: fill the file with regular writes first, then map the written data for repeated access. Of course, all of this depends on your use case. The above only really applies if the newly appended data exists as a regular Numpy array (or in a similar form) at some point.
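A fully runnable sketch of that pattern, with file name, dtype, and shapes made up for the demo:

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "fingerprints.mm")
arr = np.memmap(path, dtype=np.float32, mode="w+", shape=(4, 8))
arr[:] = 1.0
arr.flush()  # make sure pending mapped writes reach the file

# Append with a plain sequential write instead of growing the mapping.
new_data = np.full((2, 8), 2.0, dtype=np.float32)
with open(path, "ab") as fout:
    fout.write(new_data.tobytes())

# Re-map the enlarged file; the old rows are still there, followed by
# the appended ones.
new_shape = (len(arr) + len(new_data),) + arr.shape[1:]
arr = np.memmap(path, dtype=np.float32, mode="r+", shape=new_shape)
print(arr.shape)  # (6, 8)
```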
Other questions
What happens if the following blocks in memory are already in use?
I assume you mean the disk memory. You will see the old data already in the file. If another process has the file opened, you will see each other's memory writes. If you don't want that, create a new file or truncate it. For example:
with open("fingerprints.mm", "w+b") as fout:
    arr = np.memmap(fout, …, shape=arr_shape)
"w+b" truncates any existing file. Be careful that you don't truncate a file that another process has mapped. Use other modes as appropriate, e.g. "x+b" to create a file only if it does not already exists. Or remove old file and create a new one. Existing mappings to removed ("unlinked") files continue to work.
If you mean virtual memory when you say "following blocks in memory", then this is not an issue because the virtual address changes anyway. The OS will find a new, suitably large location. In theory you can run out of virtual address space, especially due to fragmentation, but that is only a concern on 32-bit platforms.
The physical memory is not contiguous anyway. Pages are allocated as they are requested and the allocation uses whatever is available or can be reused with the least expected impact on other uses, e.g. using the least recently used page from the page cache. A new mapping changes nothing about this.
The arr object itself is not updated in-place. You get a new object representing the same (enlarged) memory. If you keep other references to the old object around, they will continue to see the same memory with the old size.