
I am writing an NxMxL matrix of Fortran data to a binary file as follows:

open(94, file = 'mean_flow_sp.dat', status = 'replace', action = 'write', form = 'unformatted')
  do k = 0,L-1
    do j = 0,M-1
      do i = 0,N-1
        write(94) u(i,j,k), v(i,j,k), w(i,j,k)
      enddo
    enddo
  enddo
close(94)

where u, v, w are single-precision arrays allocated as e.g. u(0:N-1,0:M-1,0:L-1). I then read the output file in Python as follows:

f = open('mean_flow_sp.dat', 'rb')
data = np.fromfile(file=f, dtype=np.single).reshape(N,M,L)
f.close()
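If the file was written with Fortran's default unformatted sequential access, each record is framed by length markers, and a raw np.fromfile call picks those up as if they were data. Below is a hedged sketch of how the markers can be stripped in NumPy, assuming 4-byte little-endian record markers (the common gfortran default) and small illustrative dimensions rather than the original 1536x512x640; the trick is that each 20-byte record is exactly five float32-sized fields, of which fields 1:4 are the payload:

```python
import os
import struct
import tempfile

import numpy as np

# Emulate a small Fortran unformatted sequential file: each record is
# [4-byte length][u v w as float32][4-byte length]. Dimensions are
# illustrative, not the ones from the question.
N, M, Lz = 2, 3, 2
rng = np.random.default_rng(0)
u = rng.random((N, M, Lz)).astype(np.float32)
v = rng.random((N, M, Lz)).astype(np.float32)
w = rng.random((N, M, Lz)).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), 'mean_flow_sp.dat')
with open(path, 'wb') as f:
    for k in range(Lz):          # same loop order as the Fortran code
        for j in range(M):
            for i in range(N):
                payload = struct.pack('<3f', u[i, j, k], v[i, j, k], w[i, j, k])
                marker = struct.pack('<i', len(payload))
                f.write(marker + payload + marker)

# Reinterpret each 20-byte record as 5 float32-sized fields; columns 0
# and 4 are the record markers, columns 1:4 are u, v, w.
raw = np.fromfile(path, dtype=np.float32).reshape(-1, 5)
uvw = raw[:, 1:4]                # shape: (N*M*Lz, 3)

# The innermost Fortran loop runs over i, so reshape with i as the
# fastest axis and transpose back to (N, M, Lz) indexing.
u_read = uvw[:, 0].reshape(Lz, M, N).transpose()
```

Note this only works because the record length (12 bytes) happens to be a multiple of 4; for other record sizes, reading the markers as int32 and slicing by byte offsets (or using scipy.io.FortranFile) is the safer route.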

The first odd thing I notice is that the Fortran output file is 10,066,329,600 bytes long (using L = 640, M = 512, N = 1536). So the question is: why is this file not 1536*512*640 * 3 (variables) * 4 (bytes) = 6,039,797,760 bytes long?

Naturally, the Python script then throws an error when trying to reshape the data, since the file does not contain exactly N*M*L*3 single-precision values.

Why is the output file so big?

  • Your compiler is probably adding header/footer data to each record written, which you are not accounting for. You could either search for other questions corresponding to your setup or look at using stream output. Commented May 1, 2017 at 1:19
  • Thanks, I realized that a bit later and posted the answer. Commented May 1, 2017 at 1:32

1 Answer


OK, so I just realized that, as posted here, "Fortran compilers typically write the length of the record at the beginning and end of the record." Each record therefore carries two extra 4-byte length markers on top of its 12 bytes of data, i.e. 20 bytes per record, and 1536*512*640 records * 20 bytes = 10,066,329,600 bytes, so the size of the output file checks out.
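That arithmetic can be sanity-checked in a few lines of plain Python, using the dimensions from the question and assuming a 4-byte length marker on each end of every record:

```python
N, M, L = 1536, 512, 640

records = N * M * L        # one record per (i, j, k) triple
data_bytes = 3 * 4         # u, v, w as 4-byte singles
marker_bytes = 2 * 4       # record length written before and after

naive = records * data_bytes                     # size without markers
actual = records * (data_bytes + marker_bytes)   # size with markers

print(naive)   # 6039797760, the expected size from the question
print(actual)  # 10066329600, the observed file size
```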


