Discussion: Questions on MPI_FILE_READ_AT
Mars
2008-12-26 04:34:30 UTC
Hello all,

Merry Christmas!

I encountered some problems with MPI_FILE_READ_AT. I generated some
data files and wrote a program that uses MPI_FILE_READ_AT to read them,
but I never got the correct answer. My MPI code is as follows:

======================================================================
program test_mpi_read

  implicit none

  include 'mpif.h'

  integer :: fh, ierr, myid, numprocs
  integer :: status(MPI_STATUS_SIZE)
  integer :: irc
  integer(kind=mpi_offset_kind) :: offset
  integer :: i, j, k

  integer :: num

  character(len=64) :: filename

  real(8) :: q

  call MPI_INIT( ierr )
  call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
  call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )

  write(filename,'(a)') 'test_unformatted_direct'
  call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, MPI_MODE_RDONLY, &
                     MPI_INFO_NULL, fh, ierr)

  offset = 1

  call MPI_FILE_READ_AT(fh, offset, q, 1, MPI_REAL8, status, ierr)

  if ( myid == 0 ) then
    print*, 'myid =', myid, 'q=', q
  end if

  call MPI_FILE_CLOSE(fh, ierr)

  call MPI_FINALIZE(irc)

  stop
end program test_mpi_read

============================================================================



The data files used for testing are generated by the following code:

============================================================================
program write_data

  implicit none

  integer :: i, j, k

  character(len=64) :: filename
  integer, parameter :: num = 100

  real(8) :: q

  write(filename,'(a)') 'test_unformatted_direct'
  open (10, file=filename, form='unformatted', access='direct', &
        recl=4, action='write')

  write(filename,'(a)') 'test_unformatted'
  open (20, file=filename, form='unformatted', action='write')

  write(filename,'(a)') 'test_binary'
  open (30, file=filename, form='binary', action='write')

  write(filename,'(a)') 'test_formatted'
  open (40, file=filename, form='formatted', action='write')

  do i = 1, num
    q = exp(dble(i))
    write(10, rec=i) q
    write(20) q
    write(30) q
    write(40, '(e24.16)') q
  end do

  close(10)
  close(20)
  close(30)
  close(40)

end program write_data
==============================================================================


'test_formatted' here is only for checking. I tried
'test_unformatted_direct', 'test_formatted' and 'test_binary', but
never got the right answer. The output I got from these three files is:
1. myid = 0 q= 1.782556321367550E-307
2. myid = 0 q= 7.044881699546222E-258
3. myid = 0 q= -6.443473835168681E-086

The correct answer should be 0.7389056098930650E+01. I cannot figure
out where the problem is. Could you give me some advice? Another
question is: if I need to use MPI_FILE_READ_AT, is there any
requirement on the format of the input data file?

I appreciate your help and look forward to your reply. Thanks!


Kan
Georg Bisseling
2008-12-28 10:26:32 UTC
The "offset" argument in MPI_FILE_READ_AT is interpreted
as a count of "etype" in the current "view" associated with
the file.
Unless you specify a so called "view" onto a file using
MPI_FILE_SET_VIEW a default view will be used that treats
the file as a stream of bytes.
So in this particular example you have at least two choices.
One is to establish a view onto the file that defines the
file as a stream of REAL8 and use the offset 1 in MPI_FILE_READ_AT
or to go with the default view and pass the offset in bytes
to MPI_FILE_READ_AT.
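
A minimal sketch of both choices, assuming the file contains nothing but
consecutive 8-byte reals (like the 'test_binary' file above, which has no
record markers), so the value you want starts at byte 8:

======================================================================
program read_at_example

  implicit none
  include 'mpif.h'

  integer :: fh, ierr, myid
  integer :: status(MPI_STATUS_SIZE)
  integer(kind=MPI_OFFSET_KIND) :: offset, disp
  real(8) :: q

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

  call MPI_FILE_OPEN(MPI_COMM_WORLD, 'test_binary', MPI_MODE_RDONLY, &
                     MPI_INFO_NULL, fh, ierr)

  ! Choice 1: keep the default byte-stream view and give the offset in
  ! bytes. The second real(8) in the file starts at byte 1*8 = 8.
  offset = 8
  call MPI_FILE_READ_AT(fh, offset, q, 1, MPI_REAL8, status, ierr)
  if (myid == 0) print *, 'byte offset, default view:  q =', q

  ! Choice 2: set a view whose etype is MPI_REAL8; the offset then counts
  ! REAL8 elements, so offset = 1 addresses the second value directly.
  disp = 0
  call MPI_FILE_SET_VIEW(fh, disp, MPI_REAL8, MPI_REAL8, 'native', &
                         MPI_INFO_NULL, ierr)
  offset = 1
  call MPI_FILE_READ_AT(fh, offset, q, 1, MPI_REAL8, status, ierr)
  if (myid == 0) print *, 'element offset, REAL8 view: q =', q

  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)

end program read_at_example
======================================================================

Both reads should then return exp(2) = 0.7389...E+01. Note that this only
holds for a plain stream of reals; for Fortran 'unformatted' sequential
files the compiler typically inserts record-length markers between
records, so the byte offsets are different.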

This is, as always, documented at length in the MPI standard
documents: http://www.mpi-forum.org/docs/docs.html . Use the 2.1
version; that will be much easier than gathering all the bits and
pieces from man pages yourself.
--
This signature was left intentionally almost blank.
http://www.this-page-intentionally-left-blank.org/
Mars
2009-01-05 00:58:10 UTC
Thanks, Georg!

Although I still have some problems with the 'unformatted' files, I can
now get correct results for the binary-format file. But I have another
question: have you ever observed any memory-leak problems related to the
MPI_FILE_READ_AT_ALL subroutine? My code runs on a Dell workstation with
two quad-core CPUs without any problem, but when I move it to a big
cluster, I observe very severe memory leaks.
Could you give me some advice on this? Thanks again.


Kan
Georg Bisseling
2009-01-09 18:30:32 UTC
I didn't get that crystal ball for xmas - again...

Well, is the memory given back to the OS when the file is closed?

Are you sure that whenever you call an MPI_FILE_* routine that is flagged
as collective, all processes in the communicator actually participate in
the call? (Open the file on MPI_COMM_SELF if a process needs to do its own
independent I/O.)

You may also check the documentation of your particular MPI implementation
to see whether its strategy for caching file contents in memory can be tuned.
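
In case it helps, a minimal sketch of the MPI_COMM_SELF pattern, with a
made-up per-rank file name, purely as an illustration:

======================================================================
program self_io_example

  implicit none
  include 'mpif.h'

  integer :: fh, ierr, myid
  integer :: status(MPI_STATUS_SIZE)
  integer(kind=MPI_OFFSET_KIND) :: offset
  character(len=64) :: filename
  real(8) :: q

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

  ! Hypothetical file name: each rank reads its own file, e.g. 'mydata_0'.
  write(filename, '(a,i0)') 'mydata_', myid

  ! The file is opened on MPI_COMM_SELF, so no MPI_FILE_* call on this
  ! handle requires participation from any other rank.
  call MPI_FILE_OPEN(MPI_COMM_SELF, filename, MPI_MODE_RDONLY, &
                     MPI_INFO_NULL, fh, ierr)

  offset = 0
  call MPI_FILE_READ_AT(fh, offset, q, 1, MPI_REAL8, status, ierr)
  print *, 'rank', myid, 'read q =', q

  call MPI_FILE_CLOSE(fh, ierr)
  call MPI_FINALIZE(ierr)

end program self_io_example
======================================================================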
--
This signature was left intentionally almost blank.
http://www.this-page-intentionally-left-blank.org/
Mars
2009-01-15 05:04:31 UTC
Hi Georg,

The problem has been solved. The cause was the MPI implementation on
the cluster I was using; the latest version has fixed this problem, and
after switching to it everything works well.

Thanks a lot for your help! :)


Kan