Containers using F202Y's generic programming

Unfortunately, a more containerized version (with the actual storage inside a derived type) won’t work with ifx (or with nvfortran) when the pointer function appears on the LHS. It works with gfortran (even when enforcing -std=f2018 -pedantic) and with flang, though. I don’t know which compiler is right here.
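
For reference, here is a minimal sketch of the pattern in question (the container_t and v names match the workaround shown further down; the surrounding module, the demo program, and the component name a are my own guess at the setup):

module container_m
  implicit none
  type :: container_t
     real, allocatable :: a(:)
  contains
     procedure :: v                 ! pointer-valued accessor
  end type container_t
contains
  function v(this, i)
    class(container_t), target, intent(inout) :: this
    integer, intent(in) :: i
    real, pointer :: v              ! function name used directly as the pointer result
    v => this%a(i)
  end function v
end module container_m

program demo
  use container_m
  implicit none
  type(container_t), target :: c
  c%a = [1.0, 2.0, 3.0]
  c%v(2) = 42.0                     ! pointer function reference on the LHS (an F2008 feature)
  print *, c%a
end program demo

The function here uses its own name as the pointer result, which appears to be the form ifx objects to; the accepted workaround with an explicit result variable is shown below.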

You are right, at least ifx 2025.0.0 does not accept it. I do not see why this should be prohibited, but I am not an expert in the standard either. Maybe this is a matter of oversight? I will ask on the Intel Forum. And if anybody here knows the answer, please share it.

Marc Grodent found a workaround after I posted the issue on the Intel Fortran forum. With a separate result variable declared in the function v, the Intel compiler is happy to oblige.

Code along these lines:

function v(this, i) result(res)
  class(container_t), target, intent(inout) :: this
  integer, intent(in) :: i
  real, pointer :: res
  res => this%a(i)
end function

Slightly inconvenient, but only slightly :innocent:


Wow… Do we agree it’s a compiler bug?

Yes, this is a good point and I agree. There is another approach too. I’ve avoided this technique because its portability is questionable, but I’ve seen build systems that do this in an automatic way. The idea is to compile the code with one Fortran compiler and then partially link that code against that compiler’s runtime library. This resolves the compiler dependence at that point in the build process. The output is an object file, typically *.o, rather than an executable or a library. The other parts of the code are compiled and partially linked with the other Fortran compiler(s). The only remaining unresolved symbols at that point are those involving one *.o file calling routines in another *.o file. A final load step then resolves those. This approach also handles the situation where two different libraries have the same symbol but with different functionality or interfaces, so that the appropriate symbols are bound at the appropriate places in the code. Look to see whether your loader has a -r option; that is how this works.

In the case of a regular procedure (not a type-bound procedure), I think that combination of the pointer and intent(in) attributes has already been given a special meaning, and it does not require that the actual argument have the target or pointer attribute. It has the same meaning as, for example,

integer, intent(in), target :: m
integer, pointer :: pm
pm => m

In other words, it is a shorthand way of declaring a local pointer associated with a dummy target. In this case, the pointer is undefined upon return unless the actual argument has the target or pointer attribute, so it is up to the programmer to ensure that consistency, with no help from the compiler. If this seems confusing, just for the sake of saving a few keystrokes, then you are not alone.
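
A small sketch of that equivalence (illustrative names only): the two subroutines below are intended to behave identically when the actual argument has the target attribute.

subroutine longhand(m)
  integer, intent(in), target :: m
  integer, pointer :: pm
  pm => m                         ! local pointer explicitly associated with the dummy target
  print *, pm
end subroutine longhand

subroutine shorthand(pm)
  integer, pointer, intent(in) :: pm   ! dummy pointer; associated with the actual argument on entry
  print *, pm
end subroutine shorthand

Both would be invoked the same way, e.g. call longhand(n) and call shorthand(n) with integer, target :: n.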

If a different meaning were given to this same attribute combination for the type-bound procedure argument case, one might argue that it would be even more confusing to a programmer. Then, if the pointer is not local but, say, a function result, even further confusion can ensue.

Yes, that is the consensus on the Intel forum.


By the way, I don’t really understand why the possibility of associating an actual target with a dummy pointer (in regular, non-TB procedures) is limited to the case where the dummy pointer is intent(in)?

I don’t think pointer, intent(in) has any special meaning. If I understand it right, it expresses the need of the subroutine for a pointer argument and the promise that the association status of the pointer won’t change within the routine. And the actual argument definitely requires the target or pointer attribute.

Take the following code as example:

program test
  implicit none

  integer :: i1
  integer, target :: ti1
  integer, pointer :: pi1

  !! Uncommenting following call raises compiler error
  ! call pointer_demo(i1)
  call pointer_demo(ti1)
  pi1 => ti1
  call pointer_demo(pi1)

contains

  subroutine pointer_demo(ptr)
    integer, pointer, intent(in) :: ptr

    print *, "associated(ptr):", associated(ptr)

  end subroutine pointer_demo

end program test

Uncommenting the call with i1 (which has neither the pointer nor the target attribute) triggers a compiler error. GFortran, for example, stops with

Error: Actual argument for ‘ptr’ at (1) must be a pointer or a valid target for the dummy pointer in a pointer assignment statement

Because if the pointer dummy argument had the intent(out) or intent(inout) attribute, the subroutine would be allowed to change the association status of the pointer. That does not really make sense for automatic pointer targeting, where the subroutine basically gets a “temporary pointer” generated by the compiler (pointing to the actual variable with the target attribute).
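
A sketch of why (retarget is a made-up name): with intent(inout), the routine may re-associate the dummy pointer, and that new association must be handed back to the caller, which a compiler-generated temporary pointer to a target actual could not do.

subroutine retarget(ptr)
  integer, pointer, intent(inout) :: ptr
  integer, target, save :: stash = 0

  ptr => stash      ! re-associates the dummy; the caller's pointer must reflect this,
                    ! so a "temporary pointer" to a non-pointer target actual would
                    ! have nowhere to store the new association
end subroutine retarget

Referring to the program above, call retarget(pi1) would be acceptable, while call retarget(ti1) could not work, because there is no pointer on the caller’s side to receive the new association.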


Right… I am always confused about whether intent applies to the pointer association or to the contents of the target…


I disagree completely. Swift is a general-purpose language; Fortran is not. Moreover, the whole idea of Fortran is to express ideas, not implementations. Dictionaries and “flexible” arrays with easy ways to move and access data are what would make 99% of complaints about Fortran go away, since people would stop needing to re-implement basic data structures that should be GIVEN.

It’s a science and data language first, trying to make it into a clunky and poor version of C++ is counterproductive in my opinion.

Fortran does already have generics, but they are only allowed for intrinsic procedures: move_alloc is an example of a generic subroutine that cannot be implemented in Fortran itself. Fortran is not self-complete in the way C++ is, and that is completely fine. But to be even considered attractive for any future projects (independent codes or libraries), it needs to excel at data and number handling. That is the only space where it has any chance to fight.
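
For illustration, the same move_alloc reference accepts arguments of different types and ranks, which a user-written Fortran procedure cannot currently offer without a separate specific procedure for every type/kind/rank combination:

program move_alloc_demo
  implicit none
  real, allocatable :: a(:), b(:)
  character(len=:), allocatable :: s, t

  a = [1.0, 2.0, 3.0]
  call move_alloc(a, b)           ! rank-1 real array: storage moves from a to b
  print *, allocated(a), b

  s = "hello"
  call move_alloc(s, t)           ! deferred-length character scalar: same generic name
  print *, allocated(s), t
end program move_alloc_demo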

It is awesome to see lfortran take the right way, it is the only sane solution in my opinion.

I completely agree with that. However, even for this purpose we need the possibility of a robust generic framework. I am thinking about implementing algorithms which can be used with both real numbers and dual numbers (in order to get automatic forward differentiation). In C++ and Julia this can be done easily already. In Fortran, it will only work cleanly with the generics feature planned for F202Y (unless you use the usual “dirty” tricks). But then, the same framework could also be used to implement arbitrary containers not provided by the core language…
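
As an example of such a “dirty” trick (a sketch only — dual_m, dual_t, and its overloaded assignment(=), operator(+), and operator(*) are assumed to exist and are not shown): the algorithm body lives once in an include file and is compiled into one specific procedure per type behind a generic interface, which is exactly the duplication that the planned F202Y generics would eliminate.

! horner.inc -- shared, type-agnostic body (Horner evaluation of a polynomial):
!     y = c(size(c))
!     do i = size(c) - 1, 1, -1
!        y = y*x + c(i)
!     end do

module poly_m
  use dual_m, only: dual_t          ! hypothetical dual-number type for forward AD
  implicit none

  interface horner                  ! one generic name, one shared body, two instantiations
     module procedure horner_real, horner_dual
  end interface horner

contains

  function horner_real(c, x) result(y)
    real, intent(in) :: c(:), x
    real :: y
    integer :: i
    include 'horner.inc'
  end function horner_real

  function horner_dual(c, x) result(y)
    real, intent(in) :: c(:)
    type(dual_t), intent(in) :: x
    type(dual_t) :: y
    integer :: i
    include 'horner.inc'
  end function horner_dual

end module poly_m

Assuming dual_t carries value and derivative components, calling horner with a dual argument then evaluates the polynomial and its derivative in one pass; with F202Y generics the shared body could live in a template instead of an include file.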

The question is also where you stop the inclusion of containers in the core language. I have used Python long enough to remember that sets were not always part of the core language. Should sets be part of the Fortran core language? And queues? And various trees (which might be needed for graphs, etc.)? And so on. While there might be a consensus about lists and dictionaries, there are several other data types which might also be useful in number crunching. Including all of them in the core language would unnecessarily bloat it. (They would be better off living in a stdlib anyway.) Surely, Fortran should not become C++ (on the contrary!), but it would be beneficial if it provided robust ways for its users to implement such data structures when needed.

Only if a stdlib is implemented like the STL in C++, where it is for the most part standard across all compilers, shipped as part of the compiler distribution, and its use is basically invisible to the user (i.e., users don’t have to explicitly link the library themselves or jump through extra hoops to set correct paths, etc.). I don’t see this ever happening with the commercial compilers. Maybe with the open-source projects, if they would just get together, form some kind of consensus on how their projects can work together, and arrive at truly portable and interoperable capabilities.

Why is it the user’s responsibility to implement things that are considered standard features in most other languages? This attitude appears to be widespread in the Fortran compiler development community. While I agree that adding them directly to the language MIGHT lead to “bloat”, I think the real reason is the cost, and the disruption of existing code bases, required to implement them in some seamless manner. I also think the compiler development community is guilty of assuming that most numerical codes are still array based. Yes and no.

As an example, in the CFD world, unstructured-grid solvers (where the topology does not map directly to a Cartesian or tensor-product mesh, and by inference to a multidimensional array) need to be able to define connectivity data such as nearest elements, edges, nodes, and nearest neighbors of neighbors. Yes, you can do that with arrays (the finite element community has done it for decades), but using spatial trees like KD-trees makes life a lot easier. Dictionaries and C++ vector-like structures also make life a lot easier. Another example would be the so-called “meshless”, particle-based algorithms such as SPH (Smoothed Particle Hydrodynamics), where the “mesh” might be dynamic, so connectivity data must be updated every time step (or so). Using arrays for this can be a big headache if you have to add or remove particles in a particular support region for the underlying interpolation required to reconstruct values at nodes or “element” centers.

From a user perspective, having to reinvent the wheel by either writing your own data structures or relying on a third-party library that may or may not have long-term support, or the level of V&V that the compiler community could give it, is a waste of time and resources. The availability of ADT structures via the STL (along with some use of templates) is the major reason almost all new FEM codes and most new CFD codes are written in C++ and not Fortran. For some reason, no matter how many times those of us in the Fortran user community bring this up, the response from the compiler development community is basically “write it yourself”.