slow memory allocation

    #50107
    Daniel Juncu
    Participant

    Hello,
    I noticed that for me the call to “rttov_alloc_direct” is rather slow (for both allocation and deallocation). In fact, it takes longer than the actual reflectance calculation in “rttov_direct”: around 6e-5 s per profile on my machine, which is more than I would expect, given that memory allocation is normally relatively quick.

    I call both functions for a batch of profiles (around 10’000), and the allocation timing seems to scale linearly with the number of profiles.

    Is there something I can do to fix this?

    Thanks and best regards,
    Daniel

    #50109
    James Hocking
    Keymaster

    Hi Daniel,

    It is perhaps not surprising that the allocation time scales linearly with the number of profiles: the profile member data must be allocated individually for each element of the profiles(:) array, so more profiles means more individual allocations.
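    As a generic illustration (this is not RTTOV's actual code, just a sketch of why per-element allocation scales with the profile count), a derived type with allocatable components requires those components to be allocated separately for every element of the array:

        ! Generic illustration only: a derived type with allocatable
        ! components, loosely analogous to a profile structure.
        PROGRAM alloc_scaling
          IMPLICIT NONE

          TYPE :: profile_like
            REAL, ALLOCATABLE :: p(:), t(:), q(:)   ! stand-ins for profile members
          END TYPE profile_like

          TYPE(profile_like), ALLOCATABLE :: profiles(:)
          INTEGER :: iprof, nprof, nlevels

          nprof   = 10000
          nlevels = 101

          ALLOCATE(profiles(nprof))   ! one allocation for the array itself...
          DO iprof = 1, nprof         ! ...but 3*nprof allocations for the members
            ALLOCATE(profiles(iprof)%p(nlevels), &
                     profiles(iprof)%t(nlevels), &
                     profiles(iprof)%q(nlevels))
          END DO

          ! Deallocation likewise visits every element
          DO iprof = 1, nprof
            DEALLOCATE(profiles(iprof)%p, profiles(iprof)%t, profiles(iprof)%q)
          END DO
          DEALLOCATE(profiles)
        END PROGRAM alloc_scaling

    So the allocation cost per profile is roughly constant, and the total cost grows with the number of profiles allocated.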

    Allocation can be relatively expensive (system-dependent), but you would typically call the allocation routine(s) once, and then potentially call RTTOV multiple times to process all the profiles.

    If you are currently allocating once for all your profiles and then calling RTTOV once to simulate all those profiles, it would be unusual for the allocation call to be as expensive as, or more expensive than, the simulation call, although it does sound like you are allocating an extremely large number of profiles.

    You could consider allocating data for a smaller number of profiles (e.g. 10 or 100) and calling RTTOV multiple times, simulating your profile data in batches.
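    For illustration, here is a minimal sketch of that batching pattern. It follows the argument lists used in the RTTOV example programs (e.g. example_fwd.F90), but please check the exact interfaces against the user guide for your RTTOV version; nbatch, nprof_total, nchannels and fill_batch are placeholder names, and the declarations, coefficient reading and error handling are omitted.

        ! Sketch only: declarations, rttov_read_coefs and error checking as in
        ! the standard example programs. nbatch, nprof_total, nchannels and
        ! fill_batch are placeholders for your own variables/code.
        nbatch    = 100                    ! profiles allocated per RTTOV call
        nchanprof = nbatch * nchannels     ! channels simulated per call

        ! Allocate the RTTOV structures once, sized for a single batch
        CALL rttov_alloc_direct(errorstatus, 1_jpim, nbatch, nchanprof, nlevels, &
                                chanprof, opts, profiles, coefs,                 &
                                transmission, radiance,                          &
                                calcemis=calcemis, emissivity=emissivity,        &
                                calcrefl=calcrefl, reflectance=reflectance,      &
                                init=.TRUE._jplm)

        ! Process the full profile set in chunks of nbatch profiles
        ! (for simplicity this assumes nprof_total is a multiple of nbatch;
        !  chanprof, once set up for nbatch profiles, can stay the same each time)
        DO ibatch = 1, nprof_total, nbatch

          ! Copy this chunk of your input data into profiles(1:nbatch) (your own code)
          CALL fill_batch(profiles, ibatch, nbatch)

          CALL rttov_direct(errorstatus, chanprof, opts, profiles, coefs, &
                            transmission, radiance,                       &
                            calcemis=calcemis, emissivity=emissivity,     &
                            calcrefl=calcrefl, reflectance=reflectance)

          ! Copy the outputs you need (e.g. the reflectances in the radiance
          ! structure) out before the next iteration overwrites them

        END DO

        ! Deallocate once at the end (asw = 0 => deallocate)
        CALL rttov_alloc_direct(errorstatus, 0_jpim, nbatch, nchanprof, nlevels, &
                                chanprof, opts, profiles, coefs,                 &
                                transmission, radiance,                          &
                                calcemis=calcemis, emissivity=emissivity,        &
                                calcrefl=calcrefl, reflectance=reflectance)

    This way the allocation is done once, for nbatch profiles, independent of the total number of profiles you process, and you can tune nbatch to trade memory use against per-call overhead.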

    Best wishes,
    James

    #50110
    Daniel Juncu
    Participant

    Hi James,

    Many thanks for your quick reply!

    Yes, I had already accounted for the number of profiles when calculating the timings, and the allocation time per profile is still longer than the reflectance calculation time per profile, so allocation becomes a bottleneck in our use case.

    Your suggestion of processing in batches sounds like a good idea, I will try that and report back.

    Thanks,
    Daniel
