Free/Busy event planner with CalDAV is slow (redundantly spams outbox endpoint)

Hey everybody,

We’re using the Nextcloud calendar (CalDAV) together with eM Client. When I create an event, add some attendees (colleagues on the same Nextcloud instance) and then open the Free/Busy planner window, the initial free/busy times load fast, but as soon as I start scrolling horizontally, the timeframes for the adjacent days load much more slowly.

After some more testing, I can observe the following:

  • The more I use the horizontal scroll bar to move the timeframe around, the slower it gets.
  • If I wait a bit and then just minimally “peek” into the next unloaded day with a single scroll step, that day loads fast again. If I scroll a lot at once, loading all the days that just became visible cumulatively takes far longer than loading each one with that minimal peeking technique.
  • Resizing the Free/Busy window also seems to make it worse. Basically the more you fidget around with scrolling/resizing, the worse it gets.
  • Looking into the CalDAV logs, I can see that there appears to be some sort of request queue. It performs one free/busy request at a time and starts the next one immediately once the previous one has finished, but never in parallel. Many of the requests are identical, querying the exact same DTSTART and DTEND in the VFREEBUSY block.
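For reference, a scheduling free/busy request (per RFC 6638) is POSTed to the user’s scheduling outbox with a VFREEBUSY body roughly like the one below; the UID, addresses, and timestamps here are placeholders, not values from my logs:

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//client//EN
METHOD:REQUEST
BEGIN:VFREEBUSY
UID:placeholder-uid-1234
DTSTAMP:20240101T120000Z
DTSTART:20240101T000000Z
DTEND:20240102T000000Z
ORGANIZER:mailto:organizer@example.com
ATTENDEE:mailto:colleague@example.com
END:VFREEBUSY
END:VCALENDAR
```

The identical requests I see in the logs repeat the same DTSTART/DTEND pair in this block.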

This leads me to suspect the following: on every scroll/resize event where a visible day is still missing its free/busy data, a new free/busy CalDAV request is queued to fetch that missing data. However, if you keep scrolling/resizing before the response comes in, more and more redundant requests for that same data pile up in the queue. If you then scroll further until the next day in line becomes visible, loading its data has to wait until all of the previously queued redundant requests have finished in order.

It seems that simply checking whether a free/busy request for a given timeframe is already queued (or currently active) before queueing another identical one would fix this.
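To illustrate the idea, here is a minimal sketch of such a deduplicating serial queue. This is hypothetical code, not eM Client’s actual implementation: timeframes are keyed by their (DTSTART, DTEND) pair, duplicates are dropped while an identical request is queued or in flight, and requests are still processed one at a time as observed in the logs:

```python
from collections import deque

class FreeBusyQueue:
    """Hypothetical serial free/busy request queue that skips duplicates."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable(start, end) -> free/busy result
        self._pending = deque()  # queued (start, end) timeframes, FIFO
        self._seen = set()       # timeframes currently queued or in flight

    def request(self, dtstart, dtend):
        """Queue a timeframe; return False if an identical one is already pending."""
        key = (dtstart, dtend)
        if key in self._seen:
            return False         # redundant request dropped instead of queued
        self._seen.add(key)
        self._pending.append(key)
        return True

    def drain(self):
        """Process queued requests strictly one after another (no parallelism)."""
        results = []
        while self._pending:
            key = self._pending.popleft()
            results.append(self._fetch(*key))
            self._seen.discard(key)  # finished: a later re-request is allowed
        return results
```

With this, fidgeting with the scroll bar would queue each still-missing day at most once, so a newly visible day never waits behind a pile of redundant requests for already-requested days.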

I can post my full logs too if it helps.

All the best!