Bug #86018
FolderTreeView::getBrowseableTreeForStorage() will timeout on network file systems
Description
FolderTreeView::getBrowseableTreeForStorage() contains this code:
// If the mount is expanded, go down:
if ($isOpen && $storageObject->isBrowsable()) {
    // Set depth:
    $this->getFolderTree($rootLevelFolder, 999);
}
While it may be acceptable to browse 999 levels on the local file system, this fails on network file systems such as AWS or WebDAV because fetching all those levels takes far longer. With PHP timeout restrictions lifted, it took about 6 hours over a high-speed connection to fetch the folder tree from NextCloud with a single first-level subfolder expanded. This is absolutely not acceptable. The folder tree fetches EVERY possible subfolder of each expanded first-level folder in the tree. It descends all the way down, even into subfolders that the user never opened. Over a network, this takes an enormous amount of time.
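For illustration only, a minimal mitigation sketch against the method shown above would be to cap the recursion depth for non-local storages. getDriverType() is the real ResourceStorage method, but the chosen depth values are assumptions, not core defaults, and this is not an actual patch:

// Sketch only: cap the recursion depth for remote storages.
// 'Local' is the core local driver type; the depth values here
// are illustrative assumptions.
if ($isOpen && $storageObject->isBrowsable()) {
    $depth = $storageObject->getDriverType() === 'Local' ? 999 : 1;
    $this->getFolderTree($rootLevelFolder, $depth);
}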
In general, FAL is not very usable with network storages.
Updated by Benni Mack about 6 years ago
Hey Dmitry,
nice to read you here.
I agree with you. What would be the best way to "mitigate" this? It seems like a deeper issue that will be hard to fix in existing LTS versions; should we come up with a better folder tree separately?
All the best,
Benni.
Updated by Dmitry Dulepov about 6 years ago
I could not find a quick solution for this :(
Ideally, the folder tree should only load the folders that the user has expanded. Currently it loads everything. Even worse: when I expand another folder, that function is called again and it re-fetches all previously expanded folders starting from the root. This means that the more folders you expand, the more it fetches over the network and the slower the tree becomes. The important point is that it always fetches all expanded folders from the root down to their last node.
In the end, the user just hits the PHP timeout. They also cannot insert images into content elements, because the image browser uses the same tree component. The only cure is to clear all user settings to reset the file tree's list of expanded nodes. On top of that, users keep clicking on the File module to see the tree, spawning more and more PHP processes until the server becomes overloaded and the network is saturated with requests. This is, in effect, an unintended DoS caused by editors trying to see their files. The problem is very serious with network FAL drivers.
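To illustrate the "load only what the user expanded" idea mentioned above, here is a minimal sketch. Folder::getSubfolders() and Folder::getCombinedIdentifier() are real FAL methods, but the function itself and its wiring into the tree component are hypothetical:

// Hypothetical sketch: recurse only into folders the user has
// actually expanded, instead of a fixed depth of 999.
use TYPO3\CMS\Core\Resource\Folder;

function getVisibleTree(Folder $folder, array $expandedIdentifiers): array
{
    $node = ['folder' => $folder, 'children' => []];
    if (in_array($folder->getCombinedIdentifier(), $expandedIdentifiers, true)) {
        foreach ($folder->getSubfolders() as $subfolder) {
            $node['children'][] = getVisibleTree($subfolder, $expandedIdentifiers);
        }
    }
    return $node;
}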
Updated by Benni Mack almost 4 years ago
Hey Dmitry,
I am trying to reproduce the issue. Can you tell me:
- Which remote driver are you using (TER extension / custom), e.g. S3?
- How many files / how much data is in the remote storage?
Updated by Dmitry Dulepov almost 4 years ago
It is WebDAV; we took the driver from TER. The connection is to OwnCloud. There are about 5-10 subdirectories on each level, up to 10 levels deep, with about 10-100 files on each level, mostly PDFs. Due to the way FAL works, it fetches the same data multiple times: the request to fetch the contents of the same (!) directory is issued more than once (I think 2 or 3 times in a row), and each fetch means a new HTTP request. The same applies to reading each file's properties (size, etc.). So it lists the directory and then fetches information for each file individually: 2 * (1 dir + 100 files) = 202 HTTP requests just for one directory, and so on for every subdirectory. This happens for every directory, every file, and every level, even when the tree node is not expanded (and probably never will be).
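To put those per-directory numbers into perspective, a crude back-of-the-envelope estimate; all figures are the assumptions from the paragraph above, not measurements:

// Crude upper-bound estimate of HTTP requests for a full traversal,
// using the assumed figures above: 5 subdirectories per directory,
// 100 files per directory, 10 levels deep, 2 requests per item.
$dirsPerDir = 5;
$filesPerDir = 100;
$levels = 10;
$requestsPerDir = 2 * (1 + $filesPerDir); // the 202 from above

$totalDirs = 0;
for ($level = 0; $level < $levels; $level++) {
    $totalDirs += $dirsPerDir ** $level;
}
// Roughly 2.4 million directories and about 490 million requests;
// even the conservative end of the estimate explodes combinatorially.
printf("%d directories, %d HTTP requests\n", $totalDirs, $totalDirs * $requestsPerDir);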
If you do not have a remote OwnCloud, you may be able to reproduce it by mounting a directory with many files (such as a TYPO3 core checkout) via sshfs somewhere inside fileadmin/. But make sure that the server is remote, on the Internet.
Updated by Benni Mack almost 4 years ago
- Status changed from New to Accepted
- Assignee set to Benni Mack
Updated by Benni Mack almost 3 years ago
- Status changed from Accepted to Needs Feedback
Hi Dmitry,
thanks for your report.
We have now rebuilt the file storage tree based on AJAX (like the pagetree), loading it in multiple levels. Does this problem still occur in v11?
Updated by Dmitry Dulepov almost 3 years ago
Hi Benni!
I will not be able to test it. We decided not to use WebDAV directly but instead make a local copy of the necessary files. It has its disadvantages, but it works for us.
Sorry :(
Updated by Benni Mack almost 3 years ago
Dmitry Dulepov wrote in #note-8:
> Hi Benni!
> I will not be able to test it. We decided not to use WebDAV directly but instead make a local copy of the necessary files. It has its disadvantages, but it works for us.
> Sorry :(
Hi Dmitry,
Thanks for the quick reply. I guess we need somebody else to validate this issue then.
All the best,
Benni.
Updated by Christian Kuhn almost 3 years ago
- Status changed from Needs Feedback to New