Bug #44964
Status: Closed
DataHandler - process_cmdmap - Canceled during execution - multiple images on original content
Description
Hello,
If I use DataHandler with process_cmdmap on multiple pages at once (including sublevels) and stop its execution,
the original copied page and the already processed subpages have "cloned" image file references in their content elements.
The reason seems to be that the references are normally re-mapped only at the end of the copy process (process_cmdmap).
However, when multiple sublevels are copied, this might not be such a good idea.
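A toy model (illustrative Python, not TYPO3 code; all names are assumptions) of why deferring the reference re-mapping to a final pass leaves stale references when the request is canceled mid-copy:

```python
# Toy model of deferred vs. immediate re-mapping of file references
# during a copy pass (loosely mimicking process_cmdmap behavior).

def copy_elements(elements, remap_at_end=True, interrupt_after=None):
    """Copy content elements; file references are re-mapped either
    immediately per element, or in a single final pass."""
    copies = []
    for i, el in enumerate(elements):
        copy = {"id": el["id"] + 100, "file_ref": el["file_ref"]}
        if not remap_at_end:
            copy["file_ref"] = copy["id"]  # re-map immediately
        copies.append(copy)
        if interrupt_after is not None and i == interrupt_after:
            return copies  # simulated canceled request: final pass never runs
    if remap_at_end:
        for copy in copies:
            copy["file_ref"] = copy["id"]  # final re-mapping pass
    return copies

elements = [{"id": 1, "file_ref": 1}, {"id": 2, "file_ref": 2}]

# Interrupted with deferred re-mapping: the copy still points at the
# original element's reference, so the original appears to have it twice.
broken = copy_elements(elements, remap_at_end=True, interrupt_after=0)
print(broken)  # [{'id': 101, 'file_ref': 1}]

# Immediate re-mapping survives the same interruption.
ok = copy_elements(elements, remap_at_end=False, interrupt_after=0)
print(ok)  # [{'id': 101, 'file_ref': 101}]
```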
I have added screenshots that demonstrate the problem.
I am using the current git version.
Updated by Andreas Wolf almost 12 years ago
- Assignee set to Oliver Hader
I think this is more a general TCEmain problem in conjunction with IRRE.
Updated by Oliver Hader almost 12 years ago
Hm... hard to tell... are there any TCEmain/DataHandler hooks in use that manipulate the processing?
Besides that, in the 3rd screenshot I spotted that the copied page seems to have the same UID as the original page, which might be the origin of the behavior concerning page resolving.
Updated by Andreas Allacher almost 12 years ago
The UID is definitely correct later on (you can't even store a page with the same UID in the database, as the UID is the primary key). I might have refreshed too soon, because I wanted to stop the copy process before the "issue" became invisible again. From what I remember, the issue still exists even when the UID is correct.
Updated by Alexander Stehlik almost 11 years ago
How exactly do you stop the execution?
Maybe the solution I provided in #44795 also fixes your problem?
Updated by Mathias Schreiber almost 10 years ago
- Status changed from New to Needs Feedback
- Assignee changed from Oliver Hader to Mathias Schreiber
- Is Regression set to No
tbh I am not sure what to do with this.
Aborting a request will break almost everything, and unless we introduce transactions to all queries handled by DataHandler, there is just no way to fix this.
Updated by Andreas Allacher almost 10 years ago
Execution can be stopped, e.g. by pressing ESC during a request in which DataHandler is executed.
Most of the time this will not take long; however, if one copies a whole page tree, for example, it can take some time.
Shouldn't the images just be attached to the "new" elements directly, instead of to the old ones? That way the old elements would not end up with duplicate images.
The worst case would then be that not all content elements/images are copied; but since those are copies, I can just delete them and redo the copy process.
It is far more complex if the original content is modified.
Another way might be to involve the workspaces somehow, e.g. set the workspace ID to -9999 during creation and then update all -9999 records to 0 at the end?
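The sentinel-workspace idea can be sketched as follows (illustrative Python, not TYPO3 code; -9999 is the commenter's example value, not an actual TYPO3 convention, and the field name `t3ver_wsid` is borrowed for illustration): all copies are created hidden behind the sentinel ID, and a single final step publishes them.

```python
# Sketch: create copied records under a sentinel workspace ID so an
# aborted run leaves only invisible rows, then flip them live at the end.
SENTINEL = -9999  # commenter's example value
LIVE = 0

records = []

def create_copy(uid):
    # New rows start out hidden behind the sentinel workspace ID.
    records.append({"uid": uid, "t3ver_wsid": SENTINEL})

def visible():
    return [r for r in records if r["t3ver_wsid"] == LIVE]

def finalize():
    # One cheap final step: publish everything created in this run.
    for r in records:
        if r["t3ver_wsid"] == SENTINEL:
            r["t3ver_wsid"] = LIVE

create_copy(101)
create_copy(102)
print(len(visible()))  # 0 -- an aborted run exposes nothing
finalize()
print(len(visible()))  # 2
```

An aborted run would still leave orphaned sentinel rows behind, but they never become visible and could be cleaned up safely.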
Seems I forgot to set up a watch on this ticket and forgot all about it, because it really does not happen often that one cancels the execution of the DataHandler, especially if one knows about the issues involved.
Updated by Alexander Opitz over 9 years ago
- Category changed from File Abstraction Layer (FAL) to Backend API
- Status changed from Needs Feedback to New
- Target version set to 8 LTS
- Complexity set to hard
The issue isn't FAL-dependent; it concerns backend processing overall. But this is very hard to solve and won't happen for 7.
Updated by Andreas Allacher over 9 years ago
No problem, it isn't that "big" an issue.
It should still be solved, though, and yes, it is probably a general issue; I just noticed it with FAL for the first time :)
Maybe the title should also be changed then?
Updated by Benni Mack over 7 years ago
- Target version changed from 8 LTS to next-patchlevel
Updated by Benni Mack over 5 years ago
- Target version changed from next-patchlevel to Candidate for patchlevel
Updated by Benni Mack over 4 years ago
- Status changed from New to Closed
Yeah, this should be fixed once we migrate DataHandler to v2.0, when we go with Event Sourcing or something like "atomic" persistence. Will close this for now.