| Field | Value |
|---|---|
| Summary | post-build refresh should report progress and be cancelable |
| Product | [Tools] CDT |
| Component | cdt-build |
| Reporter | Chris Recoskie <recoskie> |
| Assignee | cdt-build-inbox <cdt-build-inbox> |
| Status | NEW |
| QA Contact | Jonah Graham <jonah> |
| Severity | major |
| Priority | P3 |
| CC | cdt-build-inbox, cdtdoug, jamesblackburn+eclipse, malaperle, mario.pierro, vivkong, yevshif |
| Version | 7.0.1 |
| Target Milestone | --- |
| Hardware | All |
| OS | All |
| Whiteboard | |
| Attachments | |
Description (Chris Recoskie)
---

Comment 1

The current CDT logic is that the workspace must be consistent with the filesystem after a build. Following that logic (which I do not necessarily endorse), it must be consistent after a partial build as well, i.e. it should be a separate progress monitor. I think the refresh problem should be dealt with separately, i.e. in bug 133881. BTW, what's going on with that one?

---

Comment 2

(In reply to comment #1)
> The current CDT logic is that the workspace must be consistent with the
> filesystem after a build. Following that logic (which I do not necessarily
> endorse), it must be consistent after a partial build as well, i.e. it should
> be a separate progress monitor. I think the refresh problem should be dealt
> with separately, i.e. in bug 133881. BTW, what's going on with that one?

Re: Bug 133881, I have a design document in progress, but it's not nearly done yet. It turns out to be a more complicated problem than first thought, because the platform's refresh semantics are not compatible with the output folder's exclusion semantics.

---

Comment 3

I agree with the premise of this bug. We certainly shouldn't ignore cancellation of long-running jobs like the end-of-build refresh. The user should always have control over stopping long-running tasks.

As for the platform refresh semantics: it'll be a lot of work, and unnecessary work at that, to work around the issue of long blocking builds. All we need is to fix bug 313673 and add an asyncRefresh method to IResource, and we'll be in a much happier place. For managed build we only need to refresh the final build artefact in-line with the build; the rest can be done best-effort.

---

Comment 4

Created attachment 185550 [details]
proposed patch
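The split proposed above (refresh only the final build artefact synchronously, defer everything else to a best-effort background job) can be sketched roughly as follows. The types here are minimal local stand-ins for the Eclipse resource APIs (e.g. IResource.refreshLocal and a background WorkspaceJob), so the names and shapes are illustrative assumptions, not the real platform interfaces.

```java
import java.util.ArrayList;
import java.util.List;

public class ArtifactRefreshSketch {
    // Stand-in for a resource that can be synced with the filesystem.
    interface Resource {
        String name();
    }

    // Refresh the final build artifact inline, so API callers see it in the
    // resource model immediately after build() returns; queue everything else
    // for a best-effort background refresh job.
    static List<String> build(Resource artifact, List<Resource> others,
                              List<String> backgroundQueue) {
        List<String> refreshedInline = new ArrayList<>();
        refreshedInline.add(artifact.name());  // inline, blocking refresh
        for (Resource r : others) {
            backgroundQueue.add(r.name());     // deferred to a job
        }
        return refreshedInline;
    }

    public static void main(String[] args) {
        List<String> queue = new ArrayList<>();
        List<Resource> others = List.of(() -> "obj/a.o", () -> "obj/b.o");
        List<String> inline = build(() -> "app.elf", others, queue);
        System.out.println(inline); // only the artifact is refreshed inline
        System.out.println(queue);  // the object files wait for the job
    }
}
```

The design intent is that only the artifact's visibility is guaranteed at the moment build() returns; the rest of the workspace catches up asynchronously.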
---

Comment 5

A few comments:
- If you're going to do it in a Job, the job should be a WorkspaceJob so change notifications are batched.
- Also, I wouldn't set a rule on the job. In the patch you've got, the workspace will be locked for the entirety of the Job run, whereas without the rule other work could get in between each refreshLocal.
- Also worth explicitly refreshing the final build artifact inline with the build; without this, API callers of build may not see the build artifact in the IResource model immediately after the build.

Finally, I think this really is a workaround until Bug 313673 is fixed.

---

Comment 6

I'm kind of at a loss as to what to do about the managed build case in general.

It seems on further investigation that the dependency file post-processing is relying upon the refresh happening inline with the build to detect any new .d files and process them.

So, we've got conflicting requirements here... If I refresh in a Job, the user will by default have their partially built resources show up even if they cancel the build. However, dependency processing will be messed up. If I refresh inline using the same progress monitor as the build, then canceling the build won't result in any new resources showing up.

My $0.02 is that if we have to choose, I think it's more important that the user

---

Comment 7

(In reply to comment #5)
> A few comments:
> - If you're going to do it in a Job, the job should be a WorkspaceJob so
>   change notifications are batched.
> - Also, I wouldn't set a rule on the job. In the patch you've got, the
>   workspace will be locked for the entirety of the Job run, whereas without
>   the rule other work could get in between each refreshLocal.
> - Also worth explicitly refreshing the final build artifact inline with the
>   build; without this, API callers of build may not see the build artifact
>   in the IResource model immediately after the build.

All good points.

> Finally, I think this really is a workaround until Bug 313673 is fixed.
I think that even if the refresh is asynchronous, we need to make sure that we're telling the user what's going on, and let them cancel it if need be. So while I see asynchronous refreshes as important, I don't think it solves everything on its own, either.

(In reply to comment #6)
> My $0.02 is that if we have to choose, I think it's more important that the
> user

Sorry, that sentence got cut off... it should say that I think it's more important that the user be able to cancel what's going on, without breaking the build semantics of the case where the build continues normally, than it is to ensure the semantics of what happens when they cancel stay the same as they are currently.

My reasoning is that if the user is hitting cancel, it's already a somewhat nondeterministic action, as you don't know when the build is actually going to be killed. Thus I don't see it as important that we show all the partially built resources by default if the build has been canceled. The user probably doesn't care what was built and what wasn't; they are likely going to either build clean next time or let the incremental build fix things up.

Ideally, of course, we'd want all cases to work as they do now, but that just doesn't seem possible. So if I have to pick, I'd vote for refreshing inline using the build's progress monitor. What do the rest of you think?

---

Comment 8

(In reply to comment #6)
> I'm kind of at a loss as to what to do about the managed build case in
> general.
>
> It seems on further investigation that the dependency file post-processing
> is relying upon the refresh happening inline with the build to detect any
> new .d files and process them.

I hadn't realised that CDT uses the .d's; where does this happen? I thought this was just used by the Makefiles for subsequent builds?

This is the only sticking point I can think of, and even then it would be good to understand what CDT does with the .d's. Other than that, I think the approach taken by the patch is good.
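The cancelable-refresh behavior discussed above (refresh each resource separately, check the monitor between resources, hold no scheduling rule across the whole loop) can be sketched like this. The Eclipse types (WorkspaceJob, IResource, IProgressMonitor) are replaced by minimal local stand-ins so the control flow is visible in isolation; the interface shapes here are assumptions for illustration, not the real platform API.

```java
import java.util.ArrayList;
import java.util.List;

public class RefreshJobSketch {
    // Stand-in for org.eclipse.core.runtime.IProgressMonitor.
    interface Monitor {
        boolean isCanceled();
        void worked(int ticks);
    }

    // Stand-in for a resource that can be refreshed from the filesystem.
    interface Resource {
        String refreshLocal();  // returns its name, for illustration
    }

    // Refresh each resource separately, checking for cancellation between
    // resources. Because no lock is held across the whole loop, other
    // workspace work can slot in between refreshes, and the user can cancel
    // at any resource boundary instead of being ignored.
    static List<String> refreshAll(List<Resource> resources, Monitor monitor) {
        List<String> refreshed = new ArrayList<>();
        for (Resource r : resources) {
            if (monitor.isCanceled()) {
                break;  // honor cancellation promptly
            }
            refreshed.add(r.refreshLocal());
            monitor.worked(1);
        }
        return refreshed;
    }

    public static void main(String[] args) {
        List<Resource> rs = new ArrayList<>();
        rs.add(() -> "a");
        rs.add(() -> "b");
        rs.add(() -> "c");
        // A monitor that reports cancellation after one unit of work.
        Monitor m = new Monitor() {
            int done = 0;
            public boolean isCanceled() { return done >= 1; }
            public void worked(int ticks) { done += ticks; }
        };
        System.out.println(refreshAll(rs, m));  // stops after the first refresh
    }
}
```

If the same monitor is the build's own monitor (the inline option voted for above), canceling the build automatically cancels the refresh as well; if it belongs to a separate job, the refresh survives a canceled build, which is exactly the trade-off the thread is weighing.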
---

Comment 9

(In reply to comment #8)
> I hadn't realised that CDT uses the .d's; where does this happen? I thought
> this was just used by the Makefiles for subsequent builds?
>
> This is the only sticking point I can think of, and even then it would be
> good to understand what CDT does with the .d's.
> Other than that, I think the approach taken by the patch is good.

See org.eclipse.cdt.managedbuilder.makegen.gnu.GnuMakefileGenerator.generateDependencies() and friends.

The dependency processing does a bunch of things. Firstly, it adds rules to the makefile to include any .d files that are generated, so that the rules that end up in those .d files are used in make's dependency calculations. It also mucks with the contents of the .d file to put a header in there ("generated file, do not edit, blah blah") and to massage some paths in the .d file so that they are relative to the build directory.

---

Comment 10

This bug was assigned and targeted at a now-released milestone (or Future or Next, which isn't being used by CDT). As that milestone has now passed, the milestone field has been cleared. If this bug has been fixed, please set the milestone to the version it was fixed in and mark the bug as resolved.
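The .d post-processing described above (prepend a "generated file" header, rewrite paths so they are relative to the build directory) is roughly the following kind of string massage. This is a hypothetical sketch of the idea, not CDT's actual generateDependencies() implementation; the header text and the single-replace path rule are illustrative assumptions.

```java
public class DepFileSketch {
    // Post-process the text of a .d dependency file: add a do-not-edit
    // header, and strip the build-directory prefix so remaining paths are
    // relative to the build directory.
    static String postProcess(String depFileText, String buildDir) {
        String header = "# Automatically-generated file. Do not edit!\n";
        // Any path under the build directory becomes relative to it.
        String massaged = depFileText.replace(buildDir + "/", "");
        return header + massaged;
    }

    public static void main(String[] args) {
        String in = "main.o: /proj/Debug/../src/main.c /proj/Debug/main.d\n";
        System.out.println(postProcess(in, "/proj/Debug"));
        // Paths lose the /proj/Debug/ prefix; the header comes first.
    }
}
```

Because this rewriting reads the freshly generated .d files, it only works if those files are already visible when it runs, which is why the post-processing currently depends on the refresh happening inline with the build.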