| Summary: | [IGenerator] Enable generating artifacts from multiple resources | | |
|---|---|---|---|
| Product: | [Modeling] TMF | Reporter: | Karsten Thoms <karsten.thoms> |
| Component: | Xtext | Assignee: | Project Inbox <tmf.xtext-inbox> |
| Status: | NEW | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | Brodsky_Boris, btickets, christophkulla, dj.escay, ekke, jbugwadia, kai.toedter, markus.duft, oliver, rafael, sebastian.zarnekow, stefan.weise, sven.efftinge, temp44 |
| Version: | 2.0.0 | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Mac OS X - Carbon (unsup.) | | |
| Whiteboard: | | | |
| Attachments: | attachment 208435: "revised" Xtext 2.2 version of Karsten Thoms' previous implementation (see below) | | |
Description
Karsten Thoms
See this article for a possible solution approach: http://kthoms.wordpress.com/2011/07/12/xtend-generating-from-multiple-input-models/

Bumping this. This issue is keeping me from rewriting my generator from Xpand to Xtend2, as I do not want to use a workaround solution. I know Xtext focus is now on JVM languages and for those this is not really needed, but please add this some time in the future. Btw, I wonder why the JVM focus in the first place. I don't see a great benefit in generating Java from a language that is "almost-Java"... On the other hand, being able to have a simple language for e.g. complex XML configuration files and documentation at the same time is a real and relevant Xtext use-case requiring more attention than it is getting these days...

(In reply to comment #2)
> I know Xtext focus is now on JVM languages and for those this is not really
> needed, but please add this some time in the future.

The focus on the JVM doesn't mean that other kinds of languages are no longer supported. This requirement also exists for JVM languages sometimes (e.g. a single configuration XML).

> Btw, I wonder why the JVM focus in the first place.
> I don't see a great benefit in generating Java from a language that is
> "almost-Java"...

You might have to look a bit closer.

> On the other hand, being able to have a simple language for e.g. complex XML
> configuration files and documentation at the same time is a real and relevant
> Xtext use-case requiring more attention than it is getting these days...

Yes, and at some point you want to have expressions in your XML replacement. Look at the existing configuration languages out there: they all embed some expression language into XML. Xtext's JVM expressions (Xbase) allow this, but in a statically typed and much more tightly integrated way. It's JVM-based because we had to decide to start with one platform. After all, a language needs to be executable somehow. We might think about other platforms in the future.

I am looking for the same feature. I would like to be able to collect all Resource names and generate a Java enum from them. I want this to work in the UI as well as for command-line execution. Any plans to support this? Thanks!

Yes, it's a useful thing. But there is no concrete date yet.

Created attachment 208435 [details]
"revised" Xtext 2.2 version of Karstens Thoms previous implementation
Suggestion of a possible solution for Xtext 2.2.x. Use this attached BuilderParticipant instead of the default to generate from multiple resource files.
To get this work you also need the following IGenerator Interface and a suitable implementation:
public interface IMultipleResourceGenerator extends IGenerator {
public void doGenerate(ResourceSet input, IFileSystemAccess fsa);
}
A detailed explanation on how to bind these classes can be found in the previous mentioned article of Karsten Thoms.
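For illustration only, here is a minimal sketch of what an implementation of this interface could look like, picking up the enum use case mentioned above. The class name ResourceNamesEnumGenerator, the target file name, and the naming scheme are made up; the attached BuilderParticipant is still needed to actually invoke it.

    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.xtext.generator.IFileSystemAccess;

    // Hypothetical example: collect the names of all resources in the ResourceSet
    // and emit a single Java enum listing them (cf. the enum use case above).
    public class ResourceNamesEnumGenerator implements IMultipleResourceGenerator {

        @Override
        public void doGenerate(Resource input, IFileSystemAccess fsa) {
            // Single-resource entry point inherited from IGenerator; not used in this approach.
        }

        @Override
        public void doGenerate(ResourceSet input, IFileSystemAccess fsa) {
            StringBuilder body = new StringBuilder("public enum KnownResources {\n");
            for (Resource resource : input.getResources()) {
                // Derive an enum literal from the simple resource name (naive, for illustration).
                String literal = resource.getURI().trimFileExtension().lastSegment().toUpperCase();
                body.append("    ").append(literal).append(",\n");
            }
            body.append("}\n");
            fsa.generateFile("KnownResources.java", body.toString());
        }
    }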
The underlying reason why this cannot easily be solved is that the Eclipse builder supports incremental builds, and so does Xtext. This is very useful because it shortens turnaround times, which is a major problem in many bigger projects. A code generator interface that is always triggered for all files largely conflicts with this idea.

However, given that there are many situations in which one wants to derive a single file from multiple sources, and given that not everybody is facing scalability issues in their workspaces, we should think about improving the situation.

Ideally, and that is what I recommend, users don't sacrifice the incremental build feature but try to solve the 1-n generation problem by following a reconciliation approach. That is, you update the single file according to the given changes.
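To make the reconciliation idea concrete, here is a minimal, framework-independent sketch (not part of this bug or of Xtext): the aggregated file is kept as a sequence of regions, one per source model, delimited by marker comments, and only the region belonging to a changed or deleted source is replaced or removed. All names here are made up.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: the aggregated file consists of regions delimited by
    // "// region: <sourceId>" marker lines; each region is owned by one source model.
    public class AggregatedFileReconciler {

        private static final String MARKER = "// region: ";

        // Replace (or remove, if newRegion is null) the region owned by sourceId.
        public void reconcile(Path aggregatedFile, String sourceId, String newRegion) throws IOException {
            Map<String, String> regions = Files.exists(aggregatedFile)
                    ? parseRegions(Files.readAllLines(aggregatedFile, StandardCharsets.UTF_8))
                    : new LinkedHashMap<>();
            if (newRegion == null) {
                regions.remove(sourceId);
            } else {
                regions.put(sourceId, newRegion.endsWith("\n") ? newRegion : newRegion + "\n");
            }
            if (regions.isEmpty()) {
                // Nothing originates from any source anymore, so drop the file entirely.
                Files.deleteIfExists(aggregatedFile);
                return;
            }
            StringBuilder out = new StringBuilder();
            regions.forEach((id, body) -> out.append(MARKER).append(id).append('\n').append(body));
            Files.write(aggregatedFile, out.toString().getBytes(StandardCharsets.UTF_8));
        }

        // Split the existing file content into (sourceId -> region text) pairs.
        private Map<String, String> parseRegions(List<String> lines) {
            Map<String, String> regions = new LinkedHashMap<>();
            String currentId = null;
            StringBuilder body = new StringBuilder();
            for (String line : lines) {
                if (line.startsWith(MARKER)) {
                    if (currentId != null) {
                        regions.put(currentId, body.toString());
                    }
                    currentId = line.substring(MARKER.length()).trim();
                    body = new StringBuilder();
                } else if (currentId != null) {
                    body.append(line).append('\n');
                }
            }
            if (currentId != null) {
                regions.put(currentId, body.toString());
            }
            return regions;
        }
    }

A generator hook would call reconcile once per changed or deleted source; the file disappears automatically once no region is left, which also anticipates the delete-delta discussion further down.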
My proposal for some sort of middle-ground (and yet correct) solution is to introduce a callback interface which gets all deltas, the information whether it is a full or an incremental build, and the ResourceSet.

I.e.:

    interface IGeneratorExtension {
        void doGenerate(Collection<IResourceDescription.Delta> deltas, ResourceSet resourceSet,
                BuildKind buildKind, IFileSystemAccess fsa);
    }

This method will be invoked only once per build and per language. Note that due to clustering the ResourceSet might not be completely loaded or might not contain all resources mentioned in the deltas.
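Purely for illustration, a client implementation of the proposed callback might look as follows. Neither IGeneratorExtension nor BuildKind exist in Xtext today, and the class name, file name, and generation logic are invented; the only real APIs used are the index (IResourceDescriptions) and IFileSystemAccess. The sketch skips incremental builds that cannot affect the aggregated artifact and otherwise regenerates the whole file from the index, since the ResourceSet may be incomplete.

    import java.util.Collection;

    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.xtext.generator.IFileSystemAccess;
    import org.eclipse.xtext.resource.IEObjectDescription;
    import org.eclipse.xtext.resource.IResourceDescription;
    import org.eclipse.xtext.resource.IResourceDescriptions;

    import com.google.inject.Inject;

    // Hypothetical client of the proposed callback (interface and BuildKind as sketched above).
    public class AggregatingGenerator implements IGeneratorExtension {

        @Inject
        private IResourceDescriptions index;

        @Override
        public void doGenerate(Collection<IResourceDescription.Delta> deltas, ResourceSet resourceSet,
                BuildKind buildKind, IFileSystemAccess fsa) {
            // On an incremental build, bail out early if no delta changed any exported objects.
            boolean relevant = deltas.stream()
                    .anyMatch(IResourceDescription.Delta::haveEObjectDescriptionsChanged);
            if (buildKind != BuildKind.FULL && !relevant) {
                return;
            }
            // Regenerate the aggregated file from the index rather than from the ResourceSet,
            // because the ResourceSet may not contain all resources (see the clustering note above).
            StringBuilder content = new StringBuilder();
            for (IResourceDescription description : index.getAllResourceDescriptions()) {
                for (IEObjectDescription exported : description.getExportedObjects()) {
                    content.append(exported.getQualifiedName()).append('\n');
                }
            }
            fsa.generateFile("all-exported-names.txt", content.toString());
        }
    }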
The IGenerator is used in standalone builds where we don't have a notion of IResourceDescription.Deltas at the moment. In the IDE, IGeneratorExtension does seem to be the same as IXtextBuilderParticipant. Do you plan to synthesize deltas for the standalone build?

(In reply to Sebastian Zarnekow from comment #8)
> The IGenerator is used in standalone builds where we don't have a notion of
> IResourceDescription.Deltas at the moment. In the IDE, IGeneratorExtension
> does seem to be the same as IXtextBuilderParticipant. Do you plan to
> synthesize deltas for the standalone build?

Yes, that was the idea. It's clearly not 100% accurate, as we don't know about the last build's state. But it may be a good-enough solution as long as we document the contract clearly.

I would keep the BuildKind internal, since it is part of the builder bundle and the information can be deduced from the deltas.

(In reply to Sebastian Zarnekow from comment #10)
> I would keep the BuildKind internal since it is part of the builder bundle

I wasn't thinking of an existing type here, just some sort of indication whether it's a full build or an incremental one.

> and the information can be deduced from the deltas.

How?

I think if all deltas are delete-deltas, one of these two cases is valid:
a) trace markers are used (within Eclipse): if all markers are removed due to delete-deltas, the file will be removed transparently (clean build)
b) trace markers are not used: the files have to contain some markers in their contents (comments etc.) that indicate where a certain region originates from. If - due to delete-deltas - all these regions have been removed, it is safe to remove the entire file.

I'm not sure how the distinction between incremental and full builds can be used in a meaningful way. I also assume that - within Eclipse - a full build is preceded by a clean build, so the file is most likely not present when a full build is started. In the standalone case, incremental builds seem to be quite unlikely.

I can imagine that some people might want to go with a simpler approach, e.g. ignoring all incremental builds and just doing something on full builds.

(In reply to Sven Efftinge from comment #13)
> I can imagine that some people might want to go with a simpler approach,
> e.g. ignoring all incremental builds and just doing something on full builds.

That would probably be equally wrong as 'treat all builds as full builds' :-)

(In reply to Sebastian Zarnekow from comment #14)
> That would probably be equally wrong as 'treat all builds as full builds' :-)

AFAIK that's the de facto solution for clients who don't use IGenerator but have added their own code generation hook.

(In reply to Sven Efftinge from comment #15)
> AFAIK that's the de facto solution for clients who don't use IGenerator but
> have added their own code generation hook.

I understand your point. A flag which indicates an incremental or a complete (full / clean) build can help in these cases.

(In reply to Sebastian Zarnekow from comment #12)
> b) trace markers are not used: the files have to contain some markers in
> their contents (comments etc.) that indicate where a certain region
> originates from. If - due to delete-deltas - all these regions have been
> removed, it is safe to remove the entire file.

It should not be necessary to have any markers in the target file.
(In reply to Karsten Thoms from comment #17)
> It should not be necessary to have any markers in the target file.

Without this information it's almost impossible to reconcile the target file in a scalable way. For many problem domains it's ok to do a full build in these cases, but in other domains that's not an option, thus this would be the only alternative that comes to my mind.

(In reply to Sebastian Zarnekow from comment #18)
> Without this information it's almost impossible to reconcile the target file
> in a scalable way. For many problem domains it's ok to do a full build in
> these cases, but in other domains that's not an option, thus this would be
> the only alternative that comes to my mind.

Well, of course 'markers in the file' can also be read as 'markers kept in another trace file'.

(In reply to Sebastian Zarnekow from comment #19)
> Well, of course 'markers in the file' can also be read as 'markers kept in
> another trace file'.

Using an external trace file necessitates sharing it via a source control management system, right?

Otherwise I don't see a way (besides a full build) to determine which area of a target artefact has to be updated after a model file has been edited.

In my experience users don't like to handle metadata artefacts like trace files. If they are "hidden" like Xtend's trace files it's ok, but if you have to commit them, it feels cumbersome.

Anyway, I would favor having external files over putting metadata information into the target artefacts... reminds me of "protected regions".

(In reply to Oliver Libutzki from comment #20)
> Using an external trace file necessitates sharing it via a source control
> management system, right?
>
> Otherwise I don't see a way (besides a full build) to determine which area
> of a target artefact has to be updated after a model file has been edited.
>
> In my experience users don't like to handle metadata artefacts like trace
> files. If they are "hidden" like Xtend's trace files it's ok, but if you
> have to commit them, it feels cumbersome.
>
> Anyway, I would favor having external files over putting metadata
> information into the target artefacts... reminds me of "protected regions".

The API won't impose anything on the client, so it's all up to the language implementor. The reconciling is a completely custom thing.

(In reply to Sebastian Zarnekow from comment #21)
> The API won't impose anything on the client, so it's all up to the language
> implementor. The reconciling is a completely custom thing.

Ah, I see. So this bug is just about extending the IGenerator interface in order to distinguish full / incremental builds and to provide some information concerning the deltas. From Xtext's point of view you don't care how the language implementor gets the deltas in line with the target artefact areas.

Not exactly. This bug is about providing the possibility to generate a single artifact from multiple DSL files.

For example, listing all servlets from all DSL files in a single web.xml.

(In reply to Boris Brodski from comment #23)
> Not exactly. This bug is about providing the possibility to generate a
> single artifact from multiple DSL files.
>
> For example, listing all servlets from all DSL files in a single web.xml.

As Oliver pointed out, this is only about the API that allows clients to do such things.

Oh, I see. But do you plan to extend the IGenerator interface?

Currently I use the workaround from Comment #1. I enhanced it with a simple resource filter.

Contributing to the main discussion, I can add that in my particular case I generate a simple Java file from all DSL files in a specified folder. Any substantial change in these DSL files changes a great amount of the generated Java file. Implementing an incremental build in my case could be done like this:

- Full build
  * Process the EMF resources, populating some simplified model that contains only the attributes/classes necessary for the particular generation
  * Generate files based on the constructed simplified model
  * Let Xtext save the simplified model for later use (trace file?)
- Incremental build
  * Get the set of changed resources (the delta within a resource isn't relevant for me)
  * Get the simplified model saved earlier by Xtext
  * Populate the simplified model from the set of changed resources
  * Generate files based on the refreshed simplified model
  * Let Xtext save the refreshed simplified model for later use

We could also provide two interfaces, like
- ISimpleMultiresourceGenerator (no incremental build)
- IIncrementalMultiresourceGenerator

(In reply to Boris Brodski from comment #25)
> Oh, I see. But do you plan to extend the IGenerator interface?

As Sven described in comment #7, it'll be an extension interface that the generator may implement optionally. Your workaround will still work after the interface has been introduced. What you describe as a simplified model would usually be stored as user data in the index if you want to support incremental generation and cannot reconcile the target file.

Having the same problem. We're generating Java code out of our models and need to also generate a factory for those classes...

Note: The workaround doesn't work out of the box when using Maven.
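Picking up the remark above that the simplified model would usually be stored as user data in the index: below is a minimal sketch of how such data could be attached, based on Xtext's DefaultResourceDescriptionStrategy. The "Servlet" element, its "path" feature, the user-data key, and the class name are hypothetical examples; the strategy class and EObjectDescription.create(..., userData) are existing Xtext API.

    import java.util.Collections;

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.xtext.naming.QualifiedName;
    import org.eclipse.xtext.resource.EObjectDescription;
    import org.eclipse.xtext.resource.IEObjectDescription;
    import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy;
    import org.eclipse.xtext.util.IAcceptor;

    // Hypothetical strategy that stores the data needed by an aggregating generator
    // (here: a servlet's "path") as user data on the exported object descriptions,
    // so an incremental generator can read it from the index without loading the resource.
    public class ServletResourceDescriptionStrategy extends DefaultResourceDescriptionStrategy {

        public static final String SERVLET_PATH = "servletPath";

        @Override
        public boolean createEObjectDescriptions(EObject eObject, IAcceptor<IEObjectDescription> acceptor) {
            if (getQualifiedNameProvider() == null) {
                return false;
            }
            QualifiedName name = getQualifiedNameProvider().getFullyQualifiedName(eObject);
            if (name == null) {
                return true; // nothing to export here, but keep visiting the children
            }
            EStructuralFeature pathFeature = eObject.eClass().getEStructuralFeature("path");
            if (pathFeature == null) {
                acceptor.accept(EObjectDescription.create(name, eObject));
            } else {
                String path = String.valueOf(eObject.eGet(pathFeature));
                acceptor.accept(EObjectDescription.create(name, eObject,
                        Collections.singletonMap(SERVLET_PATH, path)));
            }
            return true;
        }
    }

The strategy would be bound in the language's runtime module in place of the default one, and the aggregating generator could then read the values back via IEObjectDescription.getUserData(SERVLET_PATH) when assembling the single artifact.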