| Summary: | Support deployment testing | | |
|---|---|---|---|
| Product: | [RT] Virgo | Reporter: | Miles Parker <milesparker> |
| Component: | virgo-build | Assignee: | Project Inbox <virgo-inbox> |
| Status: | NEW --- | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | eclipse, glyn.normington, leo.dos.santos, steffen.pingel |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
|
Description
Miles Parker
Missed a step there. The IDE steps should be:

1 & 2. As above.
3. Import into the Eclipse IDE using the current Virgo IDE build with a specified target platform.
4. Trigger a build, including any Maven-related bits that need to happen.
5. *Configure and launch the server using SWTBot or some other automated approach.*
6. As 5, 6, 7 above.

Comment 2

This is a great suggestion. Currently, the closest we have are the system verification tests, which unzip and start a provided Virgo Tomcat Server zip file and then run JUnit tests that deploy applications and test the results by driving external interfaces such as the admin console. Perhaps the automation that those JUnit tests use is worth considering for these end-to-end tests.

The crunch question from my perspective is how on earth would we automate the Eclipse IDE operations? If we did manage to automate them, would these tests be relatively stable or would they need reworking frequently as Eclipse evolves?

Comment 3

(In reply to comment #2)
> The crunch question from my perspective is how on earth would we automate the
> Eclipse IDE operations? If we did manage to automate them, would these tests be
> relatively stable or would they need reworking frequently as Eclipse evolves?

Actually, that bit isn't too difficult. We already have working support for SWTBot testing thanks to Leo and Steffen. We'd just need to ensure that we have a working server target, then drive the Server wizards to create adapters for the various servers and start them. The hard part in all of this is coming up with appropriate wait times before making the next UI change.

Comment 4

(In reply to comment #3)
> Actually, that bit isn't too difficult. We already have working support for
> SWTBot testing thanks to Leo and Steffen. We'd just need to ensure that we have
> a working server target, then drive the Server wizards to create adapters for
> the various servers and start them. The hard part in all of this is coming up
> with appropriate wait times before making the next UI change.

Absolutely. In fact, I think wait times are probably the wrong solution, especially as some CI servers seem to run very slowly. Is there no way for SWTBot to report back when an operation is complete?

Comment 5

(In reply to comment #4)
> Absolutely. In fact, I think wait times are probably the wrong solution,
> especially as some CI servers seem to run very slowly. Is there no way for
> SWTBot to report back when an operation is complete?

That's one of the big weaknesses of using SWTBot, actually, since the interaction with the functionality under test is often inherently asynchronous and there is no way to trigger a callback from, say, invoking a menu defined by an extension point. I suppose some things could be rigged with mock objects and whatnot, but that seems like it could lead us down the rabbit hole. It's basically the halting problem, right?
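For illustration, here is a minimal SWTBot sketch of the condition-based waits touched on in comments 3-5, assuming a stock WTP Servers view. It is not the Virgo IDE test code; the menu path, wizard labels, and the "[Started" state text are guesses that would need adjusting.

```java
import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
import org.eclipse.swtbot.eclipse.finder.widgets.SWTBotView;
import org.eclipse.swtbot.swt.finder.junit.SWTBotJunit4ClassRunner;
import org.eclipse.swtbot.swt.finder.waits.Conditions;
import org.eclipse.swtbot.swt.finder.waits.DefaultCondition;
import org.eclipse.swtbot.swt.finder.widgets.SWTBotShell;
import org.eclipse.swtbot.swt.finder.widgets.SWTBotTreeItem;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(SWTBotJunit4ClassRunner.class)
public class ServerStartupSketchTest {

    private final SWTWorkbenchBot bot = new SWTWorkbenchBot();

    @Test
    public void createAndStartServer() throws Exception {
        // Open the New Server wizard; the menu path and labels are
        // assumptions about a standard WTP install, not checked against Virgo.
        bot.menu("File").menu("New").menu("Other...").click();
        SWTBotShell wizard = bot.shell("New");
        wizard.activate();
        bot.tree().expandNode("Server").select("Server");
        bot.button("Next >").click();

        // Pick the server adapter; the category and label here are guesses
        // and vary with the installed adapter version.
        bot.tree().expandNode("EclipseRT").select("Virgo Runtime");
        bot.button("Finish").click();

        // Wait for a concrete condition (the wizard closing) instead of a
        // fixed sleep that has to be tuned for the slowest CI machine.
        bot.waitUntil(Conditions.shellCloses(wizard), 30000);

        // Start the server from the Servers view, then poll its state label.
        // The "[Started" text is an assumption about how WTP renders state.
        final SWTBotView servers = bot.viewByTitle("Servers");
        servers.show();
        servers.bot().tree().getAllItems()[0].contextMenu("Start").click();
        bot.waitUntil(new DefaultCondition() {
            public boolean test() {
                for (SWTBotTreeItem item : servers.bot().tree().getAllItems()) {
                    if (item.getText().contains("[Started")) {
                        return true;
                    }
                }
                return false;
            }

            public String getFailureMessage() {
                return "Server did not reach the Started state in time";
            }
        }, 120000);
    }
}
```

SWTBot polls the `DefaultCondition` until it returns true or the timeout expires, which is about as close as SWTBot gets to "reporting back" that an operation has completed.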
Comment 6

(In reply to comment #5)
> That's one of the big weaknesses of using SWTBot, actually, since the
> interaction with the functionality under test is often inherently asynchronous
> and there is no way to trigger a callback from, say, invoking a menu defined by
> an extension point. I suppose some things could be rigged with mock objects and
> whatnot, but that seems like it could lead us down the rabbit hole. It's
> basically the halting problem, right?

Unless we can get a notification that asynchronous operations are now complete, yes, it's pretty much the halting problem. So perhaps the best engineering solution would be to request an enhancement to SWTBot and pre-req. that. SWTBot changes may then need to pre-req. a change to SWT, I guess.

I suppose we could limp along meanwhile with wait times, but we'd probably have to make them massively larger than anticipated so we don't get caught out regularly on slow CI servers and such like. This could make the job very long-running, but it wouldn't consume much CPU, so perhaps that wouldn't really matter so long as other jobs didn't get blocked by it.

Comment 7

(In reply to comment #6)
> Unless we can get a notification that asynchronous operations are now complete,
> yes, it's pretty much the halting problem.

But even with the notifications, right? I mean, imagine you add the greengages app to a runtime and start it. Then that should trigger the runtime starting. Even if the runtime could trigger a callback to the test -- say, by simply writing to a socket we're listening on -- you still wouldn't know if there was a problem until/unless the server actually responded, or at some arbitrary point you'd assume that it had failed. So you'd need a heuristic in any case.

We're just using JUnit with SWTBot, so I *think* it actually isn't an SWTBot issue per se; it's more of a natural limitation of the underlying approach.

In practice, though, I don't want to inflate the issue. With the typical UI cases it really isn't that hard to stay well within a small range. If we have this set up as a separate system test that doesn't affect actual build delivery, we could maintain the results pretty easily -- if you see something apparently failing, look at the delay time and adjust, and vice versa.
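A sketch of the kind of heuristic comment 7 describes, assuming the deployed application (or admin console) exposes an HTTP endpoint the test can poll; the URL and timings below are placeholders rather than values from the Virgo tests.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public final class DeploymentProbe {

    private DeploymentProbe() {
    }

    /**
     * Returns true once the given URL answers with HTTP 200, or false if the
     * deadline passes first. The caller decides what "failed" means.
     */
    public static boolean waitForHttpOk(String url, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                HttpURLConnection connection =
                        (HttpURLConnection) new URL(url).openConnection();
                connection.setConnectTimeout(2000);
                connection.setReadTimeout(2000);
                if (connection.getResponseCode() == HttpURLConnection.HTTP_OK) {
                    return true;
                }
            } catch (IOException serverNotReadyYet) {
                // Server not listening or not responding yet; keep polling.
            }
            Thread.sleep(pollMillis);
        }
        return false;
    }
}
```

A test could then assert something like `DeploymentProbe.waitForHttpOk("http://localhost:8080/admin", 120000, 2000)` (a hypothetical URL) and fail deterministically at the deadline instead of relying on a single tuned sleep.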
Comment 8

(In reply to comment #7)
> But even with the notifications, right? I mean, imagine you add the greengages
> app to a runtime and start it. Then that should trigger the runtime starting.
> Even if the runtime could trigger a callback to the test -- say, by simply
> writing to a socket we're listening on -- you still wouldn't know if there was
> a problem until/unless the server actually responded, or at some arbitrary
> point you'd assume that it had failed. So you'd need a heuristic in any case.
>
> We're just using JUnit with SWTBot, so I *think* it actually isn't an SWTBot
> issue per se; it's more of a natural limitation of the underlying approach.
>
> In practice, though, I don't want to inflate the issue. With the typical UI
> cases it really isn't that hard to stay well within a small range. If we have
> this set up as a separate system test that doesn't affect actual build
> delivery, we could maintain the results pretty easily -- if you see something
> apparently failing, look at the delay time and adjust, and vice versa.

Yeah, OK. We've had problems with other wait-based tests and I am simply a bit nervous about adding more. But you may be right that it is unavoidable for certain categories of test, so feel free to press on.

Comment 9

(In reply to comment #8)
> Yeah, OK. We've had problems with other wait-based tests and I am simply a bit
> nervous about adding more. But you may be right that it is unavoidable for
> certain categories of test, so feel free to press on.

I think you're right to be nervous. :) I'm thinking that anything that Virgo does in this area should *not* be part of the main builds but run as a completely separate Virgo job.
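One possible shape for that separation, sketched with JUnit 4 categories so that a dedicated CI job can run the wait-heavy tests while the ordinary build never touches them. All class names here are hypothetical, not part of the Virgo build.

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

public class DeploymentSystemTests {

    /** Marker interface used to tag slow, UI-driven end-to-end tests. */
    public interface SystemTest {
    }

    /** A wait-heavy test tagged so that ordinary suites can skip it. */
    public static class ServerDeploymentTest {

        @Category(SystemTest.class)
        @Test
        public void deployAndVerify() {
            // SWTBot-driven deploy/start/verify steps would go here.
        }
    }

    /** Suite a dedicated CI job would run; the main build never references it. */
    @RunWith(Categories.class)
    @IncludeCategory(SystemTest.class)
    @SuiteClasses({ ServerDeploymentTest.class })
    public static class NightlySystemSuite {
    }
}
```

Only the separate job would execute `NightlySystemSuite`, so a flaky wait in these tests could not block delivery of the main builds.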