| Summary: | test-on-modification feature [JUnit] | | |
|---|---|---|---|
| Product: | [Eclipse Project] JDT | Reporter: | Sidney Monteiro Jr <sidney_f_monteiro> |
| Component: | UI | Assignee: | JDT-UI-Inbox <jdt-ui-inbox> |
| Status: | RESOLVED WONTFIX | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | daveo, david, m.moebius |
| Version: | 2.0 | Keywords: | helpwanted |
| Target Milestone: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
|
Description
Sidney Monteiro Jr
On Thu, 14 Mar 2002 10:57:31 -0500, "Sidney Monteiro" <sidney.monteiro@stratech.com> wrote:

> I am glad that you replied. That could mean that the concept has some
> potential.
> The concept is to provide unobtrusive early detection and reporting of
> changes to source code that negatively affect its agreed-upon functionality.
> (Mouthful, ain't it? :) )

All features in one sentence and well explained ...

> I agree that this is a very fine tuning of the efficiency of the development
> process and may not pay off for some projects.
> I believe that IF the concept is solid, we (designers and tool developers)
> can find reasonable ways to accommodate it.

If someone already uses unit testing, he or she would be pleased to have single tests or test suites start automagically in the background. So a smart start of the unit tests is IMO most important.

> Dissecting the concept statement:
> The detection and reporting part is already satisfied by the testing
> framework of JUnit.
> Early here means as soon as possible; the original idea placed this feature
> immediately following a successful incremental compilation and prior to
> releasing the change to the integration stream.

By "release to the integration stream", do you mean an SCM server (CVS)? IMO it would be sufficient to get the results of tests 'soon' after a resource change. But this depends on the elapsed time of a test.

[snipped statements]

> I see two separate conversations here: 1) defining the triggering condition
> for a test and 2) how to implement the testing process in an unobtrusive
> manner
> Here are my first thoughts about them:
> 1) a new developer preference setting "invoke tester after successful
> compilation (build)", dependent on the "Perform build automatically on
> resource modification" setting for the Workbench.
OK, the easier part :-)

> 2) run the testing suite after a successful build (insert "1)" here) as a
> separate thread in the same VM, feeding the results to the task list as they
> become available.

Or an extra view; it should be possible with markers too.

So you would try to start a test (which? single, suite, whole project) with every Ctrl+Space? That's not nice; my workflow consists of many edit code - Ctrl+S - look for errors - remove errors - save again cycles. A timer for inactivity could solve this. So we could feed a queue with small test tasks (on every resource change) and delete all tasks with the same resource if there is activity (editing); I would try starting with one minute. Should a change to a resource whose test is already running cancel that test?

One thing where I've no experience is how we should map from a resource to its associated test. I have one test per class, with ClassName+Test in a sub-package .test; then there is one test suite summarizing these tests. I don't know if others work similarly. Are there other levels of combined tests? But we need a mechanism to map a change of a resource to the right test. Ideas, experiences? Or just something like one test for one or more packages?

The next point could be to maintain the state of already passed tests (successful or not).

> I see a test server as the integration server, already part of the
> development process.

So is using a remote JVM for running the tests a good idea or not?

martin

A) By "release to the integration stream", do you mean an SCM server (CVS)?

Sidney says: yes

[snipped statements]

> I see two separate conversations here: 1) defining the triggering condition
> for a test and 2) how to implement the testing process in an unobtrusive
> manner
<snip>
> 2) run the testing suite after a successful build (insert "1)" here) as a
> separate thread in the same VM, feeding the results to the task list as they
> become available.

B) Or an extra view; it should be possible with markers too.
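Martin's inactivity-timer idea could be sketched as a small debounced queue: every resource change (re)schedules that resource's test task, and editing the same resource again before the delay elapses replaces the pending task. This is a minimal illustration only; the class name, the callback shape, and the delay are assumptions, not part of any Eclipse or JUnit API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical debounced scheduler: the test run for a resource starts only
// after that resource has been quiet for `delayMillis`.
public class TestTaskQueue {
    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();
    private final Map<String, ScheduledFuture<?>> pending = new ConcurrentHashMap<>();
    private final long delayMillis;

    public TestTaskQueue(long delayMillis) {
        this.delayMillis = delayMillis;
    }

    // Called on every resource change (e.g. after a successful incremental build).
    public void resourceChanged(String resource, Runnable testRun) {
        // Editing the same resource again cancels the not-yet-started task ...
        ScheduledFuture<?> old = pending.remove(resource);
        if (old != null) {
            old.cancel(false);
        }
        // ... and re-arms the timer for that resource.
        pending.put(resource, executor.schedule(() -> {
            pending.remove(resource);
            testRun.run();
        }, delayMillis, TimeUnit.MILLISECONDS));
    }

    public void shutdown() {
        executor.shutdownNow();
    }
}
```

With a one-minute delay as Martin suggests, a burst of edit/save cycles on the same file would collapse into a single test run once the developer pauses.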
Sidney says: an extra view does sound better ...

C) So you would try to start a test (which? single, suite, whole project) with every Ctrl+Space? That's not nice; my workflow consists of many edit code - Ctrl+S - look for errors - remove errors - save again cycles. A timer for inactivity could solve this. So we could feed a queue with small test tasks (on every resource change) and delete all tasks with the same resource if there is activity (editing); I would try starting with one minute.

Sidney says: I was thinking more along the lines of Ctrl+S = save with incremental compilation, just like it does today; Ctrl+Shift+S = same as Ctrl+S, followed by the applicable tests if the compilation is successful.

D) Should a change to a resource whose test is already running cancel that test?

Sidney says: if we manage to make the process run in the background, canceling should not be relevant, for the test results would simply be stale.

E) One thing where I've no experience is how we should map from a resource to its associated test. I have one test per class, with ClassName+Test in a sub-package .test; then there is one test suite summarizing these tests. I don't know if others work similarly. Are there other levels of combined tests? But we need a mechanism to map a change of a resource to the right test. Ideas, experiences? Or just something like one test for one or more packages?

Sidney says: what I did with my team (since we ran into problems before with package naming conventions and the actual location of the testing code) was to think one level of abstraction higher: we used a finder method that applied reflection to the class name of interest, trying a series of known permutations on the package name, the tester class name, and the actual test entry point signature.

F) The next point could be to maintain the state of already passed tests (successful or not).

Sidney says: how about a file "classATest.tr" for the test result?
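Sidney's finder idea could look roughly like this: given the fully qualified name of the changed class, try a series of naming-convention permutations and return the first tester class that actually loads. The three permutations below are illustrative guesses, not the conventions Sidney's team actually used.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical test-class finder: maps a changed class to its tester by
// probing a list of naming-convention permutations via reflection.
public class TestFinder {

    // Candidate tester names for e.g. "com.acme.Order":
    //   com.acme.OrderTest, com.acme.test.OrderTest, com.acme.TestOrder
    public static List<String> candidateNames(String className) {
        int dot = className.lastIndexOf('.');
        String pkg = dot < 0 ? "" : className.substring(0, dot);
        String simple = dot < 0 ? className : className.substring(dot + 1);
        String prefix = pkg.isEmpty() ? "" : pkg + ".";
        List<String> names = new ArrayList<>();
        names.add(prefix + simple + "Test");
        names.add(prefix + "test." + simple + "Test");
        names.add(prefix + "Test" + simple);
        return names;
    }

    // Returns the first candidate that can actually be loaded, or null if none.
    public static Class<?> findTester(String className) {
        for (String candidate : candidateNames(className)) {
            try {
                return Class.forName(candidate);
            } catch (ClassNotFoundException e) {
                // not this permutation; try the next one
            }
        }
        return null;
    }
}
```

A further probe on the loaded class (again via reflection) could then verify the expected test entry point signature before the class is queued for a run.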
daveo@asc-iseries.com says:

Overall: I think this is an incredible idea that could be as much a productivity gain as the idea of integrating the compiler (and error reporting) into the editor was. Reason: if I follow the principle of coding the test case first and the IDE automatically runs test cases after a compile, the IDE can now show me my *logic* errors as I code, not just syntax errors.

Specific thoughts related to the thread:

Martin Möbius wrote:
> Surely nice for all who are already on the unit testing train; I'm
> struggling with this yet.

I am too. However, direct IDE support such as what Sidney mentions would make me much more likely to do it (and the benefits much easier to reach).

> Many tests for a single class last longer than a compile for the
> whole project. How to handle this? What should trigger a test run?

I agree with Sidney that there should be an option to turn off auto-launching of unit tests. Those who develop on dual-CPU boxes (which I don't yet) would hardly even notice the background process running for CPU-intensive tests. Others would have a judgement call based on the amount of CPU, I/O, etc., used by the unit test. If auto-launching is on, I would favor launching unit tests (in the background) on successful compile. If I then make a change and save again before the unit tests are complete, the testing process is killed and any items it has added to my task list are cleared just before it restarts the unit test over the new code.

Responding to Sidney now:

> B) Or an extra view, should be possible with markers too.
> Sidney says: an extra view does sound better ...

I guess I was thinking in terms of using markers and the task list like the compiler does, but I think having an extra view (or maybe a testing perspective) makes a lot of sense too. I can imagine scenarios where I am coding several new classes and am following the maxim to code the test cases first.
In this case, I would appreciate having the test classes run as part of the edit/compile cycle for my main class so that the IDE can show me *logic* errors, not just compile errors, as I am coding. In this case, it is probably only necessary to use the task list or maybe another view running in the Java perspective. On the other hand, I can easily imagine wanting a unit test view or even a testing perspective where I can select some subset of tests to run and browse their results.

> C) So you would try to start a test (which? single, suite, whole
> project)

My thought here: if you integrate unit testing with the IDE, you don't have to run all unit tests. Since you have access to the dependency tree, you can just run the unit tests for the classes that were just recompiled--those that depend on what just changed.

>> I see a test server as the integration server, already part of the
>> development process.
>
> So is using a remote JVM for running the tests a good idea or not?

I think it should be optional, but I think it's a good idea.

> C) So you would try to start a test (which? single, suite, whole project)
> with every Ctrl+Space? That's not nice; my workflow consists of many
> edit code - Ctrl+S - look for errors - remove errors - save again
> cycles.
> A timer for inactivity could solve this. So we could feed a queue with
> small test tasks (on every resource change) and delete all tasks with the
> same resource if there is activity (editing); I would try starting with
> one minute.

I don't see a problem with starting a test on every Ctrl+S as long as the test runs in the background and I can keep working. I don't understand the rationale for an inactivity timer to launch testing if testing is going to happen automatically in the background and can be restarted automatically if I press Ctrl+S again.

Dave Orme
Advanced Systems Concepts
http://www.asc-iseries.com

Here is the GIST of the test dependency logic: one can describe the
dependencies for testing in the same manner one describes compilation
dependencies ... make (ANT) dependencies.
Test results depend on both the tester class and the tested class (and test input
data, etc.); if one changes a tester ... itself will be run; if one changes a
class, its tester will be run; if one changes the test input data, the test
needs to run ..., etc.
Think of classA as a set of files:
classA.java, classA.class, classATest.java (the actual test naming
convention is not the relevant issue for this discussion; it is addressed
in previous postings), classATest.class, AND classATest.tr {the tr extension
here is arbitrary; I chose it for (t)est (r)esult}; optionally one can keep
tabs on the test input data also.
In terms of make suffix rules, that is:

```
.java.class:
	javac ...
.class.tr:
	java <class>
```

In terms of long-form make dependency rules:

```
classATest.tr: classATest.class classA.class [input test data file ...]
	java <blah blah> classATest
```
That sounds like an easy approach to improve upon using the very same
concepts being promoted by the IDE: integrated build/test.
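Dave's dependency-tree remark and the make-style rules both reduce to the same computation: starting from a changed class, walk the reverse dependency graph and collect everything transitively affected, then run the testers of those classes. A minimal sketch, with a hand-built map of "who depends on me" edges standing in for the information an IDE's incremental builder would actually provide:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical selection of affected classes: given "X depends on Y" edges
// (recorded as Y -> {X, ...}), a change to Y must re-run the testers of Y
// and of everything that transitively depends on Y.
public class AffectedTests {

    // dependents: key = class, value = classes that directly depend on it.
    public static Set<String> affectedBy(String changed,
                                         Map<String, Set<String>> dependents) {
        Set<String> affected = new LinkedHashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(changed);
        while (!work.isEmpty()) {
            String current = work.pop();
            if (affected.add(current)) {
                // every direct dependent is also affected; keep walking
                for (String d : dependents.getOrDefault(current, Set.of())) {
                    work.push(d);
                }
            }
        }
        return affected;
    }
}
```

For example, with Invoice depending on Order and Report depending on Invoice, a change to Order selects all three classes; the caller would then map each to its tester (OrderTest, ...) and enqueue only those test runs instead of the whole suite.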
Sounds like a good third-party opportunity. Moving to JDT UI for comment, as they provide the JUnit plugin.

This idea needs to grow further - not something for 2.0, therefore deferring.

[JUnit] No action planned for 2.1. But as already said, this is a good third-party opportunity. Marking as help wanted.

Changing state from "assigned later" to "resolved later". "Assigned later" got introduced by the last bug conversion and is not a supported Eclipse bug state.

Martin Möbius was kind enough to point out on eclipse.tools.jdt that this request has many similarities to the enhancement I proposed at https://bugs.eclipse.org/bugs/show_bug.cgi?id=51292. The plugin attached there does not yet do any dependency analysis for test selection, but it may be relevant to this discussion.

As of now, 'LATER' and 'REMIND' resolutions are no longer supported. Please reopen this bug if it is still valid for you.