MEMDB4OStore was meant to replace DB4OStore in our test suite, the purpose being a blazing-fast implementation that makes the execution of the test suite much faster. Caspar found that some tests failed in the file-based store but not in the mem-based one, so he switched the default test config to the file-based store. It would be nice to have another configuration that uses the mem-based store, enabling quick test-driven development. The file-based config must still be executed once the mem-based config passes, to make sure the implementation conforms to the test suite.
Moving all open enhancement requests to 4.1
Any activity here?
I'll handle it as soon as I'm back in the office :)
Great ;-)
Just so we don't have any misunderstandings: all that's required is something like an AllTestsDB4OMem class and a corresponding launch config. The MemDB4ORepositoryConfig class itself is still available (in AllTestsDB4O.java).
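Roughly like this (just a sketch with JUnit 3-style wiring; the class name and suite setup are illustrative, the actual CDO test framework hooks the config in differently):

  import junit.framework.Test;
  import junit.framework.TestSuite;

  // Hypothetical suite class for running the tests against the in-memory
  // DB4O store; the real suite would register MemDB4ORepositoryConfig and
  // add the same test classes as AllTestsDB4O.
  public class AllTestsDB4OMem
  {
      public static Test suite()
      {
          TestSuite suite = new TestSuite("CDO Tests (DB4O, in-memory)");
          // suite.addTestSuite(SomeTest.class); // wired up by the test framework
          return suite;
      }
  }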
sure ;)
Created attachment 199250 [details]
patch v1

Created AllTestsMEMDB4O and moved MEMDB4OConfig into it. (Some tests are failing, but in the same way as in AllTestsDB4O.)
Created attachment 199334 [details]
Patch v2

Our releng version builder was not configured, so nobody noticed that the manifest version was not increased to 4.0.100!
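In other words, the bundle manifest needs the bumped version, roughly like this (the .qualifier suffix follows the usual Eclipse convention and is shown here as an assumption):

  Bundle-Version: 4.0.100.qualifier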
Yeah, my bad. In fact, I have the API Baseline deactivated, because I only see errors asking me to increase the version to 5.0 :(
IIRC Caspar had a similar problem. Reinstalling the baseline helped him. I'd first try to click "Reset" in the "Edit Baseline" dialog.
Committed to TRUNK, rev 8634
Created attachment 199400 [details]
Patch v3 (incremental)

Can we keep the two DB4O configs identical in terms of the tests that are included/excluded? The following patch achieves this by having AllTestsMEMDB4O inherit from AllTestsDB4O. It also adds -Xmx1024m to the launch configs; I was getting some out-of-memory errors while running the DB4O suites.
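The inheritance idea, roughly sketched (illustrative names and wiring only, not the actual patch):

  import junit.framework.Test;
  import junit.framework.TestSuite;

  // Illustrative stand-in for the existing file-based suite.
  class AllTestsDB4O
  {
      protected TestSuite createSuite(String name)
      {
          TestSuite suite = new TestSuite(name);
          // ... include/exclude the same test classes for every DB4O config ...
          return suite;
      }
  }

  // The mem-based suite inherits the included/excluded tests and only
  // swaps in the mem-based repository config.
  public class AllTestsMEMDB4O extends AllTestsDB4O
  {
      public static Test suite()
      {
          return new AllTestsMEMDB4O().createSuite("CDO Tests (MEMDB4O)");
      }
  }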
Caspar, in fact I thought the same, but forgot to act on it :P

I was also getting some memory problems, though not related to the heap but to the PermGen.

Regarding skipped tests in AllTestsDB4O, I'd say there are some that could be removed. The branching tests are no longer failing, probably because they are no longer being executed (branching tests shouldn't be executed in an IStore implementation that doesn't support it). Also, it would be good to remove those that take too long, and re-analyze the JUnit report to determine which ones are now taking way too long. I found some tests in the file-based store that were taking between 100 and 200 seconds.

Cheers!
(In reply to comment #13)
> I was also getting some memory problems, though not related to the heap
> but to the PermGen.

I added a setting for that as well.

> Regarding skipped tests in AllTestsDB4O, I'd say there are some that could
> be removed. The branching tests are no longer failing, probably because they
> are no longer being executed (branching tests shouldn't be executed in an
> IStore implementation that doesn't support it).

Yes -- but test-skipping logic like ConfigTest.skipUnlessBranching doesn't get invoked until after the whole repo has been set up. And that's done for every test method if the class declares @NeedsCleanRepo. For our slower back ends (and that includes disk-based DB4O, IMO) it's a serious waste of time to setUp/tearDown the repo dozens of times without executing any test logic, easily adding 10 minutes to the suite.

> Also, it would be good to remove those that take too long, and re-analyze the
> JUnit report to determine which ones are now taking way too long. I found some
> tests in the file-based store that were taking between 100 and 200 seconds.

Yeah, I noticed those too, very annoying. I haven't looked at them in detail, but at first glance they just seemed to be iteratively storing large numbers of objects. Not sure what the point of that is... Anyone?
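For reference, the heap and PermGen settings in the launch configs would then look something like this (the PermGen size is just an example value, not necessarily what the patch uses):

  -Xmx1024m -XX:MaxPermSize=256m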
> > Regarding skipped tests in AllTestsDB4O, I'd say there are some that could
> > be removed. The branching tests are no longer failing, probably because they
> > are no longer being executed (branching tests shouldn't be executed in an
> > IStore implementation that doesn't support it).
>
> Yes -- but test-skipping logic like ConfigTest.skipUnlessBranching doesn't
> get invoked until after the whole repo has been set up. And that's done
> for every test method if the class declares @NeedsCleanRepo. For our slower
> back ends (and that includes disk-based DB4O, IMO) it's a serious waste of
> time to setUp/tearDown the repo dozens of times without executing any test
> logic, easily adding 10 minutes to the suite.

I see your point and it makes sense. I'd understand that for dozens of tests, but for 3 it's negligible. Wouldn't it be desirable to introduce some skip-ahead mechanism in the test framework? Not sure how feasible that is...

> > Also, it would be good to remove those that take too long, and re-analyze the
> > JUnit report to determine which ones are now taking way too long. I found some
> > tests in the file-based store that were taking between 100 and 200 seconds.
>
> Yeah, I noticed those too, very annoying. I haven't looked at them in detail,
> but at first glance they just seemed to be iteratively storing large numbers
> of objects. Not sure what the point of that is... Anyone?

Yeah, usually just large iterations of some actions, commits with thousands of objects... I wonder how that proves any scalability, if that's the actual intention...
> > Yes -- but test-skipping logic like ConfigTest.skipUnlessBranching doesn't
> > get invoked until after the whole repo has been set up. And that's done
> > for every test method if the class declares @NeedsCleanRepo. For our slower
> > back ends (and that includes disk-based DB4O, IMO) it's a serious waste of
> > time to setUp/tearDown the repo dozens of times without executing any test
> > logic, easily adding 10 minutes to the suite.
>
> I see your point and it makes sense. I'd understand that for dozens of tests,
> but for 3 it's negligible. Wouldn't it be desirable to introduce some skip-ahead
> mechanism in the test framework? Not sure how feasible that is...

Method annotations would be a good way.
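A skip-ahead mechanism along those lines could look roughly like this (all names are illustrative, nothing here is existing CDO API):

  import java.lang.annotation.ElementType;
  import java.lang.annotation.Retention;
  import java.lang.annotation.RetentionPolicy;
  import java.lang.annotation.Target;
  import java.lang.reflect.Method;

  public class SkipAheadSketch
  {
      // Hypothetical method annotation marking tests that need branching support.
      @Retention(RetentionPolicy.RUNTIME)
      @Target(ElementType.METHOD)
      public @interface NeedsBranching
      {
      }

      // The test runner would call this *before* setting up the repository,
      // so skipped tests never pay the setUp/tearDown cost.
      public static boolean shouldSkip(Class<?> testClass, String testMethodName,
          boolean storeSupportsBranching) throws NoSuchMethodException
      {
          Method method = testClass.getMethod(testMethodName);
          return method.isAnnotationPresent(NeedsBranching.class) && !storeSupportsBranching;
      }
  }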
(In reply to comment #15)
> I see your point and it makes sense. I'd understand that for dozens of
> tests, but for 3 it's negligible.

I'm not sure we understand each other. I disabled 3 test *classes*, not 3 tests. Those classes are BranchingTest and 2 classes derived from it, altogether containing 54 tests. They take 2-3 minutes on a fast machine, doing nothing but setting up and tearing down the repo 54 times. I personally find that long enough to be annoying when I'm waiting for the tests to complete before I do a commit. Besides, it's the principle. With Derby, 54 setups/teardowns would probably take half an hour.

> Yeah, usually just large iterations of some actions, commits
> with thousands of objects... I wonder how that proves any
> scalability, if that's the actual intention...

The worst offenders are the 2 tests in Bugzilla_259869_Test, which compare the total time needed for 500 commits of a simple attribute change to the time needed for 500 commits of... no change. But committing a transaction that's not dirty is a no-op: nothing is signaled to the server. I think these tests are meaningless.

Btw, Eike, I see you commented but didn't review. Did you miss the flag? Maybe 2nd review requests don't show up on the Mylyn radar?
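To illustrate the point about clean commits (a sketch only, not the actual CDO code):

  // A commit of a transaction with no changes can return immediately;
  // the server is never contacted, so timing 500 such commits measures nothing.
  interface Transaction
  {
      boolean isDirty();
  }

  class CommitSketch
  {
      void commit(Transaction transaction)
      {
          if (!transaction.isDirty())
          {
              return; // no-op: nothing is signaled to the server
          }
          // ... otherwise send the change set to the server ...
      }
  }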
(In reply to comment #17)
> Btw, Eike, I see you commented but didn't review. Did you miss the
> flag? Maybe 2nd review requests don't show up on the Mylyn radar?

They do, but due to my releng activities my workspace is entirely broken and I didn't have time to fix it yesterday. I still couldn't apply the patch, but from looking at the diff there doesn't seem to be a problem with it.
Committed revision 8671.
> I'm not sure we understand each other. I disabled 3 test
> *classes*, not 3 tests. Those classes are BranchingTest and 2
> classes derived from it, altogether containing 54 tests. They take
> 2-3 minutes on a fast machine, doing nothing but setting up and
> tearing down the repo 54 times. I personally find that long enough
> to be annoying when I'm waiting for the tests to complete before
> I do a commit.

Oh yeah, you are right. I wrote too fast!
Closing.