| Summary: | update Gerrit test suite to consume fixtures from service | | |
|---|---|---|---|
| Product: | z_Archived | Reporter: | Miles Parker <milesparker> |
| Component: | Mylyn | Assignee: | Steffen Pingel <steffen.pingel> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | enhancement | | |
| Priority: | P3 | CC: | sam.davis, sebastien.dubois, shawn.minto, steffen.pingel |
| Version: | unspecified | | |
| Target Milestone: | 2.0 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Bug Depends on: | 386344 | | |
| Bug Blocks: | | | |
Description
Miles Parker
Steffen, I forget, was there something else that we wanted to do here? Yes, test fixtures should be consumed from the service along the lines of this: https://git.eclipse.org/r/#/c/8999/. Is there anything I can do now on this, or is there something else happening on the server side still? Please note that our tests require there actually to be tasks with file items etc. on them. Are there provisions for automating that, or some other way of getting an initial population/clone in there? (I'm not as familiar w/ Gerrit internals as I should be at this point.) Worst case, we could use the Gerrit connector to dogfood creating the tasks, but that of course has its own difficulties, one of which is that the push stuff isn't factored out of the UI yet...

The server side should be mostly done. There are a few minor improvements to the puppet scripts that are still needed to fully automate provisioning, but the test instances are already up and running. The next step is to automate provisioning of the repository with reviews. This can either happen from tests (preferably) or as part of the puppet provisioning. I started tinkering around with JGit to create a simple test harness. I'll see how far I get with this over the next few days and we can then discuss how to proceed.
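A minimal sketch of what such a JGit-based harness might look like -- illustrative only, not the actual harness; the project URL, file name, credentials, and Change-Id value are placeholders:

```java
import java.io.File;
import java.io.PrintWriter;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.RefSpec;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

public class GerritTestHarness {

	public static void main(String[] args) throws Exception {
		// Clone the test project from the Gerrit test instance into a
		// scratch directory.
		File workDir = File.createTempFile("gerrit-harness", "");
		workDir.delete();
		Git git = Git.cloneRepository()
				.setURI("http://mylyn.org/gerrit-2.4.2/test-project") // placeholder project URL
				.setDirectory(workDir)
				.call();

		// Create some content and commit it. The Change-Id footer is
		// normally added by the commit-msg hook; a dummy value is used here.
		PrintWriter writer = new PrintWriter(new File(workDir, "test.txt"), "UTF-8");
		writer.println("test data " + System.currentTimeMillis());
		writer.close();
		git.add().addFilepattern("test.txt").call();
		git.commit().setMessage("test change\n\n"
				+ "Change-Id: Ia1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0").call();

		// Push to the magic ref so Gerrit turns the commit into a review.
		git.push()
				.setRefSpecs(new RefSpec("HEAD:refs/for/master"))
				.setCredentialsProvider(
						new UsernamePasswordCredentialsProvider("tests", "secret")) // placeholder credentials
				.call();
	}
}
```

Pushing to refs/for/master is what makes Gerrit create a review from the commit rather than updating the branch directly.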
Cool. I'm not sure which way I feel about the test item provisioning. On the one hand it would be really clean to have this all dogfooding from the connector, and that seems like it would be easier and more maintainable in the end than getting into the Gerrit internals. But OTOH the tests aren't independent anymore. It's moot for now given that we don't yet have the uploading part separated anyway.

Here is the first round of changes that adds the latest Gerrit versions and streamlines the puppet setup so it's consistent with other connectors: https://git.eclipse.org/r/#/c/11958/1 . The test harness will be added in a separate change that is still in the works.

(In reply to comment #6)
> Cool. I'm not sure which way I feel about the test item provisioning. On the
> one hand it would be really clean to have this all dogfooding from the
> connector, and that seems like it would be easier and more maintainable in
> the end than getting into the Gerrit internals. But OTOH the tests aren't
> independent anymore. It's moot for now given that we don't yet have the
> uploading part separated anyway.

I also realized that we're going to have to have the updates for some testing scenarios anyway, as we need to be able to test the case where we modify and then need to get the change -- I just noticed this because we weren't properly covering the case where comments are added to an existing patch set. I guess we could script that somehow, but it does seem better to ensure that we can do all of this from a clean slate.

I'm curious what people's thoughts are in general about designing this sort of group of tests; it's an interesting issue, and it seems like one that must have been encountered in other connector testing scenarios. It seems that you could:

1) Build the tests so that they trade off of each other, e.g. a) run some Push Comment change tests, and b) run the Pull comment updates tests. But in this scenario, (a) isn't really being tested until you get to (b)! And there is always the possibility for some kind of cross-contamination ;) -- you could imagine some edge cases in which you are trying to detect a change in (b) but it's impossible to tell whether both failed or both succeeded.

2) Treat each test separately, so that you set up each test as a clean slate, e.g. a) run a test to Push some new comments, and then test it by Pulling the comments using the API, and b) set up the test by creating some comments and then test it by Pulling the comments. But then you're doing twice the maintenance, and you'll have both tests failing if something borks on either end.

Of course there is always: 3) Use some kind of scripting approach to reliably get you where you want to be and then test the state later, but then you're talking a lot of maintenance if it is even possible, and you're starting to get away from TDD principles.

Tests should create the setup they need. It's true that a lot of tests fail if the setup is broken or the repository doesn't work as expected. The code that does test setup should be as simple as possible, and tests will likely provide little value if it doesn't even work.

Here is a simple first pass at getting basic test data for the synchronization tests in place: https://git.eclipse.org/r/#/c/12002/ .

Miles, GerritRemoteFactoryTest will need to be rewritten to make assumptions based on dynamically generated test data.
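As a sketch of the "tests create the setup they need" idea (essentially option 2 above) -- the abstract hooks are hypothetical placeholders, since the push API isn't factored out of the UI yet:

```java
import static org.junit.Assert.assertEquals;

import java.util.List;

import org.junit.Before;
import org.junit.Test;

public abstract class AbstractCommentRoundtripTest {

	private String changeId;

	@Before
	public void setUp() throws Exception {
		// Every test provisions the review it needs; nothing is assumed
		// to pre-exist in the repository.
		changeId = createChange("change for " + getClass().getSimpleName());
	}

	@Test
	public void pullRetrievesPublishedComment() throws Exception {
		publishComment(changeId, "a comment");
		List<String> comments = pullComments(changeId);
		assertEquals(1, comments.size());
		assertEquals("a comment", comments.get(0));
	}

	// Hypothetical hooks: e.g. a JGit push to refs/for/master and a query
	// through the Gerrit API, respectively.
	protected abstract String createChange(String subject) throws Exception;

	protected abstract void publishComment(String changeId, String text) throws Exception;

	protected abstract List<String> pullComments(String changeId) throws Exception;
}
```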
(In reply to comment #9)
> Tests should create the setup they need. It's true that a lot of tests fail
> if the setup is broken or the repository doesn't work as expected. The code
> that does test setup should be as simple as possible, and tests will likely
> provide little value if it doesn't even work.

Right. The concern is that you're actually testing two things at once: the setup and then the actual thing under test. Which means that when a failure occurs you'll see the failure show up on the inverse test as well as the actual failed test. So if you break the push test, you break the pull test as well. I mean, there isn't any way around that, but just to point it out.

> Here is a simple first pass at getting basic test data for the
> synchronization tests in place: https://git.eclipse.org/r/#/c/12002/ .
>
> Miles, GerritRemoteFactoryTest will need to be rewritten to make assumptions
> based on dynamically generated test data.

Okay. I'll try to take a look tomorrow. Thanks again for getting this all up and going!

(In reply to comment #10)
> Right. The concern is that you're actually testing two things at once: the
> setup and then the actual thing under test. Which means that when a failure
> occurs you'll see the failure show up on the inverse test as well as the
> actual failed test. So if you break the push test, you break the pull test
> as well. I mean, there isn't any way around that, but just to point it out.

Yes. We have tried testing against pre-populated repositories with static test data, and that approach has failed. It's a maintenance burden when new repositories are released, it's limiting in what tests can do, it's brittle when someone decides to manually edit the repository, it often breaks when tests fail and leave the repository in a broken state, and for the vast majority of tests there is no benefit. As you point out, with dynamic populating of data some stuff is harder to test, though, e.g. character encoding. It's difficult to validate that the data in the repository is actually what the test intended to put in there :).

I have updated the patch set and made sure the Gerrit trigger tests still pass. Let's track updating of GerritRemoteFactoryTest on bug 405138.

I have pushed a small fix to set the canonical web URL property to get rid of the bogus redirect on login: https://git.eclipse.org/r/#/c/12017/.
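For context: that property is gerrit.canonicalWebUrl in the instance's etc/gerrit.config, and when it doesn't match the URL the server is actually reached under, Gerrit redirects to the wrong address after login. The value below is illustrative:

```
[gerrit]
	canonicalWebUrl = http://mylyn.org/gerrit-2.4.2/
```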
> > Miles, GerritRemoteFactoryTest will need to be rewritten to make assumptions
> > based on dynamically generated test data.
>
> Okay. I'll try to take a look tomorrow. Thanks again for getting this all up
> and going!

Steffen, just zeroing out the current Mylyn tests has totally broken everything for me. I didn't realize that you were going to do that before I had a new testing solution in place. This means that we won't be able to test any of the pushes for current work now, and it's going to take quite a bit of time to figure out how to populate the tests w/o it, given that we don't actually have the push API yet. :(
(In reply to comment #14)
> Steffen, just zeroing out the current Mylyn tests has totally broken
> everything for me. I didn't realize that you were going to do that before I
> had a new testing solution in place. This means that we won't be able to
> test any of the pushes for current work now, and it's going to take quite a
> bit of time to figure out how to populate the tests w/o it, given that we
> don't actually have the push API yet. :(

I don't know what you mean. The previously existing test data is still there, and all tests that previously passed still do: http://mylyn.org/gerrit-2.4.2/#/q/status:open,n,z . The only thing that has changed is the URL (it used to be gerrit-2.4, now it's gerrit-2.4.2). Just rebase your change.

(In reply to comment #15)
> I don't know what you mean. The previously existing test data is still
> there, and all tests that previously passed still do:
> http://mylyn.org/gerrit-2.4.2/#/q/status:open,n,z . The only thing that has
> changed is the URL (it used to be gerrit-2.4, now it's gerrit-2.4.2). Just
> rebase your change.

My mistake. I confused the bad URL for missing data. As you say, it works fine w/ the new URL. :)

It doesn't look like it will be that hard. I think the thing to do is use the raw Gerrit API to populate and then the remote stuff to test. I'd like to add more than one user, btw. Do we do that in puppet or through the API?

(In reply to comment #17)
> I'd like to add more than one user, btw. Do we do that in puppet or through
> the API?

It needs to be added through puppet. It's in site.pp in the o.e.m.gerrit.releng module. I would recommend using the "user@mylyn.eclipse.org" account since that's already in the htpasswd file.

The change was merged. Thanks for the review!
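For completeness, the populate-then-verify flow described in the thread above might be shaped like this -- a sketch only; both hooks are hypothetical stand-ins for the raw Gerrit API push and the connector's remote layer:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public abstract class ReviewRoundtripTest {

	@Test
	public void pushedChangeIsVisibleThroughConnector() throws Exception {
		// Populate: push a uniquely named change, e.g. with a JGit harness
		// like the sketch earlier in this report (hypothetical helper).
		String subject = "roundtrip " + System.currentTimeMillis();
		pushChange(subject);

		// Verify: query the change back through the connector's remote
		// layer (hypothetical hook).
		assertNotNull(findReviewBySubject(subject));
	}

	protected abstract void pushChange(String subject) throws Exception;

	protected abstract Object findReviewBySubject(String subject) throws Exception;
}
```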