Some Eclipse Foundation services are deprecated, or will be soon. Please ensure you've read this important communication.

Bug 405280

Summary: Hudson Per Project for LocationTech
Product: Working Groups Reporter: Thanh Ha <thanh.ha>
Component: LocationTechAssignee: Thanh Ha <thanh.ha>
Status: VERIFIED FIXED QA Contact: Andrea Ross <andrea.ross>
Severity: normal    
Priority: P3 CC: andrea.ross, denis.roy, thanh.ha, webmaster
Version: unspecified   
Target Milestone: ---   
Hardware: PC   
OS: Linux   
Whiteboard:
Bug Depends on:    
Bug Blocks: 403843    

Description Thanh Ha CLA 2013-04-09 09:34:27 EDT
Per Bug 403843 we should consider setting up LocationTech with per project Hudson instances.
Comment 1 Thanh Ha CLA 2013-04-15 10:46:27 EDT
Some initial thoughts:

OS:

process user: hudson-<project>

File system:

/hudsonroot -> /home/hudson
/hudsonroot/<project>         (don't forget sticky bit)

/hudson-workspace
/hudson-workspace/<project>   (don't forget sticky bit)


Webaccess:

http -> https
https://locationtech.org/<project>/hudson


* Use xinetd to start service on demand. 


Some questions for myself if we use xinetd for on demand Hudson access, how do we tell Hudson to stop when it's done?

Need some sort of interface to allow committers to restart Hudson. Is SSH sufficient for now?
(Would be nice to have it integrated in some sort of project management page on a website)


/hudson-workspace is part of the root file system. This should be a separate partition so that if projects fill up the hard disk the OS won't crash. Putting workspace on a none NFS share will stop us from seeing the .nfs lock files not being released issue. Does this need to have a quota? perhaps the suggestion can be that projects cleanup the workspace as part of the build?
Comment 2 Denis Roy CLA 2013-04-15 11:23:28 EDT
(In reply to comment #1)
> process user: hudson-<project>

Just to nit: Current group naming is project.component (such as technology.paho) so I'd prefer we keep the same naming scheme for the Hudson users.  technology.paho-hudson




> File system:
> 
> /hudsonroot -> /home/hudson
> /hudsonroot/<project>         (don't forget sticky bit)
> 
> /hudson-workspace
> /hudson-workspace/<project>   (don't forget sticky bit)

Why the sticky bit?  The hudson resources for one project should only be written by that instance's Hudson user.

In terms of filesystems:

- I think the hudsonroot should be in the project.hudson user's home directory.  On eclipse.org this is an NFS Mount.  This will allow us to launch the Hudson instance on any one of our virtual machines, greatly facilitate backups and allow us to restore service more rapidly should a vserver fail.

- I think the workspace directory should be on 'wilma', our build server storage unit. The individual vserver hosts have limited local disk resources (in terms of space and disk time) so multiple builds running simultaneously can cause performance issues not only for the builds themselves, but for other VMs.


> Webaccess:
> 
> http -> https
> https://locationtech.org/<project>/hudson

Sounds good


> 
> Need some sort of interface to allow committers to restart Hudson. Is SSH
> sufficient for now?
> (Would be nice to have it integrated in some sort of project management page
> on a website)

Let's solve this problem in version 2.0.  I want to keep committers away from the SSH shell as much as possible.  I have some ideas here, but plan on some mechanism that will track the project, the hostname that the project's hudson is operating on, as well as the port used.


 
> /hudson-workspace is part of the root file system. This should be a separate
> partition so that if projects fill up the hard disk the OS won't crash.
> Putting workspace on a none NFS share will stop us from seeing the .nfs lock
> files not being released issue. Does this need to have a quota? perhaps the
> suggestion can be that projects cleanup the workspace as part of the build?

As per above, let's put these resources on NFS.  We will likely encounter the .nfs bug, but there are workarounds (using rm -rf system call, or being less aggressive with the file/directory attribute caching).
Comment 3 Thanh Ha CLA 2013-04-15 11:37:15 EDT
(In reply to comment #2)
> (In reply to comment #1)
> > process user: hudson-<project>
> 
> Just to nit: Current group naming is project.component (such as
> technology.paho) so I'd prefer we keep the same naming scheme for the Hudson
> users.  technology.paho-hudson
> 

Sounds good, I'll use this scheme.

> > File system:
> > 
> > /hudsonroot -> /home/hudson
> > /hudsonroot/<project>         (don't forget sticky bit)
> > 
> > /hudson-workspace
> > /hudson-workspace/<project>   (don't forget sticky bit)
> 
> Why the sticky bit?  The hudson resources for one project should only be
> written by that instance's Hudson user.
> 

That's a good point, I won't bother setting that.


> In terms of filesystems:
> 
> - I think the hudsonroot should be in the project.hudson user's home
> directory.  On eclipse.org this is an NFS Mount.  This will allow us to
> launch the Hudson instance on any one of our virtual machines, greatly
> facilitate backups and allow us to restore service more rapidly should a
> vserver fail.
> 

I was planning on putting hudsonroot on /home/hudson which is NFS mounted. Then making each project hudson user's home /home/hudson/<project>. Just so that all the Hudson instance homes are in the same directory.

> - I think the workspace directory should be on 'wilma', our build server
> storage unit. The individual vserver hosts have limited local disk resources
> (in terms of space and disk time) so multiple builds running simultaneously
> can cause performance issues not only for the builds themselves, but for
> other VMs.
> 

I recalled we had a discussion on potentially making the workspace none NFS mounted so that we don't have the .nfs issue. Is "wilma" the same NFS mount that is currently mounting /home ?
Comment 4 Denis Roy CLA 2013-04-15 11:59:48 EDT
> I was planning on putting hudsonroot on /home/hudson which is NFS mounted.
> Then making each project hudson user's home /home/hudson/<project>. Just so
> that all the Hudson instance homes are in the same directory.

Awesome.

> I recalled we had a discussion on potentially making the workspace none NFS
> mounted so that we don't have the .nfs issue. Is "wilma" the same NFS mount
> that is currently mounting /home ?

I can think of more options:

1. Image file. If we want the best of both worlds (local storage + on a separate box) we can create a 50G sparse image file for each project's workspace on wilma's NFS space and, as part of the xinetd startup process for a project's hudson instance, mount that image file as a local disk on the vserver.  Data on a shared location, but the OS sees it as local storage.  I'd like to re-benchmark performance of this, though, as we haven't done it in a while.

2. Leverage iscsi.  If wilma has an iscsi target for each project (which maps to a directory) then the xinetd process that launches a hudson instance can initiate the iscsi device.  The server then sees the disk as local storage.  This is likely more complex than a single image file.

3. If it is acceptable to "lose" the workspace contents (assuming it is disposable, as good build artifacts should be moved to the downloads area) perhaps we can use local disk resources.  If the Hudson instance is instantiated on a different machine, the workspace is simply recreated from scratch.  I'd still be worried about disk churn on systems that are not optimized for disk I/O though.
Comment 5 Thanh Ha CLA 2013-04-17 09:55:54 EDT
I created a uDig Hudson instance at https://locationtech.org/udig/hudson

So far it's configured to hook into LDAP for access control and added the technology.udig project as administrators for the instance. Users in the project group can login with their email address.

One thing I'd like to understand is how much control / configuration we want to leave up to the individual projects? Or rather what's the minimum setup we want to do with a Hudson instance before giving them control?

(Such as, do we setup JDKs, Mavens, Gits, etc...)


We'd probably want to setup a /shared/comomon on LocationTech as well.


I've left the workspace default for now since it sounds like we need some more investigation on what our options are.


Regarding xinetd I think I'm going to need some help with getting this setup. I'm having trouble getting xinetd scripts working.
Comment 6 Thanh Ha CLA 2013-04-18 09:54:46 EDT
Had a meeting with Andrew and Denis yesterday. We decided that projects should have all permissions _except_ for "Administrator" and "Configuring Slaves".

We also need to come up with a list of standard Hudson plugins that are Eclipse approved which we will install on all Hudson instances by default. Projects wanting plugins outside of this list will need to create a bug.

Finally we need to also configure JDKs, Mavens, etc... for the Hudson instance. We also need a /shared/commons too.

I had a quick look at the current storage and it looks like none of the mount points seem to be NFS

/dev/xvda2             18G  6.1G   11G  36% /
/dev/xvdd1             77G   12G   62G  16% /home


Denis, can we get some storage hooked up for locationtech.org?

I believe we wanted

wilma -> Hudson workspaces
NFS Mount -> /home
NFS Mount -> /shared/commons
Comment 7 Denis Roy CLA 2013-04-25 13:11:01 EDT
> wilma -> Hudson workspaces
We'll mount this as /home/hudson to be consistent with our other similar deployments, correct?

> NFS Mount -> /home
If we have /home/hudson, I think we don't need this?

> NFS Mount -> /shared/commons
It's probably best to mount the existing /shared/common, right?
Comment 8 Thanh Ha CLA 2013-04-25 13:16:19 EDT
(In reply to comment #7)
> > wilma -> Hudson workspaces
> We'll mount this as /home/hudson to be consistent with our other similar
> deployments, correct?
> 

Yes lets do that and lets use the file partition like we did with LTS and just put the whole Hudson install in there. Hopefully we won't see any more nfs lock issues with this method.

> > NFS Mount -> /home
> If we have /home/hudson, I think we don't need this?
> 

Agreed.

> > NFS Mount -> /shared/commons
> It's probably best to mount the existing /shared/common, right?

Agreed.
Comment 9 Denis Roy CLA 2013-04-25 15:45:13 EDT
> > We'll mount this as /home/hudson to be consistent with our other similar
> > deployments, correct?

/home/hudson has been provisioned as a 200G device for workspaces.  I've left the old local /home/hudson.bak there, but all its data was copied to the "new" one.  Please delete the old one once you're satisfied all is there.



> > > NFS Mount -> /shared/commons
> > It's probably best to mount the existing /shared/common, right?

/shared is now available exactly like it is on Eclipse's Hudson boxes.
Comment 10 Thanh Ha CLA 2013-04-25 16:15:03 EDT
Seems to be working as expected.