| Summary: | Sponsored builds: you can build it, we can help | | |
|---|---|---|---|
| Product: | Community | Reporter: | Kim Moir <kim.moir> |
| Component: | Website | Assignee: | phoenix.ui <phoenix.ui-inbox> |
| Status: | RESOLVED WORKSFORME | QA Contact: | |
| Severity: | normal | | |
| Priority: | P3 | CC: | caniszczyk, david_williams, gunnar, mike.milinkovich, Mike_Wilson, remy.suen, sbouchet, wayne.beaton |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Windows XP | | |
| Whiteboard: | | | |
|
Description
Kim Moir

We do have quite a bit of money via Friends of Eclipse: http://wiki.eclipse.org/Friends_of_Eclipse/Funds_Allocation. I think the problem would be people to manage the machines? Chris, I opened a bug for new machines from FoE disbursements and was told no. See https://bugs.eclipse.org/bugs/show_bug.cgi?id=333594#c1

(In reply to comment #2)
> Chris, I opened a bug for new machines from FoE disbursements and was told no.
> See https://bugs.eclipse.org/bugs/show_bug.cgi?id=333594#c1

Right. Because, as Chris pointed out, the problem is the people costs, not the servers. The operative phrase in that comment is "...to be maintained by the Eclipse Webmasters".

Not to put too fine a point on this, but I don't believe Kim is asking for hardware that would incur additional "people costs". She's just asking for the *existing* infrastructure (e.g. Mac, Hudson disk space, ...) to be upgraded to the point that the foundation webmasters can spend *less* time keeping it limping along.

(In reply to comment #4)
> Not to put too fine a point on this, but I don't believe Kim is asking for
> hardware that would incur additional "people costs". She's just asking for the
> *existing* infrastructure (e.g. Mac, Hudson disk space, ...) to be upgraded to
> the point that the foundation webmasters can spend *less* time keeping it
> limping along.

If webmaster agrees, then I'm all for using FoE funds for this.

> If webmaster agrees, then I'm all for using FoE funds for this.

We agree :) We need disk space for builds.
(In reply to comment #6)
> > If webmaster agrees, then I'm all for using FoE funds for this.
>
> We agree :) We need disk space for builds.

Let's go shopping!

Should I file a new FoE proposal for this? Or should the webmaster file one, with a more adequate description of the funds required?

(In reply to comment #7)
> Let's go shopping!
>
> Should I file a new FoE proposal for this?
>
> Or should the webmaster file one, with a more adequate description of the funds required?

Before we can approve anything, we need a description of the funds required. I don't care where that information comes from, but Webmaster seems like the obvious choice.

> Before we can approve anything, we need a description of the funds required.

A disk array, 10 terabytes or larger, to store Hudson workspace data and other temporary build-related artifacts.

How is that? I can spec out the actual hardware.
(In reply to comment #9)
> > Before we can approve anything, we need a description of the funds required.
>
> A disk array, 10 terabytes or larger, to store Hudson workspace data
> and other temporary build-related artifacts.
>
> How is that? I can spec out the actual hardware.

We'll need to get some sense of cost... Do you have a specific unit in mind?

> We'll need to get some sense of cost...
>
> Do you have a specific unit in mind?

I have a specific unit in mind ... $5654 USD + shipping & duty:
1. 1 Storform nServ A513 $5654.00 $5654.00
Details:
CPU: 1 x Opteron 6128 (2.0GHz, 8-Core, Skt G34, 512KB/Core L2 Cache, 12MB L3 Cache) 80W 45nm
RAM: 8GB (8 x 1GB) Operating at 1333MHz Max (DDR3-1333 ECC Unbuffered DIMMs)
NIC: Intel 82576 Dual-Port Gigabit Ethernet Controller - Integrated
Management: Integrated IPMI 2.0 with KVM and Dedicated LAN
Integrated Controller: 6-Port SATA Controller (AMD SP5100) - SAS Controller Required; See PCI Slots
SAS 2.0 Expander: Expander provides connectivity to all drives and expansion port (SAS Controller Required)
Expansion Port: External SAS 2.0 Connector (24Gb/s, SFF-8088) for JBOD Expansion
LP PCIe 2.0 x16 - 1: LSI 9260-4i 6Gb/s SAS/SATA RAID (4-Port Int) with 512MB Cache & BBU
LP PCIe 2.0 x8: No Item Selected
LP PCIe 2.0 x4 (x8 Slot) - 1: No Item Selected
Drive Set: 12 x 1TB Seagate Constellation ES (6Gb/s, 7.2K RPM, 16MB Cache) 3.5" SAS
RAID Configuration: RAID 6 with Hot Spare
System Volume: 60GB Boot Volume (Carved from RAID Array)
Power Supply: Redundant 1200W Power Supply with PMBus - 80 PLUS Gold Certified
Rail Kit: Quick-Release Rail Kit for Square Holes, 26.5 - 36.4 inches
OS: No Item Selected
Warranty: Standard 3-Year Warranty
Configured Power: 432 W, 443 VA, 1473 BTU/h, 4.0 Amps (110V), 2.1 Amps (208V)
==========================================================================
Total: $5654.00
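As a quick sanity check on the quoted configuration (my arithmetic, not part of the original thread): RAID 6 consumes two drives' worth of capacity for parity, the hot spare removes one more drive from the usable pool, and the 60 GB boot volume is carved from the same array, so the 12 x 1 TB layout above nets out to roughly 9 TB usable, slightly under the "10 terabytes or larger" asked for earlier in the thread.

```python
# RAID capacity sanity check (my arithmetic; figures taken from the quote above).
DRIVES = 12           # 12 x 1 TB Seagate Constellation ES
HOT_SPARES = 1        # "RAID 6 with Hot Spare"
PARITY_DRIVES = 2     # RAID 6 dedicates two drives' worth of capacity to parity
DRIVE_TB = 1.0
BOOT_TB = 0.06        # 60 GB boot volume carved from the RAID array

usable_tb = (DRIVES - HOT_SPARES - PARITY_DRIVES) * DRIVE_TB - BOOT_TB
print(f"usable: {usable_tb:.2f} TB")  # -> usable: 8.94 TB
```

Whether 8.94 TB versus the requested 10 TB matters depends on how full the Hudson workspaces actually run; the expansion port on the chassis would allow adding a JBOD shelf later if it does.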
(In reply to comment #12)
> ==========================================================================
> Total: $5654.00

Not cheap, but we should see what we can do. Can you open a request against Community->FoE Disbursements and we'll go from there? Please also include what the new hardware would be used for.

(In reply to comment #12)
> 1. 1 Storform nServ A513 $5654.00 $5654.00
>
> Details:
> ...

Just thought I'd mention ... I'm sure you already know this ... that whatever system you get should be able to "delete" files/directories very quickly too, not just read them. On the current setup, I've noticed a few times that it takes a very long time to remove a large directory (and yes, that's using an OS-level 'rm -fr <directory>'). It sometimes seems to take longer to delete a directory (like 20 minutes) than it took to write it there, though I admit I don't normally watch or measure the time. So maybe this is common knowledge, but I'm just learning it: "delete time" is very important for build machines.

> It's a struggle to fund the infrastructure required to run builds at eclipse.
It is, and always will be. Fortunately, companies like IBM, Google, and Oracle keep stepping up to help. More should do so, but that's another conversation.
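The delete-time concern raised above is easy to benchmark before committing to a filesystem or RAID layout. Here is a minimal sketch (my own illustration, not an Eclipse.org script; directory counts and file sizes are arbitrary) that times writing and then removing a tree of many small files, which is roughly the workload of clearing a Hudson workspace:

```python
import os
import shutil
import tempfile
import time

def time_write_and_delete(n_dirs=50, files_per_dir=50, size=4096):
    """Create a tree of small files, then time its removal.

    Build machines churn through many small workspace files, so
    metadata-heavy deletes can dominate; this measures both phases.
    Returns (write_seconds, delete_seconds).
    """
    root = tempfile.mkdtemp(prefix="hudson-ws-bench-")
    payload = b"\0" * size

    t0 = time.monotonic()
    for d in range(n_dirs):
        sub = os.path.join(root, f"dir{d}")
        os.mkdir(sub)
        for f in range(files_per_dir):
            with open(os.path.join(sub, f"f{f}"), "wb") as fh:
                fh.write(payload)
    write_s = time.monotonic() - t0

    t0 = time.monotonic()
    shutil.rmtree(root)  # same effect as 'rm -fr <directory>'
    delete_s = time.monotonic() - t0
    return write_s, delete_s

if __name__ == "__main__":
    w, d = time_write_and_delete()
    print(f"write: {w:.2f}s  delete: {d:.2f}s")
```

Running this on a candidate array with realistic workspace sizes would show whether deletes lag writes the way the comment describes; a delete/write ratio well above 1 on large trees would reproduce the 20-minute symptom.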