| Summary: | s3sync completely borked | | |
|---|---|---|---|
| Product: | Community | Reporter: | Denis Roy <denis.roy> |
| Component: | Servers | Assignee: | Eclipse Webmaster <webmaster> |
| Status: | RESOLVED FIXED | QA Contact: | |
| Severity: | normal | | |
| Priority: | P3 | CC: | mistria, pwebster |
| Version: | unspecified | | |
| Target Milestone: | --- | | |
| Hardware: | PC | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Bug Depends on: | | | |
| Bug Blocks: | 306060 | | |
|
Description
Denis Roy
Well, I've got the AWS data mounted, but I'm not sure it's any better. Here are the problems I'm seeing: directories created by the s3sync script all generate I/O errors on the FUSE mount, so I can't do anything with them. But if I create a directory with mkdir and rsync some files into it (which works well enough, but slowwwww), then tools like s3Fox can't see the directory (they see a 'file' of size 0).

This bug (http://code.google.com/p/s3fs/issues/detail?id=73) may explain what's going on: AWS doesn't have 'folders', so S3 tools invent their own conventions for determining what is (or isn't) a folder, and s3fs on FUSE does it differently than s3Fox.

At this point I think we have three options:

1) Figure out what's broken in the s3sync Ruby script and fix it.
2) Delete everything (via s3Fox) and re-sync it using the s3fs mount.
3) Create a 'new' bucket, sync to it (via s3fs), and update the mirror pointers as required.

One last note: I tried the last comment on the above s3fs bug and it does seem to work for 'new' folders, but it's kinda scary.

-M.

In the meantime, with s3sync, I'm getting a whopping 25 KB/sec uploading to S3:

Create node maven.repo/org/eclipse/persistence/eclipselink/1.2.1-SNAPSHOT/eclipselink-1.2.1-20100318.084938-17.jar
Progress: 5511965b 23945b/s 100%

That's about what I was getting yesterday with s3fs and rsync:

sent 555018493 bytes received 6222 bytes 23643.73 bytes/sec

-M.

OK, the poor performance turned out to be related to our QoS rules. Here's what I'm seeing now with s3sync.rb:

Progress: 3453952b 1144876b/s 89%

1.1 MB/sec makes more sense. This doesn't explain why some valid files were being deleted, but let's give s3sync one more try. It has lots of catching up to do.

s3sync seems to be performing much better. The files that were being deleted are slowly reappearing. I'll assume user error, or a mysterious glitch.
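The folder-convention mismatch in the description can be illustrated with a short sketch. This is not code from s3fs, s3Fox, or s3sync; the key names and the prefix-based inference here are assumptions, showing only why a flat key store like S3 has no single answer to "what is a folder" — each tool has to derive directories from the key names itself, and two tools deriving them differently will disagree about what exists:

```python
def infer_dirs(keys, delimiter="/"):
    """Infer 'directories' from flat S3 key names by treating every
    prefix up to a delimiter as a folder (roughly what a delimiter-based
    bucket listing does). A tool that instead looks for special marker
    objects would produce a different answer for the same keys."""
    dirs = set()
    for key in keys:
        parts = key.split(delimiter)[:-1]  # drop the leaf object name
        for i in range(1, len(parts) + 1):
            dirs.add(delimiter.join(parts[:i]) + delimiter)
    return sorted(dirs)

# Hypothetical key names, loosely modeled on the maven.repo layout above:
keys = ["maven.repo/org/foo.jar", "maven.repo/readme.txt"]
print(infer_dirs(keys))  # ['maven.repo/', 'maven.repo/org/']
```

Under this prefix-based view every intermediate path component is a folder; a tool relying on zero-byte marker objects instead would see no folders at all for these keys, which is consistent with s3Fox showing a size-0 'file' where s3fs sees a directory.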