upload problem (size?)

Nov 30, 2009 at 4:00 PM

 


Hi all -- 

I'm testing out S3 for backup, and this software looks like a great solution for scripted operation.  However, I get the errors listed below when uploading a 3.3GB file.  I can upload the file fine using a GUI client like CloudBerry or SpaceBlock, so the issue isn't at Amazon (their max object size is 5GB) or a problem with the file itself.  The connection is 4.5Mbps, and I'm not seeing excessive latency or any packet loss.  Suggestions appreciated, thx.

Jim Shilliday

 

F:\>s3 put drn-backup E:\rartmp\drn-inactive.rar.002 /verbose

s3.exe version 1.4 - check for updates at http://s3.codeplex.com

 

drn-inactive.rar.002...Put failed on attempt 1: The request was aborted: The request was canceled.

Put failed on attempt 2: The request was aborted: The request was canceled.

Put failed on attempt 3: The request was aborted: The request was canceled.

Put failed on attempt 4: The request was aborted: The request was canceled.

Put failed on attempt 5: The request was aborted: The request was canceled.

The request was aborted: The request was canceled.

   at com.amazon.s3.AWSAuthConnection.put(String bucket, String key, Stream str, SortedList headers, Int64 startByte, Int64 bytesToPut)

   at s3.Commands.Put.Execute()

   at s3.Program.Main(String[] originalArgs)

 

F:\>

 

Coordinator
Dec 1, 2009 at 12:06 AM

Hello there,

Would it be acceptable to use the /big option to split the file into chunks, as below?  Besides improved reliability, /big has the advantage that only chunks modified since the last upload are actually uploaded.

s3 put drn-backup E:\rartmp\drn-inactive.rar.002 /big

(actually it looks like you've already split a RAR file up using a separate utility -- you could skip this step)

Hope that helps,

Max
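The skip-unchanged behaviour /big enables can be sketched roughly like this (a minimal Python illustration, not s3.exe's actual code; remote_md5s is a hypothetical map of chunk index to stored MD5 hash that you would fetch beforehand):

```python
import hashlib

CHUNK_SIZE = 10 * 1024 * 1024  # s3.exe's default /big chunk size is 10MB

def chunks_to_upload(path, remote_md5s):
    """Yield (index, data) only for chunks whose MD5 differs from what
    is already stored remotely, so unchanged chunks are skipped.
    remote_md5s: hypothetical dict of chunk index -> hex MD5 string."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            if hashlib.md5(data).hexdigest() != remote_md5s.get(index):
                yield index, data
            index += 1
```

Only the chunks whose hashes differ would then be PUT again, which is why /big also helps with incremental backups.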

 

Coordinator
Dec 1, 2009 at 12:09 AM

I should have said: if using /big, please use this special interim release, which has greatly improved chunking:

http://s3.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=36555

Dec 1, 2009 at 12:15 AM

Hi Max –

Thanks for the response – the actual files get up to around 20GB.  I tried /big:5000 on a 23GB file without success (same error).  I'll experiment to see what chunk size works best, and will try it with the interim release as you suggest.

Jim


Coordinator
Dec 1, 2009 at 12:27 AM

Hello,

If you don't mind lots of chunks, the default size of 10MB is good.  Amazon's own EC2 service writes multi-gigabyte machine images to S3 in 10MB chunks.  Larger sizes *should* work of course, but my personal experience has been that it isn't worth the hassle of big chunks failing to upload halfway through.  

Cheers,

Max
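For a sense of scale, the chunk arithmetic at the 10MB default for the file sizes mentioned in this thread works out as follows (a quick check, assuming binary GB):

```python
import math

CHUNK = 10 * 1024 * 1024  # the 10MB default chunk size

# File sizes from the thread: the 3.3GB RAR part and the 23GB archive
for gb in (3.3, 23):
    n = math.ceil(gb * 1024**3 / CHUNK)
    print(f"{gb} GB -> {n} chunks of 10MB")
```

So even the 23GB archive stays well under a few thousand chunks, and a failed chunk only costs one 10MB retry rather than a restart of the whole upload.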

Dec 1, 2009 at 12:38 AM

Got it –

I’m new to S3 – I guess Amazon would know how to use it best.  As you say, smaller chunks can be an advantage since identical ones are skipped, but I’m not sure how RAR packs its files – I generally keep a big compressed archive of user files and then “update” it periodically.  I suppose there’s a good argument here for skipping the compression and just uploading file by file, since most files don’t change.  That means I have to figure out whether saving the I/O is worth the extra S3 storage charges.

Thanks again!

Jim
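Jim's compression-versus-storage question can be roughed out with simple arithmetic. All the figures below are illustrative assumptions, not numbers from this thread (the price is a rough 2009-era S3 standard storage rate; plug in your own sizes and current pricing):

```python
# Hypothetical figures -- adjust to your own data set and pricing.
STORAGE_PER_GB_MONTH = 0.15  # assumed ~2009-era S3 storage price, $/GB-month
uncompressed_gb = 30         # user files uploaded individually, no RAR
compressed_gb = 20           # roughly the ~20GB archives Jim mentions

extra_cost = (uncompressed_gb - compressed_gb) * STORAGE_PER_GB_MONTH
print(f"Extra storage for skipping compression: ${extra_cost:.2f}/month")
```

If the monthly difference comes out to a dollar or two, skipping the repeated re-compression I/O may well be the better trade.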


Coordinator
Dec 1, 2009 at 12:47 AM

A /compress option would be rather good for uploading files individually, wouldn't it?  I might have a look at that.

If s3.exe works for you, please mention it on blogs, forums etc.

Cheers,

Max

Dec 1, 2009 at 1:07 AM

That would be nice – and while you’re at it, on-the-fly encryption (/encrypt:keykeykeykeykeymorekey….), and also (my holy grail, and probably a separate app) a way to map an S3 bucket to an MS Distributed File System target.  Everything would be replicated to the cloud transparently in near-real-time.

I’m sure S3 will work fine once I figure it all out, and will pass it on.


Jan 11, 2010 at 3:34 AM

I'm having similar issues.  ~5Mbps upload speed, no latency or connection issues.  Other software uploads to S3 fine.  I'm using the latest release with the MD5 + chunk enhancements.  Without chunking, I can't seem to upload even mildly large files (~100MB) without getting the "request canceled" error.  I'd prefer not to use /big, but even /big gives strange errors...though at least it finishes the uploads.

D:\>c:\s3.exe put backup.bucket/lithium/ d:\LITHIUM\*2010*.* /backup /verbose
s3.exe version 1.4 - check for updates at http://s3.codeplex.com

lithium/ads_20100110.tar.bz2
Put failed on attempt 1: The request was aborted: The request was canceled.
Put failed on attempt 2: The request was aborted: The request was canceled.



D:\>c:\s3.exe put backup.bucket/lithium/ d:\LITHIUM\*2010*.* /backup /verbose /big
s3.exe version 1.4 - check for updates at http://s3.codeplex.com

WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.000
WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.001
WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.002
WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.003
WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.004
WebException (The remote server returned an error: (404) Not Found.) with status code 404
lithium/ads_20100110.tar.bz2.005

 

 

Coordinator
Jan 11, 2010 at 3:51 AM

Hi MrBuzz,

Those messages look normal to me.  The /verbose option should really have been named /debug, as it produces messages that are not useful except when debugging.

I have a much faster connection and also have low latency, but from here in Korea I have always seen the same performance with ~100MB files that you're seeing.  But not elsewhere.  Where in the world are you?

Max

 

Jan 11, 2010 at 3:54 AM

Hmmm...well, without /big the file never finishes uploading...it tries a few attempts and then gives up.

I'm in Toronto, Canada.

Coordinator
Jan 11, 2010 at 3:57 AM

Yeah, I get the same.  I didn't expect you to say you were in Canada though -- it's just a little bit closer to Amazon than Korea.  If you have an opportunity to try it with a different ISP, I suspect you might see better results.

Jan 11, 2010 at 4:23 AM

On Mac, I can use Cyberduck, which also does S3 + MD5, and I can upload about 3GB from the same connection without problems, timeouts, or errors.  The connection uploads consistently at 5Mbps right up until the error occurs...somehow I don't think it's my connection.

Coordinator
Jan 11, 2010 at 4:28 AM

I don't think there's anything wrong with your connection, but you might still see better results uploading to Amazon on a different one.  On my PC and connection, s3.exe performs as well as, or maybe a bit more reliably than, 'S3 Firefox Organizer' -- if you find it's consistently worse in comparison on the same PC and connection, that would be of concern.

Jan 11, 2010 at 7:28 PM

OK...I just uploaded all the files successfully on the first try (3GB total) using a tool called S3 Backup on the same computer and connection.  I'd prefer a command-line tool like yours, though.  I'm wondering if it's simply that your app is more aggressive with timeouts or errors.
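The "more aggressive with timeouts" theory suggests an obvious mitigation for tools you control: retry with a growing delay instead of back-to-back attempts. A generic backoff sketch (not s3.exe's actual retry logic; do_put stands in for any upload callable that raises on failure):

```python
import time

def put_with_retry(do_put, attempts=5, base_delay=2.0):
    """Call do_put, retrying with exponential backoff on failure.
    do_put: any callable that raises an exception when the PUT fails."""
    for attempt in range(1, attempts + 1):
        try:
            return do_put()
        except Exception as exc:
            if attempt == attempts:
                raise  # out of attempts; let the caller see the error
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Put failed on attempt {attempt}: {exc}; "
                  f"retrying in {delay:.0f}s")
            time.sleep(delay)
```

A transient stall that kills five rapid-fire attempts in a row often clears up within a backoff window of a minute or so.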

Coordinator
Feb 8, 2010 at 2:19 AM

Hi, FWIW I believe this is now fixed in version 1.6: http://s3.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=40116