AWS re:Invent keynote has S3 and Redshift surprises

Sitting here at the first Amazon AWS conference, re:Invent, I listened to the keynote this morning from Andy Jassy, SVP for AWS. Besides an overenthusiastic JPL scientist making melodrama that would have made a soap opera director blush, he had a couple of interesting things to say about AWS.

The first concerned a significant drop in S3 pricing. Owing to the *virtuous circle* Andy described, of increasing adoption leading to increasing capacity, leading to increasing efficiencies, leading back to increasing adoption, AWS will cut S3 prices by roughly 24 to 27 percent across all regions. This continues a trend: there have been some 23 previous price cuts in AWS since 2006, which is a fine record to stand on. The prices they flashed up for the first 1 TB/month tier were down from the current $0.125 per GB to something like $0.095 per GB, and that adds up quickly. Pretty significant, I'd say.
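A quick sanity check on those numbers (a minimal sketch; the implied new tier prices are my own arithmetic from the announced 24–27 percent cut, not figures from the slide):

```python
# Sketch: apply the announced 24-27% cut to the current first-1-TB
# tier price of $0.125/GB-month to see the implied new price range.
current = 0.125                    # $/GB-month, first 1 TB tier
new_high = current * (1 - 0.24)    # ~ $0.095/GB-month
new_low = current * (1 - 0.27)     # ~ $0.091/GB-month
print(f"implied new price: ${new_low:.4f} - ${new_high:.4f} per GB-month")
```

The top of that range matches the roughly $0.095 figure shown for the first tier.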

The other venture was the announcement of the newest AWS service, Redshift, a data warehouse service available as instances of DW clusters. They've been running it in-house at Amazon itself for a few months, in parallel with their *legacy* DW, against a large data set (2B rows) and six of their more complex queries. Two 16 TB nodes on Redshift cost $3.65 an hour, or about $32,000 a year, and delivered faster queries for a tenth of the cost.

Pricing is attractive (for a 16TB instance):

| Pricing | Price/Hour per hs1.xlarge Node | Price/Hour per hs1.8xlarge Node | Effective Hourly Price per TB | Effective Annual Price per TB |
|---|---|---|---|---|
| On-Demand | $0.850 | $6.80 | $0.425 | $3,723 |
| 1 Year Reserved | $0.500 | $4.00 | $0.250 | $2,190 |
| 3 Year Reserved | $0.228 | $1.82 | $0.114 | $999 |
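
The effective per-TB columns follow directly from the node prices. A minimal sketch of the arithmetic, assuming 16 TB of storage per hs1.8xlarge node (per the 16 TB instance note above) and a 24 × 365 hour year:

```python
# Sketch: derive the effective hourly and annual price per TB from
# the on-demand hs1.8xlarge rate in the table above.
node_price_per_hour = 6.80   # on-demand hs1.8xlarge, $/hour
tb_per_node = 16             # assumed storage per hs1.8xlarge node

hourly_per_tb = node_price_per_hour / tb_per_node  # $0.425/TB/hour
annual_per_tb = hourly_per_tb * 24 * 365           # $3,723/TB/year
print(f"${hourly_per_tb:.3f}/TB/hour, ${annual_per_tb:,.0f}/TB/year")
```

The same calculation against the reserved rates reproduces the $2,190 and $999 annual figures, so the table is internally consistent.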

This looks to bring costs down by 19x to 25x. But there seem to be significant exclusions among the BI tools Andy presented: he mentioned Cognos, but not any Oracle BI, for example. It remains to be seen how that develops and what Oracle will do in response, but it does appear to be getting interesting out there. Still, we need to do something about data transfer costs in and out of the cloud from our own DC; we are shipping a lot of data, and I figure it adds too much overhead to the cost model.
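For what it's worth, the 19x–25x claim is consistent with comparing Redshift's $999 effective annual price per TB against an on-premises warehouse baseline; the baseline range below is my assumption, implied by working the multiple backwards, not a figure from the keynote:

```python
# Sketch: the implied on-premises baseline, assuming (my assumption)
# the 19x-25x multiple is measured against Redshift's 3-year-reserved
# effective price of $999 per TB per year.
redshift_per_tb_year = 999
implied_low = 19 * redshift_per_tb_year   # ~ $19K/TB/year
implied_high = 25 * redshift_per_tb_year  # ~ $25K/TB/year
print(f"implied on-prem cost: ${implied_low:,} - ${implied_high:,} per TB/year")
```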
